An artificial intelligence expert has warned that the development of killer robots, or Lethal Autonomous Weapons Systems (LAWS), capable of engaging targets without human intervention puts the principles of human dignity at risk. “LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill,” warns Stuart Russell, Professor of Computer Science at the University of California, Berkeley. “For example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’.”
A report by members of Harvard Law School’s International Human Rights Clinic, entitled Mind the Gap: The Lack of Accountability for Killer Robots, advised the European Union to ban so-called ‘killer robots’, citing the serious moral and ethical ramifications of machines possessing “the ability to select and engage their targets without meaningful human control.”
“Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future,” Russell adds.
According to Russell, DARPA is already working on such technology, and he estimates it is only a couple of years away from being a reality. Potential LAWS include armed quadcopters or self-driving tanks with the capacity to identify and eliminate hostile targets.
Russell spoke at a recent meeting of the United Nations Convention on Certain Conventional Weapons. Germany was receptive to LAWS restrictions, saying it would “not accept that the decision over life and death is taken solely by an autonomous system”, but the UK, US, and Israel refused to commit to an international treaty restricting the use of LAWS.
Thank you, International Business Times, for providing us with this information.