Why Do Human Rights Organizations Insist on Banning ‘Killer Robots’?

Since the United Nations reported last March that "Killer Robots" had been used in war for the first time ever, international pressure has increased, and human rights organizations in various countries have campaigned for a decision banning or regulating their use.
The United Nations meeting in Geneva on December 13-17, 2021, debated a ban on autonomous weapon systems, commonly known as "Killer Robots."
In fact, the majority of the 125 countries party to the Convention on Certain Conventional Weapons, or CCW, said they wanted to ban Killer Robots. Yet, they were opposed by the powers developing these weapons, the United States and Russia in particular.
Thus, the CCW could not reach a consensus on Killer Robots, and only a vague statement about considering possible measures acceptable to all was published on the conference's last day.
For their part, human rights organizations expressed their disappointment and said that the Geneva meeting's outcomes fell "drastically short."
Clare Conboy, a representative of Stop Killer Robots, a group campaigning against the use of these weapons, told Reuters at the end of the Geneva conference: "The outcome was one that keeps the minority of militarized states investing in developing these weapons very happy."
She said she expected the many countries in favor of a new law, such as New Zealand and Austria, to begin negotiations outside of the United Nations.
Life and Death Decisions
In March 2021, a United Nations Security Council report on the Libyan civil war stated that Killer Robots had killed human beings for the first time ever, in Libya the previous year. Since then, condemnation and criticism have been raised worldwide.
Amnesty International pointed out: “Governments and companies are urgently working to develop increasingly autonomous lethal weapons systems using new technology and artificial intelligence. These 'Killer Robots' can be used in conflict zones, by police forces, and in border control.”
The human rights organization stressed: "The machine must not be allowed to decide on matters of life and death. We need to stop the Killer Robots; we need to act now to protect our humanity and make the world a safer place."
Peter Maurer, president of the International Committee of the Red Cross and a renowned opponent of Killer Robots, said at the Geneva meeting: "Fundamentally, autonomous weapon systems raise ethical concerns for society about substituting human decisions about life and death with sensor, software and machine processes."
Robots Lack Compassion
Human Rights Watch and Harvard Law School’s International Human Rights Clinic affirmed the necessity of implementing an international agreement that requires human control at all times.
They explained in a briefing paper the reasons behind their recommendations: “Robots lack the compassion, empathy, mercy and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life.”
In addition, Amnesty International stressed: "Machines cannot make complex ethical choices because they lack empathy and understanding. They make decisions based on biased, flawed, and unfair processes."
It stated: “Emerging technologies - such as facial and voice recognition - often fail to identify women, people of color, and people with disabilities. This means that lethal autonomous weapons can never be adequately programmed to replace human decision-making.”
No Consciousness, No Accountability
Human Rights Watch explained: "Showing respect for human life entails minimizing killing. Legal and ethical judgment helps humans weigh different factors to prevent arbitrary and unjustified loss of life in armed conflict and beyond."
It continued: “It would be difficult to recreate such judgment, developed over both human history and an individual life, in fully autonomous weapons, and they could not be pre-programmed to deal with every possible scenario in accordance with accepted legal and ethical norms.”
Amnesty International emphasized: "Replacing soldiers with machines makes the decision to go to war easier—and these weapons fall into the hands of others through illegal transfers and battlefield takeovers. In addition, these technologies will be used to maintain security, monitor borders, and threaten human rights, such as the right to protest, the right to life, and the prohibition of torture and other ill-treatment. Countries such as the United States, China, Israel, South Korea, Russia, Australia, India, and the United Kingdom continue to invest in lethal autonomous weapons despite these concerns."
More Dangerous than Nuclear Weapons
Ghita el-Ghiyati, a psychology researcher at Bahcesehir University in Istanbul, told Al-Estiklal: "I believe that Killer Robots are more dangerous to humans than nuclear bombs, because, as humans, we all strive to survive, and thus we all avoid choices that put our lives in danger except in very rare and specific situations. Robots are not driven by survival needs, and that makes all the difference."
She added: "How could we prevent a robot pre-programmed for killing from committing genocide and mass destruction? How would we be able to apply the international laws of war to robots to stop war crimes and atrocities? How could we be certain that autonomous weapon systems will not escape human control and possibly end human life on earth?"
She elucidated: "On the other hand, the innate need to earn reward and avoid punishment motivates human behavior and pushes humans to care for each other and to avoid harming others."
Ms. el-Ghiyati explained: "The lack of emotions and consciousness are two significant obstacles for robots in complying with the principles of humanity."
"It is not easy for pre-programmed robots to deal with complex and unpredictable situations. Moreover, it is not possible to program the innate ability to differentiate between right and wrong, and it is impossible to create an algorithm covering all possible right and wrong life scenarios," Ms. el-Ghiyati added.
Ms. el-Ghiyati concluded: "Legal and ethical frameworks govern human behavior and attitudes and provide humans with the ultimate tools to minimize harm. Giving Killer Robots the authority to make life-and-death decisions in war makes the future of life scary and terrifying."
Sources
- U.N. talks adjourn without a deal to regulate 'Killer Robots'
- A Moral and Legal Imperative to Ban Killer Robots
- Stop Killer Robots [Arabic]
- Killer Robots Aren't Science Fiction. A Push to Ban Them is Growing
- UN talks fail to open negotiations on ‘Killer Robots’
- UN fails to agree on ‘killer robot’ ban as nations pour billions into autonomous weapons research