What Are the Consequences of Using AI in Weapons?

A new generation of robotic weapons is capable of shooting and launching missiles while operating without any human control.
This technology, reminiscent of a science fiction movie, is rapidly becoming a reality.
Certain companies are now developing weapons that execute combat missions and make autonomous decisions, bypassing the need for human approval.

Tech Advancements
Though some reports suggest these weapons are still novel and not widespread, the Stockholm International Peace Research Institute highlights a significant development.
According to its findings, Harop drones deployed by the Israeli occupation during its 2017 attack on Syria operated autonomously. These drones could identify potential targets and launch attacks without human intervention.
As global technology firms increasingly sideline human judgment in targeting and shooting, the world edges toward an era dominated by so-called "killer robots."
This shift carries significant risks. The most advanced drones and autonomous machines leverage vast amounts of data to recognize patterns and make independent decisions.
Killer robots refer to fully autonomous weapons. The U.S. Department of Defense defines these as artificial intelligence systems capable of selecting and engaging targets independently, without human input.
These systems assess battlefield conditions and decide on attacks based on processed data.
However, these autonomous weapons lack the critical aspects of human intelligence that enable adherence to rules, norms, and laws. That shortfall poses challenges to civilian protection and to compliance with international human rights and humanitarian law.
Fully autonomous weapons stand apart from remote-controlled systems like drones, which are ultimately guided by humans.
Once programmed, fully autonomous weapons operate without human guidance. This autonomy raises profound ethical and legal questions.
Human rights groups and UN officials are increasingly calling for restrictions on autonomous weapons, fearing they might trigger a new and uncontrollable global arms race.

International Build-Up
The ownership and development of lethal autonomous weapons systems remain shrouded in secrecy.
While a number of nations are investing in the research and development of fully autonomous weapons, identifying exactly who owns these systems is difficult.
According to the New Humanitarian Network, countries like the United States, Russia, Britain, and “Israel” are at the forefront of developing such technologies.
The United States, for instance, is working on the Crusher, an unmanned ground combat vehicle, while Britain has tested an unmanned combat aircraft named Taranis.
South Korea has deployed autonomous "sentries" equipped with machine guns in the Demilitarized Zone, although these have yet to be used against human targets.
Israel Aerospace Industries (IAI) has been selling Harop suicide drones to countries like India and Germany for over a decade.
These drones are not limited to autonomous operation; they can also be remotely piloted, with operators directing them to attack any target captured by their cameras.
However, The Economist suggests that the Harop drones used by the Israeli occupation against Syrian air defense systems in 2017 may have operated autonomously, capable of detecting and attacking targets without human intervention.
The New York Times conducted an investigation revealing that numerous Ukrainian companies are developing weapons technologies that reduce the role of human judgment in targeting and firing.
Several factories in Ukraine are manufacturing remotely controlled machines, with many of these weapons serving as precursors to fully autonomous systems.
One such company, Veri, utilizes basic algorithms to analyze and interpret images, aiding in decision-making processes.
More advanced firms employ deep learning to develop programs capable of identifying and attacking targets autonomously. Saker, a Ukrainian drone manufacturer, has built an autonomous targeting system using artificial intelligence.
This winter, the company began deploying its technology on the front lines, testing various systems.
Following a surge in demand, Saker successfully launched a reconnaissance drone that uses AI to identify targets, along with other weapons designed to automatically track and fire upon targets.
Despite these advancements, weapons makers maintain that they cannot allow machine guns to fire without human intervention. They acknowledge, however, that building such a fully autonomous firing system would be straightforward.

Ethical Dispute
Artificial intelligence in warfare raises profound legal and ethical questions, particularly concerning the delegation of life-and-death decisions to machines, which many argue crosses a moral red line.
This debate centers on the humanity and appropriateness of using AI in combat scenarios that can result in human casualties.
Bonnie Docherty, a researcher in the arms division at Human Rights Watch and a professor of international law at Harvard Law School, expressed serious ethical concerns in an interview with the university’s official newspaper.
She emphasized that entrusting machines with decisions about life and death dehumanizes violence and reduces human beings to mere data points.
Docherty also highlighted the risk of algorithmic bias, where AI systems could discriminate against individuals based on race, gender, or disability, whether through deliberate programming or unintentional biases.
From a legal perspective, Docherty pointed out that machines lack the ability to distinguish between soldiers and civilians, a critical aspect of lawful combat.
Even if technology could address this issue, machines still lack the human judgment necessary for assessing proportionality—balancing civilian harm against military benefit. This nuanced decision-making process cannot be fully programmed due to the infinite variables present on a battlefield.
Accountability presents another significant legal challenge. Docherty explained that autonomous weapons systems fall into a gray area of responsibility, where the machine itself cannot be held accountable, and it is unjust to hold the human operator fully responsible for the autonomous system’s actions.
This gap could undermine international criminal law and create significant legal ambiguities.
In July 2018, the ethical and legal controversies surrounding autonomous weapons led thousands of scientists and AI experts to pledge against developing or deploying fully autonomous weapons.
In a notable protest, 4,000 Google employees signed a letter urging the company to cancel its Project Maven contract with the Pentagon, aimed at enhancing drone strike capabilities through AI.
This initiative received support from 1,200 academics and over 20 Nobel Peace Prize laureates, ultimately leading Google to abandon the project.
Last December, the UN General Assembly’s Disarmament and International Security Committee adopted a resolution emphasizing the urgent need for the international community to address the challenges posed by autonomous weapons systems.
The resolution garnered support from Japan, the United States, and 150 other countries, while Russia, India, Belarus, and Mali voted against it, and 11 countries, including China, North Korea, and Israel, abstained.
UN Secretary-General António Guterres has called for a new international arms treaty by 2026, addressing the complexities of modern weaponry, including human rights, law, and ethics.
Over 100 countries have expressed support for such a treaty, which would ban and regulate autonomous weapons systems.
In the last decade, military AI applications have advanced dramatically, encompassing intelligent defense systems and predictive analysis.
Today, at least 30 countries employ AI technologies in their defense operations, integrating autonomous modes into their military arsenals.