UNEP Foresight Report: AI-based weapons already in use could cause major disruptions in 4-6 years

Calling for urgent action, the authors warned that without human oversight, AI could recommend pro-escalation tactics without clear logic or rationale and intensify wars
The Ukrainian military deployed AI-equipped drones containing explosives to fly into battlefields and strike Russian oil refineries. Photo for representation: iStock

Artificial intelligence (AI) has already made its presence felt on the battlefield, with its use in the Ukraine and West Asia conflicts. This dangerous global shift towards AI-based and autonomous weapons could cause disruptions to the global environment over the next four to six years, warned a new report by the United Nations Environment Programme (UNEP).

Autonomous weapons systems execute their functions in the absence of direction or input from humans. Such systems can be further supported by AI. Widespread use of these systems, which can select targets and destroy them without direct human guidance, could result in massive civilian casualties and environmental damage.

Some 25 per cent of experts believe that disruptions due to AI and automated weapon systems are “very likely” to occur and another 34 per cent said they are “likely”, according to Navigating New Horizons: A Global Foresight Report on Planetary Health and Human Wellbeing.

As for the intensity of the perceived impact, the experts gave a score of 2.6 on a scale of 1 to 3, where 1 is low and 3 is high.

These systems ranked fourth on perception score, determined by comparing them with 17 other potential signals of change, such as emerging zoonotic diseases and advancements in space technology.

The report is based on nearly 1,200 responses submitted by 790 respondents to a survey conducted in May 2023. This exercise identified 280 specific signals of change.

The Foresight Expert Panel — comprising 22 distinguished members of the scientific community from developing and industrialised countries — further shortlisted the signals.

Israel’s ‘Lavender’ AI system was reportedly used to classify civilian and military targets in Gaza. The Ukrainian military deployed AI-equipped drones containing explosives to fly into battlefields and strike Russian oil refineries, according to news website The Guardian.

Humans exert no control over the final decisions made by these systems. Without human oversight, AI could recommend pro-escalation tactics without clear logic or rationale, according to a 2024 study cited in the report.

Such systems could speed up and intensify warfare, harming both civilians and the environment.

When populated areas are targeted, long-term soil and groundwater contamination can occur directly through the munitions themselves and indirectly through the collapse of buildings, which release hazardous materials including asbestos, industrial chemicals and fuel.

As AI technology continues to advance, bringing us closer to superintelligent systems, the regulation of these technologies becomes critical to mitigate potential misuse, the report read.

Since 2018, United Nations Secretary-General António Guterres has maintained that lethal autonomous weapons systems are politically unacceptable. In 2023, he urged states to conclude a legally binding instrument by 2026 to prohibit lethal autonomous weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law. He also advocated for governments to regulate all other types of autonomous weapons systems.

Another risk from AI is that it could be used with biological agents, such as pathogens or their associated toxins, in warfare and conflicts.

This convergence of AI and biotechnological advancements could cause disruptions and changes in 4-6 years, the report estimated. Some 19 per cent of experts believed such disruptions are “very likely” to occur and another 37 per cent said they are “likely”.

The perceived impact of such a change is negative, with experts assigning it an intensity score of 2.5 on a scale of 1 to 3.

“Despite their use in war being prohibited under the Geneva Protocol (1925) and the Biological Weapons Convention (1972), their potential for harm is being amplified by the convergence of emerging bio- and other technologies, the pace of developments, and the inability of existing legal and safety frameworks to keep pace with such change,” wrote the authors of Navigating New Horizons: A Global Foresight Report on Planetary Health and Human Wellbeing.

The risk stems from advances in synthetic biology, a field of science that redesigns organisms for useful purposes by providing them with new abilities.

Synthetic biology, combined with AI, could create new biological weapons. For example, AI-controlled nano-aerial vehicles could be used to disperse bioagents over a target, or biological attacks could be mounted on livestock or crops.

Such technologies could disrupt ecosystems, harm agriculture, and cause widespread ecological damage, the report stated, calling for urgent action. Existing frameworks, it added, must keep pace with rapid scientific progress, ensuring that these powerful tools do not cause damage.
