DARPA’s Autonomous Robots Might Do More Harm than Good

The development of ‘killer robots’ by the US Defense Advanced Research Projects Agency (DARPA) has been described as a technology that could leave humans ‘utterly defenseless’.

Stuart Russell, Professor of Computer Science at the University of California, Berkeley, recently wrote a comment piece in the journal Nature drawing attention to the potential risks of developing such automated weapons. DARPA has commissioned two programmes seeking to create drones that can track and kill targets even in the absence of any human intervention.

The robots, known as Lethal Autonomous Weapons Systems (LAWS), would be designed as armed quadcopters or mini-tanks capable of deciding for themselves who should live or die. Armed drones sent to kill enemies in a city, or swarms of autonomous boats sent to attack ships, fall into this category.

The two programmes DARPA is currently working on are Fast Lightweight Autonomy (FLA) and Collaborative Operations in Denied Environment (CODE). Under FLA, the agency is designing a tiny rotorcraft to operate unaided at high speed in urban areas and inside buildings. CODE, meanwhile, aims to develop teams of autonomous aerial vehicles capable of carrying out every step of a strike (find, fix, track, target, engage and assess) in situations where communication with a human commander is hampered.

Russell, however, is concerned about where such a technological course could lead. He fears that autonomous weapons become most dangerous when their targets are human. He said, “LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill - for example, they might be tasked to eliminate anyone exhibiting threatening behaviour”.

Even the 1949 Geneva Conventions, the treaties that specify the humane treatment of enemies during wartime, arguably prohibit the use of such killer drones.