It may have seemed like an obscure
United Nations conclave, but a meeting this week in Geneva was followed intently by experts in artificial intelligence (AI), military strategy, disarmament and humanitarian law. The reason?
Killer robots - drones, guns and bombs that decide on their own, with artificial brains, whether to attack and kill - and what should be done, if anything, to regulate or ban them.
Once the domain of science fiction films like "Terminator" and "RoboCop," killer robots, more technically known as
Lethal Autonomous Weapons Systems, have been developed and tested at an accelerated pace with little oversight. Some prototypes have even been used in actual conflicts. The evolution of these machines is considered a seismic event in warfare, akin to the invention of gunpowder and nuclear bombs. This year, for the first time, a majority of the 125 nations that belong to an agreement called the Convention on Certain Conventional Weapons said they wanted curbs on killer robots. But they were opposed by members that are developing these weapons, most notably the US and
Russia.
The conference was widely considered by disarmament experts to be the best opportunity so far to devise ways to regulate, if not prohibit, the use of killer robots. But it concluded on Friday with only a vague statement about considering possible measures acceptable to all. The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell "drastically short".
Critics say it is morally repugnant to assign lethal decision-making to machines. How does a machine differentiate an adult from a child, a fighter with a bazooka from a civilian with a broom? "Autonomous weapon systems raise ethical concerns about substituting decisions about life and death with sensors and software," Peter Maurer, president of the International Committee of the Red Cross, said. NYT