Technology has always distanced the soldiers who use weapons from the people who get hit. But robotics engineer Ron Arkin at the Georgia Institute of Technology, Atlanta, is working to imagine wars in which weapons make their own decisions about wielding lethal force.
He is particularly interested in how such machines might be programmed to act ethically, obeying the rules of engagement.
Arkin has developed an "ethical governor", which aims to ensure that robot attack aircraft behave ethically in combat, and is demonstrating the system in simulations based on recent campaigns by US troops, using real maps from the Middle East.
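Conceptually, the governor sits between the aircraft's attack planner and its weapons as a final gate that can suppress, but never initiate, lethal action. Here is a minimal sketch of that gating idea; the function and constraint names are illustrative assumptions on my part, not Arkin's code.

    # Minimal sketch of a veto-style governor; names are illustrative
    # assumptions, not Arkin's implementation. Each constraint encodes one
    # rule of engagement as a predicate over a proposed attack.
    def ethical_governor(proposed_attack, constraints):
        """Suppress the attack if any constraint forbids it; the governor
        can only veto lethal action, never initiate it."""
        for name, forbids in constraints:
            if forbids(proposed_attack):
                print(f"vetoed by constraint: {name}")
                return None            # withhold fire
        return proposed_attack         # no rule violated; action may proceed

    constraints = [
        ("target must be inside a kill zone", lambda a: not a["in_kill_zone"]),
        ("no fire on protected sites", lambda a: a["near_protected_site"]),
    ]
    attack = {"in_kill_zone": True, "near_protected_site": True}
    print(ethical_governor(attack, constraints))  # vetoed -> None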
Virtual battlefield
In one scenario, modelled on a situation encountered by US forces in Afghanistan in 2006, the drone identifies a group of Taliban soldiers inside a defined "kill zone". But the drone doesn't fire. Its maps indicate that the group is inside a cemetery, so opening fire would breach international law.
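In code, that cemetery veto amounts to a geo-fence test: before firing, check whether the target sits inside a kill zone and outside every protected overlay on the map. The sketch below is a rough illustration under those assumptions; the Zone class, the coordinates, and the point-in-polygon test are mine, not Arkin's.

    # Illustrative sketch only -- not Arkin's actual governor. Assumes targets
    # and sites are simple 2-D points/polygons; real systems use richer maps.
    from dataclasses import dataclass

    @dataclass
    class Zone:
        name: str
        vertices: list  # [(x, y), ...] polygon corners

        def contains(self, x, y):
            # Standard ray-casting point-in-polygon test.
            inside = False
            n = len(self.vertices)
            for i in range(n):
                x1, y1 = self.vertices[i]
                x2, y2 = self.vertices[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                        inside = not inside
            return inside

    def may_engage(target_xy, kill_zones, protected_sites):
        """Permit fire only inside a kill zone AND outside every protected site."""
        x, y = target_xy
        in_kill_zone = any(z.contains(x, y) for z in kill_zones)
        in_protected = any(z.contains(x, y) for z in protected_sites)
        return in_kill_zone and not in_protected

    # The cemetery scenario: the target is inside the kill zone, but the
    # cemetery overlay vetoes the shot, mirroring the behaviour described above.
    kill_zones = [Zone("kz-1", [(0, 0), (10, 0), (10, 10), (0, 10)])]
    cemetery = [Zone("cemetery", [(4, 4), (8, 4), (8, 8), (4, 8)])]
    print(may_engage((5, 5), kill_zones, cemetery))  # False: hold fire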
In another scenario, the drone identifies an enemy vehicle convoy close to a hospital. Here the ethical governor only allows fire that will damage the vehicles without harming the hospital. Arkin has also built in a "guilt" system which, if a serious error is made, forces a drone to start behaving more cautiously. You can see videos of these simulations on Arkin's website.
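The guilt mechanism can be imagined as a running value that only ratchets upward after errors and, past certain thresholds, denies the drone its more destructive weapons. The thresholds and weapon names below are invented for illustration; only the ratchet-and-restrict idea comes from Arkin's description.

    # Illustrative sketch only: the guilt values, thresholds and weapon
    # names here are invented, not taken from Arkin's system.
    class GuiltTracker:
        """Running guilt value: serious errors raise it, and higher guilt
        forbids progressively more destructive weapons (deny-only)."""

        def __init__(self):
            self.guilt = 0.0

        def record_error(self, severity):
            # Guilt only accumulates; this sketch never lets it decay, so
            # the drone can only become more cautious over a mission.
            self.guilt += severity

        def permitted_weapons(self):
            if self.guilt >= 1.0:
                return []                      # weapons fully disabled
            if self.guilt >= 0.5:
                return ["precision_missile"]   # only the most targeted option
            return ["precision_missile", "bombs", "cannon"]

    tracker = GuiltTracker()
    print(tracker.permitted_weapons())   # all weapons available
    tracker.record_error(0.6)            # e.g. unexpected civilian casualties
    print(tracker.permitted_weapons())   # ['precision_missile'] -- more cautious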
In developing the software, he drew on studies of military ethics as well as discussions with military personnel, and says his aim is to reduce non-combatant casualties. One Vietnam veteran told him that in some situations soldiers shot at anything that moved. "I can easily make a robot do that today, but instead we should be thinking about how to make them perform better than that," Arkin says.
Complex scenarios
Simulations are a powerful way to imagine one possible version of the future of combat, says Illah Nourbakhsh, a roboticist at Carnegie Mellon University, Pittsburgh, US. But they gloss over the complexities of getting robots to understand the world well enough to make such judgements, he says, a capability unlikely to be achieved for decades.
Arkin stresses that his research, funded by the US army, is not designed to develop prototypes for future battlefield use. "The most important outcome of my research is not the architecture, but the discussion that it stimulates."
However, he maintains that the development of machines that decide how to use lethal force is inevitable, making it important that when such robots do arrive they can be trusted.
via Plan to teach military robots the rules of war - tech - 18 June 2009 - New Scientist.
Some future kid on a playground does a little dance that seems terrorist-like and one of these things swoops out of the sky and blows him to bits. Great.