Eye in the Sky movie gives a real insight into the future of warfare

Toby Walsh, Data61

Eye in the Sky is a tight British thriller starring Helen Mirren, Breaking Bad’s Aaron Paul and the late and already missed Alan Rickman. That alone should be enough to get you down to the cinema to see this recently released movie.

The movie is a surprisingly cerebral look at the future of war. The enemy are Islamic terrorists hiding in plain sight in East Africa. The “eye in the sky” of the title is a Predator drone loitering at 25,000ft ready to rain death down onto the population below with its aptly named Hellfire missiles.

The director Gavin Hood (Tsotsi, Wolverine) doesn’t take sides, apart from the obligatory pot shots at some gung-ho Yanks and ineffectual Brits.

The movie features some amazing but very real technology coming to the battlefield soon. A miniature surveillance drone the size of an insect. Surveillance software that recognises ear prints. And another drone that looks and flies like a hummingbird.

But it doesn’t hold back on the moral and ethical dilemmas of future warfare. Indeed (very slight spoiler alert), the whole movie can be seen as an extended debate on a famous problem in ethics: the Trolley Problem.

Ethical dilemmas

There are a number of different formulations of the Trolley Problem. In fact, how people perceive the ethics of the Trolley Problem depends on how it is formulated.

The basic setup is a runaway trolley careering down a track. There are five people tied to the track. They are sure to die unless you throw a lever and direct the trolley onto a siding. But here’s the ethical kicker: doing so will kill one person tied to the track on the siding. What do you do?

One variant removes the siding and replaces it with a fat man, standing next to you on a bridge over the track. You can push the fat man off the bridge and thereby stop the trolley. Do you push him off or not?

The movie plays with this problem, changing the setting several times, and testing our response to these changes.

Behind this are some even more topical ethical problems.

Killer robots

Earlier this week, the UK’s Royal Navy announced it will run the first robot war games in October this year.

It is clear from news stories like this that the military in the UK, US, China and elsewhere are rushing to take advantage of what has been called the third revolution in warfare – lethal autonomous weapons – or, as the media often call them, killer robots.

In the movie Eye in the Sky, the actors struggle to make ethical decisions with technology that is increasingly autonomous. To take a line from the movie, the decision-making does not reduce to simply adding up numbers. There are many, often conflicting, dimensions: ethical, legal, military and political.

Humans in the loop

Ultimately, in the movie, a human “in the loop” still has to make the final life or death decision.

But what happens when, as is likely in the near future, there is no longer a human in the loop? It is very likely that robots will then be making simplistic and erroneous decisions.

It was for these sorts of reasons that I, and thousands of my colleagues working in Artificial Intelligence (AI) and Robotics, signed an open letter calling for offensive autonomous weapons to be banned.

And it is for these sorts of reasons that I will be going to the UN in Geneva next month to talk to diplomats at the Convention on Certain Conventional Weapons, to persuade them to push forward with their discussions on a ban.

There is only a small window of opportunity to get a ban in place before this technology moves from the big screen to the conflict zones of our world.

As demonstrated by the Drone Papers, secret documents that it’s claimed offer an “unprecedented glimpse into Obama’s drone wars”, the technology will deceive us into thinking we can fight clean, clinical wars. But in reality, nine out of ten of the people killed will not be the intended targets.

Just as in this movie, there will be lots of collateral damage: women, children and other people who just happen to be in the wrong place.

As with other technologies that have been successfully banned, like blinding lasers and anti-personnel mines, we get to choose whether killer robots are part of our future or not.

Let’s make the right choice.

Toby Walsh, Professor of AI at UNSW and Research Group Leader at Data61

The original version of this article was published on The Conversation.