The killer robots

“Killer robots” are already being developed and used by lots of countries. But if these machines are really autonomous, who do we blame if things go wrong?


Transcript
Claudia Williams:

Hello, I’m Claudia and this is the Sensemaker. 

One story, every day, to make sense of the world.

Today, slaughter bots: can we control the machines that are programmed to kill us? 

*** 

In the United States, a major report from the National Security Commission on Artificial Intelligence talks about AI enabling a “new paradigm in war fighting” and urges massive investment in the field.

DW News

Imagine you’re in the middle of a war and it looks like there’s no end in sight. 

It’s expensive and it’s bloody. So instead of sending in more troops, you send in a swarm of slick, silver killer robots that operate on their own. And no, I’m not describing an episode of Black Mirror.

These killer robots are already being developed and used by China, Israel, South Korea, Russia, the United Kingdom and the United States. 

Just last year, Azerbaijan used so-called “kamikaze drones” in strikes against Armenian forces in the disputed region of Nagorno-Karabakh.

Next: fighting between Azerbaijan and Armenia over the disputed territory of Nagorno-Karabakh is in its ninth day. Both sides have accused each other of attacking civilian areas, and the casualties are going up.

BBC News

You see, when people talk about AI and weapons, it’s normally in terms of the future. But experts in machine learning and artificial intelligence have said that because the technology for autonomous weapons is now readily available, the age of the killer robot is already here.

Officially, they’re called lethal autonomous weapons. They’re mostly small drones that can select and engage targets without human intervention, deployed mainly in highly volatile situations.

You can see why they are called slaughter bots.

They can operate anywhere: in the air, on land, on and under water – and even in space. Unsurprisingly, their existence has generated a lot of controversy.

We’ve spent the last five years detailing all of the legal challenges raised by killer robots under international humanitarian law and international human rights law, as well as the massive accountability gap that would be created by the use of a fully autonomous weapon.

France24 English

To address this issue head on, the United Nations has been hosting talks in Geneva for the last four years to figure out if and how we should be using these killer robots. But last week the US rejected calls for a binding agreement that would regulate the use of these weapons. John Tasioulas, director of the Institute for Ethics in AI, called Joe Biden’s position “sad but unsurprising.”

The idea of a truly “autonomous” weapon raises eyebrows for one obvious reason: if these machines are really autonomous, who do we blame when things go wrong? 

*** 

Already, today, autonomous weapons are being used to find a target over long distances and destroy it without human intervention. And this revolution is just getting started, turbocharged by artificial intelligence.

DW News

The appeal of these machines is pretty clear: there’s evidence to show they can do the job far more cheaply than humans.

It cost the Pentagon about $850,000 a year to keep each soldier in Afghanistan, but a small rover equipped with weapons costs roughly $230,000, and kamikaze drones cost just $6,000 apiece. There is also the point that fewer soldiers’ lives are put at risk.

As one British army commander put it bluntly: “I can build a machine that can go into a dangerous place and kill the enemy or we can send your son — because that’s the alternative. How do you feel now?”

There have also been compelling arguments about how machines could be more ethical than people because they wouldn’t be able to commit war crimes such as rape. 

But it’s just not that simple. Whilst for years many have tried to convince the world that AI is objective and unbiased, there is a lot of evidence to show the opposite.

In September 2019, the photograph of Joshua Bada from England was rejected when he applied for a new passport. The reason: the system mistook his lips for an open mouth. As a result, he was told his application had not been accepted because he needed to provide a neutral expression and a closed mouth, something he had in fact done. But the algorithms couldn’t interpret his lips correctly because they had obviously not been trained on a sufficient number of images of black people.

DW Shift

Even though these machines would undoubtedly be able to process lots of data far quicker than humans, if the data sets they’re fed are inaccurate, limited, or riddled with human biases, then atrocities like genocide and ethnic cleansing seem almost inevitable.

As of right now, lethal autonomous weapons are still largely controlled by humans, who have the final say. But experts warn that as these weapons become more sophisticated, they will learn new capabilities that could eventually make their missions unstoppable.

It wouldn’t be an overstatement to say that they could go rogue. 

And as experts have pointed out, machine learning works best when there are distinctions between two groups: a good guy and a bad guy, for instance.  

Most of the time, in a war setting, this isn’t the case, especially with the rise of guerrilla and insurgent warfare. 

The Iraqi air force is targeting the group in the area, but it looks like many civilians have been killed instead. An Iraqi MP urged the government to launch an investigation into how such a mistake could happen, saying it harms the reputation of the Iraqi army and raises questions about the intelligence they have on ISIL.

Al Jazeera English

Even human soldiers aren’t always able to distinguish enemy combatants from civilians. 

***

Ultimately, the idea of ethical robotic killing machines seems contradictory and unrealistic. 

The UN Secretary-General António Guterres has called these weapons “morally repugnant and politically unacceptable.” 

Having autonomous weapons means it’s easier and cheaper to kill people. For army strategists, that might be a positive.   

But society is left with a clear question: is this something we really want? 

Today’s story was written by Nimo Omer and produced by Ella Hill.