Micro-sized, cheap lethal autonomous drones: a dangerous threat



A conquering army wants to take a major city but doesn’t want troops to get bogged down in door-to-door fighting as they fan out across the urban area. Instead, it sends in a flock of thousands of small drones, with simple instructions: Shoot everyone holding a weapon. A few hours later, the city is safe for the invaders to enter.

This sounds like something out of a science fiction movie. But the technology to make it happen is mostly available today — and militaries worldwide seem interested in developing it.

Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a “human in the loop” — that is, with no person involved at any point between identifying a target and killing them. And as facial recognition and decision-making algorithms become more powerful, it will only get easier.

Called “lethal autonomous weapons” — but “killer robots” isn’t an unreasonable moniker — the proposed weapons would mostly be drones, not humanoid robots, which are still really hard to build and move. But they could be built much smaller than existing military drones, and they could potentially be much cheaper.

Now, researchers in AI and public policy are trying to make the case that killer robots aren’t just a bad idea in the movies — they’re a bad idea in real life. There are certainly ways to use AI to reduce the collateral damage and harms of war, but fully autonomous weapons would also usher in a host of new moral, technical, and strategic dilemmas, which is why scientists and activists have pushed the United Nations and world governments to consider a preemptive ban. Their hope is that we can keep killer robots in the realm of science fiction.

Killer robots, explained

Military drones already fly the skies in areas where the US is at war or engaged in military operations. Human controllers decide when these drones will fire. Lethal autonomous weapons (LAWS) don’t quite exist yet, but the technology to replace those human controllers with an algorithm that decides when to shoot already does.

“Technologically, autonomous weapons are easier than self-driving cars,” Stuart Russell, a computer science professor at UC Berkeley and leading AI researcher, told me. “People who work in the related technologies think it’d be relatively easy to put together a very effective weapon in less than two years.”

That weapon would not look like the Terminator. The simplest version would use existing military drone hardware. But while today’s drones transmit a video feed back to a military base, where a soldier decides whether the drone should fire on the target, with an autonomous weapon the soldier wouldn’t make that decision — an algorithm would.

The algorithm could have a fixed list of people it can target and fire only if it’s highly confident (from its video footage) that it has identified one of those targets. Or it could be trained, from footage of combat, to predict whether a human would tell it to fire, and fire if it thinks that’s the instruction it would be given. Or it could be taught to fire on anyone in a war zone holding something visually identifiable as a gun and not wearing the uniform of friendly forces.

“When people hear ‘killer robots,’ they think Terminator, they think science fiction, they think of something that’s far away,” Toby Walsh, a professor of artificial intelligence at the University of New South Wales and an activist against lethal autonomous weapons development, told me. “Instead, it’s simpler technologies that are much nearer, and that are being prototyped as we speak.”

In the past few years, the state of AI has grown by leaps and bounds. Facial recognition has gotten vastly more accurate, as has object recognition, two skills that would likely be essential for lethal autonomous weapons.

New techniques have enabled AI systems to do things that would have been impossible just a few years ago, from writing stories to creating fake faces to, most relevantly to LAWS, making instantaneous tactical decisions in online war games. That means that lethal autonomous weapons have rapidly gone from impossible to straightforward — and they’ve gotten there before we’ve developed any sort of consensus on whether they are acceptable to develop or use.

Why militaries want killer robots

Taking the human out of the loop — and designing weapons that fire on their own without human intervention — has terrifying moral implications. (It has terrifying strategic implications too; we’ll get to that in a bit.) Why would anyone even want to do it?

From a military perspective, the most straightforward argument for autonomous weapons is that they open up a world of new capabilities. If each drone has to be individually piloted by a human who makes the crucial decision about when it can fire, you can only have so many of them in the sky at once.

Furthermore, current drones need to transmit and receive information from their base. That introduces some lag time, limits where they can operate, and leaves them somewhat vulnerable — they are useless if communications get cut off by enemies who can block (or “jam”) communication channels.

LAWS would change that. “Because you don’t need a human, you can launch thousands or millions of [autonomous weapons] even if you don’t have thousands or millions of humans to look after them,” Walsh told me. “They don’t have to worry about jamming, which is probably one of the best ways to protect against human-operated drones.”

But that’s not the only case being made for these weapons.

“The most interesting argument for autonomous weapons,” Walsh told me, “is that robots can be more ethical.” Humans, after all, sometimes commit war crimes, deliberately targeting innocents or killing people who’ve surrendered. And humans get fatigued, stressed, and confused, and end up making mistakes. Robots, by contrast, “follow exactly their code,” Walsh said.

Pentagon defense expert and former US Army Ranger Paul Scharre explores that idea in his 2018 book, Army of None: Autonomous Weapons and the Future of War. “Unlike human soldiers,” he points out, “machines never get angry or seek revenge.” And “it isn’t hard to imagine future weapons that could outperform humans in distinguishing between a person holding a rifle and one holding a rake.”

Ultimately, though, Scharre argues that this argument has a fatal flaw: “What’s legal and what’s right aren’t always the same.” He tells the story of a time his unit in Afghanistan was scouting and the Taliban discovered its presence. The Taliban sent out a 6-year-old girl, who pretended, unconvincingly, to herd her goats while reporting the US soldiers’ location to the Taliban by radio.

“The laws of war don’t set an age for combatants,” Scharre points out in the book. Under the laws of war, a Taliban combatant was engaging in a military operation near the US soldiers and it would be legal to shoot her. Of course, the soldiers didn’t even consider it — because killing children is wrong. But a robot programmed to follow the laws of war wouldn’t consider details like that. Sometimes soldiers do much worse than what the law permits them to do. But on other occasions, they do better — because they’re human, and bound by moral codes as well as legal ones. Robots wouldn’t be.

Emilia Javorsky, the founder of Scientists Against Inhumane Weapons, points out that there’s a much better way to use robots to prevent war crimes, if that’s really our goal. “Humans and machines make different mistakes, and if they work together, you can avoid both kinds of mistakes. You see this in medicine — diagnostic algorithms make one kind of mistake; doctors tend to make a different kind of mistake.”

So we could design weapons that are programmed to know the laws of war — and accordingly will countermand any order from a human that violates those laws — and that do not have the authority to kill without human oversight. Scientists Against Inhumane Weapons and other researchers who study LAWS have no objections to systems like those. Their argument is simply that, as a matter of international law and as a focus for weapons development and research, there should always be a human in the loop.

If this avenue is pursued, we could have the best of both worlds: robots that have automatic guardrails against making mistakes but also have human input to make sure the automatic decisions are the right ones. But right now, analysts worry that we’re moving toward full autonomy: a world where robots are making the call to kill people without human input.
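To make that “guardrails plus oversight” idea concrete, here is a minimal sketch, built around Javorsky’s medical-diagnostics analogy rather than a weapon: the machine refuses anything its encoded rules forbid and hands any uncertain call to a person. The names in it (CONFIDENCE_THRESHOLD, FORBIDDEN_ACTIONS, human_review) are illustrative assumptions, not a description of any real system.

    # A minimal sketch of the human-in-the-loop-with-guardrails pattern,
    # using a medical-diagnostics framing. All names and thresholds here
    # are illustrative assumptions.

    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.95  # below this, the machine defers to a person
    FORBIDDEN_ACTIONS = {"experimental-treatment-on-minor"}  # stand-in for codified rules


    @dataclass
    class Assessment:
        action: str        # what the model proposes to do
        confidence: float  # how sure the model is, from 0 to 1


    def violates_hard_rules(assessment: Assessment) -> bool:
        # Automatic guardrail: refuse anything the encoded rules forbid outright,
        # even if a human orders it.
        return assessment.action in FORBIDDEN_ACTIONS


    def decide(assessment: Assessment, human_review) -> str:
        # Act autonomously only when the rules allow it and confidence is high;
        # otherwise hand the decision to a person.
        if violates_hard_rules(assessment):
            return "blocked-by-rules"
        if assessment.confidence < CONFIDENCE_THRESHOLD:
            return human_review(assessment)
        return f"proceed:{assessment.action}"


    if __name__ == "__main__":
        reviewer = lambda a: f"human-decided:{a.action}"
        print(decide(Assessment("routine-scan", 0.99), reviewer))    # proceed:routine-scan
        print(decide(Assessment("unclear-lesion", 0.60), reviewer))  # human-decided:unclear-lesion

The point of the pattern is that neither part acts alone: the hard rules catch the machine’s overconfidence, and the human catches the cases the rules never anticipated.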

What could possibly go wrong?

Fully autonomous weapons will make it easier and cheaper to kill people — a serious problem all by itself in the wrong hands. But opponents of lethal autonomous weapons warn that the consequences could be worse than that.

For one thing, if LAWS development continues, eventually the weapons might be extremely inexpensive. Already today, drones can be purchased or built by hobbyists fairly cheaply, and prices are likely to keep falling as the technology improves. And if the US used drones on the battlefield, many of them would no doubt be captured or scavenged. “If you create a cheap, easily proliferated weapon of mass destruction, it will be used against Western countries,” Russell told me.

Lethal autonomous weapons also seem like they’d be disproportionately useful for ethnic cleansing and genocide; “drones that can be programmed to target a certain kind of person,” Ariel Conn, communications director at the Future of Life Institute, told me, are one of the most straightforward applications of the technology.

Then there are the implications for broader AI development. Right now, the US leads the world in machine learning and AI, which means the US military is loath to promise that it will not exploit that advantage on the battlefield. “The US military thinks it’s going to maintain a technical advantage over its opponents,” Walsh told me.

That line of reasoning, experts warn, opens us up to some of the scariest possible scenarios for AI. Many researchers believe that advanced artificial intelligence systems have enormous potential for catastrophic failures — going wrong in ways that humanity cannot correct once we’ve developed them, and (if we screw up badly enough) potentially wiping us out.

In order to avoid that, AI development needs to be open, collaborative, and careful. Researchers should not be conducting critical AI research in secret, where no one can point out their errors. If AI research is collaborative and shared, we are more likely to notice and correct serious problems with advanced AI designs.

And most crucially, advanced AI researchers should not be in a hurry. “We’re trying to prevent an AI race,” Conn told me. “No one wants a race, but just because no one wants it doesn’t mean it won’t happen. And one of the things that could trigger that is a race focused on weapons.”

If the US leans too much on its AI advantage for warfare, other countries will certainly redouble their own military AI efforts. And that would create the conditions under which AI mistakes are most likely and most deadly.

What people are trying to do about it

In combating killer robots, researchers point with optimism to a ban on another technology that was rather successful: the prohibition on the use of biological weapons. That ban was enacted in 1972, amid advances in bioweaponry research and growing awareness of the risks of biowarfare.

Several factors made the ban on biological weapons largely successful. Chief among them: state actors didn’t have much to gain from using the weapons. Much of the case for biological weapons was that they were unusually cheap weapons of mass destruction — and access to cheap weapons of mass destruction is mostly bad for states.

Opponents of LAWS have tried to make the case that killer robots are similar. “My view is that it doesn’t matter what my fundamental moral position is, because that’s not going to convince a government of anything,” Russell told me. Instead, he has focused on the case that “we struggled for 70-odd years to contain nuclear weapons and prevent them from falling in the wrong hands. In large quantities, [LAWS] would be as lethal, much cheaper, much easier to proliferate” — and that’s not in our national security interests.

But the UN has been slow to agree even to a debate over a lethal autonomous weapons treaty. There are two major factors at play: First, the UN’s process for international treaties is generally a slow and deliberative one, while rapid technological changes are altering the strategic situation with regard to lethal autonomous weapons faster than that process is set up to handle. Second, and probably more importantly, the treaty has some strong opposition.

The US (along with Israel, South Korea, the United Kingdom, and Australia) has thus far opposed efforts to secure a UN treaty opposing lethal autonomous weapons. The US’s stated reason is that since in some cases there could be humanitarian benefits to LAWS, a ban now before those benefits have been explored would be “premature.” (Current Defense Department policy is that there will be appropriate human oversight of AI systems.)

Opponents nonetheless argue that it’s better for a treaty to be put in place as soon as possible. “It’s going to be virtually impossible to keep [LAWS] to narrow use cases in the military,” Javorsky argues. “That’s going to spread to use by non-state actors.” And it’s often easier to ban things before anyone has them and wants to keep using them. So advocates have worked for the past several years to bring up LAWS for debate in the UN, where the details of a treaty can be hammered out.

There’s a lot to hammer out. What exactly makes a system autonomous? If South Korea deploys, on the border of the Demilitarized Zone with North Korea, gun systems that automatically shoot unauthorized persons, that’s a lethal autonomous weapon — but it’s also a lot like a land mine. “Arguably, it can be a bit better at discriminating than a minefield can, so maybe it even has advantages,” Russell said.

Or take “loitering munitions,” an existing technology. Fired into the air, Scharre writes, they circle over a wide area until they home in on the radar systems they want to destroy. No human is involved in the final decision to dive in and attack. These are autonomous weapons, though they target radar systems, not humans.

These and other issues would have to be settled for a UN ban on autonomous weapons to be useful. And with the US opposed, an international treaty against lethal autonomous weapons is unlikely to succeed.

There’s another form of advocacy that might impede military uses of AI: the reluctance of AI researchers to work on such uses. Leading AI researchers in the US are largely in Silicon Valley, not working for the US military, and partnerships between Silicon Valley and the military have so far been fraught. When it was revealed that Google was working with the Department of Defense on drones through Project Maven, Google employees revolted, and the project was not renewed. Microsoft employees have similarly objected to military uses of their work.

It’s possible that tech workers can delay the day when a treaty is needed, or create pressure to make such a treaty happen, simply by declining to write the software that will power our killer robots — and there are signs that they’re inclined to do so.

How scared should we be?

Killer robots have the potential to do a lot of harm, and to put the means of killing many people within reach of totalitarian states and non-state actors. That’s pretty scary.

But in many ways, the situation with lethal autonomous weapons is just one manifestation of a much larger trend.

AI is making things possible that were never possible before, and doing so quickly, such that our capabilities frequently get out ahead of thought, reflection, and strong public policy. As AI systems become more powerful, this dynamic will become more and more destabilizing.

Whether it’s killer robots or fake news, algorithms used to shoot suspected combatants or trained to make parole decisions about prisoners, we’re handing over more and more critical aspects of society to systems that aren’t fully understood and that are optimizing for goals that might not quite reflect our own.

Advanced AI systems aren’t here yet. But they get closer every day, and it’s time to make sure we’ll be ready for them. The best time to come up with sound policy and international agreements is before these science fiction scenarios become reality.

This article is republished from Vox

