Micro-sized killer drones pose grave threat to national security

Lethal autonomous weapons, particularly micro-sized killer drones, are no longer the stuff of rumor; they have already become a reality. Most importantly, small AI-powered killer drones are expected to cost little more than a smartphone. This latest scientific innovation poses a grave threat to the national security of every country in the world.

In 2016, the Islamic State of Iraq and the Levant (ISIL) carried out its first successful drone attack in combat, killing two Peshmerga fighters in northern Iraq. The attack continued the group’s record of employing increasingly sophisticated technologies against its enemies, a trend mimicked by other nonstate armed groups around the world. The following year, the group announced the formation of the “Unmanned Aircraft of the Mujahedeen,” a division dedicated to the development and use of drones, and a more formal step toward the long-term weaponization of drone technology.

Terrorist groups are increasingly using 21st-century technologies, including drones and elementary artificial intelligence (AI), in attacks. As it continues to be weaponized, AI could prove a formidable threat, allowing adversaries — including nonstate actors — to automate killing on a massive scale. The combination of drone expertise and more sophisticated AI could allow terrorist groups to acquire or develop lethal autonomous weapons, or “killer robots,” which would dramatically increase their capacity to create incidents of mass destruction in Western cities. As it expands its artificial intelligence capabilities, the U.S. government should also strengthen its anti-AI capacity, paying particular attention to nonstate actors and the enduring threats they pose. For the purposes of this article, I define artificial intelligence as technology capable of “mimicking human brain patterns,” including by learning and making decisions.
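
To make that working definition concrete, the toy sketch below shows the two ingredients it names, learning and decision-making, in their simplest form: a program that updates value estimates from feedback and then chooses actions based on them. This is a minimal illustration of the definition only; the actions and reward values are hypothetical and bear no relation to any weapons system.

```python
import random

# Toy illustration of "learning and making decisions":
# an epsilon-greedy bandit that learns value estimates from
# feedback and then picks actions accordingly. All names and
# numbers here are hypothetical.
ACTIONS = ["a", "b", "c"]
estimates = {a: 0.0 for a in ACTIONS}   # learned value of each action
counts = {a: 0 for a in ACTIONS}

def observed_reward(action):
    # Hidden environment the agent must learn about (illustrative only).
    return {"a": 0.2, "b": 0.5, "c": 0.8}[action] + random.gauss(0, 0.1)

def choose(epsilon=0.1):
    # Decision-making: usually exploit the best estimate, sometimes explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: estimates[a])

for _ in range(1000):
    action = choose()
    reward = observed_reward(action)
    counts[action] += 1
    # Learning: incremental averaging pulls the estimate toward observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # estimates converge toward 0.2 / 0.5 / 0.8
```

The same learn-then-decide loop, scaled up enormously in data and model complexity, is what underlies the autonomy discussed throughout this article.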

AI Could Turn Drones into Killer Robots

The aforementioned ISIL attack was not the first case of nonstate actors employing drones in combat. In January 2018, an unidentified Syrian rebel group deployed a swarm of 13 homemade drones carrying small submunitions to attack Russian bases at Khmeimim and Tartus, while an August 2018 assassination attempt against Venezuela’s Nicolas Maduro used exploding drones. Iran and its militia proxies have deployed drone-carried explosives several times, most notably in the September 2019 attack on Saudi oil facilities near the country’s eastern coast.

Pundits fear that the drone’s debut as a terrorist tool against the West is not far off, and that “the long-term implications for civilian populations are sobering,” as James Phillips and Nathaniel DeBevoise note in a Heritage Foundation commentary. In September 2017, FBI Director Christopher Wray told the Senate that drones constituted an “imminent” terrorist threat to American cities, while the Department of Homeland Security warned of terrorist groups applying “battlefield experiences to pursue new technologies and tactics, such as unmanned aerial systems.” Meanwhile, ISIL’s success in deploying drones has been met with great excitement in jihadist circles. The group’s al-Naba newsletter celebrated a 2017 attack by declaring “a new source of horror for the apostates!”

The use of drones in combat indicates an intent and capability to innovate and use increasingly savvy technologies for terrorist purposes, a process sure to continue with more advanced forms of AI. Modern drones possess fairly elementary forms of artificial intelligence, but the technology is advancing: Self-piloted drones are in development, and the European Union is funding projects to develop autonomous swarms to patrol its borders.

AI will enable terrorist groups to threaten physical security in new ways, making the current terrorism challenge even more difficult to address. According to a February 2018 report, terrorists could benefit from commercially available AI systems in several ways. The report predicts that autonomous vehicles will be used to deliver explosives; low-skill terrorists will be endowed with widely available high-tech products; attacks will cause far more damage; terrorists will create swarms of weapons to “execute rapid, coordinated attacks”; and, finally, attackers will be farther removed from their targets in both time and location. As AI technology continues to develop and begins to proliferate, “AI [will] expand the set of actors who are capable of carrying out the attack, the rate at which these actors can carry it out, and the set of plausible targets.”

For many military experts and commentators, lethal autonomous weapon systems, or “killer robots,” are the most feared application of artificial intelligence in military technology. In the words of the American Conservative magazine, the difference between killer robots and current AI-drone technology is that, with killer robots, “the software running the drone will decide who lives and who dies.” Thus, killer robots, combining drone technology with more advanced AI, will possess the means and power to autonomously and independently engage humans. The lethal autonomous weapon has been called the “third revolution in warfare,” following gunpowder and nuclear weapons, and is expected to reinvent conflict, not least terrorist tactics.

Although completely autonomous weapons have not yet reached the world’s battlefields, current weapons are on the cusp. South Korea, for instance, has developed and deployed the Samsung SGR-A1 sentry gun to its border with North Korea. The gun supposedly can track movement and fire without human intervention. Robots train alongside marines in the California desert. Israel’s flying Harpy munition can loiter for hours before detecting and engaging targets, while the United States and Russia are developing tanks capable of operating autonomously. And the drones involved in the aforementioned rebel attack on Russian bases in Syria were equipped with altitude and leveling sensors, as well as preprogrammed GPS to guide them to a predetermined target.

Of particular concern is the possibility of swarming attacks, composed of thousands or millions of tiny killer robots, each capable of engaging its own target. The potentially devastating terrorist application of swarming autonomous drones is best summarized by Max Tegmark, who has said that “if a million such killer drones can be dispatched from the back of a single truck, then one has a horrifying weapon of mass destruction of a whole new kind: one that can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed.” Precisely that hypothetical scenario was illustrated in the viral YouTube video “Slaughterbots,” which depicted the release of thousands of small munitions into British university lecture halls. The drones then pursued and attacked individuals who had shared certain political social media posts. The video also depicted an attack targeting sitting U.S. policymakers on Capitol Hill. It has been viewed over three million times and stoked widespread concern about the potential terrorist applications of autonomous weapons technology. So far, nonstate actors have deployed drone “swarms” only sparingly, but the tactic points to a worrying innovation: Swarming, weaponized killer robots aimed at civilian crowds would be nearly impossible to defend against and, if effective, could cause massive casualties.

Terrorists Will Be Interested in Acquiring Lethal Autonomous Weapons

Terrorist groups will be interested in artificial intelligence and lethal autonomous weapons for three reasons — cost, traceability, and effectiveness.

Firstly, killer robots are likely to be extremely cheap, while still maintaining lethality. Experts agree that lethal autonomous weapons, once fully developed, will provide a cost-effective alternative to terrorist groups looking to maximize damage, with Tegmark arguing that “small AI-powered killer drones are likely to cost little more than a smartphone.” Additionally, killer robots will minimize the human investment required for terrorist attacks, with scholars arguing that “greater degrees of autonomy enable a greater amount of damage to be done by a single person.” Artificial intelligence could make terrorist activity cheaper financially and in terms of human capital, lowering the organizational costs required to commit attacks.

Secondly, using autonomous weapons will reduce the trace left by terrorists. A large number of munitions could be launched — and a large amount of damage done — by a small number of people operating at considerable distance from the target, reducing the signature left behind. In Tegmark’s words, for “a terrorist wanting to assassinate a politician … all they need to do is upload their target’s photo and address into the killer robot: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure nobody knows who was responsible.” With autonomous weapons technology, terrorist groups will be able to launch increasingly complex attacks, and, when they want to, escape without detection.

Finally, killer robots could reduce, if not eliminate, the physical costs and dangers of terrorism, rendering the operative “essentially invulnerable.” Raising the possibility of “fly and forget” missions, lethal autonomous weapons might simply be deployed toward a target, and engage that target without further human intervention. As P. W. Singer noted in 2012, “one [will] not have to be suicidal to carry out attacks that previously might have required one to be so. This allows new players into the game, making al-Qaeda 2.0 and the next-generation version of the Unabomber or Timothy McVeigh far more lethal.” Additionally, lethal autonomous weapons could potentially reduce human aversion to killing, making terrorism even more palatable as a tactic for political groups. According to the aforementioned February 2018 report, “AI systems can allow the actors who would otherwise be performing the tasks to retain their anonymity and experience a greater degree of psychological distance from the people they impact”; this would not only improve a terrorist’s chances of escape, as mentioned, but reduce or even eliminate the moral or psychological barriers to murder.

Terrorist Acquisition of Lethal Autonomous Weapons

The proliferation of artificial intelligence and killer robot technology to terrorist organizations is realistic and likely to occur through three avenues — internal development, sales, and leaks.

Firstly, modern terrorist organizations have advanced scientific and engineering departments, and actively seek out skilled scientists for recruitment. ISIL, for example, has appealed for scientists to trek to the caliphate to work on drone and AI technology. The individual technologies behind swarming killer robots — including unmanned aerial vehicles, facial recognition, and machine-to-machine communication — already exist, and have been adapted by terrorist organizations for other means. According to a French defense industry executive, “the technological challenge of scaling it up to swarms and things like that doesn’t need any inventive step. It’s just a question of time and scale and I think that’s an absolute certainty that we should worry about.”

Secondly, autonomous weapons technology will likely proliferate through sales. Because AI research is led by private firms, advanced AI technology will be publicly sold on the open market. As Michael Horowitz argues, “militant groups and less-capable states may already have what they need to produce some simple autonomous weapon systems, and that capability is likely to spread even further for purely commercial reasons.” The current frameworks controlling high-tech weapons proliferation — the Wassenaar Arrangement and the Missile Technology Control Regime — are voluntary, and are constantly tested by great-power weapons development. Given states’ interest in developing AI-guided weapons, this seems unlikely to change. Ultimately, as AI expert Toby Walsh notes, the world’s weapons companies can, and will, “make a killing (pun very much intended) selling autonomous weapons to all sides of every conflict.”

Finally, autonomous weapons technology is likely to leak. Innovation in the AI field is led by the private sector, not the military, because of the myriad commercial applications of the technology. This will make it more difficult to contain the technology, and prevent it from proliferating to nonstate actors. Perhaps the starkest warning has been issued by Paul Scharre, a former U.S. defense official: “We are entering a world where the technology to build lethal autonomous weapons is available not only to nation-states but to individuals as well. That world is not in the distant future. It’s already here.”

Counter-Terrorism Options

Drones and AI provide a particularly daunting counter-terrorism challenge, simply because effective counter-drone or anti-AI expertise does not yet exist. That said, as Daveed Gartenstein-Ross has noted, “in recent years, we have seen multiple failures in imagination as analysts tried to discern what terrorists will do with emerging technologies. A failure in imagination as artificial intelligence becomes cheaper and more widely available could be even costlier.” Action is urgently needed, and for now, counter-terrorism policies are likely to fit into two categories, each with flaws: defenses and bans.

Firstly, and most likely, Western states could strengthen their defenses against drones and weaponized AI. This might involve strengthening current counter-drone and anti-AI capabilities, improving training for local law enforcement, and establishing plans for mitigating drone or autonomous weapons incidents. AI technology and systems will surely play an important role in this space, including in the development of anti-AI tools. However, anti-AI defenses will be costly, and will need to be implemented across countless cities throughout the entire Western world, something Michael Horton calls “a daunting challenge that will require spending billions of dollars on electronic and kinetic countermeasures.” Swarms, Scharre notes, will prove “devilishly hard to target,” given the number of munitions and their ability to spread over a wide area. In addition, defenses will likely take a long time to erect effectively and will leave citizens exposed in the meantime. Beyond defenses, AI will also be used in counter-terrorism intelligence and online content moderation, although this will surely spark civil liberties challenges.
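
As a small illustration of what the monitoring layer of such defenses involves, the sketch below checks detected-object positions against a protected site’s exclusion zone and raises an alert on intrusion. It is a minimal sketch under stated assumptions: the sensor feed, site coordinates, track IDs, and radius are all hypothetical, and real counter-drone systems fuse radar, radio-frequency, and optical data with far more sophistication.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    R = 6_371_000  # mean Earth radius, meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical protected site and exclusion radius.
SITE = (38.8899, -77.0091)   # illustrative coordinates only
RADIUS_M = 2_000

def check_track(track_id, lat, lon):
    # Raise an alert when a detected object enters the exclusion zone.
    dist = haversine_m(lat, lon, *SITE)
    if dist <= RADIUS_M:
        print(f"ALERT: track {track_id} inside exclusion zone ({dist:.0f} m)")
    return dist

# Example detections from a hypothetical sensor feed.
check_track("UAS-01", 38.8951, -77.0364)  # outside the zone
check_track("UAS-02", 38.8902, -77.0100)  # inside the zone, triggers alert
```

Detection of this kind is only the first step; the electronic and kinetic countermeasures Horton describes are the expensive part.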

Secondly, the international community could look to ban AI use in the military through an international treaty sanctioned by the United Nations. This has been the strategy pursued by activist groups such as the Campaign to Stop Killer Robots, while leading artificial intelligence researchers and scientific commentators have published open letters warning of the risk of weaponized AI. That said, great powers are not likely to refrain from AI weapons development, and a ban might outlaw positive uses of militarized AI. The international community could also look to stigmatize, or delegitimize, weaponized AI and lethal autonomous weapons sufficiently to deter terrorist use. Although modern terrorist groups have proven extremely willing to improvise and innovate, and effective at doing so, there is an extensive list of weapons — chemical weapons, biological weapons, cluster munitions, barrel bombs, and more — accessible to terrorist organizations, but rarely used. This is partly down to the international stigma associated with those munitions — if a norm is strong enough, terrorists might avoid using a weapon. However, norms take a long time to develop, and are fragile and untrustworthy solutions. Evidently, good counter-terrorism options are limited.

The U.S. government and its intelligence agencies should continue to treat AI and lethal autonomous weapons as priorities, and identify new possible counter-terrorism measures. Fortunately, some progress has been made: Nicholas Rasmussen, former director of the National Counterterrorism Center, admitted at a Senate Homeland Security and Governmental Affairs Committee hearing in September 2017 that “there is a community of experts that has emerged inside the federal government that is focused on this pretty much full time. Two years ago this was not a concern … We are trying to up our game.”

Nonstate actors are already deploying drones to attack their enemies. Lethal autonomous weapon systems are likely to proliferate to terrorist groups, with potentially devastating consequences. The United States and its allies should urgently address the rising threat by preparing stronger defenses against possible drone and swarm attacks, engaging with the defense industry and AI experts warning of the threat, and supporting realistic international efforts to ban or stigmatize military applications of artificial intelligence. Although the likelihood of such an event is low, a killer robot attack could cause massive casualties, strike a devastating blow to the U.S. homeland, and cause widespread panic. The threat is imminent, and the time has come to act.
