Wednesday, January 13, 2010

From HAL To Chaim: Israel And The Robotic Age Of Warfare--And International Law

According to the Wall Street Journal, when it comes to the robotic age, in Israel the future is now:
Israel is developing an army of robotic fighting machines that offers a window onto the potential future of warfare.

Sixty years of near-constant war, a low tolerance for enduring casualties in conflict, and its high-tech industry have long made Israel one of the world's leading innovators of military robotics.

"We're trying to get to unmanned vehicles everywhere on the battlefield for each platoon in the field," says Lt. Col. Oren Berebbi, head of the Israel Defense Forces' technology branch. "We can do more and more missions without putting a soldier at risk."

In 10 to 15 years, one-third of Israel's military machines will be unmanned, predicts Giora Katz, vice president of Rafael Advanced Defense Systems Ltd., one of Israel's leading weapons manufacturers.

"We are moving into the robotic era," says Mr. Katz.

Over 40 countries have military-robotics programs today. The U.S. and much of the rest of the world is betting big on the role of aerial drones: Even Hezbollah, the Iranian-backed Shiite guerrilla force in Lebanon, flew four Iranian-made drones against Israel during the 2006 Lebanon War.
The first thing I think about when I read this--especially considering events over the past year or so--is how international law would be applied. After all, whenever Israel has taken any kind of step to defend itself--whether limiting the fuel or electricity supplied to Gaza, restricting the entry of materials Hamas uses to build the rockets it fires at civilian targets, or finally taking military action--the world has reacted by claiming that Israel cannot do that.

So what would the world reaction be if Israel were to use sizable numbers of robots in order to cut down on its casualties? Well, here's a clue--

According to a March 2008 post from AI Panic:
London-based charity Landmine Action wants autonomous robots capable of killing people banned under the same kind of treaty that has outlawed landmines in over 150 countries. According to the New Scientist it is the first time a high profile non-governmental organisation has campaigned against such a technology. This campaign follows the reasoning of Noel Sharkey, who condemned these automation plans earlier this year.

As I’ve written before, the robots in use by the military nowadays (and the next years) are almost fully automatic, but so far the trigger has still to be pulled by a human soldier. However, it is only a question of time until the software is strong enough so that this decision will be made entirely by the machine. And once the software is in place, there will be no ethical opposition - at least in the US Department of Defence, who wants them in future to work without supervision.

A reaction to this news article comes from Wired, where the idea of danger through war robots is dismissed:
But to argue as if this is in the here or now, or even in the next decade, is just plain silly. The Pentagon has not only never advocated taking the man-out-the-loop of targeting decisions for drones or robots, its current policies and procedures would prohibit such a move (some might argue that international law already prohibits autonomous armed drones). [...] Unless and until those policies are drastically altered, it’s safe to say we are safe from renegade Terminators.
AI Panic takes the long view: the fact that robots do not act independently now does not justify ignoring that possibility down the line. And considering the current climate at the UN and among NGOs and other such groups, it is unlikely that those who claim expertise in and concern for international law today are going to think any differently. Even now, there has been an outcry against the use of drones by both Israel and the US to kill terrorist targets.

But the issue is not so simple: using robots not only saves the lives of the soldiers who attack--it also helps save lives among the civilians in a targeted area.

In the Winter 2009 issue of The New Atlantis, P. W. Singer, a senior fellow at the Brookings Institution and the director of the institution’s 21st Century Defense Initiative, writes in "Military Robots and the Laws of War" that
it is easy to see how collateral damage can be greatly reduced by robotic precision.

The unmanning of the operation also means that the robot can take risks that a human wouldn’t otherwise, risks that might mean fewer mistakes. During that Kosovo campaign, for example, such a premium was placed on not losing any NATO pilots that planes were restricted from flying below fifteen thousand feet so that enemy fire couldn’t hit them. In one case, NATO planes flying at this level bombed a convoy of vehicles, thinking they were Serbian tanks. It turned out to be a convoy of refugee buses. If the planes could have flown lower or had the high-powered video camera of a drone, this tragic mistake might have been avoided.

The removal of risk also allows decisions to be made in a more deliberate manner than normally possible. Soldiers describe how one of the toughest aspects of fighting in cities is how you have to burst into a building and, in a matter of milliseconds, figure out who is an enemy and who is a civilian and shoot the ones that are a threat before they shoot you, all the while avoiding hitting any civilians. You can practice again and again, but you can never fully avoid the risk of making a terrible mistake in that split second, in a dark room, in the midst of battle. By contrast, a robot can enter the room and only shoot at someone who shoots first, without endangering a soldier’s life.

Many also feel that unmanned systems can remove the anger and emotion from the humans behind them. A remote operator isn’t in the midst of combat and isn’t watching his buddies die around him as his adrenaline spikes; he can take his time and act deliberately in ways that can lessen the likelihood of civilians being killed. Marc Garlasco of Human Rights Watch told me how “the single most distinguishing weapons I have seen in my career were Israeli UAVs.” He described how, unlike jet fighters that had to swoop in fast and make decisions on what targets to bomb in a matter of seconds, the UAVs he observed during the 2006 Lebanon war could loiter over a potential target for minutes or even hours, and pick and choose what to strike or not. In Vietnam, an astounding fifty thousand rounds of ammunition were expended for every enemy killed. Robots, on the other hand, might live up to the sniper motto of “one shot, one kill.” As journalist Michael Fumento put it in describing SWORDS, the operator “can coolly pick out targets as if playing a video game.”
But that, of course, brings us back to the fear of being just a little bit too 'cool':
But as journalist Chuck Klosterman put it, a person playing video games is usually “not a benevolent God.” We do things in the virtual world, daring and violent things, that we would never do if we were there in person. Transferred to war, this could mean that the robotic technologies that make war less intimate and more mediated might well reduce the likelihood of anger-fueled rages, but also make some soldiers too calm, too unaffected by killing. Many studies, like Army psychologist Dave Grossman’s seminal book On Killing (1995), have shown how disconnecting a person, especially via distance, makes killing easier and abuses and atrocities more likely. D. Keith Shurtleff, an Army chaplain and the ethics instructor for the Soldier Support Institute at Fort Jackson in South Carolina, worries that “as war becomes safer and easier, as soldiers are removed from the horrors of war and see the enemy not as humans but as blips on a screen, there is a very real danger of losing the deterrent that such horrors provide.”

And yet, for the time being at least, there appears to be no general consensus on how to apply international law--or Star Trek, for that matter--to robotics on the battlefield. Singer was interviewed by CNET in 2009 following the publication of his book, Wired for War, for which he also maintains a website. Here is an excerpt of the interview:
How will robot warfare change our international laws of war? If an autonomous robot mistakenly takes out 20 little girls playing soccer in the street and people are outraged, is the programmer going to get the blame? The manufacturer? The commander who sent in the robot fleet?
Singer: That's the essence of the problem of trying to apply a set of laws that are so old they qualify for Medicare to these kind of 21st-century dilemmas that come with this 21st-century technology. It's also the kind of question that you might have once only asked at Comic-Con and now it's a very real live question at the Pentagon.
I went around trying to get the answer to this sort of question meeting with people not only in the military but also in the International Committee of the Red Cross and Human Rights Watch. We're at a loss as to how to answer that question right now. The robotics companies are only thinking in terms of product liability...and international law is simply overwhelmed or basically ignorant of this technology. There's a great scene in the book where two senior leaders within Human Rights Watch get in an argument in front of me of which laws might be most useful in such a situation.
Is this where they bring up Star Trek? 
Singer: Yeah, one's bringing up the Geneva Conventions and the other one's pointing to the Star Trek Prime Directive.
Be that as it may, as a recognized leader in the field, Israel will likely be among the first to find itself in a situation that brings the attention--and full force--of the international community down upon it.

And being first is not always a good thing.
