A while ago, I wrote a post entitled Living in the Future, where I made a couple of observations about how we're living in a bit of a science fiction world, what with anti-satellite technology, mobile phones, cloning, stem cells and the internet. While looking at the New York Times website, I came across an interesting article entitled A Soldier, Taking Orders From Its Ethical Judgment Center. The article looks at the idea of using robotic soldiers on the battlefield, on the theory that they would be far less likely to violate international laws than a human soldier might be. There are obviously other advantages to robotic soldiers, especially in a day and age when a climbing casualty rate among soldiers erodes morale and the desire to fight back home.
On a science fiction level, this is nothing new at all. Robots deployed to battlefields have been seen before, and from my own readings, with fairly mixed results. Just look at films such as The Terminator (and assorted sequels) or The Matrix, and you certainly have the case for only using organics on the battlefield, on the off chance that something goes terribly wrong and the machines you just deployed turn out to be just as unstoppable when they turn against their commanding officers. Off the top of my head, I can't think of any films or books I've read where robotic soldiers were used successfully, but I'm sure there are a couple out there.
This comes at a fairly interesting time, as something similar just broke at the Pentagon, via an article at New Scientist about the development of hunter robots to track down uncooperative people en masse. In this instance, I think the spyders from Minority Report fit the bill nicely: small, three-legged robots with iris scanners and the ability to subdue a person with a small electrical charge. The film turned out to be remarkably prescient about where this technology is heading. There are obvious needs and reasons for such a machine and tactic, but there are frightening implications. As the New Scientist article points out: "how long before we see packs of droids hunting down pesky demonstrators with paralyzing weapons?" That would certainly be a chilling use for such technology, and in both cases, I can see a number of scenarios where these things could be misused.
Then there's the fact that machines break down, sometimes when you most need them. I know I have enough problems with my work computer, and while soldiers feel exhaustion, at least they can tell you when something is wrong; machines have a far more limited ability to do that. I would also be very concerned about two things: technology transfer to other, potentially hostile nations in the event that military hardware is captured in the field, and counter-electronics combat that could render machinery ineffective. There was a great scene in one of the recent Clone Wars episodes where the soldiers threw specialized grenades to disable the droids that were attacking them. (Why on earth hasn't this been done far more often in Star Wars?) I remember reading back in 2002/2003 about how easy it is to build your own EMP weapon, and while military hardware is generally shielded, I would be very concerned about attacks coming in the form of viruses that would corrupt the machinery. No matter the defenses on anything electronic, I would bet there's some geek out there who could overcome them if bribed with enough Mountain Dew. (Stereotypes aside, there are people who are crazy good at that sort of thing.)
Personally, I'm not thrilled at the prospect of robotic soldiers trampling all over the battlefields. Drones are certainly acceptable, because they're only able to do what the soldier on the other end tells them to do, but don't you think people should take to heart what Asimov laid down to protect humanity?
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.