Newsletter issue #3 is now out! For this letter, I decided to focus on one thing that I’ve been thinking about lately: how Isaac Asimov’s Three Laws of Robotics added to the conversation about robots and AI, why we need more fiction aimed at solving technological problems, and why more leaders should read stories that do exactly that.
Ted Chiang's longest work to date, The Lifecycle of Software Objects, is a fascinating story that takes a new look at how an artificial intelligence might develop. The story is understated, quiet and humble, yet exciting and touching at the same time. This was a story that I absolutely devoured in a single sitting that stretched late into the night, something that rarely happens with any story.
The book's description includes a quote from Alan Turing that helps to set this story apart from other robot stories:
“Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.”
The story follows Ana Alvarado and Derek Brooks as their lives intertwine over the course of a decade. Software AI has been achieved, and is a growing industry at the start of the novella, one that changes as the story progresses. Fans of stories such as I, Robot or other reads about robotics will find this to be a vastly different type of story, and for that reason, it's very refreshing. The two work to create digients, a sort of artificial intelligence profile or avatar online, and we see the introduction of Jax, Marco and Polo, who essentially grow up under the care of Ana and Derek.
Fiction is a product of our own lives and surroundings. Lifecycle is a good example of how Chiang was able to take a very old story type (man creating life himself) and create something that feels new and fresh. Very often, our perceptions of robotics are shaped by dramatic presentations such as The Terminator or The Matrix, stories in which the rise of a robotic race automatically concludes that the human race must be pegged for extinction. Similarly, it’s often assumed that comprehensive knowledge and sheer logical reasoning will prove to be the marks of a superior mind.
Chiang takes the other approach suggested by Turing to the development of an artificial intelligence, one that strikes me as a far more realistic method for developing a viable computer intelligence: you make them grow up. This happens over the decade that the story spans, but it’s far more complicated than that. In other stories, there’s never really a reason for creating a robot, or at the very least, there’s no reason given for developing a new mind. Here, the reasons are stark: it’s a business, and there’s little demand for highly realistic artificial avatars that talk back or cause problems as they’re developing.
The important thing here is that there are some major philosophical issues at the heart of any sort of AI, especially when one assumes it will be a being along the same lines as a human: can it be purpose-driven, or is there any intent to its design? Religious arguments aside, I don’t see any particular purpose to human beings, just a happy accident of chemistry and circumstances at the right time billions of years ago. So too, would a machine guided by rigid logical programming be the same thing? I think not, although the appearance could be replicated somewhat with clever programming.
Intelligence is also complicated: it’s not just a matter of a robot having countless languages, or whatever actions, pre-programmed into it: any such being really isn’t truly free in the way people seem to be. Rather, complicated intelligence (and this is from my own limited understanding) comes more from the ability to draw connections between different, unrelated things. Driving along one route, I try to infer where another road running parallel to me leads, based on what I can see between the two locations. My dog sees my sister and realizes not only that she is outside, but that if she runs to the window, she’ll see her as she runs down into the yard. These types of responses aren’t ones that I can see being rigidly programmed into a computer; they are things that will be learned from experience.
Interestingly, the book has far more in common with another, slightly lesser-known Isaac Asimov story (and its film adaptation), The Bicentennial Man, which sees a robot learn to become human, going to the extreme of replacing metal with flesh. I greatly enjoyed the movie (I’ve never understood the hate it seems to elicit), and The Lifecycle of Software Objects takes some similar lines of reasoning and handles them in a far better fashion.
Set amongst the sort of tech boom that would be familiar to anyone who has used the internet since the 1990s, companies come and go, but the people remain, and find their own way through life. In a large way, the cover is exceedingly deceptive here, because this isn’t a story about robots; it’s a story about people who deal with robots, and with one another. Chiang does a marvelous job, setting out the lives of two people in a mere thirty thousand words, and things never feel rushed, nor does anything major feel glossed over.
Beyond the approach of growing an intelligence, there’s a lot to be said for the people on the other side of the equation. As we watch the trio of digients grow up in their own little world (and at times, jump into a robot body that their owners have for them), it’s apparent that they’re as much a product of their parents as of their surroundings, and that frequently, their upbringing has as much an impact on their caretakers as it does on them. What I found most fascinating is that this isn’t really a story about robots at all: they’re central to the plot, but the real point is the long relationship between Ana and Derek, and how two people who are so similar can be so far apart and estranged from one another. It’s a love story in its own right, between the two humans, but also of a parent for their creation. In doing so, Chiang presents an interesting idea: that robots and artificial intelligences wouldn’t be so different from us, and that the creation of an AI isn’t any different from parenting.
The story is also available to read online.
One of the significant elements of the ongoing 'War on Terror' in Afghanistan and Iraq is the continual use of Predator drones and other unmanned systems that allow for the remote control of weapons, minimizing casualties amongst American forces overseas while still achieving their objectives. Interestingly, the soldiers who pilot them have been suffering from Post-Traumatic Stress Disorder (PTSD), essentially experiencing warfare in similar ways despite operating in vastly different conditions.
According to Military.com: "But that whiplash transition is taking a toll on some of them mentally, and so is the way the unmanned aircraft's cameras enable them to see people getting killed in high-resolution detail, some officers say." (Source) This is further explained by the relatively up-close and personal view of the action that soldiers piloting the drones get, as opposed to that of a fighter pilot far above the action, who might not see the impact of their actions.
The situation that these pilots find themselves in bears much resemblance to some of the events in Orson Scott Card's classic science fiction novel, Ender's Game. In the book, Andrew 'Ender' Wiggin trains aboard an orbital facility designed to produce the best tactical leaders in a fight against an alien race. In the last act of the book, Ender has graduated from school and is tasked with what he believes are further training simulations against the aliens, when in reality he is directing military assets, time and time again, against the aliens' defenses, ultimately destroying them. Upon realizing what he's done, he has a sort of nervous breakdown, and while hailed as a hero, moves to live a secluded life off planet.
Now, in 2010, we are living in what a lot of people would consider a fantastical, science fiction-styled world, where computers fit in the palm of one's hand, and where militaries have the ability to strike against militants and foreign militaries with fairly automated devices. A 2009 book, Wired for War, by P.W. Singer of the Brookings Institution, looks closely at the development of military hardware and at the very nature of automated weapons, asking to what extent people will remain in control of them. The machines that go to war now are not the machines of science fiction literature and films: they're more like remote controls, with a person 'in the loop' at the end of a communications console who directs the craft against targets and handles basic functions. The move to a more robotic system will occur as human controllers are released from more of those controls, with a computer able to take over more functions. Some robotic systems, such as those designed to shoot mortars out of the sky, can react much faster than a human operator, and in order to operate effectively, they are more automated. Some drones can largely act on their own, with their mission programmed into them and a human needed only to push the button to start it up.
However, as in Ender's Game, operators are still on the front lines, albeit virtually, carrying out their commander's intent and subsequent orders in a way that helps to deliver their mission, much like soldiers on the ground operating in roles that might not be as appropriate for drones. At the end of the day, there is a central mission that needs to be carried out, issued by a commanding officer, and the details of that mission should be executed in the most appropriate manner. This often depends upon the quality of the leader at the top, the resources available at their disposal, and the abilities of the people underneath them to carry out orders. In this way, the story of Ender's Game and that of a drone pilot can easily be reconciled with one another. The same can often be said for any other military science fiction book out there: the quality of the novel or film depends not upon the technology that is present, but on the world surrounding the military events.
This is why, when reading about Predator drones, I'm reminded of the events of Ender's Game. The specific technology, governments and people don't necessarily matter in these contexts, but the framework, laid out in a largely rational and logical fashion, endures, lasting far longer than the technological predictions that will likely date the book. As such, Ender's Game is an interesting read in the science fiction universe that still has applications in the present day. Indeed, a number of its lessons can be applied no matter the time period and technology present: an ancient Roman commander would act in the same general way a modern commander does: locate the problem, determine a mission, find the right way to overcome said problem and execute a plan to achieve one's goals.
What does change, however, are the methods by which soldiers interact with the battlefield. In the instances of drone pilots and Andrew Wiggin, both deal with the realities of war remotely and virtually. Indeed, one of the biggest issues one might face while operating such machinery is the emotional impact and power associated with the ability to strike without reprisal. As the battlefield becomes more automated, warfare becomes far more effective, cleaner, and potentially quicker, at least on the tactical level. Yet soldiers are still at war.
In the end, drone warfare is essentially another tool available to military commanders, and as such, the soldiers who operate drones will come under the same stresses, conflicts and moral issues as any other soldier assigned to a mission. These circumstances change as soldiers are further removed from the battlefield, but it should be remembered that despite the distance from the actual conflict, there will still be repercussions: these soldiers fall within far larger strategic and operational plans, and are thus still at war, as soldiers have been for thousands of years.
Looking over my bookshelves, I had a bit of a revelation: very few of the books there really use robots as characters. Taking a look, I only see Isaac Asimov's I, Robot and several additional collections of his short stories, a collection of Ray Bradbury stories that contains 'There Will Come Soft Rains', a couple of Iain M. Banks's Culture novels, Arthur C. Clarke's novel 2001: A Space Odyssey, Ekaterina Sedia's The Alchemy of Stone and maybe a couple of others that I passed over. An additional trio of books (Ambassadors from Earth, Edison's Eve and Wired For War) all deal significantly with real-life robotic systems and theory. However, looking over the movies on my shelves, robotic characters readily come to mind: C-3PO and R2-D2 from Star Wars, The Terminator from that franchise, Robby from Forbidden Planet, the replicants from Blade Runner, Ash from Alien, Andrew from Bicentennial Man, Sonny from I, Robot, and so forth.
I have to wonder about this large gap in recognizable characters between the two mediums, film and literature. Film seems to contain far more robots, androids and mechs that readily come to mind, while I have a difficult time remembering the names of characters from some of my absolute favorite science fiction books.
The first element that makes film the better medium here is its visual nature, allowing for elaborate costumes, props and CGI components of metal and plastic that embody what audiences really picture when they think of robotic characters. Some of the most dramatic imagery in science fiction cinema involves robots: C-3PO and R2-D2 in the hallway of the Tantive IV, The Terminator coming out of the flames, Ash getting his head bashed in, and so forth. Simply put, robots are more visual, allowing for clear differences between living characters and their mechanical servants.
The use of the term 'robot' goes back to 1923 (1) with Karel Čapek's play Rossum's Universal Robots, and according to genre historian Adam Roberts, it came at a time of anti-machinery sentiment within science fiction. Other books of the era, such as Aldous Huxley's Brave New World and Olaf Stapledon's Last and First Men, treat mechanical and scientific processes, and a population that overly depends upon them, as something wholly against nature and counter-productive to humanity as a whole: their societies are generally dystopic and dehumanize their inhabitants. This somewhat fits with some modern science fiction films, such as the far futures of The Terminator and The Matrix, and even Wall*E, where an overreliance on machines results in our destruction, or at least an enormous disruption of society. (2) Indeed, 'robot' comes from the Czech term robota, which translates to servitude. (3)
Indeed, it should come as no surprise that early views of robotics weren't favorable: throughout history there has been a constant struggle between leaders and those being led, and one lesson that a history teacher of mine (Mr. David Munford, thank you) imparted was the destruction of clocks and machines during one early worker uprising. Factories in particular lend themselves well to machinery and its associated dystopian images and themes. Henry Ford put the assembly line to good use, relegating skilled labor to fastening single bolts day in and day out. It is particularly ironic that those human workers were in turn replaced by robots that perform the same roles.
In literature, then, the use of robotics goes far beyond characters; it is typically part of a larger theme that a novel is trying to push across to the reader. The Three Laws of Robotics central to Isaac Asimov's robot books are particularly conscious of this fact, and represent a level of paranoia on the part of the human race that at some point, robots will take over humanity because of their inherent strengths over human flesh: stronger, faster, smarter, and so on. This makes Asimov’s novels somewhat different from the earlier books that linked mechanical imagery to dystopia: in Asimov’s world, society has not fallen because of human indulgence, but only because the robots that we’ve essentially created in our own image are just as screwed up as we are. Dystopia may yet lie in Asimov’s futures (we certainly see it in his Foundation stories), but for the time being, he views a world with robotics as one where robots act as a natural counterpart to humanity, rather than a replacement, although the threat, held in check by his Three Laws, is still there.
In films, however, different elements are brought out: robots are the servants of humanity and associated sentient life in Star Wars, performing vital and specialized tasks while interfacing with their creators. The same goes for the robots in Blade Runner and Wall*E. At other points, they're used for war, as in Ron Moore's Battlestar Galactica, where they turn on their human creators for a variety of reasons, or come under the control of a vast, superhuman intellect, as in the Terminator franchise. These elements often, but not always, hearken back to a sort of dystopia in which robotics are part of a larger problem: they represent the failure of the human race to continue with its biological need to reproduce, and demonstrate one of the basic elements of life itself: Darwinism, or survival of the fittest. Those that cannot keep up will be destroyed, or at least overcome.
Within literature, the larger themes of dystopia and robotics are used with a protagonist who generally overcomes the system, society or social norm to relearn what it means to be human, and there is a larger tension between the scientific, mechanical, logical order, represented by robots, and a more organic, theological chaos, represented by people. At points this is illustrated with some very pointed examples, such as Ray Bradbury’s ‘There Will Come Soft Rains’, which shows a robotic house carrying on diligently long after its inhabitants have destroyed themselves. The reason that robot characters seem to be fewer and farther between in books, I suspect, is that this dystopian theme carries so much weight: a robot represents the weakness of humanity, religious overtones and two extremely different styles of thinking all wrapped up in a single character, which is difficult to work in or justify as a regular character occupying just part of a story, especially if it is not the central part. A robot's existence represents so much in relation to its human counterparts that it would seem almost a waste to include one as a mere side character rather than fleshing out everything that he/she/it represents.
In movies, these themes are there occasionally, but generally, explosions and violence come first and foremost in the eyes of paying audience members.
1 - Jeff Prucher, ed. Brave New Words: The Oxford Dictionary of Science Fiction. Oxford University Press, 2007, p. 164.
2 - Adam Roberts. The History of Science Fiction. Palgrave, 2005, p. 159.
3 - Ibid., p. 168.