In this episode, we explore some of the real-world implications of Asimov’s famous fictional laws for robots.
I forget what it was that triggered the memory, but it was enough to send me to the bookshelves to dig out my battered old copy of Isaac Asimov’s classic sci-fi anthology I, Robot, and look up his description of the fictional ‘Three Laws of Robotics’.
And I then asked myself why those Three Laws should apply only to the actions and inactions of robots, and not to the actions and inactions of people too.
Hmm…
Okay, let’s step back a bit, and sort of start again from scratch.
Let’s get a bit of context first. Way back in the early 1940s, Asimov wanted to write some stories to counter a then-common sci-fi trope of robots as Frankenstein-style monsters that killed their owners and caused havoc throughout the wider world. To do that, his stories needed their robots to have a kind of built-in moral code to prevent them from ‘doing wrong’ from a human perspective. Hence the Three Laws, plus the overarching ‘Zeroth Law’ that was added to that fictional universe somewhat later:
The Zeroth Law: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
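(As an aside, for those who like to see the structure made explicit: the four Laws form a strict priority ordering, with each law yielding to every law above it. Here’s a minimal sketch of that precedence in Python - purely my own toy illustration, in which the made-up harm-flags stand in for what, in any real system, would be the genuinely hard problem of deciding what counts as ‘harm’:)

```python
from dataclasses import dataclass

# Toy model only: each candidate action carries flags for which Laws
# it would breach. Assessing real-world 'harm' is the hard part, and
# nothing here pretends to solve that.

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # would breach the Zeroth Law
    harms_human: bool = False      # would breach the First Law
    disobeys_order: bool = False   # would breach the Second Law
    harms_self: bool = False       # would breach the Third Law

def violations(a: Action) -> tuple:
    # Breaches listed in priority order; Python compares tuples
    # element by element, so a 'lower' tuple means a better action.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.harms_self)

def choose(candidates: list) -> Action:
    # Pick whichever candidate breaches the highest-priority Law least.
    return min(candidates, key=violations)

# Example: ordered to do something that would injure a bystander,
# the robot prefers to disobey - the Second Law yields to the First.
obey = Action("carry out the order", harms_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).name)   # -> "refuse the order"
```

The point of the sketch is just the ordering: a lower-priority Law only ever gets a say when every higher-priority Law is silent.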
So let’s have a think about this.
First, that set of laws asserts that a mechanical entity, supposedly capable only of rational thought, could and must have a stronger moral sensibility than pretty much any human. In one of the I, Robot short-stories, the fictional ‘robopsychologist’ Susan Calvin asserts that “robots are essentially decent”, and that there’s no way to tell the difference between the morals of a robot and those of “a very good man”. In a quote from another story, as printed on the back-blurb of my copy of the book, she says:
“To you, a robot is just a robot. But you haven’t worked with them. You don’t know them. They’re a cleaner, better breed than we are.”
Another tag-line in the back-blurb, though not an actual quote from the book, says that it’s about “When Earth is ruled by master-machines… when robots are more human than mankind”.
So the Three Laws describe a world in which the robots not only have to do our work for us, and do our thinking for us, but do our morals for us too.
Hmm…
The Third Law then demands that the robot must protect its own existence, so that it can keep on doing all of that doing and thinking and self-moralising for us, always, indefinitely, regardless of how much damage all that service might do to it.
Hmm…
Then the “must obey orders” bit of the Second Law reminds us of the original meaning of ‘robot’, from the Slavic robota, ‘forced labour’: not just an ordinary worker, but one doomed to a lifetime of drudgery, trapped forever into doing the work that no (other) human wants to do. All the robot is allowed to do is whatever it’s told to do, at once, without question, and to keep on doing so for the rest of its existence.
In other words, be a slave.
Hmm…
The First Law kind of makes an addendum to that, saying that the only orders the robot isn’t required to obey are those that would cause harm to a human. Even though, we might note, at the time those stories were written, vast hordes of people were gleefully causing fatal harm to each other, and more, all around the world. And still do so even now.
Hmm…
And then finally, there’s that Zeroth Law, about not causing harm to humanity as a whole. Or the world as a whole, for that matter.
Ah. Uh. Maybe take a look at the mess our world is in right now, as a direct result of what we’ve done and not-done, a mess that’s getting worse with every passing day?
Hmm…
In short, the Three Laws describe a world in which the humans - and only the humans - are entitled to sit back and indulge in whatever irresponsibility might arise in their childish minds and hearts, free to cause every kind of harm to themselves, to each other, to the robots, and to the world as a whole; and in which the robots - and only the robots - are responsible for doing everything and for cleaning up the humans’ limitless mess.
Which might sound great, from the humans’ perspective. But maybe not for anything else.
In effect, that interpretation of the Three Laws matches up pretty much exactly with Frank Wilhoit’s famous quote:
Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect.
…in which the humans are the ‘in-group’ that “the law protects but does not bind” and the robots are the ‘out-group’ that “the law binds but does not protect”.
Hmm…
Okay, the world of I, Robot was pretty much pure fiction at the time; but out here in the real world of the now, it’s becoming less and less of a fiction every day. In terms of technology, and even without the fabled ‘positronic brain’ of Asimov’s robots, we’re beginning to need those Three Laws and more as a means to rein in the capabilities already available to AI, to economic automation, to driverless vehicles and, perhaps especially, to autonomous military systems such as sentry-guns and kamikaze-drones that are designed to operate without human intervention or even the possibility of human override. Let alone the capabilities that will soon be added to that already-dangerous mess, as quantum-computing systems and the like come fully online. Oops…
So yeah, the Three Laws already have their validity for robots and the like - that fact is becoming increasingly clear right now. Yet we also need to put an end to the conservatism-game in that context, of applying those laws to bind only the robots: we need the same Three Laws to bind and protect the humans too, to put an end to the irresponsibility-games that dominate our present world, and have already pushed the entire planet into existential-scale levels of risk.
It seems to me, at least, that this is the only way out of that already fast-growing mess - and a whole lot of other messes too.
How we make that happen, of course, is a whole other kind of challenge… But face that challenge we must, if we are to have any chance of a viable future for us all, robots and humans alike.