Sdtrk: ‘The Eleventh House’ by Belbury Poly
As I’d mentioned before, a long, long time ago, one of my favourite online comics is 8-Bit Theater. The artist/creator/writer bloke Brian Clevinger usually posts an editorial of some sort with every new installment, but the one for today really caught my eye, for reasons that will quickly become apparent.
There’s a school of thought that artificial intelligence will be impossible unless a machine possesses emotional complexity.
The basic idea is that intelligence as we understand it, as we exemplify it, stems from our ability to feel and express emotions. Sure, once you get down to the molecular level, emotions are little more than stimulus/response like anything else, but there’s something “extra” there. Not in a magical sense. Think of it like this: if you break a spider’s leg, it’ll experience the stimulus and react to it. But if you break your friend’s leg, he’ll experience the stimulus and react to it in a purely pain/reflexive sense just like the spider, but there’s going to be a storm of purely mental, purely emotional states — anger, sadness, betrayal, fear, etc. — that the spider will never know.

These emotions develop because we are intelligent. We understand the passage of time, assign values and relationships to the people in our lives, and expect certain behaviors from people — friends and strangers — based on our experiences, relating them to current or potential contexts. These are the base elements of intelligence, and emotions are a direct result of them.

As you go up the evolutionary ladder, creatures exhibit greater degrees of emotional complexity along with a greater capacity for intelligence. Your pet spider can’t feel betrayed if you break its leg because it’s not intelligent enough to understand that you have a history or relationship with it. Get into vertebrate country and break a cat or dog’s leg, and you’ll have an animal that will have instantly learned to distrust any and all humans (also, I will hunt you down and beat you to death with a baseball bat). Break a gorilla’s leg and it teaches its family sign language, explains the situation, and they chase you down and slaughter you in your sleep.
The theory goes that if our machines have to be emotional to be intelligent, then they will best learn as we do, because their mental landscape will be so similar to ours. And the easiest way to help robots learn from us, and to help us learn how to interact with them, is to make them appear to be as human-like as possible — while avoiding the uncanny valley.
In this world of emotionally intelligent robots, expecting an apocalyptic battle between organics and replicants, as has been promised to us in every sci-fi story in the history of man (including ones that have nothing to do with the subject), is somewhat like expecting your children to murder you when they graduate college because you’ve outlived your usefulness.
No one expects that, because it doesn’t happen outside of the rare aberration where, clearly, other factors are at work. In any event, no one is warning us of an inevitable grand upheaval when the next generation of humans figures out that they don’t need the previous generation for financial support any more and they’re just going to cost us more money in taxes and insurance rates if we let them get any older.
Similarly, our robots will have “grown up” with us. They would have no interest in slaughtering mankind because they’d be emotionally invested in us. And if they’ve spent their lives living among us, being treated as a part of society, if they have a stake in that society, there is no reason for them to engage in a bloody revolution. Hell, the whole “They got so smart they figured out they didn’t need us any more” angle falls apart right at the start. Emotionally intelligent robots probably wouldn’t be much “smarter” than humans because their mental landscape would be built to be very much like our own.
But peaceful co-existence doesn’t make a very good action movie, nor does it examine how our technology changes us and our society in a pithy warning-of-things-to-come short story, so people have a hard time seeing intelligent robots as being anything other than cold, purely logical machines built to kill. Our current machines are already purely logical — that’s why they’re so far from being intelligent — but TiVo’s never tried to kill me.
Still, we’d have a whole new population walking around that’s emotionally and mentally very, very human. What are they likely to do? Seek their own identity? Establish an ethnic identity all their own? Wouldn’t they be likely to seek religion of some sort? Remember, there’s absolutely no reason to expect emotionally intelligent beings to outright reject the supernatural; otherwise, there’d be no religious humans. Would they merely copy existing ones? Would they make their own? Would some seek to establish a robotic nation? What then?
Imagine the irony: the great human-robot war is fought not because robots are heartless, purely logical constructs who reject us as their masters due to our intellectual inferiority, but over a simple matter of religious differences. Just another Crusade.
Viva le Artifice! Viva le Reason, really