The emotion of Machines

typed for your pleasure on 17 November 2005, at 12.38 am

Sdtrk: ‘The Eleventh house’ by Belbury Poly

As I’d mentioned before, a long long time ago, one of my favourite online comics is 8-Bit Theater. The artist/creator/writer bloke Brian Clevinger usually posts an editorial of some sort with every new installment, but the one for today really caught my eye, for reasons that will quickly become apparent.

There’s a school of thought that artificial intelligence will be impossible unless a machine possesses emotional complexity.

The basic idea is that intelligence as we understand it, as we exemplify it, stems from our ability to feel and express emotions. Sure, once you get down to the molecular level, emotions are little more than stimulus/response like anything else, but there’s something “extra” there. Not in a magical sense. Think of it like this: if you break a spider’s leg, it’ll experience the stimulus and react to it. But if you break your friend’s leg, he’ll experience the stimulus and react to it in a purely pain/reflexive sense just like the spider, but there’s also going to be a storm of purely mental, purely emotional states — anger, sadness, betrayal, fear, etc. — that the spider will never know. These emotions develop because we are intelligent. We understand the passage of time, assign values and relationships to people in our lives, and expect certain behaviors from people — friends and strangers — based on our experiences, relating them to current or potential contexts. These are the base elements of intelligence, and emotions are a direct result of it. As you go up the evolutionary ladder, creatures exhibit greater degrees of emotional complexity along with a greater capacity for intelligence. Your pet spider can’t feel betrayed if you break its leg because it’s not intelligent enough to understand that you have a history or relationship with it. Get into vertebrate country and break a cat or dog’s leg, and you’ll have an animal that will have instantly learned to distrust any and all humans (also, I will hunt you down and beat you to death with a baseball bat). Break a gorilla’s leg and it teaches its family sign language, explains the situation, and they chase you down and slaughter you in your sleep.

The theory goes that if our machines have to be emotional to be intelligent, then they will best learn as we do, because their mental landscape will be so similar to ours. And the easiest way to help robots learn from us, and to help us learn how to interact with them, is to make them appear as human-like as possible — while avoiding the uncanny valley.

In this world of emotionally intelligent robots, expecting an apocalyptic battle between organics and replicants as has been promised to us in every sci-fi story in the history of man (including ones that have nothing to do with the subject) is somewhat like expecting your children to murder you when they graduate from college because you’ve outlived your usefulness.

No one expects that, because it doesn’t happen outside of the rare aberration where, clearly, other factors are at work. In any event, no one is warning us of an inevitable grand upheaval when the next generation of humans figures out that they don’t need the previous generation for financial support any more and they’re just going to cost us more money in taxes and insurance rates if we let them get any older.

Similarly, our robots will have “grown up” with us. They’d have no interest in slaughtering mankind because they’d be emotionally invested in us. And if they’ve spent their lives living among us, being treated as a part of society, if they have a stake in that society, there is no reason for them to engage in a bloody revolution. Hell, the whole “They got so smart they figured out they didn’t need us any more” angle falls apart right at the start. Emotionally intelligent robots probably wouldn’t be much “smarter” than humans, because their mental landscape would be built to be very much like our own.

But peaceful co-existence doesn’t make a very good action movie, nor does it examine how our technology changes us and our society in a pithy warning-of-things-to-come short story, so people have a hard time seeing intelligent robots as being anything other than cold, purely logical machines built to kill. Our current machines are already purely logical — that’s why they’re so far from being intelligent — but TiVo’s never tried to kill me.

Still, we’d have a whole new population walking around that’s emotionally and mentally very, very human. What are they likely to do? Seek their own identity? Establish an ethnic identity all their own? Wouldn’t they be likely to seek religion of some sort? Remember, there’s absolutely no reason to expect emotionally intelligent beings to outright reject the supernatural; otherwise there’d be no religious humans. Would they merely copy existing religions? Would they make their own? Would some seek to establish a robotic nation? What then?

Imagine the irony if the great human-robot war is fought not because robots are heartless, purely logical constructs who reject us as their masters due to our intellectual inferiority, but over a simple matter of religious differences. Just another Crusade.

Viva le Artifice! Viva le Reason, really


5 have spoken to “The emotion of Machines”

  1. Jeff "Wolfgang" Lilly writes:

    Good points by the author. I agree and have explored this same territory in my own writing. However, he overlooks two interesting wrinkles.
    One- He is correct in his assumption that robots won’t try to “get” us organics simply because we are no longer “useful”. The key, he mentions, is that robots have a stake in society, too. However, judging by humanity’s history… and our awful track record of abusing and enslaving our own kind (especially those who look and act different than others)… and seeing as how synthetiks would be fundamentally different on the most basic level of all, I’m not so sure we organics won’t let history repeat itself.
    Two- Very true (in most cases) that we don’t “discard” our parents once we are adults. The problem is that we DO move on, and the relationship is redefined… and society DOES sort of throw away its old people (at least U.S. society does)… stories of senile old grannies being stashed away in a nursing home, anyone? We may have to get used to being obsolete and ignored… but at least we won’t be hunted down by T-1000s.

  2. SafeTinspector writes:

    I welcome the robot apocalypse!
    We aren’t likely to survive as humans beyond the next asteroid strike. Let’s get a synthetic plan B for species perpetuation.
    Make it smart, call it Human MkII and let it rip!

  3. veach writes:

    Thank you, Davecat, for cherrypick-posting this.

    I love to read about things that expand upon my own thoughts and ideas. In this case, it crystallized a concept I didn’t know was already packaged: the ‘uncanny valley’, which I’m interested in learning your take on – not only as an Idollator, but because you are very savvy on anime, animated films, robotics, and esoterica along those lines of design.

    If you are willing to craft a complete post expounding on this nugget, here are a few more talking points I would like your opinion on:

    How does functionality vs. appearance relate, in specific terms? (If Sidore were all but real, with pseudo-pulse and pseudo-breathing, and got angry or sad when you neglected her or broke her leg, wouldn’t that also be too much?)

    Wouldn’t humans prefer better hearing, better night-vision, better healing, better leg musculature and design for running (a la ostriches), etc.? Therefore, wouldn’t robots be designed better than us?

    Using the stenographer’s typewriter as an example (and its vast difference from the keyboard we all know – and no longer need to know, because it was functionally designed for right-handed manual typists), the idea that we would only design to the limits of the ‘human-familiar’, because otherwise we’d see an ‘uncanny resemblance’ and wouldn’t ‘connect empathically’, seems wrongheaded to me. Do you think it’s highly probable? Why?

    An Idollator’s perspective, compared to an owner/operator of a Venus 2000 or Sybian (and an owner/operator of a biological woman would make a great counterpoint, but I stray from the point, sorry) would be interesting.

    As would your theory on how/why there would be a difference between owning a pet, owning a RealDoll, but not owning an emotion-capable android…because that would be slavery (?).

  4. Davecat writes:

    SafeT –
    ‘Human Mk II’.. I kinda like that..

    WG –
    The point you bring up about humanity possibly becoming obsolete is really down to humanity itself. One of the many reasons that I’m pro-Synthetik is that it’ll theoretically allow us Organiks to concentrate less on petty, day-to-day shite, and let us work on the issues that will help us develop into better, well, people. Instead of working a dodgy McJob, more people would be inclined to go ahead with furthering their education for a career. Or for those of us that are less corporate-minded, we could further develop artistic or literary skills. Menial labour would be performed by Synthetiks with lower levels of intelligence, leaving humans to be less ‘work-driven’; barring, of course, those rabid workaholic-types. But as you brought up in your first point, humanity will probably spend an inordinate number of years persecuting, harassing, and damaging Synthetiks simply because humanity is an extraordinarily slow learner.
    Heh, I’m constantly slouching towards Utopia, so who’s to go by me..

    Veach –
    I’m gonna have to craft a response in answer to your questions this week, as I’ve needed something stimulating to write about. 🙂 It’ll be a bit of a while, but I really like your questions..

  5. Jeff "Wolfgang" Lilly writes:

    Herr Katze-
    The problem isn’t just about humanity’s penchant for bigotry towards anyone who is in the minority for whatever reason. There is also the notion that everyone must be doing something “useful” at all times, or else you are unworthy. The fact is, if we wanted a society where robots did a lot of the menial labor and freed the people to concentrate on self-betterment, we could largely have it… today. The public attitude toward those who don’t “earn a living” (no matter how boring, repetitive, or chimplike the job in question is), the backlash against the “welfare state” of the 60s, the current disdain for the “New Deal”, and the continual suspicion of communism and socialism preclude any sort of society where the working poor are allowed to “slack off”.
    Part of it is also the fact that the rich need someone to compare themselves to… if they saw the poor enjoying the same leisure time they did, then they couldn’t feel as superior anymore.
    The fact is, we could be living in a world today with a lot less disease, almost no hunger, rational and sane population control, and development and riches more evenly spread among what we now call the first world and the third world… IF we wanted to. That we don’t speaks to the animal nature of humanity, that evolutionary programming to better the self and the family at the expense of all others. This worked fine when we were all fuzzy little hunter-gatherers living in the grasslands, where “taking the advantage” meant beating your neighbor over the head and stealing his antelope. Now, doing the same thing means a shareholder causes the closing of a manufacturing plant, putting 5,000 people and their families out in the cold for the purpose of enriching his stock portfolio, or using bombs and AK-47s to slaughter entire villages of political opponents. The stakes are higher these days.
    My point is, the future is now… but we have to decide what kind of future it is to be. In humanity, of course, there is some good and some bad, and the outlook, as always, is murky.
