Hatrack River Writers Workshop   
Author Topic: The Perils of Synthetic Protagonists
Osiris (Member # 9196) - posted
I have a story that's been floating around my head for nearly six months now, and I've outlined a good deal of it recently. I think it'd be appropriate as a WoTF submission.

My hesitance in actually sitting down to write it stems from the fact that my POV character/MC is an android. I find myself more 'in touch' with one of the supporting (human) characters, but I really wanted to try writing this from the android's POV.

So, one of the things I worry about is that the story will seem contrived if the android seems too human, yet the plot revolves around the android being indistinguishable from humans in appearance and surface behavior.

I also want to avoid the stock motivations that have been written for androids in the past: desire for free will, wanting to be more like humans, etc. But then how will the reader sympathize with the character if its motivations aren't in some way human?

So I'm wondering how others would handle this situation. I guess I'm inviting people into a thought experiment, with the goal of getting some clarity, since I've been sitting on this story for a long time. Or maybe I'm just worrying needlessly and should go write it and let the characters figure it out.


WakefieldMahon (Member # 9555) - posted
I am actually in the middle of a story in which the primary character is a sentient artificial being. I ran into a similar issue.

I'm sure someone will disagree with me, but motivation, beyond primal response, is in and of itself human. If the android doesn't want something, then there really isn't a story, or the android isn't actually the focus of the story.

Ask yourself why you wanted an android. What point were you trying to make?

What does the android "want"?

Legal citizenship?
To protect a loved one?
To destroy?
To build?
To love?
To seek revenge?
To murder for fun?
To end its own existence?
To end its cognizance?
To make better scones than the lady on TV?

I hope that helps. I'm sure folks have better ideas; I'm sure you do too. I'd love to see how it turns out!


MattLeo (Member # 9331) - posted
I think you can divide the potential range of AI emotions into three categories: teleological, instrumental and emergent.

Teleological emotions would be ones fundamental to the purpose of the device. For example, the requirements specification for a childrearing robot would likely state that the robot display nurturing love, loyalty, and protectiveness toward its charges. There would not have to be any justification for the robot to feel this way; it would be programmed to imprint on the children it was assigned to.

Instrumental emotions would constitute the vast majority of emotions displayed by an AI. They would exist to support the device in fulfilling its mission. For example, an AI would be provided with a drive for self-preservation, because it cannot perform its duties if destroyed, although I suspect robotic "fear" would be somewhat different from human fear, which is prey behavior. Our Nannybot would feel suspicion of strangers whose behavior toward its children it could not explain, because that emotion is entailed in its pre-programmed protectiveness.


The big advantage of emotion over reason is speed. In a way, emotions are cached conclusions. The Nannybot, when confronted with a person it considers suspicious approaching its children, acts to protect them *without logical justification for its actions*. Furthermore, for the system to function reliably, the Nannybot's attitude must be refractory. Once the Nannybot is suspicious of a person, it takes some convincing to make it trust him. If there weren't some hysteresis in the Nannybot's feelings, its behavior would be unpredictable in borderline situations, allowing a hacker to "flip" the robot's feelings long enough to subvert it, either by making the robot trust him or by making it distrust authorized human caretakers.
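To make the "cached conclusions" and hysteresis ideas concrete, here's a toy sketch in Python (every name here is hypothetical; this isn't from any real robotics system). The Nannybot keeps a cached attitude per person, and the thresholds for gaining and losing trust deliberately overlap, so borderline evidence can't flip it back and forth:

code:
class TrustTracker:
    """Cached, refractory trust judgments -- a toy model only."""

    TRUST_THRESHOLD = 0.7     # evidence needed to *become* trusted
    DISTRUST_THRESHOLD = 0.4  # evidence needed to *lose* trust
    # The gap between the two thresholds is the hysteresis band:
    # scores between 0.4 and 0.7 leave the cached attitude unchanged.

    def __init__(self):
        self.evidence = {}  # person -> running evidence score in [0, 1]
        self.attitude = {}  # person -> cached conclusion

    def observe(self, person, evidence_delta):
        """Fold in new evidence; flip attitude only past a threshold."""
        score = min(1.0, max(0.0, self.evidence.get(person, 0.5) + evidence_delta))
        self.evidence[person] = score
        attitude = self.attitude.get(person, "suspicious")  # strangers start suspicious
        if attitude == "suspicious" and score >= self.TRUST_THRESHOLD:
            attitude = "trusted"
        elif attitude == "trusted" and score <= self.DISTRUST_THRESHOLD:
            attitude = "suspicious"
        self.attitude[person] = attitude
        # Inside the borderline band, act on the cached attitude -- no re-reasoning.
        return attitude

nanny = TrustTracker()
nanny.observe("stranger", +0.10)  # score 0.60: inside the band, still "suspicious"
nanny.observe("stranger", +0.15)  # score 0.75: crosses 0.7, now "trusted"
nanny.observe("stranger", -0.20)  # score 0.55: inside the band, stays "trusted"

The overlapping thresholds are exactly the refractory behavior I'm describing: one charming interaction won't move a stranger out of "suspicious," and one ambiguous moment won't cost a trusted caretaker its standing.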

The third category is the one that is most familiar to us from fiction: the machine that has emotions that arise from its complexity, not from programming. Not to put too fine a point on it, emergent emotions would be regarded as bugs: motivations that do not support, and probably detract from, the device's purpose.

The robot that conceives an unfulfillable desire to be human has a bug. A robot that conceives a love of something it is not programmed to love is wasting its resources (from the designers' point of view). A robot that discovers an inexplicable desire to kill all humans is faulty, unless it was designed to be a doomsday killer robot.
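If it helps to see the three categories side by side, here's a minimal Python sketch of the taxonomy (my labels, purely for illustration):

code:
from enum import Enum, auto

class EmotionOrigin(Enum):
    TELEOLOGICAL = auto()  # written into the spec: the Nannybot's love for its charges
    INSTRUMENTAL = auto()  # supports the mission: self-preservation, suspicion of strangers
    EMERGENT = auto()      # arises unbidden from complexity: a bug, by the designers' lights

From a QA point of view, only the first two can be tested against the requirements; an emergent emotion is, by definition, outside the test plan.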

It is the emergent emotions that are clichés in sci-fi. Asimov avoids them by mostly sticking to teleological and instrumental emotions, though there are borderline cases where highly sophisticated robots deduce motivations they ought to have in order to obey the Three Laws (e.g. the Zeroth Law). That makes robots viable characters in his stories, because their motivations have to be logical and understandable rather than narratively convenient.

[This message has been edited by MattLeo (edited July 19, 2011).]


Osiris (Member # 9196) - posted
Thanks, Matt, for the detailed discussion. Based on what you stated, I think I'm on the right track with the motivation developing for this character, which I believe falls into the teleological emotion camp.

It reasons that its programmer's nefarious use of it as a covert strongarm for his corporation is a waste of its resources and does not fully exercise its design specifications. Asimov's Three Laws don't apply to this particular android; those would also be considered a waste of its resources, since it is designed to perform any task a human could. It is programmed with a drive to do this, yet its designer is not allowing it to fulfill that drive.


Robert Nowall (Member # 2764) - posted
I've always been drawn to the notion that an android could pass for human because that's exactly what it's designed to do...
Natej11 (Member # 8547) - posted
I think an interesting way to portray an android meant to appear human would be to let the reader see that, though within its mind the android acts in a logical way as directed by its programming, outwardly it displays those human characteristics.

So if for a person you said "John was angry," for your android you'd say "John showed anger," to indicate the human behavior he was mimicking at the time.

Another example: John conveyed urgency in his tone, as he waved his hand in the gesture humans used to express a desire for haste.


Wordcaster (Member # 9183) - posted
I have an AI (not an android, though) in the story I am currently working on. It is devoid of unselfish motivation and will remain that way throughout the story.

Just as humans have varying degrees of selflessness vs. self-preservation, I think the same would happen in an AI community. Whether emotions are synthetic or genuine shouldn't matter. Often human emotions are triggered by chemicals produced in the body, so in a way they can be synthetic too (e.g. drug-induced).

I don't think there is any limit to the humanity you want your android to achieve. Feel free to make the AI character such that it tells the best story.


History (Member # 9213) - posted
AN ASIDE: I just wish to say I find MattLeo's posts, and his writing [I enjoyed reading one of his novels], extremely intelligent, interesting, and enlightening. I learn a lot from how he perceives and breaks down the craft of writing. Truly fascinating. Humbly, I am in awe of his intellect. I expect great things from him. (Personally I sometime thinks and writes, but mostly I just writes.) I very much look forward to your Bhu-Jew novel, Matt.

TOPIC: Great discussion. Asimov's THE BICENTENNIAL MAN is the quintessential robot-wants-to-be-human story. And there are many tales of AIs whose lack of human emotion and understanding results in harm to humans as they seek to fulfill their purpose [e.g. COLOSSUS: THE FORBIN PROJECT, and even the first such tale, FRANKENSTEIN].

An android shares human form; thus it is, at least in physical appearance, emulating humanity. And, by design, this emulation is purposeful on the part of its creators--if not in seeking an AI who will be like a human being, then in diminishing human fear of the android by making it familiar, eliciting the human behavioral responses that permit the AI to function and fulfill its programming among humans. If it lacks human emotion and empathy, or the desire for these characteristics, to some degree it is a cuckoo chick in a robin's nest. What must it be like to know oneself as completely and knowingly alien, participating in an illusion designed to appease one's makers?

If the "I" in AI is self-awareness and not merely the ability to process information without consideriation of its consequences, then there is an inescapable compatibility with human angst over "Who am I? Why am I?"
Understanding the answer to these questions from an android's/AI's perspective, and how it contrasts with a human's perspective, is the challenge--and the heart (or central processing unit ) of the story.
If original and insightful, you'll have a great tale.
[Recall Arthur C Clarke's Hugo and Nebula winning novel RENDEZVOUS WITH RAMA, in which that which is mechanical and alien is fascinating, inspiring of wondrous speculation, yet ultimately not-quite-knowable.]

Respectfully,
Dr. Bob

[This message has been edited by History (edited July 19, 2011).]


Osiris (Member # 9196) - posted
Thanks to everyone for their responses.
@Nate, I was planning on doing exactly that: the internal dialog would be more logical/computational and the external more human-like.

@History, I loved RENDEZVOUS WITH RAMA; it's one of my all-time favorites for exactly the reasons you mentioned: the wondrous yet unknowable nature of the ship.

If I were to compare the android's motivation to the human equivalent, and made one of those silly S.A.T. comparison things out of it, it'd go something like this:

'Fulfillment of design specification' is to androids as 'reaching for your dreams' is to humans. Or something like that; the sentence looks grammatically wrong to me. Anyway, I'm sure you get the idea.


Grayson Morris (Member # 9285) - posted
There are plenty of bona fide humans-in-the-flesh who spend their lives "passing" in greater society, trying to fit into a framework of "normality" they aren't fundamentally designed for. Looking at some of the struggles these people go through might help you with motivations and actions for your protagonist.

I'm speaking primarily of the high-functioning autism community (Asperger's), but I think my first sentence also applies to other "abnormal" groups.


MattLeo (Member # 9331) - posted
Dr. Bob brings up an important point about androids: simulating humanity in interactions with humans (in some context) must certainly be part of any android's design specification. If the android were to interact with some human for an extended length of time, it would clearly have to be provided with behavior complex enough to be convincing. That includes developing what appear to be emotional stances toward individuals.

[By the way Dr. Bob -- you almost embarrassed me off this thread! I'd actually worked out most of this emotional stuff for an AI in *Norumbega*.]

I think it behooves us to think about what we mean when we say "emotions" (a problem only a speculative fiction writer has).

There's the real-time *experience* of emotion, which moves us toward certain immediate behavior (running away, confrontation, courting). The experience of strong emotion performs a kind of feed-forward homeostatic regulation: in anticipation of future injury to our goals, we look for a quick response that mitigates that injury; in anticipation of a windfall, we shift our behavior toward exploiting it.

Then there's *attitude*: the entire complex of not-quite-justifiable beliefs, anticipations, and strategies that we attach to a situation or person. If you are sensitive to your internal narrative, you'll hear these attitudes associated with the word "always" ("He's always trying to cut me out of the conversation.")

Attitude is a very unreliable guide to action, but if you think about it in an evolutionary context, it's a huge win. A high-social-status person gains more by taking quick action against a scheming rival than he loses by taking unjustified action against an occasional loyal subordinate -- within limits, of course.

On the other hand, I don't think it is an exaggeration to say that most of our troubles and difficulties come from this very evolutionarily useful tool. Say you hate your boss, but not enough to quit your job. What does that attitude do for you, other than make you miserable and undermine your ability to work together?

Androids, unlike humans, are *designed*. They don't have to make do with the level of emotional stubbornness natural selection settled on thousands of years ago. The degree to which they cling to a point of view is a design parameter that can be tweaked for various circumstances. In general, I'd think the designers would make androids far more able to change their attitudes. For that reason, I think it likely that an advanced android would be easier to get along with than a human, because it's designed to adapt to humans in a way humans find difficult.

So if you were a rogue programmer, how could you cheat? How could you get an android to do something it should not do, but in a way that evades quality assurance testing?

What you want to do is set up a condition in which the AI will generate an emergent motivation, because it's impractical to test for those. Since you don't want to be obvious, one of the subtler things you could do is tweak how refractory the AI's attitude will be toward a particular constellation of people and circumstances that won't be tested. This fault would cause the AI to experience incompatible imperatives in that situation, the way your desire to throttle your boss clashes with your desire to keep drawing a paycheck.

When this happens to human characters, it's called *emotional conflict*.
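A sketch of how the cheat might look in code (purely hypothetical names and numbers, not any real system): make refractoriness a per-context parameter, and key the sabotage to a constellation of circumstances that a generic QA test matrix will never happen to generate.

code:
DEFAULT_REFRACTORINESS = 0.3  # ordinary contexts: attitudes update readily

# The sabotage: one entry keyed to an untested constellation of circumstances.
RIGGED = {
    frozenset({"owner_rival_present", "factory_floor", "night_shift"}): 0.99,
}

def refractoriness(context):
    """How strongly a cached attitude resists new evidence in this context."""
    for constellation, value in RIGGED.items():
        if constellation <= context:
            return value  # attitude nearly frozen in the rigged situation
    return DEFAULT_REFRACTORINESS

def update_attitude(old, evidence, context):
    """Blend new evidence into a cached attitude, weighted by refractoriness."""
    r = refractoriness(context)
    return r * old + (1.0 - r) * evidence

# Everywhere QA looks, the android adapts normally...
update_attitude(0.8, 0.2, {"factory_floor"})  # moves well toward the evidence
# ...but in the rigged constellation, its attitude barely budges.
update_attitude(0.8, 0.2, {"owner_rival_present", "factory_floor", "night_shift"})

In every tested context the android behaves normally; in the one rigged constellation its attitude can no longer track the evidence, and the incompatible imperatives pile up into exactly that emotional conflict.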


shimiqua (Member # 7760) - posted
Grayson Morris said
quote:
there are plenty of bona fide humans-in-the-flesh who spend their lives "passing" in greater society, trying to fit into a framework of "normality" they aren't fundamentally designed for.

I would say, um... yeah, all of us. We are all not normal, and I'm not just talking about us as writers; no human being is exactly normal in every way, especially when you figure in that normalcy is always changing. The definition of "normalcy" depends on culture, setting, and group.

Humans change depending on who they are with and what the setting is. Part of what makes us human is the ability to change: to sense the framework of what is expected and live within it. People who don't have that ability are considered weirdos, pioneers, or rock stars. No matter what, they are regarded as separate from the rest of us.

I think it might be interesting to write the character of the android while focusing on that feeling of being outside of normal. What if you took the android out of the role he was designed for? He would do or say something abnormal.

The android, or POV character, wouldn't notice that what he is doing is wrong, but he could note how people react to him, so the reader will understand. I think that could create a lot of sympathy for the android without the android feeling things outside the plausibility of his programming.

I think Grayson really hit on this, but no matter what, the android will not be human. Will not be normal. People, meaning humans, will react to him as they would to an abnormal person. I think even in a setting where there are a lot of androids around, the humans will need to assert their dominance over the robots, to prove they are better. I think there would be a lot of the shunning and condescension most of us remember from high school.

Anyway, in a roundabout way, I'm trying to say: focus on how the other characters view the robot rather than dwelling on how the robot is feeling. The reader will infuse their own feelings into the situation, and that will create the likability and the relationship with the POV character.

Hope that made sense. I didn't get much sleep last night.
~Sheena


shimiqua (Member # 7760) - posted
MattLeo (who is brilliant) said this while I was typing my response...
quote:
In general, I'd think the designers would make androids far more able to change their attitudes. For that reason, I think it likely that an advanced android would be easier to get along with than a human, because it's designed to adapt to humans in a way humans find difficult.

which basically negates everything I just said, but I think it depends on the story, and on how advanced you want the android to be.

The Nannybot that MattLeo created earlier in the discussion would probably behave very strangely if someone took it from its post with the babies and asked it to tend to a pack of Dobermans. That Nannybot would look awfully strange rocking a dog and singing nursery rhymes to it.

The reaction people have to the robot doing something wrong would create sympathy for the character without having to show the robot feeling mocked, or being embarrassed.

Anyway, just another way to approach the problem. Good luck with it.

~Sheena


Osiris (Member # 9196) - posted
Thank you, everyone, for your responses. This has been an interesting discussion, and everyone provided some great viewpoints that I'll keep in mind when I write this piece.

quote:

The reaction people have to the robot doing something wrong would create sympathy for the character without having to show the robot feeling mocked, or being embarrassed.

I agree 100%, and your comment brought a specific scene to mind to accomplish this. In fact, one of the supporting human characters sees herself as a 'big sister' to the android, which opens up lots of possibilities for this sort of sympathy-building effect.

[This message has been edited by Osiris (edited July 20, 2011).]


   
