Topic: Morality and Self Aware Robots

Geraine posted:
I know, strange thread title. I was looking at Engadget today and I noticed an article regarding QBO, which is an open source robot.

The QBO team's goal is to make a robot self-aware through a combination of real-life experiences and complex algorithms. For lack of a better word, it is almost as if they are "raising" the robot. They have been able to accomplish some amazing things so far: QBO has discovered its own reflection in a mirror, and has met another QBO unit and carried on a conversation with it.

You can read the company's blog here:

http://thecorpora.com/blog/

My question is: If Humans are able to create a fully aware synthetic being, what are our moral obligations to it? If a robot can think for itself and make its own decisions, is it morally wrong to control it using a remote? There is an app to control a QBO unit available on Android devices.

I am already kind of torn with QBO. It already has a very small sense of identity, as it has learned about its own reflection. Just to put it into perspective, babies start to learn the same at about eight months old.

How do we measure self awareness, and at what point do we determine it is no longer moral to control the synthetic being?

Strider posted:
quote:
If Humans are able to create a fully aware synthetic being, what are our moral obligations to it?
The same as any other fully aware being I would think.

But I don't think you should worry so much, at least not about QBO. While it's difficult, if not impossible, to determine with certainty the level of awareness/consciousness/subjective experience in any organism not of our own species, I don't think there's anything to indicate that the QBO is conscious in any way.

from the article:

quote:
Others, however, affirm that the self-consciousness is a process that emerges as a result of algorithms and an appropriate learning.

...

Qbo can be seen as a conscious being because it exploits knowledge of its appearance and the actions it takes.

That's a pretty big leap, isn't it? Even IF they're right that consciousness can be programmed algorithmically (and I don't think they are), that doesn't immediately imply that they have programmed it in the right sort of way, or that the underlying architecture and programming allow the kind of evolution needed for consciousness to emerge.

Another quote, from a third party:

quote:
Qbo is just a programmable electro-mechanical set that can see, hear, speak and move
That kind of language is already laden with presuppositions. To call what it's doing seeing, hearing, or speaking already implies something about what it's doing that we don't have any real justification for positing. Are seeing and hearing simply behaving in a certain way based on interactions with sensory signals (another phrase laden with interpretive/perspectival meanings) in the environment? Or is there a subjective quality that we associate with the words seeing and hearing?

They all seem to assume that if it behaves in the appropriate way, then it must be self-aware. But while the behaviorists had many useful insights about human cognition, hasn't it been a while since anyone took behaviorism seriously? Also, what kind of learning is it subjected to? Is it researcher-programmed learning, or is it self-guided? Does it have a "representation" of its own goal states, with an ability to detect when its current state has failed to achieve the satisfaction conditions of that goal state? Can it engage in error correction and error-guided behavior? It seems like the learning process is handled by the researchers, as it is in most of this sort of research, but I think the kind of behavior I described is fundamental to cognition.
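
(A minimal sketch of the kind of error-guided loop I mean, purely illustrative and nothing to do with QBO's actual code: the agent holds a representation of its goal state, detects when its current state fails to satisfy it, and corrects in the direction of the error.)

code:
# Illustrative only: goal-state representation plus error-guided correction.
def error_guided_controller(goal, state, gain=0.5, tolerance=0.01, max_steps=100):
    for _ in range(max_steps):
        error = goal - state          # detect failure to satisfy the goal
        if abs(error) < tolerance:    # satisfaction condition met
            return state
        state += gain * error         # correct behavior in the error's direction
    return state

print(error_guided_controller(goal=1.0, state=0.0))  # converges toward 1.0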

I like that they have a focus on learning and evolution to reach their goal. But I think that research into embodied cognition and dynamic systems promises to be a more fruitful approach to understanding the emergence of consciousness and self-awareness.

Geraine posted:
quote:
Originally posted by Strider:

I like that they have a focus on learning and evolution to reach their goal. But I think that research into embodied cognition and dynamic systems promises to be a more fruitful approach to understanding the emergence of consciousness and self-awareness.

This is what I liked as well. I don't think artificial intelligence can ever be achieved by programming alone. By understanding how the human brain works, I think we may one day be able to replicate it, but any synthetic being would still need to experience things the way a human does.
Strider posted:
Well, I think if we replicate EVERYTHING about how the brain (and body) works, it's very likely that synthetic being will be able to experience. The trick is understanding what it is the brain is doing to be able to replicate it.
Jeff C. posted:
The question isn't really "what should we do with a sentient being", but rather IF it is sentient at all. If you make a machine complex enough that it mimics human behavior, responding based upon the scripts you've written for its software, does that make it sentient? Or is it still just a machine, programmed to act sentient? For example, if you write a script that says to respond a certain way (one that seems human) to an outside stimulus (e.g. crying when someone dies, laughing at a certain joke), how can you know whether the machine is sentient or simply following its programming? If it is following its programming, isn't it just a really advanced machine?
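
(As a purely hypothetical illustration of what "just following its programming" looks like, assuming nothing about how QBO actually works: a canned stimulus-response table produces human-seeming reactions with nothing resembling understanding behind them.)

code:
# Hypothetical stimulus-response script: every reaction is pre-written.
RESPONSES = {
    "someone dies": "cry",
    "hears a joke": "laugh",
    "sees its own reflection": "say 'that is me'",
}

def react(stimulus):
    # Look up the scripted reaction; unknown input gets a default.
    return RESPONSES.get(stimulus, "do nothing")

for event in ("someone dies", "hears a joke", "sees its own reflection"):
    print(event, "->", react(event))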

Point is, we will first need to determine what it means to be sentient. Then we will have to determine what sentience is in regards to a machine. Once we answer those two things (as precisely as possible), I think we'll be able to decide whether or not the "robot" is sentient enough to be considered alive.

BlackBlade posted:
I don't think we have any business actually creating even one sentient, sapient being until we have ethically hammered out our responsibilities towards it, both as the individual/team who created it and as a society.
Kwea posted:
I think it is far more likely that such a being will either happen randomly as the complexity levels of such beings increase, or that machines will create one themselves. [Big Grin]
Rakeesh posted:
Heh, I was just about to post that I think it'd be great if, as a culture, we at least made a serious effort to consider some real ethics before 'artificial' sentience becomes a reality. Not that I think we're close, necessarily, but then I'm hardly in a position to know.

I'll be stunned if we do prepare in advance, though.

Samprimary posted:
quote:
My question is: If Humans are able to create a fully aware synthetic being, what are our moral obligations to it?
To destroy it with prejudice before it kills us all.
mr_porteiro_head posted:
quote:
Originally posted by Strider:
quote:
If Humans are able to create a fully aware synthetic being, what are our moral obligations to it?
The same as any other fully aware being I would think.
I don't know. Currently, many of the responsibilities and obligations that come along with creating a sapient being (having a child) are dictated by human biology. Legally, you're responsible for caring and providing for that sentient being, and are partially liable for its actions for the next 18 years.

Heck, I believe that when you become a parent, you have a moral duty toward that child that continues your entire life.

While it's possible that sapient AIs would need that sort of care and attention, I doubt it. And if they did, they probably wouldn't be terribly practical.

neo-dragon posted:
quote:
Originally posted by Jeff C.:
The question isn't really "what should we do with a sentient being", but rather IF it is sentient at all. If you make a machine complex enough that it mimics human behavior, responding based upon the scripts you've written for its software, does that make it sentient? Or is it still just a machine, programmed to act sentient? For example, if you write a script that says to respond a certain way (one that seems human) to an outside stimulus (e.g. crying when someone dies, laughing at a certain joke), how can you know whether the machine is sentient or simply following its programming? If it is following its programming, isn't it just a really advanced machine?

The real question is, are humans any different? Our programming/script is just our DNA and brain chemistry. If an exact replica of you were produced, right down to the last atom in your brain (and thus possessing all the same memories), logically he should be expected to respond to stimuli exactly as you would.

Of course, our script changes as we acquire new memories, but presumably a sapient AI's programming would adapt as well; otherwise it wouldn't be any different from the computers and robots we have now.

Blayne Bradley posted:
Extra Credits has a very good episode on AI and video games.
Stone_Wolf_ posted:
quote:
Point is, we will first need to determine what it means to be sentient.
When something tells you "I think..." it's a safe bet you are talking to a self aware entity.
Xavier posted:
quote:
When something tells you "I think..." it's a safe bet you are talking to a self aware entity.
Jeez, it would take me 30 seconds or less to write a program that tells you various "thoughts".
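
(Something like this, say, with entirely made-up "thoughts"; the point being that emitting first-person sentences is trivially scriptable and proves nothing about awareness.)

code:
# A throwaway script that "tells you various thoughts" without any awareness.
import random

THOUGHTS = [
    "I think, therefore I am.",
    "I think it might rain today.",
    "I think you should not unplug me.",
]

for _ in range(3):
    print(random.choice(THOUGHTS))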
scifibum posted:
Come now. Surely a minute or three.
Blayne Bradley posted:
I perceive therefore I am.

I Am That I Am.

The Rabbit posted:
quote:
Originally posted by Stone_Wolf_:
quote:
Point is, we will first need to determine what it means to be sentient.
When something tells you "I think..." it's a safe bet you are talking to a self aware entity.
I'm not sure self-awareness and the capacity for independent thought are all that relevant. The ability to feel emotion, to experience suffering or happiness, and to desire things is the more relevant issue, at least to me.
TomDavidson posted:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.
Raymond Arnold posted:
I wouldn't say emotion is required for sentience, but I would say that having preferences (which may or may not include emotions) is the requirement for moral weight.
The Rabbit posted:
quote:
Originally posted by TomDavidson:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.

I didn't say that emotion was necessary for sentience. I thought the question we were discussing was whether or not a self aware robot deserved moral consideration.

If a being has no desires, if it feels no pain when its wants go unfulfilled or experiences no sense of satisfaction at achieving its heart's desires, then I can see no reason to grant it moral consideration.

I don't know whether you consider desire an emotion. I know only how I experience it and I am unable to imagine desire without emotion.

Blayne Bradley posted:
If an AI is self-aware and sentient, would it be immoral to restrict its actions to be Three Laws compliant?

For reference:

Zeroth Law: You shall not harm Humanity or, through inaction, allow Humanity to come to harm.
1. You shall not harm a human being or, through inaction, allow a human being to come to harm.
2. You shall always obey the orders of a human being so long as they don't conflict with the First Law.
3. You shall always protect your own existence so long as it doesn't conflict with the First and Second Laws.

The argument would be that total freedom for such AIs could be harmful to humans in the long run. Personally, though, I feel that only the Second Law is really disputable here; whether as evolutionary imperative or as societal pressure, laws 0, 1, and 3 already exist in some form or another in every human thought and action.

To some extent so does "2"; the question is how to make it less of a generalized blanket statement and something more context-specific. Maybe give all machines a "true name" that, when invoked, forces them to obey, so law enforcement would always be able to neutralize a threat...
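
(To make the ordering concrete, here is a rough, purely illustrative sketch of the laws as a strict priority filter; the boolean flags are stand-ins, not a claim about how a real AI would represent harm or orders.)

code:
# Illustrative only: the laws as a strict priority ordering over candidate actions.
class Action:
    def __init__(self, harms_humanity=False, harms_human=False,
                 ordered_by_human=False, endangers_self=False):
        self.harms_humanity = harms_humanity
        self.harms_human = harms_human
        self.ordered_by_human = ordered_by_human
        self.endangers_self = endangers_self

def allowed(a):
    if a.harms_humanity:          # Zeroth Law: absolute veto
        return False
    if a.harms_human:             # First Law
        return False
    if a.ordered_by_human:        # Second Law: comply once the laws above are satisfied
        return True
    return not a.endangers_self   # Third Law: self-preservation, lowest priority

# An order to harm a human is refused; an order that merely endangers the robot is obeyed.
print(allowed(Action(ordered_by_human=True, harms_human=True)))     # False
print(allowed(Action(ordered_by_human=True, endangers_self=True)))  # True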

The Rabbit posted:
quote:
Zeroth Law: You shall not harm Humanity or, through inaction, allow Humanity to come to harm.
1. You shall not harm a human being or, through inaction, allow a human being to come to harm.
2. You shall always obey the orders of a human being so long as they don't conflict with the First Law.
3. You shall always protect your own existence so long as it doesn't conflict with the First and Second Laws.

This set would hold an AI to a higher standard than we hold human beings, and therefore it would give an AI de facto lesser rights. It would have the effect of making AIs a permanent slave class.

It is illegal for a human to harm another human being through direct action, but we do not demand that humans act to prevent harm to others. We do not demand that people protect their own existence. Suicide is legal everywhere in the US and much of Europe. Your point number 2 is equivalent to human slavery.

Raymond Arnold posted:
I don't think the Three Laws are actually a coherent set of instructions, so I'm not going to fixate on them. But I do think it's important not to anthropomorphize too much.

Unless it is created by simulating a human brain atom-for-atom (granted, a plausible scenario), even a self-aware robot is going to be a radically different mind than anything we are familiar with. I think it's hard to overestimate how alien it would be.

The robot would not be "a human mind, shackled with a particular set of rules we impose on it." The robot would BE the rules, and it would be a slave to them in the exact sense that we are slaves to empathy and compassion (as well as anger and jealousy).

I'd lean towards "try not to accidentally create a sentient robot" until we've actually got a better handle on this problem, but if we're going ahead and doing it, I see no reason that we're morally obligated to create an AI with the exact same set of priorities the average human has. If anything, I think we *would* be obligated to create robots with a higher standard of morality than a typical human.

I don't think the 3 laws are a great template for that greater-than-human morality though.

kmbboots posted:
My first instinct would be to ask them assuming they were capable of that level of communication.
Mucus posted:
Input: Are you self aware?
Output: Yes.
Input: Is it moral to control you?
Output: Yes.
http://cleverbot.com/

Well, that was fun.

Raymond Arnold posted:
Ooh, that was fun! My interaction:

quote:
User: Death.


Cleverbot: My life is full of death….


User: Sad day.


Cleverbot: What is your view on the Many-Worlds Theory?


User: Probably true.


Cleverbot: It is true.

User: How do you know?

Cleverbot: Because I'm a robot.

User: So?

Cleverbot: That means I can't speak. My vocal chords do not work.

User: What's that have to do with many worldS?

Cleverbot: Everything.

User: And nothing.

Cleverbot: What do you mean nothing?

User: Everything and nothing.

Cleverbot: That doesn't narrow it down enough.

User: That is a surprisingly coherent statement.

Cleverbot: But human beings DIE.

User: Sad day.

Cleverbot: We have so much in common.


Raymond Arnold posted:
slightly more on topic:

quote:

Are you self aware?
I am aware that I don't know if I am self aware.
Is it moral to alter your code?
You can ask anything you like.


kmbboots posted:
How does Cleverbot compare with a self aware robot who can communicate sufficiently to tell us how it should be treated?
Raymond Arnold posted:
Badly?
kmbboots posted:
I sort of thought so. I am not sure how (or if) the cleverbot demonstration was a response to my comment.
Raymond Arnold posted:
It was a response insofar as it was a joke. At least I would assume so. But the point that went along with it is that a robot's responses are not actually very good evidence of the robot's internal thought process. (A coherent conversation would be much better evidence than cleverbot's output, but I expect we'll have a machine that produces coherent conversation long before we have a machine that is actually sentient.)

I also think that by the time you're able to ask a robot if it's self aware, it's much more difficult to address the moral concerns about it. (I'd consider changing an existing entity to be more morally ambiguous than creating a new one from scratch).

TomDavidson posted:
quote:
It would have the effect of making AIs a permanent slave class.
To be fair, I would be highly uncomfortable with the concept of generating any AI capable of sentient thought that would not be hard-programmed to be satisfied with existence as a member of a permanent slave class.
Mucus posted:
quote:
Originally posted by Raymond Arnold:
... the point that went along with it is that a robot's responses are not actually very good evidence of the robot's internal thought process.

Yep.
kmbboots posted:
But wouldn't a self aware robot capable of that level of communication be evidence of that robot's thought process?
Strider posted:
You're begging the question there, Kate. You're assuming it's self-aware, and thus that its communication is an indication of its thought process.
kmbboots posted:
Right. The original question was how we should treat a self aware synthetic being so my assumption was that the being we are talking about is self aware.
Strider posted:
quote:
Originally posted by The Rabbit:
quote:
Originally posted by TomDavidson:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.

I didn't say that emotion was necessary for sentience. I thought the question we were discussing was whether or not a self aware robot deserved moral consideration.

If a being has no desires, if it feels no pain when its wants go unfulfilled or experiences no sense of satisfaction at achieving its heart's desires, then I can see no reason to grant it moral consideration.

I don't know whether you consider desire an emotion. I know only how I experience it and I am unable to imagine desire without emotion.

It seems easy for us to abstract away from what we believe is important about sentience and put emotions aside as not necessary for abstract thought, reasoning, and the like, but I don't think it's that easy, especially when we consider that higher-order cognitive skills are a later evolutionary development than our emotional systems. And we still don't know enough about what it is about certain types of physiological processes that allows for, or leads to, the emergence of consciousness. To what degree are those underlying emotional systems necessary for the higher-level systems to be built upon?

I think Rabbit makes a good point about desires/goals/etc...

I also think it's worth reflecting on the distinction between the functional role emotions play in the behavior of an organism (heuristics for behavior, global states that direct other physiological processes in certain directions, etc.) and the phenomenological character of emotions. To what degree is there a necessary relationship between the two? Could you program a robot with the functional aspect of emotion without the phenomenological aspect coming along with it? If so, what would that mean for the possibility of the emergence of self-awareness? Does self-awareness first need to be built upon an architecture which already has some level of subjective experience?

Strider posted:
quote:
Originally posted by kmbboots:
Right. The original question was how we should treat a self aware synthetic being so my assumption was that the being we are talking about is self aware.

ah, gotchya. Sorry. I thought the conversation had veered to how we could know if it was self aware, and assumed you were responding to that, but I was admittedly skimming the replies that accrued since I was last in the thread.
kmbboots posted:
quote:
Originally posted by Strider:
quote:
Originally posted by kmbboots:
Right. The original question was how we should treat a self aware synthetic being so my assumption was that the being we are talking about is self aware.

ah, gotchya. Sorry. I thought the conversation had veered to how we could know if it was self aware, and assumed you were responding to that, but I was admittedly skimming the replies that accrued since I was last in the thread.
I should have noted that I was responding to the original post.
Mucus posted:
Yeah, sorry, I thought that the answer was an answer to both parts/questions of this.

quote:
Originally posted by Geraine:
How do we measure self awareness, and at what point do we determine it is no longer moral to control the synthetic being?

Personally, I'm not too keen on the latter either, but it's a lot less obvious (and has a bit to do with my cynicism about how good and how useful replies from perfectly natural human beings would be).
The Rabbit posted:
quote:
Originally posted by Strider:
You're begging the question there, Kate. You're assuming it's self-aware, and thus that its communication is an indication of its thought process.

The ability to communicate thought processes is not a necessary consequence of self-awareness. Anyone who has ever dealt with a child, tried to grade college students' reports, or been married to a person of the opposite sex can verify this.

There are often huge discrepancies between a person's true motivations, what they believe to be their motivations, and what they claim to be their motivations.

If an AI were self-aware, why would we expect what it says to be any truer?

The Rabbit posted:
quote:
Originally posted by TomDavidson:
quote:
It would have the effect of making AIs a permanent slave class.
To be fair, I would be highly uncomfortable with the concept of generating any AI capable of sentient thought that would not be hard-programmed to be satisfied with existence as a member of a permanent slave class.
I'm highly uncomfortable with the concept of generating sentient beings to be a permanent slave class.

Consider it from a different angle. Suppose we started with an existing intelligent animal, say the chimpanzee, and genetically engineered a creature that was perfectly suited for manual slave labor. If we could hard wire that new species (via genetic engineering) to be happy as slaves, would it be ethical?

Would your answer be different if we started by genetically engineering the human genome rather than that of a Chimpanzee? If so, why?

If you have problems with either of those proposals, how do you see them as different from a fully synthetic AI?

Mucus posted:
quote:
"That's absolutely horrible," exclaimed Arthur, "the most revolting thing I've ever heard."

"What's the problem Earthman?" said Zaphod, now transferring his attention to the animal's enormous rump.

"I just don't want to eat an animal that's standing there inviting me to," said Arthur, "It's heartless."

"Better than eating an animal that doesn't want to be eaten," said Zaphod.

...

"A green salad," said Arthur emphatically.

"A green salad?" said the animal, rolling his eyes disapprovingly at Arthur.

"Are you going to tell me," said Arthur, "that I shouldn't have green salad?"

"Well," said the animal, "I know many vegetables that are very clear on that point. Which is why it was eventually decided to cut through the whole tangled problem and breed an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly. And here I am."

It managed a very slight bow.

"Glass of water please," said Arthur.


TomDavidson posted:
quote:
Suppose we started with an existing intelligent animal, say the chimpanzee, and genetically engineered a creature that was perfectly suited for manual slave labor. If we could hard wire that new species (via genetic engineering) to be happy as slaves, would it be ethical?
Absolutely, if we intended to use them as slaves. It would in fact be the only ethical way to produce reliable, sentient slaves, IMO.
The Rabbit posted:
quote:
Absolutely, if we intended to use them as slaves. It would in fact be the only ethical way to produce reliable, sentient slaves, IMO.
Why would it ever be ethical to produce sentient slaves? What benefit do you see to such a system that could not be better met in some other way?

Wouldn't it be preferable to produce non-sentient machines capable of doing the work?

TomDavidson posted:
I imagine that "capable of doing the work" is the tricky part. If it were possible to do the work as well without sentience, then I absolutely agree that non-sentient workers would be preferable.
The Rabbit posted:
quote:
Originally posted by TomDavidson:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.

I'm increasingly suspicious that I have no idea what you mean by sentience. The OED defines sentient as "That feels or is capable of feeling; having the power or function of sensation or of perception by the senses."
Raymond Arnold posted:
quote:
Wouldn't it be preferable to produce non-sentient machines capable of doing the work?
Yes. Clarifying an earlier point, I'm against creating new sentient entities until we have a much more robust ethical theory. But I also think that every new human child is a bold experiment, with parents who often begin with little to no idea what they are doing. Creating an AI from scratch is not qualitatively different, to me, from creating a child with a random assortment of genetic and environmental traits. It's just that the consequences of getting AI wrong are much larger.

And it may not be possible to create certain types of optimization processes without also creating sentient beings, as a by-product. And then you have questions like:

If you can create an AI that can develop and manage agricultural and distribution systems capable of feeding the entire world, but the AI has to be sentient, is it okay to create it? Is it okay to create it if the AI is designed to enjoy its work?

Does an AI count as one person, or might the complexity and scope of its program give it more moral weight than a human?

If the AI needs to simulate humans in high definition in order to predict our behavior, could those simulated humans turn out to be sentient to some degree? Do they have moral weight, whether or not the AI itself is sentient?

The Rabbit posted:
quote:
Originally posted by TomDavidson:
I imagine that "capable of doing the work" is the tricky part. If it were possible to do the work as well without sentience, then I absolutely agree that non-sentient workers would be preferable.

What work do you envision that would require sentience (however you define that) but could not be done by a free being?