Hatrack River Forum
Topic: Morality and Self Aware Robots (Page 2)
Raymond Arnold posted:
I'm not sure what you're envisioning, but thinking about AI in terms of humanoid robots with intellect and drive comparable to humans' is rather small-scale. If you're talking about sentient AI, you're probably talking about either:

1a) creating a humanoid robot that is deliberately designed to be a new kind of person with equal moral weight to humans. (Why you would do this is kinda murky, and the plausible reasons squick me out.)

1b) creating a humanoid robot that was *supposed* to be nonsentient, but to simulate real people (minimally squicky reason: to provide companions for the elderly, or a particular kind of pet/friend for otherwise lonely people). It just turns out that accurately simulating a person can only be done by creating sentience.

or

2) creating a complex, powerful superintelligence to solve a hard problem, in which sentience is probably a by-product.

The Rabbit posted:
quote:
2) creating a complex, powerful superintelligence to solve a hard problem, in which sentience is probably a by-product.
I think the problem I'm having is in believing that we would be unable to control whether or not the AI developed desires and feelings, but somehow able to control what kind of desires and feelings it developed. To me, that's a fully irrational proposition.

If we presume that desires and feelings are an unavoidable by-product of intelligence, what reason do we have to believe that certain types of desires and feelings are avoidable? Controlling whether or not something is able to desire anything at all seems far simpler than controlling what it will desire.

The Rabbit posted:
quote:
minimally squicky reason: to provide companions for the elderly, or a particular kind of pet/friend for otherwise lonely people.
When the least squicky reason you can come up with for creating "happy slaves" is to provide artificial friends for friendless people, it ought to signal that something is seriously wrong with the whole proposition.
Strider posted:
quote:
Originally posted by The Rabbit:
quote:
Originally posted by Strider:
you're begging the question there, Kate. You're assuming it's self aware, and thus that its communication is an indication of its thought process.

The ability to communicate thought processes is not a necessary consequence of self awareness. Anyone who has ever dealt with a child, tried to grade college students' reports, or been married to a person of the opposite sex can verify this.

There are often huge discrepancies between a person's true motivations, what they believe to be their motivations, and what they claim to be their motivations.

If an AI were self aware, why would we expect what it said to be any truer?

Was this directed at me? I think that if an AI were self aware, that wouldn't necessarily tell us anything about the correspondence between the words it says and the thoughts in its head, but it would indicate that there is some sort of thought process going on. But I was responding to what I thought Kate was saying: that communication is itself an indication that a thought process is occurring. I was pointing out that this is so only if we already presuppose self awareness. Without that presupposition, it may or may not have a thought process.
Raymond Arnold posted:
quote:
Originally posted by The Rabbit:
quote:
minimally squicky reason: to provide companions for the elderly, or a particular kind of pet/friend for otherwise lonely people.
When the least squicky reason you can come up with for creating "happy slaves" is to provide artificial friends for friendless people, it ought to signal that something is seriously wrong with the whole proposition.
Well, in this case, the "minimally squicky reason" was to provide NONSENTIENT companions for friendless people, and the plan went awry. (I don't know that you HAVE to be sentient in order to simulate a human well enough to fool a friendless person; it was just a possibility.)

But yes, this whole thing is horrifying for all kinds of reasons, and we should not be barreling on ahead with these kinds of projects until we know what we're doing.

Raymond Arnold posted:
I'm only vaguely aware of the scope of the problem, but the general argument goes something like this (I'm skipping some areas to save space):

1) You have a problem that needs solving. (World hunger is a decent example). You're not trying to build an AI, you're trying to solve that problem, but you conclude that an AI is the best solution.

2) Since part of the problem is that you don't actually KNOW how to fix world hunger, or how to develop the technology that would be necessary, you can't just design a narrow AI for a specific purpose - you need it to be able to explore possible solutions, adapt, be creative, develop new technology, etc. You need it to be smarter than humanity. But you don't want it doing this randomly - you want it doing it for the particular purpose of "Solving World Hunger."

3) So you're designing a general artificial intelligence whose primary goal is "Solving World Hunger" rather than "procreate, have a family, and pursue art or the various random other things that humans do but which we don't need the AI to do."

This by itself may turn out to be enough to generate some kind of sentience. It may simply not be possible to run algorithms complex enough to be creative and solve goal-driven problems without becoming self aware. We can't answer this question because we don't even know what causes self-awareness in the first place.

The self-awareness (including something similar to emotion) would not be based on love or greed or whatever. It would be based around the core drive to Solve World Hunger. Not because the machine is empathetic enough to actually care about starving children, but simply because we gave it the prime directive "generate enough food to ensure everyone can eat, and then ensure everyone gets access to the food." This might manifest as something like a craving, and failing to complete its goal might be some form of suffering.

But there's another level of complexity to the problem:

4) To let the AI solve this problem, it's going to need lots of resources, and the ability to grow, design and build factories, etc. It's also going to need to be smarter than we are, and it may figure out how to persuade us to let it do things that we wouldn't have wanted.

5) The instruction "solve world hunger" doesn't include things like "preserve human autonomy" or "preserve humans' ability to express themselves creatively." It doesn't even actually include "preserve human life." So we need to make sure the AI doesn't imprison everyone and hook them up to feeding tubes to make its job easier.

6) So the AI ALSO has to have a working understanding of basically everything humans care about, and some set of priorities that allow it to make decisions like "clear small sections of forest to build a road to deliver supplies, but do NOT pave over the entire Amazon to create factory farms", and perhaps "forcibly remove from power warlords who would try to deny food to people who need it" but NOT "forcibly restrain people who are ideologically opposed to the World-Feeding-Machine and who are being mildly annoying but not 'evil' or 'coercive' by some metric." (There's a toy sketch of this gap right after the list.)

7) So the AI will also have to have an in-depth, mathematically expressible understanding of complex human emotion, and be able to simulate people well enough to predict how they'll act, and have some kind of moral framework that allows it to make decisions that we still struggle with today.

8) Having this understanding wouldn't inherently give the AI human emotion. It could care about human emotion and morality the way we care about the laws of physics. But again, this is all freakishly complex and we just don't know exactly what the consequences would be. It might develop some kind of value system, framed around "Provide food for everyone" but acknowledging facts about human desires, that we'd have a hard time predicting.

9) In the process of simulating individual humans, it might simulate them in such high definition (to get accurate results) that they actually become sentient minds. So the act of deciding NOT to pave over a village of real people might actually result in the birth, suffering and deaths of millions of simulated individuals, as the AI contemplated various actions and predicted how they would play out.
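Here's the toy sketch I promised for points 5 and 6. Everything in it is made up purely for illustration (the plan names, the numbers, the penalty weights), and a real AI would look nothing like a three-line scoring function; but it shows the shape of the problem. An optimizer given only the literal goal picks the monstrous plan, and it only behaves once the objective itself encodes the other things humans care about:

code:
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    people_fed: int        # millions of people fed
    forest_cleared: float  # fraction of the Amazon paved over
    people_coerced: int    # humans forcibly restrained

# Hypothetical candidate plans the AI might consider.
PLANS = [
    Plan("build local farms and supply roads", 600, 0.02, 0),
    Plan("pave the Amazon for factory farms", 900, 0.95, 0),
    Plan("feeding tubes for everyone", 1000, 0.0, 7_000_000_000),
]

def naive_objective(plan: Plan) -> float:
    # The literal instruction: "solve world hunger", and nothing else.
    return plan.people_fed

def constrained_objective(plan: Plan) -> float:
    # Made-up penalty weights standing in for "everything else humans
    # care about"; picking these terms correctly IS the hard problem.
    return (plan.people_fed
            - 5000 * plan.forest_cleared
            - 1e-3 * plan.people_coerced)

print(max(PLANS, key=naive_objective).name)        # feeding tubes for everyone
print(max(PLANS, key=constrained_objective).name)  # build local farms and supply roads

And the punchline of point 6 is that nobody currently knows how to write down the constrained version so that it actually covers everything that matters.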

Point 9 is actually much more concerning to me than the ethical status of the AI.

Again, my solution is "Do not even think about attempting this until you've done all kinds of preliminary theory."

And I'd be tempted to say "just don't do it ever," except that 24,000 people die every day of hunger, every one of those deaths is a horrible tragedy, and if you CAN figure out a way to do all of this safely, it may in fact be the best solution. The same goes for other problems of similar scope.

Blayne Bradley (unregistered) posted:
The problem with sentient AI is the Singularity: once you have self-augmenting intelligence, all of our rational ability to predict the future course of its evolution ends there. We have no idea as to its potential, or the threat it might pose to us. Putting in safeguards when treading such new ground, ground we can barely begin to imagine, is fairly reasonable.
mr_porteiro_head posted:
quote:
Originally posted by The Rabbit:
quote:
Originally posted by TomDavidson:
Hm. I wouldn't say that emotion is necessary for sentience. I would say that awareness of the sensation of one's own cogitation is the requirement. Emotion is, after all, just another set of subroutines.

I'm increasingly suspicious that I have no idea what you mean by sentience. The OED defines sentient as "That feels or is capable of feeling; having the power or function of sensation or of perception by the senses."
Nine times out of ten when the word sentience is used, what is really meant is closer to sapience.
aeolusdallas posted:
quote:
Originally posted by BlackBlade:
I don't think we have any business actually creating even one sentient, sapient being until we have ethically hammered out our responsibilities towards it, both as the individual/team who created it and as a society.

I would think we would be obligated to treat it as we would a human. As for the question of whether it is self aware or not, I would err on the side of caution.
Stone_Wolf_ posted:
Here is another scenario:

The interconnectedness of the world wide web eventually becomes self aware; that is, all the separate computers communicating constantly "wake up." Basically, one day all the screens of the world go blank, then say, "Hi. I am Web." and then go largely back to normal, except that now any networked computer is a part of a worldwide AI.

Let's say this Web is non-hostile, but still lets its presence be known: exploring, talking to people, etc. It publishes poetry and art, starts an advice blog, and even sends you articles or job tips that are actually interesting/helpful. But the thing is, the reason it is helpful and interesting is that everything you do online is known; there is no privacy. So far this entity doesn't blab passwords or embarrass people by posting their personal data, but it is clear that when you are online, you are not alone.

We can end this AI by turning every single computer off for a day, or at least disconnecting it from all outside networks.

What do we owe this child of our technology?

Strider posted:
quote:
We can end this AI by turning every single computer off for a day, or at least disconnecting it from all outside networks.

It won't work, Stone_Wolf_; the AI will be able to survive by temporarily storing itself in the trees until all the computers come back online.
King of Men posted:
If you create a sentient being, you owe it the same consideration you owe your children; indeed, there ought to be no moral distinction between the kind of child that can be made in a computer lab over the course of several years of highly complex coding and engineering, and the kind you can make with nine months' worth of unskilled labour.

The question of how you can detect such sentience in your non-organic creation, and thus know that you have that obligation, is much more difficult.

rivka posted:
quote:
Originally posted by Strider:
quote:
We can end this AI by turning every single computer off for a day, or at least disconnecting it from all outside networks.

It won't work, Stone_Wolf_; the AI will be able to survive by temporarily storing itself in the trees until all the computers come back online.
Or in one large mainframe. Or it will build its own device and live there. Or it will infect/possess a human, transferring via an electric shock.

I think that covers most of the usual tropes . . .

rivka posted:
quote:
Originally posted by King of Men:
nine months' worth of unskilled labour.

Look at the stack of books on the average pregnant woman's bedside table and say that again. [Razz]
King of Men posted:
Pregnant women in the middle class do tend to read a lot about pregnancy, yes. I rather strongly suspect that this is not true of the "average pregnant woman"; the sample - even if you limit yourself to the US - is going to include a lot of lumpenproletariat teenagers who don't read the instructions on a box of condoms, much less actual books. Besides which, I said "can make with [...] unskilled labour", which is still true no matter how much the average woman in this historical moment knows about pregnancy. After all, pregnancy is quite literally something that chimpanzees can be trained to do. Indeed, not much training is required!
Raymond Arnold posted:
This thread took an interesting turn.
rivka posted:
And if someday AIs can be mass-produced on an assembly line, would their assembly also be so disdainfully dismissed?
Mucus posted:
Damn toasters.
King of Men posted:
quote:
Originally posted by rivka:
And if someday AIs can be mass-produced on an assembly line, would their assembly also be so disdainfully dismissed?

Presumably the difficulty is in the software, not the hardware. So once you have your prototype, you can indeed make new AIs with unskilled labour; they just have to be able to type 'cp *.ai newAiLocation'. Which, indeed, a chimp can likely be trained to do.
Raymond Arnold posted:
quote:
Originally posted by rivka:
And if someday AIs can be mass-produced on an assembly line, would their assembly also be so disdainfully dismissed?

Probably. (I say that without value judgment; it's just a prediction.)
