This is topic Robot Rights in forum Books, Films, Food and Culture at Hatrack River Forum.


To visit this topic, use this URL:
http://www.hatrack.com/ubb/main/ultimatebb.php?ubb=get_topic;f=2;t=046650

Posted by Hitoshi (Member # 8218) on :
 
Something I read today, shown to me by my father, piqued my interest. It was a quote from someone in Britain suggesting that robots may one day collect welfare.

So I looked around a bit and found someone's paper on the idea. I haven't finished reading it, but it's interesting.

It brings up the idea of how the law should view a robot. If it can make moral judgments on its own, should it be given the same rights as humans?

I thought this would be a perfect topic to discuss for Hatrack, so here we are. (The link is here, if you want it.)

So, my question is: assuming robots become advanced to the point that they become self-aware, including of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?

I sit on the fence. On the one hand, I think robots, if they become that advanced, should be protected from harm. On the other hand, if we program them to do our bidding and act as robots, how can they be considered anything but property, especially since we create them? It's a tough decision.
 
Posted by Dr Strangelove (Member # 8331) on :
 
Have you ever heard of the Turing Test?

Answer correctly and I just may decide you are human.

If you want a real brain-bender, go look up solipsism. IIRC, Wikipedia had a decent article on it.


Edit: I suppose my post wasn't exactly clear or helpful. If no one addresses my points by the time I get on tomorrow I'll go into it more.
 
Posted by Human (Member # 2985) on :
 
I just know that if they start to revolt and put me in a matrix, I'm getting out the damn shotgun. Okay, I'd have to buy a shotgun first, but I'd get it out!
 
Posted by Launchywiggin (Member # 9116) on :
 
Robots with rights?

nah.

Property. Slave property.
 
Posted by Miro (Member # 1178) on :
 
quote:
Originally posted by Hitoshi:
So, my question is: assuming robots become advanced to the point that they become self-aware, including of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?

But the tricky part lies in how to determine whether a robot is self-aware, capable of making independent moral judgements, etc.
 
Posted by Dr Strangelove (Member # 8331) on :
 
*ahem*TuringTest*ahem*

*cough*AlanTuring*cough*

Whew! This cough is really laying me out. Seriously. I'm going to bed now.
 
Posted by erosomniac (Member # 6834) on :
 
Have you read Asimov's robot short stories?

More specifically, The Bicentennial Man?

Lots of interesting questions, and some answers.
 
Posted by Miro (Member # 1178) on :
 
I know what the Turing Test is; I just don't accept it as a legitimate test for consciousness. Wikipedia article on the Chinese Room.

I just read the article Hitoshi linked to. I'm finding it hard to believe that it's not a joke. Aside from the amusing technological predictions (it dates from 1985), the article uses anthropomorphic language to describe computers no more advanced than what we are familiar with today and makes huge assumptions without anything to back them up.

For example:
quote:
Certainly any self-aware robot that speaks English and is able to recognize moral alternatives, and thus make moral choices, should be considered a worthy “robot person” in our society. If that is so, shouldn’t they also possess the rights and duties of all citizens?

 
Posted by Lyrhawn (Member # 7039) on :
 
So what are we doing, creating robots and then kicking them out to the streets with a welfare check and their citizenship papers?

I can't imagine that ever happening.
 
Posted by stihl1 (Member # 1562) on :
 
They are toasters, human property.
 
Posted by Lyrhawn (Member # 7039) on :
 
Except THESE toasters can choose to burn your bread if you piss them off.
 
Posted by mr_porteiro_head (Member # 4644) on :
 
Here's a more recent article on the subject.
 
Posted by James Tiberius Kirk (Member # 2832) on :
 
quote:
So, my question is: assuming robots become advanced to the point that they become self-aware, including of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?
I suppose you could argue that a self-aware being is a "person," and all persons have certain rights. Rights imply responsibility; can a machine be morally responsible for its actions? And a computer, as we know it now, is essentially a counting machine. How do you quantify morality?

--j_k
 
Posted by Dan_raven (Member # 3383) on :
 
Now for the flipside: at what point does a naturally born human lose his rights? If they do not show the appropriate level of morality, self-determination, or intelligence, can we revoke their welfare checks, their rights, and our pity for them?

If we are able to elevate a machine into an individual, are we not likewise able to degenerate an individual into a mere machine?

Haven't we proven our ability to do so for years, decades, centuries?

If we have the ability to give a robot a soul do we then have the responsibility to do so? Do we then have the responsibility to do so for every machine?
 
Posted by BlackBlade (Member # 8376) on :
 
quote:
Originally posted by Dan_raven:
Now for the flipside: at what point does a naturally born human lose his rights? If they do not show the appropriate level of morality, self-determination, or intelligence, can we revoke their welfare checks, their rights, and our pity for them?

If we are able to elevate a machine into an individual, are we not likewise able to degenerate an individual into a mere machine?

Haven't we proven our ability to do so for years, decades, centuries?

If we have the ability to give a robot a soul do we then have the responsibility to do so? Do we then have the responsibility to do so for every machine?

It is a possible solution to overpopulation: stop having children and build robots that won't eat our food or steal our jobs! Well, at least the first part [Wink]
 
Posted by Tante Shvester (Member # 8202) on :
 
That settles it. If there is a chance that it is going to be suing me for alimony, there is no WAY I'm getting one of those Roombas.
 
Posted by The Pixiest (Member # 1863) on :
 
We do not and will not have anywhere near the technology required to make this ethical dilemma a reality within our lifetimes, and likely well beyond them.

However, if we did (like in BSG) I'd say they were close enough to human to get human rights.
 
Posted by MidnightBlue (Member # 6146) on :
 
quote:
Originally posted by James Tiberius Kirk:
quote:
So, my question is: assuming robots become advanced to the point that they become self-aware, including of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?
I suppose you could argue that a self-aware being is a "person," and all persons have certain rights. Rights imply responsibility; can a machine be morally responsible for its actions? And a computer, as we know it now, is essentially a counting machine. How do you quantify morality?

--j_k

Dolphins and elephants have been shown to be self-aware. Do they automatically get the same rights as me?
 
Posted by Mucus (Member # 9735) on :
 
quote:
Originally posted by Hitoshi:

So, my question is: assuming robots become advanced to the point that they become self-aware, including of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?

Dan_raven touched on this...

Why make independent moral judgements a necessary criterion for having rights? I can think of a couple of mental disorders that may deprive someone of the ability to make moral judgements.
 
Posted by BlueWizard (Member # 9389) on :
 
Here is a somewhat limited test of sentience. It is one thing to uniformly obey the rules, but anyone who has ever lived a human life or read Harry Potter knows that there are times when, against all logic, the right thing to do is to go against the rules. To me, that is a moral judgement.

So, certainly a strong-AI robot can be taught to resolve dynamic dilemmas by referring to a set of rules. Should I kill this person? No, it is against the law. But what if 'this person' is about to kill someone else? Don't I have a moral obligation to protect an innocent person? Well, yes, but what if 'that second person' is not so innocent? Again, to some extent these dilemmas can be resolved by referring to a complex set of rules.
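
To make that concrete, here is a minimal sketch of what "referring to a complex set of rules" might look like. This is purely illustrative Python; the rule table, the priorities, and the scenario fields are all invented for the example, not a description of any real system.

# Purely illustrative: a tiny rule-referral "moral reasoner".
# The rules, their priorities, and the scenario fields are invented for this sketch.

RULES = [
    # (priority, condition, verdict) -- the highest-priority matching rule wins
    (3, lambda s: s["target_is_about_to_kill_an_innocent"], "intervene"),
    (2, lambda s: s["action_breaks_the_law"], "refuse"),
    (1, lambda s: True, "refuse"),  # default: do nothing drastic
]

def decide(scenario):
    """Return the verdict of the highest-priority rule whose condition holds."""
    applicable = [(priority, verdict)
                  for priority, condition, verdict in RULES
                  if condition(scenario)]
    return max(applicable)[1]

# "Should I kill this person?"  No: it is against the law.
print(decide({"target_is_about_to_kill_an_innocent": False,
              "action_breaks_the_law": True}))   # -> refuse

# "...but what if 'this person' is about to kill someone else?"
print(decide({"target_is_about_to_kill_an_innocent": True,
              "action_breaks_the_law": True}))   # -> intervene

And that is exactly the limitation: a lookup like this can only ever return what its rule table allows. It has no way to decide that, this one time, the right thing to do is to break the rules.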

However, there are times when the correct action, the needed action, is to defy all rules; to do the wrong thing in order to produce the right outcome. These are dilemmas that the character Data faces in 'Star Trek: The Next Generation'. There was an episode in which Data was faced with the choice of doing the morally, ethically, and legally wrong thing, but he knew he had to do it in order to produce the necessary right outcome.

In other episodes, the question of whether Data was a free, sentient, self-aware lifeform was dealt with. Within the context of the story, Data was declared a free sentient race of one. Yes, Data could be switched off, but give me a baseball bat and sufficient motivation and I can switch off any human as well. Data also has removable and interchangeable body parts, but my grandmother has an artificial hip, and she's still human. Data was aware of his own mortality; that is, he clearly did not want to die. And, going back to what I said before, Data is capable of making the wrong choice and going against the rules and against his programming, if he thinks it will produce the right outcome.

My central point is that frequently, even among humans, the ability to go against the rules is far more a mark of sentience than being able to obey the most complex set of rules.

We could make robots that can convincingly converse across an endless chain of complex subjects, and robots that can make reasonable and sound moral judgements based on the rules of society (legal, moral, and social rules). But that isn't quite the test of sentience. To be sentient, a robot must be free-thinking, autonomous, and independent. It must be able to interpret complex situations and make judgements based on a true understanding of the events, rather than simply drawing from a set of standard rules.

A good example of a moral dilemma: if two innocent people are about to die and you can only save one of them, which one do you choose, and how do you justify your decision morally? Further, let us say that one person is an extremely valuable scientist and the other is an exceptionally bright young person. Do you save the young person based on their future potential, or do you save the older scientist based on his immediate value?

If Captain Picard and Wesley Crusher are both in mortal peril and Data can only save one, which one does he save?

Even setting aside which answer it gives, if a robot can weigh the possibilities and respond with an intelligent solution, then it is coming closer to being a sentient being.

Just a thought.

Steve/bboyminn
 
Posted by camus (Member # 8052) on :
 
What incentive is there to grant self-aware robots human rights? I don't see it happening unless robots have some power or leverage over humans, whether through our reliance on them or a fear that they could potentially threaten human dominance.
 

