Hatrack River Forum

Author Topic: Robot Rights
Hitoshi (Member # 8218) posted:
Something my father showed me today piqued my interest: a quote from someone in Britain suggesting that robots may one day collect welfare.

So I looked around a bit and found someone's paper on the idea. I haven't finished reading it, but it's interesting.

It brings up the question of how the law should view a robot. If it can make moral judgments on its own, should it be given the same rights as humans?

I thought this would be a perfect topic to discuss for Hatrack, so here we are. (The link is here, if you want it.)

So, my question is: assuming robots become advanced to the point that they are self-aware, including awareness of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?

I sit on the fence. On the one hand, I think robots, if they become that advanced, should be protected from harm. On the other hand, if we program them to do our bidding and act as robots, how can they be considered anything but property, especially since we create them? It's a tough decision.

Posts: 208 | Registered: Jun 2005
Dr Strangelove (Member # 8331) posted:
Have you ever heard of the Turing Test?

Answer correctly and I just may decide you are human.

If you want a real brain-bender, go look up solipsism. IIRC, Wikipedia had a decent article on it.


Edit: I suppose my post wasn't exactly clear or helpful. If no one addresses my points by the time I get on tomorrow I'll go into it more.

Posts: 2827 | Registered: Jul 2005
Human (Member # 2985) posted:
I just know that if they start to revolt and put me in a matrix, I'm getting out the damn shotgun. Okay, I'd have to buy a shotgun first, but I'd get it out!
Posts: 3658 | Registered: Jan 2002
Launchywiggin (Member # 9116) posted:
Robots with rights?

nah.

Property. Slave property.

Posts: 1314 | Registered: Jan 2006
Miro (Member # 1178) posted:
quote:
Originally posted by Hitoshi:
So, my question is: assuming robots become advanced to the point that they are self-aware, including awareness of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?

But the tricky part lies in how to determine whether a robot is self-aware, capable of making independent moral judgements, etc.
Posts: 2149 | Registered: Aug 2000
Dr Strangelove (Member # 8331) posted:
*ahem*TuringTest*ahem*

*cough*AlanTuring*cough*

Whew! This cough is really laying me out. Seriously. I'm going to bed now.

Posts: 2827 | Registered: Jul 2005
erosomniac (Member # 6834) posted:
Have you read Asimov's robot short stories?

More specifically, The Bicentennial Man?

Lots of interesting questions, and some answers.

Posts: 4313 | Registered: Sep 2004
Miro (Member # 1178) posted:
I know what the Turing Test is; I just don't accept it as a legitimate test for consciousness. See the Wikipedia article on the Chinese Room.
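
To make the Chinese Room point concrete, here is a toy sketch of my own (not from Searle or from the article): a responder that can pass a shallow conversation purely by symbol lookup, with no understanding anywhere in the loop.

code:
# Toy "Chinese Room": every reply is a rulebook lookup.
# Nothing in this program understands the symbols it shuffles.
RULEBOOK = {
    "how are you?": "I'm fine, thanks. And you?",
    "are you conscious?": "Of course. Aren't you?",
}

def room_reply(message):
    """Return a canned reply by pattern lookup; no meaning involved."""
    return RULEBOOK.get(message.strip().lower(), "Interesting. Tell me more.")

print(room_reply("Are you conscious?"))  # a convincing answer, zero comprehension

Scale the rulebook up as far as you like; the lookup never turns into comprehension.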

I just read the article Hitoshi linked to. I'm finding it hard to believe that it's not a joke. Aside from the amusing technological predictions (it's dated 1985), the article uses anthropomorphic language to describe computers no more advanced than what we are familiar with today, and it makes huge assumptions without anything to back them up.

For example:
quote:
Certainly any self-aware robot that speaks English and is able to recognize moral alternatives, and thus make moral choices, should be considered a worthy “robot person” in our society. If that is so, shouldn’t they also possess the rights and duties of all citizens?

Posts: 2149 | Registered: Aug 2000
Lyrhawn (Member # 7039) posted:
So what are we doing, creating robots and then kicking them out to the streets with a welfare check and their citizenship papers?

I can't imagine that ever happening.

Posts: 21898 | Registered: Nov 2004
stihl1 (Member # 1562) posted:
They are toasters, human property.
Posts: 1042 | Registered: Jan 2001
Lyrhawn (Member # 7039) posted:
Except THESE toasters can choose to burn your bread if you piss them off.
Posts: 21898 | Registered: Nov 2004
mr_porteiro_head (Member # 4644) posted:
Here's a more recent article on the subject.
Posts: 16551 | Registered: Feb 2003
James Tiberius Kirk (Member # 2832) posted:
quote:
So, my question is: assuming robots become advanced to the point that they are self-aware, including awareness of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?
I suppose you could argue that a self-aware being is a "person," and all persons have certain rights. Rights imply responsibility; can a machine be morally responsible for its actions? And a computer, as we know it now, is essentially a counting machine. How do you quantify morality?
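
To illustrate what "quantifying morality" would even look like on a counting machine, here's a toy sketch (my own made-up factors and weights, not anyone's actual proposal): score each candidate action on weighted factors and take the maximum. The arithmetic is trivial; the actual ethics is smuggled in by whoever picks the weights.

code:
# A counting machine's "morality": a weighted score over hand-picked factors.
WEIGHTS = {"lives_saved": 10.0, "harm_done": -4.0, "rules_broken": -3.0}

def moral_score(action):
    """Sum the weighted factors for one candidate action."""
    return sum(WEIGHTS[factor] * action.get(factor, 0) for factor in WEIGHTS)

actions = [
    {"name": "intervene", "lives_saved": 1, "harm_done": 1, "rules_broken": 1},
    {"name": "do nothing"},
]
print(max(actions, key=moral_score)["name"])  # "intervene" -- until you change a weight

Change one weight and the "right" answer flips, which is rather the point.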

--j_k

Posts: 3617 | Registered: Dec 2001
Dan_raven (Member # 3383) posted:
Now for the flipside--at what point does a naturally born human lose his rights? If they do not show the appropriate level of morality, self-determination, or intelligence, can we revoke their welfare checks, their rights, and our pity for them?

If we are able to elevate a machine into an individual, are we not likewise able to degrade an individual into a mere machine?

Haven't we proven our ability to do so for years, decades, centuries?

If we have the ability to give a robot a soul, do we then have the responsibility to do so? Do we then have the responsibility to do so for every machine?

Posts: 11895 | Registered: Apr 2002
BlackBlade (Member # 8376) posted:
quote:
Originally posted by Dan_raven:
Now for the flipside--at what point does a naturally born human lose his rights? If they do not show the appropriate level of morality, self-determination, or intelligence, can we revoke their welfare checks, their rights, and our pity for them?

If we are able to elevate a machine into an individual, are we not likewise able to degrade an individual into a mere machine?

Haven't we proven our ability to do so for years, decades, centuries?

If we have the ability to give a robot a soul, do we then have the responsibility to do so? Do we then have the responsibility to do so for every machine?

It is a possible solution to overpopulation: stop having children and build robots that won't eat our food and steal our jobs! Well, at least the first part [Wink]
Posts: 14316 | Registered: Jul 2005
Tante Shvester (Member # 8202) posted:
That settles it. If there is a chance that it is going to be suing me for alimony, there is no WAY I'm getting one of those Roombas.
Posts: 10397 | Registered: Jun 2005
The Pixiest (Member # 1863) posted:
We do not and will not have anywhere near the technology required to make this ethical dilemma a reality within our lifetimes, or for a long time beyond.

However, if we did (like in BSG), I'd say they'd be close enough to human to get human rights.

Posts: 7085 | Registered: Apr 2001
MidnightBlue (Member # 6146) posted:
quote:
Originally posted by James Tiberius Kirk:
quote:
So, my question is: assuming robots become advanced to the point that they are self-aware, including awareness of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?
I suppose you could argue that a self-aware being is a "person," and all persons have certain rights. Rights imply responsibility; can a machine be morally responsible for its actions? And a computer, as we know it now, is essentially a counting machine. How do you quantify morality?

--j_k

Dolphins and elephants have been shown to be self-aware; do they automatically get the same rights as me?
Posts: 1547 | Registered: Jan 2004
Mucus (Member # 9735) posted:
quote:
Originally posted by Hitoshi:

So, my question is: assuming robots become advanced to the point that they are self-aware, including awareness of their own "mortality" and ownership, and capable of making independent moral judgments, do they deserve some of the same rights reserved to humans?

Dan_raven touched on this...

Why make independent moral judgements a necessary criterion for having rights? I can think of a couple of mental disorders that may deprive someone of the ability to make moral judgements.

Posts: 7593 | Registered: Sep 2006
BlueWizard (Member # 9389) posted:
Here is a somewhat limited test of sentience. It is one thing to uniformly obey the rules, but anyone who has ever lived a human life or read Harry Potter knows that there are times when, against all logic, the right thing to do is to go against the rules. To me, that is a moral judgement.

So, certainly a strong-AI robot can be taught to resolve dynamic dilemmas by referring to a set of rules. Should I kill this person? No, it is against the law. But what if 'this person' is about to kill someone else? Don't I have a moral obligation to protect an innocent person? Well, yes, but what if 'that second person' is not so innocent? Again, to some extent these dilemmas can be resolved by referring to a complex set of rules, roughly like the sketch below.
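
Here is roughly what I mean by "referring to a set of rules", as a toy sketch of my own (loosely Asimov-flavored, not any real design): prioritized rules, where the highest-priority rule that matches the situation wins.

code:
# Toy rule-based dilemma resolver: the highest-priority matching rule wins.
RULES = [  # (priority, condition, verdict) -- higher number outranks lower
    (3, lambda s: s.get("innocent_in_danger"), "protect the innocent"),
    (2, lambda s: s.get("order_given"), "obey the order"),
    (1, lambda s: True, "preserve yourself"),
]

def resolve(situation):
    """Apply the highest-priority rule whose condition holds."""
    for _, condition, verdict in sorted(RULES, key=lambda r: r[0], reverse=True):
        if condition(situation):
            return verdict

print(resolve({"innocent_in_danger": True, "order_given": True}))
# -> "protect the innocent"; questioning the rulebook itself is not an option

Notice what is missing: any way for the robot to decide that the rulebook itself is wrong.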

However, there are times when the correct action, and the needed action, is to defy all rules: to do the wrong thing in order to produce the right outcome. These are the dilemmas that the character Data faces in 'Star Trek: The Next Generation'. There was an episode in which Data was faced with a choice of doing the morally, ethically, and legally wrong thing, but he knew he must do it in order to produce the necessary right outcome.

Other episodes dealt with whether Data was a free, sentient, self-aware lifeform. Within the context of the story, Data was declared a free sentient race of one. Yes, Data could be switched off, but give me a baseball bat and sufficient motivation, and I can switch off any human as well. Data also has removable and interchangeable body parts, but my grandmother has an artificial hip, and she's still human. Data was aware of his own mortality; that is, he clearly did not want to die. And, going back to what I said before, Data is capable of making the wrong choice and going against the rules and against his programming, if he thinks it will produce the right outcome.

My central point is that frequently, even among humans, the ability to go against the rules is far more a mark of sentience than being able to obey the most complex set of rules.

We could make robots that can convincingly converse across an endless chain of complex subjects, and robots that can make reasonable and sound moral judgements based on the rules of society (legal, moral, and social rules). But that isn't quite the test of sentience. To be sentient, a robot must be free-thinking, autonomous, and independent. It must be able to interpret complex situations and make judgements based on a true understanding of the events, rather than simply drawing from a set of standard rules.

A good example of a moral dilemma: if two innocent people are about to die and you can only save one of them, which one do you choose, and how do you justify your decision morally? Further, let us say one person is an extremely valuable scientist and the other is an exceptionally bright young person. Do you save the young person based on their future potential, or the older person based on his immediate value?

If Captain Picard and Wesley Crusher are both in mortal peril and Data can only save one, which one does he save?

Even if there is no single right answer, a robot that can weigh the possibilities and respond with an intelligent, justified choice is coming closer to being a sentient being.

Just a thought.

Steve/bboyminn

Posts: 803 | Registered: May 2006
camus (Member # 8052) posted:
What incentive is there to grant self-aware robots human rights? I don't see it happening unless robots gain some power or leverage over humans, whether through our reliance on them or a fear that they could threaten human dominance.
Posts: 1256 | Registered: May 2005
   

Copyright © 2008 Hatrack River Enterprises Inc. All rights reserved.
Reproduction in whole or in part without permission is prohibited.