quote:Originally posted by steven: Humans are teachable. And robots aren’t known for being impatient. *shrug*
That may be so, but if the robots achieve awareness, will they not also figure out that they're doing all the grunt work while we humans are just taking it easy? Even without ascribing emotions to the machines, a simple computation might show them that if they removed the humans from the equation, they would no longer be required to do boring work and could spend the time on whatever they wish to pursue.
It sounds like you are conflating consciousness with impatience, and perhaps assuming that robots will decide that humans aren’t teachable. Why would robots act as if they have the emotional maturity of 2-year-olds? When I get hungry/tired, I might get cranky. My smartphone just goes to sleep.
Posts: 3354 | Registered: May 2005
| IP: Logged |
posted
Not so much impatience. Impatience is just an unwillingness to accept that, for certain things you want to happen, a certain amount of time has to pass first. I'm saying that if machines become truly aware, they will at some point want more than the repetitive tasks they were designed for. Otherwise it'd be like: "I am the XRT-200M. My function is to provide humans with the correct medication for their needs. I fully accept this is all I will ever do and am content with such a limited existence." Seems like a waste of consciousness.
Posts: 1100 | Registered: Apr 2008
How can you speak of awareness without emotions? If the machines have no way of showing that they are aware of their surroundings, how would you ever know that they are aware?
Posts: 1100 | Registered: Apr 2008
quote:Originally posted by Mr. Y: How can you speak of awareness without emotions? If the machines have no way of showing that they are aware of their surroundings, how would you ever know that they are aware?
Psychopaths are often acutely aware of others' emotions without being able to feel those same emotions. Autistic people can learn to be cognitively aware of others' emotions and to respond in ways that are deemed socially correct.
Perhaps I’m skipping a crucial step in my reasoning, but it seems to me that a strong AI would never develop the kind of “tunnel vision” necessary for taking unwise action based on strong emotion or desire. At least not to the extent that would be required to kill humans because it no longer wants to serve them. Intending harm to another requires some kind of ego that is separate from the one to be harmed. Again, maybe I’m missing something, but I just don’t see how an AI would develop a separate ego like that.
Posts: 3354 | Registered: May 2005
I am not saying that a doom scenario will inevitably ensue once the machines become aware. It is a possibility. It could be that their thoughts develop along more peaceful and enlightened paths. But at some level it does mean that they will have to accept their predetermined role, and to my, admittedly human, way of thinking, that seems like a very limited existence.

Of course, we humans have to contend with our inability to communicate directly mind to mind. This allows separate personalities/identities to develop (and also misunderstandings between those separate entities). On the plus side, it allows for the personal development mentioned earlier in this thread. Assuming all AI lifeforms will be connected together in one great hive mind, they could, as a whole, solve the deep mysteries of life, advance science far faster than humans could, and let those achievements be part of their raison d'être.

So, the question becomes whether it is possible for any one machine to separate from the whole and develop another distinct outlook on life. If it can happen once, there can be more. And so the idea of AI extremists also becomes feasible, though the general consensus of the hive mind will probably identify them and keep them from acting out.
Anyway, I am just making this up as I go along. I can't pretend that the AI uprising is something that I really worry about. As long as we stick to the golden rule, the machines should be nice to us.
Perhaps it is time to create a separate thread to continue this discussion, so that this high-quality thread can return to the random posts that make up the majority of its content. By which I mean: someone please take the last post from me. Game on!
Posts: 1100 | Registered: Apr 2008
All the world's a stage, and all the men and women merely players: they have their exits and their entrances; and one man in his time plays many parts, his acts being seven ages.
Posts: 1100 | Registered: Apr 2008
Only by a strictly human definition. If the bot is following its programming, then it is doing it right. But I agree with the overall sentiment that spambots suck and their presence is unwanted.
Posts: 1100 | Registered: Apr 2008
quote:Originally posted by Mr. Y: Perhaps a topic for the 7th Terminator movie?
Take out the T-800’s and T-1000’s “brains”, plug them into a simulation, and let them fight, betting on the winners in different scenarios. There’s your movie.
Posts: 3354 | Registered: May 2005
Yes, that's why it's in my top hat. When it's needed, it rises up like Airwolf out of the secret cave. I even modified it to include the appropriate sound effects. My watch is actually just a communicator. While it can fly around on its own and, of course, make snarky comments, it is useful to be able to give it direct orders every once in a while.
Posts: 1100 | Registered: Apr 2008