This is topic Do you think it's possible to create conscious machines? in forum Books, Films, Food and Culture at Hatrack River Forum.


To visit this topic, use this URL:
http://www.hatrack.com/ubb/main/ultimatebb.php?ubb=get_topic;f=2;t=049964

Posted by Threads (Member # 10863) on :
 
I do, but I’m wondering what everyone else thinks since the question is actually hotly debated.

[I use the phrase “conscious machine” rather loosely to mean anything that was artificially created and has consciousness. In other words, a “conscious machine” could still have biological parts.]

My primary issue with the claim that it is impossible to create conscious machines is that it implies that humans are supernatural beings. Given enough time, humans can theoretically recreate anything that is possible in the universe, so to claim that it is impossible to artificially create consciousness is to claim that our consciousness is supernatural. I take issue with this claim for a couple of reasons. First, we know so little about how consciousness comes about that it seems entirely unfounded to call it a supernatural process. There are plenty of examples from history where people attributed unexplained phenomena to supernatural beings and ended up being wrong. Of course, this only means that the claim is unfounded, not that it is wrong. My second issue is that it would imply that consciousness comes about by magic (for lack of a better term) during the development of a fetus. In other words, it would imply that the genetic code of the fetus is irrelevant to consciousness, because if the genes were relevant, that would mean there is a natural cause of consciousness. And if there is a natural cause of consciousness, then we can create a conscious machine.

On another note, one of the major limiting factors in trying to simulate a human brain is computational capacity. Even if we had the code to simulate a human brain, we would be unable to run it because modern-day computers just aren't powerful enough. If current trends hold, though, this limitation will be overcome surprisingly soon. Software emulation of the human brain should be possible by 2030 on a $1000 computer. Hardware emulation on supercomputers will be theoretically possible much sooner (2015 or so, iirc). Of course, being able to perform as many computations per second as the human brain is a far cry from actually being able to emulate it. Ideally, we will eventually understand how consciousness works, and we won't be hindered by having to deal with artificial human brains and the emotions that come along with them [Smile]
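The arithmetic behind predictions like that is easy to sketch. Here is a toy extrapolation in C; the brain-throughput figure (~10^16 operations per second) is a common ballpark rather than a measured number, and the 2007 baseline and doubling time are likewise assumptions, so treat the output as an illustration of the reasoning, not a forecast:

code:
#include <stdio.h>
#include <math.h>

int main(void) {
    /* All three numbers below are assumptions, not measurements. */
    double brain_ops      = 1e16; /* ballpark ops/sec for a brain      */
    double ops_per_1000   = 1e11; /* assumed ops/sec per $1000 in 2007 */
    double doubling_years = 1.5;  /* assumed doubling time             */

    double doublings = log2(brain_ops / ops_per_1000);
    printf("%.1f doublings needed => around the year %.0f\n",
           doublings, 2007.0 + doublings * doubling_years);
    return 0;
}

Compiled with -lm, this prints roughly 2032, which is the same neighborhood as the 2030 claim.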
 
Posted by The White Whale (Member # 6594) on :
 
If and when we get to the point you say we could by 2030, I think a bigger problem will be actually differentiating a "conscious machine" from...well...normal conscious beings. We already have the capability to simulate conscious thought (in a limited sense). I get emails that are machine-generated but read fine and make sense on the surface. I've chatted (over IM) with bots that seem pretty conscious until you start asking strange questions.

Think of 2001: A Space Odyssey. HAL9000 seemed conscious. It acted conscious. Since I am horrible at coming up with the right words, I'll quote a line from the movie:

"The sixth member of the Discovery crew [is] the latest result in machine intelligence - the HAL 9000 computer, which can reproduce, though some experts still prefer to use the word 'mimic,' most of the activities of the human brain, and with incalculably greater speed and reliability."

What would the definitive test of consciousness be?

I can't help but think of science fiction stories that directly address this. First and foremost, 'I, Robot' by Asimov. 'Software' and 'Wetware' by Rudy Rucker. And hundreds of others. It feels like every possible outcome of a conscious or near-conscious machine has been explored. All we need to do is see it actually happen. Or not happen.
 
Posted by Strider (Member # 1807) on :
 
I think that it's theoretically possible to do what you ask.

But like you say, there is so much involved in the human brain that goes way beyond computational capacity that I don't think we'll be close to achieving this task for a long while. Not to mention the fact that the sensation/subjective experience of consciousness itself is even less understood than brain function in general.

How will we know if a machine is conscious? If it tells us it is? That can be confusing. Is there an objective way of determining consciousness? And will people who equate consciousness with the idea of a soul ever accept the idea of machines as "conscious" no matter what happens?

I should add that I pretty much look at humans as conscious machines anyway, so the question for me isn't if a machine can be conscious, but if it's possible for us to create one.
 
Posted by King of Men (Member # 6684) on :
 
But of course; unskilled labour can do it in nine months.

But seriously, yes, what could possibly be the limiting factor?
 
Posted by King of Men (Member # 6684) on :
 
A good test of consciousness, I think, is the ability to write an original novel or to do graduate-level research in a hard science. Of course, somebody is no doubt going to prove me wrong by writing a novel-generating program in fifty lines of Lisp and put OSC out of business.
 
Posted by Threads (Member # 10863) on :
 
The Turing Test is the best method we have at the moment for determining whether or not something is conscious. It basically follows the "If it looks like a duck, waddles like a duck, and quacks like a duck, then it is a duck" principle. Of course, it doesn't actually prove whether or not something is conscious, but keep in mind that we use it every day by assuming that other human beings are conscious.
 
Posted by fugu13 (Member # 2859) on :
 
KoM: people generally aren't conscious until they've at least finished high school?

I don't like the term conscious. At least as commonly used, my dog is conscious. Sentient is a more common term for what is being talked about, I think.
 
Posted by King of Men (Member # 6684) on :
 
The Chinese Room is a good argument against the Turing Test, though; unfortunately it basically proves that humans aren't conscious either, we just follow really complicated rules.
 
Posted by Strider (Member # 1807) on :
 
agreed with fugu.

Does "sentient" include self-awareness? Can you be sentient but not conscious?

Is there a word that covers consciousness, self-awareness, and sentience?
 
Posted by King of Men (Member # 6684) on :
 
quote:
Originally posted by fugu13:
KoM: people generally aren't conscious until they've at least finished high school?

Not in America, no. But more seriously, the test is not intended as a lower bound; if you can do research, you are definitely sentient; it may be possible to be sentient without the ability to do research.
 
Posted by rivka (Member # 4859) on :
 
quote:
Originally posted by fugu13:
KoM: people generally aren't conscious until they've at least finished high school?

Having taught high school for many years, I think it is safe to say that as a general rule, no. There are some exceptions.
 
Posted by King of Men (Member # 6684) on :
 
To be fair, I do believe that many sixteen-year-olds are capable of doing research; it's just that to do anything useful, you have to learn a lot of what's already been done so you can think about what to do next, and that takes time. If someone were to invent an entirely new field, so that one did not need a vast knowledge base to get to the cutting edge, any number of teenagers could do useful work within that field. Indeed, exactly this happened in the eighties and early nineties when computers got cheap.

Edit: And, of course, there are recent examples of teenagers writing novels; indeed, my wife wrote some when she was a teenager, although they were not published. Actually, I think I would insist on publication and reasonable commercial success, just because any other criterion is so subjective.
 
Posted by Threads (Member # 10863) on :
 
quote:
Originally posted by King of Men:
A good test of consciousness, I think, is the ability to write an original novel or to do graduate-level research in a hard science. Of course, somebody is no doubt going to prove me wrong by writing a novel-generating program in fifty lines of Lisp and put OSC out of business.

Simple Betrayal, written by BRUTUS.1, which was in turn written by Selmer Bringsjord and others.

If your computer can handle postscript files then these links are better (you can also view them as text by using Google's cache):
Simple Betrayal
Self-Betrayal

Obviously these aren't novel-quality stories; however, I still find them extremely impressive for something that is computer-generated.
 
Posted by mr_porteiro_head (Member # 4644) on :
 
quote:
In other words, it would imply that the genetic code of the fetus is irrelevant to consciousness, because if the genes were relevant, that would mean there is a natural cause of consciousness.
No it wouldn't. Effects can have multiple causes. Both the soul and genetics could be relevant to consciousness. The presence of one doesn't necessarily mean the other is irrelevant.

quote:
Software emulation of the human brain should be possible by 2030 on a $1000 computer.
Extrapolations of this kind are essentially worthless. As an example, I point you to my flying car, which became common and affordable back in the 70s.

quote:
I don't like the term conscious. At least as commonly used, my dog is conscious. Sentient is a more common term for what is being talked about, I think.
I don't like the term sentient either, because it literally means "feeling", and thus applies to your dog as well.

I prefer the term sapient.
 
Posted by fugu13 (Member # 2859) on :
 
Sapience is probably best so far, but it has problems as well. It could even be argued that my dog is sapient (at least on some issues. She definitely exercises judgement about when to hit me up for treats . . .)

I think the lack of an easy description for what we're looking for underscores how little understood that is, even within ourselves.
 
Posted by Threads (Member # 10863) on :
 
quote:
Originally posted by mr_porteiro_head:
quote:
Software emulation of the human brain should be possible by 2030 on a $1000 computer.
Extrapolations of this kind are essentially worthless. As an example, I point you to my flying car, which became common and affordable back in the 70s.
Please... false analogy. The growth of computing power has been extremely consistent over the past decades. The flying car prediction was not based on any established trends.
Link
Link2

Notice that in addition to growing exponentially, the rate of growth is also increasing. There aren't any hardware limitations that we know of that will suddenly stop this growth.
 
Posted by MattP (Member # 10495) on :
 
I have to wonder how well we can identify the computational power of the human brain and, even if the power is available, whether we will be able to write software that effectively emulates all aspects of brain function.

Emulation of dissimilar hardware is notoriously difficult to do without taking a significant performance hit. Look at the MAME project some time. They "preserve" old arcade games by emulating the hardware of those old video game systems. Despite some of the emulated hardware being orders of magnitude slower and less complex than modern CPUs, some games still cannot be run at full speed on a modern PC because true emulation (as opposed to simulation) is computationally expensive.
 
Posted by mr_porteiro_head (Member # 4644) on :
 
Yes, I'm very familiar with Moore's Law.

I still think that using it to guess at what things will be like over two decades from now is just that -- a guess.
 
Posted by Architraz Warden (Member # 4285) on :
 
Perhaps I'm being obtuse, but how accepted is the Chinese Room argument among experts in the field?

Confirming what KoM said, I've known people (children and adults) whom you can hand a book of average to moderate difficulty and listen to them read part of it aloud, word for word, flawlessly, while they completely fail to comprehend even a single word of what they just read. Simple interpretation and replication is something that human beings are not only capable of, but something a fair number of us are consistently guilty of as well.

If I understand the comparison: say you asked the computer (in Chinese), "Don't you think that ducks look silly?" What I'm gathering is that the question of sentience (strong intelligence, whichever) is in the decision making and not the process. Every word of that statement that a human "understands" is understood through a process of familiarization (being able to associate the word with an image, definition, or grammatical rule). I don't see this being at all beyond the capabilities of a computer in the near future. Is the test then in how the computer comes to the decision behind the answer to "Do ducks look silly?" I'm not even sure how I come up with "Yes, webbed feet are just bizarre," let alone how feasible it is for a machine to reach that conclusion on its own.

Sorry, delayed and clipped response on account of work.

Feyd Baron, DoC
 
Posted by Xavier (Member # 405) on :
 
When I started my computer science major, I'd have said "Sure! Perhaps in 20 years."

Then I took about 6 different AI classes, and now my answer is "well probably, but almost certainly not in my lifetime".

I think our best shot is to evolve an artificial intelligence in an ALife program. The problem is creating an environment that selects for intelligence. I think it can be done, but I couldn't tell you how. It's possible that the first AI will evolve from a computer virus, as the internet is perhaps the only artificial environment which will approach that level of complexity.
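For what it's worth, the selection loop itself is simple to sketch; the hard part is exactly the one described above. A minimal toy genetic algorithm in C, where the fitness function (counting 1 bits) is a stand-in for an environment that selects for intelligence:

code:
#include <stdio.h>
#include <stdlib.h>

#define POP         64
#define GENES       32
#define GENERATIONS 200

/* Stand-in fitness: count of 1 bits. A real ALife system would score
   behavior in an environment instead. */
static int fitness(unsigned g) {
    int n = 0;
    while (g) { n += g & 1u; g >>= 1; }
    return n;
}

int main(void) {
    unsigned pop[POP];
    for (int i = 0; i < POP; i++) pop[i] = (unsigned)rand();

    for (int gen = 0; gen < GENERATIONS; gen++) {
        for (int i = 0; i < POP; i++) {
            /* Tournament selection: copy the fitter of two parents,
               then apply a single-bit mutation. */
            unsigned a = pop[rand() % POP], b = pop[rand() % POP];
            unsigned child = fitness(a) > fitness(b) ? a : b;
            child ^= 1u << (rand() % GENES);
            if (fitness(child) >= fitness(pop[i])) pop[i] = child;
        }
    }

    int best = 0;
    for (int i = 0; i < POP; i++)
        if (fitness(pop[i]) > best) best = fitness(pop[i]);
    printf("best fitness after %d generations: %d/%d\n",
           GENERATIONS, best, GENES);
    return 0;
}

Against a toy objective like this, the loop converges in a few hundred generations; against "be intelligent," nobody knows how to even write the fitness function, which is the unsolved part.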
 
Posted by King of Men (Member # 6684) on :
 
Well, let's do some math. A brain contains roughly 100 billion neurons. Each one has, say, five or ten connections to other neurons. So, in computer code, you'd emulate one like so:

code:
struct neuron {
    int exciteLevel;              /* current activation level      */
    struct neuron **connections;  /* pointers to connected neurons */
};

That's roughly 10 bytes per neuron. Then to simulate the brain, you need 10^12 bytes of memory just to store the neurons: 1 terabyte of RAM, in other words. On top of that comes all the modelling of the chemicals and whatever else influences brain function. Let's call it 10 terabytes all told. Even with Moore's law working at full clip, that's quite a bit of memory. Then, of course, you're going to need a CPU that can manipulate all these structs at some reasonable speed.
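Spelled out as a throwaway program (the per-neuron size and the 10x fudge factor for chemistry are both assumptions from the estimate above):

code:
#include <stdio.h>

int main(void) {
    /* Assumptions from the estimate: ~100 billion neurons, ~10 bytes
       each, and a 10x fudge factor for chemistry and everything else. */
    double neurons       = 1e11;
    double bytes_each    = 10.0;
    double structs_total = neurons * bytes_each;

    printf("neuron structs alone: %.0e bytes (~%.0f TB)\n",
           structs_total, structs_total / 1e12);
    printf("with chemistry etc.:  ~%.0f TB\n",
           structs_total * 10.0 / 1e12);
    return 0;
}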
 
Posted by Tatiana (Member # 6776) on :
 
I think it's certainly possible but we aren't even close to doing it yet.

After all, a human body is a machine, a very sophisticated biological one, but definitely a machine that works using molecules and the laws of physics. Who is to say that an aiua wouldn't take up residence in any suitable receptacle? I think we totally can do it, but it might take hundreds or thousands of years to learn enough to know how.

I think when we do that we'll probably learn how to resurrect once-dead people into new perfect bodies. That will be cool! Do y'all want me to resurrect you when the time comes?
 
Posted by fugu13 (Member # 2859) on :
 
Moore's law only applies to the density of transistors; there's no way of knowing how that'll translate into increased RAM capacity, particularly at densities as high as we're at now. I suspect major advances in RAM capacity will come more slowly until a new technology is developed.

Of course, that total is easily reachable if the system can be parallelized (which one would expect; neural connectivity is sparse). Given a decent reason, I could get access to a supercomputer (we have a couple) with that much memory, though likely not for the processing time required.
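A sketch of why that parallelizes, in C with OpenMP. The struct is KoM's with a hypothetical double-buffer field added, and the activation rule is invented purely for illustration:

code:
#include <stddef.h>

struct neuron {
    int exciteLevel;
    struct neuron **connections;  /* NULL-terminated in this sketch */
    int nextLevel;                /* hypothetical double buffer     */
};

/* Each neuron reads its few neighbours and writes only its own
   nextLevel, so iterations are independent and split cleanly
   across cores. The update rule itself is made up. */
void step(struct neuron *net, size_t n) {
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++) {
        int sum = 0;
        for (struct neuron **c = net[i].connections; *c; c++)
            sum += (*c)->exciteLevel;
        net[i].nextLevel = sum / 2;
    }
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++)
        net[i].exciteLevel = net[i].nextLevel;
}

Because each neuron touches only a handful of others (the sparsity just mentioned), the work divides across as many processors as a cluster can offer.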
 
Posted by mr_porteiro_head (Member # 4644) on :
 
Nah. I'll wait.
 
Posted by Threads (Member # 10863) on :
 
quote:
Originally posted by King of Men:
That's roughly 10 bytes per neuron. Then to simulate the brain, you need 10^12 bytes of memory just to store the neurons: 1 terabyte of RAM, in other words. On top of that comes all the modelling of the chemicals and whatever else influences brain function. Let's call it 10 terabytes all told. Even with Moore's law working at full clip, that's quite a bit of memory.

I know that there is a supercomputer in existence that has 6 terabytes of memory, so the 10-terabyte requirement is very achievable. Desktop computers (assuming that we'll still be using them) still have a ways to go, though.

I think the largest barrier we will encounter when attempting to model the brain will be, as you said above, trying to model all the complex chemical reactions behind the operation of our brain. We have trouble simulating even a small number of atoms as it is, and who knows how much "detail" is needed when trying to create an artificial brain.

Hopefully we will discover a more general model of consciousness than the human brain.

Here's a random question that popped into my mind: Can a process exhibit intelligence without being conscious? For example, does evolution exhibit intelligence? I'll try to address this in more detail tomorrow; however, my initial reaction is that it does. It has memory in the form of DNA. It is adaptable in that it works in a wide range of different climate conditions. It learns: for example, once legs evolved, they did not have to evolve again for each separate creature that used them. Different creatures have different types of legs, but a leg is still a leg [Smile] . It might rank incredibly stupid on an IQ scale because it takes so long to work, but I don't think it's unreasonable to say that it exhibits intelligence. Obviously, as far as we know, the process of evolution lacks any sort of central intelligence or consciousness, so there is no "it" that actually learns, memorizes, or adapts, but it still clearly emulates these features as a whole. I haven't fully developed my opinions on this idea and I don't know how it connects to the OP, but I still think it's interesting food for thought.
 
Posted by Zhil (Member # 10504) on :
 
Not only does Moore's law apply only to the density of transistors, it'll probably come to a screeching halt sooner rather than later as manufacturers get closer to the physical limitations of... real life. FinFETs are, like, what, 3 nm thin or something? Something ridiculously thin? It's going to get down to the quantum level soon, and that's bad news for traditional transistors. Weird stuff happens thar.

Moore's law has been "slowing down" for a couple of years now, which is one of the reasons why dual-core and quad-core chips are all the rage in computing nowadays. [edit: Especially in supercomputer and networking systems. Dual- and quad-core chips are relatively new in personal computers, but they've been used in the bigger systems for a long time.]
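The scaling arithmetic is easy to play with. A toy check in C, assuming a 65 nm process in 2007, a halving of feature size every three years, and a stop near silicon's atomic spacing (all round numbers; real scaling is more complicated):

code:
#include <stdio.h>

int main(void) {
    double nm   = 65.0;  /* assumed 2007 feature size  */
    int    year = 2007;
    while (nm > 0.5) {   /* ~atomic spacing in silicon */
        nm /= 2.0;
        year += 3;       /* assumed halving time       */
        printf("%d: ~%.2f nm\n", year, nm);
    }
    return 0;
}

Even with generous assumptions, the halvings run out within a couple of decades, which is the wall described above.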
 
Posted by ricree101 (Member # 7749) on :
 
quote:
Originally posted by Zhil:

Moore's law has been "slowing down" for a couple of years now, which is one of the reasons why dual-core and quad-core chips are all the rage in computing nowadays. [edit: Especially in supercomputer and networking systems. Dual- and quad-core chips are relatively new in personal computers, but they've been used in the bigger systems for a long time.]

Correct me if I'm wrong, but I was under the impression that transistor growth had pretty much outstripped the ability to design faster processors. So basically, we had the ability to add a lot more transistors to a chip, but manufacturers would get more performance out of using those transistors for a two-core chip than by trying to make a faster single-core chip.
 
Posted by Samprimary (Member # 8561) on :
 
I think we're going to create sapient digital life by trying to create complicated spambots designed to trick their way through human verification tests for forums.
 
Posted by Zhil (Member # 10504) on :
 
In a way, you're right. Dual/quad core allows for better threading and parallel programming, which should allow for faster computers. The problem is that it's much harder to program in parallel across two or four cores than with just one. There's a whole field that deals with the networking issues, power issues, etc.

It should also be noted that performance isn't based only on CPU clock speed. If my memory serves correctly, the latest quad-core CPU for supercomputers has awe-inspiring throughput but a slower clock, which adds up to overall better performance than its predecessor.

In supercomputers and networking systems it's pretty much necessary nowadays, but in personal computers it's really not. The CPU manufacturers are taking the long look ahead: they realized that unless some genius creates new transistors that can go smaller than current ones, they'll begin to "slow down", so they've started the dual/quad stuff for personal computers to continue the "Look! We're TWICE AS FAST as last year!!" trend. So Moore's law has been "slowing down" because of a combination of physical limitations and long-term marketing plans. [Smile]

Concerning sentient, sapient AI: No way in 100 years. [Frown]
 
Posted by mr_porteiro_head (Member # 4644) on :
 
quote:
It's going to get into the quantum level soon, and that's bad news for traditional transistors. Weird stuff happen thar.
I remember about ten years ago reading that this would happen within a few years. It doesn't seem to have.
 
Posted by fugu13 (Member # 2859) on :
 
It has to some extent. Quantum considerations have become important, but so far we've worked around them.
 
Posted by Zhil (Member # 10504) on :
 
Predictions being off doesn't mean it'll never happen. It will happen if we keep focusing on simply making transistors smaller and packing them onto chips more densely, which is why we aren't really trying that anymore, and why quantum tunneling isn't a problem in mainstream technology. Like I said, CPU manufacturers are taking the long-term approach to design, thankfully.
 
Posted by Nato (Member # 1448) on :
 
I think consciousness is probably more related to the ability to learn and adapt until you understand how to "use" your brain to get work done. I think that if we are going to make computers conscious, it won't be a matter of processing power.
 
Posted by Tatiana (Member # 6776) on :
 
quote:
Originally posted by mr_porteiro_head:
Nah. I'll wait.

Don't be like the guy praying to be rescued from the flood who keeps turning boats down because he's waiting for God to rescue him, until finally God says, "What else do you want? I sent three boats!" [Wink]
 
Posted by Nighthawk (Member # 4176) on :
 
quote:
Originally posted by King of Men:
code:
struct neuron {
    int exciteLevel;
    struct neuron **connections;
};

That's roughly 10 bytes per neuron...
Depends... you think the brain is 32-bit or 64-bit addressing?
 
Posted by Qaz (Member # 10298) on :
 
Thing is, consciousness is not the state of being able to produce complex behavior. Consciousness is the state of being conscious or aware. The Turing test is about complex behavior, and Turing argues that that's all we should be talking about.

I don't think we can make conscious machines because I don't see how the concept of consciousness can be reduced to anything replicable (or anything else). This is not a supernatural argument, surely. I am not sure if it leads to supernature -- I don't see how -- but surely we should go where reason takes us. There are other things humanity can never create: everything that predates humanity (unless time travel is possible); pi; energy (as opposed to merely changing its form).

But as for making machines that can do complex behaviors, sure, we just do not know how yet.
 
Posted by Nighthawk (Member # 4176) on :
 
Arguably, we probably *do* know how; the question is more how to make it efficient without ending up with a computer system the size of Colorado.
 
Posted by TomDavidson (Member # 124) on :
 
I read the subject line and thought, "We already can. Sometimes it even happens by accident." [Smile]
 
Posted by MattP (Member # 10495) on :
 
quote:
I don't think we can make conscious machines because I don't see how the concept of consciousness can be reduced to anything replicable (or anything else).
If consciousness is an emergent property of a particularly complex physical structure, then I don't see why it could not, in principle, be created artificially, if technology ever allows that structure to be replicated.
 
Posted by King of Men (Member # 6684) on :
 
quote:
Originally posted by Nighthawk:
quote:
Originally posted by King of Men:
code:
struct neuron {
    int exciteLevel;
    struct neuron **connections;
};

That's roughly 10 bytes per neuron...
Depends... you think the brain is 32-bit or 64-bit addressing?
That's why I specified 'bytes', meaning 'one address' rather than a specific number of bits. Although, in fact, with 32-bit bytes you couldn't address that many structs anyway.
 
Posted by Threads (Member # 10863) on :
 
quote:
Originally posted by King of Men:
quote:
Originally posted by Nighthawk:
quote:
Originally posted by King of Men:
code:
struct neuron {
    int exciteLevel;
    struct neuron **connections;
};

That's roughly 10 bytes per neuron...
Depends... you think the brain is 32-bit or 64-bit addressing?
That's why I specified 'bytes', meaning 'one address' rather than a specific number of bits. Although, in fact, with 32-bit bytes you couldn't address that many structs anyway.
Not sure what you mean by "That's why I specified 'bytes', meaning 'one address' rather than a specific number of bits." A byte is 8 bits for all intents and purposes, and there is no reason for that to change.

I believe Nighthawk's point was that C does not standardize the size of an int or a pointer. On a 32-bit machine that struct would probably require 8 bytes of storage plus alignment (4 bytes for the int, 4 bytes for the pointer), whereas on a 64-bit machine it would probably require 12 or 16 bytes (4 or 8 bytes for the int, 8 bytes for the pointer). That doesn't include the overhead of storing the extra pointers to the actual structs.
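This is easy to verify directly. A quick C program using the struct as KoM declared it; typical output is 8 on a 32-bit target and 16 on a 64-bit (LP64) one, where the 4-byte int is padded so the 8-byte pointer stays aligned:

code:
#include <stdio.h>

struct neuron {
    int exciteLevel;
    struct neuron **connections;
};

int main(void) {
    printf("sizeof(int)            = %zu\n", sizeof(int));
    printf("sizeof(struct neuron*) = %zu\n", sizeof(struct neuron *));
    printf("sizeof(struct neuron)  = %zu\n", sizeof(struct neuron));
    return 0;
}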
 
Posted by King of Men (Member # 6684) on :
 
As far as I'm concerned, a byte is one machine address, whatever size that is.
 
Posted by Threads (Member # 10863) on :
 
But that's not a correct definition...
 
Posted by King of Men (Member # 6684) on :
 
But it's convenient.
 
Posted by Threads (Member # 10863) on :
 
quote:
Originally posted by Qaz:
I don't think we can make conscious machines because I don't see how the concept of consciousness can be reduced to anything replicable (or anything else). This is not a supernatural argument, surely. I am not sure if it leads to supernature -- I don't see how -- but surely we should go where reason takes us. There are other things humanity can never create: everything that predates humanity (unless time travel is possible); pi; energy (as opposed to merely changing its form).

If something occurs that violates the laws of our universe then it is a supernatural event.

"everything that predates humanity (unless time travel is possible)"
- Given enough time, we can theoretically create anything as long as its creation does not require breaking the laws of the universe. For example, we could create stars, galaxies, quasars, and black holes if we had enough time. It would require an enormous amount of effort and would be extraordinarily impractical; however, it would be silly to claim that we could never do it.

"pi"
- Pi is a human construct

"energy (as opposed to merely changing its form)"
- Energy cannot be created by anything in the universe as far as we know. If we did manage to discover a case where energy was created then it would mean that either energy creation is possible, in which case we could do it, or that we just witnessed a supernatural event.

We see conscious beings coming into existence every day. If consciousness is not supernatural, then we should be able to create it in ways other than through sex.
 
Posted by Nighthawk (Member # 4176) on :
 
quote:
Originally posted by King of Men:
As far as I'm concerned, a byte is one machine address, whatever size that is.

The "connections" object is a pointer, and depending on the memory addressing model, that pointer will be a 32-bit or a 64-bit address. That's why 32-bit systems can only access 4Gb of memory (2^32, or 4,294,967,296 bytes).

And, yes, "int" is a platform dependent construct, and will vary in size depending on the bit depth it is being compiled under. If you want to force the bit depth, at least in Win32 programming, you would explicitly use "long" (32-bit), or "_int64" (64-bit).

But this conversation digresses from the point of this thread.

"Dave... this conversation can serve no purpose anymore. Goodbye." - HAL9000
 
Posted by Tatiana (Member # 6776) on :
 
Qaz: "I don't think we can make conscious machines because I don't see how the concept of consciousness can be reduced to anything replicable (or anything else)."

Well, we do it already all the time, or our bodies do. It takes only nine months using unskilled labor. We just don't know all the ins and outs (no pun intended) of how we actually accomplish it.

As far as all the building of tissues, bones, nerves, muscles, organs, etc. goes, we don't yet understand in detail how the biochemical processes work, but we have found no hint in all we've studied that there's anything happening between the molecules that doesn't happen to those same molecules in a test tube. The body is a fantastically complex machine, many orders of magnitude more complex than any we've built ourselves from blueprints, but it IS physical. It's a machine. If a body can be grown inside another body, and contain a consciousness, then it means it can be done. I just think we're very far from being able to do it.
 
Posted by Tatiana (Member # 6776) on :
 
What we have so far in computers, though, is fabulously useful, and I'm not knocking it, but I don't picture there being any hint of consciousness there.

I think the "Mike from Moon is a Harsh Mistress" model of consciousness is obviously not going to work either (i.e. when computers get big and complex enough they will just wake up on their own.) Three billion years of evolution have shaped us on every level (from the molecular level on up) to be suitable to live here. And still our bodies let us down regularly. Our minds let us down. My brain keeps forgetting stuff and it annoys me. We're going to have to understand all those levels, and be able to reproduce them ourselves. We're laughably far from being able to do that at the moment, and tasks that nobody thought would be hard for computers to do because they seem so easy to us have proven fiendishly difficult to implement.

People who want to maintain that machines can't possibly be conscious, because there can never be anyone home inside a machine regardless of how smart it acts, have a big logical problem: the same argument applies to other human beings as well. I can't be sure any of you really feel anything or experience life. The only experiences I have access to are my own. All I can see of you is your behavior, and that could just be some complex algorithm generating seemingly intelligent replies (or not so intelligent, in many cases [Wink] ). This view, that nobody else is actually conscious, is called solipsism. It's completely philosophically justifiable; it's just ridiculously stupid and operationally no fun.

In other words, there will come a day when machines act conscious, and if you choose to, you can claim they aren't really conscious, but it won't stick any more than solipsism does, because it makes nonsense out of life and everything that happens.

[ September 09, 2007, 01:14 PM: Message edited by: Tatiana ]
 
Posted by Tatiana (Member # 6776) on :
 
This gives rise to another important idea. The reason solipsism is untrue is that this would be a really boring universe if it were true. Honestly, there's no other evidence against it, and no logical proof of its impossibility. In fact, it's the only scientifically defensible viewpoint: I don't have access to your feelings, so Occam's razor says it's smarter for me to assume they don't really exist. To argue by analogy that we must be alike is to say that "you" and "me" are similar things, and at least the way I experience them, they totally aren't. (We're hard-wired to believe that other people have feelings, though, which is why we don't question it. Not because it's a scientifically supported idea.)

So that tells us a lot about what "truth" means. For instance, evolution is true because it makes SENSE out of the entire panoply of life on earth. When you look at zillions of tiny details they all make sense when you understand them in the context of evolution. When you try to understand them as just whims of the creator, it doesn't make any sense. It's because evolution explains so much that it is obviously true.

So apply that same idea to the universe as a whole, too. When something explains everything in great detail, when one simple framework makes sense of a lot of seemingly disparate things, you use that as your operational definition, and that's what we call "truth". That's why I believe in my religion, too. I came to the belief in a scientific way. I allow my own experiences as observations of reality. I extend science to cover not just things that are available for other people to observe and agree with me on, but things that I observe internally as well. I mean, external verifiable observations are great, but they are only one portion of life. Just as important is the rest of life, the life of the mind and heart and consciousness that isn't directly accessible to other people.

Given the whole of my internally-accessible and externally-accessible experience, the explanatory framework that makes the most sense to me is closer to the religion of the Church of Jesus Christ of Latter-day Saints than to any other theory of existence, philosophy, or metaphysics that I've found. That's why I can stand up with confidence on a Sunday and say I know this church is true. [Smile] Brigham Young said it: "Everything true is a part of our religion."

I'm a scientist, and I love science, but I think when people make their entire worldview the scientific one, they err. The reason is that science covers only one part of existence. It's a rich and beautiful part, and science is deliciously productive of new truths inside that part, but it's still quite a limited slice of total existence.

Because science is so great, it's tempting to make it into everything. That is, we assume for the purpose of science that only mutually verifiable observations are valid, and that does something wonderful: it puts us on firmer ground to produce new truths. But then to extend that and say there IS NO experience that's not mutually verifiable, to claim that the realm that science covers is ALL THERE IS, that is a mistake. That's making a metaphysical assumption, a leap that's unjustified.

So I guess because it's Sunday, and because this thread got me thinking all these thoughts, I will bear my testimony to you hatrackers, the only group I've been a member of that I could tell this stuff to, that would listen and perhaps understand. [Smile]

[ September 11, 2007, 08:57 AM: Message edited by: Tatiana ]
 
Posted by King of Men (Member # 6684) on :
 
quote:
Originally posted by Nighthawk:
[QB] The "connections" object is a pointer, and depending on the memory addressing model, that pointer will be a 32-bit or a 64-bit address. That's why 32-bit systems can only access 4Gb of memory (2^32, or 4,294,967,296 bytes).

I do know that, yes. Actually, 'connections' is an array of pointers, and will therefore take up 64 bits times the average number of connections. Let's think in bits from now on, then, as we apparently do not agree on definitions.
 
Posted by Mike (Member # 55) on :
 
Thanks, AK, that was a beautiful post.
 
Posted by Paul Goldner (Member # 1910) on :
 
Would a conscious human-manufactured machine be distinguishable from an organic being?
 
Posted by James Tiberius Kirk (Member # 2832) on :
 
quote:
Would a conscious human-manufactured machine be distinguishable from an organic being?
Depends. Is it combustible?

--j_k
 
Posted by NotMe (Member # 10470) on :
 
I think we already have a computer capable of simulating the thought processes of a human brain, if not the actual biochemistry. We just lack the right software, and it's impossible to predict how long it will take to write. A true AI would be by far the most complex piece of software ever written, and probably the most complex thing ever engineered by humans.

More to the point, we have no way to size up the task. At least with rocket science, we can easily compute the necessary thrust and other quantities. In the long run, we may simply have to resort to the only tried-and-true method for creating intelligent life: evolution. Genetic algorithms will be much less efficient, though, and take a lot longer.
 
Posted by pooka (Member # 5003) on :
 
I figure when computers are ready for AI, they will make it themselves.
 


Copyright © 2008 Hatrack River Enterprises Inc. All rights reserved.
Reproduction in whole or in part without permission is prohibited.

