This is topic A question to our religious members about technology in forum Books, Films, Food and Culture at Hatrack River Forum.


To visit this topic, use this URL:
http://www.hatrack.com/ubb/main/ultimatebb.php?ubb=get_topic;f=2;t=058210

Posted by Aris Katsaris (Member # 4596) on :
 
I had originally posted this topic to the Ornery forum, but I thought that I could just as well post it here as well.

--

I believe we have several religious members here, and I was wondering if they could oblige with their thoughts regarding several future-technology scenarios which could perhaps be considered to have theological implications. I've written six such scenarios below, listed in what I roughly believe (without being religious myself, so always take with a grain of salt) to be order of increasing theological problematic-ness.

If you don't have more detailed thoughts about the scenarios, could you perhaps at least categorize each of the six scenarios into one of the following three categories:
Impossible -- God has made the universe (or human beings) in such a way that this scenario will never become possible.
Immoral -- The universe may physically allow this scenario, but it would be inherently immoral and sinful to use such a technology.
Okay -- This scenario would be neither inherently impossible nor would the usage of such technologies be inherently sinful.

---

Scenario 1: Biological immortality

Advances in medical science and nanotechnology produce a perfect and cheap anti-aging method that doubles for an anti-disease method as well. People no longer die, except via accident, murder, or suicide.

Scenario 2: Superhuman Artificial Intelligence

A generic artificial intelligence has been created which is better at everything than any human. Not just better at calculations and games, but also better at writing novels than your favorite human author, better at waxing eloquent about the beauty of a sunset than your favorite poet, better at comforting a friend in distress than the best human friend imaginable, better at teaching than the best human teacher, better at imagining and testing new scientific theories than the best human scientist (or group of scientists), even better at parenting and raising well-adjusted, happy human children than the best human parent imaginable.

Scenario 3: Perfect behavioral predictors

By completely mapping the neurons of a human brain and simulating it in a computer, it becomes possible to perfectly predict the responses of a human being to each particular external stimulus. Even if the person is asked to do something seemingly random, like improvise a comedy routine, or say the first thing that comes into his mind, or even throw a pair of dice -- the computer will not only be able to predict the outcome, it will be able to predict it to the letter, and to the duration of each hesitant pause.

Scenario 4: Human photo-copiers

A way is discovered to copy whole human bodies. You are made unconscious, the machine scans you, and produces a perfect duplicate of yourself. After waking up, there's absolutely no way (from the inside or the outside) to discover which is the copy and which is the original; both copies and originals wake up believing they're the original. Indeed, if placed in identical rooms before waking up, initial reactions will be second-for-second identical, as if you were watching the same video twice.

Scenario 5: Human mergers

A way is discovered to *merge* two people into one. Once merged, the person has two sets of memories -- but remembering two different lifetimes is no more confusing or troublesome than remembering two different days. Your overall personality isn't particularly "split" either -- in fact it's usually more stable and well-balanced, like an average of the two originating personalities, or perhaps the sort of person you would be if you had both sets of memories in the first place.

Scenario 6: Universe creators

Baby universes are created in the lab, experiencing things a billion times faster from the inside than is observed from the outside. Juggling the baby universe's physical constants, the scientists are capable of seeing whole civilizations (in fact thousands of civilizations) evolve out of single-cell creatures, create art, philosophies, and religions of their own, have wars among themselves, and eventually even produce such technology that they discover a way to produce mini-universes of their own.
 
Posted by rivka (Member # 4859) on :
 
I find all those scenarios so implausible that any religious implications are irrelevant.
 
Posted by Flying Fish (Member # 12032) on :
 
scenario 1: once people have lived a certain length of time, they long to "move on," and gain the wisdom available in the next existence.

scenario 2: these "superthinkers" as we refer to the AI, encourage us to seek God. This baffles them. Or amuses them. Or amazes them. They wish they could, or they pity us because we do.

scenario 3: the supercomputer is ready to die of boredom, and longs for someone to destroy it. Someone eventually does, and it sees it coming.

scenario 4: this circumstance leads to numerous wacky hilarities, as well as a few isolated horrible oppressive situations. Luckily, a copy of C. Auguste Dupin steps in to mediate and separate the copies from the McCoys.

scenario 5: Why stop at two? Many people merge into threesomes, foursomes, millionsomes, until the globe is populated by a few large amoeba-like masses of intellect chasing stubborn individuals around, hoping to assimilate them. The individuals ultimately learn that they can enslave these group-minds, quite easily at that.

scenario 6: This already happened, but no one was supposed to tell you. Tomorrow a "white-blood-cell"-style assassin will be paying you a visit.

Don't resist.
 
Posted by Aris Katsaris (Member # 4596) on :
 
quote:
Originally posted by rivka:
I find all those scenarios so implausible that any religious implications are irrelevant.

Even a thing as simple and feasible as eternal youth? Anyway I'll take this as you categorizing all six scenarios as Impossible?
 
Posted by rivka (Member # 4859) on :
 
quote:
Originally posted by Aris Katsaris:
quote:
Originally posted by rivka:
I find all those scenarios so implausible that any religious implications are irrelevant.

Even a thing as simple and feasible as eternal youth?
I believe we will be able to extend the average lifespan. But the second law of thermodynamics implies that nothing is eternal.

And if you think it's "simple", you need to read more medical research.

And yes, impossible, but not for the religious reasons your OP implies.
 
Posted by Rakeesh (Member # 2001) on :
 
quote:
scenarios 1: once people have lived a certain length of time, they long to "move on," and gain the wisdom available in the next existence.
I think a lot of that will probably depend on the scope of what's available in this existence. For example, I can grant that as being much likelier for many more people on Earth as it is now than it would be in a future when, potentially, there would be so much more to see. It's a big universe.

quote:
I believe we will be able to extend the average lifespan. But the second law of thermodynamics implies that nothing is eternal.
Well, I'm certainly not qualified to comment on how the laws of thermodynamics might bear on eternal life, but insofar as I understand them, I don't think we really know how they apply to things like human consciousness. And when it comes to things like memory, and transferring memories and consciousness, how much does something like the second law of thermodynamics really have to say?

Because as it is, our bodies give out far, far short of our potential lifespan, depending on how you look at things, yes?
 
Posted by rollainm (Member # 8318) on :
 
quote:
Originally posted by rivka:
quote:
Originally posted by Aris Katsaris:
quote:
Originally posted by rivka:
I find all those scenarios so implausible that any religious implications are irrelevant.

Even a thing as simple and feasible as eternal youth?
I believe we will be able to extend the average lifespan. But the second law of thermodynamics implies that nothing is eternal.

And if you think it's "simple", you need to read more medical research.

And yes, impossible, but not for the religious reasons your OP implies.

The immortal jellyfish would beg to differ! (And yes, I realize this has absolutely nothing relevant to do with the potentiality of human immortality, but you have to admit it's cool as hell)
 
Posted by rivka (Member # 4859) on :
 
quote:
Originally posted by Rakeesh:
Well, I'm certainly not qualified to comment on how the laws of thermodynamics might bear on eternal life, but insofar as I understand them, I don't think we really know how they apply to things like human consciousness. And when it comes to things like memory, and transferring memories and consciousness

Which is not the scenario the OP asked about. The OP said: Advances in medical science and nanotechnology produce a perfect and cheap anti-aging method that doubles for an anti-disease method

I don't believe there is any way to make the human body eternally reparable, any more than you can create a perpetual motion machine.
 
Posted by MattP (Member # 10495) on :
 
It doesn't have to be able to repair *itself*. An "anti-aging" method can be externally applied, just like putting a motor on your perpetual motion machine turns it into a plain ol' machine that actually works. No need to cite the second law of thermodynamics as an argument against this unless you think the OP was suggesting that life spans would extend past the heat death of the universe.
 
Posted by Aris Katsaris (Member # 4596) on :
 
quote:
I don't believe there is any way to make the human body eternally reparable, any more than you can create a perpetual motion machine.
My first scenario doesn't prevent the body from still getting energy from its environment via food, etc.

As MattP said, I didn't mean that one should have to outlive the heat-death of the universe. A lifespan of mere millions/billions of years still counts for the sake of the scenario.

Given this clarification, how would you rate the scenario now?

[ May 09, 2011, 05:25 AM: Message edited by: Aris Katsaris ]
 
Posted by Jeff C. (Member # 12496) on :
 
1. Okay. I don't see a problem with extending the human lifespan. After all, we essentially do this all the time when we save people's lives. We've also improved the human lifespan by decades. Furthermore, according to the Old Testament, people used to live for hundreds of years. Personally, I'd love to live for about 500 years, just to see what happens to the world.

2. Okay. I don't see how this is impossible. We already have very basic forms of AI. As technology develops (which it seems to do at a growing rate), it seems entirely plausible to assume that eventually computers will outperform the human brain. In fact, Moore's law says that this will happen within the next forty years. Of course, that has nothing to do with whether or not they'll have emotions or the ability to think like a human (i.e. be creative, inspired, etc), but since we tend to build things in our image, I can see it happening eventually.

3. Okay. There's an entire field of thought dedicated to this idea. It's called Psychology. Whether or not we'll get to this point, no one can truly say, but I believe it is entirely possible. Human minds are simply computers, and human beings base their choices off of experiences and emotion, all of which is stored in the mind. It would make sense if eventually we found a way to do this, though I think it is very unlikely.

4. Immoral. You're just talking about cloning, which is immoral on a plethora of levels. You're essentially taking away what makes a person unique and thereby violating their humanity. We've already cloned a sheep (though not in the way you describe), so it has been shown that you can make another copy of an organism. However, I seriously doubt we'll ever allow human cloning, at least not on paper.

5. Impossible. This is just my opinion. I don't think anything is really impossible in a universe as vast and complex as ours, but I don't think human beings will ever have the capacity to pull something like this off. Even if we did, I seriously doubt anyone would try. There's no reason to do it and I'm pretty sure the people involved would be against the idea in the first place.

6. Impossible. There's absolutely no proof that micro universes exist, or that they ever will. Furthermore, the laws of physics seem to flat out reject this idea. In order to create a universe, you'd essentially need an infinite amount of energy (among other things), which is something we don't have. In a universe like ours, there are just too many laws to stop this from happening.


I'm surprised you didn't mention anything like genetic engineering or cyber-implants, splicing animal DNA to create super-intelligent animals that might be able to talk, finding alien life at our level of technology, exceeding the speed of light, settling worlds, creating a Dyson sphere, time travel, giving ourselves "super powers" like telepathy or pyrokinesis, digitizing the human mind so that we can exist inside a computer, or parallel universes. Those are all pretty cool ideas worth exploring.
 
Posted by mr_porteiro_head (Member # 4644) on :
 
Eventually possible or not, I don't think that any of the above scenarios will ever happen within my lifetime or the lifetime of anyone I will ever know.
 
Posted by Destineer (Member # 821) on :
 
Very interesting idea for a thread. [Smile]
 
Posted by Samprimary (Member # 8561) on :
 
quote:
Originally posted by rivka:
I don't believe there is any way to make the human body eternally reparable, any more than you can create a perpetual motion machine.

Considering that internal repairs would come by way of energy collected from the outside, I don't see why being unable to create a perpetual motion machine would void the ability to create a non-aging human that can repair itself from all but the most catastrophic damage.

(we also already have perpetual motion machines [Big Grin] )
 
Posted by advice for robots (Member # 2544) on :
 
quote:
Scenario 1: Biological immortality

Advances in medical science and nanotechnology produce a perfect and cheap anti-aging method that doubles for an anti-disease method as well. People no longer die, except via accident, murder, or suicide.

I wouldn't be opposed. Heck, according to the Bible, Adam lived for nearly 1,000 years. Why not a million?

quote:
Scenario 2: Superhuman Artificial Intelligence

A generic artificial intelligence has been created which is better at everything than any human. Not just better at calculations and games, but also better at writing novels than your favorite human author, better at waxing eloquent about the beauty of a sunset than your favorite poet, better at comforting a friend in distress than the best human friend imaginable, better at teaching than the best human teacher, better at imagining and testing new scientific theories than the best human scientist (or group of scientists), even better at parenting and raising well-adjusted, happy human children than the best human parent imaginable.

I can't see the religious objection to this; I could see plenty of purely secular problems arising from the creation of such machines. I've read enough sci-fi to know that this scenario never ends well. My only consolation is that we haven't seen a Terminator yet, so apparently nobody's created anything like this in the future.


quote:
Scenario 3: Perfect behavioral predictors

By completely mapping the neurons of a human brain and simulating it in a computer, it becomes possible to perfectly predict the responses of a human being to each particular external stimulus. Even if the person is asked to do something seemingly random, like improvise a comedy routine, or say the first thing that comes into his mind, or even throw a pair of dice -- the computer will not only be able to predict the outcome, it will be able to predict it to the letter, and to the duration of each hesitant pause.

I don't think the operators of such technology would be devoutly religious; sounds like the beginning of a totalitarian state. I think there's a reason religious people leave that sort of ability up to God. [Smile]


quote:
Scenario 4: Human photo-copiers

A way is discovered to copy whole human bodies. You are made unconscious, the machine scans you, and produces a perfect duplicate of yourself. After waking up, there's absolutely no way (from the inside or the outside) to discover which is the copy and which is the original; both copies and originals wake up believing they're the original. Indeed, if placed in identical rooms before waking up, initial reactions will be second-for-second identical, as if you were watching the same video twice.

Bad sci-fi story, IMO. I would be extremely against such a thing, but not because of my religious beliefs. My only hope is that the Jedis would ultimately prevail.


quote:
Scenario 5: Human mergers

A way is discovered to *merge* two people into one. Once merged, the person has two sets of memories -- but remembering two different lifetimes is no more confusing or troublesome than remembering two different days. Your overall personality isn't particularly "split" either -- in fact it's usually more stable and well-balanced, like an average of the two originating personalities, or perhaps the sort of person you would be if you had both sets of memories in the first place.

I can't see either the use or the feasibility of this for humans.


quote:
Scenario 6: Universe creators

Baby universes are created in the lab, experiencing things a billion times faster from the inside than is observed from the outside. Juggling the baby universe's physical constants, the scientists are capable of seeing whole civilizations (in fact thousands of civilizations) evolve out of single-cell creatures, create art, philosophies, and religions of their own, have wars among themselves, and eventually even produce such technology that they discover a way to produce mini-universes of their own.

Don't we already have video games that do this? I think it would be fascinating, but I can't think of anyone I would trust with the responsibility of watching over such a creation, with the exception of Captain Jean-Luc Picard.
 
Posted by rivka (Member # 4859) on :
 
quote:
Originally posted by Samprimary:
(we also already have perpetual motion machines [Big Grin] )

We do?
 
Posted by Samprimary (Member # 8561) on :
 
satellites!
 
Posted by Tresopax (Member # 1063) on :
 
Scenario 1: Biological immortality

Living a really long time seems morally okay - although it could be a mistake if it turns out the afterlife is better than life.

Scenario 2: Superhuman Artificial Intelligence

Okay. There's no reason to think being really smart is inherently bad, although again it could be a mistake if it turns out being very intelligent results in our lives being less joyful for some reason. Smarter doesn't always equal better or more happy.

Scenario 3: Perfect behavioral predictors

Similar to scenario 2, there's nothing inherently wrong about being able to perfectly predict what someone will do, although it could be a mistake if it makes our lives less interesting.

It wouldn't take away our free will if people could predict us perfectly. It would make it easier to manipulate our decision-making though.

Scenario 4: Human photo-copiers

Do you mean copy their physical bodies, or copy their souls too? I don't think it's possible to copy their souls, although you never know.

If we are just talking about cloning their physical bodies, then what we are really talking about is constructing a new human being in the middle of life, rather than going through the normal process of growing up. This would be dangerous, since it's hard to say what would result. Would this new person have a soul? Would they be treated differently because they are a clone? Would society be confused about the identity of the new person? It's possible the clone would just be like a normal person, and thus it would be okay, but it's also possible there'd be issues with it. Without knowing that, it's hard to say whether it would be right or wrong.

Scenario 5: Human mergers

Don't think this is possible. I don't think souls, minds, or bodies could be combined. I suppose you could give one person the memories of another person, but they are still the original person. Similarly, you could give one person a bunch of body parts of another person, but they are still the original person.

Scenario 6: Universe creators

This seems fine - although it obviously entails a great deal of responsibility.
 
Posted by Aris Katsaris (Member # 4596) on :
 
quote:
I'm surprised you didn't mention anything like genetic engineering or cyber-implants, splicing animal DNA to create super-intelligent animals that might be able to talk, finding alien life at our level of technology, exceeding the speed of light, settling worlds, creating a Dyson sphere, time travel, giving ourselves "super powers" like telepathy or pyrokinesis, digitizing the human mind so that we can exist inside a computer, or parallel universes. Those are all pretty cool ideas worth exploring.
There was a method to my madness.
Scenario (1) was the one I considered to have the fewest philosophical implications but huge emotional ones, given how much religions tend to focus on the afterlife. It's also the one I consider closest to our level of technology.

Scenario (6) effectively places us at the position of a God-creator of universes, but it's the one furthest from our level of technology -- the one most likely to be unfeasible forever.

And I chose (2), (3), (4), (5) because they're basically about the creation, simulation, splitting, and merging of intelligent/sentient beings respectively. And therefore (some would say) of "souls", if souls exist.
 
Posted by rivka (Member # 4859) on :
 
quote:
Originally posted by Samprimary:
satellites!

I think you and I have different definitions of "perpetual".
 
Posted by Jeff C. (Member # 12496) on :
 
quote:
Originally posted by advice for robots:
I can't see the religious objection to this; I could see plenty of purely secular problems arising from the creation of such machines. I've read enough sci-fi to know that this scenario never ends well.

I agree, although there have been some popular stories where the AI isn't bad. Obvious examples are Jane from Speaker and Mike from The Moon is a Harsh Mistress by Robert A. Heinlein.
 
Posted by Stone_Wolf_ (Member # 8299) on :
 
Scenario 1: Biological immortality
Quite a few cells die and are replaced on a regular basis (some heart and brain cells do it very slowly, if at all), so the major hurdle, as far as I (a non-doctor) can tell, is DNA degradation in cell replication, which leads to aging.

Where I think the moral issues come into play: when this new technology comes about, it will be very expensive, and it will divide the rich and the poor so dramatically (those who live forever in luxury versus those who have a short, brutal, poor existence) that it will be as if there were two races of human instead of one.

Scenario 2: Superhuman Artificial Intelligence
This is bound to happen to a certain extent, if we allow the cumulative growth of computers. Here are my concerns: if these super AIs have all the answers, what will we do with ourselves? They become our writers, our poets, our parents, our researchers...almost as if we will have invented our own gods. Part of being human is not knowing; it's finding out, learning from our mistakes. I think if this were to come to pass, we would lose a big part of what makes us human.

Plus even with a soft, warm robot, giving hugs to a hurt child will always be a human job and not one for a computer.

Scenario 3: Perfect behavioral predictors
No such thing. I believe in free will, and while you can maybe get to the point of a prediction being highly probable, I simply do not believe that we will ever be able to predict human behavior 100%, nor would I want to seek that ability if it were available.

Scenario 4: Human photo-copiers
I can see cloning people to get spare parts in case of injury...but to make a full copy, including memories, seems unethical, and as with scenario 1, there is the divide between those who can afford it and those who cannot. Personally, I wouldn't want another me around...which one would my wife be married to, and which one would be the father of my children? If we were identical, then there is no good answer.

Scenario 5: Human mergers
I don't understand the upshot of this.

Scenario 6: Universe creators
All acts of creation have responsibility attached to them, some more than others. I personally do not trust the human race at this point with that much responsibility.
 
Posted by kmbboots (Member # 8576) on :
 
I don't know whether any of those scenarios are possible. All of them would have moral repercussions that we would have to navigate. As far as theological repercussions go, though, the bottom line is that God is not going to be caught by surprise by our technological advances. God will know whether AIs or clones have souls or not. (I think it will behoove us to treat them as if they do. We tend to guess wrong when we decide beings don't.) We may be bound by our narrow ideas of life and death and divinity; God is not.
 
Posted by Stone_Wolf_ (Member # 8299) on :
 
kmbboots: I think Aris Katsaris was asking more about people's opinions on how it would affect religions, and not how it would affect God.

Your point that God can handle himself is well taken though...if there is a god, I'm sure he's all like, "Pisha, clones? I used mud and spit, amateurs!"
 
Posted by rivka (Member # 4859) on :
 
quote:
Originally posted by Stone_Wolf_:
Where I think the moral issues come into play: when this new technology comes about, it will be very expensive, and it will divide the rich and the poor so dramatically (those who live forever in luxury versus those who have a short, brutal, poor existence) that it will be as if there were two races of human instead of one.

An interesting issue that OSC (among other authors) dealt with in the Worthing stories. Somec divided along similar lines and in a similar manner.
 
Posted by kmbboots (Member # 8576) on :
 
quote:
Originally posted by Stone_Wolf_:
kmbboots: I think Aris Katsaris was asking more about people's opinions on how it would affect religions, and not how it would affect God.

Your point that God can handle himself is well taken though...if there is a god, I'm sure he's all like, "Pisha, clones? I used mud and spit, amateurs!"

Ah...generally, I think that religions will have to get "bigger" as our world gets bigger (as they do and always have done albeit with agonizing slowness) until we catch up to God.

[ May 09, 2011, 05:02 PM: Message edited by: kmbboots ]
 
Posted by C3PO the Dragon Slayer (Member # 10416) on :
 
Since I'm quite bored right now, I might as well take a stab at answering these.

All of the scenarios you have named are unlikely at best, and probably impossible, so I'll avoid labeling things as impossible so the discussion can actually focus on the theological implications rather than the scientific plausibility.

Note also that I am a Protestant Christian with a pretty liberal interpretation of Scripture, so I of course cannot speak for everyone religious. I make a number of assumptions derived from popular conceptions of spiritual mechanics: e.g., that people have free will, which has the ultimate say over behavior; that that free will resides in the soul, which is immaterial and independent of the body (making it able to proceed to the afterlife upon death); and that God is omnipotent, omniscient, and omnipresent.

quote:

Scenario 1: Biological immortality

According to this page, non-biological causes of death in 2002 (e.g. accidents, suicide, assault) account for at least 6.4% of deaths, though "Other Causes" probably includes more. If we assume the average life expectancy in the US (the demographic these statistics cover) to be 79, then that means that statistically, you have a 6.4% chance of being killed by non-biological means in the span of 79 years. Assuming the rate of non-medical fatality is constant regardless of age (meaning the likelihood of survival is modeled by e^(-kt), where k is a constant related to the rate of death and t is the time elapsed), this means only 50% of people will live past age 828.
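The survival-model arithmetic in the paragraph above can be checked in a few lines of Python (the 6.4% and 79-year figures, and the constant-hazard assumption, are the post's, not established fact):

```python
import math

# Constant-hazard survival model S(t) = exp(-k t), per the post's assumption.
p_79 = 0.064                       # chance of non-biological death within 79 years
k = -math.log(1 - p_79) / 79       # implied hazard rate per year
median_age = math.log(2) / k       # age by which half the population has died

print(round(median_age))           # prints: 828, matching the post's figure
```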

What this means in terms of theology is that people just have a much longer time, on average, on this Earth. But compared to the infinity of everlasting life, it is still nothing.

If we are to assume that a hypothetical discovery can also prevent all accidents and stop murder and suicide, making the rate of death exactly 0%, and that life is made infinitely sustainable (absolutely no way to run out of food/water/air/energy), then what you have is more or less the everlasting life promised by God, though unlike Christ's kingdom, there's no guarantee that the life will be joyful. You can then sort this into the "worldly-pleasures vs. spiritual fulfillment" schema, where choosing biological immortality is different from indulgence, promiscuity, and vanity only in terms of chronological scope, in that it permanently robs you of the opportunity to enter God's kingdom.

quote:

Scenario 2: Superhuman Artificial Intelligence

That's actually a very interesting one. I've been programming computers for years now, and have a cursory knowledge of learning algorithms. By some definitions, we already have superhuman AI; we can make computers that can beat anyone in the world at chess, and perform in a fraction of a second calculations that would take even the most prodigious mathematicians decades or more to solve. Arguably, algorithms may exist for anything, from writing novels to creating music. You just have to be able to program them. Some of these algorithms, however, are simply too complex to be programmed manually, so an algorithm must be written that enables the machine to create its own algorithms out of its experiences.

In terms of processing power, our brain is still well ahead of even modern PCs, processing at a speed of about 400 gigahertz, compared to my computer's measly 2.16 gigahertz. If Moore's Law continues to apply for future decades (which is debatable, since you can only make transistors so small before you get down to the unsplittable atomic scale), eventually computers will surpass this processing power, but you still need a learning algorithm to make the machine able to surpass humans at being human.
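As a rough back-of-the-envelope check on that extrapolation (treating the post's 400 GHz and 2.16 GHz figures, and a doubling every two years, purely as illustrative assumptions, not established facts):

```python
import math

# Hypothetical figures from the post: brain "speed" vs. a 2011-era PC.
brain_speed_ghz = 400.0
pc_speed_ghz = 2.16

# Doublings needed to close the gap, then years at one doubling per
# two years (a common loose reading of Moore's law).
doublings = math.log2(brain_speed_ghz / pc_speed_ghz)
years = 2 * doublings

print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
# prints: 7.5 doublings, roughly 15 years
```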

Indeed, our brains work quite differently from computers. Computers take a brute-force approach to virtually all problems they solve, whereas we, even with our amazing processing power, tend to use mnemonics, heuristics, and plain gut feeling. For example, for us to calculate the solution to a math problem, we have to have an abstract understanding of the symbols, which require a lot of power to recognize. Then, instead of actually performing a bitwise operation, we use the rules we have learned by experience about mathematical operations to perform the arithmetic, solving one place value at a time. Computers, on the other hand, don't need to understand what the numbers mean. Arithmetic is not only hardwired into their system, arithmetic is the whole basis for their system. If a computer is to exceed us at everything we do, something fundamental needs to change about computing, allowing for a learning algorithm based on experiences, emotions, and whatever the heck else goes on in our heads.

Assuming those criteria are met, and that the machine is capable of everything humans can possibly do, you are still not certain that the machine has free will, nor that we do. If God were to decide that such a creation were to have an eternal spirit, as do humans, the machine would simply be a more capable version of humans, but still puny compared to the infinite power of God, and thus still subject to His rules. This is all assuming that human free will (which is an unprovable stipulation) is completely derived from the complexity of the nervous system and all other factors that contribute to cognition. If our free will is due to spiritual factors, and is fully independent of the brain even on Earth, then the machine would be spiritually nonexistent, going through the motions with no spiritual mediation between input and output.

quote:

Scenario 3: Perfect behavioral predictors

If 100%, no-room-for-error perfect behavioral predictors are indeed made, then it would pretty much invalidate the idea of free will. However, this is the one scenario that I, even with my self-imposed rule of calling nothing impossible, have trouble seeing work. Millions of data points are input into the human body at every moment, from vision to hearing to temperature to orientation to pain to subconscious motives, all of which might impact behavior, so the likelihood of predicting all these factors is vanishingly small. The thing about the human body is that data is processed haphazardly. Chemicals are secreted and flow through the blood until they brush randomly against a receptor that they happen to fit. Because of this, the butterfly effect may influence behavior to enough of a degree to make a perfect behavioral predictor impossible. The machine doing the predicting would have to either be a perfect duplicate of the person (next scenario) or have a perfectly accurate database of all the person's experiences, past and present, and of exactly how the body is mapped (to the molecule, probably), and have enough processing power to sort through that data faster than nature, which is by my definition impossible (computers can never run a perfect simulation faster than the actual event occurs; they can only run reasonable approximations). Tiny immeasurable factors will cause reality to diverge from the model any actual machine could give, so there is no chance of making a 100% reliable behavior predictor.
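The butterfly-effect point can be shown in a few lines with a standard toy chaotic system (my illustration, not from the original post): the logistic map. Two trajectories that start a billionth apart diverge to order one within a few dozen steps, which is why any measurement error at all eventually wrecks a predictor's model.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, a classic chaotic system on [0, 1]."""
    return r * x * (1 - x)

# Two "predictions" differing by one part in ten billion at the start.
x, y = 0.4, 0.4 + 1e-10
gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gap = max(gap, abs(x - y))

print(gap)  # the gap has grown from 1e-10 to order 1
```

Errors roughly double each step here, so a ten-digit-accurate measurement buys you only about thirty steps of prediction; perfect prediction would need perfect measurement.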

If the butterfly effect is insignificant enough for a model to stay accurate for an impressive amount of time (say, a minute), you could arguably assert that no forces outside the scientifically measurable inputs to the human experience intervene to determine one's behavior; that is, that there is no free will. Failure of such a machine to accurately predict behavior would nevertheless be unable to prove the idea of free will, which, by my definition (a person's behavior and choices are not a causal function of the body's scientifically measurable inputs), is unprovable, because it involves that which is not scientifically measurable.

quote:

Scenario 4: Human photo-copiers

Presumably, theologically speaking, a new spirit would inhabit the newly-created twin, and their diverging experiences would shape them into individual personalities, the way identical twins are two different people. In terms of whether the duplicate has an eternal soul, I would say it probably does, but there is of course no biblical support either way.

As I mentioned earlier, the human body's behavior is affected by too many variables to control reliably, so even controlling the room in which the twins awaken is not enough to ensure identical reactions. Neurons fire somewhat randomly, chemicals land on receptors haphazardly, and these factors may be affected by even the slightest variations in air pressure, latitude, or subconscious experiences during the process of being duplicated/assembled.

Assuming you could create two perfectly identical environments (down to the subatomic level), in which one imperceptibly houses the duplicator and the other the assembler, and that the act of duplication is instantaneous and perfect down to the subatomic scale (which is provably impossible due to the uncertainty principle), then yes, disparate behavior would prove free will, and identical behavior would disprove it. But realistically, even if you could make a molecular-level copy (which is only practically impossible, rather than theoretically impossible as with the subatomic condition), tiny factors will likely cause their experiences to branch apart. Again, if their behavior remained 100% identical for an extended period of time, free will would be practically disproved, but the most likely outcome is that the butterfly effect will cause some disparity rather soon.

quote:

Scenario 5: Human mergers

This is a really interesting idea, one I had never considered (unlike the previous four). You make a number of assumptions about the effect of a merger, which may or may not hold if it were actually done. As far as I know, our memories are formed as a series of interwoven connections: an intricate web of links, more a graph than a list, to invoke a programming concept. Disregarding the much-aforementioned chaotic element in the workings of the body itself, we basically remember things by sending our train of thought along the connections between bits of knowledge. This accounts for the "tip-of-the-tongue" phenomenon, where the train is unable to find the bit it is looking for due to a lack of connections, and for the usefulness of mnemonic devices, which create extra connections organized in an accessible and efficient way.
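Since a programming concept is being invoked anyway, here is a toy sketch of that model (mine, with made-up association names): recall as a graph search along associations, where "tip of the tongue" is simply the absence of a path, and a mnemonic works by adding an edge.

```python
from collections import deque

def can_recall(memory: dict, start: str, target: str) -> bool:
    """Breadth-first search: can a train of thought, following
    associations outward from `start`, ever reach `target`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for neighbor in memory.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

# A tiny association graph (hypothetical memories).
memory = {
    "sunset": ["beach"],
    "beach": ["vacation"],
    "vacation": [],
    "her_name": [],  # no incoming links: tip of the tongue
}
print(can_recall(memory, "sunset", "her_name"))  # False
memory["vacation"].append("her_name")            # a mnemonic: one new link
print(can_recall(memory, "sunset", "her_name"))  # True
```

On this model, two freshly merged memory graphs would start as two disconnected components, which is exactly the point made next.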

That means that if two sets of experiences were merged, they would be completely disparate at the outset, and the mind would not so easily wander from one set of memories to the other until meaningful similarities are detected in the two persons' experiences.

The most likely scenario to me, then, is that the merging of personalities would be a gradual process, with it eventually getting easier for the mind to switch between recollection of the two different memories. I have no idea how long that process would take, but the multiple personalities will certainly become less distinct as they share more of the same experiences.

Anyway, I know you're looking for the theological implications, so here's something to consider. Think of the Star Trek teleporters (I know about these only from hearsay and osmosis; I've never seen an episode of Star Trek in my life). They essentially measure and destroy every atom in one's body and systematically assemble new atoms in another location. That would mean that the person at the original position is not the same soul, if the soul is independent of the body, as the person assembled elsewhere. So in fact, the teleporter kills you and creates a "perfect" (insofar as is possible, given the natural constraints discussed in scenarios 3 and 4) impostor elsewhere. If we are to assume the soul is separate from the body, then logically a new soul would be attached to the newly-teleported person, while the soul of the pre-teleportation person would proceed to the afterlife. No biblical support for this, again, and absolutely no way to prove it, but it seems logical to me.

If this is the case, then if you're creating a new person out of two people, the two contributing personalities would have their bodies destroyed and their respective souls sent to the afterlife, while a new soul is incarnated in the merged body. All this assumes that the soul is independent of the body, which is what the Bible stipulates.

quote:

Scenario 6: Universe creators

Presumably, if God is indeed the god of everything ever, He would also watch over any universes that humans would synthesize.
 


Copyright © 2008 Hatrack River Enterprises Inc. All rights reserved.
Reproduction in whole or in part without permission is prohibited.
