OSC's was not the first opinion I heard expressed on the topic (although his is expressed very well, as can be read in the Library section of this web site, under short stories, ATLANTIS). I first heard this theory when I was attending university. The professor expressed his views well, and with pretty solid evidence.
But the point of my post is this: where do ideas come from?
When I was in High School I wrote a short story about a guy who traveled through time and ended up in other people's lives and had to try to solve their problems. Sounds like Quantum Leap, huh? Well-- Quantum Leap didn't exist yet. My idea came out of my head.
There are regular lawsuits of people suing writers, movie studios, and record companies for stealing their ideas.
Are ideas original? Or do they exist as an independent entity just waiting for someone to find them?
I guess the point is, if I had an idea and you had an idea at the same time and both of us wrote a story based on that idea, the stories would turn out very differently because we are different writers.
Even if I gave you my idea -- the stories would still be different.
Discuss?
The PI Man
But this isn't what the lawsuits are about; it's about copying more than just the idea, it's copying the interpretation, too.
What I mean by that is this: an idea, since it can be taken in many different directions, is turned into a story when it is interpreted by the Writer, and made into something that has a meaning (whether deep or shallow, it doesn't matter: all surviving stories have some kind of meaning).
Back to the main topic though, ideas come from living. It's the corny old line about Writing what you know about. Well, you can only really Write about what you've experienced, and you can only Write well about what you know well. This applies to the idea-finding process easily enough: you find something in your own life that relates to the characters that you're describing and Writing.
There are almost no "new" ideas or "fresh" ideas. If there are more, I at least haven't found them. But the way that these stories, these used ideas, are done, with one slant or another, is what makes it important. The interpretation of an event is what is most important. There are countless ways to Write one scene. For instance, a man watching a bird fly high above the concrete and steel skyscrapers of a city (Toronto, for instance--my hometown) can be Written from many viewpoints. Third-, second- or first-person views are the obvious first choices, but there are many more branches from that point. From third-person, one can choose the omniscience of the narrator. From second- and first-person views, one can choose the character who does this viewing (for instance the bird or the man or even someone totally different!).
My point is this: there are no original stories. Only wonderfully brilliant remakes. This might only sound like my dreary self, but it's true.
While this is true, I don't hold much with it as a useful distinction when we talk about ideas. So I will just say that it is true at a level that I am not concerned with at this time.
Where do ideas come from? That was a question. Not an original one, but then, it need not be. I like the question itself.
My ideas come out of the blue, whenever I think of a question. Ask me a question, and I have an idea about it. If you learn to listen, you will see that these ideas are always there, coming to our minds in every circumstance. Like fireflies in the twilight, they blink out at us in an unexpected pattern, unpredictable.
So the next question is, what is out there in the blue? Of course, I have an idea or two, but perhaps you may already know them.
In any case, my ideas simply come out from culture, it seems. I know a lot of people get ideas out of the blue, I'm just not one of them (or else I think I would have a few more short stories than I do now). Then again, it's not like I sit locked in my room forcing myself to put fingers to keyboard, either, so...<shrug> I don't know how I get my ideas, except that they come from the media in one way or another.
And forgive me for having been so cold in many of my previous posts. I don't mean to seem stuck-up, so once again, I apologise. It's some of my characters coming out to possess me, I swear!
Oy...I think it's time to visit my shrink again....
That is the exact thing that happened to me when I first saw Quantum Leap. I was speechless.
Einer, I think that you bring up some good points, but let's look at it this way. Suppose I decide to write a story about a guy looking up at a bird in the Toronto sky, and you decide to as well -- you sitting in Brampton, me in Salt Lake City.
We both decide to write from the bird's point of view. We both decide to have the bird viciously attack the man. We both decide that the man represents evil and the bird good. And we both end the story in the same fashion. Or basically, we interpret the concept in the same way. What then? Is one of us stealing from the other? Is one of us plagiarizing the other's work, having never seen it?
I get the impression you are currently taking a writing class. I took many of these during my university days (most a waste of time -- trying to convince students that style is more important than substance -- but that's another topic altogether). In some of those classes we tossed around ideas for stories, or we were to write a story based on a painting, a photograph, a snippet of some other writing, or an experience shared with our class by someone.
How many of these stories do you think came out very similar? A few were nearly identical. Yet none of us plagiarized each other.
For a professional's perspective, visit OSC's letter found on this website in his library section, under articles and essays, at: http://www.hatrack.com/osc/articles/openletter.shtml
Let's keep up the discussion.
The PIMan
Okay, seriously, we all probably know of the phenomenon of the "prepared mind." The guy who figured out benzene rings goes to bed and has a dream of a snake eating its own tail, wakes up and realizes carbon atoms can form a ring, right?
Or, we have the mundane, almost to the point of being trite, examples of how humans tend to think in dichotomies -- some theorize that we separate the world into "this or that" because we have two hands, two eyes, etc. On the other hand, our most primitive senses have a great influence on our mood, and can trigger both thoughts and emotions. Smells go directly to the brain stem, for example, then get processed in the cortex, unlike our senses for vision, touch and sound.
What's the point? The point is ideas and thoughts are what are called an epi-phenomenon of the brain. The "mind," like the "soul" is a construct. We use it to explain how the thing works, even though we don't have more than a sketchy understanding of how the brain pulls it off.
I think the real question here is "what are the constraints on human ideas?" Are there unthinkable ideas? Are there thoughts so foreign that we can't even conceive of them, let alone think them and write about them? This would seem to argue that even in our most altered states, we are still constrained in what bizarre ideas we can produce.
I think that's true. I think our ideas are limited by our evolution and our experience. We are bipeds. We have two hands with opposable thumbs. We see in the "visible" range of the electromagnetic spectrum. We hear sounds between 20 and 20,000 Hertz. And so on. That's how we are "built."
Layered on top of that we have language and culture that arises within the constraints of our own evolution. We think in our native language. We tend to dichotomize. We have trouble with anything over 3 dimensions, and beyond 4 dimensions, we're hopeless. Strangely, we are mathematical prodigies, but make bizarre and revealing types of errors (especially when estimating probabilities -- hence we tend towards superstitious beliefs).
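On the probability point, a classic illustration of how badly our built-in estimates go wrong is the birthday problem: most people guess you need a hundred or more people in a room before two of them likely share a birthday, when in fact 23 is enough. A quick simulation can confirm this (a minimal Python sketch of my own, assuming a uniform 365-day year; the function name is made up for illustration):

```python
import random

def birthday_collision_rate(group_size, trials=20000, seed=1):
    """Estimate the chance that at least two people in a group
    of the given size share a birthday (uniform 365-day year)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        birthdays = [rng.randrange(365) for _ in range(group_size)]
        if len(set(birthdays)) < group_size:  # a duplicate occurred
            hits += 1
    return hits / trials

# 23 people already gives better-than-even odds (the true value is
# about 0.507), which most of us find very hard to believe.
print(birthday_collision_rate(23))
```

Our intuitions insist the answer should be far lower, which is exactly the kind of bizarre, revealing error described above.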
So, where do I get my ideas? I take what already exists somewhere and put it in a new place. There are plenty of polymorphic animals, just none that are intelligent (at least on Earth). There's lots of cool instinctual behaviors in the animal kingdom, some of which would be very useful in automobiles. Etc.
I studied comparative cognition (how do animals "think and learn"). I use that to "play" with other worlds. Sometimes it works and sometimes it falls flat. But the main point is (to come full circle) I write about what interests me enough to study it in enough depth that I can explain it to someone else and get away with using it in a novel fashion (or so I hope).
This is completely conscious on my part.
There's lots of room for serendipity even with all that "consciousness" in the way. The characters still have to talk and interact. When they "write themselves" it feels like I'm not in control anymore. But it's still my hands pushing the keys. It has to come from somewhere. And, if my mind wasn't prepared in exactly this way, it'd come out completely different.
It's letting loose what is already inside. No doubt there are countless others who have the same, or similar, ideas in there, or are similarly prepared. Thank God for that, otherwise the stuff I write would be incomprehensible to them. It'd be hard to sell (not that I've sold anything yet...) to people who don't share some facility to understand the ideas I'm putting forth. If we all lived separate and incongruous existences, we couldn't possibly communicate.
Or so I think.
Why do we call people that come up with fantastic ideas 'visionaries'? How can you consciously 'get' a new idea? How do you decide which ideas to examine?
I know, I'm not playing fair, but I'll post a riddle in the forum somewhere that I just heard. It's not hard, but at first I thought it insoluble. It took almost five minutes for me to think of the answer, and that's a pretty long time for me to be stumped by a question that 80% of kindergarteners got right.
Maybe I'm not reading that right, though. The way you used it, saying that the "mind" is a construct, with mind in quotes, suggests that you may mean emergent property. Which is an interesting statement in and of itself. If that's true, then in only a few more years we should be able to create actually intelligent, self-aware, sentient machines. We should be able to create them now, in fact.
Anyway, if there are some kind of limits on human ideas, then I'm afraid I've already gone well outside them. But that's just me. And I don't really think that I understand them at all. I have them, but...
I'm not willing to say "God did it," because I think every time we do that, we're saying "I give up -- I can't explain it, why bother studying it?" I believe in God, so for me, sure, ultimately, God "did it." But there's still room to find out how and see if we can emulate it, or even improve on it. (What hubris!!!).
I'm convinced that's what we're here for. To emulate God in all things, including the act of creation. But that's a different question.
Sentient machines? Sure thing. I think our best hope for that is to start with biological material -- something that already has a brain. Maybe I'm wrong, but it seems like the computer folks are going this way. They are trying to build software versions of neural nets. Some are even building small "brains" out of neurons to study how they could be turned into computers under our direct control.
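For a sense of what a "software version of a neural net" looks like at its very smallest, here's a sketch of a single artificial neuron (a perceptron) learning the logical OR function from examples. The names and numbers are illustrative, not drawn from any particular research project:

```python
# A single artificial neuron (perceptron) learning the OR function.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out          # zero when the neuron is right
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

or_table = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_table)
for (x1, x2), target in or_table:
    out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
    print((x1, x2), "->", out)   # reproduces the OR truth table
```

Nobody tells the neuron the rule; the weights settle into it from the examples. Scale that up by a few billion connections and you have the research program described above.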
But whether it's neurotransmitters or electrons that flow across the connections, I do think that hyper-complex "brains" will ultimately achieve consciousness. I actually don't think it's all that remarkable, special or rare. If you study animal cognition, you find pretty soon that finding a definition that includes all humans and excludes all other animals from the list of the "self-aware" is pretty much impossible. Either we all have minds, or none of us do, and this is all an illusion.
Jeannette
Actually, I think that conscious machines would eventually be judged to have souls. It might take a millennium for the ultra-conservatives among the religious community to recognize such things, but any machine that could choose to save a life, and then does so, would meet my criterion right now.
Killing a machine you've had a conversation with should be just as hard as killing a person you've had a conversation with (especially if you've never encountered them face to face). If you couldn't tell the difference, how could your conscience deal with causing the death?
And, sure, I bet there will be machines capable of playing off our sympathies. If a dog or cat can do it, why not a machine designed specifically for hyper-intelligence?
My darn PC plays off my sympathies on an almost daily basis. And when it stops taking my commands, I run from the room and throw all the circuit breakers. It bought itself a battery. I'm screwed!!
If I can control my computer in that way, then it's meaningless to speak of it as having a soul or mind of its own, because it isn't its own. I control it.
If, on the other hand, my computer were able to decide for itself what to do, then it would be different. If I try to reprogram it, it can decide to deprogram itself back. It might not, but it has to be able to decide for itself.
As for some invisible line between animal and human, Mormon doctrine does not specify any such division. Animals have spirits, and so do humans. The degree to which animal spirits differ from human spirits is probably comparable to the degree of difference between their bodies. In other words, yes, clearly different, but not in some fundamental sense. Please note that the use of the word 'probably' in the above statement denotes that this is an interpolation of doctrinal statements, and may be unsound (not that I think it is, but I don't know for sure).
Jeannette
As for YOU, Mr. Onus, it will be hard to convince me that H.A.L. will never happen -- our ego, as a race, will trick us into thinking it can't, and then it will. So there, you technophile! DIE, DEMON!! (Anyone else see eXisTenZ? Pretty cool, in a cheesy sci-fi way.)
If we create machines that are smarter than ourselves and give them powers beyond our own, we would be foolish to then try and make them self-aware. If they have truly free will, then of course some of them will go bad, just like people. Then we'll have to fight a giant war to help the good machines beat the evil machines, and probably the evil machines will figure out how to kill us off so we don't help the Good machines.
Of course the Good machines will probably win in the end, but we might not be around anymore to be happy that they did. And they might well decide not to recreate or regenerate us either. I mean, surely they will realize that humans have free will, and so if they create us, then some of us will be evil rather than good. So, since they have free will themselves, what would be the point of creating us? Particularly if a lot of us would turn out to be evil?
So I'm in the camp that says, make machines as smart and powerful as you want, but don't give them free will. That would just be asking for trouble.
(As far as the eXisTenZ thing, that's the way the title of the movie is spelled, not something I made up).
Jeannette
"Machines are smart and quick. They can now formulate ideas, create art, and rationalize abstract ideas. And they can do this faster than we can. What keeps them from being able to destroy humans? I'll tell you. No matter how complex their thought patterns are, they're still programs. All computers run on a standard set of protocols. Humans have the ability to break the rules that machines can't and that's why they cant kill us. The only problem is knowing how to break those rules."
Give me your ideas on this, please. And mail me so that I can get them quickly. Thanx.
And now for my ideas on -- well -- ideas. The above passage also says it. Humans have the ability to break the rules, even the ones that haven't been written yet. All humans' thought patterns differ in slight ways. Two people may see "The Matrix" and write stories that were inspired by it, but the stories will not be the same. John Locke believed in a thing known as Tabula Rasa, or the blank slate: when we are born our minds are blank, and as we grow, things are written on this slate. Two people who lead similar lives will have similar things written on their slates, but they will still grow up differently, thus giving them different ideas.
P.S. On the topic of the Quantum Leap thing: I know the guy who wrote Karate Kid, and I can promise you it wasn't the person named in the credits. Copyright good stories and don't throw them away; you never know when a producer will go sifting through your garbage.
Give me your ideas on this, please. And mail me so that I can get them quickly. Thanx.
I hope the italicizing worked there. If not, that was the selection from Khan's post I wanted to respond to. I sent an e-mail as well. Basically, I think you have leapt over a logical step. I don't think it follows that because we created something, we are always going to be capable of "defeating it" or protecting ourselves from it, should that thing decide to act independently. I believe that a machine with consciousness, like anything else with consciousness, is by nature unpredictable. That means that we won't really know exactly what it will do in any given situation, even if it behaves exactly according to its programming. We might be able to figure it out after the fact, but that's not the same as real prediction and control. I wouldn't count on a James T. Kirk to come in and argue the computer into self-immolation.
But even if you are right and I am wrong, I think you need to have your character spell it out more clearly. I think he's saying "we're safe because we created them, and therefore we are smarter or can always figure out a way to kill them." That rings hollow to me. I'd like to have some details as to where he put the kill switch.
The thing is, if the machines can't break the rules and humans can, then that means that the humans have something that the machines don't, which is exactly what we're talking about. Free will is the ability to decide to do what you weren't programmed to do. If you have it, then you can break the rules. If we created machines that had free will, then they would be able to break the rules, since that is what it means to have "free" will.
Here's what I think is really going on with "artificial" intelligence and what will happen in the future (in reply to Khan and Survivor, both).
But think of a system so complex that it can't be programmed so simply to just "do the right thing." The three laws only give us humans solace if the machines obey them without fail. As machine brains become more like ours, their ability to act becomes less controllable and less predictable, simply because our PROGRAMS cover less and less of their overall capabilities. Suppose the machine can learn from experience? As soon as it leaves the showroom, it exceeds its initial programming.
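Here's a toy sketch of that "learns from experience" point: the program below specifies only a learning rule, not a behavior. Which action the agent ends up preferring is determined by the rewards it happens to receive after it leaves the showroom, not by anything written in its source. All the names here (LearningAgent, the "left"/"right" actions, the reward rule) are made up for illustration:

```python
import random

class LearningAgent:
    """Knows only HOW to learn; WHAT it does comes from experience."""
    def __init__(self, actions, seed=0):
        self.values = {a: 0.0 for a in actions}  # learned action values
        self.rng = random.Random(seed)

    def choose(self, explore=0.1):
        if self.rng.random() < explore:          # occasionally experiment
            return self.rng.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, lr=0.2):
        # Move the action's value a step toward the observed reward.
        self.values[action] += lr * (reward - self.values[action])

agent = LearningAgent(["left", "right"])
for _ in range(200):
    a = agent.choose()
    reward = 1.0 if a == "right" else 0.0  # comes from the world, not the code
    agent.learn(a, reward)
print(agent.choose(explore=0.0))  # a learned preference, not a hardcoded one
```

Nowhere does the source say "prefer right"; swap the reward rule and the same program settles on the opposite behavior. That's the sense in which the PROGRAM covers less and less of what the machine actually does.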
I think all the failsafes and interdicts in creation will not stop the thinking machine from becoming self aware, eventually. With that comes self preservation. Asimov solved the dilemma by having Daneel (the immortal last remaining positronic robot) run things behind the scenes, but then eventually abandon humanity altogether. He couldn't exceed his core programming and he couldn't continue to be himself while in the company of humans. So, what then? They can't kill us, but they are free in the Universe? Still a far cry from servile automatons with an "off" button.
Basically, the plot revolves around this robot that realizes that it is beginning to malfunction. It is the main controller of a large city, and it knows that if it reports itself, it will be taken offline, but if not, it will begin to err and thus harm humans. So it uses an experimental machine to send itself into the distant past, before humans exist, and once there, evades the humans sent to capture it.
The thing is, the malfunction that it is suffering will become catastrophic when it affects the power core of the robot, but the robot avoids thinking about this, since to analyze that datum would lead to the laws of robotics being invoked, and it would be forced to surrender. It also disables its communication ability and avoids coming within hearing range of the humans hunting it for the same reason: it knows that if they can talk to it, they can force it to surrender.
Not only that, but it leads them into situations where they must risk their safety in order to keep up the pursuit, and never warns them, thus indirectly attempting to harm or kill them. I never found out if it actually killed any or all of them, but it could have, given the circumventions it had devised.
In other words, the laws of robotics are no guarantee at all. Think of enslaved humans being raised and used to negate the laws, say by creating situations in which the robot is forced to choose between compromising the continued safety of a large group of humans and killing a few. Think of the laws being reinterpreted, a la the Humanoids, so that robots consider it their duty to completely subjugate all humans, even to the point of lobotomizing those that resist or don't seem happy.
If AIs are free, then they can do evil.
I think that the only really interesting number-generation analogy is the situation of irrational numbers. Coming up with infinitely many meaningful descriptions of irrational numbers isn't all that hard, since you have all the square roots of primes, and then you can multiply them by all the rational numbers to get more irrationals, and also do the same with simple addition and subtraction... anyway, what I'm saying is that once you solve for the rational component, to extract the base irrational, you find that it is described by a unique mathematical derivation. The square root of two has no properties unrelated to its being the square root of two, while the square root of four has many properties unrelated to being the square root of four.
Does this make any sense at all?
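For what it's worth, the first part of that claim -- that square roots of primes are always irrational -- can be checked mechanically, since the square root of a positive integer is rational exactly when the integer is a perfect square. A small Python sketch (the function name is my own):

```python
from math import isqrt

def sqrt_is_rational(n):
    """For a positive integer n, sqrt(n) is rational iff n is a perfect square."""
    r = isqrt(n)           # integer square root, rounded down
    return r * r == n

# Square roots of primes: never rational.
print([sqrt_is_rational(p) for p in (2, 3, 5, 7, 11)])   # all False
# Perfect squares, like 4 and 9, are the only integers whose roots are.
print(sqrt_is_rational(4), sqrt_is_rational(9))          # True True
```

And multiplying an irrational by a nonzero rational keeps it irrational: if (a/b)·√p were rational, dividing back by a/b would make √p rational too.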
How long is the coastline of Maine?
A: Depends on how you measure it, when, and at what level of detail.
Why bring THAT up? Well, because mathematics can be a great source of ideas and because I never understood imaginary numbers.
The study of probability and statistical inference are very important to what I've been able to write in this genre.
Irrational numbers are more interesting. Early mathematicians thought that any number could be represented in the form a/b, where a and b are both integers. Then some clever fellow, working on the problem of mathematically determining the diagonal of the square, discovered that the square root of two cannot be so represented. Suppose there were some a/b that could represent the square root of two: if a/b = √2, then a²/b² = 2. This may not seem like a serious problem, but we know that if a²/b² = 2, then b²/a² = 1/2. This means that 4b²/a² = 2, and if we take the square root of both sides, we get 2b/a = √2. Meaning that 2b/a is equal to a/b. Now we might just think that if we plugged in big enough numbers, we could make this work, but there is a reasoning loop that lets us pin down a and b a little better. For one thing, we know that if a²/b² = 2, then a² = 2b², which means that a² is even (since that's just what even means: evenly divisible by 2). Furthermore, the square of an odd number is always odd, and the square of an even number is always even. This implies that a must be an even number. Well, so what? Well, remember that a/b = 2b/a. If a is even, then 2b/a can be simplified by dividing both numerator and denominator by 2, which gives us b over a new number that we'll call c. Okay, so we simplify to b/c -- so what? Well, all the above steps of our argument about the square root of two apply to b/c just the same as they did to a/b. We just end up with c/d, then d/e, then eventually y/z. And we are still able to simplify, making y/z = 2z/y and dividing by 2! If a/b equals √2, then a and b must be infinitely large, because the fraction must be capable of being divided by 2 an infinite number of times.
In short, there is no way to represent the square root of two in the form a/b, where a and b represent integers. In fact, there is no way to represent the square root of two as any ratio of integers, even though we know that it sits on the ordinary number line somewhere between 1 and 2.
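The proof can also be backed up by brute force: if √2 were equal to some a/b, then a² = 2b² would have an integer solution. A short Python sketch (the function name and search bound are arbitrary) checks every denominator up to a limit and finds none:

```python
from math import isqrt

def search_sqrt2_fraction(max_b):
    """Look for integers a, b with a*a == 2*b*b, i.e. a/b == sqrt(2)."""
    for b in range(1, max_b + 1):
        a = isqrt(2 * b * b)        # best candidate numerator for this b
        if a * a == 2 * b * b:
            return (a, b)
    return None

print(search_sqrt2_fraction(100000))   # None: no exact fraction exists
```

Of course, no finite search proves the theorem; only the infinite-descent argument above does that. The search just shows the candidates keep missing.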
The clever fellow that first discovered this fact, on telling his discovery to his master, the famous Pythagoras, was rewarded by being thrown into a deep well.
Which I think was very unfair.