Hatrack River Forum   

Author Topic: OSC's physics
PhysicsGriper - posted:
I was wondering how Card's physics in his books seems to everyone here at the boards. I think it's pretty good in general, but a few things bothered me. This is my first post on this board, and I'm not sure about the rules for posting spoilers. If someone could tell me whether or not I can post plot spoilers, I'll be happy to share any insights I have.
Posts: 10 | Registered: Jul 2003

CalvinMaker - posted:
There's a bunch of threads concerning technology and stuff in the enderverse here: http://www.philoticweb.net/openbb/board.php?FID=3

Just thought you might be interested.

Posts: 1934 | Registered: Jun 2001

CalvinMaker - posted:
Oh, and it's fine to post spoilers, but make sure you label them clearly and boldly, so those who don't wish to read them have fair warning.
PhysicsGriper - posted:
Thanks for the link/rules explanation. Much appreciated.
filetted - posted:
I think his biologics are crudely lacking in favor of simplistic poetic justice.

flish

Posts: 1733 | Registered: Apr 2003

Stradling - posted:
The site pointed to has some discussion of Ender physics, but little of what's said there is accurate, at least on the physics issues.

As far as Enderverse and physics go -

The big issues are space travel and philotic principles (which wind up merging).

First - Card does a bang-up job with the relativity thing. He doesn't take us beyond reasonable physics anywhere he's being specific, allowing for super-spiffy ways of storing/extracting propulsion energy for his ships (it's REALLY hard to have enough fuel to keep up constant acceleration to .9989 c - who knows. Ramscoop or something). No real problems within a sci-fi framework.
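For a sense of just how hard "enough fuel" is, here's a back-of-the-envelope check (my own sketch, not anything from the books beyond the .9989 c figure; just basic special relativity):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy_per_kg(beta):
    """Relativistic kinetic energy, in joules, per kilogram of ship mass."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * C**2

# At the .9989 c Card quotes, each kilogram of ship carries ~1.8e18 J --
# roughly 20 kg of mass converted entirely to energy per kg of payload,
# even at perfect efficiency (and that's ignoring the deceleration leg).
print(f"{kinetic_energy_per_kg(0.9989):.2e} J per kg")
```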

Second - philotes. Comparisons have been made with entangled photons, all of which are untenable. I think philotes were a great move - they don't clash with much known physics, look enough like some other physics (entanglement) that you aren't left shaking your head, and allow a deep and rich set of effects which revolutionize the universe he's playing with. Philotes violate a principle involving nonlocality (magic word), but that's OK. To have the universe he's working in be interesting, a few paradoxes are ok. There are good reasons to believe that instant interstellar travel and information transfer are problematic. Of course, things change all the time - this is just something that's fairly well established.

What trips me out is the MD device - setting up a field where electromagnetism and Maxwell's equations break down means there are some heavy-duty changes being made to the nature of the universe. Fine - but if one can arbitrarily change the Standard Model Lagrangian of a bit of space - there is MUCH more you can do with it as far as destroying stuff, extracting energy, etc.

Enough rambling. These books are great as far as their physics; the normal errors are generally absent. No Star Trek-isms.

Alden

(Edited to handle Davidson's quibble about the following phraseology: These are great books because the normal errors...)

[ July 14, 2003, 09:15 AM: Message edited by: Stradling ]

Posts: 90 | Registered: Aug 2000

TomDavidson - posted:
"These are great books because the normal errors are generally absent."

Well, no. They're great books because the characters are well-conceived, the prose is strong, and the ideas are, in general, interesting explorations of basic philosophy. The physics can safely be ignored completely.

Posts: 37449 | Registered: May 1999

Stradling - posted:
Of course. Otherwise all the physics texts I bought would be considered literature. Come on.

Alden

CalvinMaker - posted:
Talk to SSywak. He's like a guru on Enderverse technology.
Boothby171 - posted:
You rang....?
Posts: 1862 | Registered: Mar 2000

PhysicsGriper - posted:
I was pretty pleased with Ender's Game and Speaker for the Dead, where the ships traveled at near light-speed without warping or folding space, or anything of that nature. However, in Children of the Mind, where ships pop in and out of space at any point, I fail to see how the "thoughts" of the people on board could possibly affect reality.

In general, though, Card tends to skip over physics lightly, as pointed out, and focus more on the story and characters. This is a good thing; many movies and books (such as The Matrix) go out into left field as far as physics is concerned, and their plots fall short. Nevertheless, I was a bit bothered by the ship in the Homecoming series book Earthfall -- it describes a "web" used to collect hydrogen in space, yet in order to gather enough hydrogen to propel the ship to 99.5% of the speed of light (the speed required for the ship to travel "100 years in just 10" for the people on board), the ship would need a solid collecting sheet many square kilometers across, and I doubt the web would have done it. Even so, that book was probably my favorite in the series.
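For what it's worth, the "100 years in just 10" figure checks out; a quick calculation (my own, with a hypothetical helper function) shows a time-dilation factor of 10 does require almost exactly 99.5% of light-speed:

```python
import math

def beta_for_gamma(gamma):
    """Speed (as a fraction of c) needed for a given time-dilation factor."""
    return math.sqrt(1.0 - 1.0 / gamma**2)

# "100 years in just 10" on board means a time-dilation factor of 10,
# which indeed requires ~99.5% of light-speed:
print(round(beta_for_gamma(10.0), 4))  # 0.995
```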

PhysicsGriper - posted:
This brings up an interesting point. Stradling posted the following:

What trips me out is the MD device - setting up a field where electromagnetism and Maxwell's equations break down means there are some heavy-duty changes being made to the nature of the universe. Fine - but if one can arbitrarily change the Standard Model Lagrangian of a bit of space - there is MUCH more you can do with it as far as destroying stuff, extracting energy, etc.

I am forced to wonder what he means by this. The Lagrangian function of a particle is T - V, where T is kinetic energy and V is potential energy. The function must be written in terms of the "canonical variables" p and q (momentum and position) before it can be used to solve classical mechanics problems. The "Standard Model" is the term given to our overall picture of the elementary (and not-so-elementary) particles of nature and the quarks and "messenger bosons" which comprise them. It is a very satisfying and comprehensive picture but it fails to yield the known masses or charges or spins of the particles so it is "incomplete" in that sense. Many physicists hope string theory will correct that defect.

I am not entirely sure what Stradling meant by juxtaposing the two terms. Nevertheless, since the MD device does tend to bend a lot of laws, the Buggers must have been a heck of a lot more advanced than we are. Pity that they died out 3000+ years ago - no telling what they could have accomplished.

WheatPuppet - posted:
*scratches head*
Huh. I always thought that the Little Doctor was a (plot) device made from unobtainium. [Big Grin] [Razz]

Posts: 903 | Registered: May 2003

filetted - posted:
(doesn't know what standard model lagrangian is either)
Stradling - posted:
OK. Lagrangian is a magic word that means "equation that tells us what is allowed in the universe". You can write a lagrangian that describes a pendulum - the kinetic (T) and potential (V) energies are easy to look at in simple cases. It describes the "easiest" way to do things - the "principle of least action".
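The pendulum case can be worked symbolically; here's a minimal sketch of mine (using sympy, with the usual mass m, length l, and gravity g - names are my own choices):

```python
import sympy as sp

t, m, l, g = sp.symbols('t m l g', positive=True)
theta = sp.Function('theta')(t)
thetadot = sp.diff(theta, t)

# Lagrangian L = T - V for a simple pendulum
T = sp.Rational(1, 2) * m * l**2 * thetadot**2   # kinetic energy
V = -m * g * l * sp.cos(theta)                   # potential energy
L = T - V

# Euler-Lagrange equation (the "principle of least action" at work):
# d/dt( dL/d(thetadot) ) - dL/d(theta) = 0
eom = sp.simplify(sp.diff(sp.diff(L, thetadot), t) - sp.diff(L, theta))
print(eom)  # equals m*l**2*theta'' + m*g*l*sin(theta), the pendulum equation
```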

Now, more complicated. When you switch the T and V and the space you're working in from classical 2-dimensional vectors and scalars to quantum 4-vectors and scalars, you can carry along the idea of a Lagrangian. Now, the energies and potentials have to be described in a quantum way. You add terms to a lagrangian describing a system, and you can then manipulate it to get results that tell you about the system. (That's why it's useful in the first place).

You add a term to the lagrangian for each and every effect you want to describe. If you're including relativity, you write it in 4-vectors, and you try to keep it Lorentz invariant, meaning it doesn't change with your speed or reference frame. You play with the term and make sure it satisfies "symmetry" - meaning if you rotate it arbitrarily in some variable, it doesn't go berserk and give you a different answer.

People went in and described relativistic electromagnetic fields with a lagrangian and it went great. Then they realized that E&M doesn't even cover why nuclei hang together, or why they decay, so they made up a couple of other forces to cover that, and a lot of work later, added the strong and weak nuclear forces as terms in the system, and the description got better. Then they realized that they could merge the E&M and weak terms, and describe some particles they couldn't account for before. Then they fixed some problems by adding a term for "spontaneous symmetry breaking", called the Higgs field, because though the electroweak term was symmetric, it missed some things.

And on. Gravity isn't in it, and strong and weak are still separate, but the lagrangian I am describing works SO WELL compared to everything else that it is the standard that everyone now works from, looking for new things to add or fix, or some evidence to blow it apart altogether - after all, all this has to match perfectly with experiment, eventually, or it's not much of a description of the universe.

This Standard Model has flaws that're being worked on, but no showstoppers. Where it's wrong, it's wrong by being too limited or perhaps (who knows) by being a less-useful perspective. It describes things very nicely, so far.

Anyway, about the MD device comment - the lagrangian for an EM field requires that a photon be massless. The only ways to suppress the interaction (in such a way as to cause temporary breakup of molecules) are to directly screen the outer charges in the bonds from the inners (not practical, in my mind - requires the introduction of extra charge, which requires a LOT of energy to create), or add energy to the system in such a way as to overcome the potential of the bonds (not the role of a field, but of a force) - and if you're going to do that on a wholesale basis, the energy you are using would be better put to use blowing a small hole in each ship instead of overcoming all those bonds. It is hard for me to imagine that the breaking of the bonds would produce enough energy to propagate that effect at all (remember, breaking molecular bonds gives you TNT and gasoline engines, not spaceflight and nukes).

There are two alternatives. The first is a tweak of the EM lagrangian term - perhaps giving the photon a little mass. Great. EM goes from infinite-range to finite, all the molecules dissociate, and perhaps you can make up an echo effect so the reaction propagates. Only, you just broke symmetry big-time, in a system that has been verified as symmetric to unimaginable degrees. That's tricky. While you're at it, why don't you give gluons - maybe just one color/flavor gluon - a bit of mass? Nuclei dissociate and even protons/neutrons go to pieces - and instead of 1/1000th of their mass turning into energy, as it does in mere nuclear explosion, some 2/3 of their mass is released as the binding is invalidated. THAT would be a bang. No bugger ships left, lemme tell you. That's what I mean by having more fun.
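The 1/1000 versus 2/3 comparison is easy to put in numbers (a toy calculation of mine, just E = fraction × m × c²):

```python
C = 299_792_458.0  # speed of light, m/s

def energy_released(mass_kg, fraction):
    """Energy from converting a fraction of a rest mass: E = f * m * c^2."""
    return fraction * mass_kg * C**2

nuclear = energy_released(1.0, 1e-3)   # ~9e13 J/kg: fission/fusion scale
no_glue = energy_released(1.0, 2 / 3)  # ~6e16 J/kg: binding invalidated
print(no_glue / nuclear)               # ~667x bigger bang per kilogram
```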

Now, you could also say (in the Enderverse) that setting up a field in which philotes can't talk to each other will have the effect of relative positions going off a bit for a little while. That would be fine. Molecules would dissociate, and there would be a bang - about what Card described. Trouble is, he didn't tie philotes and MD device together. So, whatever - you don't want to overuse the unobtanium plot point too much, right?

Like I said before, the philote idea is great. It dodges physics validity altogether. Remember? New effect, not intrusive to our present model. And if human thought is applied philotic physics, it makes perfect sense that wishing hard enough makes it so. There are still problems with simultaneity, but I don't care. This is all fun and games.

Corrections for the interested - the Standard Model _does_ yield spins - very well, in fact. Charges, too. String theory might drag gravitation into the lagrangian and might even reduce it to one term... but so far it's only testable at energies we'll never reach, no matter how hard we try. Maybe someone will jigger up some testable variable later on. I'm not counting on it. It is still cool. We have no idea what determines the masses of the fermions from first principles. Hopefully the masses of the bosons will follow once we tie down a Higgs boson.

There's a lot there - who got through all this? I'm sorry if it wasn't all real clear - this is my first time trying to describe this to a general audience.

Alden

filetted - posted:
Alden,

I'm not sure what you mean by "magic word".

My impression was that you meant to write "hamiltonian" instead of lagrangian, but I understood your point.

thank you for your posts.

flish (mike)

Stradling - posted:
Nope. I meant Lagrangian. Hamiltonian is T+V, and is great for some of the same things - but when you're describing something with a "least action" method, use a Lagrangian every time. I'm not equating this to a QM Hamiltonian like you'd use in a matrix element.

Most scientists (in my experience) use words that don't mean anything to their audience, mostly to avoid being sidetracked into the details of peripheral or difficult issues. They just say, "There's a thingy called <blah>, and it means that I need funding because it'll revolutionize the world." <blah> is a magic word - it means nothing until the listener has context. Meanwhile, it sounds more impressive than thingy.

Thus - when I said Lagrangian initially, I was using a magic word.

Alden

[ July 20, 2003, 11:40 AM: Message edited by: Stradling ]

filetted - posted:
Alden,

My experience is counter to yours. I dislike the term "most" whenever it arises in arguments here on the forum as it is typically overblown and misrepresentative.

Isn't the audience of most scientists/physicists their students (who are familiar with the technical jargon)?

I'm not sure I would resort to calling those technical terms "magic words".

"Most" (subjectively speaking) scientists in my experience try to make their investigations and results as palatable as possible for the audience of their colleagues, students, and the public at large, for exactly those funding issues.

I would agree that it is difficult to provide the appropriate context. It takes specialized language to describe specialized concepts. I don't think this phenomenon can be entirely described as elitist and obfuscating - or is that where you were headed?

mike

PS. "I had this idea for a thingy that totally explains the thingamagigeeness of that thingy"

Stradling - posted:
No - let me clarify. There are many topics that are too difficult to explore in an hour, and they come up every few sentences in an overview lecture. You don't want to just skip them - the sharper audience members will notice that something's missing, and ask why. So instead, you tell them what the issue is called and perhaps give a nutshell summary of its implications, and move on. Later, in another context, the audience member learns something about the topic, and can plug it into your lecture, because he knows where it fits.

It's certainly not out of a desire to be unclear. That'd be easy (and make the lectures a lot shorter.) [Smile]

Alden

PhysicsGriper - posted:
That was quite intriguing. I still can't help but feel that this goes too far into the realm of speculative physics (metaphysics if you will) rather than actual tested physics though. But your point is well-spoken.
filetted - posted:
Alden,

I totally enjoyed reading your posts. If you feel inclined to let loose with greater technical detail, it would be a pleasure to read.

mike

Stradling - posted:
Griper-

Insofar as I was speculating what Card's sci-fi devices might do, it was all speculative.

As far as everything else goes, it is pretty well nailed down. At least, enough that Europe and the US are willing to pay 2.1 billion dollars for the installation of an accelerator here in Geneva to nail things down further. And another one in Batavia, IL. And another at Stanford.

Again - particle physics is in many ways on a firmer footing than most other sciences. This is part of its nature.

Alden

Stradling - posted:
Mike-

If you can think of any questions I'll try and get an answer for you. I don't quite know where to go from here in this thread as far as just holding forth at random. [Smile]

Alden

After I posted this, an idea sprang to mind - there are a lot of sites to look at. Here are a few:

http://www.particleadventure.org
http://www.cern.ch
http://www.fnal.gov

If you want to know more about the stuff I was talking about, the first link is best. The other places mentioned have some overviews of the subject and the machines that let us make these measurements. I'll answer questions as I can.

[ July 23, 2003, 07:44 AM: Message edited by: Stradling ]

PhysicsGriper - posted:
Thanks for the links, Stradling. Someone once said something along the lines of "Research is convincing congressmen who know nothing about science to give you money," or something like that. I think it was one of my dad's friends in college, but I'm not sure.
filetted - posted:
Alden,

That's ok. Thanks for the links. I worked on building/simulating particle detectors for SLAC as an undergrad. I just haven't kept up all that well since. Just remarking that if you felt like expanding, I'd happily listen, try to ask intelligent questions, and enjoy a refresher/uptodater.

mike

Stradling - posted:
Cool. What branch of research did you wind up in (in sunny Santa Monica)?

I've appreciated the intelligent questions.

Did you work on BaBar, or was that more recent?

I wonder if there is any interest in the thread continuing as a discussion of how exactly I can say that these things are measured and not inferred, and why this is hard science and not metaphysics? I just stated those points, without any substantial backup.

I want to get better at explaining it, and it helps me formulate my thoughts to write it.
It's a lot of hard writing, though, so I won't go into it unless people say they'll read it and discuss it. [Big Grin]

Alden

eslaine - posted:
*envy*

Stradling and PhysicsGriper, I keep coming into this thread and agreeing with all you have been posting, unable to match your physics knowledge, and so remaining silent. I love this thread.

And filetted comes in with the remark about the detectors she worked on...

Proton Decay?

Posts: 2506 | Registered: Jul 2003

JLcke - posted:
BaBar, the elephant king.
Posts: 56 | Registered: May 2003

filetted - posted:
alden,

I worked on a subcomponent of something called the GEM project. Did simulations/construction etc of single-electron detectors.

Side-stepped away from high-energy into biophysics (part interest, part career advice from profs involved with the impending death of the SSC)

[Smile]

mike

Stradling - posted:
JLcke -
Yeah, that wasn't lost on the people who built the detector.

http://www-public.slac.stanford.edu/babar/

The thing it's set up to detect is the set of decay products from what are called B mesons (a b-quark (or anti-b) and another quark (or antiquark), which attract each other and bind together). The B that contains an anti-b is called a B-bar. Thus... BBbar - easy enough to make Babar out of that.

Eslaine - Proton decay (spontaneous) doesn't seem to happen here in this universe. At least, if it does, a proton's half-life is more than 10^33 years. It's a good thing, too. Protons decaying would have adverse consequences.
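That 10^33-year bound makes spontaneous decay spectacularly rare; a quick sketch of mine (note expm1 is needed because the fraction is far below floating-point epsilon, so 1 - 2**(-t/T) would round to zero):

```python
import math

def fraction_decayed(t_years, half_life_years):
    """Fraction of a sample decayed after time t, for exponential decay."""
    # -expm1(-x) computes 1 - 2**(-t/T) without catastrophic rounding
    return -math.expm1(-math.log(2.0) * t_years / half_life_years)

# Over the ~1.4e10-year age of the universe, with a 1e33-year half-life,
# only about one proton in 10^23 would have decayed:
print(fraction_decayed(1.4e10, 1e33))  # ~9.7e-24
```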

There are several ways to make protons kick the bucket, though - none of them spontaneous. Several involve shooting one with another proton, or an electron, hard enough that it breaks up. Another is called inverse beta decay, where the proton (for various reasons) captures an electron, and turns into a neutron and an electron neutrino. This is a primary process in a supernova - in fact, though the light burst makes it (for a moment) one of the brightest things in the universe, that light burst is only 1% of the total energy going out. The other 99% is in a wave of neutrinos from all the inverse beta decays that take place in the supernova. A possible leftover from a supernova is called a neutron star. All those protons turn into neutrons, which don't have charge, so they don't push apart. Instead, they compress together and make matter so dense that a teaspoon of it is about the same mass as a battleship.

Such a neutrino burst would cause anyone in its vicinity to suffer a relatively difficult death, because the neutrinos would (at that density) induce regular old beta decay in a large enough part of your body to make your biochemistry very unstable, as a lot of elements suddenly become a lot of slightly different elements.

Mike-

Ah, the vagaries of life. What a fiasco that was. So much wasted money and effort. At least LHC looks like it's going to happen for real.

Alden

filetted - posted:
Alden,

The tides turn.

Do you have a specialty? (research-wise)

mike

Stradling - posted:
Yeah. I'm working on Higgs boson searches for the ATLAS detector at CERN. We're still about 4 years from turning on the accelerator, but we're working on techniques to extract the Higgs signature from a much higher background (about 10,000 times higher). It's tricky. That's what's nice about SLAC - electron-positron events are clean. Not so here. [Smile]

So really, I'm simulating detectors too - but in this case, I'm simulating the whole detector under a limited set of stimuli. We take the results and train a neural net to recognize them - then we turn it loose on a full simulation of detector performance under running conditions, and hope it can recognize the kind of events we fed it. If that works, we use the same neural net to pick the signal out of the real events that will start coming down the line in 4 years.

Alden

filetted - posted:
Alden,

The NN is trained on simulated data? I don't quite get that. The utility of NN's is their ability to pick out patterns when you don't have a preconceived model to begin with. If you've got a model, why not use that in the detection?

pondering... unless the hope is that the NN represents a more concise encoding of the original model used to generate the simulation.

(edit: I missed the training/testing scenario in my first read. Ok, I understand that for the sake of recognizing this signal/background problem)

mike

[ July 30, 2003, 01:46 AM: Message edited by: filetted ]

Stradling - posted:
You're on the right track. We do, in fact, have a preconceived 'known background' and another 'signal hypothesis'. The backgrounds are fairly well understood with respect to the physics, and we're able to simulate their interaction in the detector within an understood error. The signal is guessed at - we model those guesses and hold them in reserve.

I'm going to make this general-audience accessible, including explanatory detail.

When the experiment is running, we're going to be looking at the data in many different ways, trying to eliminate the crap and look at the signatures of hundreds of physics processes. It's a data reduction nightmare - 99.999% of the data are eliminated within 25 nanoseconds, and 99.999999% within the first microsecond. That's important because the stuff we _keep_ is ~100 MB/second - and that's one part in 10^7 of the data being produced. The raw rate is about 1,000,000 gigabytes/s, to scale things into familiar storage terms. No way to handle that - that's like 20 simultaneous phone calls per person for everyone on earth. The reason it's necessary to have so many data flowing so fast is that even with this amount of data, we're still going to be waiting at least 10^3 seconds (about 17 minutes) per "good" (signal) event for some processes. To get any good statistics before the end of the universe, or even the next century, the system has to run this fast. Bunches of protons hit each other in the collider once every 25 nanoseconds, and there are perhaps 25 events from each crossing.
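Those rates hang together arithmetically (my own sanity check on the numbers in the post, nothing more):

```python
kept_mb_per_s = 100.0   # data actually stored, MB/s
kept_fraction = 1e-7    # one part in 10^7 survives the triggers

# Raw rate before the triggers, converted MB -> GB:
raw_gb_per_s = kept_mb_per_s / kept_fraction / 1000.0
print(raw_gb_per_s)     # ~1,000,000 GB/s

# Bunch crossings: one every 25 nanoseconds
crossings_per_s = 1.0 / 25e-9
print(f"{crossings_per_s:.0e} crossings per second")
```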

So we take these weeded-out 100 MB/second and try to analyze them. One can't use one's own neural net (brain) because its interface to the data is too slow and people are too expensive. So you create a program that mimics the behavior of a bunch of 'neurons', called a neural net. It is just a set of mini-decision-makers that choose solutions at random to the problem they're posed, and then try to refine the results they get by making incremental changes in their first assumptions - a process called training. It's a powerful way of getting a computer to recognize patterns. Once the net is trained to see a kind of pattern, you lock it (so it can't change further) and turn it loose on real data, and see if it can pick out the same patterns in the real world.
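A toy version of that train-then-lock workflow (entirely my own sketch, with made-up Gaussian "events" standing in for simulation - nothing like the real ATLAS code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training events: background clusters low in two reconstructed
# variables, signal clusters high (toy numbers, not real physics).
bkg_train = rng.normal(0.0, 1.0, size=(500, 2))
sig_train = rng.normal(2.0, 1.0, size=(500, 2))
X = np.vstack([bkg_train, sig_train])
y = np.concatenate([np.zeros(500), np.ones(500)])

# A single "neuron" trained by incremental refinement (gradient descent) --
# the same guess-then-adjust loop described above, minus the extra layers.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # current guesses, 0..1
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # nudge weights toward the truth
    b -= 0.5 * np.mean(p - y)

# "Lock" the net, then turn it loose on fresh simulated events:
def score(events):
    return 1.0 / (1.0 + np.exp(-(events @ w + b)))

bkg_test = rng.normal(0.0, 1.0, size=(200, 2))
sig_test = rng.normal(2.0, 1.0, size=(200, 2))
accuracy = 0.5 * (np.mean(score(bkg_test) < 0.5) + np.mean(score(sig_test) >= 0.5))
```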

When human beings do the analysis by eye, we do somewhat the same thing, but on a different level - much simpler. We apply 'cuts' - getting rid of stuff we don't need, and trying to find underlying patterns. A neural net makes cuts that are less intuitive and more complicated, but it's the same sort of thing.

At the end of the day, you're left with a 'clean' signal, and you look at it. You say, does this fit my background hypothesis? Hopefully, you say No! It doesn't! There's this bump right here in the graph. Does that match any signal hypotheses we have? Why, Yes! This looks just like a Higgs boson!

'Course you have computers do a lot of that, too, so the process can be blind and one can avoid putting one's own bias on the data.

Anyone care for more?

Alden

filetted - posted:
Alden,

That's a lot of data.

I was wondering why the use of the NN middleman instead of the original model used to generate the simulations (not a comparison to by-eye interpretation and flagging). If the model is good enough to generate an accurate simulation (upon which to train either the NN or a human observer - prohibitively unlikely), why can't the salient features of what you're looking for be extracted and codified into a model-based detector?

mike

Posts: 1733 | Registered: Apr 2003  |  IP: Logged | Report this post to a Moderator
Stradling
Member
Member # 1182

 - posted      Profile for Stradling   Email Stradling         Edit/Delete Post 
Would that it were that simple. We can accurately model on small scales what an event might look like for one process or another, and we can layer them over each other - but the computation necessary to model lots of full events is daunting. We do it, nonetheless... but on a smaller scale. Enough to give us statistics.

Even with the fully simulated events, though, we are basically looking at a pile of crap.

This is an example of a reasonable event (top left). This one was produced in full simulation. We can ID a lot of the particles fairly well, some only some of the time, and some not at all. Some we can't ever see (neutrinos). No collision happens the same way twice. Smashing protons together is like colliding bags of apples at hypersonic speeds, then trying to track all the seeds in real time. There's a lot of stuff to weed out to find what you're looking for. 'Course, you don't have virtual apples popping in and out of existence in the bags, and the seeds don't change into anything and move a lot slower than lightspeed - so I guess applebags are easier. Still, you have a lot of clutter to sort out.

On the bottom left is the cleaned-up version of the collision - a cut has been applied to take out everything with an energy of less than 6 GeV - too low-energy to be useful. Voila! One can see the event in the chaos. 'Course, that was a ginned-up easy event in a pile of easily reducible background. Usually you need MANY cuts to reduce the events to meaningfulness. You lose a lot of signal with those cuts, too - it's a very time-consuming task to optimize one's cuts.
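That kind of threshold cut is simple enough to show in a couple of lines. A sketch with made-up hit energies (only the 6 GeV figure comes from the post; everything else is invented):

```python
# A "cut" is just a selection on reconstructed quantities.  Sketch of
# the 6 GeV threshold described above, on made-up energies (in GeV).
hit_energies = [0.4, 2.1, 6.3, 11.8, 0.9, 48.5, 5.9, 7.2]

CUT_GEV = 6.0
surviving = [e for e in hit_energies if e >= CUT_GEV]
print(surviving)  # → [6.3, 11.8, 48.5, 7.2] - only the hard deposits remain
```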

Enter the NN. It looks at the clutter with its pretrained eye and says, I spy a Higgs! Its cuts were trained with lots of ginned-up sample events, and it looks at the event with those cuts. Those cuts are also applied to the simulated events I mentioned earlier. Then the real and theoretical post-cuts data are compared to determine the validity of the theory.

In sum - the NN doesn't do the recognition, but the reduction. Stat tools do the recognition and the uncertainty analysis.

Alden

Here are the numbers for the computing for LHC, and here is a picture of a simulated event looking end-on down the beam. Just for fun. On the second one, click the Next Picture link as well. [Smile]

[ July 30, 2003, 08:02 AM: Message edited by: Stradling ]

Posts: 90 | Registered: Aug 2000  |  IP: Logged | Report this post to a Moderator
Maccabeus
Member
Member # 3051

 - posted      Profile for Maccabeus   Email Maccabeus         Edit/Delete Post 
Say...what happens if the pattern _doesn't_ match any of your signal hypotheses? New particle? (No doubt that's a headache.)
Posts: 1041 | Registered: Feb 2002  |  IP: Logged | Report this post to a Moderator
Stradling
Member
Member # 1182

 - posted      Profile for Stradling   Email Stradling         Edit/Delete Post 
That's what everyone hopes for most - something new to look at. Sure, we'd love to verify all the guesses we've made - but something new will be a real treat. Potential for Stockholm.

They're thinking there might be the potential to produce micro-black holes here. Wouldn't that be something. People try to dream up everything that might happen - but I'll bet there are some surprises awaiting us.

Alden

Posts: 90 | Registered: Aug 2000  |  IP: Logged | Report this post to a Moderator
filetted
Member
Member # 5048

 - posted      Profile for filetted   Email filetted         Edit/Delete Post 
Alden (and Macc)

[Wink]

My follow-up question. Given the simulation model is likely incomplete or flat-out wrong, why not train the NN on as large a family of simulated scenarios as possible and see which spikes during the real experiment?

I hate to say this, but I still don't feel like my question's been answered.

Let me think and re-pose my question.

mike

Posts: 1733 | Registered: Apr 2003  |  IP: Logged | Report this post to a Moderator
filetted
Member
Member # 5048

 - posted      Profile for filetted   Email filetted         Edit/Delete Post 
side question: have you examined the NNs that appear to identify the signature successfully, deconvoluted them, and compared them to the original simulation model?

that might be helpful in refining the simulations themselves.

mike

Posts: 1733 | Registered: Apr 2003  |  IP: Logged | Report this post to a Moderator
Stradling
Member
Member # 1182

 - posted      Profile for Stradling   Email Stradling         Edit/Delete Post 
The simulations are as accurate as we can make them, and that's pretty good. We're not talking WAGs here. The criterion is that the results of the simulation must match all previous experimental data - e.g. years of LEP and Tevatron and SLAC and Hera and Tristan and ...... This isn't done in a vacuum. The theoretical basis for the codes stems from those results.

There are things we don't understand as well, and approximate as best we can - things like jet hadronization are more difficult at higher energies. Still, there are supporting data that make the model pretty good. The codes are good enough (in full sim) to give us a very good idea of what all physics processes, known or theoretical, will look like in the detector. There will be changes after runs start - there are always surprises - but those surprises are what we want to look at anyway. The codes give us a good look at what we understand now, so we can separate it from what we find later.

Of course, the codes describing the processes we have postulated and are looking for are added in layers, so we can decide whether or not the thing we're looking for showed up. We compare the real measurement to the simulated "known" physics, and to the hypotheses, and see which fits better. That's all. Really, the codes are quite good, and they get better constantly, with hundreds of PhDs working on them. Remember, the collaboration has 4000 members.

The magnitude of the computational task is again the problem with your suggestion. When I run 20 000 events today (read: less than 25 microseconds, one process among thousands) it'll take a week to chew through the simulation. We do run several types of simulation, some of which do some tasks better than others. It's a lot of work.

I think I now see where you are hitting confusion. The NN isn't doing our analysis or pattern recognition, and it's certainly not generating the simulations.

Here's what it's doing: it's optimizing a cut pattern in the data on a training set (from these aforementioned lovely codes) that'll optimally separate the signal from background. There are literally hundreds of variables in energy and geometry that we look at in each of several parts of the detector. The cuts made are really just a surface in a space with a dimensionality equal to the number of variables you're using. Say you're doing cuts on 10 variables. That means you're searching for an optimal surface (usually a simple n-plane, n=10 in this case) in a 10-dimensional space. Humans aren't so good at that - LOTS of trial and error. The NN can do it better.
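A heavily hedged toy version of that search: made-up 10-variable Gaussian 'signal' and 'background', with a plain logistic fit standing in for the NN, just to show a single n-plane cut being found in a 10-dimensional space:

```python
import numpy as np

# Toy data only (not real ATLAS variables): find one separating
# hyperplane in a 10-dimensional variable space, the job described
# in the post above.
rng = np.random.default_rng(1)
d = 10
background = rng.normal(0.0, 1.0, (500, d))
signal = rng.normal(1.0, 1.0, (500, d))   # shifted in every variable

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(500), np.ones(500)])

# Logistic fit by gradient descent: w and b define the n-plane cut.
w = np.zeros(d)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y.astype(bool))
print(f"separation accuracy with one 10-D plane cut: {acc:.2f}")
```

Automating that trial-and-error is exactly what makes the machine-learned cut attractive: by eye, nobody can visualize where a 10-dimensional plane should sit.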

Once that cut is made, it is applied to the analysis of both signal and simulation, as I said before. We are, in fact, directly comparing the model to the signal - not doing something fancy to the model with a NN to change its results or context.

Really, this is one of the things my group is bringing to the table. The NN idea is still new enough that the more conservative members of the collaboration are still cautious about NNs. They don't trust cuts they didn't optimize themselves - they worry about bias. I don't quite understand this. I think bias is more likely from an interested observer. Nevertheless, we're working to get NN cut optimization accepted as part of the official analysis regimen of the ATLAS collaboration.

As an aside - training a NN on a large family of simulations of this size would occupy a nontrivial processor farm for more years than we have. And what would we be training it to recognize? And how could we validate it and check for overtraining, and determine which of the many breeds of NN is best for the task? Remember, the data set from simulation is already in the hundreds of terabytes.

Alden

[ August 01, 2003, 02:23 AM: Message edited by: Stradling ]

Posts: 90 | Registered: Aug 2000  |  IP: Logged | Report this post to a Moderator
filetted
Member
Member # 5048

 - posted      Profile for filetted   Email filetted         Edit/Delete Post 
Alden,

bingo! got it. Thanks for going to all that trouble to explain what you are doing.

I'd mistakenly thought you were training the NN to recognize something you already had in hand (your simulation codes). That baffled me.

So, it's a filter.

Pretty neat.

mike

Posts: 1733 | Registered: Apr 2003  |  IP: Logged | Report this post to a Moderator
filetted
Member
Member # 5048

 - posted      Profile for filetted   Email filetted         Edit/Delete Post 
Alden,

Random thought. I don't have much of an intuitive grasp on the specific problem you're working on, but this idea of optimizing cuts makes me think of SVMs (support vector machines). I've used them a bit in my work.

There are some advantages to SVMs in that they don't contain an underlying model. Also, rather than "fitting" the data for pattern recognition, they emphasize categorical separation of the data by maximizing a margin between the two classes (in whatever dimension you like). If you can accurately label signal and noise based on your simulation codes, this might be worth exploring. I don't have a really good handle on the relative computational overhead of NNs vs SVMs.

I expect, though, that a radial basis function SVM could train up a lot quicker on your problem than a NN. That's my intuitive guess.
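The margin idea is easiest to picture in one dimension. In this toy sketch (all numbers invented), the maximum-margin cut between two separable classes depends only on the closest opposing points - the support vectors - and ignores the bulk of the data entirely:

```python
# Margin maximization in 1-D: for separable classes, the maximum-
# margin boundary sits midway between the two closest opposing
# points (the "support vectors"); every other point is irrelevant.
background = [0.2, 0.9, 1.4, 2.0]   # made-up values
signal = [4.1, 5.0, 6.3, 7.7]

sv_bkg = max(background)            # support vector from background
sv_sig = min(signal)                # support vector from signal
boundary = (sv_bkg + sv_sig) / 2
margin = (sv_sig - sv_bkg) / 2
print(boundary, margin)             # → 3.05 1.05
```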

mike

Posts: 1733 | Registered: Apr 2003  |  IP: Logged | Report this post to a Moderator
Stradling
Member
Member # 1182

 - posted      Profile for Stradling   Email Stradling         Edit/Delete Post 
[Party]

Glad I finally got it out clearly - it sometimes takes me a while to explain things right.

You seem to have enough of a feel to make good suggestions. We've been looking at SVMs (and genetic algorithms as well). The signal and background separation is great, because we produce them separately, and superimpose them. I think the computational overhead isn't so much of a problem on the samples we're using to train stuff - we just do it on our local machines for things like cuts optimization. The real problem is that the number of variables being simultaneously cut on is huge. The dimensionality of the NN problem is hard, and we have to choose only the best variables to include in the training, because computation increases as some power of the dimension. The problem seems to be worse for SVM, as far as I remember from a talk given by the guy in our group that's looking at them. Dunno, though. If I've made any egregious misstatements about that, I'll correct them Monday, when I talk to him again.

Alden

Posts: 90 | Registered: Aug 2000  |  IP: Logged | Report this post to a Moderator
filetted
Member
Member # 5048

 - posted      Profile for filetted   Email filetted         Edit/Delete Post 
Cool.

I wouldn't think GAs were worth exploring unless you've got a particularly clever encoding of your problem. GAs work best when you've got a sort of hierarchical solution-assembly problem (in your case, the cuts on the various variables aren't tightly coupled). To what extent do you think the cuts you are optimizing are coupled? (if they are, then that's good enough reason to toss the idea of human-expert optimization and resort to a NN).

mike

Posts: 1733 | Registered: Apr 2003  |  IP: Logged | Report this post to a Moderator
Stradling
Member
Member # 1182

 - posted      Profile for Stradling   Email Stradling         Edit/Delete Post 
There's a range - they go from uncorrelated to very tightly coupled. It's a very hairy system.

I found out that the real difficulty with SVMs is that the problem scales with the number of training points - to train one you work with a matrix whose dimension is proportional to the number of points in the sample set. That's tricky. GAs scale better, and like NNs, can still be useful with shorter training cycles.
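The matrix in question is the kernel (Gram) matrix: one entry per pair of training points, hence n x n, so storage and work grow quadratically with the sample size. A small sketch with an RBF kernel on made-up 1-D points:

```python
import numpy as np

# The kernel (Gram) matrix of an SVM has one entry per PAIR of
# training points, so it is n x n - quadratic growth in the size
# of the training sample.  Toy RBF kernel on made-up 1-D points.
def rbf_gram(points, gamma=0.5):
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None] - pts[None, :]
    return np.exp(-gamma * diff**2)

for n in (10, 100, 1000):
    G = rbf_gram(np.linspace(0.0, 1.0, n))
    print(n, G.shape, G.size)   # entries scale as n**2
```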

Alden

Posts: 90 | Registered: Aug 2000  |  IP: Logged | Report this post to a Moderator
PhysicsGriper
Member
Member # 5410

 - posted      Profile for PhysicsGriper   Email PhysicsGriper         Edit/Delete Post 
Hey Stradling --

All that work sounds quite interesting, especially the part about inverting the matrix. I was curious as to whether or not determinants played any role in your work, because I remember reading in my linear algebra book about some applications that need determinants of very large nxn matrices, sometimes with n up in the hundreds of thousands.
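For what it's worth, determinants of matrices that size aren't computed by cofactor expansion (that's O(n!) work); numerical libraries factorize the matrix instead, which costs O(n^3), and for large n the determinant's magnitude overflows a float, so you work with its sign and log. A sketch with NumPy on a random toy matrix:

```python
import numpy as np

# Determinant of a large n x n matrix: LU factorization gives it in
# O(n^3), and since the raw value over/underflows for big n, NumPy
# exposes the sign and log-magnitude separately.
rng = np.random.default_rng(2)
A = rng.normal(size=(500, 500))       # toy random matrix

sign, logdet = np.linalg.slogdet(A)   # det(A) = sign * exp(logdet)
print(sign, logdet)
```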

By the way, I was the one that sent you the email, I've run into a bunch of people on the internet claiming to be physicists, but you're the first real one. [Smile]

Two more questions: 1) Is this the research you're working on for your doctoral dissertation?

2) When you get your PhD, are you going to continue to do research or are you going to teach?

Good luck with the research and everything else.

Posts: 10 | Registered: Jul 2003  |  IP: Logged | Report this post to a Moderator
Stradling
Member
Member # 1182

 - posted      Profile for Stradling   Email Stradling         Edit/Delete Post 
Aha - mystery solved.

Matrices and determinants begin to take over in any real physics problem. Get real familiar with them.
[Grumble]

Answers: Yes and yes.

Alden

Posts: 90 | Registered: Aug 2000  |  IP: Logged | Report this post to a Moderator
filetted
Member
Member # 5048

 - posted      Profile for filetted   Email filetted         Edit/Delete Post 
Alden,

I would have thought SVMs would be a nice solution. Though the search for a separating decision boundary is akin to NNs, the SVM should focus most of the search/fit effort on the boundary rather than on the bulk of the data points like a NN does. The memorization problem is likely similar, but the training ought to be quite a bit quicker.

dunno how'd you'd use a GA, though. That's a brilliant solution if you've got the right input-encoding.

mike

Posts: 1733 | Registered: Apr 2003  |  IP: Logged | Report this post to a Moderator
unohoo
Member
Member # 5490

 - posted      Profile for unohoo   Email unohoo         Edit/Delete Post 
My take on the "physics" is that OSC stayed close enough to known physics when he could without sacrificing plot and character, and invented philotes and the MD device to be far enough away from what we know about physics today to allow his plot and his characters to go where they needed to go. That is not to say that OSC ignored what is known about physical laws, but that he didn't allow technobabble [Razz] to interfere with a really great story. If he couldn't conceive a device that would destroy a planet in one fell swoop, then Ender could not be the great Xenocide that he needed to be for the rest of the stories. And I love the idea of the philotes as both a means of instant communication and travel: since, at present, we don't know of a way to go faster than light for either, this speculation is just as good as any other that doesn't blatantly violate what we do know. After all, warp drive is more cumbersome and problematic than the neat philote solution. [Big Grin]
Posts: 168 | Registered: Aug 2003  |  IP: Logged | Report this post to a Moderator
   


Copyright © 2008 Hatrack River Enterprises Inc. All rights reserved.
Reproduction in whole or in part without permission is prohibited.

