MadSci Network: Physics

Re: Is anything in Nature truly random and without cause?

Date: Fri Aug 21 23:56:58 1998
Posted By: Don Pettibone, Other (pls. specify below), Ph.D. in Applied Physics, Quadlux Inc.
Area of science: Physics
ID: 900348428.Ph

This is a world-class physics question.  Albert Einstein was very 
interested in this question, and it was the issue of randomness implicit in 
quantum mechanics that kept him from ever accepting the theory; he 
is quoted as saying, “God does not play dice.”  [In fact, there are all 
kinds of Einstein quotes about God.  He said once that he was trying to 
determine if God had any choice in how he made the universe, and another 
time he said, “God is subtle, but not malicious.”  At one physics 
conference Einstein spouted another of these God aphorisms and one of his 
colleagues shot back, “Stop telling God what to do!”]  We now are certain 
that any new physics that is discovered in the years to come will have to 
be consistent with quantum mechanics, so the randomness seems to be here to 
stay.  I am drawing heavily from “The Feynman Lectures on Physics” in this 
answer.

I’ll be talking a lot about quantum mechanics so I need to say a word about 
what it is.  Under most circumstances, when we look at objects that are 
reasonably big, we do not need to invoke quantum mechanics to accurately 
predict the object’s behavior.  For instance, the motion of a baseball, or 
an airplane, or a red blood cell, can all be described very accurately by 
classical physics.  When you start to look at things that are about one 
billionth of a meter or smaller, which is the size of atoms and small 
molecules, we find that classical physics is at a loss to predict much of 
anything.  So it is mainly when we consider things that are very small, 
such as atoms and their inner workings, that we need to come up with a new 
physics, which is known as quantum mechanics.  Your question about 
radioactive decay is a problem that can only be treated on a fundamental 
level by quantum mechanics.  [It is an oversimplification that quantum 
mechanics only shows up at the atomic level, and it is possible to find 
situations where quantum mechanics “shows through” into our classical 
world.  For instance, lasers owe their existence to one of the most 
important principles of quantum mechanics.  Similarly, superconductivity 
and superfluidity owe their existence to quantum effects, and they are easy 
to see in the laboratory on spatial scales that are large.  We believe that 
neutron stars are very real; these are stars that are extremely massive and 
large by laboratory standards, and yet quantum mechanics is needed to 
explain their existence.  So it isn’t true that quantum mechanics only 
takes over at small spatial scales.]  Quantum mechanics had a difficult and 
protracted birth, with the first efforts starting around 1900 and fairly 
complete and accurate formulations of the theory being developed in the 
1920s and the 1930s. 

First off, let’s talk about whether or not the laws of physics before 
quantum mechanics were deterministic or not.  By deterministic, I mean that 
given accurate starting conditions of some system of particles, say, for 
example, the system of particles that makes up our galaxy, and knowing the 
laws of physics, is it possible to predict the future evolution of the 
positions and velocities of all these particles?  If so, then the system is 
said to be deterministic.  A random system would be one that was non-
deterministic.  The first stunning success of Newtonian (classical) physics 
was Newton’s ability to apply his laws of motion and his law of universal 
gravitation to predict the motions of the planets and other heavenly 
bodies, such as comets.  Newton was fortunate in that in our solar system 
the sun is much more massive than any of the planets, and therefore the 
problem of the motion of the planets nearly breaks down to a bunch of two-
body problems.  It turns out that if three bodies are interacting we cannot 
solve the equations of motion exactly, except in some special limiting 
cases.  Newton had a terrible time trying to predict the motion of our moon 
accurately since both the earth and the sun interact strongly with the 
moon, which makes it one of the dreaded three body problems.  Since we 
can’t solve a three body problem analytically (which means with just pencil 
and paper giving simple mathematical expressions that are easy to 
evaluate), consider how much worse it is when there are lots of interacting 
bodies.  There is a good book written on the subject by Ivars Peterson 
entitled “Newton’s Clock.”  I recommend it highly.

Computers, in a sense, get us out of some of this problem.  Even though we 
can’t compute solutions 
analytically for N body problems, we can compute the solutions to Newton’s 
laws of motion for N bodies interacting gravitationally with arbitrary 
precision.  But there is a catch.  It turns out that many systems that you 
might study have properties that lead to a very high sensitivity in their 
evolution to vanishingly small differences in their initial state.  
Consider balancing a pencil on its tip, an unstable situation.  It will 
fall in one direction or another, eventually, but the direction it winds up 
falling in will depend extremely sensitively on its exact position and 
velocity when you let go of it, or perhaps it will depend on tiny air 
currents surrounding it.  In many physical systems tiny differences get 
amplified enormously.  Such systems are called chaotic.  This has the 
operational effect that since you cannot know positions and velocities with 
perfect precision, after a very short while your ability to predict the 
system’s behavior is terrible.  To explain the situation qualitatively, you 
might say that for every factor of 10 in precision that you improve your 
starting measurements you will increase the time over which you can 
accurately predict the system’s behavior by one second.  [Don’t take the 
specific numbers seriously; they are just given to make the case concrete.  
Exponential growth of an instability holds for small perturbations in many 
sorts of systems.]  So if you can measure to 4 decimal places, you can 
predict for 4 seconds, 10 decimal places worth of precision would allow you 
to predict accurately for 10 seconds, and so on.  This is not good news, as 
it is not possible to go on adding precision to our measurements with ease.  
The most accurate physical measurement that we can make is not much more 
than 10 decimal places, which is pretty good but not good enough to stay 
ahead of an exponential growth in some instability.  Therefore, since these 
nasty instabilities grow in lots of systems that we have studied, it means 
that while in theory we might be able to predict the future behavior of a 
complex system, in practice, we cannot.  So the world was already random 
before quantum mechanics came along.  Edward Lorenz, an early pioneer in 
chaos theory, was developing a simple model of the weather when he found 
this exquisite sensitivity to initial conditions; he subtitled one of his 
papers “Does the flap of a butterfly’s wings in Brazil set off a tornado in 
Texas?”  His conclusion was that it could.  By 
the way, James Gleick wrote a very good and very readable book about chaos 
entitled, aptly, “Chaos.”
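The explosive growth of tiny initial differences is easy to demonstrate numerically.  The short Python sketch below (my own illustration, not from any of the books mentioned above) iterates the logistic map, a standard toy model of chaos, from two starting points that differ by only one part in a billion, and watches the two trajectories drift apart:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x), which is chaotic for r = 4.0.
r = 4.0
x, y = 0.400000000, 0.400000001   # two starts differing by 1e-9

for step in range(1, 61):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if abs(x - y) > 0.5:          # the trajectories are now totally different
        print(f"trajectories diverged after {step} steps")
        break
```

The separation roughly doubles on every iteration, so the two trajectories disagree completely after a few dozen steps; each extra digit of agreement in the starting values delays the divergence by only a few more steps, which is exactly the losing battle against exponential growth described above.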

When quantum mechanics came along, things only got worse.  Now for even 
very simple systems we find that no matter how we prepare the system, the 
results have an element of randomness.  We can predict very accurately, on 
a statistical basis, what will happen on average if we repeat an experiment 
millions of times, but we usually cannot say with certainty what will 
happen in one specific instance.  You suggested that there might be an 
underlying layer that we don’t understand yet.  If we could see to that 
layer, things would be deterministic again.  That is a good idea, and 
Einstein held out hope that such a theory would prevail.  Such theories are 
called hidden-variable theories.  However, none of these theories have 
been successful.  In fact, experimental tests of Bell’s inequalities 
indicate that no local hidden-variable theory can be correct, and that we 
are stuck forever with the indeterminacy of quantum mechanics.
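Radioactive decay is a good example of this statistical character.  The Monte Carlo sketch below (my own illustration; the decay probability is an arbitrary made-up value, not any real isotope) shows that although no one can say which atoms will decay in a given interval, the fraction that decays across many atoms is predictable to high accuracy:

```python
import random

# Each atom independently has probability p of decaying in one time step.
# (p = 0.01 is an arbitrary illustrative value, not a real isotope's rate.)
p = 0.01
n_atoms = 1_000_000

random.seed(42)  # fixed seed so the run is repeatable
decayed = sum(1 for _ in range(n_atoms) if random.random() < p)

# Individual outcomes are random, but the aggregate is sharply predictable:
# the expected fraction is p, with relative fluctuations of order
# 1/sqrt(n_atoms * p), here about 1%.
print(f"fraction decayed: {decayed / n_atoms:.5f} (expected {p})")
```

Quantum mechanics gives us the probability p with great precision, and the law of large numbers then makes the bulk behavior of a sample essentially certain, even though each individual decay remains unpredictable.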

Richard Feynman said that no one who has thought deeply about quantum 
mechanics can feel comfortable with it.  It is just too strange a theory, 
compared to how we experience the world.  Perhaps if we grew up playing 
with quantum mechanical toys, the indeterminacy would seem right and 
natural to us.  But in our, seemingly, cause and effect world, quantum 
mechanics is very strange.  The test of a theory must be how well it 
predicts measurements we make.  Einstein’s theory of relativity seemed to 
be very counter intuitive when it was first proposed, but it had the virtue 
of making very detailed predictions that agree with what we see.  We 
accept quantum mechanics because it, also, is a remarkably successful 
theory. In some cases, it is able to make predictions that are accurate to 
one part in 10 billion.  Not bad.  Perhaps in time we will find a deeper 
theory that encompasses quantum mechanics and makes more “sense” to us, and 
quantum mechanics will fall out of this “super” theory in a natural way.  
Then we might be able to see why the indeterminacy of quantum mechanics 
that we observe is right and natural, rather than the strange and 
unsettling thing that it appears to be.
