Does God Play Dice? 
       What is Randomness?
      First, we'd better take a closer look at what we mean by
      'random'. Chaos has taught us that we must be
      careful to distinguish between what happens in mathematical systems where
      we assume perfect and infinitely precise knowledge, and what happens in practice
      when our knowledge is imperfect and imprecise. The meaning of the word
      'random' depends heavily upon this distinction.
      
      
       A dynamical system is one whose state changes over time, according
      to some rule or procedure which I'll call a dynamic. A dynamic is
      a rule for getting to the 'next' state from the current one. The key to thinking
      about randomness is to imagine such a system to be in some particular state,
      and to let it do whatever that particular system does. Then imagine putting
      it back into exactly that initial state and running the whole experiment
      again. If you always get exactly the same result, then the system
      is deterministic. If not, then it's random. Notice that in order to show
      that a system is deterministic, we don't actually have to predict what it
      will do: we just have to be assured that on both occasions it will do the
      same thing. 
      
      
      For example, suppose the system is a cannonball, being dropped
      off the edge of a cliff under controlled, repeatable conditions. Suppose
      that the dynamic is the action of gravity according to
      Newton's laws. You drop the cannonball and it
      falls, accelerating as it does so. Obviously if you repeat the same experiment
      under identical conditions, the ball will do exactly the same thing as before,
      because Newton's laws prescribe the future motion uniquely. So this system
      is deterministic. 
      
      
      In contrast, the system may be a pack of cards, and the dynamic
      may be to shuffle the pack and then take the top card. Imagine that the current
      top card is the ace of spades, and that after shuffling the pack the top
      card becomes the seven of diamonds. Does that imply that whenever the top
      card is the ace of spades then the next top card will always be the seven
      of diamonds? Of course not. So this system is random. 
      
      
      Even this distinction is not so clear cut when we think about
      the real world. In fact, it's difficult to imagine circumstances in which
      you can be absolutely sure that the real world 'is' random rather than
      deterministic, or vice versa. The distinction is about appearances, not deep
      realities - an apparently random universe could be obeying every whim
      of a deterministic deity who chooses how the dice roll; a universe
      that has obeyed perfect mathematical laws for the last ten billion years
      could suddenly start to play truly random dice. So the distinction is about
      how we model the system, and what point of view seems most useful, rather
      than about any inherent feature of the system itself.
      
      
       In modelling terms, the difference between randomness and
      determinacy is clear enough. The randomness in the pack of cards arises from
our failure to prescribe unique rules for getting from the current state
      to the next one. There are lots of different ways to shuffle a pack. The
      determinism of the cannonball is a combination of two things: fully prescribed
      rules of behaviour, and fully defined initial conditions. Notice that in
      both systems we are thinking on a very short timescale: it is the next state
      that matters - or, if time is flowing continuously, it is the state a tiny
      instant into the future. We don't need to consider long-term behaviour to
      distinguish randomness from determinacy. Scientists have devised many models
of the real world: some deterministic, some not. In a clockwork Newtonian
model nothing is truly random. If you run a deterministic model twice
from the same initial state, it will do the same thing both times. However,
we currently have a different model of the universe: a quantum-mechanical
one. In quantum mechanics - at least, as currently formulated - there is genuine
randomness. Just as the rule 'shuffle the cards' permits many different outcomes,
      so the rules of quantum mechanics permit a particle to be in many different
      states. When we observe its state, we pin it down to a particular value in
      the same way that turning over the top card of the pack reveals a particular
      card. But while a quantum system is following its own nose, unobserved, it
      has a random choice of possible futures. So whether we think that our universe
      as a whole is random depends on what brand of physics we currently espouse,
      and since we can't actually run the entire universe twice from the same initial
conditions, the whole discussion becomes a trifle moot. [Note that "moot"
      here suggests the meaning "void" -LB] 
      
      
      However, instead of asking 'is the entire universe really
      random?' we can ask a less ambitious question - but a more useful one.
      Given some particular subsystem of the real world, is it best modelled by
      a deterministic mathematical system or a random one? And now we can make
      a genuine distinction. It is clear from the start that any real world system
      might be suddenly influenced by factors outside our knowledge or control.
      If a bird smashes into the falling cannonball then its path will deviate
      from what we expect. We could build the bird into the mathematics as well,
      but then what of the cat that may or may not capture the bird before it can
      crash into the cannonball? The best we can do is choose a subsystem that
      we think we understand, and agree that unexpected outside influences don't
      count. Because our knowledge of the system is necessarily limited by errors
      of measurement, we can't guarantee to return it to exactly the same initial
      state. The best we can do is return it to a state that is experimentally
      indistinguishable from the previous initial state. We can repeat the cannonball
      experiment with something that looks like the same cannonball in the same
      place moving at the same speed; but we can't control every individual atom
      inside it to produce the identical initial state with infinite precision.
      In fact, whenever we touch the cannonball a few atoms rub off and a few others
      transfer themselves to its surface, so it is definitely different every time.
      
      
      
      So now - provided we remember that only short timescales
      are important - we can formulate a practical version of the distinction
      between deterministic chaos and true randomness. A real subsystem of the
universe looks deterministic if, ignoring unexpected outside effects, whenever
you return it to what looks like the same initial state, it then does
      much the same thing for some non-zero period of time. It is random if
      indistinguishable initial states can immediately lead to very different
      outcomes.
      
      
       In these terms the cannonball system, using a real cannonball,
      a real cliff, and real gravity, still looks pretty deterministic. The experiment
      is 'repeatable' - which is what makes Newton's laws of motion so effective
      in their proper sphere of application. In contrast, a real card-shuffling
      experiment looks random. So does the decay of a radioactive atom. The randomness
      of the card-shuffle is of course caused by our lack of knowledge of the precise
      procedure used to shuffle the cards. But that is outside the chosen system,
      so in our practical sense it is not admissible. If we were to change the
      system to include information about the shuffling rule - for example, that
      it is given by some particular computer code for
      pseudo-random numbers, starting with a given
      'seed value' - then the system would look deterministic. Two computers
      of the same make running the same 'random shuffle' program would actually
      produce the identical sequence of top cards. 
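      As a minimal sketch of that last point in Python (the standard random
      module and the seed value 42 are my own choices, not anything from the
      text), two 'shuffling programs' started from the same seed step through
      exactly the same sequence of top cards:

        import random

        def top_cards(seed, deals=5):
            # Shuffle a fresh deck repeatedly with a seeded generator and
            # record the top card after each shuffle.
            rng = random.Random(seed)   # the 'seed value' fixes the whole future
            ranks = ["A", "2", "3", "4", "5", "6", "7",
                     "8", "9", "10", "J", "Q", "K"]
            deck = [r + s for s in "SHDC" for r in ranks]
            tops = []
            for _ in range(deals):
                rng.shuffle(deck)
                tops.append(deck[0])
            return tops

        print(top_cards(42))   # two runs with the same seed...
        print(top_cards(42))   # ...give the identical sequence of top cards

      Change the seed and the sequence of top cards changes; keep it, and the
      'random shuffle' is perfectly deterministic.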
      
      
      We can also look at the card system in a different way. Suppose
      the choice of card is determined by just the first few digits of the
      pseudo-random number, which is fairly typical of how people write that kind
      of program. Then we don't know the 'complete' state of the system at any
      time - only the few digits that tell us the current top card. Now, even with
      a fixed pseudo-random number generator, the next card after an ace of spades
      will be unpredictable, so our model has become random again. The randomness
      results from lack of information about some wider system that includes the
      one we think we are looking at. If we knew what those 'hidden variables'
      were doing, then we would stop imagining that the system was random.
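      A toy illustration of those hidden variables, under assumptions of my
      own (a simple linear congruential generator plays the hidden dynamic,
      and 'state modulo 52' plays the visible top card): two hidden states
      that show the same card can be followed by different cards, so the
      observed sequence looks random even though the full system is
      deterministic.

        def next_state(state):
            # Hypothetical hidden dynamic: a linear congruential generator
            # whose full 31-bit state we never get to see.
            return (1103515245 * state + 12345) % 2**31

        def top_card(state):
            return state % 52   # the only part of the state we observe

        s1, s2 = 1000, 1052     # different hidden states, same visible card
        print(top_card(s1), top_card(s2))   # identical observations
        print(top_card(next_state(s1)),
              top_card(next_state(s2)))     # yet different cards come next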
      
      
       Suppose we are observing a real system, and we think it looks
      random. There are two distinct reasons why this might happen: either we are
      not observing enough about it, or it truly is irreducibly random. It's very
      hard to decide between these possibilities. Would the decay of a radioactive
      atom become deterministic if only we knew the external rules for making it
      decay (the shuffling rule) or some extra 'internal' dynamic on 'the entire
pack of cards'? Fine, but right now we don't, and maybe we never will, because
      maybe there is no such internal dynamic. (See Chapter 16 for some speculations
      on this topic.) 
      
      
      I repeat, we are in the business of comparing observations
      of the real world with some particular model, and it is only the model that
      can safely be said to be random or deterministic. And if it is one or the
      other, then so is the real world, as far as the aspects of it that our model
      captures are concerned. 
      
      
      Chance and Chaos
Having sorted out what we mean - or at any rate, what I mean - by 'random'
      and 'deterministic', we can turn to the relation between chance and chaos.
      This is not a simple story with a single punchline. The main source of potential
      confusion is the multifaceted nature of chaos: it takes on different guises
      when viewed in different lights.
      
      
       On the surface, a chaotic system behaves much like a random
      one. Think about computer models of the Earth's weather system, which are
      chaotic and so suffer from the butterfly effect. Run the computer model starting
      from some chosen state, and you get a pleasant, sunny day a month later.
Run the same computer model starting from some chosen state plus one
flap of a butterfly's wing - surely an indistinguishable state in any conceivable
      practical experiment - and now you get a blizzard. Isn't that what a random
system does? Yes, but the timescale is wrong. The 'randomness' arises on large
timescales - here, months. The distinction between determinacy and randomness
      takes place on short timescales; indeed it should be immediate. After a day
      that flapping wing may just alter the local pressure by a tenth of a millibar.
      After a second, it may just alter the local pressure by a ten billionth of
      a millibar. And indeed in the computer models that's just what happens. It
takes time for the errors to grow, and we can quantify that time using the
      Liapunov exponent. So we can safely say that on short timescales the computer
      model of the weather is not random: it is deterministic (but chaotic). 
      
      
      To add to the scope for confusion, in certain respects a chaotic
      system may behave exactly like a random one. Remember the 'wrapping
      mapping' of Chapter 6, which pulls out successive decimal places of its initial
      condition using the dynamical rule 'multiply by ten and drop anything in
      front of the decimal point'? There is nothing random about the rule
      - when presented with any particular number, it always leads to the same
      result. But even though the rule is deterministic, the behaviour that it
      produces need not be. 
      
      
      The reason is that the behaviour does not depend solely on
      the rule: it depends on the initial condition as well. If the initial condition
      has a regular pattern to its digits, such as 0.3333333..., then the behaviour
      (as measured by the first digit after the decimal point) is regular too:
3, 3, 3, 3, 3, 3, 3. However, if the initial condition was determined by
randomly throwing a die, say 0.1162541..., then the behaviour will appear
equally random: 1, 1, 6, 2, 5, 4, 1.
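      That rule is easy to try for yourself. Here is a minimal sketch in
      Python (exact fractions are my own choice, used because binary floating
      point would itself scramble the decimal digits):

        from fractions import Fraction

        def digits(x, steps=7):
            # Iterate 'multiply by ten and drop anything in front of the
            # decimal point', reading off the first decimal digit each time.
            out = []
            for _ in range(steps):
                out.append(int(x * 10))   # first digit after the decimal point
                x = (x * 10) % 1          # the deterministic rule itself
            return out

        print(digits(Fraction(3333333, 10**7)))   # regular input: [3, 3, 3, 3, 3, 3, 3]
        print(digits(Fraction(1162541, 10**7)))   # die-thrown input: [1, 1, 6, 2, 5, 4, 1]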
      
      
       In the sense described, the 'multiply by ten' dynamical system
displays absolutely genuine random behaviour - exactly as random as the die
that produced 1, 1, 6, 2, 5, 4, 1 in the first place. However, it would be
      a gross distortion to say that the system 'is' random, for at least two reasons.
      The first is that the measurement we are considering, the first digit after
      the decimal point, is not a complete description of the state of the system.
A more accurate representation is 0.3333333, 0.333333, 0.33333, 0.3333, 0.333,
0.33, 0.3; or in the random case 0.1162541, 0.162541, 0.62541, 0.2541, 0.541,
0.41, 0.1. That second sequence doesn't look totally random if you see the
      whole thing. The second reason is that it is the initial condition that provides
      the source of randomness; the system merely makes this randomness visible.
      You might say that chaos is a mechanism for extracting and displaying the
      randomness inherent in initial conditions, an idea that the physicist Joseph
      Ford has advocated for many years as part of a general theory of the
      information-processing capabilities of chaos. 
      
      
      However, a dynamical system is not just a response to a single
      initial condition: it is a response to all initial conditions. We just tend
      to observe that response one initial condition at a time. When we start thinking
      like that, we can soon distinguish regular patterns lurking among the chaos.
The most basic is that, for a time, systems whose initial conditions differ
      by a small amount follow approximately similar paths. Thanks to
      the butterfly effect this similarity eventually
      breaks down, but not straight away. If the initial condition had been 0.3333334
      then the behaviour would have been 3, 3, 3, 3, 3, 3 - so far so good - and
      then 4, well, it couldn't last for ever. In exactly the same way, if the
      initial condition had been 0.1162542 instead of 0.1162541, the two behaviours
      would also have looked very similar for the first six steps, with the difference
      becoming apparent only on the seventh. In fact, if we compare the exact values
      (rather than just our 'observations' of the first decimal place) then we
      can see how the divergence goes. The first initial condition goes
0.1162541, 0.162541, 0.62541, 0.2541, 0.541, 0.41, 0.1
and the second goes
0.1162542, 0.162542, 0.62542, 0.2542, 0.542, 0.42, 0.2.
      
      
      The differences between corresponding values go
      0.0000001, 0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 
      and each is ten times bigger than the previous difference. So we can actually
      see how the error is growing; we can watch how the butterfly's flapping wing
cascades into ever-bigger discrepancies.
      This kind of regular growth of tiny errors - I'll use the word 'error' for
      any small difference in initial conditions, whether or not it's a mistake
      - is one of the simplest tests for chaos. The technicians call it the system's
      Liapunov exponent, named after
      A. M. Liapunov, a famous Russian mathematician
      who invented many of the basic concepts of dynamical systems theory in the
early 1900s. Here the Liapunov exponent is 10, meaning that the error grows
      by a factor of ten at each step. (Well, strictly speaking the Liapunov
      exponent is log 10, which is about 2.3026, because the rate of growth
      is e raised to the power of the Liapunov exponent, not the exponent
itself. But that's a technical nicety. To avoid confusion I'll talk about
      the 'growth rate', which here really is 10.)
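      That arithmetic is easy to reproduce. A small sketch (again using exact
      fractions, my own choice) follows the two nearby initial conditions and
      shows the difference growing tenfold at each step - a growth rate of 10,
      and a Liapunov exponent of log 10:

        import math
        from fractions import Fraction

        def step(x):
            return (x * 10) % 1            # the 'multiply by ten' dynamic

        x = Fraction(1162541, 10**7)       # original initial condition
        y = Fraction(1162542, 10**7)       # the same thing with a tiny 'error'
        previous = None
        for _ in range(7):
            diff = y - x
            if previous is not None:
                print("difference", float(diff), "is", diff / previous,
                      "times the previous one")
            previous = diff
            x, y = step(x), step(y)

        print("Liapunov exponent:", math.log(10))   # about 2.3026; growth rate 10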
      
      
       Of course the growth rate of a tiny error is not always constant:
      the only reason that the growth here is exactly tenfold at every step is
      that the dynamics multiplies everything by ten. If the dynamics were more
      variable, multiplying some numbers by 9 and others by 11, say, then you'd
get a more complicated pattern of error growth; but on average, and for very
small initial errors, it would still grow by some rate between 9 and 11. In
      fact Liapunov proved that every deterministic dynamical system has a well-defined
      rate of growth of errors, provided the errors are taken to be sufficiently
      small. 
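      A hedged sketch of that average (the 9-or-11 rule comes from the text;
      the coin-flip choice between the two factors and the run length are my
      own assumptions): the long-run growth rate per step settles down to the
      geometric mean of the local stretching factors, a little under 10.

        import math
        import random

        rng = random.Random(0)
        log_error = math.log(1e-12)        # start with a very small error
        steps = 10000
        for _ in range(steps):
            factor = rng.choice([9, 11])   # locally the error is stretched by 9 or 11
            log_error += math.log(factor)  # track the logarithm to avoid overflow

        growth = math.exp((log_error - math.log(1e-12)) / steps)
        print("average growth rate per step:", growth)   # near sqrt(9 * 11), about 9.95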
      
      
      The Liapunov growth rate provides a quantitative test for chaos.
      If the Liapunov growth rate is bigger than 1, then initial errors, however
small, increase exponentially. This is the butterfly effect in action, so
      such a system is chaotic. However, if the Liapunov growth rate is less than
      1, the errors die away, and the system is not chaotic. That's wonderful if
      you know you have a deterministic system to begin with, and if you can make
      the extremely accurate observations required to calculate the growth rate
      from experiments. It's much less useful if you don't, or can't. Nonetheless,
      we see that deterministic systems behave differently from random ones, and
      that certain features of that difference lead to quantitative measures of
      the degree of chaos that is present. The Liapunov exponent is just one diagnostic
      of chaos. Another is the fractal dimension of
      the attractor
      (see Chapter 11). A steady state attractor has fractal dimension 0, a periodic
      cycle has fractal dimension 1, a torus attractor formed by superposing n
      independent periodic motions has fractal dimension n. These are
      all whole numbers. So if you can measure the fractal dimension of a system's
      attractor, and you get numbers like 1.356 or 2.952, then that's an extra
      piece of evidence for chaos. How can we measure such a fractal dimension?
      There are two main steps. One is to reconstruct the qualitative form of the
      attractor using the Ruelle-Takens method of Chapter 9 or one of the many
      variants that have appeared since. The other is to perform a computer analysis
      on the reconstructed attractor to calculate its fractal dimension. There
      are many methods for doing this, the simplest being a 'box-counting' technique
      that works out what proportion of different-sized boxes is occupied by the
      attractor. As the size of the box decreases, this proportion varies in a
      manner that is determined by the fractal dimension. The mathematics of
      phase space reconstruction then guarantees
      that the fractal dimension of the reconstructed attractor is the same as
      that of the original one - provided there is an original one, which means
      that your system must be describable by a deterministic dynamical system
      to begin with. 
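      As a rough sketch of those two steps (everything here - the sine-wave
      'measurements', the delay, the box sizes - is my own choice for
      illustration, not the actual procedure of Chapter 9), a periodic signal
      reconstructs as a closed loop, and box counting over shrinking boxes
      gives a dimension close to 1, just as the text says a periodic cycle
      should:

        import math

        # A periodic time series stands in for experimental measurements.
        series = [math.sin(0.05 * t) for t in range(40000)]

        # Step 1: delay embedding - plot each value against a slightly later one.
        lag = 30
        points = [(series[i], series[i + lag]) for i in range(len(series) - lag)]

        # Step 2: box counting - how many boxes of a given size does the set occupy?
        def boxes(pts, eps):
            return len({(math.floor(px / eps), math.floor(py / eps)) for px, py in pts})

        sizes = [0.1, 0.05, 0.025, 0.0125]
        data = [(math.log(1 / e), math.log(boxes(points, e))) for e in sizes]

        # The slope of log(count) against log(1/size) estimates the fractal dimension.
        mx = sum(x for x, _ in data) / len(data)
        my = sum(y for _, y in data) / len(data)
        slope = (sum((x - mx) * (y - my) for x, y in data)
                 / sum((x - mx) ** 2 for x, _ in data))
        print("estimated fractal dimension:", round(slope, 2))   # close to 1 for a cycle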
      
      
      It is absolutely crucial not to be naive about this process.
      You can take any series of measurements whatsoever - say the prices in your
      last twelve months' shopping lists - push them through the Ruelle-Takens
procedure, and count boxes. You will get some number - maybe 5.277, say. This
      does not entitle you to assume that your shopping list is chaotic and lives
on a 5.277-dimensional strange attractor. Why not? Firstly, because there
      is no good reason to assume that your shopping list comes from a deterministic
      dynamical system. Secondly, because even if it did, your shopping list data
      contains too little information to give any confidence in a dimension that
      big. The bigger the dimension of an attractor, the more data points you need
      to pin the structure of the attractor down. In fact any fractal dimension
      over about 4 should be viewed with great suspicion. 
      
      
Questions & answers on everyday scientific phenomena
Welcome to our collection of scientific mysteries from New Scientist's weekly page, The Last Word. Start here with the latest answers...

Random thought

Question: How can there be such a concept as "random"? Surely everything has a structure if you look deeply enough. What does random actually mean?

Answer: Random was originally a soldier's term meaning "forcefully", as opposed to "carefully" (Oxford Dictionary of English Etymology, OUP, 1966). Therefore, if something was thrown "at random" the result was unpredictable. Although the Oxford English Dictionary's 1989 definition focuses on it meaning "haphazard" or "aimless", the word random is often used to mean "unpredictable", which is the outcome of a haphazard or aimless action.

But is anything really unpredictable? There was a period in the 19th century when people began to realise that everything was governed by some relatively simple universal laws. Consequently, they believed that the future could be mapped out if we knew the present state of everything and all the universal laws. We have now learned that this may be true philosophically, but chaos theory suggests that a minute error in our knowledge of the present state can make a huge difference to our predictions. If we knew enough about everything concerned we could work out which balls would come out of the lottery machine first. But if any of our input (size of balls, elasticity of plastic, shape of drum, and so on) is inaccurate, even infinitesimally, the answer will be wrong. So although in principle nearly everything is predictable, in practice very little is. In snooker, even an expert player can't be sure exactly where all the balls are going. Could anyone have predicted that I would sit down at 8.16pm and write this letter? The debate continues over whether people have free will or if our behaviour is determined by our genes and experience.

If we use a device such as a roulette wheel to create an unpredictable series of numbers, then it is possible to compute certain statistics from these numbers - such as the average of 1000 successive numbers - and although the numbers themselves are unpredictable, the values of these statistics will approximate to known values. Mathematicians have extended the term random to mean any sequence of numbers having these statistics. This use of the word is totally divorced from its earlier meanings relating to unpredictability. Hence it is said that the digits of pi are random, though no one could run a fair lottery where the winning numbers are, say, the 1000th to 1005th digits of pi.

Doug Fenna, Middlesbrough, Cleveland

Answer: The concept of randomness is best modelled in a linear series of numbers. The sequence is truly random when there is no possible way of predicting the next number or numbers from the preceding values, and it is also not possible, retrospectively, to explain how the sequence arose. In other words, the sequence was not generated from a mathematical rule. Your questioner is right to speculate that randomness is not common. Most natural processes and structures are predictable, but they are often so complex that it is impractical to do the necessary calculations. One truly random process is radioactive decay. There is no way to predict which atom of an unstable isotope will decay next and, within a range around the mean value, the timing of decay events is random. This latter fact gives us one way of generating random number sequences. There is a popular misconception that "random" means "thoroughly mixed". This often puzzles newcomers to randomisation in statistical work, where it is sometimes apparent that there are clumps of high, low and intermediate values in a random number sequence.

John Etherington, Haverfordwest, Pembrokeshire

Answer: Loosely stated, a random process is one that can only theoretically be described using a statistical model. This is exactly the case on ultramicroscopic scales, where the weirdness of quantum mechanics rules over common-sense notions of energy, velocity and pretty much everything else. For example, if the momentum of a particle is partially known, some element of uncertainty creeps into its position. This is not a defect of our measuring instruments, but a property of the particle. Some element of randomness appears in every particle's behaviour. This is most obvious with radioactive decay - no analysis will tell you when a nucleus is going to decay, it can only give you a probability. The extent to which this affects our thought processes is not fully known, but it is likely to be minimal. The axons and dendrites in the brain are not small enough to be susceptible to these weird effects, even if their constituent particles (and everything else) are.

Nick Davison, Watford, Hertfordshire

Answer: There certainly are meaningful concepts of randomness. First, we now know about sources of physical randomness that govern the course of events but are outside the control of the information available within systems. These include quantum uncertainty and chaotic systems. Second, you can get two intrinsically non-random systems that have poor correlation. For instance, there is poor correlation between the decimal expansions of, say, the square roots of two and three. We define "random" according to our mathematical or conceptual requirements. One definition is that a system is random if knowing any part of it neither improves nor spoils one's chances of guessing any other part: no matter how many throws of a coin one sees, the chance of predicting any other throw remains 50 per cent. This concept is related to randomness being defined by mathematical tests for correlation.

Jon Richfield, Somerset West, South Africa

Answer: May mean it whatever, certainly exists it. Knows who why?

Patrick Forsyth, Maldon, Essex
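      One of the answers above describes the statistical face of randomness:
      individually unpredictable numbers whose aggregate statistics are
      predictable. A small sketch (a 0-36 roulette wheel simulated with
      Python's random module; the set-up is my own): no single spin can be
      predicted, yet the average of 1000 spins reliably comes out close to the
      theoretical value of 18.

        import random

        rng = random.Random()              # unseeded, so each run's spins differ
        spins = [rng.randint(0, 36) for _ in range(1000)]

        print("first five spins:", spins[:5])               # different every run
        print("average of 1000 spins:", sum(spins) / 1000)  # reliably close to 18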
	