
Sources of Variation in Biology Part 2: Emergence


This is the second installment in a three-part subseries in our "Bringing the Real World to Genesis" series, curated by Jan M. Long. These three articles, written by Mailen Kootsey, address the sources of variation in biology.  Previous "Bringing the Real World to Genesis" articles can be found here.

In Part 1 of this series, I summarized two sources of variation in biology: randomness and chaos. Those mechanisms are independent of each other and both can operate in systems of any level of complexity.

Emergence, on the other hand, is a characteristic of complex systems and is the most challenging of our three mechanisms to explain. Harold Morowitz begins his popular book The Emergence of Everything [1] with a quote from Ecclesiastes: “The thing that hath been is that which shall be; and that which is done shall be done; and there is no new thing under the sun.” In 2010, approximately 1.5 million peer-reviewed scientific papers were published and the number continues to grow at about 2.5% per year [2]. It seems that the writer of Ecclesiastes had a narrow view of the natural world! How does all this variety originate?


In order to appreciate the idea of emergence and its relationship to complexity, some background in the scientific method is necessary. The scientific era is usually considered to have begun with Galileo, who observed falling bodies and derived laws of physics to describe what he saw. The repeated cycle of observation or experimentation followed by hypothesis or theory is the heart of the modern scientific method, applicable in all scientific fields.

As the scientific method was applied by many scientists in diverse laboratories, similarities developed in the techniques the scientists used to build their hypotheses and theories. A natural way to understand a complex system is to break the system down into subunits, along with assumed connections between the subunits. A complex animal can be divided up into body systems or cells, a complex chemical system can be divided into reaction steps, etc. The names “reduction” and “analysis” have been given to this method of “divide and conquer” and it continues to be used today with great success.

At some point, the question arose: How do we know that our hypothetical internal structure of subunits and connections is correct? The hypothetical internal structure might be functional in nature and not a visible division, for example. The solution to this dilemma is to reverse the process: start with the proposed internal structure to see what behavior it generates. This is called “synthesis”, in contrast with “analysis”. For a biological system, a physical synthesis is not very likely with today’s technologies, so for now it must be a synthesis with mathematical models.

The task is to explore the range of behaviors of the synthesized system. Because the system is complex, it has many internal (or state) variables, and the mathematics requires that numerical values be supplied for two parameters for each variable [3]. To explore the behavior of the system, a set of values is chosen for the parameters and the model equations are solved for that set, by computer or some other method, to see what behavior results. This procedure is then repeated many times with different sets of parameter values to map out the range of behaviors.
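The procedure can be sketched in a few lines of code. This is a toy illustration only: the two-variable model and all parameter values below are hypothetical, not any specific biological system from the article. Note that each variable needs two numbers, an initial value and a rate, exactly as footnote [3] describes.

```python
# Illustrative sketch of "synthesis by mathematical model": solve the
# equations for one parameter set, then repeat for many sets.
# The model here is a made-up two-variable exchange system.

def simulate(k1, k2, x0, y0, dt=0.01, steps=1000):
    """Integrate dx/dt = -k1*x + k2*y and dy/dt = k1*x - k2*y
    with simple explicit Euler steps; return the final state."""
    x, y = x0, y0
    for _ in range(steps):
        dx = (-k1 * x + k2 * y) * dt
        dy = (k1 * x - k2 * y) * dt
        x, y = x + dx, y + dy
    return x, y

# Repeat the solution over a small grid of parameter sets
# to explore the range of behaviors the model can produce.
behaviors = {}
for k1 in (0.5, 1.0, 2.0):
    for k2 in (0.5, 1.0, 2.0):
        behaviors[(k1, k2)] = simulate(k1, k2, x0=1.0, y0=0.0)
```

Even this trivial sweep already hints at the problem discussed next: the grid above tests only 3 × 3 = 9 combinations for two parameters, and the count multiplies with every parameter added.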

How do we pick values for all the parameters? What about exploring ranges of possible values for all the parameters, trying all combinations? That way we would know all the possible behaviors of the system!

Take an example system model containing 50 variables, requiring 100 parameters. (This is a simple model by biological standards.) We will try 10 different values for each parameter. That is actually too few, but it will serve to illustrate my point. Now, how many different combinations of parameter values will we have to test? 10 for parameter 1, times 10 for parameter 2, times 10 for parameter 3, and so on, ending with times 10 for parameter 100. The total number is 10 to the 100th power. To give you an idea of what that number means, consider this: if every atom in the known universe were a computer, and each of these computers could find a solution to the system equations in one millionth of a second, this unimaginable machine would still need millions of years of continuous work to test all the possible combinations. And that was for a simple model! Clearly, exhaustive testing of all possibilities is not a practical method to explore the behavior of our synthesized system.
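The arithmetic behind this thought experiment can be checked directly. One assumed figure below is not from the article: roughly 10**80 atoms in the known universe, a commonly quoted order-of-magnitude estimate.

```python
# Back-of-the-envelope check of the parameter-sweep combinatorics.
# Assumption (not from the article): ~10**80 atoms in the known universe.
values_per_parameter = 10
n_parameters = 100
combinations = values_per_parameter ** n_parameters   # 10**100 parameter sets

atoms_as_computers = 10 ** 80
solutions_per_second = atoms_as_computers * 10 ** 6   # one per microsecond each
seconds_needed = combinations // solutions_per_second # 10**14 seconds
years_needed = seconds_needed / (3.156 * 10 ** 7)     # seconds per year

# Even with every atom working as a computer, the sweep takes on the
# order of millions of years -- hopelessly impractical.
```

The exact estimate moves with the assumptions, but the conclusion does not: exhaustive testing is out of reach for even a modest model.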


There are some approaches to get around this problem of parameter values, ways of exploring the 100-dimensional space, so to speak, without visiting every point in it. These approaches are too technical for this discussion, but the conclusion is clear: in the process of synthesis (building a complex system out of simpler components), the range of possible behaviors is essentially infinite, even for a system of modest complexity (50 internal variables). Yet in the real world we see a large but not infinite variety in living systems. For example, the same molecular types appear over and over again in different living cell types. The sodium-potassium ATPase, a molecular pump that uses the ATP molecule as its source of energy, is a prime example, appearing with small variations in virtually every living cell type. Other examples include known cases of molecular “self-organization” in chemistry and biochemistry.

When phospholipid molecules (“clothespin” shapes in the diagram below) are put into water, for example, they spontaneously organize into bilayers, like those forming the walls of cells, and micelles, closed spheres whose walls are a single layer of lipid molecules.

The basic reason behind this spontaneous self-organization is that the polar head of the phospholipid molecule “loves” water molecules (it is hydrophilic) while the tail “hates” them (it is hydrophobic). Each of these spontaneous structures keeps the tails away from water molecules while allowing the heads direct contact.

Scientists have concluded that there are “laws” of complex systems that select from the infinite variety of possible outcomes. In other words, if we put phospholipid molecules into a water solution and repeat the experiment many times, we will see that certain organizations of the molecules appear over and over and we do not see something totally different in each experiment – even though equations describing the solution tell us that a virtually infinite variety of results could appear.

We have finally arrived at the definition of “emergence”: Laws, yet to be understood, select from the infinite possibilities when a complex system is assembled. These laws of complex systems “emerge” as a natural part of the universe we live in to produce order and repeatability, just as there are fundamental laws of gravity, electricity and magnetism, and nuclear forces. While the complex system depends on the fundamental laws, the emergent laws cannot be derived directly from that understanding because of the vast number of possible outcomes.

Emergence is both good news and bad news for science. The good news is that even with relatively few basic subunits (say small molecules, for example), the number and variety of different possible systems that can be constructed with these subunits is, for all practical purposes, infinite. No wonder the natural world can be so complex even when based on a small number of elementary particles and interactions. Scientists are not at all in danger of “discovering” themselves out of a job!

The bad news of emergence comes when we try to understand the complex systems that actually exist. Even if we had a complete and perfect understanding of the basic laws of matter and a computer as big and fast as we wished, we could not work from simple to complex and predict what systems will exist and what they are going to do in the future, as Laplace hoped. Somehow, by emergent laws that we do not presently understand, the existing systems have been selected from the infinite possibilities that theoretically could exist. Scientists such as Stuart Kauffman are working hard to understand the emergent laws of complex systems, but the effort is only beginning.

Emergence is a characteristic of complex systems. Emergent systems may contain random sub-processes and also may show characteristics of chaos, but neither of these qualities is necessary.


The natural universe has been organized by humans into a hierarchy of size and complexity, beginning with sub-nuclear particles and going up through atoms, to molecules, to cells, to organs, to organ systems, to individuals, to societies, etc. (the biological branch of the hierarchy). These levels are separated by steps in complexity, with subunits below and more complex systems above each step. Emergence applies at each one of these steps. Each step increase in complexity has attracted scientific studies with different terminologies, experimental methods, and theories, resulting in the definition of a scientific field at each step. For example, the step of molecules forming reactions has given rise to the field of “biochemistry,” and the step of cells cooperating to form organs to “physiology.” Each field of science has work to do in identifying its emergent laws, although there will likely be common themes to the laws at all steps. Morowitz, in his book referenced earlier, identifies 28 steps from the origin of the universe to all that exists today.

In the last installment of this series, I will write about the significance of emergence for theories of life and life’s origins.

Mailen Kootsey has a BA from Pacific Union College and a PhD from Brown University, both in Physics. He had a 41-year career in university education, including faculty and administrative positions at Duke University, Andrews University, and Loma Linda University. His expertise is multidisciplinary, having had appointments in departments of Physics, Physiology, Computer Science, Biomedical Engineering, and Biology. Dr. Kootsey published research on ion transport and electrical activity in heart muscle, applying techniques of mathematical modeling and computer simulation. He is now COO of Alexandros, Inc., an international business company.

Art: Josh Keyes, Migratory Soul, acrylic on panel, 2009


1. Morowitz, Harold J., The Emergence of Everything, Oxford University Press, New York, 2002.


3. Variables in models of real physical systems are generally governed by differential equations based on fundamental laws. For such equations, each model variable must have two numbers specified in order to solve the equations, typically an initial value of the variable and a “rate.”
