Universal Evolution, Probability and Life's Origins part 2

Mike's Notes

I came across the two-part essay Universal Evolution, Probability and Life's Origins, by James Miller. It is one of the best explanations I have read of how those three aspects of nature come together. I have copied his essays in full to the blog and hope to record a video interview with him at some point in the future.

The resource references below cover both essays.

Pipi's core architecture and properties were built on many of the same conclusions about nature.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

17/05/2025

Universal Evolution, Probability and Life's Origins

Part 2. Dissipative structures

By: James Miller
Substack: James Miller's observations: 18/05/2023.

In evolving complex structures, reactions are fueled by self-generated energy or by energy from an external source. Some of this energy is channeled into the enlargement or complexification of given ordered structures. Thus, while the entropy created grows, the complex structures aggregate and extend themselves. Deacon goes to great pains to explain this and to justify his description of it, which results in a long and complex book, but he gives a summary account in these words (Deacon, ibid., p. 235.):

“In these processes, we glimpse a backdoor to the second law of thermodynamics that allows—even promotes—the spontaneous increase of order, correlated regularities, and complex partitioning of dynamical features under certain conditions. Ironically, these conditions also inevitably include a reliable and relentless increase in entropy. In many non-living processes, especially when subject to a steady influx of energy or materials, what are often called self-organizing features may become manifest. This constant perturbation of the thermodynamic arrow of change is in fact critical, because when the constant throughput of material and/or energy ceases, as it eventually must, the maintenance of this orderliness breaks down as well. In terms of constraint, this means that so long as extrinsic constraints are continually imposed, creating a contragrade dynamic to the spontaneous orthograde dissipation of intrinsic constraints, new forms of intrinsic constraint can emerge and even amplify.

“. . . Here we see that living organisms exemplify dissipative structures that survive as long as their self-organizing processes can overcome the second law of thermodynamics. Individual death is the result. We can think about these phenomena by imagining spontaneous reactions occurring in an environment at thermodynamic equilibrium. These reactions generate instabilities in which it is possible to go further away from equilibrium or fall back to equilibrium. But as we think of this, we must remember that some chemical reactions create products that are more stable than the reactants; i.e., it takes more energy to break the bonds that have formed than it took to create them. This allows the creation of stable chemical compounds in a far-from-equilibrium condition. In some circumstances, this stability may go further away from equilibrium. Prigogine says there is a “bifurcation point” indicating the possibility of a new step in the direction already taken, or a falling back toward equilibrium (ibid. p. 66). There is no “determinate process” going on in this; it is all a function of probabilities that emerge and disappear in the chaotic flux of matter in motion. Organized structures that sustain themselves, however temporary or permanent, are denoted “dissipative structures” by Prigogine, and he argues that chemistry allows for the existence of “(1) far-from-equilibrium situations defined by a critical distance; and (2) catalytic steps, such as the production of intermediate compound Y from compound X, together with the production of X from Y.” (ibid, p. 66)

The need for a critical distance depends on sufficient local concentrations of the reactants. Within the critical distance between two or more atoms or molecules, the formation of a chemical bond becomes increasingly probable, provided that their chemical affinities are conducive to this kind of interaction; beyond the critical distance, the question of bond formation does not arise. Catalysis also matters: a catalytic surface lowers the activation energy necessary for the formation of a bond, making bonds easier to form. Many “origin of life” experiments take this into account by providing catalytic elements (usually metals) in the initial conditions of the procedure.
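The effect of lowering the activation energy can be made concrete with the Arrhenius equation, k = A·exp(−Ea/RT). The sketch below uses illustrative values (the activation energies and pre-exponential factor are assumptions for the example, not measurements of any real reaction):

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def arrhenius_rate(ea_j_per_mol: float, temp_k: float, prefactor: float = 1e13) -> float:
    """Arrhenius rate constant: k = A * exp(-Ea / (R*T))."""
    return prefactor * math.exp(-ea_j_per_mol / (R * temp_k))

# Illustrative (not measured) activation energies for the same reaction at 25 °C:
uncatalyzed = arrhenius_rate(ea_j_per_mol=100_000, temp_k=298)  # 100 kJ/mol
catalyzed = arrhenius_rate(ea_j_per_mol=60_000, temp_k=298)     # catalyst lowers Ea to 60 kJ/mol

# A 40 kJ/mol reduction in Ea gives exp(40000 / (8.314 * 298)),
# roughly a ten-million-fold speedup at room temperature.
speedup = catalyzed / uncatalyzed
```

The exponential form is why even a modest reduction in activation energy makes otherwise negligible reactions probable, which is the rationale for including catalytic metals in origin-of-life experiments.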

Deacon discusses the “bifurcation point” in Incomplete Nature (ibid., p. 249):

“But there is an upper limit to this diffusion process that has a critical effect on the overall global dynamic. This is a threshold at which dynamical discontinuities occur. These are often called bifurcation points, on either side of which distinctively different dynamical tendencies tend to develop. At this threshold, local variations can produce highly irregular chaotic behaviors, as quite distinct dynamic tendencies tend to form near one another and interact antagonistically. But exceeding this threshold, radical changes in orthograde dynamics can take place.”

The diffusion process to which he refers occurs when a system is approaching, or receding from, a highly constrained far-from-equilibrium configuration; there is an inflection point in the gradient which marks a change in the vulnerability of the system to outside impingement and disruption. For example, the greater the input of heat into the system, the more intense the molecular activity, and the more vulnerable the system is to disruption (imagine the denaturing of proteins).
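Prigogine's bifurcation points arise in chemical kinetics, but the qualitative idea (a control parameter crossing a threshold at which the long-run behavior splits into distinct branches, and eventually into chaos) can be sketched with the logistic map. This is a standard textbook stand-in for bifurcation, not Prigogine's or Deacon's own system:

```python
def logistic_orbit(r: float, x0: float = 0.5, transient: int = 500, keep: int = 64) -> set:
    """Iterate x -> r*x*(1-x), discard the transient, and return the set
    of distinct (rounded) values the system keeps visiting."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return seen

# Below the first bifurcation (r < 3) the system settles on a single value;
# just past it, two values alternate; past roughly r = 3.57 the behavior is chaotic.
single = logistic_orbit(2.8)  # one fixed point
double = logistic_orbit(3.2)  # period-2 cycle: two branches after the bifurcation
chaos = logistic_orbit(3.9)   # many distinct values, highly irregular
```

The point of the sketch is only the threshold behavior: on one side of the bifurcation there is a single stable tendency, on the other side qualitatively different dynamics, matching Deacon's description of "dynamical discontinuities."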

But the real significance of “dissipative structures” is that they increase entropy when the “final” balance sheet of the molecular activity is drawn up. In saying this we recognize that there is no “final” balance sheet so long as regenerating processes continue, but there are cycles of growth and decay which create repetitive cyclic processes of structural dissipation. On one side, the dissipative structures absorb free energy from the environment. (In the case of earth’s biomass, this is primarily energy from the sun, but we cannot forget that some microorganisms absorb non-solar energy stored in minerals.) But in addition to absorbing energy from the environment, living creatures emit energy through their metabolic activity, radiated outward as heat. On balance, the heat radiated away from the dissipative structures exceeds the heat absorbed, so the net production of entropy is positive. The existence of life, however glorious it might be, actually does its part to hasten the heat death of the universe.

We must presuppose the interaction of molecules at the beginning of the inquiry into the origin of life without first naming and categorizing these particles, because whatever they might have been is as yet unknown. The initial approach should be based on a consideration of processes of emergence. This is not a departure from materialism, but a necessity when discussing a material change that took place billions of years ago, which science is incapable of theoretically reconstructing at present. Of course, every process has material components, and in nature, the processes and the components are not independent but exist through time as inseparable aspects of the integral synthesis of universal phenomena.

But what does the term “emergence” mean? The theme of emergence has long been a focus of discussion among philosophers of science. C. Lloyd Morgan (1852–1936) was one of the founders of this course of analysis. In his first Gifford lecture (https://www.giffordlectures.org/books/emergent-evolution/lecture-i-emergence), he defined emergence as the appearance of something new that could not have been predicted by knowing the pre-existing entities that gave rise to it, and he gave an example:

“When carbon having certain properties combines with sulfur having other properties there is formed not a mere mixture but a new compound some of the properties of which are quite different from those of either component. Now the weight of the compound is an additive resultant of the sum of the weights of the components; and this could be predicted before any molecule of carbon-bisulfide had been formed. One could say in advance that if carbon and sulfur shall be found to combine in any ascertainable proportions there will be such and such weight as resultant. But sundry other properties are constitutive emergents which (it is claimed) could not be foretold in advance of any instance of such combination. Of course, when one has learnt what emerges in this particular instance one may predict what will emerge in that like instance under similar circumstances. One has learnt something of the natural plan of emergent evolution.”

The example taken from Morgan is from the science of chemistry, but the same principle can be applied, for example, in biology. C.D. Broad (1887–1971), another philosopher of science, made the following argument about nutrition https://plato.stanford.edu/entries/broad/#Eme:

“Suppose a certain biological feature, e.g. the capacity for nutrition, is an emergent property. This means that the property is not a mere result of an immensely high degree of organisational complexity at the chemical level. The property is, to be sure, nomically dependent on the chemical level, but it is not a mere resultant of the properties, relations, and laws operative at that level. If nutrition were a mere resultant property, it would be a reducible property; it would be a property that theoretically can be inferred from features at the chemical level (i.e. the properties, relations, and laws which characterise the constituents in isolation or in other wholes than the one in question).”

Biology teaches us that life evolves; living creatures are part of an evolutionary process in which existing organisms extend, regenerate and reproduce themselves through adaptations they have accumulated in the course of evolution. But what kinds of matter in motion, chemical reactions, and cycles were possible or likely at the period of the origin of life 3.5 or 4 billion years ago? The capacity to systematically generate energy to fuel ongoing chemical cycles is one of the important questions that has been addressed. It is recognized that at the very beginning of life, there was no photosynthesis—which could only emerge as the result of a long period of biological evolution—so the first self-assembling, self-perpetuating chemical cycles had to rely on the absorption of energy from sources that were a natural part of the environment. The energy to be used by the first repeatable chemical cycles could have come from the sun, lightning, or volcanic activity, and volcanic activity includes the release of subterranean gases through vents on the sea floor.

Various hypotheses have been developed around this assumption, including one that indicates that life originated in the pores of sea-bottom alkaline hydrothermal vents (initially proposed by Michael Russell of NASA’s Jet Propulsion Laboratory and elaborated by Nick Lane in his book The Vital Question). The alkalinity of the heated water welling up from the subsurface, as it mixed with the acidic seawater, established the conditions for the continuous flow of reactive chemical and heat energy. Other scientists have been examining microbial life forms that exist deep within the rock structures in the earth’s interior, for example at the Carnegie Institution’s Deep Carbon Observatory, hoping to expand knowledge about the variety of modes of existence of living creatures. See Robert Hazen’s book, The Story of Earth. Although there is no specific evidence as yet to confirm these hypotheses, scientists agree that life began by means of a transition from non-living chemistry and arrived, perhaps through many intermediate stages, at a stage whereby evolution through natural selection could be definitively established.

As Deacon has argued (ibid., p. 430),

“So, the first organism wasn’t a product of natural selection. The constellation of processes that we identify with biological evolution ultimately emerged from a kind of proto-evolution, supported by a kind of protolife, that ultimately must trace back to the spontaneous emergence of the first molecular systems capable of some minimal form of evolutionary dynamic.”

But we need to recognize that the existence of any self-organizing, complex, self-regenerating system of chemical reactions emerging from the primordial chaos would require the generation of constraints (evolving limitations that prevent the emergence of disorganizing processes): developing structures that would protect the cohesion of the generated molecular pattern and push forward its cyclic regeneration. Organized systems do not suddenly appear as if by magic. For the emergence of any given dynamic system of relationships, or cyclical processes, there is a point in time when it does not yet exist in an organized form, but only in potential. A potential is “real” in the sense that, while it has not yet emerged as a definite material process or entity, there does exist a probability in the given configuration of chemical relationships that it can emerge to become part of the ongoing processes in a given locale.

We recognize that in any natural evolutionary process, there are transitions that lead from lower to higher levels of any existing formation, or from preparatory stages to advanced stages of development. There is often discontinuity in these transitions. Higher levels contain features that cannot be predicted by examining the antecedent stages. The potential that is realized in a higher stage is not found by examining the lower stage, but that does not mean that the unrealized potential, existing within the lower stage, is illusory or fictitious. This is due to the probabilistic nature of natural evolutionary changes.

This process of the transition of any potential into full existence is probabilistic, that is, it is an outcome that becomes more-or-less likely given the existing molecular circumstances. Unrealized potentials are therefore undetermined. A wide range of different outcomes are possible, and some are more probable than others. This or that outcome is not solely reliant on the momentum or dynamic of a particular process, taken in isolation, but will vary depending on the changing environment which that process inhabits, and forms a part of. Molecular relationships and interactions are constantly emerging and dying away in uncountable different circumstances. The emergence of organized chemical systems out of the ever-changing world of pre-existing possibilities not only must conform to these pre-existing physical constraints, but also must develop new, more specific constraints which then become critical features of the process concerning its structural stability, as well as in relation to the environment that gave birth to it and continues to surround it.

Before an entity becomes real in the world, there must be some potential. And every potential that emerges corresponds to conditions that both facilitate its transition into full existence and, at the same time, prepare the constraints that define its limits. But whatever has not yet happened only comes about as a result of prior conditions, which themselves are constantly changing. Thus, probability becomes an essential condition of all processes of change in nature. The study of transitions from what is probable to what is occurring, or what has previously occurred, brings us to the question of chance and necessity. This contraposition is often interpreted as an absolute contrast. In reality, however, accident and necessity cannot be contrasted with one another as an “either/or” question. As Friedrich Engels argued in 1878 (MECW, vol. 25, p. 498):

“Another opposition in which metaphysics is entangled is that of chance and necessity. What can be more sharply contradictory than these two thought determinations? How is it possible that both are identical, that the accidental is necessary, and the necessary is also accidental? Common sense, and with it the majority of natural scientists, treats necessity and chance as determinations that exclude each other once for all. A thing, a circumstance, or a process is either accidental or necessary, but not both.

“… And then it is declared that the necessary is the sole thing of scientific interest and that the accidental is a matter of indifference to science. That is to say: what can be brought under laws, hence what one knows, is interesting; what cannot be brought under laws, and therefore what one does not know, is a matter of indifference and can be ignored.”

Engels goes on to explain the problems encountered in trying to argue that every minuscule variation in nature is absolutely predetermined, such that there is no such thing as the “accidental.” This view renders science itself impossible, because there are no methods by which every natural variation can be predicted. The elements that emerge from a defined starting point cannot be predicted, because what is possible depends not only on what potentials are generated within the molecular chaos, but also on the constantly changing environment in which those potentials arise.

Engels then refers to the philosophy of Hegel, which provides an answer to this rigid dichotomy between accident and determinism (Logik, II, Book III, 2: "Die Wirklichkeit"; MECW, vol. 25, p. 500):

“Hegel came forward with the hitherto quite unheard-of propositions that the accidental has a cause because it is accidental, and just as much also has no cause because it is accidental; that the accidental is necessary, that necessity determines itself as chance, and, on the other hand, this chance is rather absolute necessity.”

While Hegel’s form of posing the nature of this contradiction might be difficult to interpret in the common language of today, it is his way of characterizing the relationships within nature that provoked the awakening consciousness of the young Marx and Engels back in the 1840s. Engels continues:

“… Darwin, in his epoch-making work, set out from the widest existing basis of chance. Precisely the infinite, accidental differences between individuals within a single species, differences which become accentuated until they break through the character of the species, and whose immediate causes even can be demonstrated only in extremely few cases (the material on chance occurrences accumulated in the meantime has suppressed and shattered the old idea of necessity), compelled him to question the previous basis of all regularity in biology, viz., the concept of species in its previous metaphysical rigidity and unchangeability. Without the concept of species, however, all science was nothing.”

In other words, the changes from generation to generation within a population, as well as the genome-wide changes brought about by gene exchange between closely related populations, cannot be predicted in each instance, but over time these processes produce a new species that becomes reproductively isolated from its progenitor species. A barrier thus forms between the related species, defined by certain limitations on the possibility of producing viable offspring. Thus nature, through natural selection, has produced and continues to produce unique species which are thought to be reproductively isolated entities, even though from the standpoint of anatomy and physiology closely related species may be very nearly identical. Darwin, writing in the 1860s, knew nothing of the gene, nor of its chemical composition or structure, nor even of the genetic factors that the Austrian monk Gregor Mendel was analyzing in the same historical period.

The speciation process is ongoing, so we must keep in mind that the current levels of classification (Kingdom, Phylum, Class, Order, Family, Genus, Species) are only an approximation and do not provide for the intermediate forms that are still in the process of differentiation, or that straddle the boundary line between one species and another. As Engels said on this point:

“Hard and fast lines are incompatible with the theory of evolution. Even the borderline between vertebrates and invertebrates is now no longer rigid, just as little is that between fishes and amphibians, while that between birds and reptiles dwindles more and more every day. Between Compsognathus and Archaeopteryx only a few intermediate links are wanting, and birds’ beaks with teeth crop up in both hemispheres. “Either-or” becomes more and more inadequate. Among lower animals, the concept of the individual cannot be established at all sharply. Not only as to whether a particular animal is an individual or a colony, but also where in development one individual ceases and the other begins (nurses).

“For a stage in the outlook on nature where all differences become merged in intermediate steps, and all opposites pass into one another through intermediate links, the old metaphysical method of thought no longer suffices. Dialectics, which likewise knows no hard and fast lines, no unconditional, universally valid “either-or” and which bridges the fixed metaphysical differences, and besides “either-or” recognises also in the right place “both this-and that” and reconciles the opposites, is the sole method of thought appropriate in the highest degree to this stage. Of course, for everyday use, for the small change of science, the metaphysical categories retain their validity.”

As with biological evolution, the same contradiction involving chance and necessity comes into play when considering the transformation of inorganic chemical reactions into self-perpetuating, self-renewing cycles of reactions. As in the biology of living systems, so also in those prebiotic chemical cycles that first showed the potential to become self-organizing systems, it was necessary (though we can say so only in hindsight) to overcome the chaotic thermodynamic obstacles that constantly favor dissolution and disorganization. The process was spontaneous and must have succeeded only after millions of near misses. So, was the process that led to the origin of life on the planet foreordained or necessary in any sense? This is like the story that, if 100 monkeys make random keystrokes on 100 typewriters, given sufficient time, the complete works of Shakespeare will eventually be produced. Maybe, maybe not. How much time is left in the universe? As for life on the planet, how much time was available? Only about 10 billion years.
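The monkeys-and-typewriters story can be given rough numbers. Assuming a 27-key typewriter (26 letters plus a space) struck uniformly at random, an assumption made purely for illustration, the expected number of keystrokes needed grows exponentially with the length of the target phrase:

```python
# Rough arithmetic for the "monkeys and typewriters" story.
# The chance of typing a given N-character phrase at a given position is 27**-N,
# so on the order of 27**N keystrokes are needed before one expected hit.

KEYS = 27  # 26 letters + space, struck uniformly at random (illustrative assumption)

def expected_keystrokes(phrase: str) -> int:
    return KEYS ** len(phrase)

short = expected_keystrokes("to be")               # 5 chars: 27**5, about 1.4e7 - easy
line = expected_keystrokes("to be or not to be")   # 18 chars: 27**18, about 5.8e25

# Even 100 monkeys typing 10 keys per second for 10 billion years manage only:
keystrokes_available = 100 * 10 * 10e9 * 365.25 * 24 * 3600  # about 3.2e20

# That is roughly five orders of magnitude short of one expected hit on a single
# 18-character line, let alone the complete works of Shakespeare.
```

The exponential blow-up is the whole point: brute chance alone, without the constraint-generating and cycle-stabilizing processes discussed above, does not plausibly account for the emergence of complex order in the time available.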

As we study biology, we come to recognize that natural selection demonstrates the potential for new adaptive traits to improve the adaptation of populations to their environment, while at the same time other populations accumulate maladaptive changes which widen the gap between their capacities and the demands of their environmental conditions. The road to the emergence of a new species is also the road to the extinction of the same species, having grown old.

Given that scientists do not have any concrete evidence of the transitional forms that preceded the first life form, then we are left with calculated guesswork. Jamie A. Davies, in his book Life Unfolding: How the Human Body Creates Itself, begins his approach to the explanation of life’s beginnings by introducing the term “adaptive self-organization” to indicate the range of physical possibilities that embrace inorganic chemistry as well as the evolution of life itself and the physiology of plants and animals. On page 5 he refers to this term:

“‘Adaptive self-organization’ is a description grounded in the components and looks upwards, describing how the application of simple rules to these components can result in their collectively doing something large-scale, clever, and subtle. The way in which adaptive self-organization allows non-living molecules to produce a living cell, and allows cells with very limited individual abilities to produce a very able multicellular body, will form a theme that runs through this book because it is the core of development. Adaptive self-organization and emergence go far beyond biology, and some very readable books on its wider implications are listed under ‘Further reading’.”

Deacon devotes a significant portion of his book to the question of how complex self-regenerating systems can emerge and persist without violating the second law of thermodynamics. Systems in thermodynamic equilibrium can just as easily lose order as gain it, but life is a phenomenon of persistent self-organization that continuously reproduces itself in a far-from-equilibrium state. Life is highly ordered. As Deacon explains (p. 223), in an isolated system, such as a gas trapped in a flask, thermodynamic equilibrium will remain undisturbed:

“This is made explicit in the classic thermodynamic model system: A gas in a container that can be isolated from all outside influences. Asymmetrically heat the container, using an external heat source, and the majority of molecules in one region are forced to move more rapidly than the majority at some other region. But as soon as the external influence is removed, the gas will begin an inevitable transition back to equilibrium, redistributing these local differences. In the one case, the cause must be imposed from without for change to occur, in the other the cause is intrinsic; change will happen unless it is impeded by outside intervention.”

[But the world is not composed of isolated systems.]

“Because the world is structured and not uniform, and because there are many distinct dimensions of orthograde change (change of status toward equilibrium) possible, involving different properties of things, such as temperature, mass, movement, electric charge, structural form, etc., certain of these tendencies can interact in relative isolation from others. Contragrade change is the natural consequence of one orthograde process influencing a different orthograde process—for example, via some intervening medium. This implies that in one sense all change ultimately originates from spontaneous processes.”

Inorganic chemistry involves the constant perturbation of systems that impinge on one another. These perturbations create chemical reactions and cycles that harbor a tendency to form mutually interdependent cycles, which push away from thermodynamic equilibrium and give rise to self-limiting and self-renewing systems. Deacon introduces terminology and concepts to examine the possible ways in which interdependent, self-perpetuating chemical processes might spontaneously have emerged.

What is the ongoing activity in the inorganic world that makes the emergence of life a possibility? In Life's Ratchet: How Molecular Machines Extract Order from Chaos, Peter M. Hoffmann explains the source of energy that drives the creation of life as well as its persistence (p. 7),

“As we enter the microscopic world of life’s molecules, we find that chaos, randomness, chance, and noise are our allies. Without the shaking and rattling of the atoms, life’s molecules would be frozen in place, unable to move. Yet, if there were only chaos, there would be no direction, no purpose, to all this shaking. To make the molecular storm a useful force for life, it needs to be harnessed and tamed by physical laws and sophisticated structures—it must be tamed by molecular machines. The fruitful interaction of chance and necessity also explains how these chaos-harvesting machines were designed by evolution.”

Speaking of the great strides taken in the course of the development of modern scientific knowledge in the eighteenth and nineteenth centuries, Friedrich Engels remarks (Dialectics of Nature, Introduction):

“The new outlook on nature was complete in its main features: all rigidity was dissolved, all fixity dissipated, all particularity that had been regarded as eternal became transient, the whole of nature was shown as moving in eternal flux and cyclical course.”

Both random chance and determinate order are necessary for the emergence of life. Life requires the mutual interdependence of chemical cycles whose chemical products provide reactants for other cycles. The primordial chemical processes are not the same as those which later evolve to become metabolic processes within living organisms, although the same physical laws constrain both organic and inorganic systems. The physical laws governing the bonding of chemicals have to do with physical distance, the presence or absence of catalysts (whether biological or inorganic), the electronegativity of the elements in question, the presence or absence of free energy for energy of activation, etc. These are described in any chemistry textbook. The intermediate systems are transitional forms in a double sense. First, they represent a transition from unconnected cycles of chemical reactions to complexes of mutually interdependent systems. Secondly, they represent a transient stage on the road to becoming what we now call “life.” As of this point in history, scientists can neither work out what those stages were, nor recreate them in laboratories.

Ian Hacking, in his book The Taming of Chance (Cambridge University Press, 1990), a very readable history of scientific creation, speaks of the erosion of determinism and the advance of statistical thinking (p. xiii):

“Determinism was eroded during the nineteenth century and a space was cleared for autonomous laws of chance. The idea of human nature was displaced by a model of normal people with laws of dispersion. These two transformations were parallel and fed into each other. Chance made the world seem less capricious: it was legitimated because it brought order out of chaos. The greater the level of indeterminism in our conception of the world and of people, the higher the expected level of control.”

Here Hacking seems to express the same judgment on the probabilistic nature of control as Schrödinger (ibid., above): “Only in the cooperation of an enormously large number of atoms do statistical laws begin to operate and control the behavior of these assemblies with an accuracy increasing as the number of atoms involved increases.”
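Schrödinger's point, that statistical laws take control with an accuracy increasing with the number of atoms, can be illustrated with a toy ensemble of random ±1 "spins." The relative fluctuation of their sum shrinks roughly as 1/√N, so a system of ten thousand components is about ten times more predictable than one of a hundred:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def relative_fluctuation(n_atoms: int, trials: int = 200) -> float:
    """Average relative deviation |sum| / N of a sum of n random +1/-1 'spins',
    estimated over many independent trials."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_atoms))
        total += abs(s) / n_atoms
    return total / trials

small = relative_fluctuation(100)     # noticeable scatter around zero
large = relative_fluctuation(10_000)  # roughly 10x smaller: fluctuations scale as 1/sqrt(N)
```

Nothing about any individual spin is predictable, yet the aggregate behaves lawfully, which is exactly the taming of chance that Hacking describes.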

Imagine a bell curve as a way of evaluating a population’s height. The curve rises to the mean and then falls as it proceeds along the range of possibilities. The extremes are considered opposites: short on one side, tall on the other, with the average height in the middle. The opposites are seen as “mutually exclusive” entities, yet a continuous gradation connects them. The recognition of “infinitesimal magnitudes” in natural differences is the starting point for the differential calculus. Engels pointed out (MECW, vol. 25, p. 111):

“But, regardless of all protests made by common sense, the differential calculus under certain circumstances nevertheless equates straight lines and curves, and thus obtains results which common sense, insisting on the absurdity of straight lines being identical with curves, can never attain. And, in view of the important role which the so-called dialectics of contradiction has played in philosophy from the time of the ancient Greeks up to the present, even a stronger opponent than Herr Dühring should have felt obliged to attack it with other arguments besides one assertion and a good many abusive epithets.”

What happens is the emergence of something that did not exist before. Whatever entity it might turn out to be is something that “crossed the line” between potentially existent and really existent. But where is that line? Is there a way to locate the exact point in time when something changes from “potential” to “real”? If you could pinpoint the exact moment of change, then you would no longer need the concept of “potential” at all. You would only be positing an instantaneous transformation of the one entity into the other. Instead of “emergence” you would only be falling back on the idea of an “external force” that substitutes the one for the other. In using the concept of “potential” you recognize that there is a contradiction between two internally generated forces, between birth and death, between being and nothing, and between what is coming into being and what is passing away.

Nature is fluid. The world is not only composed of objects linked or separated by forces; it is also made up of constantly emerging, evolving, and interacting processes, which create and recreate what are defined for analytical purposes as “objects,” but which themselves are only momentary and transitional incarnations of the world movement. The natural processes of “emergence” have been topics of study and debate since the early 19th century, since it is not easy to imagine something that didn’t exist before without divine intervention, nor is it so easy to discern the causes of emergent phenomena.

Deacon emphasizes that life was not created, nor did it suddenly appear. It emerged spontaneously, out of the accumulation of evolving chemical cycles. But how are we to understand “emergence”? Deacon explains the challenge:

“This is the problem of emergence: understanding how a new, higher dimension of causal influence can be woven from the interrelationships among component processes and properties of a lower dimension.”

It is Deacon’s goal to help us understand the myriad ways in which order can emerge spontaneously from the chaos of molecular collisions, while avoiding the pitfalls of magical thinking. His book is not necessary for understanding the progress of biology or anthropology, but it provides valuable lessons on how to develop scientific hypotheses for questions that are as yet unresolved, while upholding a strictly materialist outlook.

Our species is one of the products of evolutionary development, which was first biological, then became social. We think of evolutionary change as a slow process, moving in tiny increments; only over long periods of time do we see changes that seem to make a difference. Continents drift apart. The flat plain becomes a mountain range. The fish becomes an air-breathing tetrapod. Homo erectus becomes Homo sapiens. Evolutionary science allows us to trace the continuity between ancestor and descendant. Paleontology relies on evidence: radiometric dating, analysis of fossils, studies of comparative vertebrate anatomy, and, more recently, the study of ancient DNA using molecular genetic methods. Lineages can be traced through a combination of these approaches, and many gaps can be filled in with hypothesized intermediate forms.

We human beings are the only species capable of thinking about and studying our own origins and history, as well as planning our future social development. With the scientific knowledge gained by humanity, we are learning how to manage the resources on our planet, resources that provide the basis for all life, and for our human civilizations. We are the only species that has the capacity to become the steward of nature, the guarantors of the health of our planetary ecosystems, and the makers of our own future as integral members of the planetary biota. But in order for us to fulfill our roles as protectors of the planet and molders of our common life together, we must first carry through the changes in our own social organization that will allow us to achieve these goals.

A socialist revolution will be the first and most important task faced by humanity to open the door to the future, a future that will fulfill the guiding principle: “from each according to their ability, to each according to their needs” (Marx).

The evolution of life on earth has brought forth the human capacities needed to emerge from the preparatory stage of our social development. We are now in an age of revolutionary transition from a lower stage of society to a higher one. The national secretary of the Socialist Workers Party of the U.S., Jack Barnes, describes this perspective in the following words, taken from the journal New International, no. 13 (p. 57):

“The establishment of the dictatorship of the proletariat won’t bring socialism. It will create the conditions in which the working class can begin to take ever-greater strides toward workers’ control of industry together with the opening steps toward the management of industry and economic planning. In which farmers, no longer threatened by foreclosure on the land they till, can begin, with the help of toilers in the cities, to revolutionize agricultural production in the interests of humanity today and tomorrow (and can teach the urban population a thing or two, enriching our lives and broadening our culture). In which Blacks can organize under the aegis of the new state power to take rapid, giant strides toward ridding social relations of every vestige of racist prejudice and discrimination. In which women, together with powerful allies, can organize themselves to advance the struggle for their complete emancipation from the oppressive legacy of millennia of class society. And in which the entire weight of the new workers and farmers republic in the United States will be brought to bear to advance every struggle for national liberation and for socialism taking place anywhere in the world.”

Barnes’ conscious activity is within the tradition of Marx and Engels, who asserted in the Communist Manifesto:

“We have seen above that the first step in the revolution by the working class is to raise the proletariat to the position of ruling class to win the battle of democracy.

“The proletariat will use its political supremacy to wrest, by degree, all capital from the bourgeoisie, to centralise all instruments of production in the hands of the State, i.e., of the proletariat organised as the ruling class; and to increase the total productive forces as rapidly as possible.

“Of course, in the beginning, this cannot be effected except by means of despotic inroads on the rights of property, and on the conditions of bourgeois production; by means of measures, therefore, which appear economically insufficient and untenable, but which, in the course of the movement, outstrip themselves, necessitate further inroads upon the old social order, and are unavoidable as a means of entirely revolutionising the mode of production.

“… When, in the course of development, class distinctions have disappeared, and all production has been concentrated in the hands of a vast association of the whole nation, the public power will lose its political character. Political power, properly so called, is merely the organised power of one class for oppressing another. If the proletariat during its contest with the bourgeoisie is compelled, by the force of circumstances, to organise itself as a class, if, by means of a revolution, it makes itself the ruling class, and, as such, sweeps away by force the old conditions of production, then it will, along with these conditions, have swept away the conditions for the existence of class antagonisms and of classes generally, and will thereby have abolished its own supremacy as a class.

“In place of the old bourgeois society, with its classes and class antagonisms, we shall have an association, in which the free development of each is the condition for the free development of all.”

Universal Evolution, Probability and Life's Origins part 1


Universal Evolution, Probability and Life's Origins

Part 1. Science and Marxism

By: James Miller
Substack: James Miller's observations: 11/05/2023.

The story of life—and the story of the modern class struggle—begins with the origin of the universe. We can’t trace our ancestry any farther back—and it can be argued that we can’t even trace it that far back. Although we know that life developed out of the inorganic environment on earth, and therefore is continuous with the history of the planet, we still don’t know enough about the origin of life to definitively trace our ancestry back through previous biological forms and then through the hypothesized forms of proto-life that preceded life itself.

Although the working class has no near-term stake in studying the theories of the origin of the universe and related astrophysical questions, all those involved in the working-class movement can benefit by thinking scientifically about the world in which we live, including the recognition that we are “citizens of time”—that we live within a universe that has a beginning, a direction of motion, and a trajectory of development that produces many forms of evolutionary change. This evolving planet and its resources provide the ground we stand on as we struggle to find a road forward for the future of humanity. Working people can improve their readiness for playing the leading role in the ongoing processes of social change by gaining knowledge about the evolution of our own solar system, the planet we inhabit, the origin of living creatures, and their evolution. These processes of transformation of our material surroundings have created the foundation for human society and its distinctive cultural and social evolution.

We should not take the view that specially educated professionals are the only ones capable of carrying out scientific studies. The revolutionists among us need to view ourselves as responsible worker-scientists, willing and able to take advantage of what the scientists have discovered and make it a part of our theoretical armament. Needless to say, we need an outlook grounded not only in the physical sciences but also in scientific economics, sociology, and history; Marxism can be defined as scientific sociology.

So it is that Marxists have not ignored the natural world and its history as the starting point for life and for our species. The history of science is a history of the accumulation and advancement of knowledge of the world. Now it is up to the working class to take advantage of this process of discovery and make use of its key lessons as we struggle to end capitalism and build a genuinely human society. Friedrich Engels, in Anti-Dühring (1878), avowed that modern science had come to replace philosophy as the mode of inquiry necessary to discover the laws of existence of nature and humanity. After reviewing the stunning advances made in physics, chemistry, and biology, he wrote:

“Further: if no philosophy as such is any longer required, then also there is no more need of any system, not even of any natural system of philosophy. The perception that all the processes of nature are systematically connected drives science on to prove this systematic connection throughout, both in general and in particular. But an adequate, exhaustive scientific exposition of this interconnection, the formation of an exact mental image of the world system in which we live, is impossible for us, and will always remain impossible.” (Marx Engels Collected Works [MECW] vol. 25, p. 35)

Science is the guiding light of the working class. Without a scientific view of the society we live in, we cannot overcome the oppressors who stand over us, exploit us, and increasingly degrade our conditions of life and labor. Their destructive methods lay waste to the natural resources on which humanity depends, and their competitive urges propel them to continue building an ever more massive stockpile of weapons capable of eliminating all life on earth.

The science of society is Marxism. Marxism is considered “rubbish” by the members of the capitalist class and their political representatives. But workers are beginning to recognize that human society is the outcome of biological evolution, which itself is a product of the evolution of the earth, and all the developments that came before it. Workers will come to know that capitalism, currently the dominant system of social relations in the world, is the product of the cultural evolution of human existence. Workers will also realize that all social systems are transient, including the one into which we have been born.

Working people will continue to develop deeper knowledge about the decline of the strength of the capitalist-imposed regimes worldwide, and how their ruling classes’ capacity to exercise their control over all life and nature is weakening, crumbling, and producing disastrous consequences for billions of people. Zealous to enlarge their share of the world’s wealth at the expense of the labor of workers and farmers, the ultra-wealthy property owners and the governments subordinated to their needs have been accustomed to wreaking havoc on nature and humanity alike as their birthright.

In the Middle Ages, the forerunners of the capitalist class could be found among the merchants and bankers striving to increase their wealth through the exchange of commodities and through the credit systems of the day. But they were not yet capitalists in the modern sense. The creation of wealth through the exploitation of wage labor did not come to dominate the world until the industrial revolution of the late 18th century. Since that time, the needs and goals of the capitalist class have been the central determining factor in the development of social and political institutions throughout the world.

But social systems are neither God-given nor everlasting. Even though the relentlessly profit-hungry industrialists, bankers, and landlords continue to treat the world and its working population as nothing but resources to be utilized as private property for their enrichment, the truth is that their very success is undermining their control of the world's human and material resources. The powers they seized in the growth phase of world capitalism—powers that enabled them to build gigantic productive machines and populous cities—are now slipping through their fingers as they enter more deeply into their declining phase. The history of society is a creative matrix of evolving labor processes that never stop unfolding, creating ever-new marvels of science and technology, and will not reach a termination until the universe becomes uninhabitable by living beings.

For those who believe that capitalism will never die, it would be a good idea to examine more closely how the world has come to be as it is. Fixing our attention on how we managed to arrive at the present epoch gives us a good idea of how we can effectively proceed into the next one. Friedrich Engels achieved a deep understanding of how capital accomplished its stunning successes in the course of its conquest of social and economic power. The fruitful combination of science and labor, once appropriated by the capitalists, made possible the conscious expansion of technological and industrial development up until the late 20th century. Engels explained how the process of growing scientific knowledge sprang from the creativity of the human species, which itself was, and always has been, a product of the natural world. (MECW, vol. 25, p. 461).

“Thus at every step, we are reminded that we by no means rule over nature like a conqueror over a foreign people, like someone standing outside nature—but that we, with flesh, blood, and brain, belong to nature, and exist in its midst, and that all our mastery of it consists in the fact that we have the advantage over all other creatures of being able to learn its laws and apply them correctly.”

We should not assume that humanity has lost the capacity to consciously learn nature’s laws and to act within their constraints to improve the environmental conditions our species needs to thrive. Human ingenuity is still alive and well, despite what we are told by the so-called "environmentalists," whose major preoccupation is the collapse of civilization and the demise of the human species, which they believe, against all evidence, to be imminent. At the same time, they cling to the belief that the capitalists who are wreaking havoc on the atmosphere, the soil, and the world’s animal and human populations must remain in power forever, since there is no way to remove them. They do not openly proclaim loyalty to the capitalists; indeed, they often blame the ruling classes. But none of them believe that capitalism can be surpassed and replaced by a higher level of civilized society. Their pessimism about humanity runs too deep to allow them to imagine an improved form of civilization. They do not say that capitalism is the best possible system, or even that human society cannot go beyond capitalism, because to say so would open a debate about a possible future beyond capitalism. They prefer to restrict their thinking to individual or governmental solutions within the existing social framework: recycle waste, reduce electricity consumption, reduce economic growth, reduce your carbon footprint, don’t eat meat, and so on.

Origin of life

As we approach the question of the origin of life from inorganic nature, we recognize that we cannot use the same methods as those used in tracing the lineages of biological evolution. There is no direct line of descent from volcanism to organism, although the former appeared long before the latter. Using modern methods of fossil examination, together with advances in the study of the earth’s geological and atmospheric history, scientists can study and trace the lineages of evolved multicellular species. But when it comes to the origin of life from non-life, there is no physical evidence to help us understand how the process began, or what forms it took at the earliest stages. We do know that it happened, however, and much research is underway to find out what promoted the transition from the sterile prebiotic world to the world of living beings.

Although more scientists are now addressing the issue of the origin of life, still the basic facts of the process are unknown. Terrence Deacon, professor of anthropology at UC Berkeley, has developed some useful hypotheses that indicate a way of thinking about how this transition might have occurred, based on the principles of inorganic chemistry, thermodynamics, and the emergence of complexity. [Deacon, Terrence W. Incomplete Nature: How Mind Emerged from Matter (p. 436)] For Deacon, the primary task is to develop a method for formulating realistic chemical scenarios that could lead to the emergence of life. This is an approach that excludes assumptions about what were the most important pre-biotic molecules, where they were located on the earth, which reactions or molecules came first, etc. And of course, we must exclude divine intervention, or any other supernatural source, even though these ideas have been inculcated into popular culture through the influence of capitalist ideology. We must not forget that under the rule of the bourgeoisie, popular culture has been molded and shaped through its dominance over the education system, the mass media, and the entertainment media.

The practical search for the origin of life began with an attempt to simulate the processes that must have occurred at the dawn of life. The so-called Miller–Urey experiment was conducted by Stanley Miller in 1952, in Harold Urey’s laboratory, and consisted of passing electrical discharges through a mixture of water, ammonia, methane, and hydrogen contained within a sealed vessel (this concoction being a stand-in for the supposed “primordial soup” in which life originated). Analysis of the results after one week showed that a variety of amino acids had been generated. The experiment was considered a success, but it was recognized that much more needed to be done. It was the first in a long series of attempts to produce potentially “organic” chemicals experimentally under various conditions.

The Miller–Urey experiment was carried out in a vessel with an artificial atmosphere low in oxygen (the present atmosphere is about 20% oxygen). It was understood at the time that the prebiotic atmosphere was severely lacking in free oxygen: what oxygen was present was bound up in compounds with carbon, hydrogen, and nitrogen. The Russian biochemist Aleksandr Oparin had hypothesized this kind of prebiotic atmosphere in 1924. In the issue of 23 November 2012 of Nature, Clifford P. Brangwynne and Anthony A. Hyman wrote:

“Nearly 20 years after the book’s publication — and 60 years ago this year — Stanley Miller and Harold Urey tested Oparin’s hypothesis in a lab at the University of Chicago in Illinois. They sent a continuous electric current through a glass vial containing water, hydrogen, methane, and ammonia. Within a week, a substantial amount of the carbon had been converted into complex macromolecules, including many amino acids. This ‘Miller–Urey’ experiment confirmed the significance of Oparin’s ideas, and Miller duly referenced [Oparin’s book] The Origin of Life.

“Oparin’s work thus played a seminal part in the formulation of our modern ideas of life’s conception. His ideas on the organization of cells and first stirrings of life continued to attract an important audience. In 1957, a large international meeting (attended by Miller) was held in Moscow to discuss the origin of life, the proceedings of which make it clear that Oparin’s book had had a profound influence.”

While experiments have not produced any definitive results regarding the initial conditions that determined life’s appearance, it’s useful to develop hypotheses about the formative processes and molecular mechanisms that could have arisen, leaving aside the specific ingredients at the outset. We need to rely on the latest geochemical science to reimagine the lifeless earth as it existed before the formation of any ongoing systematic arrangements of these particular chemical reactions. And we must keep in mind that these chemicals are not like little “objects.” As Richard Feynman has said, “Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen… Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to, and it appears peculiar and mysterious to everyone.”

https://thefinchandpea.com/2012/12/13/life-versus-the-molecular-storm/

There were sites on earth with abundant chemical activity in a liquid or gaseous state involving the formation and dissolution of molecules involving carbon, hydrogen, oxygen, nitrogen, sulfur, iron, and other elements, sometimes in the presence of catalytic metallic surfaces. But we need to understand that the existence of particular kinds of molecules does not automatically produce life or its precursors. While hypothesizing about what might have occurred will not, in itself, produce the definitive answer, it can reduce the range of possibilities by discarding erroneous approaches.

It helps to recognize some of the basic characteristics of life that distinguish it from non-life. One of these is the ability of living organisms to sustain their own activity through internal metabolism, rather than being driven moment to moment by external events. Energy is necessary for activity, both for internal metabolism and for interacting with the environment. Life must be able to extract free energy (sunlight or chemical energy) from the environment to drive the chemical reactions it needs for its activities. But before the question of internalizing the energy supply arises, there must exist some chemical structure that is able to persist, for a time, as an organized entity capable of movement or activity.

Initially, chemical reactions can reach a relatively stable state in a locality without an isolating boundary to separate them from the randomizing impingements of the local molecular environment. Eventually, however, such relatively stable molecular aggregations must acquire a bounding coat, or membrane, to isolate themselves from destabilizing outside collisions or reactions. Life needs a “self” to distinguish itself from the “other.” Oparin recognized this problem and hypothesized that stable, persistent chemical complexes could isolate themselves from surrounding disruptions by forming within enclosed droplets he called “coacervates.” As Brangwynne and Hyman explain (ibid.):

“Oparin went on to describe a mechanism by which macromolecules would self-assemble into large liquid-like structures that he called ‘complex coacervates’—what today might be called colloidal assemblies. He suggested that these protocells were a key step in the origin of life. However, given the uncertainty at that time about the nature of biological macromolecules, it was unclear exactly how these colloids might form.”

Freeman Dyson, in his book Origins of Life, Cambridge University Press, 1999, described Oparin’s view of the original enclosures for these macromolecular operations as the “garbage-bag world,” as he explains (p. 37):

“The garbage-bag world is not so elegant and not so widely accepted. It is a generalized version of the world imagined by Oparin. Life began with little bags, the precursors of cells, enclosing small volumes of dirty water containing miscellaneous garbage. A random collection of molecules in a bag may occasionally contain catalysts that cause the synthesis of other molecules that act as catalysts to synthesize other molecules, and so on. Very rarely a collection of molecules may arise that contains enough catalysts to reproduce the whole population as time goes on. The reproduction does not need to be precise. It is enough if the catalysts are maintained in a rough statistical fashion. The population of molecules in the bag is reproducing itself without any exact replication.”

The noted physicist Ilya Prigogine (The End of Certainty, 1997) commented on the instability of molecular aggregations in nature, whose constant interactions generate the indeterminate states that lie at the very heart of nature. They are indeterminate because none of them is part of a strictly isolated system: all systems of molecular interaction are only relatively isolated within a limited area of chemical activity. There are no absolute or definitive “boundaries” between nearby “systems” of chemical interactions; they all mutually interfere in a random way. Nature cannot produce a permanently isolated, organized system of chemical reactions. Scientists generally recognize that whatever initial conditions are chosen for a laboratory exercise, those conditions are artificial and cannot replicate real conditions in nature. At the same time, it must be emphasized that knowledge of biochemistry, and of the physical laws that underlie it, has progressed through this process of experimentation and analysis of results.

The contrast between nature on the one side and controlled experimental conditions on the other must be taken into account in any experimental design; laboratory experiments depend on establishing isolated conditions. It is true that in nature there are relative degrees of isolation between molecular assemblies in liquids and gases, since they are separated in space by distances determined by the chaotic intermingling of the forces of repulsion and attraction. But in spite of these differences in the degree of interaction, there are no impenetrable barriers separating these molecular associations. They impinge upon one another to a greater or lesser extent; various forms of association and disaggregation are constantly occurring; and there is no possibility that a perfectly isolated system can form. Molecular aggregations are by nature temporary and open to change. This principle underlies the processes of universal, planetary, and biological evolution, and it is why every living thing must die. It is the lack of isolation that generates the instability and the relative unpredictability of the future evolution of their activity.

Nature is probabilistic at its core, argues Prigogine (ibid, p. 29). He quotes the physicist Emile Borel, who said:

“The representation of gaseous matter by a model, composed of molecules with positions and velocities which are rigorously determined at a given instant is, therefore, a pure abstract fiction. ... as soon as one supposes the indeterminacy of the external forces, the effect of collisions will very rapidly disperse the trajectory bundles which are supposed to be infinitely narrow, and the problem of the subsequent movement of the molecules becomes, within a few seconds, very indeterminate, in the sense that a colossal number of different possibilities are a priori equally probable.”

In the aggregate, the positions and velocities of individual atoms or molecules are indeterminate. At the atomic scale this indeterminacy is fundamental, and it is the subject matter of quantum physics. This analysis of nature reveals a nanoscale world that is qualitatively different from the macroscopic world, in which the deterministic laws of Newtonian physics prevail. Erwin Schrödinger, an eminent quantum theorist, argued in What is Life? (1944):

“Only in the cooperation of an enormously large number of atoms do statistical laws begin to operate and control the behavior of these assemblies with an accuracy increasing as the number of atoms involved increases. It is in that way that the events acquire truly orderly features. All the physical and chemical laws that are known to play an important part in the life of organisms are of this statistical kind; any other kind of lawfulness and orderliness that one might think of is being perpetually disturbed and made inoperative by the unceasing heat motion of the atoms.”
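Schrödinger's point, that statistical lawfulness sharpens as the number of atoms grows, can be illustrated with a toy calculation. In this Python sketch (the molecule counts and trial count are arbitrary assumptions), each "molecule" is randomly in a +1 or −1 state, and we measure how much the average state of the whole assembly fluctuates:

```python
import random
import statistics

random.seed(1)

def mean_fluctuation(n_atoms, trials=200):
    """Standard deviation of the average state of n_atoms 'molecules',
    each independently +1 or -1 (a crude stand-in for thermal agitation)."""
    means = [
        statistics.fmean(random.choice((-1, 1)) for _ in range(n_atoms))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

# Orderliness emerges statistically: the fluctuation of the average
# shrinks roughly as 1 / sqrt(N) as the number of molecules grows.
for n in (100, 10_000):
    print(f"N = {n:>6}: fluctuation of the average ~ {mean_fluctuation(n):.4f}")
```

A hundred molecules give a noisy average; ten thousand give one about ten times steadier. Macroscopic matter, with ~10^23 molecules, is why the "laws" look exact at our scale.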

Deacon points out that the second law of thermodynamics is a probabilistic tendency, not a law involving forces or predictable outcomes. As he explains,

“The second law of thermodynamics is only a probabilistic tendency, not a necessity, and that offers some wiggle room.”

The “wiggle room” is the latitude that allows deviations from the deterministic Newtonian scheme of interactions of force and mass, in which given quantities of matter and energy interact according to the “laws of physics” to produce predictable results. The origin of life cannot be considered apart from this thermodynamic wiggle room, which allows the formation and perpetuation of organized structures of molecules. The creation and continuity of systems of increasingly complex molecular aggregates only appear to violate the second law of thermodynamics. (Some proponents of “intelligent design” argue that evolution is impossible because the second law states that all chemical reactions increase entropy, or disorder, in the universe. If this were true, there would be no organized systems at all. Their appreciation of science is limited.) The second law is a statistical prediction of a net increase of entropy, or dissipation of heat energy, in the universe.
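The statistical character of the second law can be seen in a classic toy simulation, the Ehrenfest urn model (the numbers below are arbitrary illustrative choices). Molecules hop at random between two sides of a box: the system drifts toward the disordered 50/50 split on average, yet individual steps frequently move it back toward order, which is exactly the "wiggle room" at issue:

```python
import random

random.seed(2)

N = 100        # total number of "molecules"
left = N       # start in a highly ordered state: all molecules on the left
history = []

# Ehrenfest urn model: at each step one randomly chosen molecule hops to
# the other side. The drift toward the even split is only statistical.
for _ in range(2000):
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1
    history.append(left)

late_average = sum(history[-500:]) / 500
backward_steps = sum(
    1 for a, b in zip(history, history[1:]) if abs(b - N / 2) > abs(a - N / 2)
)
print(f"late-time average on the left: {late_average:.1f} of {N}")
print(f"steps that moved AWAY from equilibrium: {backward_steps} of 1999")
```

The long-run average settles near the maximum-entropy split, yet hundreds of individual steps run "against" the second law. No force compels entropy to rise at every step; it rises only in the aggregate.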

The early universe cooled, and in the process developed immense volumes of organized systems of particles of matter held together by atomic or chemical bonds; these organized entities were dispersed in galaxies, solar systems, and a variety of solidified or gaseous masses. But the universe continues to expand, and as it does, these organized systems tend to dissipate, disassemble, and dissolve. Initially, matter as we know it did not exist: it took time for atoms to consolidate and differentiate out of the disorganized plasma created in the “big bang.”

But why would organized systems, such as atoms, molecules, and minerals, come into being if entropy is constantly increasing? Because the increase of entropy is only a statistical net result: the same random chemical processes continually create structure and organization as well as dissolution and dissipation. In the long run dissipation wins, but along the way many organized forms of matter grow and take shape, utilizing the heat energy made available by the many fusion and fission processes scattered throughout the galaxies. Heat energy can create atomic states that facilitate the formation of chemical bonds.

Heat can play a role in forming the atomic and molecular aggregates that serve as a basis for an increasing progression of complex structures; heat can likewise break chemical bonds. The planet had to cool sufficiently to reach a point where chemical activity was free enough to form molecular aggregations that could remain stable long enough to form structures. These structures, provided they can maintain their existing level of organization, can build upon and extend themselves to attain higher levels of complexity. Such processes can become relatively long-lasting by utilizing free energy to protect themselves against the thermodynamic tendency to dissolve and disaggregate. And, as we know, organized structures can produce self-perpetuating processes through their own cycles of energy generation. At a higher level of organization, e.g., in living beings, internal energy generation becomes built into the aggregate structure: the citric acid cycle, for example.

But, apart from the structure-forming processes that capture energy from the surrounding atmosphere, or generate it internally, there are structure-forming processes that release energy to the surrounding atmosphere as the structure is built. In Incomplete Nature, Deacon explains that self-organizing processes are already common in the inorganic world, and he gives the example of snowflake crystal formation, in which there is an accretion of elaborate crystalline forms with hexagonal shapes. The formation of snowflakes is a process that creates a crystalline array, a more organized relationship among water molecules than in the liquid state, but at the same time releases energy to the surrounding atmosphere. Water molecules spontaneously “fall together” to form an ordered crystalline structure that has a lower energy level than separate water molecules. Organized crystalline structures then emerge spontaneously any time atmospheric conditions favor snow.

Terrence Deacon explains (p. 257):

“A quite different example of morphodynamic change is exhibited by the amplification and propagation of constraints that takes place in the growth of snow crystals. The structure of an individual snow crystal reflects the interaction of three factors: (1) the micro-structural biases of ice crystal lattice growth, which result in a few distinct hexagonally symmetric growth patterns; (2) the radially symmetric geometry of heat dissipation; and (3) the unique history of changing temperature, pressure, and humidity regimes that surround the developing crystal as it falls through the air.”

Another example of the spontaneous formation of hexagonal patterns in nature is the honeycomb. Although constructed by honeybees rather than by weather, these patterns are formed without premeditated design. As Philip Ball explained in the April 27, 2016, issue of Nautilus (https://nautil.us/why-nature-prefers-hexagons-235863/):

“Why hexagons, though? It’s a simple matter of geometry. If you want to pack together cells that are identical in shape and size so that they fill all of a flat plane, only three regular shapes (with all sides and angles identical) will work: equilateral triangles, squares, and hexagons. Of these, hexagonal cells require the least total length of the wall, compared with triangles or squares of the same area. So, it makes sense that bees would choose hexagons, since making wax costs them energy, and they will want to use up as little as possible—just as builders might want to save on the cost of bricks. This was understood in the 18th century, and Darwin declared that the hexagonal honeycomb is “absolutely perfect in economizing labor and wax.”

“Darwin thought that natural selection had endowed bees with instincts for making these wax chambers, which had the advantage of requiring less energy and time than those with other shapes. But even though bees do seem to possess specialized abilities to measure angles and wall thickness, not everyone agrees about how much they must rely on them. That’s because making hexagonal arrays of cells is something that nature does anyway.”
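Ball’s geometric claim is easy to check numerically. The sketch below (my own arithmetic, not from the article) computes the perimeter of a unit-area equilateral triangle, square, and regular hexagon using the regular-polygon area formula; the hexagon’s perimeter is the smallest, which is why it wastes the least wax per cell.

```python
import math

def perimeter_unit_area(n_sides):
    """Perimeter of a regular n-gon with area 1.
    Area = n * s**2 / (4 * tan(pi/n))  =>  s = sqrt(4 * tan(pi/n) / n)."""
    side = math.sqrt(4 * math.tan(math.pi / n_sides) / n_sides)
    return n_sides * side

for name, n in [("triangle", 3), ("square", 4), ("hexagon", 6)]:
    print(f"{name:8s} {perimeter_unit_area(n):.4f}")
# triangle ≈ 4.5590, square = 4.0000, hexagon ≈ 3.7224
```

In a real honeycomb neighbouring cells share walls, but the ranking of the three tilings is unchanged, so the hexagon still wins.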

Another example of the abiotic spontaneous formation of hexagonal cells is provided by Rayleigh-Bénard cells. Deacon explains (p. 250):

“Highly regular shaped convection cells (hereafter termed Bénard cells) can form in a process known as Rayleigh-Bénard convection in a uniformly heated thin layer of liquid (e.g., oil). In 1900, Claude Bénard observed that a cellular deformation would form on the free surface of a liquid with a depth of about a millimeter when it was uniformly heated from the bottom and dissipated this heat from its top surface. This often converged to a regular pattern of tiny, roughly hexagonally shaped columns of moving fluid, producing a corresponding pattern of hexagonal surface dimples. These Bénard cells form when the liquid is heated to the point where unorganized (i.e., unconstrained and normally distributed) molecular interactions are less efficient at conducting the heat from the container bottom to the liquid surface than if the liquid moves in a coordinated flow. The point at which this transition occurs depends on a number of factors, including the depth, specific gravity, the viscosity of the liquid, and the temperature gradient.”
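The onset condition Deacon describes is conventionally expressed through the dimensionless Rayleigh number: convection begins roughly when it exceeds a critical value (about 1708 for a layer bounded by rigid plates; a free surface, as in Bénard’s experiment, changes the threshold). The numbers below are illustrative order-of-magnitude values for a thin oil layer, not measurements from Deacon’s text.

```python
def rayleigh_number(g, alpha, delta_t, depth, nu, kappa):
    """Ra = g * alpha * dT * d^3 / (nu * kappa), all quantities in SI units."""
    return g * alpha * delta_t * depth ** 3 / (nu * kappa)

ra = rayleigh_number(
    g=9.81,        # gravitational acceleration, m/s^2
    alpha=1e-3,    # thermal expansion coefficient, 1/K (illustrative)
    delta_t=5.0,   # temperature drop across the layer, K (illustrative)
    depth=2e-3,    # layer depth, m (a couple of millimetres)
    nu=1e-6,       # kinematic viscosity, m^2/s (illustrative)
    kappa=1e-7,    # thermal diffusivity, m^2/s (illustrative)
)
RA_CRITICAL = 1708.0   # classical onset value for rigid top and bottom plates
print(ra, ra > RA_CRITICAL)   # above the threshold: convection cells can form
```

Note how the cubic dependence on depth captures Deacon’s point that a millimetre-scale layer sits right at the edge of the transition.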

Matter grows out of the cooling of the elements of the primitive universe and becomes incorporated in galaxies, stars, planets, asteroids, and other gaseous, liquid, colloidal, and solid formations. At the same time, the universal material phenomena we recognize as relativity, Newtonian mechanics, and quantum mechanics evolve, and no doubt continue to evolve. We should not imagine that anything in the universe has reached a point of completion or stasis. By the time entropy has come close to its maximum, our own sun and solar system will have long since been converted to dust. If we regarded the formulae we use to define material relations (F = ma, E = mc², etc.) as fixed and final, we would have to abandon our knowledge of thermodynamics.

As the universe evolves toward its predicted destiny, all the energy that has been consumed in binding these systems together dissipates; the organized systems fall apart, and energy becomes randomly dispersed. This dissipative, randomizing process is called the increase of entropy. At the hypothetical termination of this process, there is a uniform, random distribution of particles and energy. This end-state of matter is called “the heat death of the universe” because no further diffusion of heat is possible. In the end, entropy wins! (But who knows whether this is really “the end”? Maybe there is more . . .)

Deacon, referring to the “wiggle room” afforded by the second law of thermodynamics (see above), says:

“This loophole does, however, allow for the global increase of entropy to create limited special conditions that can favor the persistent generation of local asymmetries (i.e., constraints). And it is the creation of symmetries of asymmetries—patterns of similar differences—that we recognize as being an ordered configuration, or as an organized process, distinct from the simple symmetry of an equilibrium state. What needs to be specified, then, are the conditions that create such a context.”

Deacon puts stress on the need to understand “constraints” in the material world. No organized structure can emerge except through preconditions that prevail at a certain time and place. There are no “timeless” structures, nor are there entities with no particular location in space, although relationships among entities can take on forms that can only be analyzed indirectly. (Quantum physics introduces alternate forms that are not explicable by Newtonian physics.) A physical structure cannot exist without its material constituents or the configuration of its limits and relationships. Temperature, pressure, and the availability of the appropriate chemicals are limiting factors. When complex structures emerge, or existing structures develop further, such preconditions limit the probability that this or that constructive change will occur. These limitations are the constraints. The “wiggle room” Deacon mentioned earlier is the latitude those material constraints leave open.

This principle of constraints also applies, for example, to historical development. The Ancient Greeks, even with Aristotle’s guidance, could not have developed a rocket to the moon: they were constrained by the state of scientific knowledge that even their most capable theorists had attained. Every stage of historical development is built upon the material and cultural achievements of the given epoch, and these material and theoretical limits are constantly in conflict with the potentials latent in contemporaneous populations. This tension between what is and what can be keeps driving scientific and technological advancement.

An equilibrium state, in a relatively localized section of the natural world, is the condition in which no possible action can increase entropy: entropy has been maximized, at the given moment, in that portion of the universe. An organized structure, by contrast, persists only by generating processes that constantly restore its stability; it remains perpetually on the verge of disaggregation and restabilization. And we must recall that absolute isolation is not possible: something is always interfering from the “outside” (even though there is no absolute border between “inside” and “outside”). Yet partial, relative isolation is possible because of this tension between destruction and creation, and it is in these conditions that life processes occur.

Among the animal and plant species at high levels of development, individual organisms reproduce and die. At a low stage of development, individual organisms, or colonies of cells, can persist for centuries or millennia without dying. For these latter phenomena, see:

R. Petralia, M. Mattson, P. Yao, Ageing Res Rev. 2014 Jul; 0: 66–82 (https://scholar.google.com/scholar?q=R.+Petralia,+M.+Mattson,+P.+Yao,+Ageing+Res+Rev.+2014+Jul%3B+0:+66%E2%80%9382&hl=en&as_sdt=0&as_vis=1&oi=scholart):

“Basal metazoans typically maintain many pluripotent stem cells that are capable of differentiating into all types of cells in the body; this gives these animals incredible abilities to grow, regress, regrow, and regenerate their bodies as needed. They can become in some cases potentially immortal. However, during the evolution of more complex animal body forms, these abilities were reduced or lost, apparently in an effort to produce complex body structures for sophisticated functions while still avoiding the production of destructive tumors. Nevertheless, there is no direct correlation of increased body complexity with reduced lifespan.”

The range of creation and destruction of life forms observed by these scholars, the antagonism between longevity and entropy, reflects the various modes that express the same contradiction. Natural selection works to preserve the stability of complex systems that have arisen in biological evolution as they resist the unyielding pressure of the second law of thermodynamics. The complex structures elaborated by natural selection exist far from thermodynamic equilibrium. The scientific analysis of the development and sustainability of such “far from equilibrium” molecular systems is the theory of “dissipative structures,” treated extensively by Prigogine and others. The structures are “dissipative” because living cells, in expending energy on their life processes, produce more entropy than they absorb.

A Beginner’s Guide to Event Sourcing

Mike's Notes

Event Store provides a good explanation of event sourcing, with examples. Pipi has an event messaging system with relational database event stores. The article is referenced below.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

17/05/2025

A Beginner’s Guide to Event Sourcing

By: Eric Evans
Domain Driven Design: 2023

Introduction

Event Sourcing has been used in production by some of the world's biggest companies (including Netflix and Walmart) for years, providing them with the platform to rapidly scale, iterate and evolve their systems, and establishing a data model that delivers a strong competitive advantage.

In this guide, we discuss what Event Sourcing is, why you’d use it, and the range of benefits it provides, and we break down the jargon.

What is Event Sourcing?

Event Sourcing is an architectural design pattern where changes that occur in a domain are immutably stored as events in an append-only log.

This provides a business with richer data as each change that occurs within the domain is stored as a sequence of events which can be replayed in the order they occurred. This means you’re able to see more than just the current state of your domain - you can see what led up to the current state. ..." - Event Store
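The append-only log and replay described above can be sketched in a few lines. This is a minimal illustration in Python, with invented event names and a plain list standing in for a real event store such as EventStoreDB.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Event:
    """An immutable fact: something that happened in the domain."""
    kind: str    # e.g. "Deposited" or "Withdrew" (illustrative names)
    amount: int

class EventStore:
    """A minimal append-only log, standing in for a real event store."""
    def __init__(self):
        self._log: List[Event] = []
    def append(self, event: Event) -> None:
        self._log.append(event)      # events are only ever appended
    def events(self) -> List[Event]:
        return list(self._log)       # replayed in the order they occurred

def replay_balance(events: List[Event]) -> int:
    """Current state is derived by folding over the full history."""
    balance = 0
    for e in events:
        if e.kind == "Deposited":
            balance += e.amount
        elif e.kind == "Withdrew":
            balance -= e.amount
    return balance

log = EventStore()
log.append(Event("Deposited", 100))
log.append(Event("Withdrew", 30))
log.append(Event("Deposited", 5))
print(replay_balance(log.events()))  # → 75
```

Because the log keeps every event, the same history can later be replayed into new read models without losing the intermediate steps that produced the current balance.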

DAFNI Conference 2024

Mike's Notes

The DAFNI 2024 conference was held in September. The DAFNI conference presentations are available on the DAFNI website and are listed below. I am particularly interested in how the UK Digital Twin for infrastructure is working out.

I first came across the UK National Digital Twin Project some years ago while looking for examples of temporal modelling in large databases to use in Pipi. There I encountered the late Dr Matthew West, the technical lead on the UK project, who led the development of the Shell Refinery Ontology, among other things.

Matthew also developed 4Dism, a way to model things in time and how they change. One of the Pipi Engines uses this now.

Matthew helped me immensely through the Ontolog Forum, where we discussed issues with using 4Dism.

That led me to Chris Partridge, who is behind BORO, which the Pipi BORO Engine uses, running it forward and in reverse.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

18/04/2025

DAFNI Conference 2024

By: 
DAFNI: 03/12/2024

"DAFNI is a computing platform which aims to support advanced research into national infrastructure, including transport, water, and energy and city scale modelling. It represents an £8 million investment from the UK Collaboratorium for Research on Infrastructure and Cities (UKCRIC) to provide world leading infrastructure systems research capabilities and enhance the quality of outputs.  

DAFNI aims to improve the efficiency, reliability and sustainability of infrastructure through better sharing and use of data, exploitation of simulation and optimization techniques, and engagement with stakeholders through visualisation. DAFNI allows researchers to scale and integrate their computational models to explore the social and environmental impacts of changes in infrastructure with much greater coverage and detail, and to deposit, share and use data collected in research projects and generated in models. ..." - DAFNI

Lyft Design language on Selection Controls

Mike's Notes

A helpful article published on Medium discusses what selection control to use and where. It is based on the experience of teams using the Lyft Design System and builds on a previous article also listed below from 2019.

I need the Pipi CMS to automatically pick the correct selection control for any UI based on simple rules. The rules are explicitly described in the two Medium articles below.

This also builds on the work of the EU-funded Cameleon Project, whose aim was to create Model-Driven User Interfaces. After 10 years of serious work, the Cameleon Project has gone nowhere, possibly overtaken by the arrival of HTML5 and mobile phones some 15 years ago.

However, I have found a way to put all that excellent research to good use in the Pipi CMS Engine.  And the Lyft Design System articles answer some of my remaining unanswered questions.

The full range of selection options presented by Lyft is tempered by some good points from Adam Silver, who argues that web UIs should be built on native form controls first to preserve accessibility. I agree, given that Pipi is built to generate enterprise workplace tools. Adam has a website with many valuable tips on building web forms.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

17/05/2025

A better segmented control

By: Runi Goswami
Medium: 19/05/2021




Bellman equation

Mike's Notes

More notes on using Markov chains. Pipi uses Markov chains.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

17/05/2025

Bellman equation

By: 
Wikipedia: 01/12/2024

"A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.

The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory; though the basic concepts of dynamic programming are prefigured in John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior and Abraham Wald's sequential analysis. The term "Bellman equation" usually refers to the dynamic programming equation (DPE) associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation that is called the Hamilton–Jacobi–Bellman equation.

In discrete time any multi-stage optimization problem can be solved by analyzing the appropriate Bellman equation. The appropriate Bellman equation can be found by introducing new state variables (state augmentation). However, the resulting augmented-state multi-stage optimization problem has a higher dimensional state space than the original multi-stage optimization problem - an issue that can potentially render the augmented problem intractable due to the “curse of dimensionality”. Alternatively, it has been shown that if the cost function of the multi-stage optimization problem satisfies a "backward separable" structure, then the appropriate Bellman equation can be found without state augmentation. ..." - Wikipedia
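The discrete-time Bellman recursion can be made concrete with value iteration on a toy MDP (my own example, not from the article): four states on a line, a reward of 1 for reaching the terminal state, and discount factor 0.9. Each sweep applies V(s) ← max over actions of [r(s, a) + 0.9 · V(s′)] until the values stop changing.

```python
# States 0..3 on a line; action "right" moves toward state 3, action "stay"
# stays put. Reaching state 3 yields reward 1; all other rewards are 0.
GAMMA = 0.9
STATES = [0, 1, 2, 3]

def step(state, action):
    """Deterministic transition and reward for this toy MDP (not a general API)."""
    if state == 3:                      # terminal: absorbing, no further reward
        return state, 0.0
    if action == "right":
        nxt = state + 1
        return nxt, (1.0 if nxt == 3 else 0.0)
    return state, 0.0                   # "stay"

def value_iteration(tol=1e-9):
    """Repeatedly apply the Bellman optimality update until convergence."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            best = max(
                r + GAMMA * V[nxt]
                for nxt, r in (step(s, a) for a in ("right", "stay"))
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

print(value_iteration())  # V(2) = 1, V(1) = 0.9, V(0) = 0.81
```

The values fall off geometrically with distance from the goal, exactly as the discount factor in the Bellman equation prescribes.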

Markov decision process

"Markov decision process (MDP), also called a stochastic dynamic program or stochastic control problem, is a model for sequential decision making when outcomes are uncertain.

Originating from operations research in the 1950s, MDPs have since gained recognition in a variety of fields, including ecology, economics, healthcare, telecommunications and reinforcement learning. Reinforcement learning utilizes the MDP framework to model the interaction between a learning agent and its environment. In this framework, the interaction is characterized by states, actions, and rewards. The MDP framework is designed to provide a simplified representation of key elements of artificial intelligence challenges. These elements encompass the understanding of cause and effect, the management of uncertainty and nondeterminism, and the pursuit of explicit goals.

The name comes from its connection to Markov chains, a concept developed by the Russian mathematician Andrey Markov. The "Markov" in "Markov decision process" refers to the underlying structure of state transitions that still follow the Markov property. The process is called a "decision process" because it involves making decisions that influence these state transitions, extending the concept of a Markov chain into the realm of decision-making under uncertainty. ..." - Wikipedia


Dynamic programming

"Dynamic programming is both a mathematical optimization method and an algorithmic paradigm. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.

In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure.

If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems. In the optimization literature this relationship is called the Bellman equation. ..." - Wikipedia
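The recursive relation described here can be shown with the classic rod-cutting problem (a standard textbook example, not taken from the article): the optimal revenue for a rod of length n satisfies a Bellman-style equation, best(n) = max over first cuts i of price(i) + best(n − i), and memoization keeps the recursion efficient by solving each sub-problem once.

```python
from functools import lru_cache

# Illustrative prices: PRICES[i] is what a rod piece of length i sells for.
PRICES = {1: 1, 2: 5, 3: 8, 4: 9}

@lru_cache(maxsize=None)
def best_revenue(n: int) -> int:
    """Bellman-style recursion: the value of a length-n rod is the best
    first cut plus the optimal value of the remainder (optimal substructure)."""
    if n == 0:
        return 0
    return max(PRICES[i] + best_revenue(n - i)
               for i in PRICES if i <= n)

print(best_revenue(4))  # best: cut into 2 + 2 → 5 + 5 = 10
```

The `lru_cache` decorator is what turns the naive exponential recursion into dynamic programming: each sub-problem's value is computed once and reused.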