David Krakauer reflects on the foundations and future of complexity science

Mike's Notes

This is a great overview of the last 100 years of complexity science.

Resources

David Krakauer reflects on the foundations and future of complexity science

In his book “The Complex World,” Krakauer explores how complexity science developed, from its early roots to the four pillars that now define it—entropy, evolution, dynamics and computation.

14 JANUARY 2025 | by PAUL MIDDLEBROOKS

This transcript has been lightly edited for clarity; it may contain errors due to the transcription process.

David Krakauer

I mean, the main point was, what is a field of inquiry? What is a domain of knowledge? You know, what is a neuroscientist? What is a particle physicist? What is a complexity scientist? I was interested in that almost as a meta question. What you realize when you go down that path is how difficult that is to answer. Agency, intentionality, will, all of those ideas break the fundamental assumption of all of physics. I view that as fundamentally, paradigmatically new, and it is the origin of complexity science. As you say, it's the origin of life.

That's what all science is, other than fundamental physics. That's why emergence is so important a concept, because the processes of emergence are what explain why other disciplines, other than physics, have to exist in the world.

[music]

Paul Middlebrooks

This is “Brain Inspired,” powered by The Transmitter. David Krakauer is the president of the Santa Fe Institute, whose official mission is searching for order in the complexity of evolving worlds. When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects that people study at SFI, like societies, economies, brains, machines, and evolution.

David has been on the podcast before, and I invited him back to discuss some of the topics in his new book, The Complex World: An Introduction to the Fundamentals of Complexity Science. The book, on the one hand, serves as an introduction and a guide to a four-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more, discussing and connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about, and so on.

During our conversation, we discuss the four pillars of complexity science, entropy, evolution, dynamics, and computation, and how complexity scientists draw from these four areas to study what David calls problem-solving matter, another term for what the complexity sciences are about.

We discuss emergence, the role of time scales in complex systems, and plenty more, all with my own self-serving goal to learn better and to practice better how to think like a complexity scientist to improve my own work on how brains do things.

Hopefully our conversation and David's book help you do the same. I will refer you in the show notes to his website to discover all of David's other accolades over the years, and with a link to the book that we discuss. If you want this and all full episodes of “Brain Inspired,” support it on Patreon, which is how you can also join our Discord, have a voice on who I invite, even submit questions for the guests, and otherwise just show appreciation. Thank you to my Patreon supporters, and thanks to The Transmitter for their continued support. Show notes are at braininspired.co/podcast/203. Here's David.

[transition]

Paul Middlebrooks

I had the thought in reading your book, and I'll hold it up here. We're going to be talking about concepts from The Complex World. My overarching goal, I think, in our discussion is to get a feel personally for how to think like a complexity scientist. This book is deceptive, David, because, I don't know if you can see, it looks fairly thin. It's elegantly and concisely written. I don't know if I read the whole thing, because now I have to go back and read it much, much more slowly; there's so much material in here. It's interesting because it is out before the four volumes containing the foundational papers. Oh, you have physical copies already.

David Krakauer

Yes. If you're interested, Paul, I'll tell you what happened. A few years ago-- It's a long story. Many people, or your listeners, might know SFI through its books, those brown and red books, if you remember, from the '80s and '90s: Christopher Langton's Artificial Life, Anderson, Arrow, and Pines on the economy as an evolving complex system, and on and on it goes.

We've always published and been interested in communicating complexity science right from the beginning in the '80s, but we decided to bring a lot of that in house and have our own press, as opposed to working with McGraw-Hill or Oxford or MIT, all great presses. We wanted the authors to be closer to the publishers, and we wanted to make the books more affordable. The big project of the press is the four-volume Foundations of Complexity Science spanning 100 years, which you see behind me, those beautiful yellow-- Those are the first three volumes; the fourth comes out in December.

It's just under 4,000 pages. It grew out of asking the community, what is complexity science? If you had one paper or two, whatever, that you thought were absolutely archetypical of this endeavor, what would they be? We amassed tons of suggestions, and it was surprising how concentrated they were. Then we asked each person who was expert and had recommended a particular paper to write the history of that paper and its enduring impact. That's what those four volumes are: just under 90 papers, all placed in historical context and annotated. I wrote an introduction to those four volumes, because I thought, “Oh, my God, what is this thing? What has come together here?”

Paul Middlebrooks

It's more than an introduction, though. One of the things that you do is reference the people who wrote about the papers, who annotated and introduced them. You're giving a roadmap of a roadmap to the papers, in one sense, but it's more than that.

David Krakauer

Thank you. Exactly. That's the history. I thought I'd write the introduction, then I realized I'm not really writing an introduction, because it got out of hand. To your point, each of those papers represented a perspective on the complex world. How so? Where did they come from? How did they influence each other and so on. In weaving that tapestry, I wrote a little book. I didn't expect to, that was not the plan, but it became black hole dense.

Paul Middlebrooks

That's a good way to put it.

David Krakauer

I decided to keep it that way. The reason it's actually published as a separate book is a funny story: it is the opening part of the four volumes.

My colleague, actually, Sean Carroll, at Hopkins, was giving a course on complexity with Jenann Ismael. They said, "We were looking for a book for the course, and then we realized we're just going to use your opening introduction, but the students don't want to buy all four volumes." I said, "Oh, Sean, I'm just going to do that. We'll publish the opening introduction as a separate book." That's the history of it, to make it available for students.

Paul Middlebrooks

That's interesting because I was envisioning how this book could be used. I was envisioning a dedicated study group. If I formed one of those, a foundations of complexity science study group, should I use your book? Because all of the papers are in chronological order in the foundations, and I can imagine going through every single paper with the annotations, but that would take a really long time. What would you recommend if I were going to do something like that? How should I approach it?

David Krakauer

I would recommend just that.

Paul Middlebrooks

Oh my God.

David Krakauer

Because if you felt like it, read my shorter book first.

Paul Middlebrooks

I think that's essential, because then you get the whole context, and you can go through chronologically, which you actually probably took pains over, figuring out how the papers are related to each other and putting them in order.

David Krakauer

Again, the main point here was not only to organize the papers, but to ask where they came from in the 19th and late 18th centuries.

Paul Middlebrooks

What was your eventual task, your eventual goal, in writing the book? Because you set out to write this introduction, and then it became something more. What did you hope to achieve?

David Krakauer

The main point was, what is a field of inquiry? What is a domain of knowledge? What does it mean to be expert in X? What is a neuroscientist?

What is a particle physicist? What is a complexity scientist? I was interested in that almost as a meta question. What you realize when you go down that path is how difficult that is to answer in a thoughtful way. You can say silly things. Neuroscientists study the brain, okay, whatever. Physicists study the universe. They're not particularly informative answers. One question was, what is this paradigm that we call complexity? A lot of people, and many books, confuse it with methods. That really drove me nuts.

Paul Middlebrooks

I'll ask you more about that.

David Krakauer

It's important because the methods matter. It's a very difficult thing, but let me just give you a couple of examples. If you said to me or asked me the question, "What is quantum mechanics?" and I said, "Oh, you know what it is? It's functional analysis, linear algebra." You say, "What's general relativity?" I say, "Oh, that's the calculus of variations and differential geometry." They're important. They're absolutely foundational, the study of tensors and so on, but they're not the problem. They're not the conceptual issue.

Part of it was, what is the relationship between the technologies of knowledge and methodologies and the ontology, the domain of inquiry? They are deeply entangled, as I think you're alluding to, in very interesting ways, particularly in the complex domain. I wanted to resolve that. I had to go back and establish where all this started. In a nutshell, the phrase I use is: just as modern physics and chemistry have their roots in the scientific revolution of the 17th century, complexity science has its roots in the industrial revolution of the 18th and 19th centuries.

We study machines, and we study machines that were made, engineered, or evolved. Understanding that kind of matter, as opposed to the ordinary matter of physics and chemistry, is the nature of complexity science.

Paul Middlebrooks

Recent philosophical works, and older ones, have debated-- I'm throwing us into a tangent already, I apologize --or rather pushed back on the idea that organisms could even be equated with machines. Now you're saying that organisms are evolved machines. I just want to clarify, do you view organisms as machines, or do you see a distinction?

David Krakauer

I have such a capacious definition, just in terms of mechanisms that perform adaptive work that are metabolically fueled. I'm willing for that to be an ecology or a machine. I don't mean necessarily classical. I'm not talking about watches and grandfather clocks, but I am talking about mechanisms that do work, and the work is dependent on certain internal degrees of freedom that produce motion that we view as informational or computational. It's quite capacious, and I'm willing for those machines to be distributed, to be organic, and to be very noisy.

Paul Middlebrooks

You have an inclusive definition of machine, then, it sounds like, which is fine. Just a moment ago, you talked about what it is to be, let's say, a quantum physicist, which is a different question from what quantum physics is, which you also asked. I know you bristle at the question, what is complexity science? Right when you were talking, I thought, oh, well, maybe a better question is what is a complexity scientist, or at least you could wrangle a better answer.

One of the things that struck me about reading your book about complexity science, one of the many, many things, is it fair to say that one goal of complexity science or scientists is integration rather than unification?

David Krakauer

Oh, that's interesting. There's so much to say here. Let me make a slightly different point and then edge into that question in terms of the development of a mind that is interested in this problem. In my life, I make distinctions between two kinds of scientists in their early formation.

The first kind I call foveal. These are the people who looked at the stars when they were 12 and said, "I have to be a cosmologist," or they saw a suffering animal and they said, "I have to be a vet." They looked directly at their target and established an ambition.

The other kind, I think, are peripheral vision scientists. They see a pattern in their peripheral vision. As you know, peripheral vision is really crap.

You're not sure what you saw. It's sort of diffuse, and it haunts you for your entire life, and you're constantly trying to bring it into focus. That peripheral pattern, for me at least, evolved into complexity science. I found it in writings by people like Doug Hofstadter and Martin Gardner and Margaret Boden, and so on, and then Nietzsche and Schopenhauer, quite frankly, who slowly gave me a sense that I wasn't hallucinating

Paul Middlebrooks 

Some validation.

David Krakauer

-a validation, a certain order of nature that isn't a pattern you can directly look at. If you say, I study the sun, or I study gas giants or nuclear fusion machines, that's one thing, but when you say, "I'm interested in that pattern that unifies what the brain does and what markets do, and what societies do," that's a harder description. I do think complexity science as an ontology, which I think I now understand much better, had that character.

Now, when you say is it synthesis or unification, that's a really interesting question. I think it's a bit of both, quite frankly, because it's synthesis because you're looking for the horizontal connections across domains, like economies or biology and so forth, but you are looking for the underlying shared principle, for example, the principle of information or computation or cognition. That's unification. I think it's a bit of both.

Paul Middlebrooks

I said integration, but I guess if you're looking for the common underlying patterns, that would be the synthetic part of it, I suppose.

David Krakauer

Yes. Synthesis can be merely comparative. You could write an associative book pointing out commonalities, but I think it would be more profound to say, or ask, where do the commonalities come from? That's also a part of this enterprise.

Paul Middlebrooks

Since you mentioned your own personal story, I was going to ask you this later, and I'll just ask it now. This is in terms of how the Santa Fe Institute operates, and maybe how it optimally operates.

Do you want a bunch of people who are studying complexity science, who are studying the science of complexity itself like you, or do you want people in their individual domains? Maybe some foveal people who have started to appreciate their peripheral vision over time, and appreciate what complexity science approaches have to offer within their fields? Is it better to have a bunch of those specialists who then can widen their view, or do you want everyone just studying complexity science, if that makes sense?

David Krakauer

Again, I don't know the answer to that question. It's very idiosyncratic. At the core, it has to be people who are looking for the fundamental principles of the self-organized selective universe. That is, we all study problem-solving matter, and the fundamental principles governing problem-solving matter. That has to be at the center.

You can have expertise in other fields, archaeology, linguistics, neuroscience, but the complexity focus has to be primary, not secondary, because there's a lot of rigorous technology and methodology that goes with that pursuit. Otherwise you'd be spending your entire life doing catch-up, because most of your expertise was in your domain, which is critical. Our expertise is the interstitial fabric that connects fields.

Paul Middlebrooks

How did that feel when you finally found that home? It had bothered you since an early age, going back to your peripheral vision analogy. The reason I'm asking is that I want to know how to approach my own field like a complexity scientist. I've had that same feeling. I don't think I'm a foveal person, but I think that's also hindered me in my specialties. How did it feel once you realized, "Oh, complexity science is my intellectual home"?

David Krakauer

It's worth mentioning the history briefly here, Paul, because it gives you a sense of where it came from. In the 19th century, many things were happening, but two are of interest to this conversation. One is we're building steam engines. Out of that came the science of thermodynamics and statistical mechanics. How do we build better machines of various kinds, machines that were revolutionary both in terms of engineering and in terms of economics?

At the same time, we were trying to understand patterns in natural history, post-Linnaeus--we're talking about Darwin and Wallace. Where does all this come from? Why is it that animals look a bit mechanical? An eye, a lung, a heart look a little bit like some of these machines that we're trying to build, just more efficient. All these theories start emerging, and essentially there are four, what I call pillars, that emerge in the 19th century.

Paul Middlebrooks

Entropy, evolution, dynamics, and computation. See, I read the book.

David Krakauer

You got it. Those are it. Everyone could ask these questions about the systems they were studying. Are they stable? Are they efficient? How much energy do they require? How are they engineered or evolved? What problems are they solving--what we call computation or logic? As you say: evolution, entropy, control and dynamics, and computation and logic. All of those people--Boltzmann, Maxwell, Clausius, Carnot, Darwin, Boole, Babbage, Wallace--were part of a society in constant conversation in the 19th century.

Paul Middlebrooks

You write in the book that this is before organized academic publication systems, which is where we all talk to each other now. We can say we all talk to each other and we see each other at conferences, for example, but you write a little bit about how these people came together and interacted, which is [crosstalk]

David Krakauer

And fought each other. Charles Babbage fought everybody.

Paul Middlebrooks

Is that right?

David Krakauer

Oh, yes. There's much to say about Babbage, certainly, a really important figure, but he was in correspondence with Charles Darwin, as was Maxwell, and in argument with him. They didn't agree on a lot of things. This is before the tyranny of metrics and the journal system. What was starting to happen, and really happened in the 20th century, which is complexity science proper, is that those fields started to coalesce. Those four volumes start in 1922, with Lotka, who said, "I want to combine Darwin with Clausius and Carnot. I want to do evolution and thermodynamics in one go."

That's the whole history. When you ask what it means to think like a complexity scientist, essentially, it means connecting the four pillars. That is the game. You're not allowed to think about something purely in terms of information. You have to think about the energetic implications. You have to think about its stability, its robustness, and so forth. What problem is it solving--is it a computationally hard problem or an easy one? Really, thinking like a complexity scientist is having the four fields a little bit under your belt.

Paul Middlebrooks

Have to be a little bit under your belt. That's a big ask as well.

David Krakauer

It's a big ask. You can tell right away in this community: you had better know something about theoretical computer science, something about statistical mechanics, something about nonlinear dynamics, and something about adaptive dynamics. You have to. You don't have to be expert in all of them, but any problem is rotated through those four pillars. That's what it means, I think, to think like a complexity scientist. We can talk about the history, but it's finding those principles that combine the four.

Paul Middlebrooks

You had just mentioned that people often equate complexity science with the methods, and how that's a mistake in other sciences as well. If you have all four of these pillars under your belt, each of these pillars has its own abundance of methods. I think that's where someone like me gets lost. It's almost a frame problem in complexity science: how do I know which methods to draw from, across all of these fields, to make the connections, if I only know a few from each field, for example?

David Krakauer

No, I think it's a totally reasonable conundrum that we all face. In the end, it's disciplined by the question you're asking. Let's take an example. The second paper in the foundations is the famous analysis of Maxwell's demon. This is the really extraordinarily surprising thought experiment suggested by Maxwell: that the second law is unlike all the other laws in physics. It's not a fundamental law, and it can be violated locally if you have an intelligent demon in your system. One could talk about that at length.

Paul Middlebrooks

He didn't call it a demon. Did he call it a demon?

David Krakauer

He did.

Paul Middlebrooks

Oh, but it wasn't coined Maxwell's demon until later. Is that--

David Krakauer

Yes, that's right. I think it was actually Lord Kelvin who called it Maxwell's demon. Maxwell called it an intelligent being, a discriminating observer.

Nevertheless, that weird conceptual sleight of hand that placed an intelligent entity absolutely at the foundations of physics became the field that we now call the thermodynamics of computation, which led to the quantum computation revolution. That was a good example of someone struggling to understand the nature of a supposedly fundamental law in physics--one that was not, in fact, fundamental, not based on conservation principles and symmetries as the other laws were--and realizing that the right way to think about it was through a computational, informatic lens.

To me, they were natural methods. The methods weren't developed, let's be quite clear. Information theory didn't exist until the '40s.
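To make the thermodynamics-of-computation connection concrete, here is a minimal sketch, not from the conversation, of Landauer's bound: the minimum heat that must be dissipated to erase one bit of information. The 300 K temperature is an illustrative choice.

```python
import math

def landauer_bound_joules(temperature_kelvin: float) -> float:
    """Minimum energy dissipated to erase one bit: k_B * T * ln(2)."""
    k_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)
    return k_B * temperature_kelvin * math.log(2)

# At roughly room temperature (300 K), erasing one bit costs ~2.87e-21 J.
print(f"{landauer_bound_joules(300.0):.3e} J per bit at 300 K")
```

This is the quantitative heart of the demon story: information processing carries an irreducible energetic price, which is why the demon cannot beat the second law for free.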

Paul Middlebrooks

Going back, this is how complexity science was-- The pre-foundations, or the foundations, were inspired by technology and the machines coming along--to a degree, you needed those machines to even have these processes to study and understand. Is that right?

David Krakauer

Yes, that's a very good point. The whole concept of the second law comes out of Carnot's analysis of steam engines. How do I make an efficient thermodynamic cycle? That question wouldn't even have been asked otherwise. I make an efficient machine by minimizing heat dissipation. How do I do that? All of it follows from that question. You're right.

Paul Middlebrooks

I interrupted you with an aside. I thought it was connected to what you were saying.

David Krakauer

No, I just want to make the point that I think it's rarely the case that you start with the method. I think that you start with the question, and then-- In these papers, of course, because they're foundational, they develop their own methods. There was no chaos theory before Ed Lorenz invented it. Henri Poincaré had made the observation, based on his analysis of the three-body problem, that there was this thing in deterministic systems--it was a bit of a shock--that they weren't perfectly predictable without perfect precision of measurement.

Lorenz then takes that to meteorology, again, very interesting history or very rich history, and has to develop techniques for the analysis of chaotic systems. Nearly all of these papers are making methods. Not just applying them. That's not unlike the history of physics. Leibniz and Newton had to invent the calculus. Network theory, as we now know it, which is what happens when sociology meets statistical mechanics, was invented to deal with systems that are naturally described as network systems.

At a certain point, what happens is something a little bit decadent, perhaps the method moves to center stage and then just gets overused and over deployed and becomes the thing itself as opposed to the instrument for understanding the thing itself.
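As an illustration of the Lorenz point, here is a minimal sketch of sensitive dependence on initial conditions. The standard Lorenz 1963 parameters, the simple Euler integrator, and the 1e-8 perturbation are all illustrative choices.

```python
# Two Lorenz trajectories starting 1e-8 apart diverge until they are
# effectively uncorrelated: deterministic, yet unpredictable in practice.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)  # perturb one coordinate by a hair
for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {gap:.3e}")
```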

Paul Middlebrooks

I'm just jumping way ahead now and then we'll come back, but where is complexity science in terms of-- Is complexity science continuing to evolve and develop new methods, or is it in danger of the methods becoming so centralized that it could be mistaken for the methods? I know all sciences are continuing on and developing new methods when they need to, but it seems like complexity science is so fluid and evolvable that it might-- essentially what I'm asking is where are we now in complexity science?

David Krakauer

I don't know. I think it's right at the beginning. I think we're right at the beginning. There are fields--string theory is a good example--that nominally started with an effort to resolve contradictions between discrete and continuous formalisms in theoretical physics, and then became the method, dominated by mathematicians, not physicists.

Now it's drying up because at some point, people woke up and realized that they weren't answering the questions. They were just building more and more elaborate techniques.

I think, again, you have to look at the history. Let me just give you an example of why I think we're at the early phases. One of the papers that we include in this volume is the canonical McCulloch-Pitts paper on neural networks. This is the paper that established the entire field in 1943.

Two extraordinary people, weird people

Paul Middlebrooks

Weird people.

David Krakauer

-as you know, Walter Pitts, child prodigy, homeless, writes letters to Bertrand Russell when he's 12 or 13, or whatever it was, and gets replies inviting him to Cambridge. [chuckles] Russell not realizing that this kid who had been pointing out errors in the Principia was homeless--and then Pitts doing the same thing by auditing Carnap's lectures at the University of Chicago.

Then Warren McCulloch, trying to discover the psychon, the elementary atom of psychological processing. These two come together to try and develop a formal logic based on thresholded units--what we now think of as a neural net--and in the process make all sorts of criticisms, still valid to this day, that haven't been addressed since 1943, particularly issues of circular causality.

Paul Middlebrooks

Wait, you just said that they make criticisms [crosstalk]

David Krakauer

Yes. It's very interesting, that's a-- In that paper, they make many points about what these kinds of machines, in my sense, can do and what they can't do and what their future problems will be. One of them is in these neural networks that are recurrent, it's very difficult to establish causality.

How will we understand them if we've got circular causality in millions of units, which of course is the problem of today, the problem of interpretability of neural networks? This is 1943.
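To make the thresholded-unit idea concrete, here is a minimal sketch of a McCulloch-Pitts neuron, with the self-feedback case that raises the circular-causality worry. The weights and thresholds are my own illustrative choices, not taken from the 1943 paper.

```python
def mp_unit(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fire (1) if the weighted sum of binary
    inputs reaches the threshold, else stay silent (0)."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Simple logic gates fall out of threshold choices.
AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)

# The circular-causality worry: feed the unit's output back into itself,
# and its current state depends on its whole history, not just its inputs.
state = 0
for t, external in enumerate([1, 0, 0, 0, 0]):
    state = mp_unit([external, state], [2, 1], threshold=1)  # self-excitation latches on
    print(f"t={t} input={external} state={state}")
```

A single transient input switches the looped unit on forever; with millions of such loops, reading causes off the wiring becomes exactly the interpretability problem David describes.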

Paul Middlebrooks

I remember them saying something about-- you would know the phrasing better than I, but with their drawings, and they do put recursive loops in there. Then I feel like they punted and said, "These, of course, are in the system and will have to be addressed at some point." [laughs]

David Krakauer

Yes. Essentially, that is what they say. They put it very floridly and beautifully.

Paul Middlebrooks

Sure.

David Krakauer

Warren McCulloch had a very colorful language. That paper really has come into its own in the last 10 years, and there are many other papers like it.

That's what I mean when I say it was the very beginning, because the interpretability problem--trying to understand the logic of large collections of decentralized thresholding units--is still with us. Even today, we're mostly dealing with non-circular causality; most of these systems are strictly feedforward, autoencoders. But I don't think we actually have the methods or the frameworks for analyzing such systems. They're just being developed as we speak.

I could go through all of these papers--even deterministic chaos. There's so much current debate about the free will problem, and I've talked about it myself at length--so confusing, and based on a really crappy misunderstanding of chaos and even quantum mechanics and so forth. Whilst we have expertise in a number of fields, it does feel like a series of disconnected islands without bridges. That's what's gratifying: again, the history is the attempt to build them.

Paul Middlebrooks

You mean within complexity science or the paradigm of complexity science or within the specialties?

David Krakauer

Both, quite frankly. Both.

Paul Middlebrooks

Again, I want to ask what the current challenges-- We'll come back to current challenges. Let's stay in the history. All right. Take us back then. You had the pre-foundations, and then in the 1920s, you start to generate-- what, with the advent of people like Turing and lots of other people thinking about how machines work, and then applying that to how maybe biological organisms work? In the early 1920s, it starts to gather these disparate parts and try to make sense of them together. What else did that early landscape look like?

David Krakauer

I think that you have to remember how much happened in the '40s and '50s--and this is just volume one of those four. You have Shannon, information theory; you have Turing, computation and the imitation game; you have Nash, game theory; you have Weaver. All of this is happening. Interestingly, complexity, as a phrase that somehow captures this constellation of concepts that the four pillars circumscribe, was first articulated in 1948 by Warren Weaver.

Paul Middlebrooks

That was Weaver.

David Krakauer

Weaver explicitly wants to make a distinction between simplicity, what in the book I call the world of symmetry

Paul Middlebrooks

Symmetry, yes.

David Krakauer

-and determinism and so forth, and disorganized complexity, which is the world of Boltzmann, Carnot, and Clausius--statistical mechanics, gases, formally disordered states--both of which we know how to describe mathematically. One we average and treat with ensembles, and one we treat with classical differential equations. In the middle is the world that he called the world of organized complexity. The things that just seem irreducible, that we are constantly struggling with. That's societies and brains and the natural world and ecosystems and all the rest.

Weaver says that's complex, somewhere between those two extremes. In that middle, we need new methods. It was both ontological and epistemological in 1948. The point he makes is, "We need new forms of computation to study that explicitly." He talks less about new kinds of mathematics, which it turned out we did need. That paper is very prescient, and it established the field; by the 1970s, everyone was talking about complexity. Now, at the same time, in the '40s, Rosenblueth and Wiener are inventing cybernetics. Cybernetics has a legitimate claim to being the embryo of what developed into complexity science.

Paul Middlebrooks

Is that because of the emphasis on a-- not autopoietic, but on agency?

David Krakauer

I think on agency, yes. And autopoiesis, the 1970s? Yes, very much so, because what the cybernetic framework did is it said the objects have objectives.

Interestingly, William James, in The Principles of Psychology--1890, whenever he wrote that book--makes this very interesting distinction between laws from in front and laws from behind. James suggests that the defining characteristic of all mental phenomena is that they follow laws from in front, meaning they have purpose.

They're not being driven by the laws from behind--that's physics and chemistry, what we would think of as bottom-up. There's something peculiar about psychological, mental phenomena, which is that they start with a desire, they start with a goal. Cybernetics was the mathematical and engineering solution to the William James question. Of course, it came out of radar tracking machines and all that stuff in the war, and was generalized to the study of, in some sense, self-maintaining organizations whose parts are integrated through the sharing of information.

Wiener set about trying to develop the framework that would allow him to address that question. Of course, somewhat unsuccessfully, it morphed into control theory, and complexity science branched off in a different set of directions.

Paul Middlebrooks

Had Wiener stuck to his original emphasis and goal, he might have been more of a forefather, I hate that term, but more of a progenitor of complexity science. You started by saying that, in some sense, we can trace complexity science back to cybernetics.

David Krakauer

The problem is he became obsessed with feedback.

Paul Middlebrooks

Nothing wrong with that, but

David Krakauer

Nothing at all wrong with that. It's one of the four pillars. It's the control-dynamics pillar. Again, that has a fascinating history going back to Maxwell's work on governors, which regulated power in steam engines, and which Wiener discovered--actually, rediscovered. It was one pillar, and a little bit too much was made of it: everything is about feedback and the maintenance of state and the relationship to the notion of homeostasis.

There's all this other stuff going on that was just as interesting, let's say, about adaptation and computation and change--not stasis, not just tracking targets, but making them. He was a bit too much of a monomaniac, I think, on this feedback loop concept, and missed a lot of other interesting material.

Paul Middlebrooks

This goes back to how to think like a complexity scientist, because monomania heralds great discoveries as well. When you become transfixed on something, even if you have your blinders on, you're going to study that thing in great, great depth, and that leads to discoveries, maybe, in that narrow field. If I want to be a complexity scientist studying-- I study the brain and behavior. For example, how do I know how much time to spend studying feedback control before moving on to self-organized systems and then moving on to autopoiesis? How long do I spend on each? What is the perfect trajectory, in terms of when to start integrating concepts and methods from these different fields?

David Krakauer

Let's take an example. Imagine that Norbert Wiener had asked himself what would happen if two agents were engaged in mutual feedback. He would have been forced to start thinking about things that von Neumann, Morgenstern, and then John Nash were worrying about--what we now call game theory, the theory of strategic interactions. It's another, higher-order stability concept that comes from reckoning with multiple agents interacting. That's just one example.

He got stuck with a single agent in an environment with, say, a moving target. I think the frameworks suggest themselves by virtue of asking the next logical question, and then you have to go and retool to try and address them. I do think it's the question that prompts the expansion of your inquiries.
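As a minimal sketch of the strategic-interaction idea, here is a brute-force check for pure Nash equilibria in the Prisoner's Dilemma. The payoffs are standard textbook values, chosen here purely for illustration.

```python
C, D = 0, 1  # cooperate, defect
# payoff[(row move, column move)] = (row player's payoff, column player's payoff)
payoff = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def is_nash(r, c):
    """Neither player can gain by unilaterally switching their move."""
    row_ok = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in (C, D))
    col_ok = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in (C, D))
    return row_ok and col_ok

names = {C: "cooperate", D: "defect"}
for r in (C, D):
    for c in (C, D):
        if is_nash(r, c):
            print("Pure Nash equilibrium:", names[r], "/", names[c])
# Only defect/defect survives, even though mutual cooperation pays both players more.
```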

Paul Middlebrooks

Often the next logical question is only logical in hindsight, or obvious in hindsight. That's just a question of creativity, I suppose, maybe more than logic. It's only logical in light of what you know about complex systems and what makes them interesting. If you don't know that already, it's hard to see where that next logical question is from that context, perhaps.

David Krakauer

Let me give you another example, which brings in John von Neumann, not in game theory, but in his theory of automata. I think it's quite natural.

Von Neumann is working at the Institute for Advanced Study and building the MANIAC, based on the ENIAC in Philadelphia, to do numerical meteorology and ballistics. These machines are very unreliable; they keep breaking down. You're constantly having to replace parts. Von Neumann asks, "How do you achieve robustness in noisy computational systems?"--of the kind that Wiener was positing to solve problems of feedback control.

Wiener wasn't worrying about the fact they were falling apart. He was theorizing about it. Von Neumann was building a computer that was falling apart. He says the only way to ensure continued operation of a system with that many parts is that the parts replicate.

Paul Middlebrooks

Oh, I thought you were going to say redundancy.

David Krakauer

Yes, he wrote a very famous paper on essentially redundancy or robustness in probabilistic automata. Fault tolerance is what we would call it now.

You can do it that way, but that only takes you so far. At a certain point, you need to replenish it. The way life replenishes it is it replicates parts.

Von Neumann suddenly realizes, "You know, I thought I was just trying to build reliable computers. What I was really after was a theory for the origin of life."

Because it's von Neumann, he doesn't stop and say, "Oh, I'm not going to go down that path because look at all the things I'm going to have to learn about biochemistry and so on." He does go down that path and invents an entirely new theory, the theory of universal constructors, which has only in the last decade proven its reach again, with the work of people like David Deutsch and then my colleagues Sarah Walker and Lee Cronin on the assembly index and so on.

That's a beautiful example of seeing the problem, then daring to pursue it, and not just saying, I don't have the time or the skills. Maybe thinking like a complexity scientist is kind of-- I don't know if it's an immodesty or a bravery or a recklessness that says, I am going to go down that path because I know someone has to.

Paul Middlebrooks

I hate to bring this in, but then people have to worry about their careers as well. I know at SFI--maybe you've used the term maverick in the past to describe people's personalities, what kind of people fit in at SFI, although I know it's a wide range of people. Then, if I go down all of these different rabbit holes, spend a little bit of time in each of them, and ask the right questions, I have to worry about my career. Right?

David Krakauer

I'm slightly less sympathetic. I'm the worst person that way.

Paul Middlebrooks

I'm just echoing what I think people might be thinking.

David Krakauer

No, I know. I know people do think this and I get it. I do think there's something a little wrong with the world, to be honest.

Paul Middlebrooks

Yes, but that's the way it is.

David Krakauer

Then it's our job to defy it. I have to say at a certain point, we only live once and we don't live very long. I think you should dare to go down that path. The reality is, Paul, that if you're sincere and you really work at it, chances are you'll do work as interesting as conventional work in your own field. You might not be a Lorenz or a von Neumann or a Nash. Most of us are not, but you could still do good work down that path. That's been my experience.

It's not that it's a totally reckless jumping-off-a-cliff move. It's just a lateral move. Most people have done very well, surprisingly, and maybe not surprisingly because the territory has not been saturated with other scientists. Even if you're not doing the best work, you're making discoveries because there's no one else in the same room with you. I think there is a kind of safety, weirdly enough, in moving into underexplored territories because you're not competing with a million other people. It's a tradeoff. You don't have the security of as many peers, but you have the benefit of an unplundered environment. I think they might even out.

Paul Middlebrooks

I'm sorry, I'm just going off the top of my head here with questions, but this made me think: you have these four volumes of the foundational papers. What role does survivorship bias play in the complexity sciences? If I go laterally, maybe I'm not jumping off a cliff, but maybe I take a misstep and it leads me down a road that's not going to get me into the foundational papers volumes. Is that a problem--or not a problem, but a phenomenon--within complexity science as well?

David Krakauer

It's a phenomenon in all sciences. Most of us will be completely forgotten. It is the case that these are the papers that prove to be of enduring value.

It's worth asking, were they attended to in their time? You mentioned autopoiesis a couple of times, that's 1970s, that's Maturana and Varela. That was completely ignored. It was considered new age nonsense for decades.

Paul Middlebrooks

Was it decades? Has it only in the past decade or two come back?

David Krakauer

Yes--people like Randy Beer and others at Indiana have really been championing that worldview. Honestly, Paul, even now, if you went into your neuroscience conference and mentioned it, they'd slap you about the face.

Paul Middlebrooks

I don't go to neuroscience conferences. I don't want to get slapped.

David Krakauer

I think even now-- What they did--and it's interesting to talk about in light of von Neumann--is they said, life is not about self-replication. It's about self-synthesis.

It was an interesting move. It was very much in the universal constructor lineage. They used this language, which was quite unfamiliar to people: the concept of autonomy and self-maintenance, and the issues of boundary that only now, through people like Judea Pearl and Carl Friston--all this idea of Markov blankets in the Bayesian language--have become familiar. It was actually present in their language, but it was early language. It felt a bit odd and unfamiliar.

Again, I think they were way ahead of their time. People like Niklas Luhmann, the German sociologist, used their work to interpret societies, building on people like Durkheim and his idea of a society as an irreducible aggregate of individuals--it's greater than the sum of its parts. I think it's fair to say that a lot of papers in this volume, certainly things like von Foerster's theory of self-observation--that is 30 years before Doug Hofstadter talking about strange loops--people thought were silly stuff, Age of Aquarius speculation.

Part of the problem is, we were so good at reductionism, so good at simple causality, that anything that was decentralized and collective, with complex causality--the summation of many, many important factors--attracted a holistic language prior to the development of its methods, and was marginalized as new ageism. It's there with Alexander von Humboldt and his early theories of ecology, but it's very empirical; it's based on the observation of the natural world. Early attempts to formalize these ideas, in things like synergetics or general systems theory--a lot of people thought they were just a little strange.

Paul Middlebrooks

Are you happy with the term complexity science?

David Krakauer

I am because it does two things for me. One is it's opposed to simplicity and reductionism of a certain kind, that there are ways of knowing without taking everything apart. There's that bit of complexity.

Paul Middlebrooks

Can I guess your next point? I'm sorry.

David Krakauer

Yes, I guess.

Paul Middlebrooks

Are you going to use the word pluralism in this next sentence?

David Krakauer

No, I wasn't, but I can.

[laughter]

David Krakauer

The other one was much more about the development, also in these four volumes, of what we now think of as algorithmic information--Kolmogorov, Solomonoff, Chaitin--and this idea, developed subsequently by people like Rissanen and others, that complex phenomena are incompressible phenomena. Another way of saying that is that they break lots of symmetries and they have long histories.
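One way to get a feel for the incompressibility idea: Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a rough upper bound. This minimal sketch, with arbitrary illustrative string contents and lengths, shows an ordered string collapsing under compression while a random one barely shrinks:

```python
import random
import zlib

periodic = ("ABCD" * 2500).encode()  # highly ordered, 10,000 bytes
rng = random.Random(0)
noise = bytes(rng.randrange(256) for _ in range(10_000))  # effectively incompressible

for name, data in [("periodic", periodic), ("random", noise)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:>8}: compressed to {ratio:.1%} of original size")
```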

Paul Middlebrooks

Can we pause on broken symmetries? It's one thing I realized-- You know when you're reading a book and you read the same phrase over and over, and then you get halfway through the book and think, "I'm not sure I have a great grasp on what that phrase means"? Broken symmetries is one of those phrases. I went back and tried to find a little more detail on it. What do symmetry, and therefore broken symmetry, mean? If you could just expand on it a little, and on why it's important for--I don't want to use the word emergence yet in our conversation--why it's important for complex systems.

David Krakauer

I'll tell you first why the opposite, symmetry, is important. Symmetry is the foundation of all physical theory. You can think of it as symmetry of process--the time symmetry of the equations of motion. That gives you, through elaboration, all of the forces in the standard model and so on. A lot of the gauge theories sit on fundamental symmetries in the processes.

Then there's symmetry of outcomes. There are alternative states you can be in, but you're as likely to be in one as the other. That's true at very small scales--we'll talk about that in a second; that's where things like the renormalization group and so on kick in. So there's the symmetry of configurations versus the symmetry of the fundamental equations and laws. Basically, physics all comes out of that. Now, it's been known for a long time, in physics and beyond, that in certain processes, whilst there is a symmetry of outcomes, once you enter into one of those states, you get stuck in it. You can get stuck in it effectively forever.

Very famously, one of the founding texts--I think it's in volume two--is the 1972 paper by the Nobel laureate Phil Anderson, "More Is Different."

What Phil starts with in that paper is the following thought experiment. He says, take a simple molecule like NH3, ammonia. Ammonia has two configurations. It's a pyramid, and those pyramids invert. They go [imitates sound]. It's small enough that it fluctuates between the two configurations, such that if you were to observe the system naturally, you'd find it in one state 50% of the time and in the other state 50% of the time. You have symmetry of outcomes.

If you take a slightly larger molecule, just more atoms, like PH3, phosphine, that also has a pyramidal structure that oscillates, but very slowly.

Paul Middlebrooks

Still quite fast, but slowly relative to the NH3.

David Krakauer

Yes. Basically, the energy barrier to move between the two states is now high enough that you basically stay where you started. As molecules get larger and larger, the underlying physical laws that give you symmetry of outcomes become useless, because now you end up where you started. Now, why does this matter? In a very famous paper on what he called principles of invariance and symmetry, Eugene Wigner said that all physical law tries to do two things. It tries to come up with a very parsimonious set of processes and a very small set of initial conditions.

The processes are the thing we understand, usually based on symmetry, and the initial conditions are things you don't understand. You have to assume them. What Phil was saying in 1972 in the "More Is Different" paper is that once things get large enough, almost all the information you care about--the information that allows you to explain the observable--is in the initial conditions, the thing you know nothing about. Now, Darwin's theory and other theories like it are essentially theories that try to explain the history of initial conditions. We can get into that. It's a very profound observation. Broken symmetry is when the state that you find in the natural world cannot be explained by the fundamental law, but by something you're ignorant of, namely the initial condition.
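Here is a minimal numerical sketch of that getting-stuck picture. The potential, noise level, and barrier heights are illustrative, not fitted to ammonia or phosphine: a noisy particle in a double-well potential V(x) = h*(x^2 - 1)^2 hops freely between wells when the barrier h is low, like NH3, and stays in its initial well when h is high, like PH3.

```python
import math
import random

def count_well_hops(barrier, steps=200_000, dt=1e-3, noise=1.0, seed=1):
    """Overdamped Langevin dynamics in V(x) = barrier * (x^2 - 1)^2."""
    rng = random.Random(seed)
    x, side, hops = 1.0, 1, 0  # start in the right-hand well
    for _ in range(steps):
        force = -4.0 * barrier * x * (x * x - 1.0)  # -dV/dx
        x += force * dt + math.sqrt(2.0 * noise * dt) * rng.gauss(0.0, 1.0)
        if x * side < -0.5:     # settled into the opposite well
            side, hops = -side, hops + 1
    return hops

for h in (1.0, 20.0):
    print(f"barrier height {h:5.1f}: {count_well_hops(h)} well-to-well hops")
```

With the low barrier, which state you observe is a 50/50 matter; with the high barrier, the observed state is fixed by the initial condition: a broken symmetry in miniature.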

There are many ways you can get broken symmetries, and we can get into that. I'll give you one very simple example and stop. If you think about a DNA molecule, it's made of four bases. The sequence of bases matters because they make proteins that are functional. With respect to the laws of physics, you could permute it completely; it makes no difference. It's all one molecule, with four to the N possible configurations, each of which is essentially equally likely by the fundamental laws. But wait a minute--most of those sequences of A's, C's, G's, and T's are rubbish. Only a tiny subset actually make functioning proteins.

Paul Middlebrooks

Junk DNA. That's a side topic.

David Krakauer

That's another issue, the issue of junk DNA. I'm just making the point that broken symmetry matters because, when it comes to problem-solving matter, the only way to really explain the functional configurations is to look at their history, not the fundamental laws.

Paul Middlebrooks

In that example, the symmetry breaking is the sequencing of the molecules themselves?

David Krakauer

It's the sequence you find at multiplicity. Meaning that particular DNA sequence is found across all organisms, from flies to humans. Why that one?

Physics doesn't tell you why. Physics says any of them could be found. They're all compatible with the laws of physics. You have to come up with a special story, which is exactly what Eugene Wigner and physics doesn't want, because it wants it all to come down to the fundamental laws.

Paul Middlebrooks

Anything that's not a fundamental law is a result of a broken symmetry. Is that so?

David Krakauer

Any persistent state where the observation of that state cannot be explained by the fundamental law is going to be evidence of a broken symmetry.

You walk out of your door, you go left or right. Physics doesn't know which way you're going to go. You choose to go right because I happen to know you need to buy a new pair of socks.

Paul Middlebrooks

That's because I had free will, but okay.

David Krakauer

That's another issue. I need to know your history or your internal state to know that. Anyway, it turns out to be the foundational concept for all complex phenomena from DNA molecules to transistors, because these can all-- if you think about Hopfield, who just won the Nobel Prize in physics of all things--

Paul Middlebrooks

Oh, I wanted to ask you about that. Maybe we can come back to it.

David Krakauer

I'll come back to that. What he showed--the reason it's important, the reason he won in physics--is because he was working on spin glasses. The point about spin glasses is that they can store tons of broken symmetries. They have lots of ground states. It's absolutely crucial. Parisi, who won the Nobel Prize before him, won it for what he called replica symmetry breaking, which is why Hopfield's model works. This concept is everywhere once you start looking for it.
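For a concrete picture of "lots of ground states," here is a minimal Hopfield network sketch; the two stored patterns and the network size are illustrative choices. Hebbian weights make each stored pattern a stable state, and threshold updates pull a corrupted cue back to the nearest one.

```python
import random

patterns = [
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
]
n = len(patterns[0])

# Hebbian learning: W[i][j] = sum over patterns of p[i] * p[j], zero diagonal.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns) for j in range(n)]
     for i in range(n)]

def recall(state, sweeps=5, seed=0):
    """Asynchronous threshold updates until the state settles."""
    rng = random.Random(seed)
    state = list(state)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):
            field = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state

cue = list(patterns[0])
cue[0], cue[1] = -cue[0], -cue[1]  # corrupt two bits of the first pattern
print("cue:     ", cue)
print("recalled:", recall(cue))    # settles back onto patterns[0]
```

Each stored pattern (and its mirror image) sits in its own energy minimum: many coexisting stable states, which is the spin-glass property David is pointing at.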

Paul Middlebrooks

I don't know if we want to bring up emergence right now. You do spend some time--actually, it's one of the whole parts of the book--talking about emergent properties and emergence. The last time we spoke, you wanted everyone to cool their jets about emergence being some spooky thing.

You talk a little bit about this in the book. Then you have at the very end, you talk about compilers as a sort of solution to thinking about emergence.

Maybe we can come back to that. I was going to ask--and stop me if you want to go somewhere else, because we can go anywhere--here's what I want to ask: is there anything that is not an emergent property of something else? I was leading into that with broken symmetries, because broken symmetries are fundamental for any emergent property.

David Krakauer

One way to say it--and again, I don't want to get too weedy--is, if you're in this world where the particular state that has been selected depends on history, meaning it depends on time, what are you going to do? If you were Eugene Wigner, you'd throw up your hands and say, we're done, there's nothing to be done. That's just the world of accidents--the world of frozen accidents, another name for broken symmetries. The physicists and the philosophers of physics have a pejorative phrase for this: they call them the special sciences. Like special education--anything not fundamental.

Paul Middlebrooks

I'm special.

David Krakauer

I'm with you. We're special. You can do something really clever, which is you can take those broken symmetries and you can aggregate them into what we call effective dimensions. One of them could be a cell. Then you can come up with cell theory. Or you can aggregate them into a particular cell called a nerve cell that has an excitable membrane, which you can then explain using an effective theory, Hodgkin-Huxley theory. They're not fundamental, but they're very coherent and consistent, and quite parsimonious.

This is the key concept: the fact of broken symmetries doesn't imply a world of mere description. Because if you aggregate them just right--which is what emergence is about--you can come up with theories that work at their own levels. That's what all science is, other than fundamental physics. That's why emergence is so important a concept, because the processes of emergence are what explain why disciplines other than physics have to exist in the world. It's really important.
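As a toy version of an effective theory that works at its own level, here is a minimal leaky integrate-and-fire neuron--far simpler than the Hodgkin-Huxley theory David names, and with all parameters chosen for illustration. It predicts spike times from a handful of aggregate variables, with no reference to the underlying ion-channel chemistry.

```python
def lif_spike_times(current, t_max=100.0, dt=0.1,
                    tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate dV/dt = (-(V - v_rest) + I) / tau; spike and reset at threshold."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spikes.append(round(t, 1))
            v = v_reset
        t += dt
    return spikes

print(lif_spike_times(current=1.5))  # constant drive produces regular spiking
```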

Paul Middlebrooks

You have an agglomeration of broken symmetries. This, if arranged just right, leads to an effective-- what, theory? Is that it, the effective theory? And that effective theory just means that you can say something about the causality of the way the system interacts with the world, or affects things, that abides at its own level. You don't have to appeal to quarks, for example; you don't have to reduce everything to microstates. That is an emergent property, an emergent system.

David Krakauer

Absolutely. Let me just demystify this. Let's again go back to a DNA molecule or an RNA molecule. That's a very complicated bit of chemistry there.

If you're doing diagnostics--medical diagnostics, genetic diagnostics, phylogenetic inference--you don't need to worry about the chemistry. You just say, just give me the letters. This has got an A there instead of a C. The A doesn't have any of the detailed chemistry, but it stands in for it, because the detailed chemistry maps in a consistent fashion to the letter A. The A captures everything you want to know, and you can back it out if you want to.

That ability to back it out if you want to says something about the chemical processes, because that's not true for everything. Not everything has that degree of coherence and stability in time. For example, your beliefs aren't as stable as that, our political institutions are not as stable as that.

Emergence has different properties at different scales and in different contexts. The fact that you can do useful scientific work with a list of letters, without going to the chemistry, is really interesting. I can tell you whether you have sickle cell by looking at letters.
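Here is what that looks like in practice, in a minimal sketch. The sequences are short illustrative fragments of the beta-globin (HBB) coding region, not full genes: the sickle-cell variant is a single A-to-T substitution that turns one glutamate codon (GAG) into a valine codon (GTG), and the diagnosis lives entirely at the level of the letters.

```python
normal = "ATGGTGCACCTGACTCCTGAGGAG"  # start of the normal beta-globin coding sequence
sickle = "ATGGTGCACCTGACTCCTGTGGAG"  # same region carrying the sickle substitution

diffs = [(i, a, b) for i, (a, b) in enumerate(zip(normal, sickle)) if a != b]
print(diffs)  # [(19, 'A', 'T')]: one letter is all the diagnosis needs
```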

Paul Middlebrooks

Although we invented the concept of sickle cell.

David Krakauer

We did. Interestingly, though, it's a coherent mapping from the chemistry to the category that works, that has utility. That's why we believe in it.

I'm sure we could have done it in other ways, but it really works. That's evidence of emergence. That is, again, coherent, coordinated dimensions of fundamental matter that can be labeled or tagged. Then you work with the tags. That is what emergence is all about. There are different mathematical ways of saying that, but that's the key.

Paul Middlebrooks

Then I go down the list of all the things that I can observe, and I don't find anything that I can say is not an emergent property of something else. Is there anything that is not an emergent property of something else? I'm sorry, this is an ignorant question, but I've run a short list through my mind at various times, and I thought, okay, it's just a--

David Krakauer

I'll give you some examples of things. This property--that you can do work with the aggregate variable, real work, demonstrable work in the world--is not true of most things that you measure. That's very important. There are many properties of the world that are aggregate properties of underlying microscopic things, but they don't show this emergence property. Let me give you one, a controversial example: game theory.

Game theory, for a long time when it was first being developed, was thought to be a normative, prescriptive model of human behavior.

We were going to use this simple model that John Nash and others developed to actually run political institutions. Mutually assured destruction was not a problem, because we could model exactly what rational actors would do when they're in possession of very powerful thermonuclear weapons. Many of these ideas were idealized into notions like cooperators and defectors. It turns out that the notion of a cooperator and a defector is actually not a label the way ACGT is with DNA. It's not a consistent, coherent mapping from psychological states, cognitive states, neural states. It's not.

It's an endlessly metamorphosing idea that doesn't have temporal and spatial stability.

Therefore it's not very useful. At a certain point, game theorists realized this is not a normative theory. It's a way of thinking about thought experiments more rigorously with math. That's a failure of emergence, and you find them everywhere. There are things that we think are real, we theorize with them, and they don't work. There's only a small number of things that actually have that consistent property, where you can screen off the lower levels and say there's not much more to say. Again, look at your field. Let's say I wanted to treat a psychiatric illness. Should I do it with psychoanalysis, or some behavioral intervention, or with pharmacology?

Paul Middlebrooks

Quarks, you should do it with quarks.

David Krakauer

We do it with quarks. Even worse, right? What is the right intervention into the system? These are questions of emergence. I think where our behavioral therapies fail, that failure suggests that the categories we've discovered are wrong. They're not truly emergent categories. They're actually arbitrary aggregations of microscopic degrees of freedom.

Paul Middlebrooks

For it to be emergent, it has to be pragmatic. It has to work.

David Krakauer

It has to work. It has to have this-- There's lots of language for this.

Paul Middlebrooks

It has to be effective. It has to have--

David Krakauer

It has to be effective. Yes. The technical term for when it doesn't work is sometimes a failure of entailment, and when it does work, it's sometimes called dynamically sufficient. The example I like to use is mathematical proof. If I'm proving a theorem, why don't I have to use neuroscience? Because it turns out that mathematics, the axioms and the deductive rules, is sufficient. They label consistently a certain kind of logic. You can operate with them. If they didn't, if you got up every now and then when you were proving a theorem and had an outrageous tantrum, I'd have said, no, I've got the wrong-- I need another kind of theory here. I need a theory of mind here. I need to go down a level.

Paul Middlebrooks

Why do you keep picking on neuroscience, David? That's unfair.

David Krakauer

That's your field. I'm not--

Paul Middlebrooks

I'm just kidding. I know that's why you're bringing it up. I appreciate it. We went into the emergence talk right before that. You were talking about stability, and we were talking about broken symmetries. If something stays in one condition for long enough, it is considered a broken symmetry. Then you hesitated at that "long enough," and I thought of what you write in the book, that one of the challenges in complexity science is dealing with time.

Is that because we need to think of everything in terms of time scales and what the right time scale is to think of it? If something flips back and forth quickly on our observational time scale, we can say that it's symmetric. If something has been to the right side for 100 years, on a geological time scale it can still be symmetric, but we might not observe it. Is that where time becomes a challenge in complexity science? Or what is the relationship of time?

David Krakauer

That's completely correct. That's exactly right. In the lifetime of the universe, everything's going to be symmetric because there'll be some kind of heat death or what have you, and we'll be fully thermalized in the symmetric state. You're completely correct. Time scales and time is deeply foundational in our thinking. The thoughts that we're having now are absolutely dependent on the time scales of chemistry. Action potential rates, aggregate circuit properties and so forth. You're completely right. It's why this actually introduces-- There's so much to say about this.

The notion of subjectivity in this profound sense that you just asked about, by which I mean the choice of time scales for these processes, is a key concept in complexity science. It was already a key concept in one of the pillars. When the concept of entropy was being formalized, it was understood that the value you calculated depended on what's called the coarse graining. For example, I can turn a six-sided die into a coin just by making half of the numbers heads and the other half tails. If you calculate the entropy of a fair coin, it's a different number from the entropy of a fair die.

One is the entropy of six outcomes, each with probability one-sixth, which comes to log 6, about 2.58 bits; the other is two outcomes, each with probability one-half, which comes to log 2, exactly one bit. That choice of what we call the coarse graining, that is, the aggregation of the probabilities, is subjective and depends on the time scale of observation, to your point. This is a field that's nascent now: how we really think about observer-dependent entropy calculations. And it plays into everything. That's one side of the time question.
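A minimal numerical sketch of that coarse-graining point: compute the Shannon entropy of the fair die, then aggregate its six microstates into two macrostates and compute again. The function name is just illustrative.

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair six-sided die: six microstates, each with probability 1/6.
die = [1 / 6] * 6
print(f"fair die:  {entropy_bits(die):.3f} bits")   # log2(6), about 2.585

# Coarse-grain: call {1, 2, 3} heads and {4, 5, 6} tails.
# Aggregating the probabilities yields a fair coin.
coin = [3 * (1 / 6), 3 * (1 / 6)]
print(f"fair coin: {entropy_bits(coin):.3f} bits")  # log2(2) = 1.000
```

Same physical die, two different entropies; the number depends on the observer's choice of aggregation.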

Another side, which is as profound and which people are perhaps not aware of, is that the concepts of past, present, and future have nothing to do with physics.

They have everything to do with observers. There is no past, present, and future in physics, at least not in classical physics. Actually, the right way to calculate them is to use the thermodynamics of computation, to use a computational theory.

Paul Middlebrooks

Because of entropy, is that--

David Krakauer

Because of measurement. It gets to some of the issues that McCulloch and Pitts were talking about. The reason why there is a past versus a future is because of things like the irreversibility of the logical OR function in a neural network. The mapping is not invertible. I can go forward in calculating OR but not backwards, because I don't know what the initial states are. They're ambiguous. The point is that all of these concepts that we use to explain complex systems depend on the limitations of the observer, whether that's a cell or, by the way, a neuron. Right?
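A tiny sketch of that irreversibility, with illustrative names: enumerate the preimages of logical OR. Running the gate forward is deterministic, but running it backward from a True output leaves three possible pasts.

```python
from itertools import product

def or_preimages(output: bool):
    """All input pairs (a, b) that logical OR maps to `output`."""
    return [(a, b) for a, b in product([False, True], repeat=2)
            if (a or b) == output]

# Forward: each input pair yields exactly one output.
# Backward: a True output is ambiguous between three histories,
# so information about the initial state is lost.
print(or_preimages(True))   # [(False, True), (True, False), (True, True)]
print(or_preimages(False))  # [(False, False)]
```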

Paul Middlebrooks

Again, picking on neuroscience.

David Krakauer

I'm not picking on it, actually--

Paul Middlebrooks

Actually, I know.

David Krakauer

I mean this sincerely, because I think it's a very key field, because it's about brain and mind. There's a special role of brain and mind in complexity science.

Paul Middlebrooks

Speaking of nested timescales.

David Krakauer

It's the field that worries about computation and observation. Chemistry less. I think it's at the nexus of some of these theories in a quite profound way.

Paul Middlebrooks

Let's see, we went from broken symmetries to emergence to time. Before we move on, and I still have questions, is there anything in the book that we haven't talked about yet that you would like to highlight before I continue asking you questions?

David Krakauer

There's so much. It's important to understand the role of complexity in reconciling, in the 20th century, the social and the natural sciences. Take the Austrian School of economics. I'm talking about Schumpeter and Hayek, and others. A lot of ideas that we're thinking about now in network science come from the '40s, namely how you generate knowledge in decentralized systems. That became an ideology, neoliberal ideology, but actually in the '40s it wasn't so much that. It was, how do you come to consensus when you have many constituents with partial information? Problems of consensus and coordination really come from social science and are now everywhere.

We tend to think of sciences in terms of going from mathematics and the natural sciences into the social sciences, but this is a case where the reverse happened. One of the things I've been interested in is this approach to the consilience of the disciplines, when you look very carefully at how ideas migrate. That was a bit of a discovery for me. I said, oh, general systems theory, which is now used in large organizations to build aircraft, came out of the work of Fechner in biophysics and psychiatry. The oddness of how knowledge actually comes together is an important part of what I talk about, because it's not linear. I think the educational system we have is misleading in a really profound way.

Paul Middlebrooks

The point that you were just making about understanding, I think you used the word consilience, but understanding where the knowledge began and then where it ended and how it traversed. That is a Herculean scholarly effort. Does it teach us anything about-- Does it teach the working scientist anything about how to go about their problem solving?

David Krakauer

I think it does, because of this weird fact that we're addressing now of the limitations of time. Very often, the seeds to solving a problem exist. As I said, I think there's something very unfortunate about the way we learn science and practice it. We all do this. We write papers, and by and large we cite contemporary work, because they're likely to be the reviewers or whatever, some cynical reason, but also because it seems more relevant.

Then we'll put in the occasional historical reference just to demonstrate our scholarly bona fides.

I have to say, in going back and reading these papers, and these are quite limited because the volumes span just the last 100 years or so, there are so many trails that weren't followed because they didn't have the methods then. In every paper, it's like one-tenth of it was realized, because they could only work with the methods their peers had. We now have methods that they didn't. You could go back 100 years and rewrite that paper in a completely novel form now, based on what we now know. Really profound work is super generative. I suspect you could win a Nobel Prize just hanging out in the 1930s and rewriting what everyone wrote.

Paul Middlebrooks

I was just thinking, what a wonderful exercise if you were an educator, to assign some of those foundational papers and have someone rewrite them through the modern lens.

David Krakauer

It's interesting. I was talking to a colleague here, the physicist Carlo Rovelli, about this. Carlo said he's going back and looking at all of the foundational papers in statistical mechanics and rethinking them.

Paul Middlebrooks

Life is short. We only have so much time.

David Krakauer

I know. I think it's less about being prescriptive about what one should do, and more about just suggesting that--

Paul Middlebrooks

There is richness in that.

David Krakauer

-there's super richness in deep ideas. It's worth bearing in mind that just one path of many was followed. We could live in a very different world if another path had been followed.

Paul Middlebrooks

One of the things that you do in the book is you list out what you call synoptic surveys. There have been a number of books over the years giving synopses of complexity science. Like you said before, there's a cluster of central ideas, but each book highlights a different one. One of the things you note among those is that there are common themes, but the earlier you go, the more the books tend to focus on the principles and the ontology.

Then as you traverse toward more current times, alluding to what you were talking about earlier, the books tend to focus on the models and the methods. You note that this is a good sign, because it's a sign of a maturing field, but you're also somewhat hesitant, because it's also a sign of what modern-day society demands of a mature field. What do you think about that current demand, let's say, from modern society?

David Krakauer

I should say, again, this book is, as you've seen, just full of tables. It's a crazy table book, because I wanted, given the constraints of length, to put as much in as possible. One way to do that is with tables. It's quite thin, but it's just got tons of--

Paul Middlebrooks

It's really thick.

David Krakauer

Yes, it's thick. If this had been written the way many books are now written, it would be 500 pages.

Paul Middlebrooks

Oh my God, yes.

David Krakauer

I think people write very long books when they shouldn't, quite frankly. I prefer density myself. You can look up all the other stuff on Wikipedia.

We don't need to re-say it over and over and over again. As you said, I really wanted to just list all the books that purport to be books about complexity science. People can go and read those for themselves and see what other perspectives are worth understanding. One thing I did notice, and you said this, is that these are the popular books, not the technical ones. Haken's book on synergetics is a very important book.

It came out of Germany. You're not going to read that for fun. Whereas you will read Prigogine’s book for fun, or Melanie Mitchell's book for fun.

These are edifying in a way that's not quite as challenging. What I saw was that the early books, the ones wrestling with 'what is this?', brought a lot to bear on the problem. They're quite poetic. They're quite expansive. As the field developed, people would go down a particular path. My colleague Geoffrey would say, I'm going to use scaling theory to understand contracts--

Paul Middlebrooks

Geoffrey West.

David Krakauer

Geoffrey West. Or, like Mark Newman, I'm going to use network theory to understand something. They're all very illuminating, but they tend to be narrower in their methodologies. One advantage of that is that readers can then use them. They can say, oh, I'm going to do scaling on my data, or I'm going to use networks on my data. I think society likes that, for good reasons. It has utility. It's a bit like that early conversation we were having about what was lost in history. What other methods would have been useful? What other ideas were there that were neglected because they didn't lend themselves to scaling or something?

This is true of all fields. I talk a lot to my theoretical physics friends about this: theoretical physics was conceptual up until the standard model was established in the '60s and '70s, after Gell-Mann and Feynman and Schwinger, and then it became all maths. Physics ceased to be conceptual. If you pick up a physics book from that earlier era, pick up Margenau's book on the nature of physical reality, it's just a brilliant read. It's like, wow, this is fascinating.

The nature of time, the nature of causality, whether space is fundamental or emergent, those kinds of questions. Then they become these very technical texts exploring very narrow questions that you're not particularly interested in. There is a natural evolution, but I think it comes at a cost.

There's a reason why people still go back and read Gödel, Escher, Bach. It's still meaningful to people, because so many of the questions it raises are unanswered.

Paul Middlebrooks

Then what does that mean for the future of complexity science? Is it going to be just more methods and models? Is that the trajectory of a maturing science? Or will there be a paradigm shift, and it'll go back to principles and ontology and get rejiggered? Oh, by the way, I wanted to mention the paradigm, shoot, matrix. No.

David Krakauer

The disciplinary matrix.

Paul Middlebrooks

Disciplinary matrix. I think that's one of the more important, maybe not important, but very digestible concepts from Kuhn that you talk about in the book.

David Krakauer

Oh, yes, this idea. One of the questions was, is this a paradigm? If not, what is it?

Paul Middlebrooks

Is it? That's a good question. I shall ask you that now.

David Krakauer

I'm curious to know what you think a paradigm is. I marshal a few alternative, related concepts to address this: the Kuhnian paradigm and the disciplinary matrix, Wittgenstein's language game, and Dilthey's hermeneutic circle. They're all attempts to deal with the mereology, or part-whole relationships, of knowledge structures. Is this a part of physics or part of chemistry? Is this really coherent as an enterprise? I like what Thomas Kuhn said.

He said, look, think about a matrix or a graph that's connected. The point is, how easy or difficult is it to add or remove an edge from the graph? If I remove this edge, does it really matter? Does it change everything, or does it just locally change something? For him, the characteristic of a paradigm shift, or what he called a revolution, is when you take away or add an edge in a way that completely compromises the underlying graph and--

Paul Middlebrooks

Is incompatible with that.

David Krakauer

It just makes it incompatible. You make an observation, say of an interference pattern through thin little slits, or you see classical behavior appear just because you looked at the system, and you think, shit, there's nothing in my classical mechanics that can explain this. I need a new theory, the theory of quantum mechanics, and so on. The question for me was, what's the paradigm shift from physics and chemistry to complexity?
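Kuhn's graph image above can be made concrete with a toy sketch, assuming nothing beyond what was just described: model the disciplinary matrix as an undirected graph and ask whether removing an edge merely perturbs it or disconnects it entirely. The node labels and the connectivity check are purely illustrative.

```python
from collections import deque

def connected(nodes, edges):
    """True if the undirected graph on `nodes` is connected (BFS)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in adj[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(nodes)

# A toy disciplinary matrix: concepts linked by shared commitments.
nodes = {"mechanics", "determinism", "locality", "causality"}
edges = [("mechanics", "determinism"), ("determinism", "causality"),
         ("mechanics", "locality"), ("locality", "causality")]

print(connected(nodes, edges))       # True: the paradigm hangs together
print(connected(nodes, edges[1:]))   # True: losing one edge barely matters
print(connected(nodes, edges[2:]))   # False: the structure shatters
```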

What becomes incompatible with physics and chemistry? One we've talked about quite a lot, which is a weak incompatibility: all these broken symmetries, which mean you need emergence and effective theories, because the fundamental laws don't do it. It's weak. They still obey the physics; it's just not determinate. The more profound one for me is when the particle thinks.

Paul Middlebrooks

Life is life, or does it have to think?

David Krakauer

It doesn't matter. Life for me thinks. It's when the particle says, no, I'm not going that way. I'm going home. Then agency, intentionality, will, all of those ideas break the fundamental assumption of all of physics. These are no longer particles in fields. These are self-determining particles. I view that as fundamentally paradigmatically new. It is the origin of complexity science. As you say, it's the origin of life. The origin of life is coextensive with the origin of complexity science in that sense.

Paul Middlebrooks

Is that why I am trending toward being more interested in life as the thing to study? As in, if you want to understand intelligence, you have to understand life.

David Krakauer

I think so. It's really interesting, Paul. This is a debate that many of us have been having now for a while, which is, what is the meaningful difference between life and intelligence?

Paul Middlebrooks

You don't have to conflate them, which is what I'm worried I'm doing. Defining one by the other. I want to keep them separate, but I don't know how to.

David Krakauer

I'll give you an example. I think I know how to keep them separate, but it's not easy. Here's one way to say it that I think will be compelling to you, even though I don't have a fundamental theory. In physics, a distinction is often made between extensive and intensive variables: things that grow with system size, like entropy, versus things that don't, like temperature. You have the same temperature in a room twice as big. Life is intensive.

You're not more alive if you're an elephant than if you're a flea.

That would be weird. Life doesn't seem to be scale dependent. Whereas intelligence does. I think any reasonable definition of intelligence would allow that an elephant is more intelligent than a flea. If you don't allow that, I would say you don't have a very good definition. That distinction is real.

The theory should reflect the fact that in one world scale matters, and in the other world scale matters less, and that somehow, at some point of convergence, they are equivalent. At some point.
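One way to make the extensive half of that distinction concrete, as a sketch under the assumption that "doubling the system" means taking two independent copies: joint probabilities multiply, and the entropy exactly doubles, while an intensive quantity like temperature would stay put.

```python
import math
from itertools import product

def entropy_bits(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Any distribution will do; a biased three-state system here.
p = [0.5, 0.3, 0.2]

# Two independent copies of the system: joint probabilities multiply.
joint = [a * b for a, b in product(p, p)]

# Entropy is extensive: two copies carry exactly twice the entropy.
print(f"{entropy_bits(p):.4f}")      # about 1.4855
print(f"{entropy_bits(joint):.4f}")  # about 2.9710, exactly double
```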

Paul Middlebrooks

Could you say if one organism was more minded than another, that it was then also more alive? These are semantics. [crosstalk]

David Krakauer

That's why I don't think you should. I think it would not be useful.

Paul Middlebrooks

That has nothing to do with intelligence. That has to do with, let's say, subjective experience of a fly versus an elephant versus a--

David Krakauer

There might be another concept we need.

Paul Middlebrooks

A third.

David Krakauer

That's a very good point. I don't think intelligence is the one we want for that. For example, a virus is more adaptable than an elephant. That's true.

We learned that a few years ago.

Paul Middlebrooks

At the right timescale.

David Krakauer

Okay, but the timescale turned out to match. We do have other concepts that come in degrees and don't perfectly align with more or less intelligence. Something can be more adaptable and less intelligent. I think that's true of a virus versus a human. There is a point at which intelligence and life seem to converge. The origin of life might be that point, which is the origin of both. I don't think you'd want to say that prior to life, the universe was intelligent. There are people who say that. My colleague Seth Lloyd would say the universe is a computer.

Paul Middlebrooks

You're an it from bit guy, right?

David Krakauer

Right. Information to me, I want to make distinct from intelligence. I think that's the revolution in the physics of information. Even though Claude Shannon worked on engineering systems, telegraphs and telephones and computers, we now know that you can talk about the information in a black hole. It's a theory that's more general than purposeful matter. I think that's an important point.

Paul Middlebrooks

David, I want to make sure that I get to some-- like I told you at the beginning, I had a few people write in some questions for you. I want to ask you one last question before I get to those Patreon questions. The questions I'm going to ask you from them also have to do with what we've already been talking about. You said early on in the conversation how nascent complexity science is still. We talked about whether it's synthesis or unification and how it's defined by the boundaries of other sciences. It still seems to me, from an outsider, like a disparate collection of entities.

This goes back to, how do I know which thing to choose if I have this particular question? I want to know, how does complexity science view the brain? There's not a simple answer to that. Really, what I want to know is, because it feels uncomfortable to me to sit in that world, does it start to feel more comfortable over time, living in that space where there are so many moving parts and things to choose from, and knowing, if I have this question, these are the fields and methods I should draw from, et cetera? Does that start to feel more comfortable over time?

David Krakauer

It's an interesting question. This is a very hard question to answer, because it would be unfair if you say, how should a physicist understand the brain? You'd think, God, what would that even mean? You'd have to think about that for a bit. Or a chemist or anyone else. First of all, it's a difficult question to answer for any field. It's not special to complexity. I'm not sure complexity is a field in the way they are, by the way, either.

Paul Middlebrooks

It's a paradigm, but not a field perhaps.

David Krakauer

That's interesting. Again, let me just state it to demystify it: a set of principles to understand problem-solving matter. Brains do that, they solve problems. What are those principles? We've said what the pillars are. You'd want to understand the metabolism, the thermodynamics of the brain; that's totally reasonable. That goes into fMRI. The whole evolution of that technology is in some sense an effort to connect the thermodynamic vertex to the informational vertex through the Landauer principle, which says every elementary operation requires a certain minimum amount of energy, kT ln 2, whatever.
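The arithmetic behind that floor is one line. A minimal sketch, using the standard value of Boltzmann's constant and an assumed body temperature of roughly 310 K:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, joules per kelvin

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy to erase one bit: E = kT ln 2 (Landauer's principle)."""
    return K_B * temperature_kelvin * math.log(2)

# At body temperature, about 310 K, the floor per elementary operation:
print(f"{landauer_limit(310):.3e} J per bit")  # about 2.97e-21 J
```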

I think you'd want to look at any system that is purposeful, in the adaptive sense, not the religious sense, through these different frameworks and their associated methodologies. There are people, I know them, who study critical phenomena in the brain, so they're using statistical mechanics. There are other people who are interested in movement on low-dimensional manifolds in the brain, so they're studying nonlinear dynamics in the brain.

They're all out there. The complexity lens at its best combines some of those, and in the process, possibly asks you to reevaluate the boundary of the system you're studying. It's saying, maybe what I'm really most interested in is not just in one head, it's in a population of heads. In which case I need to now engage with social networks and information transmission in a way that I could get away with ignoring when I did the work in the lab.

Paul Middlebrooks

Do you think a good exercise for me, or a neuroscientist like me, would be to go through your book with these four pillars in mind? Things are parsed out in tables, but you also write about all the papers that are in the tables, so I could go through and scan with the pillars in mind and think: which of these methods and conceptual frameworks and approaches are linked to those four pillars in a way that would help me understand what I'm studying? Do you think I could do that?

David Krakauer

I think one could do that. There is a heuristic there. Also, I do think it's a separate enterprise, because it's really looking for-- you said it at the beginning, I don't remember if it was integration or synthesis. You'll still do amazing work if you just work on the biochemistry of cell surface receptors.

There are people, like Bray at Cambridge, who said, I want to study cell surface receptors as if I'm looking at the flocking of birds. I'm going to study them as if they're the coordinated, collective dynamics of semi-autonomous agents.

Paul Middlebrooks

This is where pluralism comes in as well, because then you're taking-- and perspectivalism.

David Krakauer

Exactly. He's taking a complexity lens on what normally would be studied using more traditional biochemistry. I think you are right to say, if you step back and say, if I thought of the nucleus as flocking behavior, what would that do? Let's say I apply those methods. I think you could do that.

It'd be quite a powerful experiment in counterfactuals to use it that way.

I suspect, by the way... I remember John Wheeler making this point; it's a bit John Wheeler. His heuristic for doing physics was always to ask the counterfactual. What if there was no gravity? What if there was no strong nuclear force? What if information is more fundamental than energy?

I think that can take you quite a long way as long as it's disciplined by real methods and real frameworks, as opposed to fantasies, which is also interesting, but that's on the fiction shelf.

Paul Middlebrooks

We haven't even talked about that, but all right. It sounds like I better get on it if I want to do it within this lifetime.

David Krakauer

I don't think so. Weirdly enough, I don't think so, Paul. Interestingly, just take the example, again, whether we like it or not, of Karl Friston's free energy. It really was just: I'm going to take ideas from information theory and rethink the brain through that lens. Of course, you can take it as far as you like, but I don't know if it's that hard. I don't know if it's that much of a digression.

Paul Middlebrooks

David, thank you so much for your time. It's really nice to see you again. I recommend this book. I'll put links in the show notes to where people can find those foundational papers, and then we'll start the 40-year journey into reading them all and understanding them all.

David Krakauer

Good luck. Thank you for having me.

[music]

Paul Middlebrooks

“Brain Inspired” is powered by The Transmitter, an online publication that aims to deliver useful information, insights, and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. If you value “Brain Inspired,” support it through Patreon to access full-length episodes, join our Discord community, and even influence who I invite to the podcast. Go to braininspired.co to learn more. The music you're hearing is “Little Wing,” performed by Kyle Donovan. Thank you for your support. See you next time.

[music]

Subscribe to “Brain Inspired” to receive alerts every time a new podcast episode is released.
