
How your brain can be trained like a muscle

Mike's Notes

Some good tips.

  • Stop working when brain fade sets in.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

15/03/2026

How your brain can be trained like a muscle

By: Joanna Fong-Isariyawongse
RNZ: 1/02/2026

Joanna Fong-Isariyawongse is associate professor of Neurology, University of Pittsburgh.

When the brain is asked to stretch beyond routine, that slight mental discomfort is often the sign that the brain is being trained, a neurologist says.

If you have ever lifted a weight, you know the routine: challenge the muscle, give it rest, feed it and repeat. Over time, it grows stronger.

Of course, muscles only grow when the challenge increases over time. Continually lifting the same weight the same way stops working.

It might come as a surprise to learn that the brain responds to training in much the same way as our muscles, even though most of us never think about it that way. Clear thinking, focus, creativity and good judgment are built through challenge, when the brain is asked to stretch beyond routine rather than run on autopilot. That slight mental discomfort is often the sign that the brain is actually being trained, a lot like that good workout burn in your muscles.

Tasks that stretch your brain just beyond its comfort zone, such as knitting and crocheting, can improve cognitive abilities over your lifespan. Unsplash

Think about walking the same loop through a local park every day. At first, your senses are alert. You notice the hills, the trees, the changing light. But after a few loops, your brain checks out. You start planning dinner, replaying emails or running through your to-do list. The walk still feels good, but your brain is no longer being challenged.

Routine feels comfortable, but comfort and familiarity alone do not build new brain connections.

As a neurologist who studies brain activity, I use electroencephalograms, or EEGs, to record the brain’s electrical patterns.

Research in humans shows that these rhythms are remarkably dynamic. When someone learns a new skill, EEG rhythms often become more organized and coordinated. This reflects the brain’s attempt to strengthen pathways needed for that skill.

Your brain trains in zones too

For decades, scientists believed that the brain’s ability to grow and reorganize, called neuroplasticity, was largely limited to childhood. Once the brain matured, its wiring was thought to be largely fixed.

But that idea has been overturned. Decades of research show that adult brains can form new connections and reorganize existing networks, under the right conditions, throughout life.

Some of the most influential work in this field comes from enriched environment studies in animals. Rats housed in stimulating environments filled with toys, running wheels and social interaction developed larger, more complex brains than rats kept in standard cages. Their brains adapted because they were regularly exposed to novelty and challenge.

Human studies find similar results. Adults who take on genuinely new challenges, such as learning a language, dancing or practicing a musical instrument, show measurable increases in brain volume and connectivity on MRI scans.

The takeaway is simple: Repetition keeps the brain running, but novelty pushes the brain to adapt, forcing it to pay attention, learn and problem-solve in new ways. Neuroplasticity thrives when the brain is nudged just beyond its comfort zone.

The reality of neural fatigue

Just like muscles, the brain has limits. It does not get stronger from endless strain. Real growth comes from the right balance of challenge and recovery.

When the brain is pushed for too long without a break – whether that means long work hours, staying locked onto the same task or making nonstop decisions under pressure – performance starts to slip. Focus fades. Mistakes increase. To keep you going, the brain shifts how different regions work together, asking some areas to carry more of the load. But that extra effort can still make the whole network run less smoothly.

Neural fatigue is more than feeling tired. Brain imaging studies show that during prolonged mental work, the networks responsible for attention and decision-making begin to slow down, while regions that promote rest and reward-seeking take over. This shift helps explain why mental exhaustion often comes with stronger cravings for quick rewards, like sugary snacks, comfort foods or mindless scrolling. The result is familiar: slower thinking, more mistakes, irritability and mental fog.

This is where the muscle analogy becomes especially useful. You wouldn’t do squats for six hours straight, because your leg muscles would eventually give out. As they work, they build up byproducts that make each contraction a little less effective until you finally have to stop. Your brain behaves in a similar way.

In the brain, when the same cognitive circuits are overused, chemical signals build up, communication slows and learning stalls.

But rest allows those strained circuits to reset and function more smoothly over time. And taking breaks from a taxing activity does not interrupt learning. In fact, breaks are critical for efficient learning.

The crucial importance of rest

Among all forms of rest, sleep is the most powerful.

Sleep is the brain’s night shift. While you rest, the brain takes out the trash through a special cleanup system called the glymphatic system that clears away waste and harmful proteins. Sleep also restores glycogen, a critical fuel source for brain cells.

And importantly, sleep is when essential repair work happens. Growth hormone surges during deep sleep, supporting tissue repair. Immune cells regroup and strengthen their activity.

During REM sleep, the stage of sleep linked to dreaming, the brain replays patterns from the day to consolidate memories. This process is critical not only for cognitive skills like learning an instrument but also for physical skills like mastering a move in sports.

On the other hand, chronic sleep deprivation impairs attention, disrupts decision-making and alters the hormones that regulate appetite and metabolism. This is why fatigue drives sugar cravings and late-night snacking.

Sleep is not an optional wellness practice. It is a biological requirement for brain performance.

Overdoing any task, whether it be weight training or sitting at the computer for too long, can overtax the muscles as well as the brain. Unsplash

Exercise feeds the brain too

Exercise strengthens the brain as well as the body.

Physical activity increases levels of brain-derived neurotrophic factor, or BDNF, a protein that acts like fertilizer for neurons. It promotes the growth of new connections, increases blood flow, reduces inflammation and helps the brain remain adaptable across one’s lifespan.

This is why exercise is one of the strongest lifestyle tools for protecting cognitive health.

Train, recover, repeat

The most important lesson from this science is simple. Your brain is not passively wearing down with age. It is constantly remodeling itself in response to how you use it. Every new challenge and skill you try, every real break, every good night of sleep sends a signal that growth is still expected.

You do not need expensive brain training programs or radical lifestyle changes. Small, consistent habits matter more. Try something unfamiliar. Vary your routines. Take breaks before exhaustion sets in. Move your body. Treat sleep as nonnegotiable.

So the next time you lace up your shoes for a familiar walk, consider taking a different path. The scenery may change only slightly, but your brain will notice. That small detour is often all it takes to turn routine into training.

The brain stays adaptable throughout life. Cognitive resilience is not fixed at birth or locked in early adulthood. It is something you can shape.

If you want a sharper, more creative, more resilient brain, you do not need to wait for a breakthrough drug or a perfect moment. You can start now, with choices that tell your brain that growth is still the plan.

Neuroscience has a species problem

Mike's Notes

A great example of how science could advance.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The Transmitter
  • Home > Handbook > 

Last Updated

04/03/2026

Neuroscience has a species problem

By: Nanthia Suthana
The Transmitter: 16/02/2026

Nanthia Suthana is professor of neurosurgery, biomedical engineering and neurobiology at Duke University. Her lab studies the neural mechanisms of human memory, emotion and spatial navigation using intracranial recordings, neuromodulation and wearable technologies during real-world behavior. Her work bridges basic neuroscience and clinical translation, with the goal of developing novel treatments for neurological and psychiatric disorders. Suthana earned her B.S. and Ph.D. at the University of California, Los Angeles. She has led interdisciplinary research programs integrating neuroscience, engineering and clinical practice, with an emphasis on studying brain function in naturalistic settings.

If our field is serious about building general principles of brain function, cross-species dialogue must become a core organizing principle rather than an afterthought.

Neuroscience has never been richer in data. Laboratories now generate detailed recordings of neural activity, behavior and physiology across species at scales unimaginable a decade ago. In rodents, researchers can monitor thousands of neurons simultaneously across distributed circuits during behavior. In humans, they can record from deep brain structures during ambulatory, real-world behavior, integrated with wearable sensors and linked to clinical symptoms and subjective experience. The field has access to neural signals spanning orders of magnitude in space, time and biological complexity.

Yet despite this abundance, neuroscience remains deeply organized along species lines. Animal and human researchers often operate within separate conceptual frameworks, attend different conferences and develop theories that rarely confront data across species. This separation is no longer a minor inconvenience but a growing liability. The problem is not simply that cross-species translation is difficult; it is that the field has largely accepted this difficulty rather than treating it as a central scientific challenge. Neuroscience has also struggled to confront the fact that different species often tell different stories.

As a result, neuroscience’s primary limitation today is not a lack of data or tools, but persistent fragmentation across model systems, recording modalities and analytic traditions. Findings are typically interpreted within species- and technique-specific frameworks, with little pressure to explain when, how or why neural principles should generalize across organisms. Researchers acknowledge differences but rarely use them to constrain or revise theory.

If neuroscience is serious about building general principles of brain function, cross-species dialogue must become a core organizing principle rather than an afterthought. Differences between species should be treated as informative constraints that refine theory, not as inconsistencies to be explained away. Overcoming this divide won’t be trivial, but there are ways to begin changing our culture now.

A major source of the field’s fragmentation lies in how it treats different neural signals. Researchers focused on single-unit activity often prioritize spikes as the fundamental currency of computation, treating population-level signals, such as local field potentials, as secondary or ambiguous. Others emphasize population dynamics and view single-neuron activity as overly local or insufficiently informative for translational applications. Similar divisions exist across recording and manipulation modalities, from electrophysiology and calcium imaging to hemodynamic and electrical stimulation-based approaches. Though these distinctions reflect real technical constraints, they have hardened into conceptual boundaries that shape which questions are asked and which forms of evidence are considered explanatory.

These boundaries persist across species, even as many of the technological constraints that once justified them have faded. As a researcher studying the human brain using both single-unit and local field potential recordings, I am acutely aware that these signals offer distinct and complementary views of neural activity, each with its own strengths and limitations. In humans, it’s now possible to directly record brain activity during behaviors such as walking and natural navigation, enabling experiments similar to those in animals. Single-unit sampling in humans is sparse, however, so field potentials are often the primary signal available for linking neural activity to ethologically relevant behavior. 


High-density single-unit recordings in animal models are therefore essential for understanding how population-level signals relate to single-neuron activity. Yet even when spikes and field potentials are recorded simultaneously in animal studies, researchers often prioritize single-unit analyses, reflecting long-standing theoretical preferences. These preferences limit opportunities to connect neural activity across scales and species. Rather than optimizing theories around a single signal or model organism, the field would benefit from frameworks designed to link signals across scales, using the strengths of each system to offset the limitations of others.

Theta oscillations, a brain rhythm typically defined as 4 to 8 hertz, provide a clear example of how this fragmentation plays out in practice. The details of theta matter less here than what its cross-species differences reveal about how the field handles disagreement. In rodents, hippocampal theta activity during locomotion appears to be largely continuous, a regularity that has shaped decades of influential models of navigation, memory encoding and temporal organization. In humans, however, hippocampal theta activity occurs in brief, intermittent bouts, often linked to specific behavioral or cognitive events rather than ongoing movement. These findings have been replicated across laboratories and tasks and are supported by converging evidence from bats and nonhuman primates. 
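As a toy illustration of that contrast (everything here, including the signals, sampling rate and threshold, is synthetic and invented; real studies use proper spectral methods), one can fabricate a "rodent-like" trace with continuous 6 Hz theta and a "human-like" trace where theta arrives in bouts, then measure the fraction of one-second windows with strong 4-8 Hz power:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                          # sampling rate in Hz (illustrative)
t = np.arange(0, 20, 1 / fs)      # 20 s of synthetic "LFP"

theta = np.sin(2 * np.pi * 6 * t)            # 6 Hz: inside the 4-8 Hz theta band
noise = 0.3 * rng.standard_normal(t.size)

continuous = theta + noise                   # rodent-like: theta throughout
bouts = np.sin(2 * np.pi * 0.25 * t) > 0     # 2 s on, 2 s off
intermittent = bouts * theta + noise         # human-like: theta in brief bouts

def theta_bout_fraction(x, fs, win_s=1.0, thresh=20.0):
    """Fraction of non-overlapping windows with strong 4-8 Hz power."""
    n = int(win_s * fs)
    wins = x[: x.size // n * n].reshape(-1, n)     # split into 1 s windows
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = (freqs >= 4) & (freqs <= 8)             # theta band bins
    power = (np.abs(np.fft.rfft(wins, axis=1)) ** 2)[:, band].sum(axis=1)
    return float((power > thresh * n).mean())      # arbitrary threshold
```

On these synthetic traces the continuous signal scores at or near 1.0 and the bout-like signal near 0.5; the point is only that "continuous versus intermittent" is a quantifiable property of a recording, not a matter of interpretation.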

When these findings emerged, they were initially met with skepticism. Rather than asking what the differences might imply for theory, the dominant response was to question whether the signals were truly comparable. As evidence accumulated over time, skepticism softened. But theories that attempt to meaningfully integrate the two types of theta are still largely lacking. 

Nearly a decade later, rodent-derived models continue to assume sustained oscillatory structure, while bat, nonhuman primate and human findings are treated as species-specific implementation details rather than as constraints on general principles. For the most part, scientists have not tried to uncover why different species recruit theta in distinct ways, what computational roles these patterns serve, or whether continuous and intermittent theta reflect complementary solutions to shared navigational and memory demands, or distinct modes of environmental sampling, such as whisking, echolocation or eye movements.

This pattern illustrates a broader issue in neuroscience. With enough evidence, researchers tend to accept cross-species differences, but they rarely use these differences to refine or revise theory. Instead of asking why hippocampal theta is continuous in rodents but burst-like in nonhuman primates and humans, or what computational advantages these different regimes might confer, the field has largely compartmentalized the findings, enabling parallel literatures to proceed with little pressure to reconcile them.

Yet these differences are precisely where theoretical progress should occur. Intermittent hippocampal theta suggests a fundamentally different mode of coordinating neural activity, one in which rhythmic structure is recruited transiently to gate information, mark boundaries between events, or coordinate distributed circuits at specific moments rather than continuously. Ignoring these implications does not preserve existing theories; it limits their scope and explanatory power. 

Cultural asymmetries within the field reinforce this divide, a pattern I observe as a researcher who studies the human brain. When human data align with animal model data, they are welcomed as validation. When they do not, they face higher evidentiary thresholds and greater skepticism. This skepticism is often justified by appeals to sample size, even though nonhuman primate studies, long viewed as theoretically foundational, have historically relied on similarly small cohorts. Such asymmetries insulate animal-derived theories from challenge and weaken the role of human research as a source of theoretical insight rather than mere applied confirmation.

For much of my career, I have watched this divide only perpetuate and deepen. I have attended conferences where animal research overwhelmingly shaped the agenda and human work was treated as secondary. At human-focused meetings, the reverse was true, with few researchers whose primary work involved non-primate species having influence over the event. These experiences shape not only which conversations happen but which questions young scientists learn to ask. The result has been the emergence of parallel scientific cultures that rarely engage deeply with each other.

Overcoming this divide, and developing theories that incorporate contrasting data, will require shifts in how scientists are trained, how conferences are structured and how cross-species work is valued within academic culture. It will also require theoretical frameworks and models that are explicitly tested and revised across species rather than optimized within a single model system. Finally, funding, review and publication practices must reward work that treats cross-species differences as opportunities for insight rather than liabilities to be minimized.

Biological Brains Inspire a New Building Block for Artificial Neural Networks

Mike's Notes

Backpropagation is based on a flawed model of how the brain works. The new approach described here is based on a more current understanding of how the brain works.

I'm impressed by the work of the Flatiron Institute in New York. It would be great for Ajabbi Research to collaborate with them.

Resources

References

  • A Logical Calculus of the Ideas Immanent in Nervous Activity. By Warren McCulloch and Walter Pitts. 1943. Bulletin of Mathematical Biophysics.
  • On Computable Numbers, with an Application to the Entscheidungsproblem. By Alan Turing. 1936. Proceedings of the London Mathematical Society.

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Announcements From the Simons Foundation
  • Home > Ajabbi Research > Library > Authors > Alan Turing
  • Home > Ajabbi Research > Library > Authors > John von Neumann
  • Home > Handbook > 

Last Updated

14/02/2026

Biological Brains Inspire a New Building Block for Artificial Neural Networks

By: 
Simons Foundation: 26/01/2026


While artificial intelligence systems have advanced tremendously in recent years, they still lag behind the performance of real brains in reliability and efficiency. A new type of computational unit developed at the Flatiron Institute could help close that gap.

New research is exploring how to improve neural networks using components more like those in real brains. Alex Eben Meyer for Simons Foundation

While artificial neural networks are revolutionizing technology and besting humans in tasks ranging from chess to protein folding, they still fall short of their biological counterparts in many key areas, particularly reliability and efficiency.

The solution to these shortcomings could be for AI to act more like a real brain. Computational neuroscientists at the Simons Foundation’s Flatiron Institute in New York City have drawn lessons from neurobiology to enhance artificial systems using a new type of computational component that is more akin to those found in real brains. The researchers presented their work at the annual conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Singapore on January 23.

“Artificial intelligence systems like ChatGPT — amazing as they are — are, in several respects, inferior to the human brain,” says Dmitri “Mitya” Chklovskii, a group leader in the Center for Computational Neuroscience (CCN) at the Flatiron Institute. “They’re very energy- and data-hungry. They hallucinate, and they can’t do simple things that we take for granted, like reasoning or planning,” he says. Each of these individual issues may trace back to one larger problem, he says: The foundations of these systems differ significantly from “the foundations on which the brain is built.”

The current building blocks of artificial neural networks are deeply rooted in a previous era. During that time, “the people who wanted to understand how the brain works and the people who wanted to build artificial brains or artificial intelligence were either the same people or close colleagues and collaborators,” Chklovskii says. “Then, sometime in the ’60s and ’70s, those two fields divorced and basically became fields of their own,” he says. That divergence has also led to artificial networks that are based on an outdated understanding of how biological brains function.

In the new work, Chklovskii and his colleagues revisit the fundamentals of artificial neural network architecture. For more than 10 years, Chklovskii had been on a quest for an alternative to the decades-old neural network building blocks used in machine learning. Through years of research, learning from real animal brains and innovation, Chklovskii and his team cracked the problem and found the solution he’d been dreaming of, one rooted in our modern understanding of the brain.

He and his team built a biologically inspired multilayer neural network made up of a new type of fundamental computational unit called rectified spectral units, or ReSUs. These ReSUs extract the features of the recent past that are most predictive of the near future. The ReSUs are self-supervised, meaning they control their own training of how they process data based on the information they receive, rather than relying on external instructions. ReSUs are designed to learn from constantly changing data, just as our brains learn from the real world.

This is in stark contrast to the current standard units, which are called rectified linear units (ReLUs). ReLUs, which have roots in a 1943 paper, were popularized about 15 years ago. In that paper, researchers presented “a very simple, but very primitive, model of a neuron,” Chklovskii says.

Building on that earlier model, researchers developed ReLU-based networks, which are commonly trained using a concept known as error backpropagation. This method calculates the contribution to past mistakes of each individual neuron in an artificial network, enabling the network to adjust and perform more accurately in the future. “But standard error backpropagation, as used in deep learning, is widely viewed as biologically implausible, and there is no evidence that the brain implements it in that form,” Chklovskii says.
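For readers who have not met these terms, a ReLU and a single backpropagation-style update fit in a few lines of Python; the weights, input, target and learning rate below are invented purely for illustration:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: pass positive inputs through, zero out the rest.
    return np.maximum(0.0, x)

# One artificial neuron: a weighted sum followed by the ReLU nonlinearity.
w = np.array([0.5, 0.3])   # illustrative weights
x = np.array([1.0, 2.0])   # illustrative input
y = relu(w @ x)            # 0.5*1 + 0.3*2 = 1.1; positive, so passed through

# Backpropagation in miniature: one chain-rule gradient step that reduces
# the squared error between the neuron's output and a target value.
target = 1.0
pre = w @ x                                        # pre-activation
relu_grad = 1.0 if pre > 0 else 0.0                # derivative of ReLU
grad_w = 2 * (relu(pre) - target) * relu_grad * x  # dError/dw via chain rule
w = w - 0.1 * grad_w                               # gradient-descent update
```

A step like this, repeated across millions of weights, is the training procedure whose biological plausibility Chklovskii questions.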

Unlike the ReLUs, the novel ReSUs “actually care about the history of the input” they receive, says Shanshan Qin, a former CCN research scientist who is now an assistant professor of computational neuroscience and biophysics at Shanghai Jiao Tong University in China and lead author of the article that accompanied the AAAI presentation. That alternative setup, which doesn’t involve backpropagation, means ReSU networks are far closer analogs of what actually happens in the brain, he says.

The team’s ReSU neural network succeeded in a proof-of-principle test. The researchers created videos composed of photographic images that drift in different directions, which they then used to train the network. “Imagine you are sitting on a train looking out the window. The trees, mountains, and houses outside appear to ‘slide’ horizontally across your vision. That sliding movement is a ‘translation,’” Qin says.

They demonstrated that a network trained on these videos learned two key features that resemble components of the fruit fly (Drosophila) visual system. The first feature is temporal filters, which sift through the input history that real or artificial neurons receive. These filters select certain signals to emphasize and others to ignore based on when the signals were received and other patterns that emerge within the system. Motion-selective units are the second key feature. These units only fire when movement occurs in a certain direction.

Instead of the researchers needing to directly instruct the system through coded rules, “we gave the network a blank slate,” Qin says. “We showed it the ‘train window’ videos (translating scenes). The network realized on its own: ‘To make sense of this data, I must remember what happened a split-second ago (temporal filters), and compare neighbor to neighbor (motion selection),’” he says.
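The paper's ReSU network itself is not reproduced here, but the two features Qin describes (a short memory of the recent past plus a neighbor-to-neighbor comparison) echo the classic Hassenstein-Reichardt correlator model of fly motion detection, which can be sketched as a toy. All sizes and signals below are invented for illustration:

```python
import numpy as np

def detect_motion(frames):
    """Reichardt-style correlator. frames: 2D array (time, pixels).
    Returns a mean directional signal: positive for rightward drift,
    negative for leftward."""
    delayed = frames[:-1]   # crude temporal filter: remember the last frame
    current = frames[1:]
    # Correlate each pixel's delayed value with its right neighbor's current
    # value, minus the mirror-image term, to get direction selectivity.
    right = (delayed[:, :-1] * current[:, 1:]).mean()
    left = (delayed[:, 1:] * current[:, :-1]).mean()
    return right - left

# A sinusoidal pattern drifting one pixel per frame in each direction.
pattern = np.sin(np.linspace(0, 4 * np.pi, 32))
rightward = np.stack([np.roll(pattern, t) for t in range(16)])
leftward = np.stack([np.roll(pattern, -t) for t in range(16)])
```

Delaying one input and multiplying it by its neighbor makes the product large only when the pattern moves in one direction, which is the kind of motion-selective computation the trained network discovered on its own.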

If the approach can be successfully scaled up, it could perform more complex computational tasks using rules similar to those that govern how neighboring neurons learn together. The approach may also excel in situations where the program lacks supervision and is using raw data that hasn’t been labeled or given additional context, Qin says.

The work not only brings AI closer to biology, but it also helps explain how biological systems operate, Qin says. “We can explain a lot of existing experimental data in fruit fly visual systems using this architecture,” he adds.

In the future, Chklovskii, Qin and colleagues hope to build on this work by developing ReSU-based neural networks based on different sensory systems — such as those responsible for smell and hearing — in animals ranging from fruit flies to humans. Such work would help reveal how those systems operate in nature and could reveal new ways of designing neural networks, Qin says.

Everything, everywhere, all at once: Inside the chaos of Alzheimer’s disease

Mike's Notes

This article provides a clear explanation of the brain with Alzheimer's disease. It's also a great example of a complex system.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The Transmitter
  • Home > Handbook > 

Last Updated

08/07/2025

Everything, everywhere, all at once: Inside the chaos of Alzheimer’s disease

By: Michael Yassa
The Transmitter: 16/06/2025

Michael A. Yassa is professor of neurobiology and behavior and James L. McGaugh Endowed Chair at the University of California, Irvine. His lab has been developing theoretical frameworks and noninvasive brain-imaging tools for understanding memory mechanisms in the human brain and applying this knowledge to human neurological and neuropsychiatric disease.

To truly understand Alzheimer’s disease, we may need to take a systems approach, in which inflammation, vascular injury, impaired glucose metabolism and other factors interact in complex ways.

For nearly three decades, Alzheimer’s disease has been framed as a story about amyloid: A toxic protein builds up, forms plaques, kills neurons and slowly robs people of their memories and identity. The simplicity of this “amyloid cascade hypothesis” gave us targets, tools and a sense of purpose. It felt like a clean story. Almost too clean.

We spent decades chasing it, developing dozens of animal models and pouring billions into anti-amyloid therapies, most of which failed. The few that made it to market offer only modest benefits, often with serious side effects. Whenever I think about this, I can’t help but picture Will Ferrell’s Buddy the Elf, in the movie “Elf,” confronting the mall Santa: “You sit on a throne of lies.” Not because anyone meant to mislead people (though maybe some did). But because we wanted so badly for the story to be true.

So what happened? This should have worked … right?

I would argue it was never going to work because we have been thinking about Alzheimer’s the wrong way. For decades, we have treated it as a single disease with a single straight line from amyloid to dementia. But what if that’s not how it works? What if Alzheimer’s only looks like one disease because we keep trying to force it into a single narrative? If that’s the case, then the search for a single cause—and a single cure—was always destined to fail.


Real progress, I believe, requires two major shifts in how we think. First, we have to let go of our obsession with amyloid. Now don’t get me wrong. There’s no question amyloid plays a role. It was the first thing Alois Alzheimer saw under the microscope in 1906. And there’s decent evidence that misfolded amyloid spells trouble for the brain. But betting the house on clearing amyloid has been a costly mistake. In fact, we have long known that one-third of people with amyloid pathology do not show any cognitive symptoms, a disconnect that should have forced a rethink years ago.

To the field’s credit, a shift is underway. We’re now exploring other mechanisms—tau, inflammation, metabolic dysfunction, vascular damage, neuronal hyperexcitability and more. But too often, these alternatives are still treated as side plots in an amyloid-centered story. They get less funding, less attention and fewer drug development efforts. That needs to change. These mechanisms may be far more central to the disease than we once thought. And they may drive it differently in different people.

This brings us to the second shift: We need to stop thinking in straight lines. The brain isn’t exactly a flowchart. It’s a dynamical system—a tangled web of feedback loops, compensations and nonlinear interactions. In such systems, small disruptions can ripple outward in unexpected ways. When one part starts to fail, another compensates. Over time, those compensations can become part of the pathology. In some people with Alzheimer’s disease, amyloid might be the trigger. In others, it might be inflammation, vascular injury, impaired glucose metabolism or runaway neural activity. These factors don’t act in isolation—they interact in complex ways, creating a web of multicausal loops. They are less like a chain of dominoes and more like a knot of tangled threads pulling on one another.

In systems terms, it’s not a cascade. It’s a state space. To understand this space, it’s useful to imagine a map in which every possible state of the brain is a point. In this space, healthy brains tend to move within a basin of attraction, a functional stable state. In Alzheimer’s, the brain may be pushed by interacting pathologies into a different region of state space, a pathological attractor—stable but dysfunctional.

There’s growing experimental support for this view. Functional imaging, for example, has shown that people with Alzheimer’s spend more time in sparsely connected, low-flexibility brain states, and MEG recordings reveal changes in the temporal complexity of network dynamics. Recent work in my lab identified a dominant state, characterized by co-activity of nodes in the limbic network, that is linked to worse cognition and Alzheimer’s pathology. The idea is that once the brain tips into the dysfunctional state, it can get stuck there, even if you remove the original trigger.
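The attractor picture can be made concrete with a standard double-well toy model (purely illustrative, not a model from the article): the state has two stable points, and a transient push can carry it across the barrier into the other basin, where it remains even after the push is gone.

```python
# Toy bistable system: dx/dt = x - x**3 + input. Stable attractors sit at
# x = -1 ("healthy") and x = +1 ("pathological"), with an unstable point
# at x = 0 separating the two basins of attraction.

def simulate(x0, drive, dt=0.01, steps=4000):
    """Euler-integrate the double-well system. The external drive (the
    'trigger') is applied for the first half of the run, then removed."""
    x = x0
    for k in range(steps):
        u = drive if k < steps // 2 else 0.0
        x += dt * (x - x**3 + u)
    return x

healthy = simulate(-1.0, drive=0.0)   # no trigger: stays at the healthy attractor
pushed = simulate(-1.0, drive=1.0)    # transient trigger tips the state over
```

Here `pushed` ends near +1 even though the drive was switched off halfway through: removing the original trigger does not return the system to the healthy basin, which is the dynamical intuition behind why clearing amyloid late may not restore function.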

Researchers have already identified a number of “systems-level” factors that can disrupt network stability and contribute to Alzheimer’s disease:

  • vascular compromise, in which small vessel disease disrupts blood flow and triggers downstream effects;
  • metabolic dysfunction, such as insulin resistance or glucose hypometabolism;
  • runaway inflammation, such as overactive microglia or cytokine chaos; and
  • overactivity, driven by an imbalance in neuronal excitation or inhibition.

These factors may represent different systems-level routes to the same clinical outcome. Each person’s condition likely involves a different mix or “weighting” of underlying mechanisms. For someone with a history of diabetes, metabolic dysfunction might be the dominant factor. For someone with high blood pressure, the vascular component could play a bigger role. Ultimately, pinpointing this weighting—the primary mechanism driving the system’s dysfunction, or the mechanistic phenotype—could help match people with the most appropriate treatment.

This framing also changes how we think about treatment. In a system governed by feedback loops and nonlinear dynamics, removing a single trigger may not be sufficient to get the system “unstuck.” That may explain why anti-amyloid drugs haven’t made a major clinical impact: By the time symptoms show up, the system has already reorganized itself. Instead, we may need interventions that restore network stability—rebalancing excitation and inhibition, reducing inflammation or improving metabolic resilience. Noninvasive brain stimulation is one such approach, potentially nudging the system toward a more functional dynamic without needing to target a molecular mechanism. The goal isn’t to fix a part. It’s to shift the conditions that shape how the whole system behaves.

So where is amyloid in all this? Well, amyloid is always present, because our diagnostic criteria make it so. Think of it like background noise—it’s there, but it may not be what’s pushing the system off-key. Unlike the factors described above, amyloid doesn’t consistently drive network-level disruption. It reflects cellular dysfunction, such as misprocessing of amyloid precursor protein or altered lipid metabolism, but the downstream systems-level effects aren’t nearly as consistent or potent as those seen with, say, inflammation or synapse loss. This doesn’t mean addressing amyloid buildup or clearance has no clinical value. It just means it’s likely a small piece in a much larger puzzle. Focusing on it is like trying to fix a whole cacophonous orchestra by tuning but one violin.

Of course, there’s no perfect framework yet. To build it, we’ll need better tools. That includes better ways to capture brain dynamics in vivo, not just static pathology. We also need animal models that go beyond single-gene variants—instead, we need models that combine multiple hits, such as inflammation plus hyperexcitability. And we need ways to track these factors in humans, using multimodal imaging, physiological sensors and inflammatory biomarkers.

Getting there will take work. Paradigm shifts happen slowly, painfully, often after the old model has failed enough times to lose its grip. That’s where we are now. The dominant model isn’t working anymore. What comes next isn’t fully formed—but it’s coming into view. Mechanistic phenotyping and dynamical systems thinking may offer a path forward. It won’t be neat or linear. But it may finally meet the disease on its own terms.

How Evolution Gave Us Free Will

Mike's Notes

This book explains how the evolution of living things gave rise to free will.

"... a shift toward thinking about the brain as a complex dynamical system with emergent properties that defy reduction to simple elements." - Nicole Rust Transmitter

"Kevin Mitchell is associate professor of genetics and neuroscience at Trinity College Dublin in Ireland. He studies the genetics of brain wiring and its relevance to variation in human faculties, psychiatric disease and perceptual conditions such as synesthesia. His current research focuses on the biology of agency and the nature of genetic and neural information.

Mitchell completed his Ph.D. at the University of California, Berkeley, studying the genetic instructions that direct the development of the nervous system in the fruit fly, and his postdoctoral work at the University of California, San Francisco and Stanford University, exploring the same topic in mice. He is the author of “Innate: How the Wiring of Our Brains Shapes Who We Are” and “Free Agents: How Evolution Gave Us Free Will.” He also writes the Wiring the Brain blog and is on X (formerly known as Twitter) @WiringtheBrain." - Transmitter

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The Transmitter
  • Home > Handbook > 

Last Updated

17/05/2025

Player One: An edited excerpt from ‘Free Agents — How Evolution Gave Us Free Will’

By: Kevin Mitchell
The Transmitter: 13/11/2023

In his new book, neuroscientist Kevin Mitchell argues that, despite his field’s mechanistic models of cognition, we are all “Player One” in the game of life, the authors of our own actions.

Are we the authors of our own stories? Or is our apparent freedom of choice really an illusion? These questions were brought home to me recently as I was watching my son play a video game — one where you wander around an open world, meeting interesting denizens of one type or another (and killing quite a few of them). As I watched, his character entered a tavern and approached the bartender, who offered a generic greeting. The game then threw up some options for things you could say in reply to get information about the prospects for fortune and glory in the surrounding territory.

In this exchange, my son’s possibilities for action were limited by the game, but he was really making choices among them, and these choices then affected how the conversation went and what would subsequently unfold. His overall goal in the game, the tension between taking some immediate action and continuing to explore, his need to have enough information to make a decision with confidence, the risk of biting off more than he could chew and losing his hard-won stuff: all these considerations fed into the decisions he made. He had his reasons and he acted on them, just like you or I do every day, all day long.

The bartender, in contrast, was not making choices. He was a classic “non-player character,” an NPC. His responses were completely determined by his programming: He had no degrees of freedom. His actions were merely the inevitable outcome of a flow of electrons through the circuits of the game console, constrained by the rules encoded in the software. Even the more sophisticated NPCs in the game, including the monster that eventually caramelized my son’s avatar, were similarly constrained. The monster’s actions — even in the fast-moving melee — were determined by the software programming and mediated by the electronic components in the console.

Thus the NPCs only appear to be making choices. They’re not autonomous entities like us: They’re just a manifestation of lots of lines of code, implemented in the physical structure of the computer chips. Their behavior is entirely determined by the inputs they get and their preprogrammed responses. We, in contrast, are causes of things in our own right. We have agency: We make our own choices and are in charge of our own actions.

At least it seems that way. It certainly feels like we have “free will,” like we make choices, like we are in control of our actions. That’s pretty much what we do all day — go around making decisions about what to do. Some are trivial, like what to have for breakfast; some are more meaningful, like what to say or do in social or professional situations; and some are momentous, like whether to accept a job offer or a marriage proposal. Some we deliberate on consciously, and others we perform on autopilot—but we still perform them. Of course, our options may be more or less constrained (or informed) by all kinds of factors at any given moment, but generally we feel like the authors of our own actions.

And we interpret other people’s behavior in terms of their reasons for selecting different actions — their intentions, beliefs and desires that make up the content of their mental states. We constantly analyze each other’s motives and habits and character, looking for explanations and predictors of their behavior and the decisions they make. Why people act the way they do is ultimately the theme of most entertainment, from Dostoyevsky to Big Brother. All this rests on the view that we are not just acted on — we are actors. Things don’t just happen to us, in the way they happen to rocks or spoons or electrons: We do things.

The problem is that if you think about this view for too long, it becomes difficult to escape a discomfiting thought. After all, like the NPCs, our decisions, however complex they may be, are mediated by the flow of electrical ions through the circuits of our brains and thus are constrained by our own “programming,” by how our circuits are configured. Unless you invoke an immaterial soul or some other ethereal substance or force that is really in charge — call it spirit or simply mind, if you prefer — you cannot escape the fact that our consciousness and our behavior emerge from the purely physical workings of the brain.

There is no shortage of evidence for this from our own experience. If you’ve ever been drunk, for example, or even just a little tipsy, you’ve experienced how altering the physical workings of your brain alters your choices and the way you behave. There is a whole industry of recreational drugs — from caffeine to crystal meth — that people take because of the way that physically tweaking the brain’s machinery in various ways makes them feel and act. The ultimate consequence in some cases is addiction — perhaps the starkest example of how our actions can sometimes be out of our control.

And, of course, if the machinery of your brain gets physically damaged — as occurs with head injuries, strokes, brain tumors, neurodegenerative disorders and a host of other kinds of insults — or its function is impaired in other ways, as in conditions such as schizophrenia, depression and mania, then your ability to choose your actions may also be impaired. In some situations, the integrity of your very self may be compromised.

We all like to think that we are Player One in this game of life, but perhaps we are just incredibly sophisticated NPCs. Our programming may be complex and subtle enough to make it seem as if we are really making decisions and choosing our own actions, but maybe we’re just fooling ourselves. Perhaps “we” are just the manifestations of genetic and neural codes, implemented in biological rather than computer hardware. Perhaps we are the victims of a cruel joke, tragic figures in the grip of the Fates. As Gnarls Barkley sang, “Who do you, who do you, who do you think you are? Ha ha ha, bless your soul, you really think you’re in control.”

It’s hard not to look at the growing body of work from neuroscience and see only the machine at work. Driving this circuit or that one either directly causes an action or influences the cognitive operations that the animal — mouse or human or anything else — uses to decide between actions. If we were dissecting a robot in this way, we would apply engineering approaches to understand the kinds of information being processed, the control mechanisms configured into the different circuits and the computations that lead to one output or another. There does not seem to be any need for something like a mind in that discussion. There is no real need for life, for that matter.

If the circuits just work on physical principles, then who cares what the patterns of activity mean? Why does it matter what the mental content associated with a particular pattern of neural activity is, if it is solely the physical configuration of the circuitry that is going to determine what happens next? We may have set out, as neuroscientists, to explain how the workings of the brain generate or realize psychological phenomena, but we are in danger of explaining those phenomena away.

If the neuroscientists have it bad, pity the poor physicists, whose existential angst must run much deeper. Where neuroscientists can at least hold onto the view that the circuits in the brain are doing things (whether “you” are or not), some physicists claim that even that functionality is an illusion. After all, the brain is made of molecules and atoms that must obey the laws of physics, just like the molecules and atoms in any other bit of matter.

These small bits of matter are pushed and pulled by all the forces acting on them — gravity, electromagnetism, the so-called strong and weak nuclear forces that hold atoms together — and where each atom goes is fully determined by the way those interactions play out. These processes are no doubt complicated, as they would be in any system with so many atoms simultaneously acting on each other, and in practice how the system will evolve is unpredictable — but it is still all driven by the physics. Even at the lower levels of subatomic particles, how the system evolves is captured by the equations of quantum mechanics in a way that many would argue theoretically leaves no room for any other causes to be at play.

So, then, what does it matter what you are thinking? You cannot push the atoms in your brain around with a thought. You cannot override the fundamental laws of physics or exert some ghostly control over the basic constituents of matter. According to this view, the very idea of mental causation — of the content of your thoughts and beliefs and desires mattering in some way — is a naive superstition, a conceptual hangover inherited from philosophers like the famous dualist René Descartes.

I am not willing to give up on free will so easily. In this book I argue that we really are agents. We make decisions, we choose, we act — we are causal forces in the universe. These are the fundamental truths of our existence and absolutely the most basic phenomenology of our lives. If science seems to be suggesting otherwise, the correct response is not to throw our hands up and say, “Well, I guess everything we thought about our own existence is a laughable delusion.” It is to accept instead that there is a deep mystery to be solved and to realize that we may need to question the philosophical bedrock of our scientific approach if we are to reconcile the clear existence of choice with the apparent determinism of the physical universe.

But if we want to solve this mystery, humans are the absolute worst place to start. It is a truism in biology to say that nothing makes sense except in the light of evolution — and this is surely true of agency. Instead of trying to understand it in its most complex form, I go back to its beginnings and ask how it emerged, what the earliest building blocks were, and what the basic concepts should be. How can we think about things like purpose and value and meaning without sinking into mysticism or vague metaphor? I argue that we can do so by locating these concepts in simpler creatures and then following how they were elaborated over the course of evolution, increasing in complexity and sophistication as certain branches of life developed ever-greater autonomy and self-directedness.

Indeed, before tackling the question of free will in humans, we have a much more fundamental problem to solve. How can any organism be said to do anything? Most things in the universe don’t make choices. Most things — like rocks or atoms or planets — don’t do anything at all, in fact. Things happen to them, or near them, or in them, but they are not capable of action. But you are. You are the type of thing that can take action, that can make decisions, that can be a causal force in the world: You are an agent. And humans are not unique in this capacity. All living things have some degree of agency. That is their defining characteristic, what sets them apart from the mostly lifeless, passive universe. Living beings are autonomous entities, imbued with purpose and able to act on their own terms, not yoked to every cause in their environment but causes in their own right.

To understand how this could be, we have to go right back to the beginning, to the very origins of life itself. From the chemistry of rocks and hydrothermal vents — the chemistry of the evolving planet itself — life emerged as systems of interacting molecules, interlocked in dynamic patterns that became self-sustaining. The ones that most robustly maintained their own dynamic organization persisted, replicated, evolved. They became enclosed in a membrane — a tiny subworld unto themselves — exchanging matter and energy with their environment while protecting an internal economy and reconfiguring their own metabolism to adapt to changing conditions. They became autonomous entities, causally sheltered from the thermodynamic storm outside and selected to persist.

A new trick was invented: action, the ability to move or affect things out in the environment. Information became a valuable commodity, and mechanisms evolved to gather it from the environment. With that came the crude beginnings of value and meaning. Movement toward or away from various things out in the world became good or bad for the persistence of the organism. These responses were selected for and became wired into the biochemical circuitry of simple creatures.

As multicellular creatures evolved, a class of cells — neurons — emerged that specialized in transmitting and processing information. Initial circuits acted as internal control systems, designed to coordinate the various muscles or other moving parts of the multicellular animal and defining a repertoire of useful actions. At the same time, neurons coupled various sensory signals to specific actions in this repertoire, hardwiring adaptive instincts for approach or avoidance.

With the elaboration of the nervous system, this kind of pragmatic meaning eventually led to semantic representations. Perception and action were decoupled by layers of intervening cells. Instead of being acted on singly and immediately like a reflex, multiple sensory signals could be simultaneously conveyed to central processing regions and operated on in a common space. Circuits were built that integrated, amplified, compared, filtered and otherwise processed those signals to extract information about what was out in the world and what that meant for the organism. More and more abstract concepts were extracted — not just things, but also types of things and types of relations between them. Creatures capable of understanding emerged.

Meaning became the driving force behind the choice of action by the organism. That choice is real: The fundamental indeterminacy in the universe means the future is not written. The low-level forces of physics by themselves do not determine the next state of a complex system. In most instances, even the details of the patterns of neural activity do not actually matter and are filtered out in transmission. What matters is what they mean — how they are interpreted by the criteria established in the physical configuration of the system. Animals were now doing things for reasons.

That causal power does not come for free: It is packed into the organism through evolution, through development and through learning. It is encoded in the genome by the actions of natural selection. And it is embodied in the physical structure of the nervous system in the strength of neuronal connections that express functional criteria in relation to a hierarchy of aims of the organism. There is nothing here that violates the laws of physics; it just demands a wider concept of causation over longer time frames and an understanding that the dynamic organization of a system, which encodes meaning, can constrain and direct the dynamics of its component parts.


And yes, your actions are at any given moment constrained by all those prior causes. Yet you could just as well say, more positively, that they are informed by prior experience. That is precisely the property that sets life apart from other types of matter: Living things literally incorporate their history into their own physical structure to inform future action. For those who would argue this impinges on the freedom of the self to decide at any moment, I counter that it is this very process that enables the self to exist at all. There is no self in a given moment: The self is defined by persistence over time.

And though you are configured in a certain way that reflects all this history, you are not hardwired. We humans have the remarkable capacity for introspection and metacognition. We can inspect our own programming, treating goals and beliefs and desires as cognitive objects that can be recognized and manipulated. We can think about our own thoughts, reason about our own reasons and communicate with each other through a shared language. We can access the machine code running in our brains by translating high-level abstract concepts into causally efficacious patterns of neural activity. This gives a physical basis for how decisions are made in real time, not just as the outcome of complex physical interactions but also for consciously accessible reasons, and it provides a firm footing for the otherwise troublesome concept of mental causation.

So, if you want to know what kind of thing you are, you are the kind of thing that can decide. Not just a collection of atoms pushed around by the laws of physics. Not a complex automaton whose movements are determined by the patterns of electrical activity zipping through its circuits. And not an NPC, unknowingly driven by its programming. You are a new type of thing in the universe — a self, a causal agent. In the game of your life, you are Player One.

From reductionism to dynamical systems

Mike's Notes

"For The Transmitter’s first annual book, five contributing editors reflect on what subfields demand greater focus in the near future—from dynamical systems and computation to technologies for studying the human brain." - Transmitter

One of those authors was Nicole Rust, who wrote "The field is increasingly embracing the notion that the brain is a “complex dynamical system” where causes lead to effects that feed back as causes—this happens through feedback loops within the brain and interactions between the brain and the environment. From ecology, engineering and other fields, we know that when complex dynamical systems go awry, they can be exceedingly difficult to restore. Tackling that challenge will be the key to developing treatments for the billions of people with brain conditions of nearly every type, from Parkinson’s disease to psychosis." - Nicole Rust Transmitter

That got me curious, so I looked up what else she had written and discovered the article below.

"Nicole Rust is professor of psychology at the University of Pennsylvania in Philadelphia. Her research focuses on understanding the brain’s remarkable ability to remember the things we’ve seen and using that knowledge to develop new therapies to treat memory dysfunction. She is also writing a book on the types of understanding of the brain that will ultimately be required to treat neurological and psychiatric conditions. In it, she argues that effective progress in brain research will require ambitious and unprecedented multidisciplinary conversations of the type that will appear in The Transmitter.

Rust received her Ph.D. in neuroscience from New York University and completed her postdoctoral training at the Massachusetts Institute of Technology. She has been recognized by the Troland Research Award from the National Academy of Sciences, the McKnight Scholar Award, a CAREER Award from the National Science Foundation, a Sloan Research Fellowship, the Charles Ludwig Distinguished Teaching Award, and election to the Memory Disorders Research Society." - Transmitter

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The Transmitter
  • Home > Handbook > 

Last Updated

17/05/2025

From reductionism to dynamical systems: How two books influenced my thinking across 30 years of neuroscience

By: Nicole Rust
The Transmitter: 26/08/2024

I first read Francis Crick’s “The Astonishing Hypothesis,” published in 1994, as a disenchanted undergraduate student. I knew that I wanted to change my major from chemical engineering, but I was unsure about what I wanted to switch to. In Crick’s book, I found the answer in spades. In fact, 30 years later I can still recite key lines from “The Astonishing Hypothesis” from memory: “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.” Though I wasn’t completely convinced that the hypothesis was correct, the notion that I could make a career out of studying it hit me as a profound insight. I still have my original copy of Crick’s book, complete with a dozen or so Post-it notes marking the most important passages.

I eventually ended up studying vision and memory rather than consciousness per se, but that tattered copy has continued to be influential across my career. I read it most recently a few years ago, when I was contemplating writing a book myself. Though I still very much respect the brilliance of Crick’s book, what struck me on that recent reread was how outdated it had become. I view that not as an indictment but as evidence of the evolution of our field. “The Astonishing Hypothesis” reflects an era of 1990s neuroscience in which researchers shifted toward thinking about the wonders of the brain and mind in ways that were more mechanistic and scientifically falsifiable. That approach tended to try to reduce every phenomenon to a simple explanation, such as the expression of a gene or the activity of a brain area. For instance, Crick proposed that the decisions we make may not be freely decided but instead predetermined, and that our illusion of free will may arise from what happens in the anterior cingulate cortex.

What is the modern alternative? Enter Kevin Mitchell’s 2023 book “Free Agents: How Evolution Gave Us Free Will,” which reflects a shift toward thinking about the brain as a complex dynamical system with emergent properties that defy reduction to simple elements. In “Free Agents,” Mitchell spells out how will that is truly free could exist in a dynamical brain in which agency has evolved across millions of years. (The Transmitter published an excerpt from “Free Agents” last year.)

The gist of Mitchell’s proposal is that higher-level mental states do not simply emerge from the activation of neurons and their interactions, but that they also influence how brain activity will evolve in the future. In Mitchell’s account, noise allows for multiple possible future brain states, and top-down causal influences determine which one happens. It’s this process that creates free will. A long-held objection to this type of idea is that mental states cannot both emerge from brain activity and also simultaneously cause it, because in that argument, causality is “circular.” Central to Mitchell’s argument is that top-down influences do not act instantaneously but instead shape the future of the system, like a “spiral” that unfolds in time. This provocative proposal has inspired me to think about how we might test ideas about “spiraling” causality—to explain not just free will, but also functions such as seeing, remembering and feeling.

I’m very much looking forward to revisiting Mitchell’s book in 30 years to see how well it holds up. My hope is that we will have either verified that the framework is correct, or (as with Crick’s book) we will regard it as pioneering but dated, and we will have moved on to a better alternative. Either would reflect tremendous progress in our field. I can’t wait to find out.