Planned Outage 16 July

This is a notification that we will be performing maintenance on your server.

Date: July 16, 2025, between 2:00 PM and 3:00 PM EST (Eastern Standard Time)

Affected Domain: ajabbi.com

Affected Server: sgp200.greengeeks.net

Outage Expected: ~30-60 minutes.

Reason: MariaDB (Database Server) Upgrade 10.6 --> 11.4.

Additional Notes: This is preventative maintenance that is required to avoid potential future issues.

Thank you, 

The GreenGeeks Team

https://www.greengeeks.com

The Vital Question by Nick Lane – a game-changing book about the origins of life

Mike's Notes

Note

Resources

References

  • The Vital Question. Nick Lane

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Fundamentals Biology
  • Home > Ajabbi Research > Library > Authors > Nick Lane
  • Home > Handbook > 

Last Updated

15/07/2025

The Vital Question by Nick Lane – a game-changing book about the origins of life

By: Peter Forbes
The Guardian: 22/04/2015

Why sex? Why then only two sexes? Why do we age and die? It’s the energy, stupid! This thinking is as important as the Copernican revolution

The “Origin of Life” is a conundrum that could once be safely consigned to wistful armchair musing – we’ll never know so don’t take it too seriously. You will probably imagine that it’s still safe to leave the subject in this speculative limbo, without very much in the way of evidence.

You’d be very wrong, because in the last 20 years, and especially the last decade, a powerful new body of evidence has emerged from genomics, geology, biochemistry and molecular biology. Here is the book that presents all this hard evidence and tightly interlocking theory to a wider audience.

While most researchers have been bedazzled by DNA into focusing on how such replicating molecules have evolved, Nick Lane’s answer could be characterised as “it’s the energy, stupid”. Of all the definitions of life, the one that matters most concerns energy: the churn of metabolic chemistry in the cells and the constant intake of nutrients and expulsion of waste are the essence of life. Information without energy is useless (pull the plug on your computer); information could not have started the whole thing off but energy could.

It is widely recognised that the creation of a viable primitive living cell, capable of reproduction and Darwinian selection, has three requirements: a containing membrane, which acts as an interface between the organism and the environment; replicators able to store the genetic instructions for the organism and to synthesise its chemical apparatus; and a way of taking energy from the environment and putting it to work to run the cell’s processes. Lane shows how all the rest can follow if we put energy first.

He is a researcher in evolutionary biochemistry at University College London who has been developing his grand energy theory of life, the universe and everything for more than two decades, explaining it in the books Oxygen (2002), Power, Sex, Suicide (2005) and Life Ascending (2009), which won the Royal Society book prize in 2010. He is an original researcher and thinker and a passionate and stylish populariser. His theories are ingenious, breathtaking in scope, and challenging in every sense. To read him, it helps, as Richard Dawkins once said of himself when embarking on an intricate passage in The Blind Watchmaker, to bring your “mental running shoes”.

Lane’s research on the energy reactions of living cells has brought him to a theory that can account for some of life’s biggest mysteries: why sex? Why then only two sexes? Why do we age and die? Why are the mitochondria, the cell components that produce all our energy, only inherited from the female line (the male mitochondria being destroyed in the germ cells)? Why do those same mitochondria – once fully fledged, free living bacteria with at least 1,500 genes (before they merged with another cell 1.7-2bn years ago to create the possibility of multicellular life) – have only 13 protein-coding genes left?

Lane has the most plausible answers to these questions so far, but the greatest detective story is that of life’s origin. The evidence now is highly detailed: the essential biochemical machinery of life is known down to the last atom; the remarkable large protein complexes that catalyse the cascade of energy reactions have been, thanks to x-ray crystallography, charted in atomic detail. What these precise structures reveal are clues such as the existence of mineral centres in the otherwise proteinaceous complexes of life’s vital enzymes: iron sulphide is found at the heart of the respiratory enzymes. Why is that significant?

Because the most plausible location for where life on Earth began is the alkaline hydrothermal vents near the Mid-Atlantic Ridge, on the deep ocean floor, and other such formations. These structures, discovered only in 2000 after being predicted by the pioneering geochemist Mike Russell at Nasa’s Jet Propulsion Laboratory, have the right credentials: masses of warm energetic minerals pour out of the ocean bed and form calcium carbonate chimneys full of micropores. In the conditions of the primitive world, they would also have contained the ingredients necessary to create organic chemicals, the precursors of life; the micropores would have contained and concentrated them and the hot chemicals that spewed forth, rich in iron and sulphur, would have created energy gradients.

Russell is one of the key figures in this developing story, along with Lane himself, Bill Martin at the University of Düsseldorf, and Lane’s colleague at UCL Andrew Pomiankowski. If Lane and his colleagues are right on the origin of life, what of the other puzzles: why do animals have sex, grow old and die? The answer, to paraphrase Kenneth Williams’s farmer character in Round the Horne, “lies in the mitochondria”. It is the biochemical mechanisms and structures that evolved from those energetic deep-ocean outpourings that power our cellular batteries, the mitochondria, today. You’re most likely to have heard of them through the recent controversial therapy of mitochondrial replacement. There might only be 13 mitochondrial genes left (the rest have all been incorporated in our main genome or rendered useless by mutation) but that still means that we have two genomes, not one. In fact, the commonly used but misleading term for mitochondrial replacement therapy – “three-parent babies” – would be better described as “Two Parents and 13 Genes Left Over from a 2bn-Year-old Mitochondrion”. Which isn’t to deny the significance of the mitochondrion and its 13 genes; as Lane explains, the subtle interactions between the two genomes can account for all the mysteries of multicellular life.

It might provide all our energy but, genetically, the mitochondrion is a cuckoo in the nest: it has its own genome and reproduces, bacteria-style, without sex. Sex evolved in order to shuffle our genes every generation, allowing us to keep good mutations and lose bad ones. But the energetic, sexless, cuckoo mitochondrion can’t do this. Bad mutations in your battery are extremely dangerous: that’s why most of the genes have dropped out of the mitochondrion into the main genome, so that they can enjoy the advantages of sex. The rump genes, though, have to be close to the mitochondrial machinery for the system to work and it is these genes, when faulty, that would be replaced in mitochondrial therapy.

Why do we only inherit them from the mother? Because her eggs are formed only once, at birth, whereas men make sperm throughout their lives, creating many more opportunities for mutation. This is why male mitochondrial genes are deleted in the sperm cells. Lane goes on to explain how our weird mitochondrial inheritance explains the other great puzzles.

There will be those who question the book’s title: “The Vital Question”? But intellectually what Lane is proposing, if correct, will be as important as the Copernican revolution and perhaps, in some ways, even more so. Life, seen in energetic terms, is a process of reducing carbon dioxide with hydrogen to create biomass and all the interesting consequences that follow from it (us, for instance). The future of life on the planet now seems to hinge on one life‑form (us again) learning to copy this process as a substitute for all that fossil fuel we’ve been burning. There’s a poetic symmetry in this (“in my beginning is my end”) and the work on the origin of life feeds the work on solar biosynthesis. But get this wrong and we’ll have to update TS Eliot: “Our end was our failure to learn from our beginning.”

Adobe Tracker

Mike's Notes

Pipi has no way for users to:

  • Search for bugs or feature requests
  • Add a bug
  • Add a feature request

I read in Feedspot a link to a recent post by Ben Nadel that led to a bug report in Adobe Tracker. I wrote some notes about Adobe Tracker to remind me to design and add a Pipi Tracker.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Feedspot
  • Home > Handbook > 

Last Updated

14/07/2025

Adobe Tracker

By: Mike Peters
On a Sandy Beach: 14/07/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Search Example

This is what I could make of the search form in Adobe Tracker without being logged in:

  • Product (Select from a list)
  • Type (Bugs & Features | Bugs Only | Features Only)
  • Version (Select from a list)
  • Priority (Select from a list)
  • System (Select from a list)
  • Frequency
  • Browser (Select from a list)
  • System (Select from a list)
  • Title
  • Reported By
  • Description
  • Test Configuration
  • Component (Select from a list)
  • Failure Type (Select from a list)
  • Reason
  • Fixed in Build
  • Found in Build
  • Created At (= | < | >)
  • Votes (= | < | >)
  • Attachments (= | < | >)

Bug Example

This is a copy of a search result showing a recent bug reported by Ben Nadel.

ID CF-4227145
Title Closures do not work properly in custom tags
Description

Problem Description:

A closure, defined in a custom tag, does not close over its lexical scope; and fails to retain binding to custom tag page context when passed out of scope.

Steps to Reproduce:

My example: https://bennadel.com/4821

1. Define a closure inside a custom tag that references a closed-over variable.

2. Pass closure out of context (using `caller` scope as example)

3. Try to invoke closure from calling context.

Actual Result:

Error: closed-over variable is not defined.

Expected Result:

Closed-over variable should be available.

Any Workarounds:

Not that I've been able to find.

Comments
Status: Open
Details
Date Created: 02/07/2025
Component: Core Runtime
Version: 2025
Failure Type:
Found In Build: 2021 and 2025
Fixed In Build:
Priority: Normal
Frequency:
System:
Browser:
Reason Code:
Votes: 1
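
As a starting point for the Pipi Tracker design mentioned above, the fields visible in the Adobe Tracker search form and bug record could be captured as a simple record type. This is only a minimal sketch in Python; the class name, field names, enumerations, and the non-"Open" status values are my assumptions drawn from the fields listed above, not an existing Pipi or Adobe API.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional

class IssueType(Enum):
    BUG = "Bug"
    FEATURE = "Feature"

class Status(Enum):
    OPEN = "Open"
    FIXED = "Fixed"      # values beyond "Open" are assumed, not taken from Adobe Tracker
    CLOSED = "Closed"

@dataclass
class TrackerIssue:
    # One issue record, modelled on the Adobe Tracker fields shown above.
    issue_id: str
    title: str
    description: str
    issue_type: IssueType = IssueType.BUG
    status: Status = Status.OPEN
    product: Optional[str] = None        # the "select from a list" fields
    version: Optional[str] = None
    component: Optional[str] = None
    priority: Optional[str] = None
    failure_type: Optional[str] = None
    found_in_build: Optional[str] = None
    fixed_in_build: Optional[str] = None
    reported_by: Optional[str] = None
    created_at: Optional[date] = None
    votes: int = 0
    attachments: List[str] = field(default_factory=list)

# Example: the bug from the search result above.
issue = TrackerIssue(
    issue_id="CF-4227145",
    title="Closures do not work properly in custom tags",
    description="A closure defined in a custom tag does not close over its lexical scope.",
    component="Core Runtime",
    version="2025",
    found_in_build="2021 and 2025",
    priority="Normal",
    created_at=date(2025, 7, 2),
    votes=1,
)
print(issue.status.value)   # "Open"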

The Power Law of Learning: Consistency vs. Innovation in User Interfaces

Mike's Notes

More UI wisdom from NN Group.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The NN/g Newsletter
  • Home > Handbook > 

Last Updated

13/07/2025

The Power Law of Learning: Consistency vs. Innovation in User Interfaces

By: Raluca Budiu
NNGroup: 30/10/2016

Raluca Budiu is Senior Director, Data Strategy, at Nielsen Norman Group, where she uses her data-analysis expertise to drive strategic decisions. She also serves as editor for the articles published on NNgroup.com. Raluca has coauthored many NN/g reports, as well as the book Mobile Usability. She holds a Ph.D. from Carnegie Mellon University.

Summary:

Across many tasks, learning curves show an initial learning period, followed by a plateau of optimal efficiency. New interfaces compete with much-practiced old ones that have already reached this plateau.

In response to one of our recent articles on centered logos vs. left-aligned logos, one reader tweeted: “Every @NNgroup newsletter, summarized: Do things exactly the way every other website already does them, or your users will be confused.”

Although obviously a hyperbole (we publish articles on a wide variety of topics), the tweet does identify a big theme in many of our articles and one of the main principles of user experience: consistency. Consistency is one of the original 10 usability heuristics and is a corollary of Jakob’s law of Internet use. As disappointing as it may be for designers to have to follow the same beaten path, we preach consistency again and again for reasons deeply rooted in basic human behavior. In this article, we explain the most fundamental of these reasons: the power law of learning.

In This Article:

  • Learning Studies and Learning Curves
  • The Power Law of Learning
  • Analyzing a Learning Curve
  • Memory and the Power Law of Learning
  • Consistency = Boring, Old Interfaces?
  • References

Learning Studies and Learning Curves

The best way to measure how people learn a task or an interface is by running a learning experiment. In a learning experiment, people come into the lab and do the same task multiple times. Each time the person does the task, the experimenter records one or more quantitative metrics (usually, the time it takes to do that task and the number of errors). If the measures get better, people have learned from their previous experience, and the numbers show how much. The repetitions of the same tasks may or may not be separated by different activities, and sometimes participants are even sent home between two measurements, and asked to come back after a day, a week, or a month.

One of the first rigorous learning experiments was described by Hermann Ebbinghaus in 1885 in his book on human memory. Since then, many other learning experiments have been reported in the psychology, human-factors, and HCI literature. All these experiments show that “practice makes perfect:” when people do the same task over and over again, they get better and faster. The chart below shows the results from one such study by David Ahlstrom and colleagues, who were investigating pie menus and comparing them with other types of menus.

Ahlstrom et al.’s study had participants interact with the same menu interface for 8 different practice blocks — in each block participants selected the same 6 items in the menu, and the obtained selection times were averaged to get the mean time for that block. The learning curve shows that the mean selection time decreases with practice.

This type of graph that plots the results from a learning experiment is a learning curve. A learning curve describes how a specific quantitative measure of the same human behavior changes as a function of time. In the menu experiment, the measure of interest is the task time — the mean time to select an option inside the menu. But the measure can vary from one learning experiment to another: it can be any metric that you’d expect to change as a result of learning. For example, if we were interested in UX education, we may ask people with the same background to facilitate a user test once, and measure how many facilitation errors they make. We would give them feedback on those errors, and then we would ask them to come a second time to facilitate a user test. After a few such repetitions, we would plot the average number of errors made for each test. That graphic of errors over time would represent a learning curve.

The Power Law of Learning

In the 1980s, Allen Newell, a famous Carnegie Mellon cognitive scientist, analyzed reaction times for a variety of tasks reported in learning experiments and he noted that the learning curves obtained in all these studies had a very similar shape: that of a so-called power law. Power laws have a nice mathematical property: when you plot them in log-log scale, you obtain a straight line.
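
In symbols (my notation, not the article's): if T(n) is the mean time to perform the task on the n-th repetition, the power law takes the form

T(n) = T(1) × n^(-α), with α > 0

so that

log T(n) = log T(1) - α × log n

which is exactly the straight line, with slope -α, that appears when the curve is plotted in log-log scale.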

The learning curve in Ahlstrom’s menu experiment is described by a power law; when plotted in log–log scale, it is well approximated by a straight line.

Definition: The power law of learning says that (1) the time it takes to perform a task decreases with the number of repetitions of that task; and (2) the decrease follows the shape of a power law.

Newell focused primarily on time as the quantitative measure of learning, but there is evidence that the power law holds for other measures as well.

Analyzing a Learning Curve

Although learning curves can be described by power laws, they won’t be described by the same power law.

Let’s assume that we were interested in analyzing the learnability of three different interfaces A, B, and C for the same task (e.g., answering a customer query in a call center). For each design, we ask participants to complete the task in different, repeated trials and we measure the task time for every trial. Then we plot the average task time corresponding to each repetition and we obtain three learning curves like the ones in the figure below.

Three learning curves for three different interfaces

Notice that in the first trial participants take roughly the same amount of time with all designs. But by the second repetition, design A is much faster than designs B or C. By the 3rd repetition design A speeds up even more, and after the 4th repetition the reaction times reach a plateau: the curve flattens out and the users have learned the interface as much as possible. There are no more improvements to be expected, and extra repetitions will only decrease the reaction time insignificantly. We can say that, with design A, learning is saturated after the 4th repetition (or that 4 is the saturation point for design A).

The learning curve for design C also flattens, but the plateau is reached later, by approximately the 10th or 11th repetition. So design C requires more practice to stabilize the performance. In other words, it takes people more trials to learn how to use design C than design A.

Moreover, design A exhibits more improvement: the difference between the highest and the lowest points on the learning curve is approximately 20s (22 for repetition 1 and 2 for repetition 15), whereas for design C this difference is approximately 19s. So with design C participants don’t speed up as much as they do with design A.

Design B also reaches saturation later than design A (approximately by repetition 11), but the improvement is bigger. More importantly, the expected task time after the interface has been learned is lower for design B than for design A (1s vs. 2s). In other words, design B takes longer to learn than design A, but once it has been learned, people are faster at using it.

The choice between A and C is easy: A is better in every way, with an earlier saturation point, a greater speedup, and a superior task time once the interface has been learned. But the choice between A and B depends on whether in real life users will be exposed to enough repetitions to reach the saturation plateau. If, for instance, you expect people to use the interface every day as part of their work, then it makes sense to go with design B, because in the long run it will save more time. (During the first week of use, A will be better, but halfway through the second work week B becomes better, and then it stays better forever.) However, if your users will use the design occasionally, with large intervals of time between two different sessions, then design A will be better because it will help people learn the interface faster.
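
To make the A-versus-B trade-off concrete, here is a short sketch (in Python) that models per-trial task time as a plateaued power law and finds the repetition at which design B first becomes faster than design A. The starting times, plateaus, and exponents are illustrative guesses chosen to roughly match the curves described above, not data from any NN/g study.

import numpy as np

def trial_time(n, start, plateau, alpha):
    # Per-trial task time (seconds) on the n-th repetition, as a plateaued power law.
    return plateau + (start - plateau) * n ** (-alpha)

reps = np.arange(1, 101, dtype=float)
# Illustrative parameters only: A starts at 22 s and quickly plateaus near 2 s;
# B also starts at 22 s but plateaus near 1 s and takes longer to get there.
time_a = trial_time(reps, start=22.0, plateau=2.0, alpha=2.2)
time_b = trial_time(reps, start=22.0, plateau=1.0, alpha=1.3)

crossover = int(reps[time_b < time_a][0])   # first repetition where B is faster per trial
print("B becomes faster per trial at repetition", crossover)
print("Total time over 100 repetitions: A =", round(float(time_a.sum())),
      "s, B =", round(float(time_b.sum())), "s")

With these made-up parameters, B overtakes A per trial around repetition 10 and wins on total time over a long horizon, which is the shape of the trade-off the article describes.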

Let’s consider two enterprise software examples:

  • The employee directory: assuming that most people use it once a day, we should prefer a user interface like design B for this type of application.
  • Reclaiming value-added taxes for foreign travel in expense reporting: we should prefer a user interface like design A if most people go on, say, at most one business trip abroad each year.

The ratio between the improvement and the saturation point indicates the slope of the learning curve: if the curve drops a lot and fast, then it means that the interface is highly learnable. If the curve drops only a little, or it takes many trials to reach the saturation point, the interface is less learnable. So the term “steep learning curve” is actually a misnomer — in reality, steep learning curves are good. They mean that the improvement is substantial and that it happens fast.

Memory and the Power Law of Learning

Romans used to say that “repetition is the mother of learning” — the more we rehearse a piece of information, the more likely we’ll be to remember it. Not only that, but we’ll also be faster at retrieving it from memory. When applied to human memory, the power law of learning says that the time it takes to retrieve a piece of information from memory depends on how much we’ve used that information in the past, and this dependence follows a power law. Thus, we are fluent with concepts and patterns that we use every day in our work, yet high-school math (such as the definition of logarithms) or other facts that we don’t often encounter are hard to remember, because we haven’t used them often enough.

In an experiment reported by Peter Pirolli and John R. Anderson (1985), the time it took participants to recognize facts that they had studied decreased with the number of days they had practiced those facts. The curve follows a power law and reaches a saturation level approximately around day 12.

Items that are practiced a lot acquire a high activation in our memory and are retrieved faster. Whenever we’re trying to solve a problem or recall a piece of information, the first things that come to mind are those items in our memory that have a raised activation. Let’s say you want to navigate to the homepage of a site. You may have encountered multiple solutions to this problem in the past — for example, clicking the logo or clicking a Home link. All these solutions will compete in a “race” in your memory and you will select the one that gets to the finish line first. But, based on the power law of learning, the one that wins the race is the one that’s been practiced most often. Of course, if the first winning solution doesn’t work, people will try the next best. But they will also start feeling annoyed and perceive the problem as harder, and the site as less usable.

The key implications of this research are as follows:

  • The power law of learning is real: it’s been proven in countless experiments during a very long period (the 19th, 20th, and 21st centuries). It’s the way the human brain works, and no degree of wishful thinking or new gadgets will change this. Design for it.
  • Learning is not a dichotomy (as a simplistic model might have assumed), where either you know something or you don’t. The more something is practiced, and the more recently it was practiced, the better it’s known.
  • Just showing users a tutorial or help screen isn’t enough to make them learn something well.
  • Doing something often is the way to strong learning.

Consistency = Boring, Old Interfaces?

We’ve seen that every repetition helps users practice a concept or an action. So, by being consistent with other sites, you’re giving users one more repetition of a highly common UI element, and you’re also reaping the benefits of practice on all these other websites. Remember Jakob’s law: your users spend most of their time on other websites.

Learning curves for two different interfaces: when the new interface is introduced, the old one is already at saturation level (repetition 1 for the new interface corresponds to repetition 5 for the old one). It’s going to take a lot more time and good will for the user to put up with the new suboptimal interface than to continue using the old one.

As shown in the graph above, when you are creating a new design pattern that goes against an old, familiar one (e.g., logo on the right of the page instead of on the left, horizontal scrolling instead of vertical scrolling on desktop, hamburger menu instead of a navigation bar on desktop), the learning curve for the new pattern will be in the steep, high part of the first repetitions, while the learning curve for the competing old alternative will have already reached saturation. It’s going to take more than a few repetitions for your new design to also reach saturation and perhaps prove better than the competing one. Unless your users are captive and you can force them to practice, chances are that they will give up and go elsewhere instead of putting up with a harder to use design: users hate change.

So that means there’s no hope for innovation, right? We are doomed to have the search box in the top right corner, the logo on the left, and the navigation in a bar?

Any type of innovation will incur a cost for users and for designers. For users, because it will be a new pattern that they must learn and that takes them on an untrodden, slow path. For designers, because they must provide extra scaffolds such as contextual tips and progressive disclosure to help users navigate on the new path. The cost of implementing these tools can be significant. Think twice about what you are trying to achieve: is it worth departing from the well-beaten path? Does it make sense to innovate or will you be just as well served by a traditional design?

It also means that innovation is easier to push when you have a captive audience or when the perceived value of a brand is a lot bigger than the cost of using a new design pattern. That’s why, traditionally, big companies with a large user base (think Apple and iOS or, to a lesser extent, Google and Android) can afford to innovate — because people who are already using these platforms will have to put up with the new interface (especially if the company is pushing updates aggressively, like Apple does with iOS, or the innovation happens in an enterprise, where users don’t have a choice to go back to an older version of the interface).

It’s also easier to innovate if your users will experience the new interface very often, perhaps several times a day. That means that people will get to the saturation part of the learning curve faster because they will have quite a few opportunities to practice. (Yet, if the saturation point is too far in the future, people may actually never get there. Windows 8 is living proof of that: Microsoft ended up changing the design instead of waiting for users to reach the saturation plateau.)

Innovation can also happen if designers adopt it en masse and create a new standard. If all websites rebelled tomorrow and started placing the logo in the top right corner, then users would get the repetitions needed to reach saturation relatively fast, everywhere. Usually, this process takes time, but it did happen with design elements such as the swipe-to-delete gesture in iOS or the hamburger menu on mobile.

We can make a simple decision tree for whether to introduce a deviant user interface in cases where a conventional design is already well established:

1. Will the new design perform much better than the old, once users have “descended” the learning curve? If not, don’t even try.

2. Is it credible that users will be willing to try the new design again and again, until they have learned it well enough to realize those long-term benefits? If people are likely to give up (e.g., leave a website for a competing, familiar design), then don’t introduce the new design.

3. Can you speed up learning, either by exposing users to the new design more often or by making it easier to learn? If yes, you will increase the proportion of users who will be willing to embrace the new design.

So yes, consistency is the curse of innovation in design. If you’re convinced that, once your users have learned the interface, they will save time over the status quo, then it can be worth trying. But remember that the path to innovation is circuitous and costly, and if your users won’t have many learning opportunities, they may never reach that optimal-performance plateau accessible only after learning has happened.

References

David Ahlstrom, Andy Cockburn, Carl Gutwin, Pourang Irani (2010). Why It’s Quick to Be Square: Modelling New and Existing Hierarchical Menu Designs. CHI 2010.

Hermann Ebbinghaus (1885). Memory: A contribution to experimental psychology. New York: Dover.

Allen Newell, Paul Rosenbloom (1980). Mechanisms of skill acquisition and the law of practice. Technical Report. School of Computer Science, Carnegie Mellon University.

Peter Pirolli, John R. Anderson (1985). The role of practice in fact retrieval. Journal of Experimental Psychology: Learning, Memory, & Cognition, 11, 136-153.

Wiring the Winning Organization: The Hidden Management System Behind Extraordinary Performance

Mike's Notes

I'm going for stable, autonomous, self-managing teams. Here is more evidence from IT Revolution.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > IT Revolution
  • Home > Handbook > 

Last Updated

12/07/2025

Wiring the Winning Organization: The Hidden Management System Behind Extraordinary Performance

By: Leah Brown
IT Revolution: 30/06/2025

Steve Spear is Principal at HVE LLC, Founder of See to Solve, and co-author of “Wiring the Winning Organization.” His research on high-velocity learning and problem-solving leadership has influenced organizations from Toyota to NASA to the U.S. Navy.

Leah Brown is Managing Editor at IT Revolution, working on publishing books and guidance papers for the modern business leader. She also oversees the production of the IT Revolution blog, combining the best of responsible, human-centered content with the assistance of AI tools.

The number one predictor of organizational success isn’t technology, resources, or even talent—it’s how fast you can solve problems.

In a recent presentation at Prodacity 2025, Steve Spear—coauthor of Wiring the Winning Organization and longtime student of high-performing organizations—shared a startling discovery that challenges everything we think we know about competitive advantage. His research spanning Toyota production plants, NASA missions, Navy shipyards, and technology giants reveals that when organizations have the same resources, technology, and constraints, the winners are distinguished by one thing: their ability to identify and solve problems at high velocity.

The Half-In, Twice-Out Discovery That Changes Everything

Spear’s journey began in the late 1980s when fellow MIT graduate student John Krafcik studied all 186 final assembly plants worldwide. What he found was remarkable: while 181 plants required roughly the same inputs to produce the same outputs, five plants achieved something extraordinary—with half the people, half the physical space, and half the capital equipment, they produced twice the output. This “half in, twice out” performance was all achieved by Toyota plants.

But the advantage was even more profound than the “number four” suggests. These plants didn’t just double productivity—they achieved:

  • Higher initial quality, with hundreds to thousands fewer defects
  • Better durability in their finished products
  • Greater agility switching between models in half the time

This wasn’t about being Japanese or making cars. The same pattern emerged across industries and continents: Nokia versus Apple, Yahoo versus Google, organizations before and after transformation. The only variable that consistently explained extraordinary performance was the management system.

Redefining Leadership: From Hero to Steward

What separates winning organizations from the rest isn’t visionary leadership in the traditional sense—it’s leaders who understand their fundamental role differently. When Spear visited Toyota’s San Antonio plant, he asked the new site president about her legacy goals. Her response was illuminating:

“Legacy? I’m a steward. I am temporarily responsible for the management system that allows all these thousands of people’s individual efforts to come together in harmony every day. I just want to make sure when I leave, this system has a better shine than when I found it.”

This stewardship mindset extends to a core paranoia about organizational capability. As the Toyota executive explained, “Because of the number of problems we have, I’ve got to make sure we have a lot of good problem solvers. That’s my concern, that’s my paranoia. Are we developing people everywhere, all the time, to be wickedly good problem solvers?”

The Social Circuitry Problem

Organizations excel at engineering technical systems—the machines, software, and instrumentation that act on objects. But Spear identifies a critical blind spot: the “social circuitry overlay” of processes and procedures that determine whether individual genius translates into collective success.

Most work can’t be done unless we harmonize individual effort into collective action. It’s the processes, procedures, routines, and norms that determine whether we create conditions for people to give fullest expression to their ingenuity, creativity, and problem-solving skill—what Spear and co-author Gene Kim explore extensively in Wiring the Winning Organization.

Too often, organizations inadvertently create what Spear calls the “danger zone”—conditions that make effective problem-solving nearly impossible:

  • Time pressure that forces reactive responses instead of thoughtful solutions
  • High stakes that make experimentation too risky
  • Complexity that overwhelms individual cognitive capacity
  • Isolation that prevents learning from others’ experiences

Three Mechanisms for Escaping the Danger Zone

Spear outlines three key mechanisms that winning organizations use to create optimal problem-solving conditions:

1. Slowification: Taking Control of Time

Our brains can do things very quickly, but only things that are already muscle memory. When you’re triggered in an unfamiliar situation, you’re going to behave very badly. Leaders must engineer situations where people have time for deliberation, repetition, contemplation, and feedback processing.

2. Simplification: Breaking Down Complexity

Taking really big problems and breaking them down into smaller pieces makes the pieces manageable even if the whole is not. Spear uses NASA’s Apollo program as the perfect example—Neil Armstrong’s “small step” was literally small, building on Apollo 10’s descent to 47,000 feet, which built on Apollo 9’s orbital rendezvous testing, and so forth.

3. Amplification: Making Problems Visible

The culture must not only allow but also encourage people closest to the work to identify and escalate problems early. A colleague from the naval reactors program spent 35 years “trying to see little problems before they have a chance to become big ones.”

Democracy in Action: Eliminating “People at the Bottom”

In his presentation, Spear passionately argues against hierarchical thinking that relegates frontline workers to “the bottom of the organization.” “We have documents that say we hold these truths to be self-evident—that all are created equal. But then we talk about ‘people at the bottom of the organization.’ Pick one—they’re mutually incompatible ideas.”

In a powerful Navy shipyard example, a machinist named Emory was given permission to refuse work unless conditions were perfect for uninterrupted completion. When problems arose, senior leaders responded immediately, breaking down silos and creating the support systems she needed. The result was giving frontline workers voice to complain when situations were imperfect, with leadership responding in non-bureaucratic fashion by creating connectivity across silos.

The Bottom Line

Organizations that consistently outperform their peers share one characteristic: they’ve engineered management systems that create optimal conditions for collective problem-solving. They’ve moved beyond heroic leadership models to stewardship approaches that develop problem-solving capability everywhere, all the time.

The competitive advantage isn’t just about having smart people—it’s about creating conditions where those smart people can solve hard problems together at high velocity. If you’re not solving problems at high velocity, you’re losing.

In an era where every organization faces unprecedented complexity and change, the winners will be those that wire their organizations for continuous problem-solving excellence. The question isn’t whether your people are capable—it’s whether your management system creates the conditions for their capabilities to flourish.

Luis Majano on BoxLang

Mike's Notes

Pipi 10 will utilise BoxLang.

Luis Majano, founder of Ortus Solutions, makes regular blog updates about BoxLang.

I will update this page with references to posts from Luis about BoxLang since the stable release on May 1, 2025.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > 
  • Home > Handbook > 

Last Updated

11/07/2025

Luis Majano on BoxLang

By: Luis Majano
Ortus Solutions: 11/07/2025

Luis was born in San Salvador, El Salvador, where he lived until 1995 before moving to Miami, Florida, where he completed his Bachelor of Science in Computer Engineering at Florida International University. Luis resides in Malaga, Spain, with his beautiful wife Veronica, daughter Alexia, and son Lucas!

Luis is the CEO of Ortus Solutions, an engineer and creator of BoxLang and many other products.

BoxLang Monaco Editor Released

By Luis Majano on July 03 2025

We're excited to announce the first release of the BoxLang Monaco Editor Support - a comprehensive language support package that brings BoxLang syntax highlighting, IntelliSense, and custom theming to Monaco Editor, the powerful code editor that powers Visual Studio Code. ...

https://www.ortussolutions.com/blog/boxlang-monaco-editor-released

BVM v1.15 Release : Enhanced Security, Insights, and Reliability for BoxLang

By Luis Majano on July 02, 2025

We're thrilled to announce the release of BVM (BoxLang Version Manager) v1.15.0! This release focuses on three critical areas: security, visibility, and reliability. With SHA-256 integrity verification, comprehensive installation statistics, and enhanced system resilience, BVM v1.15.0 continues to make it easier to work with multiple versions of BoxLang. ...

https://www.ortussolutions.com/blog/bvm-v115-release-enhanced-security-insights-and-reliability-for-boxlang

BVM v1.14 Release : Project-Specific Versions and Enhanced Developer Experience

By Luis Majano on June 24, 2025

We're excited to announce the release of BVM (BoxLang Version Manager) v1.14, bringing significant enhancements that make BoxLang development even more seamless and productive. What's crazy is that we have already released 14 minor versions of this amazing little version manager. This release introduces project-specific version management, improved snapshot handling, and powerful self-maintenance features that developers have been requesting. ...

https://www.ortussolutions.com/blog/bvm-v114-release-project-specific-versions-and-enhanced-developer-experience

BoxLang v1.3.0 Released

By Luis Majano on June 23, 2025

We're thrilled to announce the release of BoxLang v1.3.0! This significant update brings exciting new features, substantial performance improvements, and critical bug fixes that will enhance your development workflow and application reliability. ...

https://www.ortussolutions.com/blog/boxlang-v130-released

BX-AI 1.2 Released: Claude 4 Support, New Tooling API, CFML Compatibility & More!

By Luis Majano on June 19, 2025

We’re excited to announce the release of BoxLang AI v1.2, a major update to the BoxLang AI module that powers intelligent applications with a unified AI abstraction layer across even more providers: OpenAI, Claude, Grok, Gemini, and more. This release packs new features for providers, tools, debugging, and customization — making it easier than ever to build multi-runtime, AI-driven BoxLang and CFML applications. ...

https://www.ortussolutions.com/blog/bx-ai-12-released-claude-4-support-new-tooling-api-cfml-compatibility-more

Introducing the BoxLang Version Manager!

By Luis Majano on June 17, 2025

We're excited to announce the release of BVM (BoxLang Version Manager), a powerful new tool that makes managing multiple BoxLang installations effortless across Mac, Linux, and Windows Subsystem for Linux (WSL). Whether you're a BoxLang developer working on multiple projects or testing across different versions, BVM is designed to streamline your workflow. ...

https://www.ortussolutions.com/blog/introducing-the-boxlang-version-manager

Devnexus 2025 : BoxLang - The Future is Dynamic Recording

By Luis Majano on June 13, 2025

The future of development isn't about choosing a single platform or runtime—it's about embracing the dynamic nature of our ever-evolving digital landscape. My presentation "BoxLang - The Future is Dynamic" from DevNexus 2025 is now available on YouTube, and I'm excited to share why BoxLang represents a paradigm shift in how we approach modern software development. ...

https://www.ortussolutions.com/blog/devnexus-2025-boxlang-the-future-is-dynamic-recording

Supercharge Your BoxLang Applications with Maven Integration

By Luis Majano on June 06, 2025

We're excited to announce a game-changing feature for BoxLang developers: Maven Integration! This powerful addition opens the door to the entire Java ecosystem, allowing you to seamlessly incorporate thousands of Java libraries into your BoxLang applications with just a few simple commands. ...

https://www.ortussolutions.com/blog/supercharge-your-boxlang-applications-with-maven-integration

Streamline Your CI/CD: Introducing the Setup BoxLang GitHub Action

By Luis Majano on June 04, 2025

We're excited to announce the release of the Setup BoxLang GitHub Action – a powerful new tool that makes it incredibly easy to integrate BoxLang into your continuous integration and deployment workflows. Whether you're building applications, running tests, or deploying BoxLang projects, this action eliminates the complexity of environment setup and gets you coding faster. ...

https://www.ortussolutions.com/blog/streamline-your-cicd-introducing-the-setup-boxlang-github-action

BoxLang v1.2.0 Released

By Luis Majano on May 29, 2025

We're excited to announce the release of BoxLang 1.2, a significant milestone that demonstrates our commitment to delivering both cutting-edge features and exceptional performance. This release represents how much innovation the entire BoxLang team can accomplish in just 2 weeks of focused development, bringing you powerful new capabilities while dramatically improving the runtime efficiency that makes BoxLang a compelling choice for modern applications. ...

https://www.ortussolutions.com/blog/boxlang-v120-released

BoxLang v1.1.0 Released

By Luis Majano on May 13, 2025

We’re excited to announce the release of BoxLang 1.1.0, packed with powerful new features, critical bug fixes, and performance-focused improvements that make the language even more robust, secure, and developer-friendly. ...

https://www.ortussolutions.com/blog/boxlang-v110-released

BoxLang Stable Released : A Multi-Runtime JVM Dynamic Language

By Luis Majano on May 01, 2025

Finally, the wait is over! After a lot of intrigue and almost 10 months of extremely hard work, we can finally tell you about the most important release of the year for us at Ortus:

BoxLang: Multi-Runtime Dynamic JVM Language

BoxLang is a modern dynamic JVM language that can be deployed on multiple runtimes: operating system (Windows/Mac/nix/Embedded), web server, lambda, iOS, Android, WebAssembly, and more. BoxLang combines many features from different programming languages, including Java, ColdFusion, Python, Ruby, Go, and PHP, to provide developers with a modern and expressive syntax.

BoxLang has been designed to be a highly adaptable and dynamic language to take advantage of all the modern features of the JVM and was designed with several goals in mind, check them out here! ...

https://www.ortussolutions.com/blog/boxlang-stable-released-a-multi-runtime-jvm-dynamic-language

Database Design for Google Calendar: a tutorial

Mike's Notes

The first customer needs a Gregorian calendar module that can import and export with Google Calendar and the rest (Outlook, Yahoo, etc.). Here are some notes and references to help me build this quickly. The data model is mainly complete.

There is also an existing Pipi Engine that deals with space and time.

Below is a table of contents from the Google Calendar online article by Alexey Makhotkin, taken from his book.

Resources

References

  • Database Design Book. By Alexey Makhotkin.

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Minimal Modelling
  • Home > Handbook > 

Last Updated

11/07/2025

Database Design for Google Calendar: a tutorial

By: Alexey Makhotkin
Database Design Book: 20/05/2024

Table of contents

  • Introduction
  • Intended audience
  • Approach of this book
  • Problem description
  • Part 1: Basic all-day events
    • Anchors
    • Attributes of User
    • Attributes of DayEvent
    • Links
    • A peek into the physical model
  • Part 2: Time-based events
    • Time zones
    • Anchors
    • Attributes of Timezone
    • Attributes of TimeEvent
    • Links
    • Similarities between DateEvent and TimeEvent
  • Part 3. Repeated all-day events
    • Attribute #1, cadence
    • Attribute #2, tangled attributes
    • Attribute #3
    • Days of the week: micro-anchors
    • Are we done?
    • Repeat limit: more tangled attributes
  • Part 4. Rendering the calendar page
    • A note on tempo
    • General idea
    • Day slots
    • Exercise: TimeSlots
    • How far ahead do you need to think?
  • Part 5. Rendering the calendar page: time-based events.
  • Part 6. Complete logical model so far
  • Part 7. Creating SQL tables
    • Anchors: choose names for tables
    • Attributes: choose the column name and physical type
    • 1:N Links
    • M:N links
    • Finally: the tables
  • Conclusion
    • What’s next?
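
To give a feel for where Part 7 lands, here is a minimal sketch of two of the anchors above (User and DayEvent) and their 1:N link as SQLite tables, written in Python. The table and column names are my assumptions for illustration only; they are not taken from Makhotkin's book.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Anchor: User
CREATE TABLE users (
    user_id INTEGER PRIMARY KEY,
    email   TEXT NOT NULL UNIQUE
);

-- Anchor: DayEvent (an all-day event), with a 1:N link back to the owning user
CREATE TABLE day_events (
    day_event_id INTEGER PRIMARY KEY,
    user_id      INTEGER NOT NULL REFERENCES users(user_id),
    title        TEXT NOT NULL,
    event_date   TEXT NOT NULL      -- ISO 8601 date, e.g. '2025-07-16'
);
""")

conn.execute("INSERT INTO users (email) VALUES ('mike@example.com')")
conn.execute("INSERT INTO day_events (user_id, title, event_date) "
             "VALUES (1, 'Planned outage', '2025-07-16')")
print(conn.execute("SELECT title, event_date FROM day_events WHERE user_id = 1").fetchall())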

Bootstrapped CPC rule of thumb: ARPU/25

Mike's Notes

Ajabbi is a bootstrapping social enterprise, but the useful measures outlined in this excellent article still apply.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > A Smart Bear
  • Home > Handbook > 

Last Updated

09/07/2025

Bootstrapped CPC rule of thumb: ARPU/25

By: Jason Cohen
A Smart Bear: 01/07/2025

In the first year of business, you have no data for decision-making.

Even after the first hundred customers, half of those were serendipitous one-offs, not representative of repeatable, predictable customer acquisition, and the scale of the data isn’t statistically significant.

One of the fundamental data-driven questions (but you don’t have data) is: What’s the maximum I should bid for CPC (cost-per-click) campaigns like Google AdWords?

The answer for a funded startup is “Bid as much as possible, to get as many customers—and data!—as you can, as quickly as you can, then rapidly iterate from there in the presence of that data.”

That’s a smart use of money: To “pay to find out.” But what about a bootstrapped, profit-driven business? You don’t have that budget, and you’re keen on getting a reasonable return on investment reasonably quickly.

Here’s my way.

(Tune the exact numbers if you disagree with my assumptions!)

LTV = ARPU x 20

ARPU (Average Revenue Per User) is the amount you charge the average customer every month, which is typically a mixture of different quantities of customers at different tiers, special add-ons, etc.

LTV (Life-Time Value) is the total amount of money you expect to collect from a customer over their entire tenure. A simple version [1] is ARPU ✕ [expected months], where [expected months] is the average number of months a customer sticks with you.

[1] The correct version also includes multiplying by Gross Profit Margin, i.e. the cost to serve customers, which for SaaS is tech support, server infrastructure, and payment fees. You should include this for a more accurate calculation; small bootstrapped companies often have very high GPMs, so ignoring it keeps this back-of-the-envelope calculation simpler.

Some customers cancel in one month, some cancel in a year, some in five years, and some never cancel! So it can be difficult to compute LTV accurately for small companies, and impossible to know for young companies (where five years hasn’t elapsed yet to see how many customers stuck it out that long). These are among the reasons that I dislike the LTV metric, but it’s common to use it in this context.

If you do have data, the simplistic calculation is [expected months] = 1/c where c is your monthly cancellation rate.

But since you don’t, in my experience (and in a non-scientific survey of some of the 100 startups currently officed at the fabulously Capital Factory co-working space in Austin), a good pre-data rule of thumb is 20 months.

If you have an average customer lifetime smaller than 20 months (i.e. cancellation rate higher than 5%/mo), that’s a dangerously high cancellation rate for almost any SaaS business, and you need to focus on addressing the business issues before acquiring more unsatisfied customers. Use surveys and one-on-ones to try to understand whether it’s technical failings, lack of features, missed expectations, bad service, doesn’t hit pain points, or what.

A healthy SaaS company will have a higher number of expected months, but at the start you also will have lots of mis-steps with weird early-adopters and non-ICPs where your product is at its worst—least features, least quality, etc—so it’s good to assume a low LTV instead of inflating it to where it might be in future.

CAC = LTV / 5

CAC (Cost to Acquire a Customer) is your average total cost to get a new customer, which includes direct costs (AdWords spend, affiliate payouts, the fees your affiliate system charges to process them) and indirect costs (consultants and your own time). So to compute CAC, take your total costs to acquire new customers and divide by the number of customers you acquired.

In general, of course, CAC needs to be less than LTV; otherwise it costs so much to get the customer that you will never make money. A surprising number of startups have CAC > LTV. Many justify this either by not correctly computing CAC (e.g. ignoring indirect costs) or by saying they’ll “fix that later” by raising prices or finding other channels of revenue. Others justify it by saying they’re doing a “land-grab” for customers, and that just having a customer at all has intrinsic value.

Profit-seeking bootstrapped companies cannot afford those delusions. Also you need something far stronger than CAC = LTV, because you need to pay for other business expenses and still produce a profit. So how big can CAC be before it’s “too big?”

Growing, funded SaaS companies who treat CAC with respect commonly target CAC = LTV / 3.

Back at my second startup, IT WatchDogs, my co-founder Gerry Cullen used to say “A third to build it, a third to get rid of it, and a third to keep,” meaning a third of revenue goes to pay for hardware/inventory/shipping costs of the sale, a third goes to what I’m calling “CAC” here, and a third for the overhead costs, development costs, and profit.

That’s a good model, and I think a bootstrapped company can copy it, but I urge profit-seekers to instead adopt an even more strict model of CAC = LTV / 5. The reason is that at the start you should be able to find a few efficient ways of acquiring customers, even if those get saturated over time.

CAC = ARPU x 4

If you combine the previous two results, you see that the cost to acquire a customer should be no more than four months of revenue.

Another good way to think about it is: “The payback-period for my cost to acquire a customer is four months.” Also, ideally you’re getting the first month of revenue back immediately, so it’s really three months of cash-float.

Companies with large budgets to deploy at scale will often be happy with 12 month payback periods; some very high volume businesses like shared hosting will accept 24 or 36 months! But a bootstrapped company’s cash-flow won’t allow it, even if the math would work in the long run.

Conversion Rate = 1%

Conversion Rate is the percentage of visitors to your website who convert to a paying customer.

This is another step which in practice should be completely data-driven, segmented by customer type and marketing channel, segmented by landing page, A/B tested and iterated, blah blah blah. But since you don’t have data, and you don’t have enough visitors to have real ratios, you have to take a swag at this number.

In that same informal survey I ran, and bolstered by other formal surveys, a huge number of bootstrapped SaaS companies report a 1% conversion rate.

Another way of saying the same thing is “You need 100 visitors to make 1 sale.”

And since you need to incur no more than CAC dollars in the making of that sale, you need to incur no more than CAC/100 dollars in the making of each of those visitors.

And if you’re running a CPC campaign, that means you can pay up to CAC/100 dollars per click.

And since CAC is ARPU x 4, we can substitute and get the end result:

CPC = ARPU / 25

So for example if your average customer generates $50/mo, you can spend $2/click.
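
Putting the chain of rules of thumb together:

LTV = ARPU x 20
CAC = LTV / 5 = ARPU x 4
CPC = CAC / 100 = ARPU / 25

With ARPU = $50/mo, that gives CAC = $200 and CPC = $200 / 100 = $2 per click.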

Indeed, this is a great way to prove one of my main arguments for all bootstrapped companies, which is that you should charge a lot more than you think, in part because it enables you to pay quite a lot per click, which enables a wide number of marketing channels, and out-bidding parsimonious competitors whose paltry LTVs preclude them from competitive marketing spend.

Customized

“But my numbers are different!” Of course, but now you have a formula you can plug them into, to arrive at the answer:

CPC = (ARPU x r) / (5 x c)

Where:

  • c = monthly cancellation rate
  • r = visitor → purchase conversion rate from the paid marketing source in question
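
The same arithmetic as a small calculator, sketched in Python using the formulas above; the defaults encode the rule-of-thumb values (5%/mo cancellation, 1% conversion), and real data should replace them as soon as it exists.

def max_cpc(arpu, cancellation_rate=0.05, conversion_rate=0.01):
    # Maximum affordable cost-per-click, using the article's bootstrapped assumptions.
    expected_months = 1 / cancellation_rate      # average customer lifetime in months
    ltv = arpu * expected_months                 # LTV = ARPU x expected months
    cac = ltv / 5                                # bootstrapped target: CAC = LTV / 5
    return cac * conversion_rate                 # CPC = CAC x (sales per visitor)

# Rule-of-thumb defaults: $50/mo ARPU gives $2/click (ARPU / 25).
print(max_cpc(50))                                                  # 2.0

# Customized: 4%/mo cancellation and a 2% visitor-to-customer conversion rate.
print(max_cpc(50, cancellation_rate=0.04, conversion_rate=0.02))    # 5.0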

Everything, everywhere, all at once: Inside the chaos of Alzheimer’s disease

Mike's Notes

This article provides a clear explanation of what goes wrong in the brain in Alzheimer's disease. It's also a great example of a complex system.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The Transmitter
  • Home > Handbook > 

Last Updated

08/07/2025

Everything, everywhere, all at once: Inside the chaos of Alzheimer’s disease

By: Michael Yassa
The Transmitter: 16/06/2025

Michael A. Yassa is professor of neurobiology and behavior and James L. McGaugh Endowed Chair at the University of California, Irvine. His lab has been developing theoretical frameworks and noninvasive brain-imaging tools for understanding memory mechanisms in the human brain and applying this knowledge to human neurological and neuropsychiatric disease.

To truly understand Alzheimer’s disease, we may need to take a systems approach, in which inflammation, vascular injury, impaired glucose metabolism and other factors interact in complex ways.

For nearly three decades, Alzheimer’s disease has been framed as a story about amyloid: A toxic protein builds up, forms plaques, kills neurons and slowly robs people of their memories and identity. The simplicity of this “amyloid cascade hypothesis” gave us targets, tools and a sense of purpose. It felt like a clean story. Almost too clean.

We spent decades chasing it, developing dozens of animal models and pouring billions into anti-amyloid therapies, most of which failed. The few that made it to market offer only modest benefits, often with serious side effects. Whenever I think about this, I can’t help but picture Will Ferrell’s Buddy the Elf, in the movie “Elf,” confronting the mall Santa: “You sit on a throne of lies.” Not because anyone meant to mislead people (though maybe some did). But because we wanted so badly for the story to be true.

So what happened? This should have worked … right?

I would argue it was never going to work because we have been thinking about Alzheimer’s the wrong way. For decades, we have treated it as a single disease with a single straight line from amyloid to dementia. But what if that’s not how it works? What if Alzheimer’s only looks like one disease because we keep trying to force it into a single narrative? If that’s the case, then the search for a single cause—and a single cure—was always destined to fail.

What if Alzheimer’s only looks like one disease because we keep trying to force it into a single narrative? If that’s the case, then the search for a single cause—and a single cure—was always destined to fail.

Real progress, I believe, requires two major shifts in how we think. First, we have to let go of our obsession with amyloid. Now don’t get me wrong. There’s no question amyloid plays a role. It was the first thing Alois Alzheimer saw under the microscope in 1906. And there’s decent evidence that misfolded amyloid spells trouble for the brain. But betting the house on clearing amyloid has been a costly mistake. In fact, we have long known that one-third of people with amyloid pathology do not show any cognitive symptoms, a disconnect that should have forced a rethink years ago.

To the field’s credit, a shift is underway. We’re now exploring other mechanisms—tau, inflammation, metabolic dysfunction, vascular damage, neuronal hyperexcitability and more. But too often, these alternatives are still treated as side plots in an amyloid-centered story. They get less funding, less attention and fewer drug development efforts. That needs to change. These mechanisms may be far more central to the disease than we once thought. And they may drive it differently in different people.

This brings us to the second shift: We need to stop thinking in straight lines. The brain isn’t exactly a flowchart. It’s a dynamical system—a tangled web of feedback loops, compensations and nonlinear interactions. In such systems, small disruptions can ripple outward in unexpected ways. When one part starts to fail, another compensates. Over time, those compensations can become part of the pathology. In some people with Alzheimer’s disease, amyloid might be the trigger. In others, it might be inflammation, vascular injury, impaired glucose metabolism or runaway neural activity. These factors don’t act in isolation—they interact in complex ways, creating a web of multicausal loops. They are less like a chain of dominoes and more like a knot of tangled threads pulling on one another.

In systems terms, it’s not a cascade. It’s a state space. To understand this space, it’s useful to imagine a map in which every possible state of the brain is a point. In this space, healthy brains tend to move within a basin of attraction, a functional stable state. In Alzheimer’s, the brain may be pushed by interacting pathologies into a different region of state space, a pathological attractor—stable but dysfunctional.

There’s growing experimental support for this view. Functional imaging, for example, has shown that people with Alzheimer’s spend more time in sparsely connected, low-flexibility brain states, and MEG recordings reveal changes in the temporal complexity of network dynamics. Recent work in my lab identified a dominant state, characterized by co-activity of nodes in the limbic network, that is linked to worse cognition and Alzheimer’s pathology. The idea is that once the brain tips into the dysfunctional state, it can get stuck there, even if you remove the original trigger.

The idea is that once the brain tips into the dysfunctional state, it can get stuck there, even if you remove the original trigger.

Researchers have already identified a number of “systems-level” factors that can disrupt network stability and contribute to Alzheimer’s disease, including vascular compromise, in which small vessel disease disrupts blood flow and triggers downstream effects; metabolic dysfunction, such as insulin resistance or glucose hypometabolism; runaway inflammation, such as overactive microglia or cytokine chaos; and overactivity, driven by an imbalance in neuronal excitation or inhibition. These factors may represent different systems-level routes to the same clinical outcome. Each person’s condition likely involves a different mix or “weighting” of underlying mechanisms. For someone with a history of diabetes, metabolic dysfunction might be the dominant factor. For someone with high blood pressure, the vascular component could play a bigger role. Ultimately, pinpointing this weighting—the primary mechanism driving the system’s dysfunction, or the mechanistic phenotype—could help match people with the most appropriate treatment.

This framing also changes how we think about treatment. In a system governed by feedback loops and nonlinear dynamics, removing a single trigger may not be sufficient to get the system “unstuck.” That may explain why anti-amyloid drugs haven’t made a major clinical impact: By the time symptoms show up, the system has already reorganized itself. Instead, we may need interventions that restore network stability—rebalancing excitation and inhibition, reducing inflammation or improving metabolic resilience. Noninvasive brain stimulation is one such approach, potentially nudging the system toward a more functional dynamic without needing to target a molecular mechanism. The goal isn’t to fix a part. It’s to shift the conditions that shape how the whole system behaves.

So where is amyloid in all this? Well, amyloid is always present, because our diagnostic criteria make it so. Think of it like background noise—it’s there, but it may not be what’s pushing the system off-key. Unlike the factors described above, amyloid doesn’t consistently drive network-level disruption. It reflects cellular dysfunction, such as misprocessing of amyloid precursor protein or altered lipid metabolism, but the downstream systems-level effects aren’t nearly as consistent or potent as those seen with, say, inflammation or synapse loss. This doesn’t mean addressing amyloid buildup or clearance has no clinical value. It just means it’s likely a small piece in a much larger puzzle. Focusing on it is like trying to fix a whole cacophonous orchestra by tuning but one violin.

Of course, there’s no perfect framework yet. To build it, we’ll need better tools. That includes better ways to capture brain dynamics in vivo, not just static pathology. We also need animal models that go beyond single-gene variants—instead, we need models that combine multiple hits, such as inflammation plus hyperexcitability. And we need ways to track these factors in humans, using multimodal imaging, physiological sensors and inflammatory biomarkers.

Getting there will take work. Paradigm shifts happen slowly, painfully, often after the old model has failed enough times to lose its grip. That’s where we are now. The dominant model isn’t working anymore. What comes next isn’t fully formed—but it’s coming into view. Mechanistic phenotyping and dynamical systems thinking may offer a path forward. It won’t be neat or linear. But it may finally meet the disease on its own terms.