On a Sandy Beach, reformat is done - what's next

Mike's Notes

Over the last six weeks, all 381 current posts and 12 pages on the blog have been reformatted. The job took careful manual editing. Grammarly fixed many grammar and spelling errors, broken links were repaired, and missing photos were restored.

More changes are planned.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

21/05/2025

On a Sandy Beach, reformat is done - what's next

By: Mike Peters
On a Sandy Beach: 21/05/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

The first stage of fixing this blog is complete.

What's Next

A blogging module needs to be built and added to the Pipi 9 CMS. Blog posts could then be created from an underlying database, and that data model could be extended over time to make it more useful.

Data Model version 1 (current)

  • Mike's Note
  • Resources
  • References
  • Repository links
  • Date Updated
  • Title
  • Page Url
  • Author
  • Source publication
  • Date Created
  • Author description
  • Body of the article
  • Tags

Data Model version 2 (next)

  • Title
  • Page Url
  • Site-wide Navigation
  • Site-wide Breadcrumb
  • Mike's Note
  • Author
  • Source publication
  • Date Created
  • Author description
  • Body of the article
  • References
  • Further Reading (replacing References)
  • Articles
  • See Also (cross links to Ajabbi.com website pages, replacing Repository URL)
  • External Links (replacing Resources)
  • Keywords (replacing Tags)
  • Sharing
  • Updated
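
To make the target concrete, here is a minimal sketch of what a version 2 post record might look like as a CFML struct, since Pipi is written in CFML. The field names and values are illustrative assumptions only, not the actual Pipi 9 schema.

<cfscript>
// Illustrative sketch only; field names are assumptions, not the real Pipi 9 schema.
post = {
    title             = "Example post title",
    pageUrl           = "https://www.blog.ajabbi.com/2025/05/example-post.html",
    mikesNote         = "Short working note introducing the post.",
    author            = "Mike Peters",
    sourcePublication = "On a Sandy Beach",
    dateCreated       = createDate( 2025, 5, 21 ),
    authorDescription = "Mike is the inventor and architect of Pipi and the founder of Ajabbi.",
    body              = "Body of the article ...",
    furtherReading    = [],
    seeAlso           = [],   // cross links to Ajabbi.com website pages
    externalLinks     = [],
    keywords          = [ "pipi", "cms" ],
    updated           = now()
};
</cfscript>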

Website consistency

The second format is similar to that used on the rest of Ajabbi.com and would enable greater integration. It would also require On a Sandy Beach to be directly published and hosted by Pipi CMS.

Blog URLs

Keep the current URL structure for each page and post. For example:

  • https://www.blog.ajabbi.com/2025/05/boxlang-vs-world-by-kai-koenig.html
  • https://www.blog.ajabbi.com/p/pipi.html

Ajabbi Research integration

This blog is Mike's working notes about building Pipi. The notes are written for Mike to remember what he has done and where to find references, but he is happy to share them with anyone interested. Many notes raise issues to solve or refer to external research results and differing opinions. The planned Ajabbi Research could use all this written material to provide a starting point. These are some problems to work on, not the answers.

Other voices

Making these changes will enable other authors at Ajabbi Research to blog here, encouraging the rapid, free, robust, open exchange and experimental testing of ideas. Any occurrence of Cancel Culture will be ruthlessly exterminated.

Data Model version 3 (later)

  • Title
  • Page Url
  • Site-wide Navigation
  • Site-wide Breadcrumb
  • Notes (by Mike and others at Ajabbi Research)
  • Author
  • Source publication
  • Date Created
  • Author description
  • Body of the article
  • Experimental Results
  • References
  • Further Reading (replacing References)
  • Articles
  • See Also (cross links to Ajabbi.com website pages, replacing Repository URL)
  • External Links (replacing Resources)
  • Keywords (replacing Tags)
  • Sharing
  • Updated

The Oracle’s Curse: Mathematical Models in Modern Society

Mike's Notes

This article asks some hard questions about the use of mathematical models.

Resources

  • https://www.fairobserver.com/world-news/the-oracles-curse-mathematical-models-in-modern-society/
  • https://www.amazon.com/Escape-Model-Land-Mathematical-Models/dp/1541600983
  • https://sites.ualberta.ca/~dwiens/stat575/misc%20resources/regression%20to%20the%20mean.pdf
  • https://www.britannica.com/science/Brownian-motion
  • https://www.investopedia.com/terms/b/blackscholes.asp
  • https://rize.io/blog/quantified-self
  • https://royalsocietypublishing.org/doi/10.1098/rsos.230803
  • https://today.usc.edu/fukushima-disaster-was-preventable-new-study-finds/
  • https://pmc.ncbi.nlm.nih.gov/articles/PMC7351545/
  • https://www.pnas.org/doi/10.1073/pnas.0912953109
  • https://www.journalofaccountancy.com/issues/2002/apr/theriseandfallofenron/
  • https://www.britannica.com/topic/black-swan-event
  • https://thedecisionlab.com/biases/confirmation-bias
  • https://thedecisionlab.com/biases/availability-heuristic
  • https://intellectualtakeout.org/2015/07/wendell-berrys-unsettling-description-of-modern-life/
  • https://www.britannica.com/topic/negative-externality

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

19/05/2025

The Oracle’s Curse: Mathematical Models in Modern Society

By: Usama Malik
Fair Observer: 18/05/2025

Mathematical models help us understand complex systems. However, our overreliance on them — treating them as objective oracles rather than imperfect abstractions — has created disaster, ethical blind spots and social harm. We must rethink our relationship with models by embracing uncertainty, incorporating diverse human values and grounding decision-making in ethical reflection.

In an age dominated by data and algorithms, mathematical models have become the oracles of our time. From predicting weather patterns to forecasting economic trends, these complex mathematical constructs have infiltrated nearly every aspect of our lives. However, as we increasingly rely on these models to shape policy, guide business decisions and inform public discourse, we must confront a crucial question: Have we placed too much faith in these modern-day prophets?

The rise of Model Land

The journey into what statistician Erica Thompson calls “Model Land” began innocuously enough. The late 19th and early 20th centuries saw the emergence of statistical methods that promised to bring order to chaos, to discern patterns in the noise of reality. Polymath Francis Galton’s concept of regression to the mean and Albert Einstein’s work on Brownian motion laid the groundwork for a new way of understanding the world through mathematics.

As computational power grew exponentially in the latter half of the 20th century, so did the complexity and pervasiveness of mathematical models. Weather prediction, economic forecasting and financial risk assessment became increasingly reliant on these sophisticated tools. By the dawn of the 21st century, we found ourselves deep in the heart of Model Land, a place where reality is simplified, quantified and projected onto screens and balance sheets.

The infiltration of models

The allure of mathematical models lies in their promise of objectivity and precision. In politics, economic models began to shape policy decisions, offering a veneer of scientific rigor to inherently complex social issues. In science, models became central to disciplines ranging from climate studies to particle physics, often determining the direction of research and the allocation of resources.

In the realm of finance, models like the Black–Scholes equation transformed the way we understand and trade risk. These models didn’t just describe the financial world; they began to shape it, creating a feedback loop where the map increasingly became the territory.

Culture and society, too, felt the influence of this modeling revolution. Social media algorithms, built on complex models of human behavior and interaction, began to shape our digital landscapes and, by extension, our perceptions of reality. The “quantified self” movement encouraged individuals to model their own lives, reducing the richness of human experience to a series of data points and trends.

The peril of models as oracles

As our reliance on models grew, so did our tendency to treat them as oracles rather than tools. This shift from model-assisted to model-driven decision-making brought with it significant perils, many of which we are only now beginning to fully appreciate.

Thompson’s concept of Model Land provides a powerful framework for understanding these dangers. In Model Land, the complexities and uncertainties of the real world are stripped away, replaced by a set of equations and assumptions that, while internally consistent, may bear little resemblance to reality.

The danger lies not in the use of models themselves, but in our forgetting that we are operating in Model Land rather than the real world. As Thompson argues, we have become too comfortable in this abstracted space, often treating model outputs as infallible truths rather than simplified approximations or scenarios.

The great model failures

History is replete with examples of model failures, each serving as a stark reminder of the dangers of overreliance on these mathematical constructs. Below are a few examples of “models as oracles,” used across various domains, leading to catastrophic outcomes:

  • The 2008 Financial Crisis: Complex financial models failed to account for the systemic risks that led to the near collapse of the global financial system.
  • Covid-19 Pandemic: Early epidemiological models produced wildly varying predictions, leading to confusion and mistrust in scientific modeling.
  • Fukushima Daiichi Nuclear Disaster (2011): Risk assessment models underestimated the potential impact of a tsunami on the nuclear power plant.
  • Boeing 737 MAX Crashes (2018–2019): Flight control system models failed to accurately predict how pilots would respond to malfunctions, contributing to two fatal crashes.
  • Robert Moses’s Urban Planning in New York (1930s–1960s): Traffic models used to justify massive highway projects failed to account for induced demand, leading to increased congestion and urban sprawl.
  • Green Revolution Unintended Consequences: Models promoting the widespread adoption of high-yield crops and intensive farming methods failed to predict long-term soil degradation and loss of biodiversity.
  • Enron Scandal (2001): Complex financial models were used to hide losses and inflate profits, leading to one of the largest corporate bankruptcies in history.

These failures underscore what statistician Nassim Nicholas Taleb calls the “Black Swan” problem — the tendency of models to underestimate the impact of rare, high-consequence events. They also highlight the dangers of overreliance on models that often fail to capture the full complexity of the systems they attempt to represent.

Moreover, these examples demonstrate that model failures are not isolated incidents but a pervasive issue across many fields and industries. They serve as a powerful reminder of the need for a more nuanced, critical approach to modeling that acknowledges the limitations of our ability to predict and control complex systems.

The human factor: behavioral economics and model bias

The story of models is incomplete without considering the human factor. Behavioral economics, pioneered by researchers like Daniel Kahneman and Amos Tversky, reveals the myriad ways in which human cognition deviates from the rational ideal assumed by many models.

Our cognitive biases — from confirmation bias to the availability heuristic — don’t just affect how we interpret model outputs; they shape the very construction of the models themselves. Modelers, being human, bring their own biases and assumptions to their work, often unconsciously.

Moreover, once a model gains acceptance, it can create a self-reinforcing cycle of belief. The model’s predictions shape decisions and behaviors, which in turn generate data that seems to confirm the model’s validity. This feedback loop can lead to a dangerous form of groupthink, where alternative viewpoints are dismissed and the model’s authority goes unchallenged.

The amoral nature of model land

While the technical limitations of models have been widely discussed, there’s a more profound and often overlooked issue at the heart of our reliance on mathematical modeling: the fundamentally amoral nature of Model Land. This abstracted realm, where complex realities are reduced to equations and data points, often fails to capture — or worse, completely ignores — the moral and existential dimensions of the problems it attempts to solve.

The absence of human values

In Model Land, concepts like spirituality, sacredness, love, connection, creativity, well-being and safety — some of the cornerstones of the human experience — are either entirely absent or treated in reductionist, often sophomoric ways. Climate change models, for instance, might accurately predict rising temperatures and sea levels, but they struggle to capture the profound sense of loss and displacement felt by communities forced to relocate. Economic models may project GDP growth, but they often fail to account for the emotional and psychological toll of economic inequality.

This disconnect between Model Land and lived human experience has far-reaching consequences. As models increasingly drive decision-making processes, we risk creating policies and systems that optimize for abstract metrics while neglecting the very human values they’re meant to serve.

The tyranny of quantification

The philosopher-poet Wendell Berry once wrote, “The disease of the modern character is specialization.” In the context of modeling, this disease manifests as an obsession with quantification. Anything that can’t be easily measured or quantified — like the intrinsic value of an ecosystem or the cultural significance of a historical site — often gets left out of the model entirely.

This tyranny of quantification leads to what economists call “negative externalities” — costs or consequences that aren’t captured by the model but are borne by society or the environment. The classic example is pollution: A factory’s production model might show increased efficiency and profit but fail to account for the long-term health impacts on the surrounding community or the degradation of local ecosystems.

Moral abdication in the face of complexity

Perhaps most troublingly, the complexity and perceived objectivity of models can lead to a kind of moral abdication. Decision-makers may defer to model outputs rather than grappling with the difficult ethical questions that underlie many of our most pressing challenges. The use of predictive policing algorithms provides a stark example: Models predicting crime hotspots can inadvertently shift the discourse from one of addressing systemic social issues and racial bias (“How can we create a more just society?”) to one of resource allocation and efficiency (“Where should we deploy police to maximize arrests?”). This shift is subtle but profound. It moves us from the realm of values, ethics and collective responsibility to one of amoral calculation. In doing so, it can paralyze meaningful reform, as we endlessly debate the accuracy of predictive models rather than confronting the moral urgency of addressing the root causes of crime and social inequality.

The human cost of model-driven decisions

The consequences of this model-centric, amoral approach to decision-making are not merely theoretical. They play out in the lived experiences of individuals and communities around the world:

  • Urban Development: City planning models that prioritize efficiency and economic growth may lead to gentrification and the displacement of long-standing communities, ignoring the human cost of lost social networks and cultural heritage.
  • Healthcare: Models used to allocate medical resources may optimize for overall population health but fail to account for individual suffering or the moral imperative to care for the most vulnerable.
  • Environmental Policy: Cost-benefit analyses of environmental regulations often struggle to quantify the true value of biodiversity, clean air or the mental health benefits of access to nature.
  • Education: Performance models in education may drive policies that increase test scores but neglect the development of creativity, critical thinking and emotional intelligence.

In each of these illustrative cases, the limitations of Model Land collide with the complex, value-laden realities of human experience, often with devastating consequences for individuals and communities.

Towards a new relationship with models

Given these challenges, how can we forge a healthier relationship with mathematical models? Thompson and other critics offer several key recommendations:

  1. Transparency and interpretability: Models should be as transparent as possible, with their assumptions, limitations and potential biases clearly communicated.
  2. Embracing uncertainty: We must learn to be comfortable with uncertainty, treating model outputs as possibilities rather than prophecies.
  3. Holistic metrics: Expand our definition of “optimization” to include metrics of well-being, social cohesion and ecological health alongside traditional economic measures.
  4. Diverse perspectives: The development and interpretation of models should involve diverse voices, including those from outside the traditional modeling community, through interdisciplinary and participatory modeling that ensures local knowledge, stories, values and concerns are represented.
  5. Qualitative wisdom: While quantitative models are powerful tools, they should be balanced with qualitative expert judgment and real-world experience.
  6. Ethical modeling: Modelers and those who use models must consider the ethical implications of their work, particularly when models influence decisions that affect human lives.
  7. Models as tools, not oracles: Perhaps most importantly, we must shift our view of models from oracles to tools — aids to thinking rather than substitutes for thought.

As we navigate the complexities of the 21st century, mathematical models will undoubtedly remain powerful tools for understanding and shaping our world. However, we must resist the temptation to retreat entirely into the amoral abstraction of Model Land. Instead, we must strive to create bridges between our models and the rich, morally complex tapestry of human experience.

By recognizing the limitations of our models, embracing the full spectrum of human values and cultivating a more holistic approach to decision-making, we can harness the power of modeling while remaining grounded in the realities and responsibilities of our shared human condition. In doing so, we may find that our most powerful tool is not the model itself, but our capacity for moral reasoning, empathy and collective wisdom in the face of uncertainty.

As we “escape from Model Land,” in Thompson’s words, we can redefine our relationship with mathematical models, viewing them not as infallible prophets, but as powerful tools in the ongoing human endeavor to understand and shape our world. This new perspective, grounded in humility, ethics and a deep appreciation for the complexity of human experience, may be our best guide as we face the unprecedented challenges of our time.

[Lee Thompson-Kolar edited this piece.]

Are we ready? Understanding just how big solar flares can get

Mike's Notes

I'm gathering information about massive solar flares and their risk to electrical systems, including data centres.

  • What are they?
  • How often do they happen?
  • What risk do they pose?
  • How to build robust resilience into a data centre
  • Is a Faraday cage the answer?

The article below is copied from the excellent Knowable Magazine. A detailed paper about the Carrington Event is included in the resources.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Knowable Magazine
  • Home > Handbook > 

Last Updated

19/05/2025

Are we ready? Understanding just how big solar flares can get

By: Christopher Crockett
Knowable Magazine: 17/09/2021

Christopher Crockett is a staff researcher for Knowable and a freelance science writer living in Arlington, Virginia. He is thankful for the sun but wouldn’t want to see it when it’s angry.

On May 1, 2019, the star next door erupted.

In a matter of seconds, Proxima Centauri, the nearest star to our sun, got thousands of times brighter than usual — up to 14,000 times brighter in the ultraviolet range of the spectrum. The radiation burst was strong enough to split any water molecules that might exist on the temperate, Earth-sized planet orbiting that star; repeated blasts of that magnitude might have stripped the planet of any atmosphere.

It would be bad news if the Earth’s sun ever got so angry.

But the sun does have its moments — most famously, in the predawn hours of September 2, 1859. At that time, a brilliant aurora lit up the planet, appearing as far south as Havana. Folks in Missouri could read by its light, while miners sleeping outdoors in the Rocky Mountains woke up and, thinking it was dawn, started making breakfast. “The whole of the northern hemisphere was as light as though the sun had set an hour before,” the Times of London reported a few days later.

Meanwhile, telegraph networks went haywire. Sparks flew from equipment — some of which caught on fire — and operators in Boston and Portland, Maine, yanked telegraph cables from batteries but kept transmitting, powered by the electrical energy surging through the Earth.

The events of that Friday evoked biblical descriptions. “The hands of angels shifted the glorious scenery of the heavens,” reported the Cincinnati Daily Commercial. The actual impetus was a bit more prosaic: The skies had been set ablaze by an enormous blob of electrically charged gas, shot out from the sun following a flash of light known as a solar flare.

Space weather encapsulates the prevailing conditions in the solar system caused by the solar wind and the sun’s far-reaching magnetic field. Sudden changes on the sun, such as flares and eruptions of material, are like weather fronts, bringing with them magnetic “storms” that can be felt on the planets. On Earth, this can cause stunning auroras, but it can also create havoc with electronics. The flash of light from a flare takes about 8 minutes to reach Earth; solar material expelled from the sun in a coronal mass ejection (CME) may take hours to days to travel the distance. Magnetic storms may be brief or last for many days.

Such a blob — a tangle of plasma and magnetic fields — is known as a coronal mass ejection. Upon arrival at Earth, such an ejection can trigger the most ferocious of geomagnetic storms. The 1859 storm, named the Carrington Event for the scientist who witnessed the flare that preceded it, has long been upheld as the most powerful wallop that the sun has ever delivered.

But in recent years, research has indicated that the Carrington Event was just a taste of what the sun can throw at us. Tree rings and ice cores encode echoes of dramatically stronger solar storms in the distant past. And other stars, such as Proxima Centauri, show that even the most energetic documented solar outbursts pale in comparison with what is possible.

Nevertheless, the Carrington Event offers important clues to what the sun might have in store for Earth in the future, solar physicist Hugh Hudson writes in the 2021 Annual Review of Astronomy and Astrophysics. “Danger lurks for humanity’s technological assets, especially those in space,” writes Hudson, of the University of Glasgow. In the wake of a Carrington-like event today, entire power grids could shut down and GPS satellites could be knocked offline.

Understanding just how severe solar storms can be provides insights into what the universe may sling our way — and maybe how to foretell the next one so that we’re better prepared when it happens.

Anatomy of a flare

Roughly 18 hours before the 1859 event brightened Earth’s skies, an English astronomer noticed something strange on the surface of the sun.

While working in his observatory, Richard Carrington saw two brilliant points of light emerge from among a clutch of dark sunspots and vanish within five minutes. Another English astronomer, Richard Hodgson, saw the same thing, noting that it was as if the brilliant star Vega had appeared on the sun. At the same time, compass-like needles at England’s Kew Observatory twitched, a hint of the magnetic storm about to ensue.

Before then, no one knew about solar flares — mostly because no one was tracking sunspots every clear day the way Carrington was. Decades would pass before astronomers and physicists could unravel the physics of solar flares and their impact on Earth.

In 1859, English astronomer Richard Carrington was making this sketch of sunspots (left), when he saw two beads of light emerge from the large cluster of spots near the top. Carrington drew the first appearance of the flare as two bean-shaped regions nestled in among the spots (labeled A and B in close-up at right). Five minutes later, the two white spots had drifted to the right and faded considerably (marked C and D).

CREDIT: S. PROSSER, OXFORD UNIVERSITY PRESS 2018 (LEFT) / RICHARD CARRINGTON, PUBLIC DOMAIN (RIGHT)

A solar flare is an eruption on the sun, a sudden flash of light — usually near a sunspot — that can release as much energy as roughly 10 billion 1-megaton nuclear bombs. The trigger is a sudden, localized release of pent-up magnetic energy that blasts out radiation across the entire electromagnetic spectrum, from radio waves to gamma rays.

Many solar flares, though not all, are accompanied by a coronal mass ejection, a massive chunk of the sun’s hot gas blown into space along with a tangle of magnetic fields. Billions of tons of sun stuff can billow out into the solar system, crossing the 150 million kilometers to Earth’s orbit in anywhere from about 14 hours to a few days. 

Most solar eruptions miss our planet by a wide margin. But occasionally, one gets aimed right at Earth. And that’s when things can get interesting.

About eight minutes after a solar flare, its light reaches Earth in a flash of visible light. That’s also when a spike in ultraviolet light and X-rays sprays the upper atmosphere, causing a slight magnetic disturbance at the surface. That was the twitch the magnetic instruments at the Kew sensed in 1859.

The coronal mass ejection can trigger a geomagnetic storm when it encounters the magnetic field that envelops Earth. The disturbance to the magnetic field induces electrical currents to course through conductors, including wires and even the planet itself. At the same time, high-speed charged particles spewed by the sun crash into atoms in the upper atmosphere, lighting up the aurora.

On September 6, 2017, the sun emitted a powerful X-class solar flare — a designation reserved for the most intense flares. Seen here in ultraviolet light captured by NASA’s orbiting Solar Dynamics Observatory, the flare was one of the strongest seen in years and came amid a spate of solar eruptions that month. The glowing threads are scorching filaments of plasma ensnared by magnetic fields arcing over the sun’s surface.

CREDIT: NASA / GSFC / SDO

The 1859 flare has long been, and remains, a standout in its energy and effects on Earth. Comparably powerful solar eruptions are often referred to as “Carrington events.” But it does not stand alone.

“It’s oftentimes described as the most intense storm ever recorded,” says Jeffrey Love, a geophysicist at the US Geological Survey in Denver. “That’s possibly not exactly true, but it certainly is one of the two most intense storms.” Or three or four.

In May 1921, the sun dealt our planet a geomagnetic storm on par with the Carrington Event. As in 1859, a brilliant aurora appeared well beyond the polar regions. Telegraph and telephone systems broke down, with some sparking destructive fires.

And just 13 years after Carrington spied his eponymous flare, another solar storm came along that by some measures may have topped it. “It looks now, based on aurora and sparse magnetometer measurements, that an event in 1872 was probably larger than the Carrington Event,” says Ed Cliver, a solar physicist retired from the US Air Force.

These storms show that the Carrington Event wasn’t a “black swan,” Hudson says. If anything, the sun has been holding back in the modern era. Evidence from the more distant past points to a few solar storms that make the Carrington Event seem almost puny by comparison.

Forgotten flares

Trees have long memories. Each year of growth chronicles tidbits about environmental conditions at the time in concentric annual rings. From those rings researchers can reconstruct scenes from Earth’s past.

Some cedar trees in Japan recall a tsunami of atomic particles hurled from the sun around the year 775. Those trees recorded a significant uptick in carbon-14, a radioactive variant of carbon that trees absorb from the atmosphere. Carbon-14 emerges from run-ins between atmospheric nitrogen and cosmic rays — high-speed particles from space that pummel our planet daily. Some solar flares shower Earth with an excess of cosmic rays, which ramps up production of carbon-14. The change in carbon-14 levels recorded in 775 was about 20 times larger than the normal ebb and flow from the sun, researchers reported in 2012.

“The clear suggestion there was that super events could happen, because this was a factor of 10 — if it was a solar flare — a factor of 10 or 20 or more greater than the Carrington Event,” Hudson says.

In the early hours of March 1, 2011, a ripple in the solar wind whacked Earth’s magnetic field and triggered a minor geomagnetic storm, causing the ethereal aurora seen here over the Poker Flat Research Range in Alaska.

CREDIT: NASA / GSFC / JAMES SPANN

A carbon-14 boost in tree rings showed signs of another sizable solar event in 994. Ice cores from Antarctica showed a corresponding increase, in both 994 and 775, of beryllium-10, another product of cosmic rays — adding more certainty to the tree ring findings.

Looking farther back in time, a study of ice cores suggests a third similar event around 660 BCE. And in August (in a paper still undergoing peer review), researchers reported two more carbon-14 spikes in tree rings from around 7176 BCE and 5259 BCE, possibly on par with the 775 event.

It’s hard to directly compare these past storms with the Carrington Event, says Ilya Usoskin, a space physicist at the University of Oulu in Finland and a coauthor of the August study. The 1859 flare did not produce a particle downpour on Earth, so there are no carbon-14 counts to compare. But the 775 event appears to be one of the strongest solar particle storms recorded in the last 12,000 years, Usoskin says.

There is a catch, Hudson notes. Tree rings are laid down annually, so a few smaller flares within the span of several months might appear as one big event in the tree ring record.

But even then, any one of these smaller flares may still have been impressive. “Every one of those events would be at least on the order of three times as big as the Carrington Event in terms of its energy,” Cliver says.

That, however, is still modest compared with some other stars in our galaxy.

Super flares

If life does exist on the planet orbiting Proxima Centauri, it probably has a rough go of it.

“You really are looking at having something like a Carrington Event happening daily,” says Meredith MacGregor, an astrophysicist at the University of Colorado Boulder. Even stronger “super flares,” like the one she and colleagues spotted in 2019, may go off roughly every other day. Her team spotted that flare, possibly 100 times as powerful as the Carrington Event, after watching the star next door for just 40 hours.

With a near-constant barrage of flares, any atmosphere clinging to the rocky planet snuggled up close to the star would never have time to recover. “Yes, a Carrington Event [on Earth] would fry some electronics and would ruin GPS signals,” MacGregor says, “but it’s not going to destroy the habitability of our planet.”

The star Proxima Centauri and its neighboring duo of Alpha Centauri A and B are the closest stars to the sun, lying a mere 4.2 light-years away. Proxima, the nearest of the trio, is a dim red orb with frequent, powerful flares that buffet the Earth-mass planet that orbits close to it. 

CREDIT: DIGITIZED SKY SURVEY 2. ACKNOWLEDGEMENT: DAVIDE DE MARTIN / MAHDI ZAMANI

To be clear, Proxima Centauri is not like the sun. It’s an M dwarf, a diminutive orb that glows red. And these tiny stars are famous for their oversized flares. But some sunlike stars can send up super flares as well.

This realization has come from telescopes in space designed to look for planets around other stars. NASA’s now-defunct Kepler telescope did this by looking for subtle dips in starlight as planets crossed in front of their suns.

Over four years, Kepler recorded 26 super flares — up to about 100 times as energetic as the Carrington Event — on 15 sunlike stars, researchers reported in January. NASA’s ongoing TESS mission, another space-based telescope hunting for exoplanets, found a similar frequency of superflares on sunlike stars in its first year of operation.

The Kepler data imply that sunlike stars experience the most powerful of these flares roughly once every 6,000 years. Our sun’s most powerful eruption in that time span is an order of magnitude weaker — but could a super flare be in our future?

“I don’t think any theory has sufficient predictive capability to mean anything,” Hudson says. “The leading theory basically says that the bigger the sunspot, the greater the flare.” Sunspots mark where the sun’s magnetic field punches through its surface, preventing hot gas from bubbling up from below. The spot looks dark because it’s cooler than everything around it.

And that is one difference between the sun and its eruptive neighbors. Super flares seem to happen on stars with cool, dark spots far larger than ever appear on the sun. “Based on known spot areas, there would therefore be a limit,” Hudson says.

The intricacies of any star’s magnetic machinations — spots, flares, etc. — are still poorly understood, so tying all these observations into one cohesive story will take time. But the quest to understand all this might improve predictions about what to expect from the sun in the future.

Flares that are powerful enough to disrupt our power grid probably occur, on average, a few times a century, Love says. “Looking at 1859 kind of helps put it in perspective, because what’s happened in the space-age era, since 1957, has been more modest.” The sun hasn’t aimed a Carrington-like flare at us in quite a while. A repeat of 1859 in the 21st century could be disastrous.

Humanity is far more technologically dependent than it was in 1859. A Carrington-like event today could wreak havoc on power grids, satellites and wireless communication. In 1972, a solar flare knocked out long-distance telephone lines in Illinois, for example. In 1989, a flare blacked out most of Quebec province, cutting power to roughly 6 million people for up to nine hours. In 2005, a solar storm disrupted GPS satellites for 10 minutes.

The best prevention is prediction. Knowing that a coronal mass ejection is on its way could give operators time to safely reconfigure or shut down equipment to prevent it from being destroyed.

Building in extra resiliency could help as well. For the power grid, that could include adding in redundancy or devices that can drain off excess charge. Federal agencies could have a stock of mobile power transformers standing by, ready to deploy to areas where existing transformers — which have been known to melt in previous solar storms — have been knocked out. In space, satellites could be put into a safe mode while they wait out the storm.

The Carrington Event was not a one-off. It was just a sample of what the sun can do. If research into past solar flares has taught us anything, it’s that humanity shouldn’t be wondering if a similar solar storm could happen again. All we can wonder is when.

A stunning visualisation explores the intricate circulatory system of our oceans

Mike's Notes

This stunning video from NASA's Goddard Space Flight Centre is an excellent example of a complex system. It has everything.

  • Fluid mechanics
  • Emergence

I discovered the video in a recent edition of Aeon Weekly.

It also shows how this circulation system can change and has changed.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Aeon Weekly
  • Home > Handbook > 

Last Updated

18/05/2025

A stunning visualisation explores the intricate circulatory system of our oceans

By: Kathleen Gaeta Greer
Aeon: 15/05/2025

This video from NASA’s Goddard Space Flight Center provides an unprecedented look at the intricate, interconnected flow of ocean currents around the world. The visualisation was created using a NASA model built from global data gathered from a wide range of inputs, including buoys and spacecraft. The resulting imagery illustrates how factors including planetary physics, heat and salinity propel the ceaseless oceanic movement of this global ‘conveyor belt’. As the video zooms in on some of the most active and interesting currents around the globe, the NASA oceanographer Josh Willis details how this aquatic circulatory system affects human life, from fisheries to regional climates.

  • Video by NASA’s Goddard Space Flight Center
  • Producer: Kathleen Gaeta Greer

When it Comes to AI Policy, Congress Shouldn’t Cut States off at the Knees

Mike's Notes

I agree with the content of this essay by Gary Marcus. I believe:

  • LLMs are an overhyped and speculative craze
  • They hallucinate
  • They are trained on the content of the internet, including copyrighted material and porn
  • Their design is fundamentally flawed
  • They can be useful for limited tasks, including text translation and rough drafts
  • Pipi 9 does not use AI based on LLMs; it uses AI that mimics biological processes

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

17/05/2025

When it Comes to AI Policy, Congress Shouldn’t Cut States off at the Knees

By: Gary Marcus
Marcus on AI: 14/05/2025

This essay is coauthored with state legislators from across the United States, as listed below.

Artificial intelligence holds immense promise—from accelerating disease detection to streamlining services—but it also presents serious risks, including deepfake deception, misinformation, job displacement, exploitation of vulnerable workers and consumers, and threats to critical infrastructure. As AI rapidly transforms our economy, workplaces, and civic life, the American public is calling for meaningful oversight. According to the Artificial Intelligence Policy Institute, 82% of voters support the creation of a federal agency to regulate AI. A Pew Research Center survey found that 52% of Americans are more concerned than excited about AI’s potential, and 67% doubt that government oversight will be sufficient or timely.

Public skepticism crosses party lines and reflects real anxiety: voters worry about data misuse, algorithmic bias, surveillance, and impersonation, and even catastrophic risks. Pope Francis has named AI as one of the defining challenges of our time, warning of its ethical consequences and impacts on ordinary people and calling for urgent action.

Yet instead of answering this call with guardrails and public protections, Congress, which has done almost nothing to address these concerns, is considering a major step backwards, a tool designed to prevent States from taking matters into their own hands: a sweeping last-minute preemption provision tucked into a federal budget bill that would ban all state regulation on AI for the next decade.

The provision, which is likely at odds with the 10th Amendment, demands that “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” The measure would prohibit any state from regulating AI for the next ten years in any way—even in the absence of any federal standards.

This would be deeply problematic under any circumstance, but it’s especially dangerous in the context of a rapidly evolving technology already reshaping healthcare, education, civil rights, and employment. If enacted, the statute would preempt states from acting — even if AI systems cause measurable harm, such as through discriminatory lending, unsafe autonomous vehicles, or invasive workplace surveillance. For example, twenty states have passed laws regulating the use of deepfakes in election campaigns, and Colorado passed a law to ensure transparency and accountability when AI is used in crucial decisions affecting consumers and employees. The proposed federal law would automatically block the application of those state laws, without offering any alternative. The proposed provision would also preempt laws holding AI companies liable for any catastrophic damages that they contributed to, as the California Assembly tried to do.

The federal government should not get to control literally every aspect of how states regulate AI — particularly when it has itself fallen down on the job — and the Constitution makes pretty clear that the bill as written is far, far too broad. The 10th Amendment states, quite directly, that “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” In stepping so thoroughly on states’ rights, it is difficult to see how the proposed bill would not clash with this 234-year-old bedrock principle of the United States. (Defenders of this overbroad bill will claim that AI is part of interstate commerce; years of lawsuits will ensue.)

Of course there are always arguments on the other side. The Big Tech position was laid out well in a long piece from Friday in Lawfare by Kevin Frazier and Adam Thierer that has elements of truth but misses the larger picture. Part of it emphasizes the race with China and the need for speed. Their claim, exaggerating the costs of regulation and minimizing the costs of having none (not to mention states’ rights), is that AI regulation “could undermine the nation's efforts to stay at the cutting edge of AI innovation at a critical moment when competition with China for global AI supremacy is intensifying” and that “If this growing patchwork of parochial regulatory policies takes root, it could undermine U.S. AI innovation”; they call on Congress “to get serious about preemption.”

What they miss is threefold. First, if current trends continue, the “race” with China will not end in victory, for either side. Because both countries are building essentially the same kinds of models with the same kinds of techniques using the same kinds of data, the results from the two nations are essentially converging on the same outcomes. So-called leaderboards are no longer dominated by any one country. Any advantage in Generative AI (which still hasn’t remotely made a net profit, and is all still speculative) will be minimal, and short-lasting. Our big tech giants will match theirs, and vice versa, and the only real question is about the size of the profits. Any regulation that is proposed will be absorbed as a cost of business (trivial for trillion-dollar companies), and there is no serious argument that the relatively modest costs of regulation (which they don’t even bother to estimate) will have any real-world impact whatsoever on those likely tied outcomes. Silicon Valley loves to invoke China to get better terms, but it probably won’t make any difference. (China actually has far more national regulation around AI than the US does, and that has in no way stopped them from catching up.)

Second, Frazier and Thierer are presenting a false choice. The comparison here is not between coherent federal law and a patchwork of state laws, but between essentially zero enduring federal AI law (only executive orders that seem to come and go with the tides) and the well-intentioned efforts of many state legislators to make up for the fact that Washington has failed. If Washington wants to pass a comprehensive privacy or AI law with teeth, more power to them, but we all know this is unlikely; Frazier and Thierer would leave citizens out to dry, much as low-touch advocates have left us all out to dry when it comes to social media.

Third, Frazier and Thierer skirt the issue of states’ rights altogether, not even considering how AI fits relative to other sensitive issues such as abortion or gun control. In insisting that “might makes right” here for AI, they risk setting a dangerous precedent in which whichever party holds federal power makes all the rules, all the time, overriding the powers reserved to the States that the 10th Amendment exists to protect, and eroding one of our last remaining checks and balances.

And as Senator Markey put it, “[a] 10-year moratorium on state AI regulation won’t lead to an AI Golden Age. It will lead to a Dark Age for the environment, our children, and marginalized communities.”

Consumer Reports’ Policy Analyst for AI Issues, Grace Gedye, also weighed in: “Congress has long abdicated its responsibility to pass laws to address emerging consumer protection harms; under this bill, it would also prohibit the states from taking actions to protect their residents.”

Well aware of the challenges AI poses, state leaders have already been acting. An open letter from the International Association of Privacy Professionals, signed by 62 legislators from 32 states, underscores the importance of state-level AI legislation—especially in the absence of comprehensive federal rules. Since 2022, dozens of states have introduced or passed AI laws. In 2024 alone, 31 states, Puerto Rico, and the Virgin Islands enacted AI-related legislation or resolutions, and at least 27 states passed deepfake laws. These include advisory councils, impact assessments, grant programs, and comprehensive legislation like Colorado's, which would have mandated transparency and anti-discrimination protections in high-risk AI systems. The proposed moratorium would also undo literally every bit of state privacy legislation, despite the fact that no federal bill has passed after many years of discussion.

It's specifically because of state momentum that Big Tech is trying to shut the states down. According to a recent report in Politico, “As California and other states move to regulate AI, companies like OpenAI, Meta, Google and IBM are all urging Washington to pass national AI rules that would rein in state laws they don’t like. So is Andreessen Horowitz, a Silicon Valley-based venture capitalist firm closely tied to President Donald Trump.” All largely behind closed doors. Why? With no regulatory pressure, tech companies would have little incentive to prioritize safety, transparency, or ethical design; any costs to society would be borne by society.

But the reality is that self-regulation has repeatedly failed the public, and the absence of oversight would only invite more industry lobbying to maintain weak accountability.

At a time when voters are demanding protection—and global leaders are sounding the alarm—Congress should not tie the hands of the only actors currently positioned to lead. A decade of deregulation isn’t a path forward. It’s an abdication of responsibility.

If you are among the 82% of Americans who think AI needs oversight, you need to call or write your Congress members now, or the door on AI regulation will slam shut at least for the next decade, if not forever, and we will be entirely at Silicon Valley’s mercy.

Signatories

  • Senator Katie Fry Hester, Maryland
  • Gary Marcus, Professor Emeritus, NYU
  • Delegate Michelle Maldonado, Virginia
  • Senator James Maroney, Connecticut
  • Senator Robert Rodriguez, Colorado
  • Representative Kristin Bahner, Minnesota
  • Representative Steve Elkins, Minnesota
  • Senator Kristen Gonzalez, New York
  • Representative Monique Priestley, Vermont

Why Developers Still Use ColdFusion in 2025

Mike's Notes

Pipi has been built in ColdFusion (CFML) code since Pipi 3 in 2002. It has proven to be a good choice because of the ease of rapid prototyping and how few lines of code are needed to get things done compared with other languages.

CFML acts as a wrapper around Java, so it runs on the Java Virtual Machine (JVM).
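
As a small sketch of what that wrapper looks like in practice (using only a standard JDK class, nothing specific to Pipi), CFML can instantiate and call Java classes directly:

<cfscript>
// Minimal sketch: calling a standard JDK class through CFML's built-in Java bridge.
localDate = createObject( "java", "java.time.LocalDate" ).now();
writeOutput( "Running on the JVM, today is " & localDate.toString() );
</cfscript>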

Pipi 10 will fully migrate to the open-source BoxLang platform provided by Ortus, and still use CFML.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

15/05/2025

Why Developers Still Use ColdFusion in 2025

By: Nick Flewitt
FusionReactor: 13/05/2025

In a tech landscape dominated by JavaScript frameworks, Python, and cloud-native solutions, ColdFusion continues to maintain a dedicated user base. Despite being introduced back in 1995, this platform remains relevant for specific use cases and organizations. Let’s explore why some developers and companies still choose ColdFusion in today’s rapidly evolving development environment.

Legacy Systems and Institutional Knowledge

Many organizations built critical business applications on ColdFusion during its peak popularity in the late 1990s and early 2000s. These systems have been refined over decades and represent significant investments in both time and resources. For these organizations, the cost-benefit analysis often favors maintaining and gradually modernizing existing ColdFusion applications rather than complete rewrites.

“Rewriting working systems from scratch is one of the most expensive and risky decisions an organization can make,” explains a common sentiment among ColdFusion developers.

Rapid Development Speed

ColdFusion was one of the earliest platforms designed specifically for rapid application development (RAD), and this remains one of its strongest advantages. Compared to many other languages, the CFML (ColdFusion Markup Language) syntax allows developers to accomplish complex tasks with minimal code.

Tasks that might require dozens of lines in other languages often need just a few in ColdFusion. For example, querying a database, processing the results, and outputting formatted data can be accomplished in remarkably few lines of CFML.
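
As a rough illustration (the datasource and table names below are made up for the example, not taken from any real application), a query, loop, and HTML output can fit in a handful of lines of CFML:

<cfscript>
// Hypothetical example: the "blog" datasource and "posts" table are assumptions.
posts = queryExecute(
    "SELECT title, page_url FROM posts ORDER BY date_created DESC",
    {},
    { datasource = "blog" }
);
writeOutput( "<ul>" );
posts.each( function( row ) {
    writeOutput( "<li>" & encodeForHTML( row.title ) & "</li>" );
} );
writeOutput( "</ul>" );
</cfscript>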

Strong in the Enterprise

ColdFusion has found particular longevity in enterprise environments, especially in industries like healthcare, finance, government, and education. These sectors value stability, security, and vendor support – all areas where Adobe’s commercial backing of ColdFusion provides reassurance.

Modern Evolution

Contrary to popular belief, ColdFusion hasn’t remained static. Recent versions have introduced substantial modernizations:

  • Support for modern JavaScript frameworks
  • REST API capabilities
  • Performance improvements
  • Docker containerization
  • Enhanced security features

Additionally, Lucee (an open-source CFML engine) provides a free alternative that has helped rejuvenate interest in the language.

Developer Productivity and Salary Advantages

The ColdFusion job market presents an interesting dynamic – while demand for new CF developers has declined, experienced ColdFusion developers often command premium salaries due to the combination of their scarcity and the business-critical systems they maintain.

This niche expertise can be particularly lucrative for freelancers and consultants who specialize in ColdFusion, especially when helping organizations modernize legacy applications.

Integration Capabilities

ColdFusion excels at integrating disparate systems, a crucial capability in enterprise environments with complex technology ecosystems. Its Java foundation allows for integration with virtually any system, while built-in features simplify connecting to databases, APIs, and legacy systems.
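
For example, a built-in HTTP call plus JSON handling takes only a couple of lines; this is a hedged sketch, and the endpoint URL is a placeholder rather than a real service:

<cfscript>
// Sketch of built-in API integration; the endpoint URL is a placeholder.
cfhttp( url = "https://api.example.com/status", method = "get", result = "response" );
status = deserializeJSON( response.fileContent );
writeDump( status );
</cfscript>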

When Choosing ColdFusion Makes Sense

New adoption of ColdFusion typically occurs in specific scenarios:

  • Organizations with existing ColdFusion applications expanding functionality
  • Teams with significant ColdFusion expertise tackling new projects
  • Rapid development of internal business applications where time-to-market outweighs other concerns
  • Specialized industries where established ColdFusion solutions exist

Looking Forward

While ColdFusion isn’t likely to regain its former prominence, it demonstrates an important lesson about technology adoption: tools that effectively solve real business problems can remain viable long after technology trends have passed.

Though smaller than in its heyday, the ColdFusion community remains active and passionate. User groups, conferences like CF Summit, and online forums continue to support developers working with the platform.

Conclusion

ColdFusion’s continued use in 2025 isn’t merely about resistance to change or technical debt. It represents a pragmatic choice for specific use cases and organizations that balances development speed, maintenance costs, and organizational expertise. While not the right choice for every project, its longevity demonstrates that technology adoption isn’t always about following the latest trends but finding the right tool for specific business contexts.

As one veteran ColdFusion developer puts it: “I’ve been hearing that ColdFusion is dying for twenty years now, but somehow I keep getting paid very well to work with it.”

How to Do Mobile Testing Right

Mike's Notes

This is an excellent article by Luca Rossi from the Substack Refactoring on testing applications for mobile devices.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Refactoring
  • Home > Handbook > 

Last Updated

15/05/2025

How to Do Mobile Testing Right

By: Luca Rossi
Refactoring: 14/05/2025

A thorough guide that includes practical playbooks for companies at every stage.

Mobile apps are the primary interface through which hundreds of millions of people interact with services daily.

Smartphones have been a thing for 15+ years, during which they have arguably changed… not so much, especially in recent years. For this reason, it may be reasonable to expect a flawless experience at every tap.

Instead, the reality is different and painful — especially for mobile engineering teams: an ever-expanding universe of devices, hardware, OS versions, screen resolutions, and capabilities, that software must navigate correctly.

Welcome to mobile fragmentation hell.

In my previous life as a startup CTO, I ran a travel web app that had native counterparts on iOS and Android, and I swear that mobile testing and QA was one of the things that kept me up at night. It is incredibly hard to do it right, yet supremely important, having a direct impact on user satisfaction, churn, and ultimately, the bottom line.

So today we are publishing a full guide on fragmentation testing which provides a comprehensive but pragmatic approach to the problem, by covering principles, strategies, and tools.

We are doing so by drawing from my own learnings, from the ones of people in the Refactoring community, and by bringing in the invaluable experience of Nakul Aggarwal, CTO and co-founder of BrowserStack.

BrowserStack is a cornerstone of how thousands of teams — including my former own — engage with real-device testing at scale, and Nakul is one of the world’s most knowledgeable people in this space.

So, as you will see, success in mobile testing is about making smart choices, and focusing your efforts where they yield the greatest return. We'll explore how to define "good enough" in a world of infinite variables, and how to build a testing approach that supports, rather than hinders, your engineering velocity.

Here is the agenda:

  • What is mobile fragmentation? — defining the beast and its many heads.
  • Cost of fragmentation — the real-world business consequences.
  • Fragmentation testing playbook — how to build your full testing process, from strategy down to tactics.
  • Testing strategy vs product maturity — how your playbook should evolve over time.
  • Navigating the trade-offs — balancing cost, speed, coverage, and developer experience.
  • The Future is Still Fragmented — trends, the role of AI, and some closing notes.

Let's dive in!

What is Mobile Fragmentation?

We've thrown the term "mobile fragmentation" around, but what do we mean by that? Spoiler: it’s not something that happens to your phone screen after a drop.

At its core, mobile fragmentation is the sheer diversity of hardware and software across the millions of devices your application might encounter in the wild.

Such diversity is also multi-dimensional:

  • Device manufacturers (the who) — from the big players like Apple and Samsung, to the long tail of Xiaomi, Oppo, OnePlus, and the countless regional champions. Each comes with its own hardware quirks, custom Android skins, and unique interpretations of how Android should behave.
  • Operating systems & versions (the what & when) — you need to account for multiple major versions active concurrently. Update rollouts lead to notorious lags, and some devices never get updated beyond a certain point. This is true for both iOS and Android, with Android being typically much worse.
  • Screen sizes & resolutions (the where) — today’s devices range from compact phones to phablets, foldables, and tablets. Beyond physical size, you may need to account for pixel density, aspect ratios, and newer features like dynamic refresh rates or screen cutouts (notches, punch-holes), all of which can wreak havoc on your UI if not handled gracefully.
  • Hardware differences (the how) — beneath the glass, there's even more: processors, memory constraints, GPUs, and sensors, which may or may not make a difference in how your app behaves.

Trying to account for every permutation is impossible. Understanding these dimensions, however, is the first step to building a smarter strategy.

The Cost of Fragmentation

One of the things I always try to do at Refactoring is to think from first principles, so the first question here is: what if you just ignore this? Seriously, let’s not take anything for granted.

How bad is this, for real?

Unfortunately, when fragmentation is managed poorly, it bites hard — on your users, your team, and the business:

  • Poor UX — the most immediate impact. Users encounter crashes, freezes, baffling UI glitches, or sluggish performance that makes the app feel broken. Frustrated users are 1) unlikely to give you a second chance, and 2) likely to head straight to…
  • Bad reviews — users are quick to voice their displeasure, and negative App Store reviews are incredibly damaging—it doesn’t matter if they are about a small set of devices. A flood of "unusable on Android 12" reviews will torpedo your app's rating, affecting everyone.
  • Churn — if an existing user has a persistent issue on their device after an update, or a new user has a terrible first experience, they're likely to abandon your app. Acquiring users is expensive: losing them due to preventable issues is a painful, self-inflicted wound.
  • Support costs — your support team gets swamped with tickets and complaints related to device-specific bugs. Diagnosing these can be a nightmare, requiring detailed information about device models, OS versions, and steps to reproduce that users often struggle to provide.
  • Slower dev velocity (ironically) — if you are trying to move faster by avoiding thorough testing, think again. Fragmentation bugs in production can lead to constant firefighting and a reactive development cycle. This drains morale and pulls your team away from feature development.

Investing in a good testing strategy isn't just about "quality assurance" in an abstract sense: it's about protecting your revenue, your reputation, and your team's ability to move fast.

So how do you do that? Enter the playbook.

Fragmentation Testing Playbook

We have established the why — now for the how. This section should work as your tactical playbook: the core strategies and tools you'll use to construct a robust, pragmatic mobile fragmentation testing process.

This playbook focuses on four key pillars:

  • Device matrix — your clear, data-driven plan of which devices and OS versions matter the most to your users.
  • Testing mix — a balanced portfolio of emulators, real devices, and cloud solutions to maximize coverage and efficiency.
  • Foundational quality — strong architecture, base testing, and monitoring to significantly reduce the number of bugs that reach device-specific testing.
  • Automation strategy — manual testing doesn't scale, and smart automation is crucial for maintaining velocity and reducing toil for your team.

Let's break these down.

1) Build your device matrix

Your Device Matrix is the single most important artifact guiding your testing. It’s a curated inventory of devices and OS versions, tiered by importance, against which you validate your app. Here is how you build one:

1.1) Know your actual users (be data-driven)

First of all, be data-driven. Prioritize based on which devices, OS versions, and even geographical regions are most prevalent among your user base.

Action — Dive into your analytics. Understand your specific device landscape. What are the top 10, 20, 50 devices your active users are on? What OS versions dominate? This data is the bedrock of your device matrix.
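
To make this concrete, here is a minimal TypeScript sketch that turns a hypothetical analytics export into a ranked device/OS list. The row shape and field names are assumptions for illustration, not any particular analytics API.

```typescript
// Minimal sketch: rank device/OS combinations from an analytics export.
// The AnalyticsRow shape is hypothetical; adapt it to whatever your
// analytics tool (Firebase, an app store console, etc.) actually exports.

interface AnalyticsRow {
  deviceModel: string;   // e.g. "Samsung Galaxy S23"
  osVersion: string;     // e.g. "Android 14"
  activeUsers: number;   // active users on this device/OS combination
}

interface MatrixEntry {
  device: string;
  os: string;
  users: number;
  share: number;         // fraction of total active users
}

function buildDeviceRanking(rows: AnalyticsRow[]): MatrixEntry[] {
  const total = rows.reduce((sum, r) => sum + r.activeUsers, 0);
  return rows
    .map((r) => ({
      device: r.deviceModel,
      os: r.osVersion,
      users: r.activeUsers,
      share: total > 0 ? r.activeUsers / total : 0,
    }))
    .sort((a, b) => b.users - a.users);
}

// Usage: the top of this ranking is the raw material for your device matrix.
const ranking = buildDeviceRanking([
  { deviceModel: "Samsung Galaxy S23", osVersion: "Android 14", activeUsers: 41200 },
  { deviceModel: "iPhone 15", osVersion: "iOS 17", activeUsers: 38900 },
  { deviceModel: "Xiaomi Redmi Note 12", osVersion: "Android 13", activeUsers: 9800 },
]);
console.table(ranking.slice(0, 20));
```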

1.2) Prioritize (risk assessment)

Not all devices or features are created equal in terms of risk.

Risk, in this context, is a function of likelihood (how many users on this device/OS?) and impact (how critical is this feature? What happens if it breaks?).

Action — Focus your most intensive testing on high-traffic user flows running on the most popular devices/OS versions within your user base. A bug in your checkout process on your top 5 Android devices is infinitely more critical than a minor UI glitch on an obscure device with little market share among your users.
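
One way to operationalize this, as a sketch only with made-up impact weights, is to score each device/flow pair as likelihood times impact:

```typescript
// Sketch: risk = likelihood (user share of the device/OS) x impact (how
// critical the flow is). The flows and weights here are illustrative.

type Flow = "checkout" | "signup" | "browse" | "settings";

const flowImpact: Record<Flow, number> = {
  checkout: 1.0, // revenue-critical
  signup: 0.9,
  browse: 0.6,
  settings: 0.2,
};

function riskScore(deviceShare: number, flow: Flow): number {
  return deviceShare * flowImpact[flow];
}

// A checkout bug on a device with 12% of users outranks a settings
// glitch on a device with 2% of users by a wide margin.
console.log(riskScore(0.12, "checkout")); // 0.12
console.log(riskScore(0.02, "settings")); // 0.004
```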

1.3) Define tiers (risk acceptance)

Since you can't test everything equally, you need to explicitly define levels of risk you're willing to accept for different device segments. This formalizes your prioritization.

Action — Create device/OS tiers. For example:

    • Tier 1 (Critical) — your most popular devices/OS versions (e.g. top 80% of your user base). Bugs here are unacceptable. These devices get the full suite of manual and automated tests for all critical and important features.
    • Tier 2 (Important) — the next significant chunk of devices. Minor cosmetic issues might be tolerable, but core functionality must work. These might get critical path automation and focused manual testing.
    • Tier 3 (Supported/Best Effort) — older or less common devices. You aim for basic functionality, but known issues might be documented and not block a release if non-critical. Testing might be limited to smoke tests or exploratory testing if time permits.
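
Tier assignment can be automated from the ranking built in the earlier sketch. In this rough sketch the 80% and 95% cut-offs are placeholders to tune per product, not a recommendation:

```typescript
// Sketch: assign tiers by walking the ranked device list (sorted by user
// share, descending) and tracking cumulative coverage. Cut-offs are
// placeholders to tune for your own product.

interface RankedDevice {
  device: string;
  os: string;
  share: number; // fraction of active users, from your analytics
}

type Tier = 1 | 2 | 3;

function assignTiers(
  ranked: RankedDevice[],
  tier1Cutoff = 0.8,
  tier2Cutoff = 0.95
): Array<RankedDevice & { tier: Tier }> {
  let cumulative = 0;
  return ranked.map((d) => {
    cumulative += d.share;
    const tier: Tier = cumulative <= tier1Cutoff ? 1 : cumulative <= tier2Cutoff ? 2 : 3;
    return { ...d, tier };
  });
}
```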

1.4) Keep it alive

Finally, create a process in which you review and update the matrix on a periodic basis (e.g. quarterly), as your user base and the market will inevitably evolve. Your matrix is only useful as long as it is up to date.

2) Create your testing mix

No single testing method conquers fragmentation, but a balanced portfolio might do. Here are the most common approaches:

  • Emulators & Simulators — emulators are the first line of defense for developers. They are fast, free, and scalable for basic layout and functional bug checks during development. However, they can't perfectly replicate real hardware performance, sensor behavior, or OEM-specific OS mods.
  • Real devices (In-house lab) — they provide the highest fidelity for performance, hardware interactions, and manufacturer quirks… but they can be expensive and logistically challenging to maintain.
  • Cloud device farms — the scalable solution for broad real-device testing (manual and automated) without owning hardware. Platforms like BrowserStack give you on-demand access to thousands of physical devices/OS versions globally, and allow precise matrix mirroring and massive test parallelization.
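
To show how a device matrix can be mirrored on a cloud farm, here is a sketch that maps matrix entries to Appium-style W3C capabilities. The bstack:options block reflects my understanding of BrowserStack's documented capability format, so treat the exact key names as an assumption and verify them against the vendor docs.

```typescript
// Sketch: turn device-matrix entries into Appium W3C capability objects
// for a cloud device farm. The "bstack:options" keys are assumptions based
// on BrowserStack's documented capability format; verify exact names
// against the vendor docs before relying on them.

interface MatrixDevice {
  device: string;                 // e.g. "Samsung Galaxy S23"
  osVersion: string;              // e.g. "13.0"
  platform: "Android" | "iOS";
}

function toCapabilities(d: MatrixDevice) {
  return {
    platformName: d.platform,
    "appium:automationName": d.platform === "Android" ? "UiAutomator2" : "XCUITest",
    "bstack:options": {
      deviceName: d.device,
      osVersion: d.osVersion,
    },
  };
}

// Feeding the resulting array into the test runner's capabilities config
// lets the same suite run across every Tier 1 device in parallel.
const tier1: MatrixDevice[] = [
  { device: "Samsung Galaxy S23", osVersion: "13.0", platform: "Android" },
  { device: "iPhone 15", osVersion: "17", platform: "iOS" },
];
export const capabilities = tier1.map(toCapabilities);
```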

3) Establish foundational quality

Strong underlying code quality and good production monitoring significantly ease the load on device-specific testing. Your goal should be to minimize the number of issues that reach device-level testing by intercepting them earlier:

  • Strong typing & static analysis — strongly typed languages like TypeScript (React Native), Kotlin, and Swift help you catch a lot of errors before runtime. Employ linters for further analysis.
  • Robust unit & integration tests — ensure core logic, utilities, and API integrations are thoroughly covered. Unit and integration tests are fast and cheap to run, especially compared to E2E tests.
  • Architect for testability — design choices matter. Keep the mobile app light by pushing as much business logic as possible into the backend layer, where it is easier to test. If you are using a universal framework like React Native or Flutter, refrain as much as possible from writing platform-specific code.
  • Intensive logging & production monitoring — your safety net. Implement good monitoring with tools like Firebase Crashlytics or Sentry, to catch issues that slip through as early as possible.
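
As a small illustration of the "architect for testability" point, here is a sketch (with hypothetical names and rules) where pricing logic lives in a pure function that a plain unit test, written here with Vitest as one option, can cover on any CI machine with no device or emulator involved:

```typescript
// Sketch: keep business logic in pure, platform-agnostic functions so it
// can be covered by cheap unit tests instead of device-level E2E tests.
// The cart model and discount rule are hypothetical.
import { expect, test } from "vitest";

export interface CartItem {
  priceCents: number;
  quantity: number;
}

export function cartTotalCents(items: CartItem[], discountPct = 0): number {
  const subtotal = items.reduce((sum, i) => sum + i.priceCents * i.quantity, 0);
  return Math.round(subtotal * (1 - discountPct / 100));
}

// Runs in milliseconds on CI, long before any screen is rendered
// on a real device.
test("applies a percentage discount to the cart subtotal", () => {
  const items: CartItem[] = [
    { priceCents: 1000, quantity: 2 },
    { priceCents: 500, quantity: 1 },
  ];
  expect(cartTotalCents(items, 10)).toBe(2250);
});
```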

4) Automation strategy

Manual testing across a large matrix is unsustainable, but implementing automation across the board can be equally hard. Make smart automation choices that streamline key areas while maintaining velocity:

  • Focus your automation — don't try to automate everything. Prioritize critical user flows ("must not break" scenarios) on your Tier 1 devices. Use well-established frameworks (Appium, Espresso, XCUITest).
  • Parallelize with cloud platforms — running suites sequentially is a bottleneck. Cloud platforms enable massive parallel execution across hundreds of configurations, providing fast feedback in CI/CD.
  • Incorporate visual regression testing — for UI-heavy apps, these tools automatically detect visual changes across devices, catching layout bugs functional tests miss.
  • Reduce toil & boost DevEx — automation's goal is to free your team from repetitive manual checks, leading to faster, more reliable feedback and higher developer confidence.
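
To ground the "focus your automation" advice, here is a sketch of one "must not break" flow in WebdriverIO/Appium style. The accessibility IDs and screen names are hypothetical, and the exact APIs should be checked against the WebdriverIO documentation.

```typescript
// Sketch: one critical-path flow automated with WebdriverIO + Appium.
// Accessibility IDs ("~login-button", etc.) are hypothetical placeholders
// for whatever your app exposes. describe/it come from the test framework
// configured in WebdriverIO (e.g. Mocha).
import { $, expect } from "@wdio/globals";

describe("critical path: login", () => {
  it("logs in and lands on the home screen", async () => {
    await $("~email-input").setValue("qa-user@example.com");
    await $("~password-input").setValue("correct-horse-battery-staple");
    await $("~login-button").click();

    // The home screen header should appear on every Tier 1 device.
    await expect($("~home-header")).toBeDisplayed();
  });
});
```

Run this one suite in parallel across the Tier 1 capabilities generated earlier, rather than trying to automate every screen on every device.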

Testing Strategy vs Product Maturity

The principles we covered should work for most teams, but the truth is that your specific approach to fragmentation should also change over time, evolving alongside your product and team journey.

Obviously, applying an enterprise level of testing rigor to a pre-PMF product is a waste of resources, just as neglecting deeper testing once you have scale is a recipe for disaster.

So let's try mapping your fragmentation strategy to the typical QA / Product Journey stages we have discussed before.

Your fragmentation strategy should evolve alongside your product

There is a lot of nuance and “your mileage may vary” here, but let’s sketch a basic cheatsheet:

1) Zero-to-One

  • Focus — Speed, iteration, and validating core hypotheses.
  • Fragmentation Approach — Minimal and highly pragmatic.
    • Device "Matrix" — Likely just the founders' phones, maybe a couple of common emulators/simulators for basic layout checks. A formal matrix is overkill.
    • Testing — Primarily manual, "happy path" testing on these few devices. Does the core loop work? Can users sign up and perform the one key action?
    • Automation — Probably none, or at most, some very basic UI smoke tests if the team has a strong existing preference.
    • Risk Tolerance — Very high. Bugs are expected. The bigger risk is building the wrong product, not having a perfectly polished app on every obscure device.
  • Takeaway — Don't let fragmentation concerns prematurely slow you down. Focus on finding PMF.

2) Finding PMF / Early Growth

  • Focus — Stabilizing core features, growing the user base, and starting to understand user segments.
  • Fragmentation Approach — Begin to formalize, driven by initial user data.
    • Device Matrix — Start tracking user analytics (even basic ones from e.g. Firebase or your app store consoles). Identify your top 5-10 devices/OS versions. This forms your rudimentary, evolving matrix.
    • Testing — Still heavily manual, but more structured. Test critical user flows on your identified key devices. Introduce more thorough exploratory testing.
    • Automation — Consider introducing UI automation for 1-2 absolute critical paths (e.g., signup, core purchase flow) if you have the expertise. Keep it lean.
    • Tools — This is where you might start dipping your toes into cloud device services for occasional checks on devices you don't own, especially if user feedback points to issues on specific models.
    • Risk Tolerance — Medium. Core functionality on popular devices needs to be solid. You can still live with some rough edges on less common configurations.
  • Takeaway — Use early data to guide a Minimum Viable Testing process for fragmentation.

3) Scaling / Established Product

  • Focus — Reliability, performance at scale, expanding feature sets, and protecting brand reputation.
  • Fragmentation Approach — Strategic, data-driven, and increasingly automated.
    • Device Matrix — A well-defined, multi-tiered matrix (as discussed in the playbook above) is essential, constantly updated with fresh user analytics and market data.
    • Testing — A more sophisticated mix:
      • Manual — Focused exploratory testing, usability checks, and testing new features on key devices.
      • Automated — Significant investment in UI automation for regression testing across Tier 1 and critical Tier 2 devices, running in CI/CD.
    • Tools — Heavy reliance on cloud device farms (like BrowserStack) for comprehensive automated and manual testing coverage across the matrix. You might also maintain a small, curated in-house lab for frequently used dev/QA devices; this balance can also shift over time.
    • Performance Monitoring — Actively monitor performance and stability across different device segments in production.
    • Risk Tolerance — Low to Very Low for Tier 1 devices and critical functionality. Higher for Tier 3.
  • Takeaway — Your fragmentation strategy is now a core part of your quality engineering process, deeply integrated and data-informed.

The Future is Still Fragmented

So, will fragmentation ever end?

Probably not, at least not anytime soon. While there are glimmers of hope (Google's Project Mainline, for instance), the fundamental drivers of diversity remain.

Hardware innovation in smartphones might be questionable today, but new form factors emerge all the time (foldables are already here, wearables are well established, and AR/VR may be on the horizon), and OS customizations persist.

What about AI?

There's certainly potential for AI to assist with this. AI may write test cases, better simulate E2E flows, and even predict high-risk device/OS combinations based on code changes.

However, the core challenge of executing tests across diverse hardware remains, and AI is not a silver bullet.

The reality is that the right mindset and an intentional strategy (data-driven device matrix, smart testing mix, strong foundational quality, targeted automation) remain your most crucial assets for navigating the mobile landscape.

The landscape will shift, but the principles of smart, risk-based testing will endure.

Bottom line

And that’s it for today! Remember that navigating the fragmented world of mobile devices is a marathon, not a sprint. Here are some takeaways from today’s guide:

  • Let data drive your device matrix — your actual user analytics are the most reliable guide for deciding which devices and OS versions deserve your primary testing focus. Don't guess; know.
  • Embrace tiered, risk-based testing — not all devices or bugs are created equal. Prioritize ruthlessly, focusing maximum effort on high-impact areas and accepting calculated risks elsewhere.
  • Blend your testing mix wisely — combine emulators (for speed), a curated in-house lab (for frequent access/fidelity), and cloud device farms like BrowserStack (for breadth, scale, and specialized needs).
  • Build on foundational quality — strong typing, linting, robust unit/integration tests, and good architectural choices significantly reduce the burden on expensive end-to-end device testing.
  • Automate strategically, not exhaustively — focus UI automation on stable, critical user flows on your most important devices to reduce toil and get fast feedback, leveraging parallel execution in the cloud.
  • Evolve your strategy with maturity — the right level of testing rigor changes as your product grows from pre-PMF to scale. Continuously adapt your approach.

Final note: I want to thank Nakul for joining in on this. I am a fan of what BrowserStack is building for mobile testing and AI-powered workflows. You can learn more below.