Open Payment Standard x402 Expands Capabilities in Major Upgrade

Mike's Notes

A need-to-know for handling payments in the future.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > InfoQ
  • Home > Handbook > 

Last Updated

22/03/2026

Open Payment Standard x402 Expands Capabilities in Major Upgrade

By: Sergio De Simone
InfoQ: 22/01/2026

Sergio De Simone is a software engineer with over twenty-five years of experience across a range of projects and companies, including work environments as different as Siemens, HP, and small startups. For the last 10+ years, his focus has been on development for mobile platforms and related technologies. He is currently working for BigML, Inc., where he leads iOS and macOS development.

After six months of real-world usage, the open payment standard x402 has received a major update, extending the protocol beyond single-request, exact-amount payments. The release adds support for wallet-based identity, automatic API discovery, dynamic payment recipients, expanded multi-chain and fiat support via CAIP standards, and a fully modular SDK for custom networks and payment schemes.

"V2 is a major upgrade that makes the protocol more universal, more flexible, and easier to extend across networks, transports, identity models, and payment types. The spec is cleaner, more modular, and aligned with modern standards including CAIP and IETF header conventions, enabling a single interface for onchain and offchain payments."

x402 V2 offers a unified payment interface supporting stablecoins and tokens across multiple chains, including Base, Solana, and others, while maintaining compatibility with legacy payment rails such as ACH, SEPA, and card networks. It also introduces per-request routing to specific addresses, roles, or callback-based payout logic, enabling complex multi-step payment workflows.

Another enhancement in x402 V2 is the clear separation between the protocol specification, its SDK implementation, and facilitators (responsible for verifying and settling the payment on-chain), which improves extensibility and enables a modular, plug-in–based architecture.

The new standard also introduces wallet-based access, reusable sessions, and modular paywalls. Wallet support gives clients greater flexibility, streamlining payment flows and reducing round trips and latency for previously purchased items. Modular paywalls enable developers to integrate and extend new backend payment logic, fostering a more extensible ecosystem.

Finally, x402 V2 improves the developer experience by simplifying configuration through its modular design, adding support for choosing multiple facilitators simultaneously, and minimizing the amount of glue code or boilerplate required.

x402 is an open, web-native payment standard designed to make payments a first-class citizen of the internet. It enables micro-payments, pay-per-use, and machine-to-machine payments, allowing web apps, APIs, and autonomous agents (like AI bots) to pay for services directly over HTTP without traditional accounts, subscriptions, or complex payment flows. Within months of launch, the protocol had processed over 100 million payment flows across APIs, web applications, and autonomous agents.

The protocol leverages the rarely used HTTP status code 402 (Payment Required) to signal when payment is required and to include payment instructions in the response. By using x402, payments can be executed directly within the HTTP request–response flow, eliminating the need to redirect users to external payment pages or to rely on API keys and personal accounts.
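
To make that flow concrete, here is a minimal sketch of the request–response loop in Python, using only the standard library. This is not the official x402 SDK: the header name, JSON fields, and port are illustrative placeholders rather than the real x402 wire format. The server answers an unpaid request with 402 Payment Required and machine-readable instructions; a client that retries with a payment proof attached gets the resource.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PaywalledResource(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical header name; the real x402 spec defines its own payment headers.
        proof = self.headers.get("X-Payment-Proof")
        if proof is None:
            # No payment attached: answer 402 with machine-readable instructions.
            instructions = {
                "amount": "0.001",
                "asset": "USDC",           # illustrative values only
                "network": "eip155:8453",  # CAIP-2 style chain identifier
                "payTo": "0xRecipientAddress",
            }
            body = json.dumps(instructions).encode()
            self.send_response(402)  # HTTP 402 Payment Required
        else:
            # A real deployment would hand the proof to a facilitator for
            # verification and on-chain settlement; this sketch simply trusts it.
            body = json.dumps({"data": "paid content"}).encode()
            self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8402), PaywalledResource).serve_forever()

An agent hitting this endpoint would parse the 402 body, pay on the indicated network, and repeat the same GET with the proof attached, all within an ordinary HTTP exchange.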

Cloudflare, as one of the original partners in the x402 Foundation alongside Coinbase, integrated support for the protocol into its developer tools and infrastructure. This includes both the Agents SDK, which allows developers to build agents capable of automatically making payments using x402, and MCP servers that expose x402-enabled tools and enable services to return 402 Payment Required responses and accept x402 payments from clients.

At Age 25, Wikipedia Refuses to Evolve

Mike's Notes

A fascinating insight from a former board member of the Wikimedia Foundation.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > IEEE Spectrum
  • Home > Handbook > 

Last Updated

21/03/2026

At Age 25, Wikipedia Refuses to Evolve

By: Dariusz Jemielniak
IEEE Spectrum: 30/01/2026

Dariusz Jemielniak is vice president of the Polish Academy of Sciences, a full professor at Kozminski University in Warsaw, and a faculty associate at the Berkman Klein Centre for Internet and Society at Harvard University. He served for a decade on the Wikimedia Foundation Board of Trustees and is the author of Common Knowledge? An Ethnography of Wikipedia (Stanford University Press).

The digital commons champion faces a crisis of its own making

Wikipedia once had protracted and open debates about new formats that could let it evolve—are those days past? Illustration: IEEE Spectrum. Source images: Nohat/Wikimedia; Getty Images

Wikipedia celebrates its 25th anniversary this month as the internet’s most reliable knowledge source. Yet behind the celebrations, a troubling pattern has developed: The volunteer community that built this encyclopedia has lately rejected a key innovation designed to serve readers. The same institution founded on the principle of easy and open community collaboration could now be proving unmovable—trapped between the need to adapt and an institutional resistance to change.

Wikipedia’s Digital Sclerosis

Political economist Elinor Ostrom won the 2009 Nobel Prize in economics for studying the ways communities successfully manage shared resources—the “commons.” Wikipedia’s two founders (Jimmy Wales and Larry Sanger) established the internet’s open-source encyclopedia 25 years ago on principles of the commons: Its volunteer editors create and enforce policies, resolve disputes, and shape the encyclopedia’s direction.

But building around the commons contains a trade-off, Ostrom’s work found. Communities that make collective decisions tend to develop strong institutional identities. And those identities sometimes spawn reflexively conservative impulses.

Giving users agency over Wikipedia’s rules, as I’ve discovered in some of my own studies of Wikipedia, can ultimately lead an institution away from the needs of those it serves.

Wikipedia’s editors have built the largest collaborative knowledge project in human history. But the governance these editors exercise increasingly resists new generations of innovation.

Paradoxically, Wikipedia’s revolutionarily collaborative structure once put it at the vanguard of innovation on the open internet. But now that same structure may be failing newer generations of readers.

Does Wikipedia’s Format Belong to Readers or Editors?

There’s a generational disconnect today at the heart of Wikipedia’s current struggles. The encyclopedia’s format remains wedded to the information-dense, text-heavy style of Encyclopedia Britannica—the very model Wikipedia was designed to replace.

A Britannica replacement made sense in 2001. One-quarter of a century ago, the average internet user was older and accustomed to reading long-form content.

However, teens and twentysomethings today are a very different demographic, with markedly different media consumption habits from those of Wikipedia’s early readers. Gen Z and Gen Alpha readers are accustomed to TikTok, YouTube, and mobile-first visual media. Their impatience with Wikipedia’s impenetrable walls of text, as any parent of kids this age knows, arguably threatens the future of the internet’s collaborative knowledge clearinghouse.

The Wikimedia Foundation knows this, too. Research has shown that many readers today greatly value quick overviews of any article, before the reader considers whether to dive into the article’s full text.

So last June, the Foundation launched a modest experiment they called “Simple Article Summaries.” The summaries consisted of AI-generated, simplified text at the top of complex articles. Summaries were clearly labeled as machine-generated and unverified, and they were available only to mobile users who opted in.

Even after all these precautions, however, the volunteer editor community barely gave the experiment time to begin. Editors shut down Simple Article Summaries within a day of its launch.

The response was fierce. Editors called the experiment a “ghastly idea” and warned of “immediate and irreversible harm” to Wikipedia’s credibility.

Comments in the village pump (a community discussion page) ranged from blunt (“Yuck”) to alarmed, with contributors raising legitimate concerns about AI hallucinations and the erosion of editorial oversight.

Revisiting Wikipedia’s Past Helps Reveal Its Future

Last year’s Simple Summaries storm, and its sudden silencing, should be considered in historical context. Consider three other flashpoints from Wikipedia’s past:

In 2013, the Foundation launched VisualEditor—a “what you see is what you get” interface meant to make editing easier—as the default for all newcomers. However, the interface often crashed, broke articles, and was so slow that experienced editors fled. After protests erupted, a Wikipedia administrator overrode the Foundation’s rollout, returning VisualEditor to an opt-in feature.

The following year brought Media Viewer, which changed how images were displayed. The community voted to disable it. Then, when an administrator implemented that consensus, a Foundation executive reversed the change and threatened to revoke the admin’s privileges. On the German Wikipedia, the Foundation deployed a new “superprotect” user right to prevent the community from turning off Media Viewer.

Even proposals that technically won majority support met resistance. In 2011, the Foundation held a referendum on an image filter that would let readers voluntarily hide graphic content. Despite 56 percent support, the feature was shelved after the German Wikipedia community voted 86 percent against it.

These three controversies from Wikipedia’s past reveal how genuine conversations can achieve, after disagreements and controversy, compromise and evolution of Wikipedia’s features and formats. Reflexive vetoes of new experiments, as the Simple Summaries spat highlighted last summer, are not genuine conversation.

Supplementing Wikipedia’s Encyclopedia Britannica–style format with a small component that contains AI summaries is not a simple problem with a cut-and-dried answer, though neither were VisualEditor or Media Viewer.

Why did 2025’s Wikipedia crisis result in an immediate clampdown, whereas the internal crises of 2011–2014 played out through community debates, discussions and plebiscites? Is Wikipedia’s global readership today witnessing the first signs of a dangerous generation gap?

Wikipedia Needs to Air Its Sustainability Crisis

A still deeper crisis haunts the online encyclopedia: the sustainability of unpaid labor. Wikipedia was built by volunteers who found meaning in collective knowledge creation. That model worked brilliantly when a generation of internet enthusiasts had time, energy, and idealism to spare. But the volunteer base is aging. A 2010 study found the average Wikipedia contributor was in their mid-twenties; today, many of those same editors are in their forties or fifties.

Meanwhile, the tech industry has discovered how to extract billions in value from their work. AI companies train their large language models on Wikipedia’s corpus. The Wikimedia Foundation recently noted it remains one of the highest-quality datasets in the world for AI development. Research confirms that when developers try to omit Wikipedia from training data, their models produce answers that are less accurate, less diverse, and less verifiable.

The irony is stark. AI systems deliver answers derived from Wikipedia without sending users back to the source. Google’s AI Overviews, ChatGPT, and countless other tools have learned from Wikipedia’s volunteer-created content—then present that knowledge in ways that break the virtuous cycle Wikipedia depends on. Fewer readers visit the encyclopedia directly. Fewer visitors become editors. Fewer users donate. The pipeline that sustained Wikipedia for a quarter century is breaking down.

What Does Wikipedia’s Next 25 Years Look Like?

The Simple Summaries situation arguably risks making the encyclopedia increasingly irrelevant to younger generations of readers. And they’ll be relying on Wikipedia’s information commons for the longest time frame of any cohort now editing or reading it.

On the other hand, a larger mandate does, of course, remain at Wikipedia: to serve as steward of the information commons. And a wrongly implemented Simple Summaries could fail that ambitious objective, which would be terrible, too.

Weighing those competing risks, frankly, is what open discussions and sometimes-messy referenda are for: not sudden shutdowns.

Meanwhile, AI systems should credit Wikipedia when drawing on its content, maintaining the transparency that builds public trust. Companies profiting from Wikipedia’s corpus should pay for access through legitimate channels like Wikimedia Enterprise, rather than scraping servers or relying on data dumps that strain infrastructure without contributing to maintenance.

Perhaps as the AI marketplace matures, there could be room for new large language models trained exclusively on trustworthy Wikimedia data—transparent, verifiable, and free from the pollution of synthetic AI-generated content. Perhaps, too, Creative Commons licenses need updating to account for AI-era realities.

Perhaps Wikipedia itself needs new modalities for creating and sharing knowledge—ones that preserve editorial rigor while meeting audiences where they are.

Wikipedia has survived edit wars, vandalism campaigns, and countless predictions of its demise. It has patiently outlived the skeptics who dismissed it as unreliable. It has proven that strangers can collaborate to build something remarkable.

But Wikipedia cannot survive by refusing to change. Ostrom’s Nobel Prize–winning research reminds us that the communities that govern shared resources often grow conservative over time.

For anyone who cares about the future of reliable information online, Wikipedia’s 25th anniversary is not just a celebration. It is an urgent warning about what happens when the institutions we depend on cannot adapt to the people they are meant to serve.

Is Particle Physics Dead, Dying, or Just Hard?

Mike's Notes

This Quanta Magazine article about the current state of Particle Physics got me thinking.

  • Particle physicists are very good at statistical physics and maths
  • They have to be very bright and well-trained
  • They like hard problems to solve
  • There are unemployed particle physicists
  • Ajabbi Research will need people in the future. Hmmm

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Quanta Magazine
  • Home > Handbook > 

Last Updated

20/03/2026

Is Particle Physics Dead, Dying, or Just Hard?

By: Natalie Wolchover
Quanta Magazine: 26/01/2026

Natalie Wolchover is a columnist for Quanta Magazine, where she covered the physical sciences for more than a decade. Her writing has been featured in The Best American Science and Nature Writing, The Best American Magazine Writing, and The Best Writing on Mathematics, and has won several awards, including the 2022 Pulitzer Prize for Explanatory Reporting, the 2016 Evert Clark/Seth Payne Award, and the American Institute of Physics’ 2017 Science Communication Award. She was lead editor for the National Magazine Award–winning special issue, "The Unraveling of Space-Time." Her first book, The Question to Which the Universe Is the Answer, is scheduled for publication in 2027.

Columnist Natalie Wolchover checks in with particle physicists more than a decade after the field entered a profound crisis.

In July 2012, physicists at the Large Hadron Collider (LHC) in Europe triumphantly announced the discovery of the Higgs boson, the long-sought linchpin of the subatomic world. Interacting with Higgs bosons imbues other elementary particles with mass, making them slow down enough to assemble into atoms, which then clump together to make everything else.

A couple of months later, I took a job as the first staff reporter at the nascent science magazine that would become Quanta. Turns out I was starting on the physics beat just as the drama was picking up.

The drama wasn’t about the Higgs particle; by the time it materialized at the LHC there was already little doubt about its existence. The Higgs was the last piece of the Standard Model of particle physics, the 1970s-era set of equations governing the 25 known elementary particles and their interactions.

More striking was what did not emerge from the data.

Physicists had spent billions of euros building the 27-kilometer supercollider not only to confirm the Standard Model but also to supersede it by uncovering components of a more complete theory of nature. The Standard Model doesn’t include particles that could comprise dark matter, for instance. It doesn’t explain why matter dominates over antimatter in the universe, or why the Big Bang happened in the first place. Then there’s the inexplicably enormous disparity between the Higgs boson’s mass (which sets the physical scale of atoms) and the far higher mass-energy scale associated with quantum gravity, known as the Planck scale. The chasm between physical scales — atoms are vastly larger than the Planck scale — seems unstable and unnatural. In 1981, the great theorist Edward Witten thought of a solution for this “hierarchy problem”: Balance would be restored by the existence of additional elementary particles only slightly heavier than the Higgs boson. The LHC’s collisions should have been energetic enough to conjure them.

But when protons raced both ways around the tunnel and crashed head-on, spraying debris into surrounding detectors, only the 25 particles of the Standard Model were observed. Nothing else showed up.

In philosophy, “qualia” refers to the subjective qualities of our experience: what it’s like for Alice to see blue or for Bob to feel delighted. Qualia are “the ways things seem to us,” as the late philosopher Daniel Dennett put it. In these essays, our columnists follow their curiosity, and explore important but not necessarily answerable scientific questions.

The absence of any “new physics” — particles or forces beyond the known ones — fomented a crisis. “Of course, it is disappointing,” the particle physicist Mikhail Shifman told me that fall of 2012. “We’re not gods. We’re not prophets. In the absence of some guidance from experimental data, how do you guess something about nature?”

Once the standard reasoning about the hierarchy problem had been shown to be wrong, there was no telling where new physics might be found. It could easily lie beyond the reach of experiments. The particle physicist Adam Falkowski predicted to me at the time that, without a way to search for heavier particles, the field would undergo a slow decay: “The number of jobs in particle physics will steadily decrease, and particle physicists will die out naturally.”

The crisis and its fallout made for years of interesting reporting, but sure enough, the frequency of news stories related to particle physics diminished. I fell out of touch with sources. More than 13 years on, in this first column for Qualia, a new series of essays in Quanta Magazine, I’m taking stock. Is particle physics dying, as Falkowski predicted? Can new physics still be found? What’s the future for particle physicists? Will artificial intelligence help? How much hope is left in the search for answers to the many remaining mysteries of the universe?

Some particle physicists act as if there’s no crisis at all. The LHC is still running and will for at least another decade, and its operators are finding new sources of enthusiasm.

In the last couple of years, data handling at the collider has improved with the use of AI. Pattern recognizers can sort through the outgoing debris of proton collisions and classify collision events more accurately than human-made algorithms can. This helps the physicists to more accurately measure the “scattering amplitude,” essentially the probability that different particle interactions will occur. For instance, AI systems can determine more precisely how many top quarks arise in the aftermath of collisions versus the number of bottom quarks. Any statistical deviations from the predictions of the Standard Model could signify the involvement of unknown elementary particles.


A proton-proton collision documented by the Compact Muon Solenoid at CERN in 2012 shows evidence of the decay of the Higgs boson.

CMS Collaboration; Mc Cauley, Thomas

Novel particles as hefty as Higgs bosons would not be so subtle; they would have shown up already as pronounced bumps on data plots. But as Matt Strassler, a particle physicist affiliated with Harvard University, explained to me, the traces of lighter novel particles could still lie in so-called hidden valleys in the data. “There’s a huge amount of unexplored territory there,” he said. There might exist, for instance, an unstable type of dark matter particle that leaves its mark by occasionally arising and immediately decaying into an excessive number of muon-antimuon pairs. Detecting such an excess would point indirectly to the unstable particle’s existence. “For people who thought all the new physics is at high energies — they’re very disappointed right now,” Strassler said. “I don’t share that view. There are many opportunities for nature to provide clues at low energies.”

So far, though, no such indirect evidence of new physics has been detected. The more accurate the statistics have become at the LHC, the better they match the Standard Model. Michelangelo Mangano, a particle physicist at CERN, the laboratory that houses the LHC, said the collider today is like a tool for exploring the Standard Model’s predictions, and he considers this exploration worthwhile because not all consequences of the equations are easy to calculate. The search for new physics beyond the Standard Model is ongoing, Mangano said, but “the fact that it’s not giving positive results does not mean we are stuck, dead, or wasting our time.”

These questions are so fundamental that of course it’s worth nailing down every amplitude and checking every hidden valley, since we have the tool for the job. But for hunters of new physics, does the game end there?

The community wants to go bigger. CERN physicists want to build a Future Circular Collider, tripling the circumference of the LHC with a 91-kilometer tunnel beneath the Franco-Swiss border, to both probe higher energies and look for subtler signals. This FCC would initially collide electrons, which, unlike protons, are themselves elementary particles, with no substructure. Their clean collisions would allow more precise measurements of scattering amplitudes, making the FCC ultrasensitive to indirect signs of new physics. By the end of the century, the mega-collider would be upgraded to collide protons, as the LHC does now. Proton collisions are messier, but at the FCC they would achieve unprecedented energies — about seven times higher than the LHC can currently muster — so they have a chance, however slim, of revealing heavy particles beyond the LHC’s reach. (In theory, particle masses could range up to a million billion times greater than what the LHC energy scale can produce directly, so there’s no reason to expect them around the next bend.)

As of now, the FCC’s fate is unknown; formal approval and funding commitments by member countries won’t come before 2028.

Meanwhile, U.S. particle physicists are aiming to complement the European strategy by constructing a brand-new type of machine: a muon collider. Muons are elementary like electrons, but they’re 200 times heavier, so their collisions would be both clean and energetic (albeit not reaching the collision energies of the LHC). Both the selling point and the challenge of this newfangled type of machine is that it will require major technical innovations (with all the spin-off potential that can bring), because muons are highly unstable. They must be accelerated and collided mere microseconds after they’re created.

Demonstrating the technology and then constructing the collider would take roughly 30 years, and that’s with federal funding. “We have to figure out how to do it in between 10 and 20 billion [dollars],” said Maria Spiropulu, a physics professor at the California Institute of Technology and co-chair of the committee behind a national report endorsing a muon collider program that came out in June 2025. Over the coming years, the Department of Energy will weigh whether to fund the proposal rather than competing science projects. What hurts its case is the lack of a “discovery guarantee,” which the LHC had with the Higgs boson.


Scientists and technicians inspected and upgraded systems at the Large Hadron Collider during the Long Shutdown 2, which began in 2018.

Maximilien Brice/CERN

Then again, as the mathematical physicist Peter Woit mused on his blog, “Perhaps in our new world order where everything is controlled by trillionaire tech bros, the financing won’t be a problem.”

Deliberations about a Chinese supercollider have come to naught, I’m told. Instead, China has decided to pursue a “super-tau-charm facility”: a lower-energy particle scattering experiment that would cost mere hundreds of millions of dollars instead of tens of billions. The facility will produce a lot of tau particles and charm quarks, partly to study whether taus ever shape-shift into muons or electrons. This kind of switching isn’t predicted by the Standard Model, but it does happen in some theoretical extensions of it.

Okay, we might as well check. We’re desperate for new physics, and the price is good. But by definition it’s very difficult to know which shots in the dark are worth taking.

Adam Falkowski, who sounded the death knell for particle physics back in 2012, used to be known for the sharp commentary he supplied on his blog Résonaances. But the Paris-based particle physicist hasn’t posted anything since 2022. He said that’s partly because he’s been tied up with fatherhood and partly because there hasn’t been much to say.

When we caught up on a video call, Falkowski told me, “I am very skeptical about future colliders. For me it’s very difficult to get excited about it.” He sees momentum behind CERN’s FCC campaign, but personally he worries about the huge costs and timescales, and the fact that “there are absolutely no hints that something is there within the reach of the next collider.”

For his part, Falkowski has turned to the theoretical study of scattering amplitudes, a growing research area focused on the geometric patterns underlying particle interaction statistics, patterns that could point toward a truer perspective on the quantum world. The field seeks to reformulate the equations of particle physics in a different mathematical language in hopes that this language might extend to quantum gravity. “There is a very vibrant program in trying to understand the structure of the physical theories,” Falkowski said. “The hope is that with the help of machine learning, that there can be very fast progress in the coming years. I think that’s where the best things have happened.”

But amplitudeology, as this field is known, is abstract — it’s no atom-smashing experiment. Falkowski said he does think experimental particle physics is dying. He has watched talented postdocs switch to other research areas or take data science jobs. “I’m not sure they are getting the best of the best as they used to,” he said, “because the prospects of returns are so distant. If you want to change the world now, you will do AI; you will do something different from particle physics.”


The ALICE (A Large Ion Collider Experiment) detector at the Large Hadron Collider was designed to study quark-gluon plasma.

CERN, Julien Marius Ordan/Science Source

This brain drain appears to be real. I spoke to Jared Kaplan, co-founder of Anthropic, the company behind the chatbot Claude. He was a physicist the last time we spoke. As a grad student at Harvard in the 2000s, he worked with the renowned theorist Nima Arkani-Hamed to open up the new directions in amplitude research that are being actively pursued today. But Kaplan left the field in 2019. “I started working on AI because it seemed plausible to me that … AI was going to make progress faster than almost any field in science historically,” he said. AI would be “the most important thing to happen while we’re alive, maybe one of the most important things to happen in the history of science. And so it seemed obvious that I should work on it.”

As for the future of particle physics, AI makes worrying about it now rather pointless, in Kaplan’s view. “I think that it’s kind of irrelevant what we plan on a 10-year timescale, because if we’re building a collider in 10 years, AI will be building the collider; humans won’t be building it. I would give like a 50% chance that in two or three years, theoretical physicists will mostly be replaced with AI. Brilliant people like Nima Arkani-Hamed or Ed Witten, AI will be generating papers that are as good as their papers pretty autonomously. … So planning beyond this couple-year timescale isn’t really something I think about very much.”

Cari Cesarotti, a postdoctoral fellow in the theory group at CERN, is skeptical about that future. She notices chatbots’ mistakes, and how they’ve become too much of a crutch for physics students. “AI is making people worse at physics,” she said. “What we need is humans to read textbooks and sit down and think of new solutions to the hierarchy problem.”

Cesarotti was a high school junior when the Higgs boson was discovered. She grew up near Fermilab, the U.S. national lab in Illinois that houses the Tevatron, which was the world’s highest-energy particle collider before the LHC. (The top quark was discovered there in 1995.) This proximity taught her that a particle physicist was a thing you could be. Later, it turned out to be her thing. “What are the fundamental building blocks of the universe — those were the questions that I was most interested in knowing the answer to,” she told me. “But what people said was, ‘Particle physics is dead. Don’t do this.’”

It may have been a fair warning; Cesarotti has yet to land a permanent job as a rising particle physicist. The subfield has continued to shrink, she and others said, as faculty hiring committees and grad students go in other directions. “Definitely all this rhetoric that there was nothing to be found and you should give up on it — people listened,” she said. “And of course that means there are fewer people. It becomes a self-fulfilling prophecy. If you’re pushing all these talented people out of trying to solve these problems into a field that it’s easier to make an impact on, then you’re setting yourself up for failure.”

Cesarotti echoed a sentiment I’d heard from others, which sounds correct to me as well: “Particle physics isn’t dead; it’s just hard.” It’s hard to know what to think about or look for. But the most devoted particle physicists are thinking and looking all the same.

“It was easy for 125 years,” Strassler said. “One thing led to the next. That lucky century has, for now, at least in the medium term, come to an end. That could change tomorrow, or next century, or who knows.”

A hint of a new lightweight particle could, in theory, show up at the LHC, or in some other experiment. Strassler is particularly excited about the study of radioactive thorium-229 decay, which could reveal variations in the fundamental constants. I’m slightly partial to experiments looking for “axions,” dark matter candidates that are so lightweight that they can act a little like light itself.

On the theory side, an obvious solution to the hierarchy problem could drop naturally out of the geometry behind scattering amplitudes. Or, if Kaplan is right, AI systems might someday suggest powerful new ideas for how the 25 particles of the Standard Model fit into a more comprehensive pattern — a possibility I didn’t foresee back when the crisis began.

Clearly, further progress toward the truth remains possible in particle physics. But there’s no discovery guarantee. I’ve had more than 13 years to think about it, and it remains a disturbing prospect: All the empirical clues we can glean about nature’s fundamental laws and building blocks might already be in hand. The universe may plan on keeping the rest of its secrets.

These New AI Models Are Trained on Physics, Not Words, and They’re Driving Discovery

Mike's Notes

A fantastic use of AI. My instinct is to incorporate fluid-like systems into a future Pipi. That will require a real data centre.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Simons Foundation
  • Home > Handbook > 

Last Updated

19/03/2026

These New AI Models Are Trained on Physics, Not Words, and They’re Driving Discovery

By: Elizabeth Fernandez
Flatiron Institute: 09/12/2025

Elizabeth Fernandez is a science writer specializing in science and society, science and philosophy, astronomy, physics, and geology.

While popular AI models such as ChatGPT are trained on language or photographs, new models created by researchers at the Flatiron Institute and other members of the Polymathic AI collaboration are trained using real scientific datasets. The models are already leveraging the knowledge they learn from one field to address seemingly completely different problems in another.

The Walrus AI model simulates fluid motion.

Walrus/Polymathic AI

While most AI models — including ChatGPT — are trained on text and images, a multidisciplinary team of scientists has something different in mind: AI trained on physics.

Recently, members of the Polymathic AI collaboration presented two new AI models trained using real scientific datasets to tackle problems in astronomy and fluidlike systems.

The models — called Walrus and AION-1 — are unique in that they can apply the knowledge they gain from one class of physical systems to seemingly completely different problems. For instance, Walrus can tackle systems ranging from exploding stars to Wi-Fi signals to the movement of bacteria.

That cross-disciplinary skill set is particularly exciting because it can accelerate scientific discovery and give researchers a leg up when faced with small samples or budgets, says Walrus lead developer Michael McCabe, a research scientist at Polymathic AI.

“Maybe you have new physics in your scenario that your field isn’t used to handling. Maybe you’re using experimental data, and you’re not quite sure what class it fits into. Maybe you’re just not a machine-learning researcher and just can’t burn the time working through all the possible models that might fit your scenario,” McCabe explains. “Our hope is that training on these broader classes makes something that is both easier to use and has a better chance of generalizing for those users, as the ‘new’ physics to them might be something another field has been handling for a while.”

Using cross-disciplinary models can also improve predictions when data is sparse or when studying rare events, says Liam Parker, a Ph.D. student at the University of California, Berkeley, and a lead researcher on AION-1.

The Polymathic AI team recently announced Walrus in a preprint on arXiv.org and presented AION-1 on Friday, December 5, at the NeurIPS conference in San Diego.

Walrus and AION-1 are ‘foundational models,’ meaning they’re trained on colossal sets of training data from different research areas or experiments. That’s unlike most AI models in science, which are trained with a particular subfield or problem in mind. Rather than learning the ins and outs of a particular situation or starting from a set of fundamental equations, foundational models instead learn the basis, or foundation, of the physical processes at work. Since these physical processes are universal, the knowledge that the AI learns can be applied to various fields or problems that share the same underlying physical principles. Foundational models have a host of benefits — from speeding up computations to performing well in low-data regimes to finding physics shared across different fields.

AION-1 is a foundational model for astronomy. It is trained on data from astronomical surveys that are already massive in their own right: the Legacy Survey, the Hyper Suprime-Cam (HSC), the Sloan Digital Sky Survey (SDSS), the Dark Energy Spectroscopic Instrument (DESI) and Gaia. All in all, that’s more than 200 million observations of stars, quasars and galaxies totaling around 100 terabytes of data. AION-1 uses images, spectra and a variety of other measurements to learn as much as it can about astronomical objects. Then, when a scientist obtains a low-resolution image of a galaxy, for example, AION-1 can extract more information about it, learned from the physics of millions of other galaxies.

Walrus’ domain is fluids and fluidlike systems. Walrus utilizes the Well — a massive dataset compiled by the Polymathic AI team. The Well’s data encompasses 19 different scenarios and 63 different fields in fluid dynamics. All in all, it contains 15 terabytes of data describing parameters such as density, velocity and pressure in physical systems as wide-ranging as merging neutron stars, acoustic waves and shifting layers in Earth’s atmosphere.

Such foundational models can be powerful. AION-1 and Walrus can utilize physics seen in a different case and apply it to learn about something new. It is similar to our senses. “Multiple senses together — rather than one at a time — gives you a fuller understanding of an experience,” the AION-1 team explained in a blog post about the project. “Over time, your brain learns associations between how things look, taste and smell, so if one sense is unavailable, you can often infer the missing information from the others.”

Then, when a scientist is performing a new experiment or observation, they have a starting point — a map of how physics behaves in other similar situations. “It’s like seeing many, many humans,” says Shirley Ho, Polymathic AI’s principal investigator and an astrophysicist and machine learning expert. Ho is a senior research scientist at the Flatiron Institute and a professor at New York University. When “you meet a new friend, because you’ve met so many people before now, you are able to map in your head … what this human is going to be like compared to all your friends before,” she says.

Foundational models make scientists’ lives easier by streamlining data processing. Scientists will no longer have to create a new framework from scratch for every project or task; instead, they can start with an already trained AI to use as a foundation. “I think our vision for some of this foundation model is that it enables anyone to start from a really powerful embedding of the data that they’re interested in … and still achieve state-of-the-art accuracy without having to build this whole pipeline from scratch,” says AION-1 lead researcher Parker.

Their goal is to make tools that scientists can use in their day-to-day research. “We want to bring all this AI intelligence” to the scientists who need it, Ho says.


Other Highlights From the NeurIPS 2025 Conference

CosmoBench: CosmoBench is a multiview, multiscale, multitask cosmology benchmark for geometric deep learning. Curated from state-of-the-art cosmological simulations, CosmoBench is the largest benchmark of its kind, with over 34,000 point clouds and 25,000 directed trees. CosmoBench features challenging evaluation tasks from cosmology and diverse baselines, including cosmological methods, simple linear models and graph neural networks. This presentation will show how CosmoBench is pushing the frontiers of cosmology and geometric deep learning.

Lost in Latent Space: Physicists model and predict the behavior of physical systems using their understanding of the laws of physics. However, these calculations require significant computing power. Flatiron Institute scientists and other members of the Polymathic AI collaboration studied whether a less taxing form of computing can still yield accurate results. Known as ‘latent diffusion modeling,’ this computational model utilizes artificial intelligence to generate high-quality images at a lower computational cost while accurately capturing physical behavior.

Neurons as Detectors of Coherent Sets in Sensory Dynamics: Our perception of touch, taste, sight and pain is mediated by neurons that carry signals from peripheral receptors to the brain. This work shows that these neurons can be understood as detecting ‘coherent sets’ within the sensory stream — groups of stimulus trajectories that evolve together over time and therefore share a common past or a common future. By distinguishing these coherent sets, some neurons predominantly encode what has just occurred, while others reliably signal what is likely to happen next. Traditional classifications of sensory neurons can thus be reinterpreted as reflecting a division between past-focused and future-predictive processing. Understanding how the nervous system separates and transforms sensory input in this way may offer new routes for treating mental illness and may also guide the development of biologically inspired artificial intelligence.

Predicting Partially Observable Dynamical Systems: Scientists can predict the motion of a falling object or the evolution of fluids using deterministic models that compute a single future outcome from past observations. But this approach breaks down for physical systems where much of the state is hidden. A prominent example is the sun: We can observe the activity on its surface, but the processes deep inside remain largely invisible. Without access to those internal conditions, there isn’t enough information to forecast a single ‘correct’ future. Researchers at the Flatiron Institute, together with collaborators in the Polymathic AI project, have developed a probabilistic approach that can infer these hidden solar processes. By incorporating information from the distant past into a diffusion-based generative model, their method produces an ensemble of plausible futures, offering a clearer understanding of how past sunspot activity shapes its future evolution.

NVIDIA GTC Keynote 2026

Mike's Notes

I am attending NVIDIA GTC 2026 remotely. It is being held at the San Jose McEnery Convention Centre, in San Jose, California, USA, on March 16–19, 2026.

This is the Keynote by NVIDIA CEO Jensen Huang. The changes in technology were fascinating.

I joined the NVIDIA Developer Program to take a deep dive into algorithmic techniques by learning from some of the best.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

18/03/2026

NVIDIA GTC Keynote 2026

By: Jensen Huang
YouTube: 17/03/2026

Jen-Hsun Huang, commonly anglicized as Jensen Huang, is a Taiwanese and American business executive, electrical engineer, and philanthropist who is the founder, president, and chief executive officer of Nvidia, the world's largest company by market capitalization. - Wikipedia

Watch NVIDIA Founder and CEO Jensen Huang’s GTC keynote as he unveils the latest breakthroughs in AI and accelerated computing. See how agentic AI, AI factories, and physical AI are powering the next generation of intelligent systems.

Reservoir computing bootcamp—From Python/NumPy tutorial for the complete beginners to cutting-edge research topics of reservoir computing

Mike's Notes

Looks very useful, as a way into using Reservoir Computing. The abstract is copied below. Follow the PubMed link to find the full paper.

I have long planned to add Reservoir Computing to Pipi.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Semantic Scholar
  • Home > Handbook > 

Last Updated

17/03/2026

Reservoir computing bootcamp—From Python/NumPy tutorial for the complete beginners to cutting-edge research topics of reservoir computing

By: Katsuma Inoue, T. Kubota, Quoc Hoan Tran, Nozomi Akashi, Ryo Terajima, Tempei Kabayama, JingChuan Guan, Kohei Nakajima
Chaos: 01/02/2026


Abstract

Reservoir computing (RC) is a machine learning framework that uses recurrent neural networks and is characterized by directly capitalizing on intrinsic dynamics instead of adjusting internal parameters. In particular, in the form of physical reservoir computing (PRC), recent studies have advanced by treating various physical systems as reservoirs and applying them to time-series data processing and quantifying information-processing properties. In this way, RC and PRC potentially have interdisciplinary impact, and as more researchers from diverse academic disciplines learn and utilize RC and PRC, there is potential for more creative research to emerge. In this paper, we introduce a Jupyter Notebook-based educational material called RC bootcamp for learning RC, being made publicly available under an open-source license (https://rc-bootcamp.github.io/). The RC bootcamp was originally developed and continuously updated within our research group to efficiently train our collaborators and new students, ultimately enabling them to conduct experiments by themselves. Considering the diverse backgrounds of learners, it starts with the basics of computer science and numerical computation using Python/NumPy, as well as fundamental implementations in RC, such as echo state networks and linear regression. Furthermore, it covers important analytical indicators based on dynamical systems theory, such as Lyapunov exponents, echo state property index, and information-processing capacity, as well as cutting-edge approaches utilizing chaos, including first-order, reduced and controlled error (FORCE) learning and innate training, and attractor design via bifurcation embedding. We expect that the RC bootcamp will become a useful educational material for learning RC and PRC and further invigorate research activities in the RC and PRC fields.
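
To give a flavour of what the bootcamp's opening chapters cover, here is a minimal echo state network sketch in Python/NumPy. It is not taken from the bootcamp notebooks: the reservoir size, spectral radius, and the toy one-step-ahead prediction task are illustrative choices. Only the linear readout is trained, by ridge regression, while the random reservoir weights stay fixed.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_res, washout = 1, 200, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    # Collect reservoir states for an input sequence u of shape (T, n_in).
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)     # simple non-leaky tanh update
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a sine wave one step ahead from its current value.
t = np.linspace(0, 60, 3000)
u = np.sin(t)[:, None]
target = np.sin(t + (t[1] - t[0]))

X = run_reservoir(u)[washout:]
y = target[washout:]

# Ridge-regression readout: the only trained part of the model.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("train NMSE:", np.mean((pred - y) ** 2) / np.var(y))

Swapping the sine wave for a chaotic benchmark series, or replacing the tanh reservoir with measured states from a physical system, is the usual route from this baseline toward the PRC and analysis topics the bootcamp builds up to.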

Design for the People: The US Web Design System and the Public Sans Typeface

Mike's Notes

The article reproduced below is from a fascinating website and is about another useful Design System.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

16/03/2026

Design for the People: The US Web Design System and the Public Sans Typeface

By: Jon Keegan
On a Sandy Beach: 02/07/2024

Jon Keegan is an investigative data journalist who covers technology. His work has appeared in The Wall Street Journal, The Markup and MIT Technology Review. Jon’s work has won several journalism awards, including the Loeb Award, the Society of Professional Journalists’ Excellence in Journalism Award and the Society of News Design’s Best of Digital Gold Award.

The United States has an official web design system and a custom typeface that belongs to the people. This thoughtful public design system aims to make government websites not only look good, but to make them accessible and functional for all.

Before the internet, Americans may have interacted with the federal government by stepping into grand buildings adorned with impressive stone columns and gleaming marble floors. Today, the neoclassical architecture of those physical spaces has been replaced by the digital architecture of website design – HTML code, tables, forms, and buttons. 

While people visiting a government website to apply for student loans, research veterans’ benefits, or enroll in Medicare may not notice these digital elements, they play a crucial role. If a website is buggy or doesn’t work on your phone, taxpayers cannot access the services they have paid for. This can feel like walking up to a boarded-up government building with broken windows, creating a negative impression of the government itself.  

In the US, there are about 26,000 federal websites. Early on, each site had its own designs, fonts, and login systems, creating frustration for the public and wasting government resources.

A survey of the many different styles of buttons from government websites as of 2015. Source: 18F / GSA

The troubled launch of Healthcare.gov in 2013 highlighted the need for a better way to build government digital services. In 2014, President Obama created two new teams to help improve government tech.

Within the General Services Administration (GSA), a new team called 18F (named for their Washington, DC office at 1800 F Street) was created to “collaborate with other agencies to fix technical problems, build products, and improve public service through technology.” The team was built to move at the speed of tech start-ups rather than lumbering bureaucratic agencies. 

The U.S. Digital Service (USDS) was tasked “to deliver better government services to the American people through technology and design.” In 2015, the two teams collaborated to build the US Web Design System (USWDS)—a style guide and collection of user interface components and design patterns to ensure a consistent user experience across government websites. “Inconsistency is felt, even if not always precisely articulated in usability research findings,” said Dan Williams, the USWDS program lead, in an email. 

Some of the sample design elements for the USWDS. Source: https://designsystem.digital.gov/

Today, the system defines 47 user interface components such as buttons, alerts, search boxes and forms each with their own design examples, sample code and guidelines such as “Be polite” and “Don’t overdo it.” The USWDS is now in its third iteration, and is used in 160 government websites. “As of September 2023, 94 agencies use USWDS code, and it powers about 1.1 billion pageviews on federal websites,” said Williams.

USWDS design principles include focusing on real users’ needs, earning trust and embracing accessibility. The system requires websites to be optimized for all users, including people with disabilities such as those using screen readers or those with color blindness. Williams said accessibility is important to the team’s efforts, noting that they “prioritize any accessibility-related bug or improvement we find (or is contributed by our community).”






Some federal websites that use the USWDS. Clockwise from top left: Va.gov, Medicaid.gov, Worker.gov, Supremecourt.gov

To ensure clear and consistent typography, the free and open-source typeface Public Sans was created for the US government. “It started as a design experiment,” said Williams, who designed the typeface, which was released in 2019. “We were interested in trying to establish an open source solution space for a typeface, just like we had for the other design elements in the design system,” said Williams. Based on the Libre Franklin typeface, Public Sans is described as “a strong, neutral, principles-driven, open-source typeface for text or display.” 


Both Public Sans and the USWDS embrace transparency and collaboration with government agencies and the public, inviting contributions to their development via the projects’ GitHub pages. 

To ensure that the hard-learned lessons of improving public technology aren’t forgotten, the projects embrace continuous improvement. One of Public Sans’ design principles offers key guidance in this area: “Strive to be better, not necessarily perfect.”

How your brain can be trained like a muscle

Mike's Notes

Some good tips.

  • Stop working when brain fade sets in.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

15/03/2026

How your brain can be trained like a muscle

By: Joanna Fong-Isariyawongse
RNZ: 1/02/2026

Joanna Fong-Isariyawongse is associate professor of Neurology, University of Pittsburgh.

When the brain is asked to stretch beyond routine, that slight mental discomfort is often the sign that the brain is being trained, a neurologist says.

If you have ever lifted a weight, you know the routine: challenge the muscle, give it rest, feed it and repeat. Over time, it grows stronger.

Of course, muscles only grow when the challenge increases over time. Continually lifting the same weight the same way stops working.

It might come as a surprise to learn that the brain responds to training in much the same way as our muscles, even though most of us never think about it that way. Clear thinking, focus, creativity and good judgment are built through challenge, when the brain is asked to stretch beyond routine rather than run on autopilot. That slight mental discomfort is often the sign that the brain is actually being trained, a lot like that good workout burn in your muscles.

Tasks that stretch your brain just beyond its comfort zone, such as knitting and crocheting, can improve cognitive abilities over your lifespan. Unsplash

Think about walking the same loop through a local park every day. At first, your senses are alert. You notice the hills, the trees, the changing light. But after a few loops, your brain checks out. You start planning dinner, replaying emails or running through your to-do list. The walk still feels good, but your brain is no longer being challenged.

Routine feels comfortable, but comfort and familiarity alone do not build new brain connections.

As a neurologist who studies brain activity, I use electroencephalograms, or EEGs, to record the brain’s electrical patterns.

Research in humans shows that these rhythms are remarkably dynamic. When someone learns a new skill, EEG rhythms often become more organized and coordinated. This reflects the brain’s attempt to strengthen pathways needed for that skill.

Your brain trains in zones too

For decades, scientists believed that the brain’s ability to grow and reorganize, called neuroplasticity, was largely limited to childhood. Once the brain matured, its wiring was thought to be largely fixed.

But that idea has been overturned. Decades of research show that adult brains can form new connections and reorganize existing networks, under the right conditions, throughout life.

Some of the most influential work in this field comes from enriched environment studies in animals. Rats housed in stimulating environments filled with toys, running wheels and social interaction developed larger, more complex brains than rats kept in standard cages. Their brains adapted because they were regularly exposed to novelty and challenge.

Human studies find similar results. Adults who take on genuinely new challenges, such as learning a language, dancing or practicing a musical instrument, show measurable increases in brain volume and connectivity on MRI scans.

The takeaway is simple: Repetition keeps the brain running, but novelty pushes the brain to adapt, forcing it to pay attention, learn and problem-solve in new ways. Neuroplasticity thrives when the brain is nudged just beyond its comfort zone.

The reality of neural fatigue

Just like muscles, the brain has limits. It does not get stronger from endless strain. Real growth comes from the right balance of challenge and recovery.

When the brain is pushed for too long without a break – whether that means long work hours, staying locked onto the same task or making nonstop decisions under pressure – performance starts to slip. Focus fades. Mistakes increase. To keep you going, the brain shifts how different regions work together, asking some areas to carry more of the load. But that extra effort can still make the whole network run less smoothly.

Neural fatigue is more than feeling tired. Brain imaging studies show that during prolonged mental work, the networks responsible for attention and decision-making begin to slow down, while regions that promote rest and reward-seeking take over. This shift helps explain why mental exhaustion often comes with stronger cravings for quick rewards, like sugary snacks, comfort foods or mindless scrolling. The result is familiar: slower thinking, more mistakes, irritability and mental fog.

This is where the muscle analogy becomes especially useful. You wouldn’t do squats for six hours straight, because your leg muscles would eventually give out. As they work, they build up byproducts that make each contraction a little less effective until you finally have to stop. Your brain behaves in a similar way.

Likewise, in the brain, when the same cognitive circuits are overused, chemical signals build up, communication slows and learning stalls.

But rest allows those strained circuits to reset and function more smoothly over time. And taking breaks from a taxing activity does not interrupt learning. In fact, breaks are critical for efficient learning.

The crucial importance of rest

Among all forms of rest, sleep is the most powerful.

Sleep is the brain’s night shift. While you rest, the brain takes out the trash through a special cleanup system called the glymphatic system that clears away waste and harmful proteins. Sleep also restores glycogen, a critical fuel source for brain cells.

And importantly, sleep is when essential repair work happens. Growth hormone surges during deep sleep, supporting tissue repair. Immune cells regroup and strengthen their activity.

During REM sleep, the stage of sleep linked to dreaming, the brain replays patterns from the day to consolidate memories. This process is critical not only for cognitive skills like learning an instrument but also for physical skills like mastering a move in sports.

On the other hand, chronic sleep deprivation impairs attention, disrupts decision-making and alters the hormones that regulate appetite and metabolism. This is why fatigue drives sugar cravings and late-night snacking.

Sleep is not an optional wellness practice. It is a biological requirement for brain performance.

Overdoing any task, whether it be weight training or sitting at the computer for too long, can overtax the muscles as well as the brain. Unsplash

Exercise feeds the brain too

Exercise strengthens the brain as well as the body.

Physical activity increases levels of brain-derived neurotrophic factor, or BDNF, a protein that acts like fertilizer for neurons. It promotes the growth of new connections, increases blood flow, reduces inflammation and helps the brain remain adaptable across one’s lifespan.

This is why exercise is one of the strongest lifestyle tools for protecting cognitive health.

Train, recover, repeat

The most important lesson from this science is simple. Your brain is not passively wearing down with age. It is constantly remodeling itself in response to how you use it. Every new challenge and skill you try, every real break, every good night of sleep sends a signal that growth is still expected.

You do not need expensive brain training programs or radical lifestyle changes. Small, consistent habits matter more. Try something unfamiliar. Vary your routines. Take breaks before exhaustion sets in. Move your body. Treat sleep as nonnegotiable.

So the next time you lace up your shoes for a familiar walk, consider taking a different path. The scenery may change only slightly, but your brain will notice. That small detour is often all it takes to turn routine into training.

The brain stays adaptable throughout life. Cognitive resilience is not fixed at birth or locked in early adulthood. It is something you can shape.

If you want a sharper, more creative, more resilient brain, you do not need to wait for a breakthrough drug or a perfect moment. You can start now, with choices that tell your brain that growth is still the plan.

Cascade of permissions on Pipi

Mike's Notes

Making visible how each account type has access to Pipi.

Resources

  • Resource

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

17/03/2026

Cascade of permissions on Pipi

By: Mike Peters
On a Sandy Beach: 14/03/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Think of Pipi as an iceberg. Most of it is hidden underwater. 

This is the matrix of Pipi account permissions. Each account has a different role, creating opportunities for the account above and dependency on the one below.

Pipi currently has about 3,000,000 lines of code. Most of it is self-generated, so the code volume is a bit of a guess at the moment, based on sampling. (There is semantic versioning as a record of change, not version control. 😊 )

Code % | Role                 | Account    | License
0.001  | User                 | Personal   | Open-source
1      | System               | Enterprise |
1      | System Configuration | Developer  |
1      | Developer Rules      | Researcher |
1      | Pipi Admin           | Agent      | Closed-core
96     | Self-organising      | Pipi       |
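
As a rough illustration of how such a cascade could be expressed, here is a short Python sketch. It is hypothetical, not Pipi's actual code: the layer names mirror the table above, but the action names and rule sets are invented for the example. The idea it shows is that each layer can only expose what every layer beneath it also grants.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Layer:
    name: str
    grants: set                      # actions this layer allows for the layer above it
    below: Optional["Layer"] = None  # the layer this one depends on

    def permitted(self, action: str) -> bool:
        # An action is allowed only if this layer and every layer beneath it grant it.
        if action not in self.grants:
            return False
        return self.below is None or self.below.permitted(action)

# Build the cascade bottom-up, mirroring the table above (action names are invented).
pipi       = Layer("Pipi",       {"work", "configure", "build", "constrain", "self-organise"})
agent      = Layer("Agent",      {"work", "configure", "build", "constrain"}, below=pipi)
researcher = Layer("Researcher", {"work", "configure", "build", "constrain"}, below=agent)
developer  = Layer("Developer",  {"work", "configure", "build"}, below=researcher)
enterprise = Layer("Enterprise", {"work", "configure"}, below=developer)
personal   = Layer("Personal",   {"work"}, below=enterprise)

print(personal.permitted("work"))      # True: every layer below also grants it
print(enterprise.permitted("build"))   # False: the Enterprise layer itself does not grant it

Read this way, the Personal account at the top can only do work that every deeper layer, down to the self-organising core, has already made possible.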

Personal Account

  • Everyone gets one and uses it to do work
  • User interface personalisation
    • language
    • accessibility
    • preferences
    • privacy

Enterprise Account

  • Run the system
  • Make the rules for users
    • Security
    • Roles and permissions.

Developer Account

  • Make the rules for the enterprise system
  • Modules
  • API
  • Integrations
  • IaC
  • Provide support

Researcher Account

  • Make the rules the developers use
  • Create constraints
    • Import the ontologies
    • Add the laws of physics

Agent Account

  • Interact with Pipi
  • Creates the rules for the researchers.

Pipi

  • Autonomous
  • Self-organising
  • World-model
  • Path taken
  • Evolving
  • Replicating
  • Digital twin
  • Swarm
  • Learns from how Pipi is being used by users