Dynamic Web Apps without JavaScript - HTMX Showcase at DjangoCon and Devoxx

By: Bruno Couriol

From InfoQ.com, October 21, 2022.

DjangoCon and Devoxx Belgium recently featured examples of interactive web applications developed without JavaScript. The showcased HTML-first framework, htmx, targets applications that mainly provide a friendly interface to CRUD operations over remote resources. In one case, the team no longer needed a dedicated JavaScript developer.

At DjangoCon 2022, David Guillot reported reimplementing his company’s SaaS product with the HTML-first framework htmx in two months, with the following results:

Guillot gladly reported that 15,000 lines of JavaScript code disappeared from the codebase, that performance improved (as measured by time-to-interactive and memory usage), and that the team’s only JavaScript developer left while back-end developers became full-stack developers.

The htmx team warns, however, that such spectacular results were achieved because this particular SaaS application was a good fit for htmx’s HTML-first approach:

These are eye-popping numbers, and they reflect the fact that the Contexte application is extremely amenable to hypermedia: it is a content-focused application that shows lots of text and images. We would not expect every web application to see these sorts of numbers.

However, we would expect many applications to see dramatic improvements by adopting the hypermedia/htmx approach, at least for part of their system.

At Devoxx Belgium 2022, Wim Deblauwe showcased the kind of interactivity that htmx can implement without any JavaScript: a search-as-you-type input field, updates of the user interface as a remote resource is affected by CRUD operations, regular refreshes of the user interface with server-sent data, and more.

The htmx team considers an application a good fit for the framework if the UI is mostly text and images, the UI mostly interfaces with CRUD operations, and HTML updates mostly take place within well-defined blocks. Conversely, applications with many dynamic interdependencies, that require offline functionality, or that update state extremely frequently would not be a good fit for htmx’s hypermedia approach.

In an htmx application, the server returns pages or fragments of pages.

<button hx-post="/clicked"
        hx-target="#parent-div"
        hx-swap="outerHTML">
    Click Me!
</button>
The previous HTML excerpt encodes that when a user clicks the button, htmx issues an HTTP POST request to the /clicked endpoint and uses the content of the response to replace the element with the id parent-div in the DOM. With htmx, any element (not just anchors and forms) and any event (not just clicks) can trigger an HTTP request, and the HTML response from the server updates only the relevant part of the UI. For a full overview of htmx’s capabilities, developers may refer to the documentation.
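The server’s side of this interaction is just an endpoint that returns an HTML fragment. A minimal, framework-free sketch (the handler name and counter are hypothetical; in Django or Flask this would be an ordinary view returning the same string):

```python
# Hypothetical server-side handler for the button example above.
# In a real application this would be a Django or Flask view; the
# essential point is that the response is HTML, not JSON.

click_count = 0  # illustrative in-memory state

def handle_clicked() -> str:
    """Handle POST /clicked by returning an HTML fragment.

    htmx swaps the returned markup in for the element with id
    "parent-div"; no client-side templating is involved.
    """
    global click_count
    click_count += 1
    return f'<div id="parent-div">Clicked {click_count} time(s)</div>'
```

Because the server decides what the updated UI looks like, the client stays a thin hypermedia renderer.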

Interestingly, htmx is often showcased by back-end developers who belong to non-JavaScript ecosystems (e.g., Python/Django/Flask, PHP/Laravel, Ruby/Ruby on Rails). As Guillot mentioned, with htmx, back-end developers extend their scope to the entire stack without having to learn JavaScript, npm, Webpack, React, CSS-in-JS, and more. Matt Layman, in his talk “You don’t need JavaScript,” summarizes:

"It’s just this constant churn in the [JavaScript] ecosystem and adding a lot of complexity onto a flow when you’re trying to deliver a web application. You want to just get an experience out there that actually works for people. But then you end up fighting with [the toolchain].

At a certain amount of scale, JavaScript can be a fantastic thing to build into your applications. But for an average person or a small team, it’s a ton of extra complexity so my recommendation is: just don’t. We have other options."

htmx is an open-source project under the BSD 2-Clause license.

htmx claims to provide AJAX, CSS Transitions, WebSockets, and Server-Sent Events directly in HTML, using attributes, so developers can build user interfaces with the simplicity and power of hypertext.



Let’s look at the first feature of htmx: the ability of any web page element to issue HTTP requests. This is the core functionality provided by htmx, and it consists of five attributes that can be used to issue the five different developer-facing types of HTTP requests:


  • hx-get - Issues an HTTP GET request.
  • hx-post - Issues an HTTP POST request.
  • hx-put - Issues an HTTP PUT request.
  • hx-patch - Issues an HTTP PATCH request.
  • hx-delete - Issues an HTTP DELETE request.


Relative Positional Expressions in htmx:
  • next - Scan forward in the DOM for the next matching element, e.g., next .error
  • previous - Scan backward in the DOM for the closest previous matching element, e.g., previous .alert
  • closest - Scan the parents of this element for a matching element, e.g., closest table
  • find - Scan the children of this element for a matching element, e.g., find span
  • this - The current element is the target (default)


The hx-swap attribute supports the following values:

  • innerHTML - The default, replace the inner html of the target element.
  • outerHTML - Replace the entire target element with the response.
  • beforebegin - Insert the response before the target element.
  • afterbegin - Insert the response before the first child of the target element.
  • beforeend - Insert the response after the last child of the target element.
  • afterend - Insert the response after the target element.
  • delete - Deletes the target element regardless of the response.
  • none - No swap will be performed.

  • settle - Like swap, this allows you to apply a specific delay between when the content has been swapped into the DOM and when its attributes are “settled”, that is, updated from their old values (if any) to their new values. This can give you fine-grained control over CSS transitions.

  • show - Allows you to specify an element that should be shown — that is, scrolled into the viewport of the browser if necessary — when a request is completed.
  • scroll - Allows you to specify a scrollable element (that is, an element with scrollbars), that should be scrolled to the top or bottom when a request is completed.
  • focus-scroll - Allows you to specify that htmx should scroll to the focused element when a request completes. The default for this modifier is “false.”
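The positional swap values above mirror the browser’s native insertAdjacentHTML positions. A rough sketch of the swap semantics, simulated on plain strings rather than a real DOM (purely illustrative, not htmx’s implementation):

```python
def swap(target: tuple[str, str, str], fragment: str, style: str) -> str:
    """Simulate hx-swap styles on a target written as (open, inner, close).

    This only illustrates where the response fragment lands relative to
    the target element; htmx itself operates on the real DOM.
    """
    open_tag, inner, close_tag = target
    if style == "innerHTML":      # replace the target's children (default)
        return open_tag + fragment + close_tag
    if style == "outerHTML":      # replace the target element itself
        return fragment
    if style == "beforebegin":    # insert before the target
        return fragment + open_tag + inner + close_tag
    if style == "afterbegin":     # insert before the first child
        return open_tag + fragment + inner + close_tag
    if style == "beforeend":      # insert after the last child
        return open_tag + inner + fragment + close_tag
    if style == "afterend":       # insert after the target
        return open_tag + inner + close_tag + fragment
    if style == "delete":         # remove the target, ignore the response
        return ""
    if style == "none":           # leave the DOM untouched
        return open_tag + inner + close_tag
    raise ValueError(f"unknown swap style: {style}")
```

For example, with a target `<div id="t">old</div>` and fragment `x`, afterbegin yields `<div id="t">xold</div>` while afterend yields `<div id="t">old</div>x`.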




  • hx-push-url - “Pushes” the request URL (or some other value) into the navigation bar.
  • HX-Redirect (response header) - Causes a client-side redirection to a new location.
  • hx-preserve - Preserves a bit of the DOM between requests; the original content will be kept, regardless of what is returned.
  • hx-sync - Synchronizes requests between two or more elements.
  • hx-disable - Disables htmx behavior on this element and any children. We will come back to this when we discuss the topic of security.




  • hx-indicator - Lets the user know that a request (such as a search) is in progress.

HTTP Request

When placed on an element, each attribute tells the htmx library: “When a user clicks (or whatever) this element, issue an HTTP request of the specified type.”


  • HX-Boosted - This will be the string “true” if the request is made via an element using hx-boost.
  • HX-Current-URL - This will be the browser’s current URL.
  • HX-History-Restore-Request - This will be the string “true” if the request is for history restoration after a miss in the local history cache.
  • HX-Prompt - This will contain the user response to an hx-prompt.
  • HX-Request - This value is always “true” for htmx-based requests.
  • HX-Target - This value will be the id of the target element if it exists.
  • HX-Trigger-Name - If it exists, this value will be the name of the triggered element.
  • HX-Trigger - If it exists, this value will be the id of the triggered element.
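One practical use of these request headers is serving full pages and fragments from the same endpoint. A minimal, framework-agnostic sketch (the handler and the hard-coded markup are hypothetical):

```python
def respond(headers: dict[str, str]) -> str:
    """Return an HTML fragment for htmx requests, a full page otherwise.

    htmx sends HX-Request: true on every request it issues, so the
    server can distinguish a partial update from a direct browser visit.
    """
    fragment = "<ul><li>result</li></ul>"  # hypothetical search results
    if headers.get("HX-Request") == "true":
        return fragment  # htmx swaps just this into the existing page
    return f"<html><body>{fragment}</body></html>"  # full page load
```

This keeps one URL per resource while still supporting partial updates and ordinary bookmarkable navigation.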
  • once - The given event will only trigger a request once.
  • delay - Allows you to specify a delay to wait before a request is issued. If the event occurs again, the first event is discarded and the timer resets. This allows you to “debounce” requests.

  • changed - Allows you to specify that a request should only be issued when the value property of the given element has changed.

  • throttle - Allows you to throttle events, only issuing them once every certain interval. This is different than delay in that the first event will trigger immediately, but any following events will not trigger until the throttle time period has elapsed.
  • from - A CSS selector that allows you to pick another element to listen for events on. We will see an example of this used later in the chapter.
  • target - A CSS selector that allows you to filter events to only those that occur directly on a given element. In the DOM, events “bubble” to their parent elements, so a click event on a button will also trigger a click event on a parent div, all the way up to the body element. Sometimes you want to specify an event directly on a given element, and this attribute allows you to do that.
  • consume - If this option is set to true, the triggering event will be cancelled and not propagate to parent elements.
  • queue - This option allows you to specify how events are queued in htmx. By default, when htmx receives a triggering event, it will issue a request and start an event queue. If the request is still in flight when another event is received, it will queue the event and, when the request finishes, trigger a new request. By default, it only keeps the last event it receives, but you can modify that behavior using this option: for example, you can set it to none and ignore all triggering events that occur during a request.
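The difference between delay (debounce) and throttle is easiest to see in code. A sketch of the two policies applied to a burst of event timestamps in milliseconds (an illustration of the timing semantics only, not htmx internals):

```python
def debounce(times: list[int], wait: int) -> list[int]:
    """'delay' semantics: each new event resets the timer, so a request
    fires `wait` ms after the *last* event of a burst."""
    fired = []
    for i, t in enumerate(times):
        nxt = times[i + 1] if i + 1 < len(times) else None
        if nxt is None or nxt - t >= wait:  # no newer event inside the window
            fired.append(t + wait)
    return fired

def throttle(times: list[int], wait: int) -> list[int]:
    """'throttle' semantics: the first event fires immediately; later
    events are dropped until `wait` ms have elapsed since the last request."""
    fired = []
    for t in times:
        if not fired or t - fired[-1] >= wait:
            fired.append(t)  # the request goes out at the event time itself
    return fired
```

For keystrokes at 0, 100, 200, and 1000 ms with wait=500, debounce issues requests at 700 and 1500 ms, while throttle issues them at 0 and 1000 ms.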

Lindy Effect

"The Lindy effect (also known as Lindy's Law) is a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age. Thus, the Lindy effect proposes the longer a period something has survived to exist or be used in the present, the longer its remaining life expectancy. Longevity implies a resistance to change, obsolescence, or competition, and greater odds of continued existence into the future.[2] Where the Lindy effect applies, mortality rate decreases with time. Mathematically, the Lindy effect corresponds to lifetimes following a Pareto probability distribution.

The concept is named after Lindy's delicatessen in New York City, where the concept was informally theorized by comedians. The Lindy effect has subsequently been theorized by mathematicians and statisticians. Nassim Nicholas Taleb has expressed the Lindy effect in terms of "distance from an absorbing barrier".

The Lindy effect applies to "non-perishable" items, those that do not have an "unavoidable expiration date". For example, human beings are perishable: the life expectancy at birth in developed countries is about 80 years. So the Lindy effect does not apply to individual human lifespan: all else being equal, it is less likely for a 10-year-old human to die within the next year than for a 100-year-old, while the Lindy effect would predict the opposite. ..." - Wikipedia
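The Pareto claim in the quote can be made precise. For a Pareto-distributed lifetime $T$ with survival function $S(t) = (t_0/t)^{\alpha}$ for $t \ge t_0$ and $\alpha > 1$, the expected remaining lifetime is:

```latex
E[T - t \mid T > t]
  = \frac{\int_t^{\infty} S(u)\,du}{S(t)}
  = \frac{t_0^{\alpha}\, t^{1-\alpha}/(\alpha - 1)}{t_0^{\alpha}\, t^{-\alpha}}
  = \frac{t}{\alpha - 1}
```

So the expected remaining life grows in proportion to the current age $t$, which is exactly the Lindy effect's proportionality claim.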


MS Access and rapid prototyping

I use MS Access for rapid prototyping. It is a perfect tool for exploring different ideas visually, and it is easy to export into a production-grade database server. It also makes a quick-to-build front end. I use DBeaver to migrate data.

  • PostgreSQL
  • MS SQL
  • DBeaver


The references below come from isladogs, run by Colin Riddington (Mendip Data Systems).

Access Forums

A list of specialist Access and other forums which have a significant Access section

Access World Forums
Currently the largest of the three main Access forums in terms of membership and daily traffic. Very active, with a wide range of sample databases and code samples.

Utter Access
Although much quieter than before, this is a long-standing Access forum with a broad membership and a beneficial archive of samples and examples.

Access Forums.net
A smaller forum but also with many active members

MS Access Tech Community
Microsoft's current forum for developers. Slightly more complex to use, but several highly knowledgeable members actively involved

Access section of a site covering a wide range of applications as well as Access

Tek Tips
Another site covering a wide range of applications as well as Access

Stack Overflow
Excellent resource for information for a wide range of programming languages. Regards itself as a knowledge base rather than a discussion forum

Access Feedback
This is the latest official Microsoft site for user feedback and suggestions. It replaces the now-defunct Access User Voice site. Of course, whether MS does anything about the input this time is another matter!

Access Help Sites

A list of some of the most helpful help sites for Access

Access Forever
Karl Donaubauer started a new Access blog site in December 2022, assisted by other Access experts George Hepworth, Peter Doering, and Philipp Stiefel. Initially, the site will focus on Access information, including bugs, tools, and events.

Git Hub VBA Tools
The Git Hub website is a vast resource of open-source code. Microsoft now owns it, but they have promised to allow the site to remain fully independent.

No Longer Set
Run by Mike Wolfe. A DAILY blog of advanced techniques, unique perspectives, and strong opinions from the Microsoft Access version control guy.

Isladogs on Access
I am listing my own site here because at least one other site listed here has a direct link back to this page!
It is run by Colin Riddington (Mendip Data Systems). It is another extensive site with a regularly updated range of Access articles, example apps, and code samples!

Dev Hut
Run by Daniel Pineault (CARDA Consultants). Extensive site with a regularly updated range of Access articles and code samples

Computer Learning Zone
Extensive website with articles and example apps created by Access MVP Richard Rost. Many of these are free, whilst others are part of various training courses that can be purchased.

The DBGuy
Excellent site with articles, sample code and example databases by UA moderator, The DBGuy (Leo).

Allen Browne's tips for Microsoft Access
Allen Browne's site has many tips for Access users of all abilities. Unfortunately, Allen is no longer active in this area, but the site remains essential.

Pearson Software Consulting
This website by Chip Pearson is aimed mainly at Excel users, but much of the information applies to Access. Unfortunately, Chip died in April 2018, but his excellent site remains online.

Stephen Lebans retired in 2009, but his site has many very clever utilities, mostly graphics-related and based on extensive use of APIs.

Access Web
Another superb site for Access developers was started by Dev Ashish and is now run by Arvin Meyer. It is no longer updated but is an excellent resource.

Access Junkie
Another excellent site run by Jeff Conrad has many Access articles and code samples.
UPDATE Dec 2019 - Unfortunately, the site is no longer maintained but remains available for now

Access Diva
Run by Gina Whipp, this has a wide range of example code, tips and sample databases.

Commercial site by Luke Chung with many excellent Access utilities. It also has many detailed articles and several free code samples.

Doug Steele
More examples and samples by Access MVP Doug Steele.

Tremendous online learning resource covering HTML, CSS, PHP, VBA, SQL, Java, and more.

Another outstanding site run by Philipp Stiefel. The site (in German and English) has many detailed Access articles and code samples. Highly recommended.

Windows API Reference: Functions
Very detailed resource of almost all API functions available in Access and other Office programs.

Wizhook reference
Online reference resource by Jason M with information about the little documented hidden Wizhook function

Access Ribbons
Examples and tutorial for ribbon creation by Gunter Avenius (in German and English). Ribbon creator utility available for purchase

Access MVPs
Official site for several Access MVPs. Many examples and code samples

BTAB Development
Free samples, tools, short and full-length tutorials and code. The site was created by Bob Larson & is now run by Juan Soto.

MS Access Gurus
Another very good site run by Crystal Long (MVP). It includes a wide range of articles for developers as well as an excellent Analyser tool.

Learn MS Access Tips & Tricks
A long-established site run by APR Pillai containing a huge number of tips and tricks for Access users of all skill levels

Data Models
This is an updated version of the old Database Answers website run by Barry Williams. It contains many Industry Data Models with various schemas for database table design. The site is now run by Databases.Biz and is being updated and extended.

ASCII Tables
Helpful reference when you need the ASCII number codes for keyboard input.

Access Object Model
Microsoft's official site contains documentation for all the objects, properties, methods, and events in the Access object model.

Tech On The Net
Detailed articles listing (almost) all Microsoft Access functions together with how/where each can be used.

Roger's Access Blog
Thoughts, opinions, samples, tips, and tricks about Microsoft Access by Roger Carlson

Wikibooks Visual Basic
Mainly for VB6, but much of the content can easily be adapted for VBA
It covers many different techniques and topics, including object-oriented programming, optimization of programs and coding guidelines.

Connection Strings
Outstanding reference about all types of connection strings. Essential info for all developers.

JKP Application Development
Site in both Dutch and English with valuable tips & code for Excel, Access and other Office applications, including a very useful non-ActiveX Treeview control

Tony Toews Access
One of the oldest Microsoft Access websites was started by Tony Toews (ex-MVP) in 1995.
The site contains Tony's Microsoft Access tips, hints, links, an email FAQ, and access-based accounting systems.

RegEx 101
This is a very useful site for testing & debugging regular expressions.

Extends Class Online Tools for Developers
This site, run by Cyril Bois, contains many handy online tools for developers. All are entirely free to use

This is a very useful site for online conversions. Many other conversion facilities are available, including CSV to JSON and CSV to HTML.

JSONLint is a validator and reformatter for JSON files. Use it to tidy and validate your JSON code.

JSON Editor
Another very useful site for managing JSON files

Microsoft Access Help Center
Official Microsoft site with help articles for Access - mainly for beginners but some helpful info for intermediate/advanced users as well

Microsoft Access Functions
This is the result of a search linking 9 separate Microsoft help articles. Many thanks to Pat Hartman for this suggestion.

Microsoft Access templates
Download site for all Access templates created by Microsoft to demonstrate Access features. NOTE: The quality of these templates varies significantly

My Engineering World
A very useful site by Christos Samaras, with many tips and hints for Office products and other topics (in English and Greek).

Wayback Machine
The Wayback Machine is an initiative of the non-profit Internet Archive & contains a vast digital library of Internet sites and digital artefacts collected over a period of time.
If a website is no longer available with a 404 Page not found error, you may well find it here.

Access Video Tutorials

A list of some helpful video sites for Access.

Access Europe
This is the official playlist with all the videos for the Europe chapter of AccessUserGroups.org. A new video is added each month.

Isladogs on Access
My own Isladogs YouTube video site has additional content added regularly.

The YouTube channel for all chapters of AccessUserGroups.org has well over 400 videos, and several new videos are added each month.

Computer Learning Zone
This is a huge resource of over 1500 videos created by Access MVP Richard Rost. Many of these are free, while others are part of various training courses that can be purchased.

How to in Access 2013/2016
A complete course of over 100 online Access tutorials by Steve Bishop for all levels of ability, from absolute beginner through intermediate to advanced.

Another comprehensive set of videos by Access MVP Crystal Long.

Better VBA
This is yet another wide-ranging set of videos by Philipp Stiefel.

Nifty Access
AWF moderator Tony Hine has created many free Access tutorial videos aimed mainly at beginners and intermediate users.

Microsoft Access Videos
The official Microsoft Access video training site is aimed mainly at beginners.

Datapig Form Basics
Datapig Queries
These are collections of several older Access videos by Mike Alexander. The videos demonstrate various Access techniques in forms & queries.
Previously unavailable for several years, these have recently been uploaded to YouTube by Crystal Long.

Quanta - The New Math of How Large-Scale Order Emerges

By Philip Ball. From Quanta Magazine, 10/06/2024.

The puzzle of emergence asks how regularities emerge on macro scales out of uncountable constituent parts. A new framework has researchers hopeful that a solution is near.

A few centuries ago, the swirling polychromatic chaos of Jupiter’s atmosphere spawned the immense vortex that we call the Great Red Spot.

From the frantic firing of billions of neurons in your brain comes your unique and coherent experience of reading these words.

As pedestrians each try to weave their path on a crowded sidewalk, they begin to follow one another, forming streams that no one ordained or consciously chose.

The world is full of such emergent phenomena: large-scale patterns and organization arising from innumerable interactions between component parts. And yet there is no agreed scientific theory to explain emergence. Loosely, the behavior of a complex system might be considered emergent if it can’t be predicted from the properties of the parts alone. But when will such large-scale structures and patterns arise, and what’s the criterion for when a phenomenon is emergent and when it isn’t? Confusion has reigned. “It’s just a muddle,” said Jim Crutchfield, a physicist at the University of California, Davis.

“Philosophers have long been arguing about emergence, and going round in circles,” said Anil Seth, a neuroscientist at the University of Sussex in England. The problem, according to Seth, is that we haven’t had the right tools — “not only the tools for analysis, but the tools for thinking. Having measures and theories of emergence would not only be something we can throw at data but would also be tools that can help us think about these systems in a richer way.”

Though the problem remains unsolved, over the past few years, a community of physicists, computer scientists and neuroscientists has been working toward a better understanding. These researchers have developed theoretical tools for identifying when emergence has occurred. And in February, Fernando Rosas, a complex systems scientist at Sussex, together with Seth and five co-authors, went further, with a framework for understanding how emergence arises.

A complex system exhibits emergence, according to the new framework, by organizing itself into a hierarchy of levels that each operate independently of the details of the lower levels. The researchers suggest we think about emergence as a kind of “software in the natural world.” Just as the software of your laptop runs without having to keep track of all the microscale information about the electrons in the computer circuitry, so emergent phenomena are governed by macroscale rules that seem self-contained, without heed to what the component parts are doing.

Using a mathematical formalism called computational mechanics, the researchers identified criteria for determining which systems have this kind of hierarchical structure. They tested these criteria on several model systems known to display emergent-type phenomena, including neural networks and Game-of-Life-style cellular automata. Indeed, the degrees of freedom, or independent variables, that capture the behavior of these systems at microscopic and macroscopic scales have precisely the relationship that the theory predicts.

No new matter or energy appears at the macroscopic level in emergent systems that isn’t there microscopically, of course. Rather, emergent phenomena, from Great Red Spots to conscious thoughts, demand a new language for describing the system. “What these authors have done is to try to formalize that,” said Chris Adami, a complex-systems researcher at Michigan State University. “I fully applaud this idea of making things mathematical.”

A Need for Closure

Rosas came at the topic of emergence from multiple directions. His father was a famous conductor in Chile, where Rosas first studied and played music. “I grew up in concert halls,” he said. Then he switched to philosophy, followed by a degree in pure mathematics, giving him “an overdose of abstractions” that he “cured” with a Ph.D. in electrical engineering.

A few years ago, Rosas started thinking about the vexed question of whether the brain is a computer. Consider what goes on in your laptop. The software generates predictable and repeatable outputs for a given set of inputs. But if you look at the actual physics of the system, the electrons won’t all follow identical trajectories each time. “It’s a mess,” said Rosas. “It’ll never be exactly the same.”

The software seems to be “closed,” in the sense that it doesn’t depend on the detailed physics of the microelectronic hardware. The brain behaves somewhat like this too: There’s a consistency to our behaviors even though the neural activity is never identical in any circumstance.

Rosas and colleagues figured that in fact there are three different types of closure involved in emergent systems. Would the output of your laptop be any more predictable if you invested lots of time and energy in collecting information about all the microstates — electron energies and so forth — in the system? Generally, no. This corresponds to the case of informational closure: As Rosas put it, “All the details below the macro are not helpful for predicting the macro.”

What if you want not just to predict but to control the system — does the lower-level information help there? Again, typically no: Interventions we make at the macro level, such as changing the software code by typing on the keyboard, are not made more reliable by trying to alter individual electron trajectories. If the lower-level information adds no further control of macro outcomes, the macro level is causally closed: It alone is causing its own future.


This situation is rather common. Consider, for instance, that we can use macroscopic variables like pressure and viscosity to talk about (and control) fluid flow, and knowing the positions and trajectories of individual molecules doesn’t add useful information for those purposes. And we can describe the market economy by considering companies as single entities, ignoring any details about the individuals that constitute them.

The existence of a useful coarse-grained description doesn’t, however, by itself define an emergent phenomenon, said Seth. “You want to say something else in terms of the relationship between levels.” Enter the third level of closure that Rosas and colleagues think is needed to complete the conceptual apparatus: computational closure. For this they have turned to computational mechanics, a discipline pioneered by Crutchfield.

Crutchfield introduced a conceptual device called the ε- (epsilon) machine. This device can exist in some finite set of states and can predict its own future state on the basis of its current one. It’s a bit like an elevator, said Rosas; an input to the machine, like pressing a button, will cause the machine to transition to a different state (floor) in a deterministic way that depends on its past history — namely, its current floor, whether it’s going up or down and which other buttons were pressed already. Of course an elevator has myriad component parts, but you don’t need to think about them. Likewise, an ε-machine is an optimal way to represent how unspecified interactions between component parts “compute” — or, one might say, cause — the machine’s future state.
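In code, the elevator analogy is just a deterministic finite-state machine: the next state is a fixed function of the current (causal) state and the input. A toy sketch (the three-floor transition table is invented for illustration):

```python
# Toy deterministic machine in the spirit of the elevator analogy.
# (floor, button) -> next floor; the mapping is invented for illustration.
TRANSITIONS = {
    (1, "up"): 2,
    (2, "up"): 3,
    (2, "down"): 1,
    (3, "down"): 2,
}

def run(floor: int, buttons: list[str]) -> int:
    """Feed a sequence of inputs through the machine.

    The future state depends only on the current state and the input,
    not on the full micro-history of how the machine got there.
    """
    for button in buttons:
        floor = TRANSITIONS.get((floor, button), floor)  # impossible moves are ignored
    return floor
```

The myriad component parts of a real elevator never enter the description; the state plus the input is enough to compute what happens next.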

Computational mechanics allows the web of interactions between a complex system’s components to be reduced to the simplest description, called its causal state. The state of the complex system at any moment, which includes information about its past states, produces a distribution of possible future states. Whenever two or more such present states have the same distribution of possible futures, they are said to be in the same causal state. Our brains will never twice have exactly the same firing pattern of neurons, but there are plenty of circumstances where nevertheless we’ll end up doing the same thing.

Rosas and colleagues considered a generic complex system as a set of ε-machines working at different scales. One of these might, say, represent all the molecular-scale ions, ion channels and so forth that produce currents in our neurons; another represents the firing patterns of the neurons themselves; another, the activity seen in compartments of the brain such as the hippocampus and frontal cortex. The system (here the brain) evolves at all those levels, and in general the relationship between these ε-machines is complicated. But for an emergent system that is computationally closed, the machines at each level can be constructed by coarse-graining the components on just the level below: They are, in the researchers’ terminology, “strongly lumpable.” We might, for example, imagine lumping all the dynamics of the ions and neurotransmitters moving in and out of a neuron into a representation of whether the neuron fires or not. In principle, one could imagine all kinds of different “lumpings” of this sort, but the system is only computationally closed if the ε-machines that represent them are coarse-grained versions of each other in this way. “There is a nestedness” to the structure, Rosas said.

A highly compressed description of the system then emerges at the macro level that captures those dynamics of the micro level that matter to the macroscale behavior — filtered, as it were, through the nested web of intermediate ε-machines. In that case, the behavior of the macro level can be predicted as fully as possible using only macroscale information — there is no need to refer to finer-scale information. It is, in other words, fully emergent. The key characteristic of this emergence, the researchers say, is this hierarchical structure of “strongly lumpable causal states.”

Leaky Emergence

The researchers tested their ideas by seeing what they reveal about a range of emergent behaviors in some model systems. One is a version of a random walk, where some agent wanders around haphazardly in a network that could represent, for example, the streets of a city. A city often exhibits a hierarchy of scales, with densely connected streets within neighborhoods and much more sparsely connected streets between neighborhoods. The researchers find that the outcome of a random walk through such a network is highly lumpable. That is, the probability of the wanderer starting in neighborhood A and ending up in neighborhood B — the macroscale behavior — remains the same regardless of which streets within A or B the walker randomly traverses.
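The lumpability condition in this example can be checked directly. A toy sketch with a hand-built "city" of four streets in two neighborhoods, where every street has the same total probability of hopping to the other neighborhood (all numbers are invented for illustration):

```python
# Four streets: two in neighborhood A, two in neighborhood B.
STATES = ["A1", "A2", "B1", "B2"]
P = {  # micro-level random-walk transition probabilities (illustrative)
    "A1": {"A1": 0.45, "A2": 0.45, "B1": 0.05, "B2": 0.05},
    "A2": {"A1": 0.45, "A2": 0.45, "B1": 0.05, "B2": 0.05},
    "B1": {"A1": 0.05, "A2": 0.05, "B1": 0.45, "B2": 0.45},
    "B2": {"A1": 0.05, "A2": 0.05, "B1": 0.45, "B2": 0.45},
}
CLUSTER = {"A1": "A", "A2": "A", "B1": "B", "B2": "B"}

def cross_prob(state: str, cluster: str) -> float:
    """Total probability of moving from `state` into `cluster`."""
    return sum(p for nxt, p in P[state].items() if CLUSTER[nxt] == cluster)

def is_lumpable() -> bool:
    """Strong lumpability: within each cluster, every state has the same
    probability of transitioning into each cluster, so the macro chain
    (A <-> B) is well defined regardless of the micro state."""
    for target in ("A", "B"):
        for c in ("A", "B"):
            members = [s for s in STATES if CLUSTER[s] == c]
            probs = {round(cross_prob(s, target), 12) for s in members}
            if len(probs) != 1:
                return False
    return True
```

When the check passes, the macro-level chain is well defined: from either street in A, the probability of ending up in B is 0.1, no matter which street the walker occupies.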

The researchers also considered artificial neural networks like those used in machine-learning and artificial-intelligence algorithms. Some of these networks organize themselves into states that can reliably identify macroscopic patterns in data regardless of microscopic differences between the states of individual neurons in the network. The decision of which pattern will be output by the network “works at a higher level,” said Rosas.


Would Rosas’ scheme help to understand the emergence of robust, large-scale structure in a case like Jupiter’s Great Red Spot? The huge vortex “might satisfy computational closure,” Rosas said, “but we’d need to do a proper analysis before being able to claim anything.”

As for living organisms, they seem sometimes to be emergent but sometimes more “vertically integrated,” where microscopic changes do influence large-scale behavior. Consider, for example, a heart. Despite considerable variations in the details of which genes are being expressed, and how much, or what the concentrations of proteins are from place to place, all of our heart muscle cells seem to work in essentially the same way, enabling them to function en masse as a pump driven by coherent, macroscopic electrical pulses passing through the tissue. But it’s not always this way. While many of our genes carry mutations that make no difference to our health, sometimes a mutation — just one genetic “letter” in a DNA sequence that is “wrong” — can be catastrophic. So the independence of the macro from the micro is not complete: There is some leakage between levels. Rosas wonders if living organisms are in fact optimized by allowing for such “leaky” partial emergence — because in life, sometimes it is essential for the macro to heed the details of the micro.

Emergent Causes

Rosas’ framework could help complex systems researchers see when they can and can’t hope to develop predictive coarse-grained models. When a system meets the key requirement of being computationally closed, “you don’t lose any faithfulness by simulating the upper levels and neglecting the lower levels,” he said. But ultimately Rosas hopes an approach like his might answer some deep questions about the structure of the universe — why, for example, life seems to exist only at scales intermediate between the atomic and the galactic.

The framework also has implications for understanding the tricky question of cause and effect in complex and emergent systems. Traditionally, causation has been assumed to flow from the bottom up: Our choices and actions, for example, are ultimately attributed to those firing patterns of our neurons, which in turn are caused by flows of ions across cell membranes.

But in an emergent system, this is not necessarily so; causation can operate at a higher level independently from lower-level details. Rosas’ new computational framework seems to capture this aspect of emergence, which was also explored in earlier work. In 2013, neuroscientist Giulio Tononi of the University of Wisconsin, Madison, working with Erik Hoel and Larissa Albantakis (also at Wisconsin), claimed that, according to a particular measure of causal influence called effective information, the overall behavior of some complex systems is caused more at the higher than the lower levels. This is called causal emergence.

The 2013 work using effective information could have been just a quirk of measuring causal influence this way. But recently, Hoel and neuroscientist Renzo Comolatti have shown that it is not. They took 12 different measures of causal power proposed in the literature and found that with all of them, some complex systems show causal emergence. “It doesn’t matter what measure of causation you pick,” Hoel said. “We just went out into the literature and picked other people’s definitions of causation, and all of them showed causal emergence.” It would be bizarre if this were some chance quirk of all those different measures.

For Hoel, emergent systems are ones whose macroscale behavior has some immunity to randomness or noise at the microscale. For many complex systems, there’s a good chance you can find coarse-grained, macroscopic descriptions that minimize that noise. “It’s that minimization that lies at the heart of a good notion of emergence,” he said.

Tononi says that, while his approach and that of Rosas and colleagues address the same kinds of systems, they have somewhat different criteria for causal emergence. “They define emergence as being when the macro system can predict itself as much as it can be predicted from the micro level,” he said. “But we require more causal information at the macro level than at the micro level.”

The new ideas touch on the issue of free will. While hardened reductionists have argued that there can be no free will because all causation ultimately arises from interactions of atoms and molecules, free will may be rescued by the formalism of higher-level causation. If the main cause of our actions is not our molecules but the emergent mental states that encode memories, intentions, beliefs and so forth, isn’t that enough for a meaningful notion of free will? The new work shows that “there are sensible ways to think about macro-level causation that explain how agents can have a worthwhile form of causal efficacy,” Seth said.

Still, there remains disagreement among researchers about whether macroscopic, agent-level causation can emerge in complex systems. “I’m uncomfortable with this idea that the macroscale can drive the microscale,” said Adami. “The macroscale is just degrees of freedom that you’ve invented.” This is the sort of issue that the scheme proposed by Rosas and colleagues might help to resolve, by burrowing into the mechanics of how different levels of the system speak to one another, and how this conversation must be structured to achieve independence of the macro from the details of the levels below.

At this point, some of the arguments are pretty fuzzy. But Crutchfield is optimistic. “We’ll have this figured out in five or 10 years,” he said. “I really think the pieces are there.”


Links in the Quanta article.

Software in the natural world: A computational approach to hierarchical emergence

by Fernando E. Rosas, Bernhard C. Geiger, Andrea I. Luppi, Anil K. Seth, Daniel Polani, Michael Gastpar, and Pedro A.M. Mediano


Understanding the functional architecture of complex systems is crucial to illuminate their inner workings and enable effective methods for their prediction and control. Recent advances have introduced tools to characterise emergent macroscopic levels; however, while these approaches are successful in identifying when emergence takes place, they are limited in the extent they can determine how it does. Here we address this important limitation by developing a computational approach to emergence, which characterises macroscopic processes in terms of their computational capabilities. Concretely, we articulate a view on emergence based on how software works, which is rooted on a mathematical formalisation of how macroscopic processes can express self-contained informational, interventional, and computational properties. This framework reveals a hierarchy of nested self-contained processes that determines what computations take place at what level, which in turn delineates the functional architecture of a complex system. This approach is illustrated on paradigmatic models from the statistical physics and computational neuroscience literature, which are shown to exhibit macroscopic processes that are akin to software in human-engineered systems. Overall, this framework enables a deeper understanding of the multi-level structure of complex systems, revealing specific ways in which they can be efficiently simulated, predicted, and controlled.

FIG. 1. Illustration of causal states. Causal states are sets of trajectories which bear equal predictions for the future evolution of the system, as defined by the equivalence relationship in Eq.

FIG. 2. The two faces of ϵ-machines. Illustration of the dual interpretation of ϵ-machines that establishes a bridge between causality and computation. a) Causal face: view of ϵ-machines as the effective mechanism driving the system, acting ‘behind the scenes’ to generate observable data (a1). Technically, this corresponds to interpreting it as a hidden Markov process — i.e., dynamics that take place on variables E_t in a latent state-space, while generating the observable data X_t (a2). b) Computational face: alternative view of ϵ-machines as discrete automata, where the data corresponds to inputs given by a user driving the system between different states (b1). Technically, this corresponds to seeing it as a discrete automaton with states e_k, whose deterministic transitions are governed by the input data x_i (b2). Note that (a1) focuses on variables (e.g. X_t, E_t), while (b2) portrays the states that those variables can take (e.g. x_0, e_0). Fig. (a1) is adapted from Ref. [39].

FIG. 3. The various machines associated with a macroscopic process. Diagram of the relationship between the different machines associated with a macroscopic process Z and its corresponding microscopic process X. The ϵ-machines with causal states E_t and E′_t correspond to the optimal prediction of the future of X and Z, respectively, using data from the same level. In contrast, the υ-machine with causal states U_t provides optimal prediction of the future of Z using data from X, hence using the minimal amount of micro information for optimally predicting the future of the macro.
FIG. 4. Example of computational closure. Illustration where micro causal states are shown as small golden nodes and macro causal states are represented as big pale-yellow nodes. Transitions of micro causal states are represented as simple arrows responding to three possible inputs: two inputs, denoted by a and b (not shown), trigger transitions within the same macro state, and one input, denoted by c (not shown), triggers a transition to a new macro state. The coarse-graining f(a) = f(b) = 0 and f(c) = 1 generates deterministic dynamics for the macro states, represented by double arrows, where 0 keeps the current macro state and 1 triggers a transition to the next state.
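As a quick sanity check of this closure condition, here is a toy automaton matching the caption's description; the state and input names are my own, not taken from the paper:

```python
# Hypothetical micro automaton in the spirit of FIG. 4: two macro states
# M0 = {s0, s1} and M1 = {s2, s3}. Inputs 'a'/'b' move within the current
# macro state; input 'c' jumps to the next macro state.
micro_step = {
    ('s0', 'a'): 's1', ('s0', 'b'): 's1', ('s0', 'c'): 's2',
    ('s1', 'a'): 's0', ('s1', 'b'): 's0', ('s1', 'c'): 's3',
    ('s2', 'a'): 's3', ('s2', 'b'): 's3', ('s2', 'c'): 's0',
    ('s3', 'a'): 's2', ('s3', 'b'): 's2', ('s3', 'c'): 's1',
}
macro_of = {'s0': 'M0', 's1': 'M0', 's2': 'M1', 's3': 'M1'}
f = {'a': 0, 'b': 0, 'c': 1}          # coarse-graining of the inputs

# Closure check: the next macro state must depend only on the current
# macro state and the coarse-grained input, never on the micro state.
macro_step = {}
for (s, x), s_next in micro_step.items():
    key = (macro_of[s], f[x])
    if key in macro_step:
        assert macro_step[key] == macro_of[s_next], "not computationally closed"
    else:
        macro_step[key] = macro_of[s_next]
print(macro_step)
# {('M0', 0): 'M0', ('M0', 1): 'M1', ('M1', 0): 'M1', ('M1', 1): 'M0'}
```

The resulting `macro_step` table is itself a deterministic automaton over coarse-grained inputs, which is exactly what the double arrows in the figure represent.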

FIG. 5. Multilevel analysis via ϵ-machines. a) Optimal automata can be built at different levels of coarse-graining of observed data. Each automaton accounts for the resulting patterns taking place at that scale. b) If the considered levels of description are computationally closed, then the automata of higher levels are coarse-grainings of the ones of levels below. This process of coarse-graining of machines reveals the computations taking place at each of those levels.

FIG. 6. The multiple hierarchies describing multi-level computations in a complex system. Left: Lattice of all possible coarse-grainings, here illustrated for the case of a process that can take five possible values. Center: Sub-lattice of only those coarse-grainings that are causally/informationally closed. Right: Lattice of strongly-lumpable coarse-grainings of the ϵ-machine of the microscopic level. Only the last lattice provides a minimal blueprint that highlights the distinct computational processes, and distinguishes which computations take place at what level.

FIG. 7. Possible computational architectures of an emergent macroscopic level. Our theory shows that the computations carried out by a causally closed process Z with respect to a microscopic process X and the trivial coarse-graining 1 can be categorised within four groups, illustrated here. The computations are the same as the ones at the microscale if the ϵ-machines of X and Z are equivalent (as in b and d), and are trivial if the ϵ-machines of Z and 1 are equivalent (as in c and d). At the left of each subplot is the lattice of coarse-grainings in real space, which is the same for the four cases; at the right is the lattice of corresponding ϵ-machines in theory space, which better illustrates the effective computational structure of the system.

FIG. 8. Conserved quantities in elementary cellular automata. Illustration of the computations associated with different types of conserved quantities. a) Rule 60 forces configurations to have even parity. Hence, the parity is a conserved quantity which is computationally trivial, akin to case (c) in Figure 7. b) In contrast, rule 150 keeps the parity of the initial condition. Hence, while the parity is also a conserved quantity for these dynamics, the computations associated with it are non-trivial, akin to case (a) in Figure 7.
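Both conservation laws are easy to verify numerically. In this sketch the ring size and seed are arbitrary choices of mine; the rule definitions are the standard elementary-CA update rules:

```python
import numpy as np

def step(x, rule):
    """One synchronous update of an elementary CA on a ring."""
    left, right = np.roll(x, 1), np.roll(x, -1)
    if rule == 60:                 # x'_i = x_{i-1} XOR x_i
        return left ^ x
    if rule == 150:                # x'_i = x_{i-1} XOR x_i XOR x_{i+1}
        return left ^ x ^ right
    raise ValueError(rule)

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, size=31)   # random initial configuration

# Rule 60: every successor configuration has parity 0, whatever the
# initial condition -- the conserved quantity is computationally trivial.
x, p60 = x0.copy(), []
for _ in range(10):
    x = step(x, 60)
    p60.append(int(x.sum()) % 2)
print(p60)                         # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# Rule 150: the parity of the *initial* condition is conserved, so the
# macro variable actually carries one bit of information forward.
x, p150 = x0.copy(), [int(x0.sum()) % 2]
for _ in range(10):
    x = step(x, 150)
    p150.append(int(x.sum()) % 2)
print(len(set(p150)))              # 1 -- parity never changes
```

The difference is visible algebraically too: under rule 60 the new parity is 2·S mod 2 = 0 (S being the number of live cells), while under rule 150 it is 3·S mod 2 = S mod 2.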
FIG. 9. Ehrenfest diffusion model. a) The model considers particles contained in two connected chambers. The microscopic description of the system (X_t) is a binary vector that specifies which chamber each particle is in, while the macroscopic description (Z_t) is the number of particles in the left chamber. b) Illustration of the finite-state-machine description of the ϵ-machine corresponding to the macroscopic variable. c) One realisation of the dynamics of the macroscopic process for a system of n = 40 particles, which naturally oscillates around n/2.
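The model is simple enough to simulate directly. This is a minimal sketch with my own choice of run length and seed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.integers(0, 2, size=n)     # micro state X_t: which chamber holds each particle

# Ehrenfest dynamics: at each step a uniformly chosen particle hops to the
# other chamber. The macro state Z_t is the count in the left chamber.
z = []
for _ in range(20000):
    i = rng.integers(n)
    x[i] ^= 1                      # the chosen particle switches chamber
    z.append(int(x.sum()))

z = np.array(z)
print(round(z.mean(), 1))          # time average hovers near n/2 = 20
```

Note that Z_t changes by exactly ±1 per step and its transition probabilities depend only on Z_t itself (a particle in the fuller chamber is more likely to be picked), which is why the macro process is Markov on its own.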

FIG. 10. Energy dynamics of an Ising model are causally closed. When considering the Ising model under Glauber dynamics, it can be shown that its energy is a macroscopic variable whose dynamics are causally — and hence also computationally — closed.

FIG. 11. Causally closed coarse-grainings of a random walk over a network. A random walk on a modular network can be coarse-grained such that the dynamics over the modules' labels are causally closed. Furthermore, considering equivalence classes of modules given by their size provides a further causally closed macroscopic process.

FIG. 12. Hopfield networks compute memory retrieval on a causally closed macroscopic level. The state of a Hopfield network is determined by the activity of each of the involved neurons, here represented as a square grid. Nonetheless, the similarity between the present configuration and each of the patterns that the network stores (denoted by Z^µ_t, with µ ∈ {1, 2, 3, 4, 5} in the figure) determines which stored pattern the current configuration is closest to. Our results show that Z_t = (Z^1_t, ..., Z^5_t) is a causally closed coarse-graining of the neural system, which critically determines the memory retrieval process.
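The overlap variables Z^µ_t can be computed in a few lines for a standard Hebbian Hopfield network; the sizes, noise level, and seed below are illustrative choices of mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 5
xi = rng.choice([-1, 1], size=(P, N))        # five stored +/-1 patterns
W = (xi.T @ xi) / N                          # Hebbian weight matrix
np.fill_diagonal(W, 0)                       # no self-connections

# Start from a corrupted version of pattern 0 (flip 15% of its units).
s = xi[0].copy()
flip = rng.random(N) < 0.15
s[flip] *= -1

overlaps = lambda s: xi @ s / N              # the macro variables Z^mu
print(np.round(overlaps(s), 2))              # pattern 0 already dominates

# Asynchronous sign updates until the network settles.
for _ in range(10):
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1
print(np.round(overlaps(s), 2))              # overlap with pattern 0 -> ~1.0
```

The retrieval decision is readable from the five overlaps alone; the individual neuron states are not needed, which is the sense in which the macro level is closed.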

FIG. 13. Diagram illustrating the relationships between closure and lumpability of Markov chains. Informational/causal closure implies computational closure (Theorem 2). Within the space of Markov processes X, strong lumpability of X implies informational closure (Proposition 4). The same does not hold for weak lumpability and computational closure: if X is weakly lumpable, then the same does not need to hold for E due to the minimality property of ε-machines. The diagram refers to (counter)examples in the text. Indeed, Example 1 is strongly lumpable, while Counterexample 4 is weakly lumpable.

Mike Notes

Dogfood, anyone?

Ajabbi "eats its own dog food".

It runs on Pipi 9.

I use it every day to do real work.

I designed it so that I could use it to solve complex problems first. Soon, you will be able to use it, too.

So, I get to experience any bugs, which motivates me to fix them fast.

Any SaaS worth anything also eats its own dog food. Many don't, so avoid those.


ARIA and Design Systems

Pipi is primarily a workplace tool. One of Ajabbi's top objectives is to use technology to eliminate barriers so that disabled people can exercise their fundamental human right to work.

The Pipi 9 Design System database has a draft set of UI component primitives to which any other design system components could be mapped.

The Pipi 9 Content Management System (CMS) always starts with these primitives (and then calls the derived design system) when rendering UI on web pages.

I previously created a rough list of primitives. Then, I discovered that ARIA has a well-defined list of "patterns" that could be used as primitives.

ARIA list of "Patterns"

Solving several problems

  • Accessibility is not an afterthought.
  • Any Design System UI components would automatically be referenced to ARIA, and so work for disabled people.
  • ARIA is an accepted, robust international standard. Occasionally, the standards will be updated.
  • It answers the problem of what component primitives to use and how to name and describe them. It would also save a lot of work.
  • The page layout still works. And ARIA Roles would be the default names to use.
  • Nested components still work. ARIA will ignore the nesting.
  • The Design Tokens will not be affected.
  • The reference to ARIA in the Design System docs will be automated.
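To make the mapping idea concrete, here is a hypothetical sketch. None of these names come from the actual Pipi 9 schema or from the real component registries of Bootstrap or Material; the ARIA roles and attributes listed are simplified from the ARIA patterns:

```python
# Illustrative ARIA "pattern" primitives (simplified, not the full spec).
ARIA_PATTERNS = {
    "accordion": {"role": "region", "needs": ["aria-expanded", "aria-controls"]},
    "dialog":    {"role": "dialog", "needs": ["aria-modal", "aria-labelledby"]},
    "tabs":      {"role": "tablist", "needs": ["aria-selected", "aria-controls"]},
}

# Components from third-party design systems map onto the ARIA primitives.
COMPONENT_TO_PATTERN = {
    ("bootstrap", "Collapse"): "accordion",
    ("material",  "Dialog"):   "dialog",
    ("bootstrap", "Nav tabs"): "tabs",
}

def primitive_for(system: str, component: str) -> dict:
    """Resolve a design-system component to its ARIA pattern primitive."""
    pattern = COMPONENT_TO_PATTERN[(system.lower(), component)]
    return {"pattern": pattern, **ARIA_PATTERNS[pattern]}

print(primitive_for("Bootstrap", "Collapse"))
# {'pattern': 'accordion', 'role': 'region', 'needs': ['aria-expanded', 'aria-controls']}
```

Because the CMS always resolves through the primitive first, every rendered component would carry the ARIA role and required states by construction, rather than as an afterthought.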

Unanswered questions

  • What happens with other languages? Are WAI-ARIA roles translated into Chinese, French, etc?

  • Are there Design System components that won't map to these primitives?

  • Is this the latest version of patterns? MDN and W3C ARIA have slightly different lists.

Next steps

  • Importing the ARIA "pattern" definitions.
  • Update the derived Pipi Design System.
  • Look at a series of existing websites that compare how different design systems implement the same component and what names they use. Try importing popular design systems, e.g. Bootstrap, Foundation, and Material, to ensure no mapping problems.


GitLab and PostHog Handbooks

I recently came across PostHog and discovered they had an open handbook on how they work. In turn, they were inspired by the GitLab handbook. I have a lot of similar material looking for a home. Being open and transparent is a good plan. I will shamelessly copy the layout and format of these handbooks and create one for Ajabbi using my own content. I also liked the very open editorial policy content of Smashing Magazine. Remote also has a handbook.

Laws of software

"In software engineering, the laws of software evolution refer to a series of laws that Lehman and Belady formulated starting in 1974 with respect to software evolution.The laws describe a balance between forces driving new developments on one hand, and forces that slow down progress on the other hand. Over the past decades the laws have been revised and extended several times. ...

All told, eight laws were formulated:

  1. (1974) "Continuing Change" — an E-type system must be continually adapted or it becomes progressively less satisfactory.
  1. (1974) "Increasing Complexity" — as an E-type system evolves, its complexity increases unless work is done to maintain or reduce it.
  1. (1974) "Self Regulation" — E-type system evolution processes are self-regulating with the distribution of product and process measures close to normal.
  1. (1978) "Conservation of Organisational Stability (invariant work rate)" — the average effective global activity rate in an evolving E-type system is invariant over the product's lifetime.
  1. (1978) "Conservation of Familiarity" — as an E-type system evolves, all associated with it, developers, sales personnel and users, for example, must maintain mastery of its content and behaviour to achieve satisfactory evolution. Excessive growth diminishes that mastery. Hence the average incremental growth remains invariant as the system evolves.
  1. (1991) "Continuing Growth" — the functional content of an E-type system must be continually increased to maintain user satisfaction over its lifetime.
  1. (1996) "Declining Quality" — the quality of an E-type system will appear to be declining unless it is rigorously maintained and adapted to operational environment changes.
  1. (1996) "Feedback System" (first stated 1974, formalised as law 1996) — E-type evolution processes constitute multi-level, multi-loop, multi-agent feedback systems and must be treated as such to achieve significant improvement over any reasonable base.

" - Wikipedia