Pipi 9 RBAC

Mike's Notes

This is a summary of Pipi 9 RBAC (Role-based access control) now in use globally.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

18/09/2025

Pipi 9 RBAC

By: Mike Peters
On a Sandy Beach: 18/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Pipi 9 uses automated RBAC (Role-based access control) to manage user access. Pipi 10 will offer additional authorisation frameworks for enterprise accounts to choose from.

User Account Properties

  • ID
  • Code Name
  • Name
  • Default Language
  • Global authorisation framework (RBAC)
  • Account type
  • ...

Deployment Properties

  • ID
  • Code Name
  • Name
  • Description
  • One language (e.g. English)
  • One inherited User Account
  • One Deployment Class (type of tenancy)
  • ...

Workspace Properties

  • ID
  • Code Name
  • Name
  • Description
  • One inherited language (e.g. English)
  • One inherited User Account
  • One inherited Deployment
  • One Domain Model
  • One Domain Model Template
  • ...

Autonomous Agent Properties

  • ID
  • Code Name
  • Name
  • Description
  • One Type (Pipi System, Engine, Domain, Algorithm, CAS, etc.)
  • ...
  • Many Roles
  • Many Policies
  • ...
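
As a rough illustration only, the property lists above can be pictured as data types, with inheritance expressed as references. The field names below are assumptions for the sketch, not Pipi's actual schema.

```python
# Minimal sketch of the property lists above as Python dataclasses.
# Field names are hypothetical; Pipi's internal representation will differ.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserAccount:
    id: str
    code_name: str
    name: str
    default_language: str = "en"
    auth_framework: str = "RBAC"   # global authorisation framework
    account_type: str = "standard"

@dataclass
class Deployment:
    id: str
    code_name: str
    name: str
    description: str
    language: str                  # one language, e.g. English
    account: UserAccount           # one inherited User Account
    deployment_class: str          # type of tenancy

@dataclass
class Workspace:
    id: str
    code_name: str
    name: str
    description: str
    deployment: Deployment         # inherits language and User Account
    domain_model: str
    domain_model_template: str

@dataclass
class AutonomousAgent:
    id: str
    code_name: str
    name: str
    description: str
    agent_type: str                # Pipi System, Engine, Domain, Algorithm, CAS, ...
    roles: List[str] = field(default_factory=list)
    policies: List[str] = field(default_factory=list)
```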

Media Unit Options

Mike's Notes

My developing thoughts on how Ajabbi could record and broadcast quality video. This could be used for live streaming, office hours, interviews, and more.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

17/09/2025

Media Unit Options

By: Mike Peters
On a Sandy Beach: 17/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

The time is coming when much better video communication will be needed at Ajabbi. I have some older gear that will suffice for the moment, including high-quality cables, tripods, and microphones.

The main issue is a decent broadcast camera that doesn't require an operator.

"Set and forget"

Recently, I have watched video bloggers who interview guests. Their setup is highly reliable and produces good quality images and sound.

Looking at glimpses of the gear in their videos, the Blackmagic Studio Camera and RØDE studio boom arm look good and are popular.

Blackmagic Studio Camera

"The Blackmagic Studio Camera 4K Plus G2 has the same features as large studio cameras, miniaturized into a single compact and portable design. Plus with digital film camera dynamic range and color science, the cameras can handle extremely difficult lighting conditions while producing cinematic looking images. The sensor features an ISO up to 25,600 so you can create amazing images even in dimly lit venues. It even works under moonlight! Advanced features include tally, camera control, built in color corrector, Blackmagic RAW recording to USB disks and much more! You can even add a focus and zoom demand for lens control!" - BlackMagic

Blackmagic Studio Converter

"With 10G Ethernet on Blackmagic Studio Converter and Blackmagic Studio Camera Pro, you can connect all camera signals using one Ethernet IP link connection. That means the camera feed, return program feed, timecode, reference, tally, talkback and control are all sent down the single cable. That's the same benefits of SMPTE fiber, but the standard 10G copper Ethernet cable is much lower cost. The Blackmagic Studio Converter allows breakout of all the video, audio and talkback connections at the studio end. It also includes a massive power supply that powers the camera down the Ethernet cable, so you don't even need a power source near the camera!" - BlackMagic

RØDE Studio Boom Arm

"The PSA1+ is the ultimate studio boom arm for podcasters, streamers and broadcasters. Its innovative parallelogram spring design ensures ultra-smooth movement and precise microphone placement in any position, while its fully damped internal springs and neoprene arm cover eliminate mechanical noise for completely silent operation. Integrated cable management ensures your setup is tidy and its extended reach and full 360-degree rotation make it easy to position your microphone exactly where it needs to be. Take your content to the next level." - Rhode

Still to sort out

A simple lighting kit, preferably LED. I need to ask around for recommendations.

Now I just need to find the money.

UI Guideline is now open source

Mike's Notes

Pipi 9 uses the names and technical properties of UI Guideline v2 components in the Pipi Design System Engine. Additional components from the open-source Metro UI have also been added, e.g. Table and Ribbon.

Sergio Ruiz, AKA "Seruda", today announced that UI Guideline is now open source. UI Guideline standardises the names and descriptions of UI Components by surveying the top Design Systems in use today.

Thanks for your great work, Seruda.

Once Ajabbi is making good coin, the open-source UI Guideline effort could be supported. (Ajabbi intends to provide generous long-term support to all open-source software that Pipi uses.)

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > UI Guideline
  • Home > Handbook > 

Last Updated

16/09/2025

UI Guideline is now open source

By: Sergio Ruiz
UI Guideline: 16/09/2025

Sergio Ruiz is the author of UI Guideline.

Hello, team

It’s been a while since my last update, and today I want to share something very important about the future of UI Guideline.

Why we’re changing

For more than 3 years, we’ve been curating and improving UI Guideline. In the beginning, our “lifetime license” model helped us grow. Thanks to those of you who supported us early, UI Guideline became possible.

But as tools like ChatGPT appeared, it became clear that our old model wasn’t sustainable. If we continued that way, we wouldn’t be able to dedicate the time and resources needed to keep UI Guideline alive and growing.

That’s why we’ve decided to evolve: all UI Guideline data will now be open source. This allows us to share our research with the whole community and keep improving with everyone’s contributions.

What this means for lifetime license users

First of all: thank you. Without your trust, UI Guideline wouldn’t be where it is today.

We know you believed in us and supported us with a lifetime license, and we want to make sure you feel valued—not left behind. That’s why you will now be recognized as VIP Users.

Being a VIP User means you’ll have:

  • Early access to any new features or products we launch.
  • Exclusive offers and perks reserved only for you.
  • Unlimited access, forever.
  • (And more benefits we’re designing together with the next version of UI Guideline).

Even though we are still shaping the details of UI Guideline v3, please know that our priority is to reward and add value to those who trusted us from the beginning.

Thank you again for being part of this journey. We’re excited about what’s coming, and we’ll be sharing more updates in the coming months.

The UI Guideline Team


The Design System... of top Design Systems

"Save a lot of time in researching, defining and creating your UI components by synthesizing all the wisdom of the most popular Design Systems and UI libraries in one place." - UI Guideline

Our research process

Check out the step-by-step process of extracting and documenting UI components from top systems.

Step 1. Annual Top 20 Systems

Every year, we select the 20 best Design Systems and UI libraries. This choice is based on several criteria: a survey conducted among hundreds of developers and designers, popularity, the number of components, whether they are up to date, and, of course, our experience of over 5 years in crafting UI components.

Step 2. Manual Review and Consolidation of Patterns

For each component, we manually review the 20 systems one by one, looking for a repeating pattern. We observe the way they name the component, how they define its props and anatomy, and, above all, the best practices. Finally, we consolidate all this data into a single file, e.g. modal_consolidate.json.

Step 3. Identify a common UI Pattern

We identify a common pattern and synthesize it into a new file that defines the UI component in detail, including props, anatomy and alternative names, among other aspects. In UI Guideline, you'll be able to find the details of each component in one place.
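
As a sketch of what such a synthesized definition might contain (the structure below is a guess for illustration, not UI Guideline's published schema):

```python
# Hypothetical example of a consolidated component definition.
modal_definition = {
    "component": "Modal",
    "alternative_names": ["Dialog", "Overlay"],
    "anatomy": ["trigger", "backdrop", "container", "header", "body", "footer", "close-button"],
    "props": {
        "open": "boolean",
        "onClose": "function",
        "size": ["sm", "md", "lg", "full"],
    },
    "systems_reviewed": 20,  # the 20 systems consolidated in Step 2
}
```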

Why? Save hours of time and effort

With UI Guideline, we save you hours of research and definition of your own components. For example, when you need to define or create a Sidebar component from scratch, you will no longer have to review system by system. Here you'll find all the necessary information to define your Sidebar, based on the wisdom of the best systems of the year.


60 Unique Components: Unveiled Through Extensive Research

Explore 60 components analyzed from 20 design systems and UI libraries, offering you refined insights and practical design solutions.

  • Accordion
  • Alert
  • Avatar
  • Badge
  • Breadcrumbs
  • Button
  • Calendar
  • Card
  • Carousel
  • Checkbox
  • Collapse
  • Color Picker
  • Combobox
  • Date Picker
  • Divider
  • Empty State
  • Error State
  • File Uploader
  • Inline Alert
  • Link
  • Menu
  • Modal
  • Number Input
  • Pagination
  • Popover
  • Progress Bar
  • Radio
  • Rating
  • Search
  • Select
  • Sidebar
  • Skeleton
  • Slider
  • Spinner
  • Stepper
  • Success State

Why developers stopped asking permission to use feature flags

Mike's Notes

I discovered this article about how Atono uses Feature Flags via the Refactoring newsletter from Luca Rossi.

"Feature Flags as part of the core system, not an external service."

I came to the same conclusion and have also added Feature Flags, AKA Feature Toggles, to Pipi.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Building Atona
  • Home > Ajabbi Research > Library > Subscriptions > Refactoring
  • Home > Handbook > 

Last Updated

15/09/2025

Why developers stopped asking permission to use feature flags

By: Troy McAlpin
Building Atono: 11/07/2025

Troy is the co-founder of Atono.

We interviewed eight developers about their experience with feature flags—both in previous tools and now in Atono—to understand how different approaches affect development workflows. What we discovered wasn't about technical capabilities, but about something more fundamental.

The most revealing moment in our research wasn't about features. It was about friction.

Dylan, one of our senior developers, described his experience at a previous company: "When you've got multiple systems for these things, it was harder to convince PMs when there wasn't a feature flag for a story. There was always reluctance to create a new feature flag because they proliferate through the codebase and never go away."

Then he contrasted it with Atono: "When feature flags are in the same system, it makes more sense to do it the more powerful way. I can be more trusting that it will eventually go away."

That shift—from asking permission to autonomy in making decisions—reveals something fundamental about how tool design shapes team behavior. When we embedded feature flags directly into stories instead of managing them in separate systems, we accidentally eliminated a bureaucratic bottleneck that most teams don't even realize they have.

The hidden cost of separated systems

Feature flags aren't new. Most development teams understand their value for safer deployments and gradual rollouts. But implementation often gets derailed by organizational friction.

In traditional setups, feature flags live in dedicated services like LaunchDarkly or Split. Someone has to provision them, configure permissions, and manage lifecycle. These tools also come with expensive licenses, which usually means only a few people in the company can toggle flags—adding another layer of access restrictions. Creating a flag becomes a decision that requires justification, coordination, and follow-up.

Over time, this can lead developers to avoid the overhead by shipping features the old way—bigger releases, more risk, less experimentation.

Mark Henzi, our VP of Engineering, put it simply: "Before Atono, we didn't really use feature flags at all. Now we do it by default—it gives us the confidence to move faster without worrying about users." That shift—from hesitation to habit—is what happens when flagging stops feeling like a heavyweight process.

It changed from a system of asking permission to one of trust. That subtle shift encouraged developers to move faster, take initiative, and experiment more freely.

How embedded flags empower developers

Embedding flags directly in stories wasn't just about convenience—it fundamentally changed how our developers approach feature development and gave them unprecedented control over their work.

Sandra described the practical empowerment: "It's really nice when I'm working on something new that can't get released yet—I just throw in a feature flag and go straight into Atono to turn it on and off. It beats having to change an environment variable."
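
The article doesn't show Atono's API, so the following is only a generic sketch of the difference Sandra describes: an environment-variable gate usually needs a config change or redeploy, while a flag looked up at runtime can be toggled wherever the flag lives. All names below are illustrative.

```python
import os

# Environment-variable gating: flipping it usually means editing config and redeploying.
def new_checkout_enabled_via_env() -> bool:
    return os.getenv("NEW_CHECKOUT_ENABLED", "false").lower() == "true"

# Flag-store gating: the flag sits alongside the story and can be toggled at runtime.
# `flag_store` is a stand-in for whatever flag client a team actually uses.
flag_store = {"new-checkout": {"dev": True, "staging": True, "prod": False}}

def new_checkout_enabled(environment: str) -> bool:
    return flag_store.get("new-checkout", {}).get(environment, False)

print("render new checkout" if new_checkout_enabled("prod") else "render current checkout")
```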

But Lex identified the deeper confidence shift: "It makes it easier for me to think about the features I'm developing. I just think, 'oh, it's all behind a feature flag. It's okay.' I can be more trusting that it will eventually go away."

Set up and manage how features roll out across environments—right from the story view in Atono.

This isn't about technical capability. It's about developer autonomy at the moment they structure their work. When flags are integrated with stories, developers gain ownership over feature rollout decisions rather than depending on external gatekeepers.

The empowerment extends beyond individual confidence. Developers now approach feature development with the assumption they control the release timeline. They can experiment freely, knowing they have the power to enable features for themselves first, then gradually expand access as confidence grows. This sense of ownership transforms how they think about risk, testing, and iteration.

The broader pattern of developer empowerment

The feature flag insight revealed a broader principle about empowering development teams through tool design.

When workflows require switching between systems or asking permission from gatekeepers, developers naturally optimize for tool limitations rather than best practices. They avoid feature flags not because they don't understand their value, but because the bureaucratic overhead makes experimentation feel expensive and risky.

This pattern shows up with other development practices too. In many teams, developers skip writing comprehensive tests when testing tools are disconnected from development workflows. They avoid refactoring when tracking technical debt requires separate systems with their own approval processes. They compromise on thorough code review when it adds complex coordination steps to deployment pipelines.

The solution isn't cramming everything into one interface. It's identifying the moments where permission structures discourage good practices, then designing those bottlenecks away. When developers have direct control over feature flags within their normal workflow, they use them by default. When they have to ask someone else or switch systems, flags become an exception rather than standard practice.

It's not about having more features. It's about giving developers direct control over their tools and reducing friction between intention and action.

What this means for empowering development teams

This research changed how we think about feature development—specifically, how to design tools that empower rather than constrain developer decision-making.

Instead of asking "what features do teams need?" we started asking "what good practices do teams avoid because they require permission or coordination?" The answer shapes different design decisions around developer autonomy.

For feature flags, it meant embedding them in stories so any developer can create and control them without external approval. For acceptance criteria, it meant making them directly editable and linkable so developers can reference and update requirements without bureaucratic overhead. For team coordination, it meant connecting Slack discussions directly to stories so developers can initiate focused conversations without manual setup.

The pattern isn't about building comprehensive platforms that replace everything. It's about identifying the moments where permission structures discourage best practices, then giving developers direct control over those workflows.

When developers have the autonomy to use good practices without asking permission, those practices become habits rather than exceptions. When bureaucratic friction is removed and developers feel empowered to make technical decisions, development velocity increases in ways that are measurable and sustainable.

Self-Assembly Gets Automated in Reverse of ‘Game of Life’

Mike's Notes

An excellent article from Quanta Magazine. Pipi 9 has some similarities to this.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Quanta Magazine
  • Home > Handbook > 

Last Updated

14/09/2025

Self-Assembly Gets Automated in Reverse of ‘Game of Life’

By: George Musser
Quanta Magazine: 10/09/2025

George Musser is a contributing editor at Scientific American magazine and the author of two books, Spooky Action at a Distance and The Complete Idiot’s Guide to String Theory. He is the recipient of the 2011 American Institute of Physics Science Writing Award and the 2010 American Astronomical Society’s Jonathan Eberhart Planetary Sciences Journalism Award. He was a Knight Science Journalism Fellow at MIT from 2014 to 2015. He can be found on Mastodon and Bluesky.

In cellular automata, simple rules create elaborate structures. Now researchers can start with the structures and reverse-engineer the rules.

Alexander Mordvintsev showed me two clumps of pixels on his screen. They pulsed, grew and blossomed into monarch butterflies. As the two butterflies grew, they smashed into each other, and one got the worst of it; its wing withered away. But just as it seemed like a goner, the mutilated butterfly did a kind of backflip and grew a new wing like a salamander regrowing a lost leg.

Mordvintsev, a research scientist at Google Research in Zurich, had not deliberately bred his virtual butterflies to regenerate lost body parts; it happened spontaneously. That was his first inkling, he said, that he was onto something. His project built on a decades-old tradition of creating cellular automata: miniature, chessboard-like computational worlds governed by bare-bones rules. The most famous, the Game of Life, first popularized in 1970, has captivated generations of computer scientists, biologists and physicists, who see it as a metaphor for how a few basic laws of physics can give rise to the vast diversity of the natural world.

In 2020, Mordvintsev brought this into the era of deep learning by creating neural cellular automata, or NCAs. Instead of starting with rules and applying them to see what happened, his approach started with a desired pattern and figured out what simple rules would produce it. “I wanted to reverse this process: to say that here is my objective,” he said. With this inversion, he has made it possible to do “complexity engineering,” as the physicist and cellular-automata researcher Stephen Wolfram proposed in 1986 — namely, to program the building blocks of a system so that they will self-assemble into whatever form you want. “Imagine you want to build a cathedral, but you don’t design a cathedral,” Mordvintsev said. “You design a brick. What shape should your brick be that, if you take a lot of them and shake them long enough, they build a cathedral for you?”

Such a brick sounds almost magical, but biology is replete with examples of basically that. A starling murmuration or ant colony acts as a coherent whole, and scientists have postulated simple rules that, if each bird or ant follows them, explain the collective behavior.  Similarly, the cells of your body play off one another to shape themselves into a single organism. NCAs are a model for that process, except that they start with the collective behavior and automatically arrive at the rules.

Alexander Mordvintsev created complex cell-based digital systems that use only neighbor-to-neighbor communication.

Courtesy of Alexander Mordvintsev

The possibilities this presents are potentially boundless. If biologists can figure out how Mordvintsev’s butterfly can so ingeniously regenerate a wing, maybe doctors can coax our bodies to regrow a lost limb. For engineers, who often find inspiration in biology, these NCAs are a potential new model for creating fully distributed computers that perform a task without central coordination. In some ways, NCAs may be innately better at problem-solving than neural networks.

Life’s Dreams

Mordvintsev was born in 1985 and grew up in the Russian city of Miass, on the eastern flanks of the Ural Mountains. He taught himself to code on a Soviet-era IBM PC clone by writing simulations of planetary dynamics, gas diffusion and ant colonies. “The idea that you can create a tiny universe inside your computer and then let it run, and have this simulated reality where you have full control, always fascinated me,” he said.

He landed a job at Google’s lab in Zurich in 2014, just as a new image-recognition technology based on multilayer, or “deep,” neural networks was sweeping the tech industry. For all their power, these systems were (and arguably still are) troublingly inscrutable. “I realized that, OK, I need to figure out how it works,” he said.

He came up with “deep dreaming,” a process that takes whatever patterns a neural network discerns in an image, then exaggerates them for effect. For a while, the phantasmagoria that resulted — ordinary photos turned into a psychedelic trip of dog snouts, fish scales and parrot feathers — filled the internet. Mordvintsev became an instant software celebrity.

Among the many scientists who reached out to him was Michael Levin of Tufts University, a leading developmental biologist. If neural networks are inscrutable, so are biological organisms, and Levin was curious whether something like deep dreaming might help to make sense of them, too. Levin’s email reawakened Mordvintsev’s fascination with simulating nature, especially with cellular automata.

From a single cell, this neural cellular automaton transforms into the shape of a lizard.

The core innovation made by Mordvintsev, Levin and two other Google researchers, Ettore Randazzo and Eyvind Niklasson, was to use a neural network to define the physics of the cellular automaton. In the Game of Life (or just “Life” as it’s commonly called), each cell in the grid is either alive or dead and, at each tick of the simulation clock, either spawns, dies or stays as is. The rules for how each cell behaves appear as a list of conditions: “If a cell has more than three neighbors, it dies,” for example. In Mordvintsev’s system, the neural network takes over that function. Based on the current condition of a cell and its neighbors, the network tells you what will happen to that cell. The same type of network is used to classify an image as, say, a dog or cat, but here it classifies the state of cells. Moreover, you don’t need to specify the rules yourself; the neural network can learn them during the training process.

To start training, you seed the automaton with a single “live” cell. Then you use the network to update the cells over and over again for dozens to thousands of times. You compare the resulting pattern to the desired one. The first time you do this, the result will look nothing like what you intended. So you adjust the neural network’s parameters, rerun the network to see whether it does any better now, make further adjustments, and repeat. If rules exist that can generate the pattern, this procedure should eventually find them.

The adjustments can be made using either backpropagation, the technique that powers most modern deep learning, or a genetic algorithm, an older technique that mimics Darwinian evolution. Backpropagation is much faster, but it doesn’t work in every situation, and it required Mordvintsev to adapt the traditional design of cellular automata. Cell states in Life are binary — dead or alive — and transitions from one state to the other are abrupt jumps, whereas backpropagation demands that all transitions be smooth. So he adopted an approach developed by, among others, Bert Chan at Google’s Tokyo lab in the mid-2010s. Mordvintsev made the cell states continuous values, anything from 0 to 1, so they are never strictly dead or alive, but always somewhere in between.

Mordvintsev also found that he had to endow each cell with “hidden” variables, which do not indicate whether that cell is alive or dead, or what type of cell it is, but nonetheless guide its development. “If you don’t do that, it just doesn’t work,” he said. In addition, he noted that if all the cells updated at the same time, as in Life, the resulting patterns lacked the organic quality he was seeking. “It looked very unnatural,” he said. So he began to update at random intervals.
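
As a rough illustration of those ingredients (continuous cell states, hidden channels, a small per-cell network and stochastic updates), here is a minimal NumPy sketch of a single update step. It is not Mordvintsev's published model, which uses learned Sobel-style perception filters and trains its weights by backpropagation against a target image; the sizes and random weights below are placeholders.

```python
# Minimal sketch of one neural-cellular-automaton update step. Illustrative only.
import numpy as np

H, W = 32, 32
CHANNELS = 16                    # channel 0 is the visible state; the rest are hidden
rng = np.random.default_rng(0)

grid = np.zeros((H, W, CHANNELS))
grid[H // 2, W // 2, :] = 1.0    # seed a single "live" cell

# A tiny per-cell network: it sees the cell plus its 8 neighbours
# (9 * CHANNELS inputs) and outputs a state change for each channel.
W1 = rng.normal(0, 0.1, (9 * CHANNELS, 64))
W2 = rng.normal(0, 0.1, (64, CHANNELS))

def perceive(grid):
    """Stack each cell's 3x3 neighbourhood into one feature vector (toroidal wrap)."""
    shifts = [np.roll(grid, (dy, dx), axis=(0, 1))
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.concatenate(shifts, axis=-1)          # shape (H, W, 9 * CHANNELS)

def step(grid, fire_rate=0.5):
    features = perceive(grid)
    hidden = np.maximum(features @ W1, 0.0)         # ReLU
    delta = hidden @ W2
    # Asynchronous updates: each cell only applies its change with some probability.
    mask = rng.random((H, W, 1)) < fire_rate
    return np.clip(grid + delta * mask, 0.0, 1.0)   # states stay between 0 and 1

for _ in range(50):
    grid = step(grid)
print("live-channel mean after 50 steps:", grid[..., 0].mean())
```

Training, as described above, would repeatedly run steps like this from the seed, compare the result to the target pattern, and adjust W1 and W2.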

Finally, he made his neural network fairly beefy — 8,000 parameters. On the face of it, that seems perplexing. A direct translation of Life into a neural network would require just 25 parameters, according to simulations done in 2020 by Jacob Springer, who is now a doctoral student at Carnegie Mellon University, and Garrett Kenyon of Los Alamos National Laboratory. But deep learning practitioners often have to supersize their networks, because learning to perform a task is harder than actually performing it.

Moreover, extra parameters mean extra capability. Although Life can generate immensely rich behaviors, Mordvintsev’s monsters reached another level entirely.

Fixer Upper

The paper that introduced NCAs to the world in 2020 included an applet that generated the image of a green lizard. If you swept your mouse through the lizard’s body, you left a trail of erased pixels, but the animal pattern soon rebuilt itself. The power of NCAs not just to create patterns, but to re-create them if they got damaged, entranced biologists. “NCAs have an amazing potential for regeneration,” said Ricard Solé of the Institute of Evolutionary Biology in Barcelona, who was not directly involved in the work.

The butterfly and lizard images are not realistic animal simulations; they do not have hearts, nerves or muscles. They are simply colorful patterns of cells in the shape of an animal. But Levin and others said they do capture key aspects of morphogenesis, the process whereby biological cells form themselves into tissues and bodies. Each cell in a cellular automaton responds only to its neighbors; it does not fall into place under the direction of a master blueprint. Broadly, the same is true of living cells. And if cells can self-organize, it stands to reason that they can self-reorganize.

Cut off the tail of an NCA lizard and the form will regenerate itself.

Sometimes, Mordvintsev found, regeneration came for free. If the rules shaped single pixels into a lizard, they also shaped a lizard with a big gash through it into an intact animal again. Other times, he expressly trained his network to regenerate. He deliberately damaged a pattern and tweaked the rules until the system was able to recover. Redundancy was one way to achieve robustness. For example, if trained to guard against damage to the animal’s eyes, a system might grow backup copies. “It couldn’t make eyes stable enough, so they started proliferating — like, you had three eyes,” he said.

"A kind of computer that looks like an NCA instead would be a vastly more efficient kind of computer." - Blaise Agüera y Arcas

Sebastian Risi, a computer scientist at the IT University of Copenhagen, has sought to understand what exactly gives NCAs their regenerative powers. One factor, he said, is the unpredictability that Mordvintsev built into the automaton through features such as random update intervals. This unpredictability forces the system to develop mechanisms to cope with whatever life throws at it, so it will take the loss of a body part in stride. A similar principle holds for natural species. “Biological systems are so robust because the substrate they work on is so noisy,” Risi said.

Last year, Risi, Levin and Ben Hartl, a physicist at Tufts and the Vienna University of Technology, used NCAs to investigate how noise leads to robustness. They added one feature to the usual NCA architecture: a memory. This system could reproduce a desired pattern either by adjusting the network parameters or by storing it pixel-by-pixel in its memory. The researchers trained it under various conditions to see which method it adopted.

If all the system had to do was reproduce a pattern, it opted for memorization; fussing with the neural network would have been overkill. But when the researchers added noise to the training process, the network came into play, since it could develop ways to resist noise. And when the researchers switched the target pattern, the network was able to learn it much more rapidly because it had developed transferable skills such as drawing lines, whereas the memorization approach had to start from scratch. In short, systems that are resilient to noise are more flexible in general.

Even if disturbed, the textures created by NCAs have the ability to heal themselves.

The researchers argued that their setup is a model for natural evolution. The genome does not prescribe the shape of an organism directly; instead, it specifies a mechanism that generates the shape. That enables species to adapt more quickly to new situations, since they can repurpose existing capabilities. “This can tremendously speed up an evolutionary process,” Hartl said.

Ken Stanley, an artificial intelligence researcher at Lila Sciences who has studied computational and natural evolution, cautioned that NCAs, powerful though they are, are still an imperfect model for biology. Unlike machine learning, natural evolution does not work toward a specific goal. “It’s not like there was an ideal form of a fish or something which was somehow shown to evolution, and then it figured out how to encode a fish,” he noted. So the lessons from NCAs may not carry over to nature.

Auto Code

In regenerating lost body parts, NCAs demonstrate a kind of problem-solving capability, and Mordvintsev argues that they could be a new model for computation in general. Automata may form visual patterns, but their cell states are ultimately just numerical values processed according to an algorithm. Under the right conditions, a cellular automaton is as fully general as any other type of computer.

The standard model of a computer, developed by John von Neumann in the 1940s, is a central processing unit combined with memory; it executes a series of instructions one after another. Neural networks are a second architecture that distributes computation and memory storage over thousands to billions of interconnected units operating in parallel. Cellular automata are like that, but even more radically distributed. Each cell is linked only to its neighbors, lacking the long-range connections that are found in both the von Neumann and the neural network architectures. (Mordvintsev’s neural cellular automata incorporate a smallish neural network into each cell, but cells still communicate only with their neighbors.)

"You are forcing it not to memorize that answer, but to learn a process to develop the solution." - Stefano Nichele

Long-range connections are a major power drain, so if a cellular automaton could do the job of those other systems, it would save energy. “A kind of computer that looks like an NCA instead would be a vastly more efficient kind of computer,” said Blaise Agüera y Arcas, the chief technology officer of the Technology and Society division at Google.

But how do you write code for such a system? “What you really need to do is come up with [relevant] abstractions, which is what programming languages do for von Neumann–style computation,” said Melanie Mitchell of the Santa Fe Institute. “But we don’t really know how to do that for these massively distributed parallel computations.”

A neural network is not programmed per se. The network acquires its function through a training process. In the 1990s Mitchell, Jim Crutchfield of the University of California, Davis, and Peter Hraber at the Santa Fe Institute showed how cellular automata could do the same. Using a genetic algorithm, they trained automata to perform a particular computational operation, the majority operation: If a majority of the cells are dead, the rest should die too, and if the majority are alive, all the dead cells should come back to life. The cells had to do this without any way to see the big picture. Each could tell how many of its neighbors were alive and how many were dead, but it couldn’t see beyond that. During training, the system spontaneously developed a new computational paradigm. Regions of dead or living cells enlarged or contracted, so that whichever predominated eventually took over the entire automaton. “They came up with a really interesting algorithm, if you want to call it an algorithm,” Mitchell said.
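
To make the majority task concrete, here is a small sketch of the setup: a one-dimensional, radius-3 binary cellular automaton whose 128-entry rule table the genetic algorithm searches over. The random table used here will almost certainly fail the task; finding one that reliably succeeds is exactly what Mitchell and her colleagues' training discovered.

```python
# Rough sketch of the majority (density-classification) task. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
RADIUS, N, STEPS = 3, 149, 300
rule_table = rng.integers(0, 2, size=2 ** (2 * RADIUS + 1))   # a candidate rule

def step(cells):
    # Each cell reads its radius-3 neighbourhood (wrapping at the edges)
    # as a 7-bit number and looks up its next state in the rule table.
    neighbourhood = np.zeros(len(cells), dtype=int)
    for offset in range(-RADIUS, RADIUS + 1):
        neighbourhood = neighbourhood * 2 + np.roll(cells, -offset)
    return rule_table[neighbourhood]

def classifies_majority(cells):
    majority = int(cells.sum() * 2 > len(cells))
    for _ in range(STEPS):
        cells = step(cells)
    return bool(np.all(cells == majority))

initial = rng.integers(0, 2, size=N)
print("correctly classified:", classifies_majority(initial))
```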

She and her co-authors didn’t develop these ideas further, but Mordvintsev’s system has reinvigorated the programming of cellular automata. In 2020 he and his colleagues created an NCA that read handwritten digits, a classic machine learning test case. If you draw a digit within the automaton, the cells gradually change in color until they all have the same color, identifying the digit. This year, Gabriel Béna of Imperial College London and his co-authors, building on unpublished work by the software engineer Peter Whidden, created algorithms for matrix multiplication and other mathematical operations. “You can see by eye that it’s learned to do actual matrix multiplication,” Béna said.

Stefano Nichele, a professor at Østfold University College in Norway who specializes in unconventional computer architectures, and his co-authors recently adapted NCAs to solve problems from the Abstraction and Reasoning Corpus, a machine learning benchmark aimed at measuring progress toward general intelligence. These problems look like a classic IQ test. Many consist of pairs of line drawings; you have to figure out how the first drawing is transformed into the second and then apply that rule to a new example. For instance, the first might be a short diagonal line and the second a longer diagonal line, so the rule is to extend the line.

Neural networks typically do horribly, because they are apt to memorize the arrangement of pixels rather than extract the rule. A cellular automaton can’t memorize because, lacking long-range connections, it can’t take in the whole image at once. In the above example, it can’t see that one line is longer than the other. The only way it can relate them is to go through a process of growing the first line to match the second. So it automatically discerns a rule, and that enables it to handle new examples. “You are forcing it not to memorize that answer, but to learn a process to develop the solution,” Nichele said.

Other researchers are starting to use NCAs to program robot swarms. Robot collectives were envisioned by science fiction writers such as Stanisław Lem in the 1960s and started to become reality in the ’90s. Josh Bongard, a robotics researcher at the University of Vermont, said NCAs could design robots that work so closely together that they cease to be a mere swarm and become a unified organism. “You imagine, like, a writhing ball of insects or bugs or cells,” he said. “They’re crawling over each other and remodeling all the time. That’s what multicellularity is really like. And it seems — I mean, it’s still early days — but it seems like that might be a good way to go for robotics.”

To that end, Hartl, Levin and Andreas Zöttl, a physicist at the University of Vienna, have trained virtual robots — a string of beads in a simulated pond — to wriggle like a tadpole. “This is a super-robust architecture for letting them swim,” Hartl said.

For Mordvintsev, the crossover between biology, computers and robots continues a tradition dating to the early days of computing in the 1940s, when von Neumann and other pioneers freely borrowed ideas from living things. “To these people, the relation between self-organization, life and computing was obvious,” he said. “Those things somehow diverged, and now they are being reunified.”

The rise and fall of the standard user interface

Mike's Notes

An overview of the development of the standard user interface by Liam Proven. There are many links in the original article. The Register is a good read. The three references are useful.

Resources

References

  • Common User Access - Principles of User Interface Design, IBM. 1989.
  • Human Interface Guidelines: The Apple Desktop Interface. Apple. 1997.
  • Principles of User Interface Design: Common User Access. 2007.

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The Register
  • Home > Handbook > 

Last Updated

13/09/2025

The rise and fall of the standard user interface

By: Liam Proven
The Register: 24/01/2024

Liam is an EMEA-based Register journalist covering free-and-open-source software (FOSS), operating systems, and cloud developments. Prior to joining our publisher, he has had decades of experience in the worlds of IT and publishing, with roles ranging from tech support and IT manager to teacher, technical writer, and software director.

IBM's SAA and CUA brought harmony to software design… until everyone forgot

Retro Tech Week In the early days of microcomputers, everyone just invented their own user interfaces, until an Apple-influenced IBM standard brought about harmony. Then, sadly, the world forgot.

In 1981, the IBM PC arrived and legitimized microcomputers as business tools, not just home playthings. The PC largely created the industry that the Reg reports upon today, and a vast and chaotic market for all kinds of software running on a vast range of compatible computers. Just three years later, Apple launched the Macintosh and made graphical user interfaces mainstream. IBM responded with an obscure and sometimes derided initiative called Systems Application Architecture, and while that went largely ignored, one part of it became hugely influential over how software looked and worked for decades to come.

One bit of IBM's vast standard described how software user interfaces should look and work – and largely by accident, that particular part caught on and took off. It didn't just guide the design of OS/2; it also influenced Windows, and DOS and DOS apps, and pretty much all software that followed. So, for instance, the way almost every Linux desktop and GUI app works is guided by this now-forgotten doorstop of 1980s IBM documentation.

But its influence never reached one part of the software world: the Linux (and Unix) shell. Today, that failure is coming back to bite us. It's not the only reason – others lie in Microsoft marketing and indeed Microsoft legal threats, as well as the rise of the web and web apps, and of smartphones.

Culture clash

Although they have all blurred into a large and very confused whole now, 21st century software evolved out of two very different traditions. On one side, there are systems that evolved out of Unix, a multiuser OS designed for minicomputers: expensive machines, shared by a team or department, and used only via dumb text-only terminals on slow serial connections. At first, these terminals were teletypes – pretty much a typewriter with a serial port, physically printing on a long roll of paper. Devices with screens, glass teletypes, only came along later and at first faithfully copied the design – and limitation – of teletypes.

That evolution forced the hands of the designers of early Unix software: the early terminals didn't have cursor keys, or backspace, or modifier keys like Alt. (A fun aside: the system for controlling such keys, called Bucky bits, is another tiny part of the great legacy of Niklaus Wirth whose nickname while in California was "Bucky.") So, for instance, one of the original glass teletypes, the Lear-Siegler ADM3A, is the reason for Vi's navigation keys, and ~ meaning the user's home directory on Linux.

When you can't freely move the cursor around the screen, or redraw isolated regions of the screen, it's either impossible or extremely slow to display menus over the top of the screen contents, or have them change to reflect the user navigating the interface.

The other type of system evolved out of microcomputers: inexpensive, standalone, single-user computers, usually based on the new low-end tech of microprocessors. Mid-1970s microprocessors were fairly feeble, eight-bit things, meaning they could only handle a maximum of 64kB of RAM. One result was tiny, very simple OSes. But the other was that most had their own display and keyboard, directly attached to the CPU. It was cheaper, but it was also faster. That meant video games, and that meant pressure to get a graphical display, even if a primitive one.

The first generation of home computers, from Apple, Atari, Commodore, Tandy, plus Acorn and dozens of others – all looked different, worked differently, and were totally mutually incompatible. It was the Wild West era of computing, and that was just how things were. Worse, there was no spare storage for luxuries like online help.

However, users were free to move the cursor around the screen and even edit the existing contents. Free from the limitations of being on the end of a serial line that only handled (at the most) some thousands of bits per second, apps and games could redraw the screen whenever needed. Meanwhile, even when micros were attached to bigger systems as terminals, over on Unix, design decisions that had been made to work around these limitations of glass teletypes still restricted how significant parts of the OS worked – and that's still true in 2024.

Universes collide

By the mid-1980s, the early eight-bit micros begat a second generation of 16-bit machines. In user interface design, things began to settle on some agreed standards of how stuff worked… largely due to the influence of Apple and the Macintosh.

Soon after Apple released the Macintosh, it published a set of guidelines for how Macintosh apps should look and work, to ensure that they were all similar enough to one another to be immediately familiar. You can still read the 1987 edition online [PDF].

This had a visible influence on the 16-bit generation of home micros, such as Commodore's Amiga, Atari's ST, and Acorn's Archimedes. (All right, except the Sinclair QL, but it came out before the Macintosh.) They all have reasonable graphics and sound, a floppy drive, and came with a mouse as standard. They all aped IBM's second-generation keyboard layout, too, which was very different from the original PC keyboard.

But most importantly, they all had some kind of graphical desktop – the famed WIMP interface. All had hierarchical menus, a standard file manager, copy-and-paste between apps: things that we take for granted today, but which in 1985 or 1986 were exciting and new. Common elements, such as standard menus and dialog boxes, were often reminiscent of MacOS.

One of the first graphical desktops to conspicuously imitate the Macintosh OS was Digital Research's GEM. The PC version was released in February 1985, and Apple noticed the resemblance and sued, which led to PC GEM being crippled. Fortunately for Atari ST users, when that machine followed in June, its version of GEM was not affected by the lawsuit.

The ugly duckling

Second-generation, 16-bit micros looked better and worked better – all except for one: the poor old IBM PC-compatible. These dominated business, and sold in the millions, but mid-1980s versions still had poor graphics, and no mouse or sound chip as standard. They came with text-only OSes: for most owners, PC or MS-DOS. For a few multiuser setups doing stock control, payroll, accounts and other unglamorous activities, DR's Concurrent DOS or SCO Xenix. Microsoft offered Windows 1 and then 2, but they were ugly, unappealing, had few apps, and didn't sell well.

This is the market IBM tried to transform in 1987 with its new PS/2 range of computers, which set industry standards that kept going well into this century: VGA and SVGA graphics, high-density 1.4MB 3.5 inch floppy drives, and a new common design for keyboard and mouse connectors – and came with both ports as standard.

IBM also promoted a new OS it had co-developed with Microsoft, OS/2, which we looked at 25 years on. OS/2 did not conquer the world, but as mentioned in that article, one aspect of OS/2 did: part of IBM's Systems Application Architecture. SAA was an ambitious effort to define how computers, OSes and apps could communicate, and in IBM mainframe land, a version is still around. One small part of SAA did succeed, a part called Common User Access. (The design guide mentioned in that blog post is long gone, but the Reg FOSS desk has uploaded a converted version to the Internet Archive.)

CUA proposed a set of principles on how to design a user interface: not just for GUI or OS/2 apps, but all user-facing software, including text-only programs, even on mainframe terminals. CUA was, broadly, IBM's version of Apple's Human Interface Guidelines – but cautiously, proposed a slightly different interface, as it was published around the time of multiple look-and-feel lawsuits, such as Apple versus Microsoft, Apple versus Digital Research, and Lotus versus Paperback.

CUA advised a menu bar, with a standard set of single-word menus, each with a standard basic set of options, and standardized dialog boxes. It didn't assume the computer had a mouse, so it defined standard keystrokes for opening and navigating menus, as well as for near-universal operations such as opening, saving and printing files, cut, copy and paste, accessing help, and so on.

There is a good summary of CUA design in this 11-page ebook, Principles of UI Design [PDF]; it's from 2007 and has a slight Windows XP flavor to the pictures.

CUA brought SAA-nity to DOS

Windows 3.0 was released in 1990 and precipitated a transformation in the PC industry. For the first time, Windows looked and worked well enough that people actually used it from choice. Windows 3's user interface – it didn't really have a desktop as such – was borrowed directly from OS/2 1.2, from Program Manager and File Manager, right down to the little fake-3D shaded minimize and maximize buttons. Its design is straight by the CUA book.

Even so, Windows took a while to catch on. Many PCs were not that high-spec. If you had a 286 PC, it could use a megabyte or more of memory. If you had a 386 with 2MB of RAM, it could run in 386 Enhanced Mode, and not merely multitask DOS apps but also give them 640kB each. But for comparison, this vulture's work PC in 1991 only had 1MB of RAM, and the one before that didn't have a mouse, like many late-1980s PCs.

As a result, DOS apps continued to be the best sellers. The chart-topping PC apps of 1990 were WordPerfect v5.1 and Lotus 1-2-3 v2.2.

Lotus 123 screenshot

Lotus 1-2-3 was the original PC killer app and is a good example of a 1980s user interface. It had a two-line menu at the top of the screen, opened with the slash key. File was the fifth option, so to open a file, you pressed /, f, then r for Retrieve.

Microsoft Word for DOS also had a two-line menu, but at the bottom of the screen, with file operations under Transfer. So, in Word, the same operation used the Esc key to open the menus, then t, then l for Load.

JOE, running on macOS 12, is a flashback to WordStar in the 1980s.

Pre-WordPerfect hit word-processor WordStar used Ctrl plus letters, and didn't have a shortcut for opening a file, so you needed Ctrl+k, d, then pick a file and press d again to open a Document. For added entertainment, different editions of WordStar used totally different keystrokes: WordStar 2000 had a whole new interface, as did WordStar Express, known to many Amstrad PC owners as WordStar 1512.

The word processor that knocked WordStar off the Number One spot was the previous best-selling version of WordPerfect, 4.2. WordPerfect used the function keys for everything, to the extent that its keyboard template acted as a sort of copy-protection: it was almost unusable without one. (Remarkably, they are still on sale.) To open a file in WordPerfect, you pressed F7 for the full-screen File menu, then 3 to open a document. The big innovation in WordPerfect 5 was that, in addition to the old UI, it also had CUA-style drop-down menus at the top of the screen, which made it much more usable. For many fans, WordPerfect 5.1 remains the classic version to this day.

Every main DOS application had its own, unique user interface, and nothing was sacred. While F1 was Help in many programs, WordPerfect used F3 for that. Esc was often some form of Cancel, but WordPerfect used it to repeat a character.

With every app having a totally different UI, even knowing one inside-out didn't help in any other software. Many PC users mainly used one program and couldn't operate anything else. Some software vendors encouraged this, as it helped them sell companion apps with compatible interfaces – for example, WordPerfect vendors SSI also offered a database called DataPerfect, while Lotus Corporation offered a 1-2-3 compatible word processor, Lotus Manuscript.

WordPerfect 7 for UNIX, running perfectly happily in a Linux terminal in 2022

CUA came to this chaotic landscape and imposed a sort of ceasefire. Even if you couldn't afford a new PC able to run Windows well, you could upgrade your apps with new versions with this new, standardized UI. Microsoft Word 5.0 for DOS had the old two-line menus, but Word 5.5 had the new look. (It's now a free download and the curious can try it.) WordPerfect adopted the menus in addition to its old UI, so experienced users could just keep going while newbies could explore and learn their way around gradually.

Borland acquired the Paradox database and grafted on a new UI based on its TurboVision text-mode windowing system, loved by many from its best-selling development tools – as dissected in this excellent recent retrospective, The IDEs we had 30 years ago… and we lost.

The chaos creeps back

IBM's PS/2 range brought better graphics, but Windows 3 was what made them worth having, and its successor Windows 95 ended up giving the PC a pretty good GUI of its own. In the meantime, though, IBM's CUA standard brought DOS apps into the 1990s and caused vast improvements in usability: what IBM's guide called a "walk up and use" design, where someone who has never seen a program before can operate it first time.

The impacts of CUA weren't limited to DOS. The first ever cross-Unix graphical desktop, the Open Group's Common Desktop Environment, uses a CUA design. Xfce, the oldest Linux desktop of all, was modelled on CDE, so it sticks to CUA, even now.

Released one year after Xfce, KDE was based on a CUA design, but its developers seem to be forgetting that. In recent versions, some components no longer have menu bars. KDE also doesn't honour Windows-style shortcuts for window management and so on. GNOME and GNOME 2 were largely CUA-compliant, but GNOME 3 famously eliminated most of that… which opened up a window of opportunity for Linux Mint, which methodically put that UI back.

For the first decade of graphical Linux desktop environments, they all looked and worked pretty much like Windows. The first version of Ubuntu was released in 2004, arguably the first decent desktop Linux distro that was free of charge, which put GNOME 2 in front of a lot of new users.

Microsoft, of course, noticed. The Reg had already warned of future patent claims against Linux in 2003. In 2006, it began, with fairly general statements. In 2007, Microsoft started counting the patents that it claimed Linux desktops infringed, although it was too busy to name them.

Over a decade ago, The Reg made the case that the profusion of new, less-Windows-like desktops such as GNOME 3 and Unity were a direct result of this. Many other new environments have also come about since then, including a profusion of tiling window managers – the Arch wiki lists 14 for X.org and another dozen for Wayland.

These are mostly tools for Linux (and, yes, other FOSS Unix-like OS) users juggling multiple terminal windows. The Unix shell is a famously rich environment: hardcore shell users find little reason to leave it, except for web browsing.

And this environment is the one place where CUA never reached. There are many reasons. One is that tools such as Vi and Emacs were already well-established by the 1980s, and those traditions continued into Linux. Another is, as we said earlier, that tools designed for glass-teletype terminals needed different UIs, which have now become deeply entrenched.

Aside from Vi and Emacs, most other common shell editors don't follow CUA, either. The popular Joe uses classic WordStar keystrokes. Pico and Nano have their own.

Tilde

It's not that they don't exist. They do, they just never caught on, despite plaintive requests. Midnight Commander's mcedit is a stab in the general direction. The FOSS desk favourite Tilde is CUA, and as 184 comments imply, that's controversial.

The old-style tools could be adapted perfectly well. A decade ago, a project called Cream made Vim CUA-compliant; more recently, the simpler Modeless Vim delivers some of that. GNU Emacs' built-in cua-mode does almost nothing to modify the editor's famously opaque UI, but ErgoEmacs does a much better job. Even so, these remain tiny, niche offerings.

The problem is that developers who grew up with these pre-standardization tools, combined with various keyboardless fondleslabs where such things don't exist, don't know what CUA means. If someone's not even aware there is a standard, then the tools they build won't follow it. As the trajectories of KDE and GNOME show, even projects that started out compliant can drift in other directions.

This doesn't just matter for grumpy old hacks. It also disenfranchizes millions of disabled computer users, especially blind and visually-impaired people. You can't use a pointing device if you can't see a mouse pointer, but Windows can be navigated 100 per cent keyboard-only if you know the keystrokes – and all blind users do. Thanks to the FOSS NVDA tool, there's now a first-class screen reader for Windows that's free of charge.

Most of the same keystrokes work in Xfce, MATE and Cinnamon, for instance. Where some are missing, such as the Super key not opening the Start menu, they're easily added. This also applies to environments such as LXDE, LXQt and so on.

Indeed, as we've commented before, the Linux desktop lacks diversity of design, but where you find other designs, the price is usually losing the standard keyboard UI. This is not necessary or inevitable: for instance, most of the CUA keyboard controls worked fine in Ubuntu's old Unity desktop, despite its Mac-like appearance. It's one of the reasons we still like it.

Menu bars, dialog box layouts, and standard keystrokes to operate software are not just some clunky old 1990s design to be casually thrown away. They were the result of millions of dollars and years of R&D into human-computer interfaces, a large-scale effort to get different types of computers and operating systems talking to one another and working smoothly together. It worked, and it brought harmony in place of the chaos of the 1970s and 1980s and the early days of personal computers. It was also a vast step forward in accessibility and inclusivity, opening computers up to millions more people.

Just letting it fade away due to ignorance and the odd traditions of one tiny subculture among computer users is one of the biggest mistakes in the history of computing.

Footnote

Yes, we didn't forget Apple kit. MacOS comes with the VoiceOver screen reader built in, but it imposes a whole new, non-CUA interface, so you can't really use it alongside a pointing device as Windows users can. As for VoiceOver on Apple fondleslabs, we don't recommend trying it. For a sighted user, it's the 21st century equivalent of setting the language on a mate's Nokia to Japanese or Turkish.

Namespace Engine key to RBAC

Mike's Notes

After a bit of experimentation, it turns out the Namespace Engine (nsp) is key to reliably implementing RBAC globally.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

12/09/2025

Namespace Engine key to RBAC

By: Mike Peters
On a Sandy Beach: 12/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

This afternoon, I figured out that the Namespace Engine (nsp) already has a way to register the interfaces of every agent that is automatically built by the Factory Engine (fac). This includes industry domain-based applications, such as Websites, Health and Rail.

I added two additional Interface Class Types, "Role" and "Policy", and solved the problem of making this global.

Roles

This allowed for the rapid addition of roles to any autonomous agent.

Examples

  • Website Owner
  • Website Administrator
  • Website Editor
  • Website Visitor
  • Website Search Engine
  • etc

This automatically generates security role names used by the Security Engine (scr).

Policy

This allowed for the rapid addition of policies to any autonomous agent.

Examples

  • CNAME Record
  • Website Hosting
  • Patient Record
  • etc

This automatically generates security policy names used by the Security Engine (scr).
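
Pipi's internal APIs are not published, so the following is only an illustrative sketch of the idea: register "Role" and "Policy" interface class types against an agent's namespace, then derive the qualified security names that the Security Engine (scr) consumes. Every name below is an assumption.

```python
# Illustrative sketch only; not Pipi's actual Namespace Engine (nsp) API.
from dataclasses import dataclass, field

@dataclass
class AgentInterfaceRegistry:
    agent_code: str                              # e.g. the Website domain agent
    roles: list = field(default_factory=list)
    policies: list = field(default_factory=list)

    def register(self, class_type: str, name: str) -> str:
        # Derive a namespaced security name, e.g. "website.role.editor".
        qualified = f"{self.agent_code}.{class_type}.{name}".lower().replace(" ", "_")
        (self.roles if class_type == "role" else self.policies).append(qualified)
        return qualified

website = AgentInterfaceRegistry(agent_code="website")
for role in ["Owner", "Administrator", "Editor", "Visitor", "Search Engine"]:
    website.register("role", role)
for policy in ["CNAME Record", "Website Hosting"]:
    website.register("policy", policy)

print(website.roles)      # ['website.role.owner', ..., 'website.role.search_engine']
print(website.policies)   # ['website.policy.cname_record', 'website.policy.website_hosting']
```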

Integration

This would also enable configuration storage in XML or other open formats for interchange and documentation.

This configuration system could be used for open-source SaaS applications.
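
As a sketch of what XML interchange could look like, using only the Python standard library (the element and attribute names are assumptions, not a published Pipi format):

```python
# Illustrative only: serialising registered role and policy names to XML.
import xml.etree.ElementTree as ET

registrations = {
    "agent": "website",
    "roles": ["website.role.owner", "website.role.editor"],
    "policies": ["website.policy.cname_record", "website.policy.website_hosting"],
}

agent = ET.Element("agent", code=registrations["agent"])
for role in registrations["roles"]:
    ET.SubElement(agent, "role", name=role)
for policy in registrations["policies"]:
    ET.SubElement(agent, "policy", name=policy)

print(ET.tostring(agent, encoding="unicode"))
# <agent code="website"><role name="website.role.owner" /> ... </agent>
```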