Self-Assembly Gets Automated in Reverse of ‘Game of Life’

Mike's Notes

An excellent article from Quanta Magazine. Pipi 9 has some similarities to this.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Quanta Magazine
  • Home > Handbook > 

Last Updated

14/09/2025

Self-Assembly Gets Automated in Reverse of ‘Game of Life’

By: George Musser
Quanta Magazine 10/09/2025

George Musser is a contributing editor at Scientific American magazine and the author of two books, Spooky Action at a Distance and The Complete Idiot’s Guide to String Theory. He is the recipient of the 2011 American Institute of Physics Science Writing Award and the 2010 American Astronomical Society’s Jonathan Eberhart Planetary Sciences Journalism Award. He was a Knight Science Journalism Fellow at MIT from 2014 to 2015. He can be found on Mastodon and Bluesky.

In cellular automata, simple rules create elaborate structures. Now researchers can start with the structures and reverse-engineer the rules.

Alexander Mordvintsev showed me two clumps of pixels on his screen. They pulsed, grew and blossomed into monarch butterflies. As the two butterflies grew, they smashed into each other, and one got the worst of it; its wing withered away. But just as it seemed like a goner, the mutilated butterfly did a kind of backflip and grew a new wing like a salamander regrowing a lost leg.

Mordvintsev, a research scientist at Google Research in Zurich, had not deliberately bred his virtual butterflies to regenerate lost body parts; it happened spontaneously. That was his first inkling, he said, that he was onto something. His project built on a decades-old tradition of creating cellular automata: miniature, chessboard-like computational worlds governed by bare-bones rules. The most famous, the Game of Life, first popularized in 1970, has captivated generations of computer scientists, biologists and physicists, who see it as a metaphor for how a few basic laws of physics can give rise to the vast diversity of the natural world.

In 2020, Mordvintsev brought this into the era of deep learning by creating neural cellular automata, or NCAs. Instead of starting with rules and applying them to see what happened, his approach started with a desired pattern and figured out what simple rules would produce it. “I wanted to reverse this process: to say that here is my objective,” he said. With this inversion, he has made it possible to do “complexity engineering,” as the physicist and cellular-automata researcher Stephen Wolfram proposed in 1986 — namely, to program the building blocks of a system so that they will self-assemble into whatever form you want. “Imagine you want to build a cathedral, but you don’t design a cathedral,” Mordvintsev said. “You design a brick. What shape should your brick be that, if you take a lot of them and shake them long enough, they build a cathedral for you?”

Such a brick sounds almost magical, but biology is replete with examples of basically that. A starling murmuration or ant colony acts as a coherent whole, and scientists have postulated simple rules that, if each bird or ant follows them, explain the collective behavior.  Similarly, the cells of your body play off one another to shape themselves into a single organism. NCAs are a model for that process, except that they start with the collective behavior and automatically arrive at the rules.

Alexander Mordvintsev created complex cell-based digital systems that use only neighbor-to-neighbor communication.

Courtesy of Alexander Mordvintsev

The possibilities this presents are potentially boundless. If biologists can figure out how Mordvintsev’s butterfly can so ingeniously regenerate a wing, maybe doctors can coax our bodies to regrow a lost limb. For engineers, who often find inspiration in biology, these NCAs are a potential new model for creating fully distributed computers that perform a task without central coordination. In some ways, NCAs may be innately better at problem-solving than neural networks.

Life’s Dreams

Mordvintsev was born in 1985 and grew up in the Russian city of Miass, on the eastern flanks of the Ural Mountains. He taught himself to code on a Soviet-era IBM PC clone by writing simulations of planetary dynamics, gas diffusion and ant colonies. “The idea that you can create a tiny universe inside your computer and then let it run, and have this simulated reality where you have full control, always fascinated me,” he said.

He landed a job at Google’s lab in Zurich in 2014, just as a new image-recognition technology based on multilayer, or “deep,” neural networks was sweeping the tech industry. For all their power, these systems were (and arguably still are) troublingly inscrutable. “I realized that, OK, I need to figure out how it works,” he said.

He came up with “deep dreaming,” a process that takes whatever patterns a neural network discerns in an image, then exaggerates them for effect. For a while, the phantasmagoria that resulted — ordinary photos turned into a psychedelic trip of dog snouts, fish scales and parrot feathers — filled the internet. Mordvintsev became an instant software celebrity.

Among the many scientists who reached out to him was Michael Levin of Tufts University, a leading developmental biologist. If neural networks are inscrutable, so are biological organisms, and Levin was curious whether something like deep dreaming might help to make sense of them, too. Levin’s email reawakened Mordvintsev’s fascination with simulating nature, especially with cellular automata.

From a single cell, this neural cellular automaton transforms into the shape of a lizard.

The core innovation made by Mordvintsev, Levin and two other Google researchers, Ettore Randazzo and Eyvind Niklasson, was to use a neural network to define the physics of the cellular automaton. In the Game of Life (or just “Life” as it’s commonly called), each cell in the grid is either alive or dead and, at each tick of the simulation clock, either spawns, dies or stays as is. The rules for how each cell behaves appear as a list of conditions: “If a cell has more than three neighbors, it dies,” for example. In Mordvintsev’s system, the neural network takes over that function. Based on the current condition of a cell and its neighbors, the network tells you what will happen to that cell. The same type of network is used to classify an image as, say, a dog or cat, but here it classifies the state of cells. Moreover, you don’t need to specify the rules yourself; the neural network can learn them during the training process.
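
To make the contrast concrete, here is a minimal sketch of that kind of condition-list rule, Conway's Game of Life, in Python: a convolution counts each cell's neighbours and a pair of conditions decides its fate. It is illustrative only and not code from the paper; in an NCA, a trained neural network stands in for the life_step function below.

# Classic rule-based cellular automaton (Conway's Game of Life), written to
# contrast with the neural version described above. The "physics" is an
# explicit list of conditions; in an NCA, a small neural network replaces it.
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])  # counts the 8 neighbours of each cell

def life_step(grid: np.ndarray) -> np.ndarray:
    """Apply one tick of the Game of Life to a 2-D array of 0s and 1s."""
    neighbours = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    spawn = (grid == 0) & (neighbours == 3)
    return (survive | spawn).astype(grid.dtype)

# A glider, stepped a few times on a small toroidal grid.
grid = np.zeros((16, 16), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid.sum())  # a glider always has exactly 5 live cells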

To start training, you seed the automaton with a single “live” cell. Then you use the network to update the cells over and over again for dozens to thousands of times. You compare the resulting pattern to the desired one. The first time you do this, the result will look nothing like what you intended. So you adjust the neural network’s parameters, rerun the network to see whether it does any better now, make further adjustments, and repeat. If rules exist that can generate the pattern, this procedure should eventually find them.

The adjustments can be made using either backpropagation, the technique that powers most modern deep learning, or a genetic algorithm, an older technique that mimics Darwinian evolution. Backpropagation is much faster, but it doesn’t work in every situation, and it required Mordvintsev to adapt the traditional design of cellular automata. Cell states in Life are binary — dead or alive — and transitions from one state to the other are abrupt jumps, whereas backpropagation demands that all transitions be smooth. So he adopted an approach developed by, among others, Bert Chan at Google’s Tokyo lab in the mid-2010s. Mordvintsev made the cell states continuous values, anything from 0 to 1, so they are never strictly dead or alive, but always somewhere in between.

Mordvintsev also found that he had to endow each cell with “hidden” variables, which do not indicate whether that cell is alive or dead, or what type of cell it is, but nonetheless guide its development. “If you don’t do that, it just doesn’t work,” he said. In addition, he noted that if all the cells updated at the same time, as in Life, the resulting patterns lacked the organic quality he was seeking. “It looked very unnatural,” he said. So he began to update at random intervals.
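
Those design choices fit together in a few dozen lines. The following Python sketch, using PyTorch, is a toy reconstruction rather than the published NCA code: cell states are continuous values clamped between 0 and 1, each cell carries extra hidden channels, a random mask makes updates asynchronous, and a training loop adjusts the network until a single seed cell grows into a target pattern (here, a disc). The grid size, channel count, learning rate and target are invented for illustration.

import torch
import torch.nn as nn

H = W = 24          # grid size
CH = 8              # channel 0 is the visible state; channels 1-7 are hidden
STEPS = 32          # automaton ticks per training run
FIRE_RATE = 0.5     # probability that a given cell updates on a given tick

# Target pattern: a filled disc in the visible channel.
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
target = (((ys - H / 2) ** 2 + (xs - W / 2) ** 2) < (H / 4) ** 2).float()

# The learned "physics": a small convolutional network that looks at each
# cell's 3x3 neighbourhood and proposes a change to that cell's state.
update_net = nn.Sequential(
    nn.Conv2d(CH, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, CH, kernel_size=1),
)

def run(net: nn.Module, steps: int) -> torch.Tensor:
    x = torch.zeros(1, CH, H, W)
    x[:, :, H // 2, W // 2] = 1.0                # seed: one "live" cell
    for _ in range(steps):
        delta = net(x)
        mask = (torch.rand(1, 1, H, W) < FIRE_RATE).float()   # random update intervals
        x = (x + delta * mask).clamp(0.0, 1.0)   # states stay continuous in [0, 1]
    return x

optimizer = torch.optim.Adam(update_net.parameters(), lr=2e-3)
for step in range(500):
    final = run(update_net, STEPS)
    loss = ((final[0, 0] - target) ** 2).mean()  # compare visible channel to target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(step, float(loss))

Real NCAs add refinements such as alive-masking and a pool of partially grown states to train from, but the loop above is the core idea: run the automaton, compare to the goal, adjust the rule, repeat.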

Finally, he made his neural network fairly beefy — 8,000 parameters. On the face of it, that seems perplexing. A direct translation of Life into a neural network would require just 25 parameters, according to simulations done in 2020 by Jacob Springer, who is now a doctoral student at Carnegie Mellon University, and Garrett Kenyon of Los Alamos National Laboratory. But deep learning practitioners often have to supersize their networks, because learning to perform a task is harder than actually performing it.

Moreover, extra parameters mean extra capability. Although Life can generate immensely rich behaviors, Mordvintsev’s monsters reached another level entirely.

Fixer Upper

The paper that introduced NCAs to the world in 2020 included an applet that generated the image of a green lizard. If you swept your mouse through the lizard’s body, you left a trail of erased pixels, but the animal pattern soon rebuilt itself. The power of NCAs not just to create patterns, but to re-create them if they got damaged, entranced biologists. “NCAs have an amazing potential for regeneration,” said Ricard Solé of the Institute of Evolutionary Biology in Barcelona, who was not directly involved in the work.

The butterfly and lizard images are not realistic animal simulations; they do not have hearts, nerves or muscles. They are simply colorful patterns of cells in the shape of an animal. But Levin and others said they do capture key aspects of morphogenesis, the process whereby biological cells form themselves into tissues and bodies. Each cell in a cellular automaton responds only to its neighbors; it does not fall into place under the direction of a master blueprint. Broadly, the same is true of living cells. And if cells can self-organize, it stands to reason that they can self-reorganize.

Cut off the tail of an NCA lizard and the form will regenerate itself.

Sometimes, Mordvintsev found, regeneration came for free. If the rules shaped single pixels into a lizard, they also shaped a lizard with a big gash through it into an intact animal again. Other times, he expressly trained his network to regenerate. He deliberately damaged a pattern and tweaked the rules until the system was able to recover. Redundancy was one way to achieve robustness. For example, if trained to guard against damage to the animal’s eyes, a system might grow backup copies. “It couldn’t make eyes stable enough, so they started proliferating — like, you had three eyes,” he said.

"A kind of computer that looks like an NCA instead would be a vastly more efficient kind of computer." - Blaise Agüera y Arcas

Sebastian Risi, a computer scientist at the IT University of Copenhagen, has sought to understand what exactly gives NCAs their regenerative powers. One factor, he said, is the unpredictability that Mordvintsev built into the automaton through features such as random update intervals. This unpredictability forces the system to develop mechanisms to cope with whatever life throws at it, so it will take the loss of a body part in stride. A similar principle holds for natural species. “Biological systems are so robust because the substrate they work on is so noisy,” Risi said.

Last year, Risi, Levin and Ben Hartl, a physicist at Tufts and the Vienna University of Technology, used NCAs to investigate how noise leads to robustness. They added one feature to the usual NCA architecture: a memory. This system could reproduce a desired pattern either by adjusting the network parameters or by storing it pixel-by-pixel in its memory. The researchers trained it under various conditions to see which method it adopted.

If all the system had to do was reproduce a pattern, it opted for memorization; fussing with the neural network would have been overkill. But when the researchers added noise to the training process, the network came into play, since it could develop ways to resist noise. And when the researchers switched the target pattern, the network was able to learn it much more rapidly because it had developed transferable skills such as drawing lines, whereas the memorization approach had to start from scratch. In short, systems that are resilient to noise are more flexible in general.

Even if disturbed, the textures created by NCAs have the ability to heal themselves.

The researchers argued that their setup is a model for natural evolution. The genome does not prescribe the shape of an organism directly; instead, it specifies a mechanism that generates the shape. That enables species to adapt more quickly to new situations, since they can repurpose existing capabilities. “This can tremendously speed up an evolutionary process,” Hartl said.

Ken Stanley, an artificial intelligence researcher at Lila Sciences who has studied computational and natural evolution, cautioned that NCAs, powerful though they are, are still an imperfect model for biology. Unlike machine learning, natural evolution does not work toward a specific goal. “It’s not like there was an ideal form of a fish or something which was somehow shown to evolution, and then it figured out how to encode a fish,” he noted. So the lessons from NCAs may not carry over to nature.

Auto Code

In regenerating lost body parts, NCAs demonstrate a kind of problem-solving capability, and Mordvintsev argues that they could be a new model for computation in general. Automata may form visual patterns, but their cell states are ultimately just numerical values processed according to an algorithm. Under the right conditions, a cellular automaton is as fully general as any other type of computer.

The standard model of a computer, developed by John von Neumann in the 1940s, is a central processing unit combined with memory; it executes a series of instructions one after another. Neural networks are a second architecture that distributes computation and memory storage over thousands to billions of interconnected units operating in parallel. Cellular automata are like that, but even more radically distributed. Each cell is linked only to its neighbors, lacking the long-range connections that are found in both the von Neumann and the neural network architectures. (Mordvintsev’s neural cellular automata incorporate a smallish neural network into each cell, but cells still communicate only with their neighbors.)

"You are forcing it not to memorize that answer, but to learn a process to develop the solution." - Stefano Nichele

Long-range connections are a major power drain, so if a cellular automaton could do the job of those other systems, it would save energy. “A kind of computer that looks like an NCA instead would be a vastly more efficient kind of computer,” said Blaise Agüera y Arcas, the chief technology officer of the Technology and Society division at Google.

But how do you write code for such a system? “What you really need to do is come up with [relevant] abstractions, which is what programming languages do for von Neumann–style computation,” said Melanie Mitchell of the Santa Fe Institute. “But we don’t really know how to do that for these massively distributed parallel computations.”

A neural network is not programmed per se. The network acquires its function through a training process. In the 1990s Mitchell, Jim Crutchfield of the University of California, Davis, and Peter Hraber at the Santa Fe Institute showed how cellular automata could do the same. Using a genetic algorithm, they trained automata to perform a particular computational operation, the majority operation: If a majority of the cells are dead, the rest should die too, and if the majority are alive, all the dead cells should come back to life. The cells had to do this without any way to see the big picture. Each could tell how many of its neighbors were alive and how many were dead, but it couldn’t see beyond that. During training, the system spontaneously developed a new computational paradigm. Regions of dead or living cells enlarged or contracted, so that whichever predominated eventually took over the entire automaton. “They came up with a really interesting algorithm, if you want to call it an algorithm,” Mitchell said.
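
A toy version of that experiment shows what it means to score a candidate rule. The Python sketch below evaluates a one-dimensional binary automaton, defined by a lookup table over each cell's neighbourhood, on the majority task: random rows are evolved for a fixed number of steps, and the rule is judged on whether the whole row settles into the majority state. The radius, row width and trial count are arbitrary, and the rule shown is the naive local-majority rule rather than anything the genetic algorithm evolved; a genetic algorithm would mutate and recombine such tables and keep the best scorers.

import random

RADIUS = 1                       # each cell sees itself and one neighbour per side
NEIGHBOURHOOD = 2 * RADIUS + 1
RULE_SIZE = 2 ** NEIGHBOURHOOD   # 8 table entries for radius 1

def step(cells, rule):
    """One synchronous update of the whole row, with wrap-around edges."""
    n = len(cells)
    out = []
    for i in range(n):
        idx = 0
        for j in range(-RADIUS, RADIUS + 1):
            idx = (idx << 1) | cells[(i + j) % n]
        out.append(rule[idx])
    return out

def score(rule, trials=50, width=59, steps=100):
    """Fraction of random starting rows that the rule classifies correctly."""
    correct = 0
    for _ in range(trials):
        cells = [random.randint(0, 1) for _ in range(width)]
        majority = 1 if sum(cells) * 2 > width else 0
        for _ in range(steps):
            cells = step(cells, rule)
        if all(c == majority for c in cells):
            correct += 1
    return correct / trials

# The obvious guess, "copy whatever most of my neighbourhood is doing",
# performs poorly on this task, which is what made the evolved rules interesting.
local_majority = [1 if bin(i).count("1") * 2 > NEIGHBOURHOOD else 0
                  for i in range(RULE_SIZE)]
print(score(local_majority))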

She and her co-authors didn’t develop these ideas further, but Mordvintsev’s system has reinvigorated the programming of cellular automata. In 2020 he and his colleagues created an NCA that read handwritten digits, a classic machine learning test case. If you draw a digit within the automaton, the cells gradually change in color until they all have the same color, identifying the digit. This year, Gabriel Béna of Imperial College London and his co-authors, building on unpublished work by the software engineer Peter Whidden, created algorithms for matrix multiplication and other mathematical operations. “You can see by eye that it’s learned to do actual matrix multiplication,” Béna said.

Stefano Nichele, a professor at Østfold University College in Norway who specializes in unconventional computer architectures, and his co-authors recently adapted NCAs to solve problems from the Abstraction and Reasoning Corpus, a machine learning benchmark aimed at measuring progress toward general intelligence. These problems look like a classic IQ test. Many consist of pairs of line drawings; you have to figure out how the first drawing is transformed into the second and then apply that rule to a new example. For instance, the first might be a short diagonal line and the second a longer diagonal line, so the rule is to extend the line.

Neural networks typically do horribly, because they are apt to memorize the arrangement of pixels rather than extract the rule. A cellular automaton can’t memorize because, lacking long-range connections, it can’t take in the whole image at once. In the above example, it can’t see that one line is longer than the other. The only way it can relate them is to go through a process of growing the first line to match the second. So it automatically discerns a rule, and that enables it to handle new examples. “You are forcing it not to memorize that answer, but to learn a process to develop the solution,” Nichele said.

Other researchers are starting to use NCAs to program robot swarms. Robot collectives were envisioned by science fiction writers such as Stanisław Lem in the 1960s and started to become reality in the ’90s. Josh Bongard, a robotics researcher at the University of Vermont, said NCAs could design robots that work so closely together that they cease to be a mere swarm and become a unified organism. “You imagine, like, a writhing ball of insects or bugs or cells,” he said. “They’re crawling over each other and remodeling all the time. That’s what multicellularity is really like. And it seems — I mean, it’s still early days — but it seems like that might be a good way to go for robotics.”

To that end, Hartl, Levin and Andreas Zöttl, a physicist at the University of Vienna, have trained virtual robots — a string of beads in a simulated pond — to wriggle like a tadpole. “This is a super-robust architecture for letting them swim,” Hartl said.

For Mordvintsev, the crossover between biology, computers and robots continues a tradition dating to the early days of computing in the 1940s, when von Neumann and other pioneers freely borrowed ideas from living things. “To these people, the relation between self-organization, life and computing was obvious,” he said. “Those things somehow diverged, and now they are being reunified.”

The rise and fall of the standard user interface

Mike's Notes

An overview of the development of the standard user interface by Liam Proven. There are many links in the original article. The Register is a good read. The three references are useful.

Resources

References

  • Common User Access - Principles of User Interface Design, IBM. 1989.
  • Human Interface Guidelines: The Apple Desktop Interface. Apple. 1987.
  • Principles of User Interface Design: Common User Access. 2007.

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The Register
  • Home > Handbook > 

Last Updated

13/09/2025

The rise and fall of the standard user interface

By: Liam Proven
The Register: 24/01/2024

Liam is an EMEA-based Register journalist covering free-and-open-source software (FOSS), operating systems, and cloud developments. Prior to joining our publisher, he had decades of experience in the worlds of IT and publishing, with roles ranging from tech support and IT manager to teacher, technical writer, and software director.

IBM's SAA and CUA brought harmony to software design… until everyone forgot

Retro Tech Week In the early days of microcomputers, everyone just invented their own user interfaces, until an Apple-influenced IBM standard brought about harmony. Then, sadly, the world forgot.

In 1981, the IBM PC arrived and legitimized microcomputers as business tools, not just home playthings. The PC largely created the industry that the Reg reports upon today, and a vast and chaotic market for all kinds of software running on a vast range of compatible computers. Just three years later, Apple launched the Macintosh and made graphical user interfaces mainstream. IBM responded with an obscure and sometimes derided initiative called Systems Application Architecture, and while that went largely ignored, one part of it became hugely influential over how software looked and worked for decades to come.

One bit of IBM's vast standard described how software user interfaces should look and work – and largely by accident, that particular part caught on and took off. It didn't just guide the design of OS/2; it also influenced Windows, DOS and DOS apps, and pretty much all software that followed. So, for instance, the way almost every Linux desktop and GUI app works is guided by this now-forgotten doorstop of 1980s IBM documentation.

But its influence never reached one part of the software world: the Linux (and Unix) shell. Today, that failure is coming back to bite us. It's not the only reason – others lie in Microsoft marketing and indeed Microsoft legal threats, as well as the rise of the web and web apps, and of smartphones.

Culture clash

Although they have all blurred into a large and very confused whole now, 21st century software evolved out of two very different traditions. On one side, there are systems that evolved out of Unix, a multiuser OS designed for minicomputers: expensive machines, shared by a team or department, and used only via dumb text-only terminals on slow serial connections. At first, these terminals were teletypes – pretty much a typewriter with a serial port, physically printing on a long roll of paper. Devices with screens, glass teletypes, only came along later and at first faithfully copied the design – and limitations – of teletypes.

That evolution forced the hands of the designers of early Unix software: the early terminals didn't have cursor keys, or backspace, or modifier keys like Alt. (A fun aside: the system for controlling such keys, called Bucky bits, is another tiny part of the great legacy of Niklaus Wirth whose nickname while in California was "Bucky.") So, for instance, one of the original glass teletypes, the Lear-Siegler ADM3A, is the reason for Vi's navigation keys, and ~ meaning the user's home directory on Linux.

When you can't freely move the cursor around the screen, or redraw isolated regions of the screen, it's either impossible or extremely slow to display menus over the top of the screen contents, or have them change to reflect the user navigating the interface.

The other type of system evolved out of microcomputers: inexpensive, standalone, single-user computers, usually based on the new low-end tech of microprocessors. Mid-1970s microprocessors were fairly feeble, eight-bit things, meaning they could only handle a maximum of 64kB of RAM. One result was tiny, very simple OSes. Another was that most micros had their own display and keyboard, directly attached to the CPU, which was not only cheaper but also faster. That meant video games, and that meant pressure to get a graphical display, even if a primitive one.

The first generation of home computers, from Apple, Atari, Commodore, Tandy, plus Acorn and dozens of others – all looked different, worked differently, and were totally mutually incompatible. It was the Wild West era of computing, and that was just how things were. Worse, there was no spare storage for luxuries like online help.

However, users were free to move the cursor around the screen and even edit the existing contents. Free from the limitations of being on the end of a serial line that only handled (at the most) some thousands of bits per second, apps and games could redraw the screen whenever needed. Meanwhile, even when micros were attached to bigger systems as terminals, over on Unix, design decisions that had been made to work around these limitations of glass teletypes still restricted how significant parts of the OS worked – and that's still true in 2024.

Universes collide

By the mid-1980s, the early eight-bit micros begat a second generation of 16-bit machines. In user interface design, things began to settle on some agreed standards of how stuff worked… largely due to the influence of Apple and the Macintosh.

Soon after Apple released the Macintosh, it published a set of guidelines for how Macintosh apps should look and work, to ensure that they were all similar enough to one another to be immediately familiar. You can still read the 1987 edition online [PDF].

This had a visible influence on the 16-bit generation of home micros, such as Commodore's Amiga, Atari's ST, and Acorn's Archimedes. (All right, except the Sinclair QL, but it came out before the Macintosh.) They all have reasonable graphics and sound, a floppy drive, and came with a mouse as standard. They all aped IBM's second-generation keyboard layout, too, which was very different from the original PC keyboard.

But most importantly, they all had some kind of graphical desktop – the famed WIMP interface. All had hierarchical menus, a standard file manager, copy-and-paste between apps: things that we take for granted today, but which in 1985 or 1986 were exciting and new. Common elements, such as standard menus and dialog boxes, were often reminiscent of MacOS.

One of the first graphical desktops to conspicuously imitate the Macintosh OS was Digital Research's GEM. The PC version was released in February 1985, and Apple noticed the resemblance and sued, which led to PC GEM being crippled. Fortunately for Atari ST users, when that machine followed in June, its version of GEM was not affected by the lawsuit.

The ugly duckling

Second-generation, 16-bit micros looked better and worked better – all except for one: the poor old IBM PC-compatible. These dominated business, and sold in the millions, but mid-1980s versions still had poor graphics, and no mouse or sound chip as standard. They came with text-only OSes: for most owners, PC or MS-DOS. For a few multiuser setups doing stock control, payroll, accounts and other unglamorous activities, DR's Concurrent DOS or SCO Xenix. Microsoft offered Windows 1 and then 2, but they were ugly, unappealing, had few apps, and didn't sell well.

This is the market IBM tried to transform in 1987 with its new PS/2 range of computers, which set industry standards that kept going well into this century: VGA and SVGA graphics, high-density 1.4MB 3.5 inch floppy drives, and a new common design for keyboard and mouse connectors – and came with both ports as standard.

IBM also promoted a new OS it had co-developed with Microsoft, OS/2, which we looked at 25 years on. OS/2 did not conquer the world, but as mentioned in that article, one aspect of OS/2 did: part of IBM's Systems Application Architecture. SAA was an ambitious effort to define how computers, OSes and apps could communicate, and in IBM mainframe land, a version is still around. One small part of SAA did succeed, a part called Common User Access. (The design guide mentioned in that blog post is long gone, but the Reg FOSS desk has uploaded a converted version to the Internet Archive.)

CUA proposed a set of principles on how to design a user interface: not just for GUI or OS/2 apps, but all user-facing software, including text-only programs, even on mainframe terminals. CUA was, broadly, IBM's version of Apple's Human Interface Guidelines – but it cautiously proposed a slightly different interface, as it was published around the time of multiple look-and-feel lawsuits, such as Apple versus Microsoft, Apple versus Digital Research, and Lotus versus Paperback.

CUA advised a menu bar, with a standard set of single-word menus, each with a standard basic set of options, and standardized dialog boxes. It didn't assume the computer had a mouse, so it defined standard keystrokes for opening and navigating menus, as well as for near-universal operations such as opening, saving and printing files, cut, copy and paste, accessing help, and so on.

There is a good summary of CUA design in this 11-page ebook, Principles of UI Design [PDF]; it's from 2007 and has a slight Windows XP flavor to the pictures.

CUA brought SAA-nity to DOS

Windows 3.0 was released in 1990 and precipitated a transformation in the PC industry. For the first time, Windows looked and worked well enough that people actually used it from choice. Windows 3's user interface – it didn't really have a desktop as such – was borrowed directly from OS/2 1.2, from Program Manager and File Manager, right down to the little fake-3D shaded minimize and maximize buttons. Its design is straight by the CUA book.

Even so, Windows took a while to catch on. Many PCs were not that high-spec. If you had a 286 PC, it could use a megabyte or more of memory. If you had a 386 with 2MB of RAM, it could run in 386 Enhanced Mode, and not merely multitask DOS apps but also give them 640kB each. But for comparison, this vulture's work PC in 1991 only had 1MB of RAM, and the one before that didn't have a mouse, like many late-1980s PCs.

As a result, DOS apps continued to be the best sellers. The chart-topping PC apps of 1990 were WordPerfect v5.1 and Lotus 1-2-3 v2.2.

Lotus 1-2-3 screenshot

Lotus 1-2-3 was the original PC killer app and is a good example of a 1980s user interface. It had a two-line menu at the top of the screen, opened with the slash key. File was the fifth option, so to open a file, you pressed /, f, then r for Retrieve.

Microsoft Word for DOS also had a two-line menu, but at the bottom of the screen, with file operations under Transfer. So, in Word, the same operation used the Esc key to open the menus, then t, then l for Load.

JOE, running on macOS 12, is a flashback to WordStar in the 1980s.

Pre-WordPerfect hit word-processor WordStar used Ctrl plus letters, and didn't have a shortcut for opening a file, so you needed Ctrl+k, d, then pick a file and press d again to open a Document. For added entertainment, different editions of WordStar used totally different keystrokes: WordStar 2000 had a whole new interface, as did WordStar Express, known to many Amstrad PC owners as WordStar 1512.

The word processor that knocked WordStar off the Number One spot was the previous best-selling version of WordPerfect, 4.2. WordPerfect used the function keys for everything, to the extent that its keyboard template acted as a sort of copy-protection: it was almost unusable without one. (Remarkably, they are still on sale.) To open a file in WordPerfect, you pressed F7 for the full-screen File menu, then 3 to open a document. The big innovation in WordPerfect 5 was that, in addition to the old UI, it also had CUA-style drop-down menus at the top of the screen, which made it much more usable. For many fans, WordPerfect 5.1 remains the classic version to this day.

Every main DOS application had its own, unique user interface, and nothing was sacred. While F1 was Help in many programs, WordPerfect used F3 for that. Esc was often some form of Cancel, but WordPerfect used it to repeat a character.

With every app having a totally different UI, even knowing one inside-out didn't help in any other software. Many PC users mainly used one program and couldn't operate anything else. Some software vendors encouraged this, as it helped them sell companion apps with compatible interfaces – for example, WordPerfect vendors SSI also offered a database called DataPerfect, while Lotus Corporation offered a 1-2-3 compatible word processor, Lotus Manuscript.

WordPerfect 7 for UNIX, running perfectly happily in a Linux terminal in 2022

CUA came to this chaotic landscape and imposed a sort of ceasefire. Even if you couldn't afford a new PC able to run Windows well, you could upgrade your apps with new versions with this new, standardized UI. Microsoft Word 5.0 for DOS had the old two-line menus, but Word 5.5 had the new look. (It's now a free download and the curious can try it.) WordPerfect adopted the menus in addition to its old UI, so experienced users could just keep going while newbies could explore and learn their way around gradually.

Borland acquired the Paradox database and grafted on a new UI based on its TurboVision text-mode windowing system, loved by many from its best-selling development tools – as dissected in this excellent recent retrospective, The IDEs we had 30 years ago… and we lost.

The chaos creeps back

IBM's PS/2 range brought better graphics, but Windows 3 was what made them worth having, and its successor Windows 95 ended up giving the PC a pretty good GUI of its own. In the meantime, though, IBM's CUA standard brought DOS apps into the 1990s and caused vast improvements in usability: what IBM's guide called a "walk up and use" design, where someone who has never seen a program before can operate it first time.

The impacts of CUA weren't limited to DOS. The first ever cross-Unix graphical desktop, the Open Group's Common Desktop Environment, uses a CUA design. Xfce, the oldest Linux desktop of all, was modelled on CDE, so it sticks to CUA, even now.

Released one year after Xfce, KDE was based on a CUA design, but its developers seem to be forgetting that. In recent versions, some components no longer have menu bars. KDE also doesn't honour Windows-style shortcuts for window management and so on. GNOME and GNOME 2 were largely CUA-compliant, but GNOME 3 famously eliminated most of that… which opened up a window of opportunity for Linux Mint, which methodically put that UI back.

For their first decade, graphical Linux desktop environments all looked and worked pretty much like Windows. The first version of Ubuntu was released in 2004, arguably the first decent desktop Linux distro that was free of charge, which put GNOME 2 in front of a lot of new users.

Microsoft, of course, noticed. The Reg had already warned of future patent claims against Linux in 2003. In 2006, it began, with fairly general statements. In 2007, Microsoft started counting the patents that it claimed Linux desktops infringed, although it was too busy to name them.

Over a decade ago, The Reg made the case that the profusion of new, less-Windows-like desktops such as GNOME 3 and Unity was a direct result of this. Many other new environments have also come about since then, including a profusion of tiling window managers – the Arch wiki lists 14 for X.org and another dozen for Wayland.

These are mostly tools for Linux (and, yes, other FOSS Unix-like OS) users juggling multiple terminal windows. The Unix shell is a famously rich environment: hardcore shell users find little reason to leave it, except for web browsing.

And this environment is the one place where CUA never reached. There are many reasons. One is that tools such as Vi and Emacs were already well-established by the 1980s, and those traditions continued into Linux. Another is, as we said earlier, that tools designed for glass-teletype terminals needed different UIs, which have now become deeply entrenched.

Aside from Vi and Emacs, most other common shell editors don't follow CUA, either. The popular Joe uses classic WordStar keystrokes. Pico and Nano have their own.

Tilde

It's not that they don't exist. They do, they just never caught on, despite plaintive requests. Midnight Commander's mcedit is a stab in the general direction. The FOSS desk favourite Tilde is CUA, and as 184 comments imply, that's controversial.

The old-style tools could be adapted perfectly well. A decade ago, a project called Cream made Vim CUA-compliant; more recently, the simpler Modeless Vim delivers some of that. GNU Emacs' built-in cua-mode does almost nothing to modify the editor's famously opaque UI, but ErgoEmacs does a much better job. Even so, these remain tiny, niche offerings.

The problem is that developers who grew up with these pre-standardization tools, combined with various keyboardless fondleslabs where such things don't exist, don't know what CUA means. If someone's not even aware there is a standard, then the tools they build won't follow it. As the trajectories of KDE and GNOME show, even projects that started out compliant can drift in other directions.

This doesn't just matter for grumpy old hacks. It also disenfranchises millions of disabled computer users, especially blind and visually-impaired people. You can't use a pointing device if you can't see a mouse pointer, but Windows can be navigated 100 per cent keyboard-only if you know the keystrokes – and all blind users do. Thanks to the FOSS NVDA tool, there's now a first-class screen reader for Windows that's free of charge.

Most of the same keystrokes work in Xfce, MATE and Cinnamon, for instance. Where some are missing, such as the Super key not opening the Start menu, they're easily added. This also applies to environments such as LXDE, LXQt and so on.

Indeed, as we've commented before, the Linux desktop lacks diversity of design, but where you find other designs, the price is usually losing the standard keyboard UI. This is not necessary or inevitable: for instance, most of the CUA keyboard controls worked fine in Ubuntu's old Unity desktop, despite its Mac-like appearance. It's one of the reasons we still like it.

Menu bars, dialog box layouts, and standard keystrokes to operate software are not just some clunky old 1990s design to be casually thrown away. They were the result of millions of dollars and years of R&D into human-computer interfaces, a large-scale effort to get different types of computers and operating systems talking to one another and working smoothly together. It worked, and it brought harmony in place of the chaos of the 1970s and 1980s and the early days of personal computers. It was also a vast step forward in accessibility and inclusivity, opening computers up to millions more people.

Just letting it fade away due to ignorance and the odd traditions of one tiny subculture among computer users is one of the biggest mistakes in the history of computing.

Footnote

Yes, we didn't forget Apple kit. MacOS comes with the VoiceOver screen reader built in, but it imposes a whole new, non-CUA interface, so you can't really use it alongside a pointing device as Windows users can. As for VoiceOver on Apple fondleslabs, we don't recommend trying it. For a sighted user, it's the 21st century equivalent of setting the language on a mate's Nokia to Japanese or Turkish.

Namespace Engine key to RBAC

Mike's Notes

After a bit of experimentation, it turns out the Namespace Engine (nsp) is key to reliably implementing RBAC globally.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

12/09/2025

Namespace Engine key to RBAC

By: Mike Peters
On a Sandy Beach: 12/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

This afternoon, I figured out that the Namespace Engine (nsp) already has a way to register the interfaces of every agent that is automatically built by the Factory Engine (fac). This includes industry domain-based applications, such as Websites, Health and Rail.

I added additional Interface Class Types, "Role" and "Policy", and solved the problem of making this global.

Roles

This allowed for the rapid addition of roles to any autonomous agent.

Examples

  • Website Owner
  • Website Administrator
  • Website Editor
  • Website Visitor
  • Website Search Engine
  • etc

This automatically generates security role names used by the Security Engine (scr).

Policy

This allowed for the rapid addition of policy to any autonomous agent.

Examples

  • CNAME Record
  • Website Hosting
  • Patient Record
  • etc

This automatically generates security policy names used by the Security Engine (scr).

Integration

This would also enable configuration storage in XML or other open formats for interchange and documentation.

This configuration system could be used for open-source SaaS applications.
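
As a thought experiment, the mechanism might look something like the Python sketch below: an agent's namespace registers "Role" and "Policy" interface class types, derives global security names for the Security Engine (scr), and serialises the result to XML for interchange. Every class, function and naming convention here is hypothetical and invented for illustration; none of it is actual Pipi code.

from dataclasses import dataclass, field
from xml.etree import ElementTree as ET

@dataclass
class AgentNamespace:
    """Hypothetical stand-in for a Namespace Engine (nsp) registration."""
    agent: str                                      # e.g. "Website"
    roles: list[str] = field(default_factory=list)
    policies: list[str] = field(default_factory=list)

    def register(self, class_type: str, name: str) -> str:
        """Register a Role or Policy entry and return a global security name."""
        (self.roles if class_type == "Role" else self.policies).append(name)
        return f"{self.agent}.{class_type}.{name}".lower().replace(" ", "_")

    def to_xml(self) -> str:
        """Serialise the registration for interchange and documentation."""
        root = ET.Element("namespace", agent=self.agent)
        for role in self.roles:
            ET.SubElement(root, "role", name=role)
        for policy in self.policies:
            ET.SubElement(root, "policy", name=policy)
        return ET.tostring(root, encoding="unicode")

website = AgentNamespace("Website")
print(website.register("Role", "Website Editor"))   # website.role.website_editor
print(website.register("Policy", "CNAME Record"))   # website.policy.cname_record
print(website.to_xml())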

Useful CF Blogs

Mike's Notes

Today I filled in the State of the CF Union survey 2025 at TeraTech. One of the survey questions was

  • What CF blogs do you read (Check all that apply)?

It was a great question, especially because it listed some CF blogs I didn't know about. Here is that list, plus a few more.

Thanks, Michaela.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

11/09/2025

Useful CF Blogs

By: Mike Peters
On a Sandy Beach: 11/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Pipi 9 is built in CFML code (Pipi 10 will be CFML + BoxLang). This is a list of useful blogs about CFML (ColdFusion Markup Language). I will update this list as I find more.

RBAC Policies

Mike's Notes

I'm building the Policy part of RBAC

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

11/09/2025

RBAC Policies

By: Mike Peters
On a Sandy Beach: 10/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

I'm loosely copying how Windows implements RBAC, mainly to understand how it works. A small sketch of the resulting policy object follows the steps below.

From Windows Learn (without the pictures).

To create an access policy

  1. In Server Manager, click IPAM. The IPAM client console appears.
  2. In the navigation pane, click ACCESS CONTROL. In the lower navigation pane, right-click Access Policies, and then click Add Access Policy.
  3. The Add Access Policy dialog box opens. In User Settings, click Add.
  4. The Select User or Group dialog box opens. Click Locations.
  5. The Locations dialog box opens. Browse to the location that contains the user account, select the location, and then click OK. The Locations dialog box closes.
  6. In the Select User or Group dialog box, in Enter the object name to select, type the user account name for which you want to create an access policy. Click OK.
  7. In Add Access Policy, in User Settings, User alias now contains the user account to which the policy applies. In Access Settings, click New.
  8. In Add Access Policy, Access Settings changes to New Setting.
  9. Click Select role to expand the list of roles. Select one of the built-in roles or, if you have created new roles, select one of the roles that you created. For example, if you created the IPAMSrv role to apply to the user, click IPAMSrv.
  10. Click Add Setting.
  11. The role is added to the access policy. To create additional access policies, click Apply, and then repeat these steps for each policy that you want to create. If you do not want to create additional policies, click OK.
  12. In the IPAM client console display pane, verify that the new access policy is created.
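
Stripped of the wizard, what those twelve steps build is a small data structure: an access policy that binds a user or group to one or more roles, each carrying a set of permitted operations. The Python sketch below models that shape for study only; it is not the Windows or IPAM implementation, and the role and operation names are illustrative.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    name: str
    operations: frozenset[str]        # what holders of this role may do

@dataclass
class AccessPolicy:
    principal: str                    # user alias or group, e.g. "CONTOSO\\user1"
    roles: list[Role] = field(default_factory=list)

    def allows(self, operation: str) -> bool:
        return any(operation in role.operations for role in self.roles)

# Rough equivalent of steps 3-11: pick a user, attach a role, save the policy.
ipam_role = Role("IPAMSrv", frozenset({"view servers", "edit dns records"}))
policy = AccessPolicy("CONTOSO\\user1", [ipam_role])
print(policy.allows("edit dns records"))   # True
print(policy.allows("delete scopes"))      # False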

RBAC causes a change to the Deployment Engine

Mike's Notes

Building out RBAC is leading to other changes in Pipi.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

12/09/2025

RBAC causes a change to the Deployment Engine

By: Mike Peters
On a Sandy Beach: 09/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Using RBAC to redesign the Pipi security model has led to some changes in other Engines. 

User Account

A User Account has one or more Deployments. 

Deployment

A Deployment is a container for one or more Workspaces. A deployment has these properties;

  • ID
  • Code Name
  • Name
  • Description
  • One language (eg English)
  • One User Account
  • One Deployment Class (type of tenancy)

Those properties are inherited by all workspaces.

Workspace

A workspace has these properties;

  • ID
  • Code Name
  • Name
  • Description
  • One inherited language (eg English)
  • One inherited User Account
  • One Deployment
  • One pre-built Domain Model (eg, Screen Production).
  • One Domain Model Template (eg, Feature Film, Documentary, Live Broadcast). These templates can be customised and shared.

Those properties are inherited. This means each Workspace/Domain Model comes with its own set of prebuilt Security Roles and Security Profiles (sketched in code at the end of this post).

Security Role

A security role has these properties;

  • ID
  • Code Name
  • Name
  • Description
  • One User Account Class (eg, Pipi, Enterprise, DevOps, SME, User)
  • One language (eg, English, French, Hebrew)
  • One Domain Model (eg, health, website, sewerage, public transport, art gallery).

Domain Model

Here is an example of Domain Model properties;

  • Domain Model/Workspace: Website
  • Domain Model Function: create CNAME, delete CNAME
  • Security Role: Webmaster, Editor, Visitor
  • Security Profile: DNS Registry, Website, Design System, CDN
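
To make the inheritance explicit, the sketch below restates these entities as plain Python data classes. The field names follow the lists in this post, but the structure, defaults and example values are my own guesses, not the actual Deployment Engine schema.

from dataclasses import dataclass

@dataclass
class Deployment:
    id: str
    code_name: str
    name: str
    description: str
    language: str            # e.g. "English"
    user_account: str
    deployment_class: str    # type of tenancy

@dataclass
class Workspace:
    id: str
    code_name: str
    name: str
    description: str
    deployment: Deployment
    domain_model: str                # e.g. "Screen Production"
    domain_model_template: str       # e.g. "Feature Film"

    @property
    def language(self) -> str:       # inherited from the parent Deployment
        return self.deployment.language

    @property
    def user_account(self) -> str:   # inherited from the parent Deployment
        return self.deployment.user_account

@dataclass
class SecurityRole:
    id: str
    code_name: str
    name: str
    description: str
    user_account_class: str          # e.g. "Enterprise"
    language: str                    # e.g. "English"
    domain_model: str                # e.g. "website"

# Example: a Screen Production workspace inherits language and account.
dep = Deployment("d1", "studio", "Studio", "Example tenancy", "English", "acct-42", "single-tenant")
ws = Workspace("w1", "film-1", "Feature Production", "An example workspace", dep,
               "Screen Production", "Feature Film")
print(ws.language, ws.user_account)   # English acct-42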

Role-Based Access Control (RBAC)

Mike's Notes

I'm revisiting the role-based access control (RBAC) in Pipi.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

13/09/2025

Role-Based Access Control (RBAC)

By: Mike Peters
On a Sandy Beach: 08/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Pipi 4

Pipi utilised a simple role-based access control (RBAC) system. It enforced a change of passwords for both admins and users. The old data model was similar to the one depicted in this diagram.

Pipi 9

RBAC administration of user accounts must scale from very simple setups to very large ones. Something much more robust is required.

Requirements

Entities;

  • Users
  • Policy
  • Roles
  • Permissions
  • Groups
  • Objects
  • Sessions
  • Join tables

Roles;

  • In a hierarchy.
  • Separation of duties by allowing and denying access.
  • Fine-grained.

Uses;

  • Pipi as an ecosystem and an individual system.
  • Each account
  • Organisational structures within an account
  • Shares
  • Individual users
  • The public

The RBAC needs to be automatically logged, tested and audited.
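
As a starting point, those requirements can be sketched in a few lines of Python: roles in a hierarchy, fine-grained permissions, explicit deny entries for separation of duties, and an audit log entry for every decision. This is a generic illustration of the shape of the problem, not a design for the Security Engine (scr).

from dataclasses import dataclass, field
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("rbac.audit")

@dataclass
class Role:
    name: str
    allow: set[str] = field(default_factory=set)   # permissions granted
    deny: set[str] = field(default_factory=set)    # permissions explicitly refused
    parent: "Role | None" = None                   # role hierarchy

    def chain(self):
        role = self
        while role is not None:
            yield role
            role = role.parent

def check(user: str, role: Role, permission: str, obj: str) -> bool:
    """An explicit deny anywhere in the chain wins; otherwise any allow grants access."""
    allowed = False
    for r in role.chain():
        if permission in r.deny:
            audit.info("DENY %s %s on %s (denied by %s)", user, permission, obj, r.name)
            return False
        if permission in r.allow:
            allowed = True
    audit.info("%s %s %s on %s", "ALLOW" if allowed else "DENY", user, permission, obj)
    return allowed

# Example: an Editor inherits from Visitor but is denied account administration.
visitor = Role("Visitor", allow={"page:read"})
editor = Role("Editor", allow={"page:write"}, deny={"account:admin"}, parent=visitor)
print(check("mike", editor, "page:read", "home"))      # True, inherited from Visitor
print(check("mike", editor, "account:admin", "site"))  # False, explicit deny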

Fuzz Testing

Mike's Notes

Automated self-testing is needed for Pipi. Something like Chaos Monkey would be great. These are working notes as I learn and do some experiments.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

07/09/2025

Fuzz Testing

By: Mike Peters
On a Sandy Beach: 07/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Pipi is a self-organising system that can autonomously adapt. This makes it difficult to integrate external standard software tools. However, it requires robust testing to make it tough and reliable. One way might be to deliberately provide a hostile environment for Pipi.

Ecosystem Testing

A copy of Pipi could attack other copies as a way to automate testing. Both the hunter and the hunted could have an arms race, forcing evolutionary-driven survival. This might work well with the internal evolutionary algorithms.

Survivors would go into production.

Random Testing

"Random testing is a black-box software testing technique where programs are tested by generating random, independent inputs. Results of the output are compared against software specifications to verify that the test output is pass or fail. In case of absence of specifications the exceptions of the language are used which means if an exception arises during test execution then it means there is a fault in the program, it is also used as a way to avoid biased testing." - Wikipedia

Monkey Testing

"In software testing, monkey testing is a technique where the user tests the application or system by providing random inputs and checking the behaviour, or seeing whether the application or system will crash. Monkey testing is usually implemented as random, automated unit tests." - Wikipedia

Fuzz Testing

"In programming and software development, fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, such as in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to expose corner cases that have not been properly dealt with.

For the purpose of security, input that crosses a trust boundary is often the most useful. For example, it is more important to fuzz code that handles a file uploaded by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user." - Wikipedia
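
To make the idea concrete, here is a tiny Python harness in the spirit of those definitions: it feeds a mix of raw noise and semi-valid inputs to a stand-in parser and records anything that raises an exception outside the expected rejection types. A real harness for Pipi would target its actual input surfaces, or lean on an established coverage-guided fuzzer, rather than this toy.

import json
import random
import string

def target_parser(data: str) -> dict:
    """Stand-in for the code under test: parse a small JSON config."""
    config = json.loads(data)
    if not isinstance(config, dict):
        raise ValueError("config must be an object")
    return {"name": config["name"], "retries": int(config.get("retries", 0))}

def random_input(max_len: int = 40) -> str:
    """Half raw noise, half semi-valid inputs that get past the first parsing stage."""
    if random.random() < 0.5:
        return "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, max_len)))
    return json.dumps({random.choice(["name", "retries", "junk"]):
                       random.choice(["x", 3, None, []])})

def fuzz(runs: int = 10_000):
    failures = []
    for _ in range(runs):
        data = random_input()
        try:
            target_parser(data)
        except (json.JSONDecodeError, ValueError, KeyError, TypeError):
            pass                        # expected rejections of invalid input
        except Exception as exc:        # anything else is a finding worth logging
            failures.append((data, exc))
    return failures

print(len(fuzz()))   # number of unexpected failures (ideally zero for this toy)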

J.D.Ullman

Mike's Notes

References to the life's work of Stanford Professor J.D. Ullman on databases. He wrote excellent books and articles on other subjects as well. These are classic works. Includes a lot of free downloads.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Authors > J.D.Ullman
  • Home > Handbook > 

Last Updated

06/09/2025

J.D.Ullman

By: Mike Peters
On a Sandy Beach: 06/09/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

"Jeffrey David Ullman (born November 22, 1942) is an American computer scientist and the Stanford W. Ascherman Professor of Engineering, Emeritus, at Stanford University. His textbooks on compilers (various editions are popularly known as the dragon book), theory of computation (also known as the Cinderella book), data structures, and databases are regarded as standards in their fields. He and his long-time collaborator Alfred Aho are the recipients of the 2020 Turing Award, generally recognized as the highest distinction in computer science." - Wikipedia