What is Thermodynamic Computing and how does it help AI development?!

Mike's Notes

The reasoning behind this chip is the same as behind Pipi 9. Pipi 9 runs on noise.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

17/06/2025

What is Thermodynamic Computing and how does it help AI development?!

By: Laszlo Fazekas
Medium: 05/04/2024

The foundation of modern computing is the transistor, a miniature electronic switch from which logic gates can be constructed, creating complex digital circuits like CPUs or GPUs. With the advancement of technology, transistors have become progressively smaller. According to Moore’s Law, the number of transistors in integrated circuits approximately doubles every 2 years. This exponential growth has enabled the exponential development of computing technology. However, there is a limit to how much the size of transistors can be reduced; we will soon reach a threshold below which transistors cannot function. Moreover, the advancement of AI has made the need for increased computational capacity more critical than ever before.


Transistor count per year from https://en.wikipedia.org/wiki/Moore%27s_law

The fundamental issue is that nature is stochastic (unpredictable). And here, I’m not just referring to quantum mechanical effects. Environmental influences, thermal noise, and other disruptive factors must be considered when designing a circuit. For a transistor, the expectation is that it operates deterministically (predictably). If I run an algorithm 100 times in succession, I must get the same result every time. Currently, transistors are large enough that these factors do not interfere with their operation, but as their size is reduced, these issues will become increasingly relevant. So, what direction can technology take from here? The “usual” answer: quantum computers.

An image of a quantum computer from https://www.flickr.com/photos/ibm_research_zurich/50252942522

In fact, with quantum computers, we encounter the same issue: the need to eliminate environmental effects and thermal noise. This is why quantum computers must be cooled to temperatures near absolute zero. These extreme conditions preclude quantum processors from replacing today’s CPUs. But what could be the solution? It appears that to move forward, we must abandon our deterministic computers and embrace the stochastic nature of the world. This idea is not new. It’s several billion years old.

Educational videos often depict the functioning of cells as little factories, where everything operates with the precision of clockwork. Enzymes, like tiny robots, cut up DNA, to which amino acids attach, leading to the production of proteins. These proteins neatly interlock and, during cell division, separate from the old cell to form a new one. However, this is a highly simplified model. In reality, particles move entirely at random, and when the right components happen to come together, they bind. While human-made structures operate under strict rules, here processes form spontaneously under the compelling influence of physical and chemical laws. Of course, from a bird’s-eye view, the system might appear to function with the precision of a clockwork.

DNA replication from https://en.wikipedia.org/wiki/DNA

A very simple example is when we mix cold water with hot water. It would be impossible to track the random motion of each particle. Some particles move faster, while others move slower. Occasionally, particles collide and exchange energy. The system is entirely chaotic, requiring immense computational capacity to simulate. Despite this, we can accurately predict that after a short period, the water will reach a uniform temperature. This is also a simple self-organizing system that is very complex at the particle level, yet entirely predictable due to the laws of physics and the rules of statistics. Similarly, cell division becomes predictable as a result of complex chemical processes and random motion. Of course, errors can occur. The DNA may not copy correctly, mutations may develop, or other errors may occur. That’s why the system is highly redundant. Several processes will destroy the cell in case of an error (apoptosis), thus preventing faulty units from causing problems (or only very rarely, which is how diseases like cancer can develop).
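This predictability-from-randomness is easy to see in a toy simulation. The sketch below is a deliberately crude model (the particle counts, temperatures, and collision rule are all invented for illustration): random pairs of "particles" exchange energy, and the temperatures converge to a uniform value.

```python
import random

def mix(temps, steps=100_000, rng=random.Random(0)):
    """Toy model: repeatedly pick two random 'particles' and let
    a collision average out their energy (temperature)."""
    temps = list(temps)
    for _ in range(steps):
        i, j = rng.randrange(len(temps)), rng.randrange(len(temps))
        share = (temps[i] + temps[j]) / 2  # collision averages their energy
        temps[i] = temps[j] = share
    return temps

# 50 "cold" particles at 10 degrees, 50 "hot" particles at 90 degrees
water = [10.0] * 50 + [90.0] * 50
mixed = mix(water)
# Every individual collision is random, yet the mean (total energy)
# is conserved at 50 degrees and all particles end up near it.
```

Despite the chaos at the particle level, the average is conserved and the spread collapses, exactly as the statistical argument predicts.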

The energy consumption of a transistor can be comparable to the energy consumption of a cell, even though a cell is orders of magnitude more complex. Imagine the complex calculations we could perform with such low consumption if we carried them out in an analog manner, exploiting the laws of nature.

In biology, thermal noise is not only not a problem, but it is necessary. Below certain temperatures, biological systems are incapable of functioning. It is the random motion induced by heat that powers them.

The foundation of thermodynamic computing is similar. Instead of trying to eliminate the stochastic nature of physical processes, we utilize it. But what can be done with a computer whose operation is non-deterministic?

In fact, in the field of machine learning, there are many random components. For example, in the case of a neural network, the initial weights are randomly initialized. The dropout layer, which helps prevent overfitting, also randomly discards inputs. But at a higher level, for instance, diffusion models also use random noise for their operation. In the case of Midjourney, for example, the model was trained to generate images from random noise, taking into account the given instructions.

Here, a bit of noise is added to the image at every step until the entire image becomes noise. The neural network is then trained to reverse this process, that is, to generate an image from noise based on the given text. If the system is trained with enough images and text, it will be capable of generating images from random noise based on text. This is how Midjourney operates.
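The forward half of this process, progressively drowning an image in noise, can be sketched in a few lines. This is a simplified illustration only: real diffusion models use a carefully tuned noise schedule and operate on large image tensors, and the step count and blend factor here are arbitrary.

```python
import random

def add_noise(pixels, num_steps=10, noise_scale=0.3, rng=random.Random(42)):
    """Forward diffusion sketch: at each step, blend the image a
    little further toward Gaussian noise. After enough steps the
    original signal is essentially gone."""
    history = [list(pixels)]
    current = list(pixels)
    for _ in range(num_steps):
        current = [
            (1 - noise_scale) * p + noise_scale * rng.gauss(0.0, 1.0)
            for p in current
        ]
        history.append(list(current))
    return history

image = [1.0] * 16  # a trivially simple "image" of 16 pixels
steps = add_noise(image)
# Each step retains less of the original signal; a diffusion model is
# trained to run this process in reverse, denoising step by step.
```

After ten steps only about 0.7^10 (roughly 3%) of the original signal remains, which is why the final frames look like pure noise.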


Steps of Stable Diffusion from https://en.wikipedia.org/wiki/Stable_Diffusion

In current systems, we eliminate the random thermal noise to obtain deterministic transistors, and then on these deterministic transistors, we simulate randomness, which is necessary for the operation of neural networks. Instead of simulation, why not leverage nature’s randomness? The idea is similar to that of any analog computer. Instead of digitally simulating a given process, we should utilize the opportunities provided by nature and run it in an analog manner.

The startup Extropic is working on the development of such a chip. Like Google, the company was founded by two guys: Guillaume Verdon and Trevor McCourt. Both worked in the field of quantum computing before founding the company, and their chip lies somewhere halfway between traditional integrated circuits and quantum computers.

Extropic’s circuit works in an analog manner. The starting state is completely random, normally distributed thermal noise. By programming the circuit, this noise can be modified within each component. Analog weights take the place of transistors; these are noisy, but the outcome can be determined through statistical analysis of the output. The guys call this probabilistic computing.
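The principle of reading a reliable answer out of unreliable components can be demonstrated with a small simulation. To be clear, this is not Extropic’s actual circuit design, just the statistical idea: each individual "analog" operation is corrupted by noise, but averaging many of them recovers the true result.

```python
import random

def noisy_multiply(x, weight, rng, noise_std=0.5):
    """One 'analog' operation: the weight is applied imperfectly,
    corrupted by thermal-like Gaussian noise."""
    return x * weight + rng.gauss(0.0, noise_std)

def probabilistic_compute(x, weight, samples=10_000, seed=0):
    """Run the noisy operation many times and read the answer
    off the statistics of the output."""
    rng = random.Random(seed)
    total = sum(noisy_multiply(x, weight, rng) for _ in range(samples))
    return total / samples

# Any single reading is unreliable, but the average converges
# on the true product, x * weight.
estimate = probabilistic_compute(2.0, 3.0)  # true value: 6.0
```

The error of the average shrinks with the square root of the sample count, so trading a little accuracy for many cheap, noisy operations can still yield a dependable answer.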

Microscope image of an Extropic chip from https://www.extropic.ai/future

These analog circuits are much faster and consume much less energy, and since the thermal noise is not only non-disruptive but an essential component of the operation, they do not require the special conditions needed by quantum computers. The chips can be manufactured with existing production technology, so they could enter the commercial market within a few years.

As we have seen from the above, Extropic’s technology is very promising. However, what personally piqued my interest is that it is more biologically plausible. Of course, I don’t think that the neurons in artificial neural networks have anything to do with human brain neurons. These are two very different systems. However, the human brain does not learn through gradient descent. Biological learning is something entirely different, and randomness certainly plays a significant role in it.

As I mentioned, in biology and nature, everything operates randomly. What we see as deterministic at a high level is just what statistically stands out from many random events. This is how, for example, many living beings (including us humans) came to be through completely random evolution yet are built with almost engineering precision. I suspect that the human brain operates in a similar way to evolution. A multitude of random events within a suitably directed system, which we perceive from the outside as consistent thinking. This is why genetic algorithms were so intriguing to me, and now I see the same principle in Extropic’s chip.

If you are interested, check the company homepage or this interview with the founder guys.

It Was the Damn Phones

Mike's Notes

I discovered this searing poem on the After Babel substack. It says everything about the disgusting misuse of technology by major tech companies and the effects on people, especially the mental health of the young.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

16/06/2025

It Was the Damn Phones

A Gen Z poet conveys the effects of the phone-based childhood

By: Kori Jane Spaulding
After Babel: 12/06/2025

A poet from Houston, Texas, Kori is just 21 years old and has already published three books of poems and one novel.

Below is a transcription of the spoken word poem shared in the video. A slightly different version of the poem can be found in two of Kori’s books: Books Close (pg. 92-93), and Ajar (pg. 270-271).

I think our parents were right.

It was the damn phones.

We laughed as children, hearing, “It’s that Snapgram and Instachat and Facetok”.

They didn’t understand. They couldn’t even say it right. We thought we knew better than them.

They didn’t know what it was like, having the world at the tip of our fingers.

We scroll through the trash so much, we have news headlines tattooed on our skin.

Wires for veins. AI for a brain. And they may not have understood. But they were right.

It was the damn phones.

I prided myself on sobriety, on being drunk with only propriety. I was above addiction.

A hypocritical notion. For am I not addicted to my own anxiety?

Brought on by a need for constant stimulation. A drug in our pockets.

But who can blame us? We were but children when they were given.

We didn’t know how to stop it. If I added up all the hours I spent on a screen,

existential dread and regret would creep in. So I ignore this fact by opening my phone.

And it’s not like I can throw it away. It’s how we communicate. It’s how we relate.

It’s a medicine that is surely making our souls die.

I used to say I was born in the wrong generation, but I was mistaken.

For I do everything I say I hate. Exchanging hobbies for Hinge,

truth with TikTok, intimacy with Instagram, sanity with Snapchat.

I have become self-aware. Almost worse than being naive. I know it’s poison, but I drink away.

The character behind the phone screen has become self-aware.

We used to be scared of robots gaining consciousness, a lie by the media companies.

To keep us distracted enough, so not to become conscious of the mess they created.

We are the robots. We are the product. And so I sit and I scroll and I rot on repeat.

Sit and scroll and rot.

Until my thoughts are what is being fed to me on TV,

until my feelings are wrapped up in celebrities,

until my body is a tool of my political identity.

I sit and I scroll and I rot.

And I post on the internet how the internet has failed us

so that I may not fail my internet presence. I think our parents were right.

It was the damn phones.

Google’s “What’s New in Web UI” Talk: Less Custom Component JavaScript, More Web Standards

Mike's Notes

In the long term, this could help resolve UI issues.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > InfoQ Weekly Digest
  • Home > Handbook > 

Last Updated

15/06/2025

Google’s “What’s New in Web UI” Talk: Less Custom Component JavaScript, More Web Standards

By: Bruno Couriol
InfoQ: 08/06/2025

In a recent talk, Una Kravets presented recent developments in Web UI supported by the Chrome team. Some common UI patterns that currently require a significant amount of JavaScript may soon be implemented in a declarative manner with new features of HTML and CSS, with less custom JavaScript, and with built-in accessibility.

The talk focuses on three particularly tricky UI patterns: customizable select menus, carousels, and hover cards. All three UI patterns are commonly found in design systems, each requiring many lines of JavaScript to implement custom styling, presentation, layout, interaction, or accessibility patterns. With browser vendors evolving web standards to move those patterns out of userland and into the browsers themselves, developers may have less work to do in the future and can simply rely on the platform. Less custom JavaScript also benefits users in the form of increased performance. The proposed declarative APIs have already shipped in at least one stable browser engine.

The first pattern discussed is the customizable select menu. The native <select> element’s internal structure has been historically difficult to style consistently across browsers:

A common frustration for developers who try to work with the browser’s built-in form controls (<select> and various <input> types) is that they cannot customize the appearance of these controls to fit their site’s design or user experience. In a survey of web developers about form controls and components, the top reason that devs rewrite their own versions of these controls is the inability to sufficiently customize the appearance of the native controls.

The building blocks for a customizable select are the Popover API and Anchor Positioning.

The Popover API handles the floating list of options, ensuring it appears above other UI elements, is easy to dismiss, and manages focus. Popover has reached baseline status and is now available in all browsers.

Command invokers (command and commandfor attributes) provide a declarative HTML solution similar to popovertarget for connecting button clicks to actions (e.g., opening a dialog), reducing the need for boilerplate JavaScript.

Anchor Positioning is a CSS API that lets developers position elements relative to other elements, known as anchors. This API simplifies complex layout requirements for many interface features like menus and submenus, tooltips, selects, labels, cards, settings dialogs, and many more. Anchor Positioning is part of Interop 2025, meaning that it should land in all browsers by the end of the year.

The improved select element anatomy showcases two parts, a button and a popover anchored to that button, all with corresponding selectors for targeting and styling:

Styles can be applied to the popover through the selector ::picker(select). An example of custom styling is as follows:

/* enter custom mode */
select,
::picker(select) {
  appearance: base-select;
}
/* style the button */
::select-fallback-button {
  background: gold;
  font-family: fantasy;
  font-size: 1.2rem;
}
/* style the picker dropdown */
::picker(select) {
  border-radius: 1rem;
}
/* style the options */
option {
  font-family: monospace;
  padding: 0.5rem 1rem 0.5rem 0;
  font-size: 1.2rem;
}
/* style selected option in the dropdown */
option:checked {
  background: powderblue;
}
/* style the option on hover or focus */
option:hover,
option:focus-visible {
  background-color: pink;
}
/* style the active option indicator */
option::before {
  content: '';
  font-size: 80%;
  margin: 0.5rem;
}
/* etc. */
body {
  padding: 2rem;
}

Developers are encouraged to review the full talk for additional technical details, demos, and explanations. The talk additionally explains how recent features from the CSS Overflow 5 specification, namely scroll buttons and scroll markers, enable scroll-driven animations (e.g., carousels) purely in CSS.

A knockout blow for LLMs?

Mike's Notes

Some criticisms by Gary Marcus of the fundamental weaknesses of LLMs. The Tower of Hanoi looks fun. I must make one to try.

Resources

References

  • The Algebraic Mind (2001), MIT Press, by Gary Marcus.

Repository

  • Home > Ajabbi Research > Library > Authors > Gary Marcus
  • Home > Ajabbi Research > Library > Subscriptions > Marcus on AI
  • Home > Handbook > 

Last Updated

14/06/2025

A knockout blow for LLMs?

By: Gary Marcus
Marcus on AI: 07/06/2025

LLM “reasoning” is so cooked they turned my name into a verb

Quoth Josh Wolfe, well-respected venture capitalist at Lux Capital:

Ha ha ha. But what’s the fuss about?

Apple has a new paper; it’s pretty devastating to LLMs, a powerful followup to one from many of the same authors last year.

There’s actually an interesting weakness in the new argument—which I will get to below—but the overall force of the argument is undeniably powerful. So much so that LLM advocates are already partly conceding the blow while hinting at, or at least hoping for, happier futures ahead.

Wolfe lays out the essentials in a thread:

In fairness, the paper both GaryMarcus’d and Subbarao (Rao) Kambhampati’d LLMs.

On the one hand, it echoes and amplifies the training distribution argument that I have been making since 1998: neural networks of various kinds can generalize within a training distribution of data they are exposed to, but their generalizations tend to break down outside that distribution. That was the crux of my 1998 paper skewering multilayer perceptrons, the ancestors of current LLMs, by showing out-of-distribution failures on simple math and sentence prediction tasks, and the crux in 2001 of my first book (The Algebraic Mind), which did the same in a broader way, and central to my first Science paper (a 1999 experiment which demonstrated that seven-month-old infants could extrapolate in a way that then-standard neural networks could not). It was also the central motivation of my 2018 Deep Learning: A Critical Appraisal, and my 2022 Deep Learning is Hitting a Wall. I singled it out here last year as the single most important — and important to understand — weakness in LLMs. (As you can see, I have been at this for a while.)

On the other hand it also echoes and amplifies a bunch of arguments that Arizona State University computer scientist Subbarao (Rao) Kambhampati has been making for a few years about so-called “chain of thought” and “reasoning models” and their “reasoning traces” being less than they are cracked up to be. For those not familiar, a “chain of thought” is (roughly) the stuff a system says as it “reasons” its way to an answer, in cases where the system takes multiple steps; “reasoning models” are the latest generation of attempts to rescue the inherent limitations of LLMs, by forcing them to “reason” over time, with a technique called “inference-time compute”. (Regular readers will remember that when Satya Nadella waved the flag of concession in November on pure pretraining scaling - the hypothesis that my deep learning is hitting a wall critique addressed - he suggested we might find a new set of scaling laws for inference time compute.)

Rao, as everyone calls him, has been having none of it, writing a clever series of papers that show, among other things, that the chains of thought that LLMs produce don’t always correspond to what they actually do. Recently, for example, he observed that people tend to over-anthropomorphize the reasoning traces of LLMs, calling it “thinking” when it perhaps doesn’t deserve that name. Another of his recent papers showed that even when reasoning traces appear to be correct, final answers sometimes aren’t. Rao was also perhaps the first to show that a “reasoning model”, namely o1, had the kind of problem that Apple documents, ultimately publishing his initial work online here, with followup work here.

The new Apple paper adds to the force of Rao’s critique (and my own) by showing that even the latest of these new-fangled “reasoning models” — even having scaled beyond o1 — still fail to reason reliably beyond the distribution, on a whole bunch of classic problems, like the Tower of Hanoi. For anyone hoping that “reasoning” or “inference time compute” would get LLMs back on track, and take away the pain of multiple failures at getting pure scaling to yield something worthy of the name GPT-5, this is bad news.

Hanoi is a classic game with three pegs and multiple discs, in which you need to move all the discs from the left peg to the right peg, never stacking a larger disc on top of a smaller one.

(You can try a digital version at mathisfun.com.)

If you have never seen it before, it takes a moment or two to get the hang of it. (Hint: start with just a few discs.)

With practice, a bright (and patient) seven-year-old can do it. And it’s trivial for a computer. Here’s a computer solving the seven-disc version, using an algorithm that any intro computer science student should be able to write:

[VIDEO unable to copy]
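That algorithm is the classic recursion: move n-1 discs out of the way, move the largest disc, then restack the rest on top of it. A minimal Python sketch (a generic illustration, not the solver used in the video):

```python
def hanoi(n, source="A", spare="B", target="C"):
    """Classic recursive Tower of Hanoi: return the list of moves
    that transfers n discs from source to target, never placing a
    larger disc on a smaller one."""
    if n == 0:
        return []
    # Move n-1 discs out of the way, move the largest, then restack.
    return (
        hanoi(n - 1, source, target, spare)
        + [(source, target)]
        + hanoi(n - 1, spare, source, target)
    )

moves = hanoi(7)
# The optimal solution for n discs always takes 2**n - 1 moves,
# so 7 discs take 127 moves.
```

The recursion makes the problem's regularity obvious, which is part of why the LLM failures discussed below are so striking.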

Claude, on the other hand, can barely do 7 discs, getting less than 80% accuracy (left bottom panel below), and pretty much can’t get 8 correct at all.

Apple found that the widely praised o3-mini (high) was no better (see accuracy, top left panel, legend at bottom), and they found similar results on multiple tasks:

It is truly embarrassing that LLMs cannot reliably solve Hanoi. (Even with many libraries of source code to do it freely available on the web!)

And, as the paper’s co-lead author Iman Mirzadeh told me via DM,

it's not just about "solving" the puzzle. In section 4.4 of the paper, we have an experiment where we give the solution algorithm to the model, and all it has to do is follow the steps. Yet, this is not helping their performance at all.

So, our argument is NOT "humans don't have any limits, but LRMs do, and that's why they aren't intelligent". But based on what we observe from their thoughts, their process is not logical and intelligent.

If you can’t use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual “godfathers of AI”, current hype aside) solved with AI in 1957, and that first-semester AI students solve routinely, the chances that models like Claude or o3 are going to reach AGI seem truly remote.

That said, I warned you that there was a weakness in the new paper’s argument. Let’s discuss.

The weakness, which was well laid out by an anonymous account on X (usually not the source of good arguments), was this: (ordinary) humans actually have a bunch of (well-known) limits that parallel what the Apple team discovered. Many (not all) humans screw up on versions of the Tower of Hanoi with 8 discs.

But look, that’s why we invented computers, and for that matter calculators: to compute solutions to large, tedious problems reliably. AGI shouldn’t be about perfectly replicating a human; it should (as I have often said) be about combining the best of both worlds, human adaptiveness with computational brute force and reliability. We don’t want an “AGI” that fails to “carry the one” in basic arithmetic just because humans sometimes do. And good luck getting to “alignment” or “safety” without reliability.

The vision of AGI I have always had is one that combines the strengths of humans with the strengths of machines, overcoming the weaknesses of humans. I am not interested in an “AGI” that can’t do arithmetic, and I certainly wouldn’t want to entrust global infrastructure or the future of humanity to such a system.

Whenever people ask me why I (contrary to widespread myth) actually like AI, and think that AI (though not GenAI) may ultimately be of great benefit to humanity, I invariably point to the advances in science and technology we might make if we could combine the causal reasoning abilities of our best scientists with the sheer compute power of modern digital computers.

We are not going to “extract the light cone” of the earth or “solve physics” [whatever those Altman claims even mean] with systems that can’t play Tower of Hanoi with 8 discs. [Aside from this, models like o3 actually hallucinate a bunch more than attentive humans, struggle heavily with drawing reliable diagrams, etc.; they happen to share a few weaknesses with humans, but on a bunch of dimensions they actually fall short.]

And humans, to the extent that they fail, often fail because of a lack of memory; LLMs, with gigabytes of memory, shouldn’t have the same excuse.

What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that LLMs are no substitute for good well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)

In the best case (not always reached) they can write python code, supplementing their own weaknesses with outside symbolic code, but even this is not reliable. What this means for business and society is that you can’t simply drop o3 or Claude into some complex problem and expect it to work reliably.

Worse, as the latest Apple paper shows, LLMs may well work on your easy test set (like Hanoi with 4 discs) and seduce you into thinking they have built a proper, generalizable solution when they have not.

At least for the next decade, LLMs (with and without inference-time “reasoning”) will continue to have their uses, especially for coding, brainstorming, and writing. And as Rao told me in a message this morning, “the fact that LLMs/LRMs don't reliably learn any single underlying algorithm is not a complete deal killer on their use. I think of LRMs basically making learning to approximate the unfolding of an algorithm over increasing inference lengths.” In some contexts that will be perfectly fine (in others not so much).

But anybody who thinks LLMs are a direct route to the sort of AGI that could fundamentally transform society for the good is kidding themselves. This does not mean that the field of neural networks is dead, or that deep learning is dead. LLMs are just one form of deep learning, and maybe others, especially those that play nicer with symbols, will eventually thrive. Time will tell. But this particular approach has limits that are clearer by the day.

Supercharge Your BoxLang Applications with Maven Integration

Mike's Notes

This is an excellent addition to BoxLang. Pipi 9 is capable of generating these code samples. It could also edit the pom.xml file.

The question remains how to get this code from Pipi into BoxLang, so Pipi can run BoxLang autonomously.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

13/06/2025

Supercharge Your BoxLang Applications with Maven Integration

By: Luis Majeno
Ortus Solutions: 06/06/2025

Luis is the CEO of Ortus.

We're excited to announce a supercharged feature for BoxLang developers: Maven Integration! This powerful addition opens the door to the entire Java ecosystem, allowing you to seamlessly incorporate thousands of Java libraries into your BoxLang applications with just a few simple commands.

Why Maven Integration Matters

BoxLang has always been about combining the best of both worlds - the simplicity of dynamic languages with the power of the JVM. With Maven integration, we're taking this philosophy to the next level by giving you instant access to:

Thousands of Java libraries from Maven Central

  • Automatic dependency management - no need to manage it manually or copy jars around
  • Zero configuration - it just works out of the box
  • Clean management - add and remove dependencies with simple commands

The integration we ship with BoxLang operates at the runtime home level. However, it can easily be adapted to individual applications if needed, in case you are in a shared environment.

How It Works

The magic happens through BoxLang's pre-configured pom.xml file located in your BoxLang home directory (~/.boxlang). The workflow is simple:

1. Add dependencies to the pom.xml file

2. Run mvn install to download the libraries

3. Start using the Java libraries immediately in your BoxLang code

That's it! BoxLang automatically loads all JARs from the lib/ folder, making them available throughout your runtime. For our full documentation please visit our book: https://boxlang.ortusbooks.com/getting-started/configuration/maven-integration

Real-World Examples

Let's see this in action with some practical examples that showcase the power of this integration.

Generate QR Codes in Seconds

Need to create QR codes? Just add the ZXing dependency:

<dependency>
    <groupId>com.google.zxing</groupId>
    <artifactId>core</artifactId>
    <version>3.5.2</version>
</dependency>

Then use it in your BoxLang code:

function createQRCodeGenerator() {
    return {
        "saveToFile": ( text, filePath, size = 300 ) => {
            var writer = new com.google.zxing.qrcode.QRCodeWriter()
            var bitMatrix = writer.encode( 
                text, 
                new com.google.zxing.BarcodeFormat().QR_CODE, 
                size, 
                size 
            )
            
            var image = new com.google.zxing.client.j2se.MatrixToImageWriter()
                .toBufferedImage( bitMatrix )
            var file = new java.io.File( filePath )
            
            new javax.imageio.ImageIO().write( image, "PNG", file )
            return filePath
        }
    }
}
// Generate QR code for your website
qrGenerator = createQRCodeGenerator()
qrFile = qrGenerator.saveToFile( 
    "https://boxlang.ortussolutions.com", 
    "/tmp/boxlang-qr.png", 
    400 
)
println( "QR code saved to: " & qrFile )

Create Professional PDFs

Want to generate dynamic PDFs? Add iText and you're ready to go:

<dependency>
    <groupId>com.itextpdf</groupId>
    <artifactId>itext7-core</artifactId>
    <version>8.0.2</version>
    <type>pom</type>
</dependency>

Now create beautiful PDFs programmatically:

function createStyledPDF( filePath, title, content ) {
    var writer = new com.itextpdf.kernel.pdf.PdfWriter( filePath )
    var pdf = new com.itextpdf.kernel.pdf.PdfDocument( writer )
    var document = new com.itextpdf.layout.Document( pdf )
    
    // Add styled title
    var titleParagraph = new com.itextpdf.layout.element.Paragraph( title )
        .setFontSize( 20 )
        .setBold()
    document.add( titleParagraph )
    
    // Add content
    var contentParagraph = new com.itextpdf.layout.element.Paragraph( content )
        .setFontSize( 12 )
    document.add( contentParagraph )
    
    document.close()
    return filePath
}
// Generate a professional report
reportPDF = createStyledPDF(
    "/tmp/quarterly-report.pdf",
    "Q4 2024 Business Report",
    "BoxLang continues to revolutionize dynamic programming on the JVM."
)

Getting Started

Getting started with Maven integration is incredibly simple:

1. Install Maven

macOS (Homebrew):

brew install maven

Windows (Chocolatey):

choco install maven

Linux (Ubuntu/Debian):

sudo apt install maven

2. Navigate to BoxLang Home

cd ~/.boxlang

3. Add Dependencies

Edit the pom.xml file and add your desired dependencies. Search Maven Central for libraries.

4. Install Dependencies

mvn install

5. Start Coding!

All dependencies are now available in your BoxLang applications immediately.

Why This Changes Everything

Maven integration fundamentally transforms what's possible with BoxLang:

Instant Access to Specialized Libraries

Need machine learning? Add Weka or DL4J. Want advanced image processing? Add ImageIO extensions. Need specialized data formats? There's probably a Java library for that.

Dependency Management Made Simple

Gone are the days of manually downloading JARs and managing versions. Maven handles transitive dependencies, version conflicts, and updates automatically.

Enterprise-Ready from Day One

Access to mature, battle-tested Java libraries means your BoxLang applications can handle enterprise requirements without reinventing the wheel.

Easy Experimentation

Want to try a new library? Add it to your pom.xml, run mvn install, and start experimenting. Don't like it? Run mvn clean and it's gone.

What's Next?

This is just the beginning! Maven integration opens up a world of possibilities for BoxLang developers. We're excited to see what amazing applications you'll build with access to the entire Java ecosystem.

Some areas we're particularly excited about:

  • Machine Learning: Integrate Weka, DL4J, or other ML libraries
  • Scientific Computing: Use Apache Commons Math for statistical operations
  • Data Formats: Work with Excel files, XML processing, and specialized formats
  • External Integrations: Connect to cloud services, databases, and APIs with dedicated clients

Try It Today!

Maven integration is available now in the latest version of BoxLang. Here's how to get started:

  • Update BoxLang to the latest version
  • Navigate to your BoxLang home (cd ~/.boxlang)
  • Edit the pom.xml file to add dependencies
  • Run mvn install
  • Start building amazing applications!

Technological Jerk: Why Users Resist Your New Features (And What to Do About It)

Mike's Notes

Looks great. I must read this book and then write a review afterwards.

Resources

References

  • Progressive Delivery: Build The Right Thing For The Right People At The Right Time (IT Revolution Press, November 2025), By James Governor, Kim Harrison, Heidi Waterhouse, and Adam Zimman.

Repository

  • Home > Ajabbi Research > Library > Subscriptions > IT Revolution
  • Home > Ajabbi Research > Library > Publisher > IT Revolution Press
  • Home > Handbook > 

Last Updated

12/06/2025

Technological Jerk: Why Users Resist Your New Features (And What to Do About It)

By: Leah Brown
IT Revolution: 03/06/2025

Leah Brown is Managing Editor at IT Revolution, working on publishing books and guidance papers for the modern business leader. She also oversees the production of the IT Revolution blog, combining the best of responsible, human-centered content with the assistance of AI tools.

It’s Friday at 9:52 p.m. You open the app on your phone to adjust the alarm on your smart speakers. You need to ensure you’re up early to make a flight. When the app opens, it’s different. “Oh, cool, a new update,” you think at first. But after twenty minutes of fruitlessly tapping around the screen, you discover through Reddit that the new app update has completely removed the ability to control alarms.

You spend the next hour setting up physical alarm clocks while trying not to wake your family.

Sound familiar?

This scenario isn’t just frustrating—it represents a fundamental disconnect in how we deliver software. Software developers are often proud of their innovations, and businesses are eager to ship them. But users are increasingly exhausted by the constant technological churn disrupting their daily lives and workflows.

In their upcoming book, Progressive Delivery: Build The Right Thing For The Right People At The Right Time (IT Revolution Press, November 2025), authors James Governor, Kim Harrison, Heidi Waterhouse, and Adam Zimman call this phenomenon technological jerk.

The Physics of Disruption

In physics, “jerk” isn’t just someone cutting you off in traffic—it’s the rate at which acceleration changes. It’s the feeling that makes you grab for the subway pole when the train lurches or brace yourself during an elevator’s sudden start.

As the authors explain in the book, “Just as physical jerk throws our bodies off balance, technological jerk throws our mental models and established workflows into disarray when software changes too abruptly or without proper preparation.”

This isn’t about resistance to change itself. It’s about our human capacity to absorb the rate of change. And in today’s software environment, that rate is accelerating far beyond what many users can comfortably process.

The Business Impact of Technological Jerk

When users experience technological jerk, they don’t typically blame themselves—they blame your product. This manifests in ways that directly impact your business:

  • Decreased engagement: Users avoid using features they’re not confident navigating.
  • Rising support costs: Every abrupt change creates a flood of inquiries and complaints.
  • Negative reviews: More than ever, users vocalize their frustration publicly.
  • Increased churn: At its worst, users switch to competitive products that feel more stable.
  • Feature abandonment: New capabilities that cost thousands of development hours go unused.

In 2019, Slack faced significant backlash after releasing a major UI redesign that disrupted established workflows. Despite the company’s belief that the new interface would ultimately improve productivity, users revolted against the change. Some organizations even delayed upgrading to maintain productivity.

Even worse, in January 2025, Sonos CEO Patrick Spence was forced to resign after an app update broke core functionality. The cost of failing to manage technological jerk isn’t just customer dissatisfaction—it can be existential.

Why We Create Technological Jerk

If the effects are so damaging, why do we keep creating software experiences that jar our users? Several factors are at play:

1. The Curse of Knowledge

When you’ve spent months designing and building a feature, the change seems intuitive, even obvious. You can’t un-see what you know. This cognitive bias makes it nearly impossible to accurately predict how disruptive a change will feel to someone encountering it for the first time.

2. Deployment ≠ Release ≠ Adoption

Many organizations have embraced CI/CD to optimize their deployment pipelines, shipping code dozens or hundreds of times daily. But we haven’t created equivalent sophistication around how we release those changes to users and support their adoption journey.

The software industry conflates three distinct processes:

  • Deployment: Getting code to production environments
  • Release: Making features available to users
  • Adoption: Users successfully incorporating features into workflows

While optimizing for deployment speed, we’ve neglected the human-centered processes of release and adoption.

3. The “User Knows Best” Fallacy

“But users asked for this!” is a common defense when pushback occurs. This ignores a crucial reality: users typically ask for outcomes, not specific implementations.

When a user says, “I want a faster search function,” what they’re really asking for is, “I need to find critical information during customer calls without losing the customer’s attention.” Your implementation of a “faster search” might actually disrupt the workflow they’ve optimized around the current search.

4. The False “Everyone” Narrative

Product teams often speak of “our users” as a monolithic entity. “Our users want this.” “Our users will love this.” This ignores the reality that your user base contains multiple personas with dramatically different needs, technical sophistication levels, and change tolerance thresholds.

What delights your early adopters may alienate your steady mainstream users. Assuming “everyone” will react similarly to change is a recipe for creating technological jerk.

Early Signs of a Better Approach

Some organizations have begun exploring solutions to this problem:

Feature Flagging Beyond A/B Testing

Companies like GitHub use sophisticated feature flagging not just for testing but as a fundamental control mechanism that separates deployment from release. Rather than abruptly pushing changes to all users simultaneously, they create control points that allow for gradual, deliberate exposure of new capabilities.
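The mechanism is simple enough to sketch in a few lines. The class and method names below are hypothetical, for illustration only, and are not GitHub's actual tooling; the point is that the code path is deployed everywhere but enabled per user segment:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: a flag store mapping feature names to the user
// segments the feature has been released to.
public class FeatureFlags {
    private final Map<String, Set<String>> releasedTo;

    public FeatureFlags(Map<String, Set<String>> releasedTo) {
        this.releasedTo = releasedTo;
    }

    // Deployment put this code in production; release is a data change:
    // adding a segment here turns the feature on without a new deploy.
    public boolean isEnabled(String feature, String segment) {
        return releasedTo.getOrDefault(feature, Set.of()).contains(segment);
    }
}
```

With a control point like this, rolling a feature back for everyone, or forward to one more segment, becomes a configuration change rather than a redeploy.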

Ring Deployments

Microsoft has pioneered the concept of “ring deployments,” where changes progress through increasingly larger circles of users, starting with internal teams and expanding gradually to early adopters before reaching the general population. This creates a progressive exposure pattern that catches issues early while allowing most users to avoid the earliest, most disruptive moments of a new feature.
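A minimal version of ring assignment can be sketched as a stable hash of the user id. This is an illustration, not Microsoft's implementation; real systems place internal teams and opted-in early adopters into the inner rings explicitly rather than by hash:

```java
// Hypothetical ring-deployment sketch: users hash into one of four rings,
// and a feature released "up to" ring r is visible to rings 0..r.
public class RingDeployment {
    static final int RING_COUNT = 4; // 0 = innermost, 3 = general population

    // Stable ring assignment derived from the user id; floorMod keeps the
    // result non-negative even when hashCode() is negative.
    public static int ringOf(String userId) {
        return Math.floorMod(userId.hashCode(), RING_COUNT);
    }

    public static boolean isReleasedTo(String userId, int releasedUpToRing) {
        return ringOf(userId) <= releasedUpToRing;
    }
}
```

Expanding the rollout is then a matter of incrementing the released ring from 0 toward 3 as confidence grows, watching error rates and support tickets at each step.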

User-Controlled Release Cadence

Some products now offer explicit user choice in when and how they adopt new features. Google Workspace, for instance, allows administrators to choose between “Rapid Release” and “Scheduled Release” tracks, acknowledging that different organizations have different change absorption capacities.

The Rise of Product Operations

Just as DevOps emerged to bridge the gap between development and operations, a new discipline—Product Operations—is forming to manage the increasingly complex interface between product teams and users. This emerging function explicitly owns the user transition experience, much as DevOps owns the code transition experience.

Beyond Ad Hoc Solutions

These approaches represent important first steps, but they remain fragmented and inconsistent across the industry. What’s needed is a comprehensive framework that systematically addresses technological jerk by reconceptualizing how we deliver software.

Such a framework would need to:

  • Recognize different user segments’ varying capacities for change absorption
  • Provide mechanisms to measure and manage the rate of change
  • Create feedback loops that detect when change is happening too rapidly
  • Delegate control to those closest to the impact
  • Balance the innovation needs of development teams with the stability needs of users

This isn’t about slowing innovation—it’s about enabling sustainable innovation that users can absorb and benefit from. It’s about finding the sweet spot between technological stagnation and technological whiplash.

What’s Next?

Progressive Delivery introduces a comprehensive framework to do just this—a systematic approach to managing technological jerk while maintaining innovation velocity.

Drawing on their extensive combined experience in the industry, as well as case studies from companies like GitHub, Disney, Adobe, and AWS, the authors demonstrate how organizations of various sizes and industries have addressed this challenge through a combination of cultural, procedural, and technical practices.

Until the book comes out, you can start by recognizing when you’re creating technological jerk in your own products:

  • Are users complaining about the pace of change rather than the changes themselves?
  • Do support tickets spike after every release?
  • Do you have features with mysteriously low adoption despite obvious benefits?
  • Have users created workarounds to avoid using your latest capabilities?

These are all signs that your delivery approach may be creating more friction than function. By recognizing the problem, you’ve taken the first step toward building better relationships with your users through more thoughtful software delivery.

Because at the end of the day, what matters isn’t just what we build—it’s how we deliver it, to whom, and at what pace. Get that right, and both your users and your business will thrive.

This post explores concepts from the upcoming book Progressive Delivery: Build The Right Thing For The Right People At The Right Time by James Governor, Kim Harrison, Heidi Waterhouse, and Adam Zimman (IT Revolution Press, November 2025), which introduces a comprehensive framework for delivering software in ways that respect both innovation needs and user adoption capacities.

Why Is Everything So Slow In Large Companies?

Mike's Notes

Recognition of the scale of a significant problem that exists in large enterprise systems.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Smashing Magazine
  • Home > Handbook > 

Last Updated

11/06/2025

Why Is Everything So Slow In Large Companies?

By: Vitaly Friedman
LinkedIn: 06/06/2025

If you work in a large organization, you might find yourself puzzled by how slow and seemingly inefficient it can be. Decisions take time. Reviews tend to run in circles. Meetings always start late and overrun. Features get indefinitely delayed. And teams become increasingly protective of their silos.

And with larger companies, it compounds dramatically to the point that shipping on time becomes the exception rather than the rule. But why does that happen? Why do projects have to be so painfully slow to have the slightest chance of being successful? And how do we move the needle in the right direction? Well, let’s get to the bottom of it.

Heads up: Meet How To Measure UX and Design Impact, a video course on how to show the business impact of your incredible UX work. Use the friendly code LINKEDIN to save 15%.

“Wicked Problems”

At best, slowdowns are highly inefficient and wasteful. And at worst, they create a poor culture that propagates throughout the entire organization. The result: teams that feel utterly frustrated, confused, slowed down or simply ignored. That’s when people stop talking in team calls and send their AI notetakers instead.

As Sean Goedecke beautifully explains, the reasons for that aren’t inefficient processes, poor coordination, or lack of competency. It’s simply a different scale of problems that need to be solved, with many dependencies, decisions, and business goals to meet.

Wicked problems are unique and interconnected, which means failures have big consequences on the business.

Large companies are heavily constrained by a very small but very consequential set of problems, called “wicked problems”. These are deeply interconnected problems that interfere with many features, actors, systems, flows, and users — and often live at the very heart of the organization.

These are typical challenges of wicked problems:

  • Problems are “unique” and not properly understood
  • Many dependencies, with legacy or third parties
  • Multiple stakeholders with conflicting agendas
  • Involve many different parts of an organization
  • Every solution affects parts of the entire system
  • Solutions aren’t right or wrong, but better or worse
  • Take a lot of time to evaluate and make decisions
  • Can never be properly solved, just addressed
  • Failures have big consequences on the business

If you find yourself in the middle of a wicked problem, you’ll need to shift gears. The only option to avoid disastrous failures is to slow down. You have to spend more time in planning and risk management before designing a single pixel on the screen. Every UX decision comes at a cost, and there will be people keeping a very close eye on these costs.

With wicked problems, you will always be perceived as a disruptor who endangers business-critical processes if you aren't meticulous and careful enough. As a result, you become more strategic about your UX work. It means assessing risks and setting up a testing strategy early, but also building working groups and design guilds. It also means running cross-team workshops to uncover dependencies, conflicts, and constraints early.

The Curse of Slow Shipping

As a company and its products keep growing, by default it becomes more difficult to ship new features. Each new feature will in some way interact with existing features and systems, critical user flows, potentially legacy systems, third-party vendors, and custom systems done by other units.

And as Dave Stewart noted, the "work" is never just the "work". It's meetings, reviews, research, experimentation, scoping, setup, infrastructure, procurement, iteration, maintenance, tooling, changes, omissions, nice-to-haves, scope creep, surprises, contingency, sudden change of timelines and priorities, updates and fixes. And with larger companies, it compounds dramatically to the point that shipping on time becomes an exception, rather than the rule.

In many companies, projects get indefinitely delayed, moved, cancelled or abandoned all the time. The sad reality is that despite all the incredibly hard work put in by UX and engineering teams, such projects are perceived at best as wasted effort, and at worst as costly failures by “underperforming” teams.

In fact, even if the work was completed well ahead of schedule, the underwhelming impact of that work might still reflect badly on the entire team. The whole project might be perceived as a good idea poorly implemented — with little room for debates about small wins and successes here and there. And for that reason, I put an enormous amount of prep work into filtering out, prioritizing, scrutinizing and pre-testing design ideas before heading straight into design mode.

In complex projects, execution is at best around 35% of all work, with only 20% of time needed for planned work. That's why many estimates are utterly wrong. 

Ensuring that a feature can be neatly integrated with all dependencies around it takes time and effort — and the more complex the product, the more time and effort it will require with every single change. It holds true especially if a particular change is the main “hub” for key user flows or business priorities.

In larger products, failures can have disastrous consequences at scale. So every change must be meticulously reviewed. High-risk scenarios must be thoroughly addressed and mitigated. More planning and prep work is required as it’s the only way to reduce the likelihood of things going terribly wrong. The illustration by John Cutler below beautifully shows how the usual workflow plays out if not enough strategic thinking has been done.


Slowdowns are often caused by habits and structures deeply rooted in an organization. Addressing them is crucial to improving workflows. (Credit: John Cutler)

Plus, user flows typically need to be revised, as features must be easy to find, easy to use efficiently, but also difficult to make mistakes with. Not to mention the usual politics and power play as different units strive for attribution, higher visibility, influence, and budgets for the upcoming year.

Unsurprisingly, with all of it in play, things slow down enormously as the project is being pulled and pushed and reshuffled and revisited over and over again.

How To Slowly Introduce Change

Most companies claim to put quality over schedule every time, but very often there is nothing more critical for companies than to ship frequently, and on time. In fact, research and UX work are often perceived as blockers to shipping early and refining on the go.

In practice, though, we need to know at least enough not to waste time on a feature that provides little business value or little user value. Research isn't a blocker but a filter for shipping projects that matter. And that's one of the most common points I tend to raise early.


Company Culture Playbook (Notion docs) is a fantastic effort to bring free tools, templates and resources to improve company culture.

Slowness doesn’t mean that it’s impossible to make a change in such environments. But you will need enough patience, enthusiasm and trust to slowly start moving the needle in the right direction. Personally, I would start by zooming in on things that affect everyone in the team. Well-known bottlenecks that slow people down. Meetings that end without action points or run late. Communication channels with key decisions made all over the place.

Little changes there can make a huge impact on everyone, and people will notice. The goal is to help other people see the benefits that your contributions bring and build up confidence for your work and your good intentions. Sometimes that might be just enough to get them on your side to address wicked problems with the due diligence and attention they deserve.

The Little Book of Strategy, a wonderful little cheat sheet with actionable advice on strategy and leadership, by Peter Bihr.

Most importantly: become more strategic and calibrate expectations. We don’t know how our stakeholders work, so we shouldn’t expect that they know and understand design process. The more sincere and vulnerable you are, the more likely you are to get understanding and support, rather than fast turnaround requests.

Useful Resources

  • Company Culture Playbook, by OpenOrg
  • New Ways Of Working: Playbook For Modern Teams (Notion), by Mark Eddleston
  • The Little Book of Strategy, by Peter Bihr 
  • Design Is Taking Too Long. When Can We Ship?, by Pavel Samsonov 
  • How I ship projects at big tech companies, by Sean Goedecke

Happy Birds: How To Measure UX (Video + Live UX Training)

I've been spending quite a bit of time reviewing and drafting new sections for the video courses on UX:

  • Measure UX and Design Impact (8h + live UX training)
  • Smart Interface Design Patterns (15h + live UX training)
  • Both video courses come with a live UX training with 1:1 feedback and UX certification.
  • Use the coupon code 🎟 LINKEDIN to save 15%.

Thank you so much for your support, everyone — and happy designing!