Workspace trial causes engine modifications

Mike's Notes

Changes triggered by the recent successful workspace trial. I slept on this for a few weeks, just to be sure. 😇

Most done already. A lot better.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

5/12/2025

Workspace trial causes engine modifications

By: Mike Peters
On a Sandy Beach: 5/12/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Some Pipi 9 agent engines are getting minor modifications as a result of lessons learned from the recent successful stress-test trial involving 15,000 generated web pages and directories in a workspace website UI. The trial required creating a massive static mockup for volunteer testers to explore freely and identify gaps.

After these simple modifications, Pipi 9 will automatically render another static mockup of a huge workspace website for testing, in about 50 seconds. If that stress test is 100% successful, the next stage will be rendering a live version.

Pipi 9 will then automatically generate much larger custom live workspaces without error, and on demand.

The necessary minor modifications will change the deployment-workspace hierarchy previously described on this engineering blog.

The previous workspace settings from 2018, stored as structured training data, also need to be edited. The test results indicate that everything can now be significantly simplified, resulting in a performance boost.

Deployment Engine (dpl)

A Deployment Type has been added to create a simple separation of functions and to drive URL pattern creation.

  • Applications a/
  • Customer c/
  • Global g/
  • Settings s/
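
To make the separation concrete, here is a minimal sketch of how a Deployment Type prefix might drive URL construction. The property and function names are illustrative assumptions, not Pipi's actual schema.

// Hypothetical mapping from Deployment Type to URL prefix (names are assumptions)
const DEPLOYMENT_TYPE_PREFIX = {
  applications: 'a',
  customer: 'c',
  global: 'g',
  settings: 's'
};

// Build a workspace page URL from a deployment, a workspace and a module path
function buildWorkspaceUrl(deployment, workspaceCode, modulePath = []) {
  const prefix = DEPLOYMENT_TYPE_PREFIX[deployment.type];
  return ['', prefix, deployment.codeName, workspaceCode, ...modulePath].join('/');
}

// e.g. buildWorkspaceUrl({ type: 'customer', codeName: 'acme-films' }, 'screen-production', ['tasks'])
// => '/c/acme-films/screen-production/tasks'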

Workspace Engine (wsp)

A new Module Entity has been added to store each workspace module's attributes. These attributes are all inherited from a library of predefined Domain Model Objects.

The revised hierarchy is:

User Account

A User Account has three or more Deployments.

  • Applications
  • Customer
  • Settings

An Enterprise Account can also have Global Deployments to share properties.

Deployment

A Deployment is a container for one or more Workspaces. A deployment has these properties:

  • ID
  • Code Name
  • Name
  • Description
  • One language (eg English)
  • One User Account
  • One Deployment Class (type of tenancy)
  • One Deployment Type (type of container)

Those properties are inherited by all workspaces.

Workspace

A Workspace is a container for one or many Modules. A workspace has these properties:

  • ID
  • Code Name
  • Name
  • Description
  • One inherited language (eg English)
  • One inherited User Account
  • One inherited Deployment
  • One pre-built Domain Model (eg, Screen Production).
  • One Domain Model Template (eg, Feature Film, Documentary, Live Broadcast). These templates can be customised and shared.
  • Main menu (Ribbon, etc)

Those properties are inherited. This means each Workspace/Domain Model comes with its own set of prebuilt Security Roles and Security Profiles.

Module

A Module is a container for zero, one or more Modules in a nested tree hierarchy. Modules can be rearranged in the Workspace UI via drag-and-drop. They are all semantically linked. A module has these properties:

  • ID
  • Parent Module ID
  • Code Name
  • Name
  • Description
  • Sort order
  • One inherited language (eg English)
  • One inherited User Account
  • One inherited Deployment
  • One inherited Workspace
  • One pre-built Domain Model Object (eg, Task, Runway, Locomotive, Film Set, Medical Device).
  • Set of prebuilt Security Roles and Security Profiles.
  • Context menu (Ribbon, etc)
  • Learning objects
  • Contextual help
  • Attached
    • Tools
    • Workflows
    • State
    • Code
    • Data
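
Putting the pieces above together, here is a minimal sketch of the Deployment > Workspace > Module chain as plain data, with inherited properties resolved by walking up to the parent. All names and values are illustrative, not Pipi's actual schema.

// Illustrative data only; not Pipi's actual schema
const deployment = {
  id: 'dpl-001',
  codeName: 'acme-films',
  name: 'Acme Films',
  language: 'en',
  userAccount: 'acct-42',
  deploymentClass: 'single-tenant',  // type of tenancy
  deploymentType: 'customer'         // type of container
};

const workspace = {
  id: 'wsp-001',
  codeName: 'screen-production',
  domainModel: 'Screen Production',
  domainModelTemplate: 'Feature Film',
  deployment                         // inherits language, user account, etc.
};

const taskModule = {
  id: 'mod-001',
  parentModuleId: null,
  codeName: 'tasks',
  sortOrder: 1,
  domainModelObject: 'Task',
  workspace                          // inherits deployment and workspace properties
};

// Resolve an inherited property by walking up the chain
function inheritedLanguage(mod) {
  return mod.workspace.deployment.language; // 'en'
}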

Playing with Krobar.ai

Mike's Notes

For some years now, I have subscribed to the Kromatic weekly email newsletter. It is one of the best. This is no innovation theatre, which is refreshing compared to NZ. I really like using mathematical analysis to test ideas. Especially when Tristan explains how to do it.

It was here that I first learned how to use Monte Carlo Standard Deviation analysis using an Excel spreadsheet designed by Tristan Kromer. I then incorporated Monte Carlo into Pipi 9 to deal with uncertainty.

I had actually been using Monte Carlo in Pipi 6 for many years since 2017, but I did not know it was called that. It was buried deep in algorithms inside an early module that later became an engine.

Steve Blank, Alexander Osterwalder, Jason Cohen and Tristan Kromer are all top-notch. No bullshit. Everyone else is just a clone.

I had a free office hour with Tristan Kromer earlier this year and will do more next year. Tristan is excellent.

Once Ajabbi gets financially strong, I intend to sign up for weekly paid mentoring and advice from Tristan. It's not cheap, but it will be the best and worth it.

I recently got early access to the Krobar.ai beta from Kromatic. Below are my notes, which I'm adding to as I play.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Kromatic
  • Home > Handbook > 

Last Updated

05/12/2025

Playing with Krobar.ai

By: Mike Peters
On a Sandy Beach: 05/12/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Krobar.ai is a simulation platform from Kromatic. It's currently in beta.

Signing up

I signed up for a free trial in early September to test and provide feedback. I then got distracted and forgot about the trial. When I remembered, Meagan Wilder kindly let me back in early November, and I started playing with Krobar.

Experiments

There is no help documentation yet. My experiments started with me pressing buttons and seeing what happened (OK, playing). I then learned how to write ChatGPT 4.1 prompts and slowly built some skills. The chat dialogue appears highly accurate. I usually ask it to use industry-standard data for the default modelling values. I didn't have enough industry-standard values for mean, standard deviation, min, and max to tune the models yet. 😉

One trick I found was to go to Google AI, ask a question, then turn the answer into a Krobar Prompt. Then I might find some of Steve Blank's writing, feed it into Gemini 3, and use the output as a prompt. 😀

Simulation

Each simulation has these pages.

  • Journey Map
  • Workflow diagram
  • Simulation
    • Tornado Chart
    • Histogram
    • Sensitivity Analysis
    • Financial Model (Spreadsheet)
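
As a rough illustration of what sits behind those outputs, here is a toy Monte Carlo run: each input is drawn from a normal distribution defined by a mean and standard deviation, clamped to its min/max, and the spread of the results gives the histogram and summary statistics. This is my own sketch with made-up numbers, not Krobar's actual engine.

// Toy Monte Carlo simulation: monthly profit = customers * price - fixed costs
const inputs = {
  customers: { mean: 1000, sd: 200, min: 0,  max: 5000 },
  price:     { mean: 49,   sd: 10,  min: 10, max: 150 }
};

// Standard normal draw via the Box-Muller transform
function randNormal() {
  const u = 1 - Math.random();
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Draw one value for an input, clamped to its min/max
function draw({ mean, sd, min, max }) {
  return Math.min(max, Math.max(min, mean + sd * randNormal()));
}

const runs = 10000;
const results = [];
for (let i = 0; i < runs; i++) {
  results.push(draw(inputs.customers) * draw(inputs.price) - 20000);
}

const mean = results.reduce((a, b) => a + b, 0) / runs;
const sd = Math.sqrt(results.reduce((a, b) => a + (b - mean) ** 2, 0) / runs);
console.log({ meanProfit: Math.round(mean), stdDev: Math.round(sd) });
// Binning `results` gives the histogram; varying one input at a time gives the tornado chart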

I'm still learning by trying the menu items. There are some hidden tricks. I learned some more by watching the video interview below. The order in which things are done matters with the AI.

Downloads

Everything can be downloaded in various formats. Very useful.

  • PDF Document
  • JPG image
  • Excel Formula
  • CSV data
  • JSON document

Models

I built the following large, complex simulation models. Sometimes it took several attempts. They got better as I learned.

  • Ajabbi startup
  • Airport
  • Data Centre
  • Forestry Logging
  • Hospital
  • Movie Studio
  • Shipping (maritime)

I exported the Excel formulas for import into Pipi's industry financial models.

I will do some more soon, with a better understanding of how this works, predesigned prompts, and a bit more cunning. 😊 Then try ARR, ROI, etc.

Suggestions

Direct visual editing of the journey map

  • Enable drag-and-drop editing of journey map connections.
  • Enable more journey map steps to be created and added using a form.
  • Acts as input to AI

Interview with Tristan Kromer about Krobar.ai

Sample outputs from Film Studio 2 Simulation

Knowledge Graphs Should Not Be Just for Analytics and Insights

Mike's Notes

Good words from John Gorman in Canada.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

04/12/2025

Knowledge Graphs Should Not Be Just for Analytics and Insights

By: John Gorman
LinkedIn: 06/11/2025

Information management professional specializing in semantic interoperability, with over 30 years of experience. Principal, Founder and Chief Disambiguation Officer at Quantum Semantics Inc in Calgary. Inventor of the Q6 information management model.

There is, in my view, a large and currently under-served audience for knowledge graphs in the enterprise. These are employees, contractors, and yes, even executives who need a more granular level of access to company knowledge.

Maybe we should start calling them 'Learning Graphs' instead.

The interesting thing is, by providing this level of service, solutions to other gnarly challenges like data governance, metadata, reference and master data management, data fluency, semantic interoperability, and when done properly (i.e. FAIR-From-Birth) dimensional analysis of operational data stores, make themselves available.

Sounds ambitious? Why not? Enterprise Information is now all about language, so how about we start there and leverage persistent patterns of classification and usage. Here are some of the immediate and 'downstream' benefits:

  • Data Governance. Most companies make the mistake of starting with data. The more natural approach - and with a lot less risk to life and limb - is to begin with the language of the business. And, you get to roll out a 'FAIR-From-Birth' business glossary as a side benefit, just like the big boys and girls.
  • Metadata. Business users rarely even care about metadata, but when they see how the language of the business connects to and is equivalent to metadata values the light bulbs start to go on. As a bonus, they also get to see how missing, misspelled, and misshapen vocabulary gums things up.
  • Reference and Master Data. This is another opportunity to see how the information supply chain should work. Crap components upstream means crap assemblies downstream. Ask your local supermarket manager how he handles a shipment of rotting tomatoes. 

The benefit for business owners is two-fold:

  1. They get to see what kind of cleanup is required when they throw crap over the fence.
  2. They also get to see how one-and-done actually reduces their workload.

  • Data Fluency. Seeing the connections between how they talk about the business and what kinds of data those conversations connect to? Priceless.
  • Semantic Interoperability. Ah, yes... the Holy Grail. What if we made it possible to access semantically equivalent pairs as a start? So, if Jane McCallum, the CFO of Acme Inc. doesn't know (have knowledge of) the meaning of the acronym FLOC, she can simply look it up on her phone during her Monday executive team meeting and learn it. No embarrassing interruptions, just immediate access to enterprise knowledge. What a concept!
  • Analytics and Insight. Finally, the raison d'être for almost every technical innovation so far this decade. When an information supply chain starts with the notion that every component, especially the very granular ones, should be engineered to fit into a downstream, multi-dimensional ecosystem of semantically connected information, good things start to happen.

DM me if you want to learn more about Semantium's set of protocols.

Your URL Is Your State

Mike's Notes

There were dozens of complex, hard problems that had to be solved before cracking the emergent problem of massive IT system failures. This took years. They often had to be solved in parallel because of their interwoven effects. URL patterns were one of them.

I have been thinking about URL structure patterns for a while. I finally solved this problem for Pipi 9 back in October 2025. It was one of the last problems to solve before it could successfully build the UI of enterprise-scale workspaces. The UI turned out to be a thin wrapper, which was a complete surprise.

The recent successful stress-test trial involved 15K web pages and directories in a rapidly built (days) web UI, and the URL pattern was perfect. It will now become possible for Pipi 9 to automatically generate much larger custom workspaces without error, and on demand.

This is a great article from Ahmad Alfy in Egypt. He is totally correct in what he writes, and he helped me see the problem and patterns much more clearly.

Thank you, Ahmad. I look forward to meeting you. Maybe you could be part of the team. 😊

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

03/12/2025

Your URL Is Your State

By: Ahmad Alfy
AlfyBlog: 31/10/2025

Blog about front-end development and the web.

A couple of weeks ago, when I was publishing The Hidden Cost of URL Design, I needed to add SQL syntax highlighting. I headed to the PrismJS website, trying to remember if it should be added as a plugin or what. I was overwhelmed by the number of options on the download page, so I headed back to my code. I checked the file for PrismJS and at the top of the file, I found a comment containing a URL:

/* https://prismjs.com/download.html#themes=prism&languages=markup+css+clike+javascript+bash+css-extras+markdown+scss+sql&plugins=line-highlight+line-numbers+autolinker */

I had completely forgotten about this. I clicked the URL, and it was the PrismJS download page with every checkbox, dropdown, and option pre-selected to match my exact configuration. Themes chosen. Languages selected. Plugins enabled. Everything, perfectly reconstructed from that single URL.

It was one of those moments where something you once knew suddenly clicks again with fresh significance. Here was a URL doing far more than just pointing to a page. It was storing state, encoding intent, and making my entire setup shareable and recoverable. No database. No cookies. No localStorage. Just a URL.

This got me thinking: how often do we, as frontend engineers, overlook the URL as a state management tool? We reach for all sorts of abstractions to manage state such as global stores, contexts, and caches while ignoring one of the web’s most elegant and oldest features: the humble URL.

In my previous article, I wrote about the hidden costs of bad URL design. Today, I want to flip that perspective and talk about the immense value of good URL design. Specifically, how URLs can be treated as first-class state containers in modern web applications.

The Overlooked Power of URLs

Scott Hanselman famously said “URLs are UI” and he’s absolutely right. URLs aren’t just technical addresses that browsers use to fetch resources. They’re interfaces. They’re part of the user experience.

But URLs are more than UI. They’re state containers. Every time you craft a URL, you’re making decisions about what information to preserve, what to make shareable, and what to make bookmarkable.

Think about what URLs give us for free:

  • Shareability: Send someone a link, and they see exactly what you see
  • Bookmarkability: Save a URL, and you’ve saved a moment in time
  • Browser history: The back button just works
  • Deep linking: Jump directly into a specific application state

URLs make web applications resilient and predictable. They’re the web’s original state management solution, and they’ve been working reliably since 1991. The question isn’t whether URLs can store state. It’s whether we’re using them to their full potential.

Before we dive into examples, let’s break down how URLs encode state. Here’s a typical stateful URL:

Anatomy of a URL
Source: What is a URL - MDN Web Docs

For many years, these were considered the only components of a URL. That changed with the introduction of Text Fragments, a feature that allows linking directly to a specific piece of text within a page. You can read more about it in my article Smarter than ‘Ctrl+F’: Linking Directly to Web Page Content.

Different parts of the URL encode different types of state:

  1. Path Segments (/path/to/myfile.html). Best used for hierarchical resource navigation:
    • /users/123/posts - User 123’s posts
    • /docs/api/authentication - Documentation structure
    • /dashboard/analytics - Application sections
  2. Query Parameters (?key1=value1&key2=value2). Perfect for filters, options, and configuration:
    • ?theme=dark&lang=en - UI preferences
    • ?page=2&limit=20 - Pagination
    • ?status=active&sort=date - Data filtering
    • ?from=2025-01-01&to=2025-12-31 - Date ranges
  3. Anchor Fragment (#SomewhereInTheDocument). Ideal for client-side navigation and page sections:
    • #L20-L35 - GitHub line highlighting
    • #features - Scroll to section
    • #/dashboard - Single-page app routing (though it’s rarely used these days)
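
A quick sketch of how those parts map onto the standard URL API in the browser (the example URL is made up):

const url = new URL('https://store.com/laptops/gaming?brand=dell&sort=price-asc#reviews');

url.pathname.split('/').filter(Boolean); // ['laptops', 'gaming'] -> hierarchical navigation
url.searchParams.get('brand');           // 'dell'                -> filter state
url.searchParams.get('sort');            // 'price-asc'           -> sort state
url.hash;                                // '#reviews'            -> client-side position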

Common Patterns That Work for Query Parameters

Multiple values with delimiters

Sometimes you’ll see multiple values packed into a single key using delimiters like commas or plus signs. It’s compact and human-readable, though it requires manual parsing on the server side.

?languages=javascript+typescript+python
?tags=frontend,react,hooks

Nested or structured data

Developers often encode complex filters or configuration objects into a single query string. A simple convention uses key–value pairs separated by commas, while others serialize JSON or even Base64-encode it for safety.

?filters=status:active,owner:me,priority:high
?config=eyJyaWNrIjoicm9sbCJ9==  (base64-encoded JSON)

Boolean flags

For flags or toggles, it’s common to pass booleans explicitly or to rely on the key’s presence as truthy. This keeps URLs shorter and makes toggling features easy.

?debug=true&analytics=false
?mobile  (presence = true)

Arrays (Bracket notation)

Another old pattern is bracket notation, which represents arrays in query parameters. It originated in early web technologies like PHP, where appending [] to a parameter name signals that multiple values should be grouped together.

?tags[]=frontend&tags[]=react&tags[]=hooks
?ids[0]=42&ids[1]=73

Many modern frameworks and parsers (like Node’s qs library or Express middleware) still recognize this pattern automatically. However, it’s not officially standardized in the URL specification, so behavior can vary depending on the server or client implementation. Notice how it even breaks the syntax highlighting on my website.
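
Here is a small sketch of how these patterns can be read back with URLSearchParams (the query string is made up):

const params = new URLSearchParams(
  '?languages=javascript+typescript&debug=true&mobile&tags[]=frontend&tags[]=react'
);

// Delimited multi-values: '+' decodes to a space, so split on whitespace
const languages = (params.get('languages') || '').split(' '); // ['javascript', 'typescript']

// Boolean flags: explicit value, or key presence as truthy
const debug = params.get('debug') === 'true';
const mobile = params.has('mobile');

// Bracket notation: the literal key is 'tags[]', so use getAll
const tags = params.getAll('tags[]'); // ['frontend', 'react']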

The key is consistency. Pick patterns that make sense for your application and stick with them.

State via URL Parameters

Let’s look at real-world examples of URLs as state containers:

PrismJS Configuration

https://prismjs.com/download.html#themes=prism&languages=markup+css+clike+javascript&plugins=line-numbers

The entire syntax highlighter configuration encoded in the URL. Change anything in the UI, and the URL updates. Share the URL, and someone else gets your exact setup. This one uses anchor and not query parameters, but the concept is the same.

GitHub Line Highlighting

https://github.com/zepouet/Xee-xCode-4.5/blob/master/XeePhotoshopLoader.m#L108-L136

It links to a specific file while highlighting lines 108 through 136. Click this link anywhere, and you’ll land on the exact code section being discussed.

Google Maps

https://www.google.com/maps/@22.443842,-74.220744,19z

Coordinates, zoom level, and map type all in the URL. Share this link, and anyone can see the exact same view of the map.

Figma and Design Tools

https://www.figma.com/file/abc123/MyDesign?node-id=123:456&viewport=100,200,0.5

Before shareable design links, finding an updated screen or component in a large file was a chore. Someone had to literally show you where it lived, scrolling and zooming across layers. Today, a Figma link carries all that context like canvas position, zoom level, selected element. Literally everything needed to drop you right into the workspace.

E-commerce Filters

https://store.com/laptops?brand=dell+hp&price=500-1500&rating=4&sort=price-asc

This is one of the most common real-world patterns you’ll encounter. Every filter, sort option, and price range preserved. Users can bookmark their exact search criteria and return to it anytime. Most importantly, they can come back to it after navigating away or refreshing the page.

Frontend Engineering Patterns

Before we discuss implementation details, we need to establish a clear guideline for what should go into the URL. Not all state belongs in URLs. Here’s a simple heuristic:

Good candidates for URL state:

  • Search queries and filters
  • Pagination and sorting
  • View modes (list/grid, dark/light)
  • Date ranges and time periods
  • Selected items or active tabs
  • UI configuration that affects content
  • Feature flags and A/B test variants

Poor candidates for URL state:

  • Sensitive information (passwords, tokens, PII)
  • Temporary UI states (modal open/closed, dropdown expanded)
  • Form input in progress (unsaved changes)
  • Extremely large or complex nested data
  • High-frequency transient states (mouse position, scroll position)

If you are not sure whether a piece of state belongs in the URL, ask yourself: if someone else clicks this URL, should they see the same state? If so, it belongs in the URL. If not, use a different state management approach.

Implementation using Plain JavaScript

The modern URLSearchParams API makes URL state management straightforward:

// Reading URL parameters
const params = new URLSearchParams(window.location.search);
const view = params.get('view') || 'grid';
const page = parseInt(params.get('page') || '1', 10);
// Updating URL parameters
function updateFilters(filters) {
  const params = new URLSearchParams(window.location.search);
  // Update individual parameters
  params.set('status', filters.status);
  params.set('sort', filters.sort);
  // Update URL without page reload
  const newUrl = `${window.location.pathname}?${params.toString()}`;
  window.history.pushState({}, '', newUrl);
  // Now update your UI based on the new filters
  renderContent(filters);
}
// Handling back/forward buttons
window.addEventListener('popstate', () => {
  const params = new URLSearchParams(window.location.search);
  const filters = {
    status: params.get('status') || 'all',
    sort: params.get('sort') || 'date'
  };
  renderContent(filters);
});

The popstate event fires when the user navigates with the browser’s Back or Forward buttons. It lets you restore the UI to match the URL, which is essential for keeping your app’s state and history in sync. Usually, your framework’s router handles this for you, but it’s good to know how it works under the hood.

Implementation using React

React Router and Next.js provide hooks that make this even cleaner:


import { useSearchParams } from 'react-router-dom';
// or for Next.js 13+: import { useSearchParams } from 'next/navigation';
function ProductList() {
  const [searchParams, setSearchParams] = useSearchParams();
  // Read from URL (with defaults)
  const color = searchParams.get('color') || 'all';
  const sort = searchParams.get('sort') || 'price';
  // Update URL
  const handleColorChange = (newColor) => {
    setSearchParams(prev => {
      const params = new URLSearchParams(prev);
      params.set('color', newColor);
      return params;
    });
  };
  return (
    <div>
      <select value={color} onChange={e => handleColorChange(e.target.value)}>
        <option value="all">All Colors</option>
        <option value="silver">Silver</option>
        <option value="black">Black</option>
      </select>
      {/* Your filtered products render here */}
    </div>
  );
}

Best Practices for URL State Management

Now that we’ve seen how URLs can hold application state, let’s look at a few best practices that keep them clean, predictable, and user-friendly.

Handling Defaults Gracefully

Don’t pollute URLs with default values:

// Bad: URL gets cluttered with defaults
?theme=light&lang=en&page=1&sort=date

// Good: Only non-default values in URL
?theme=dark  // light is default, so omit it

Use defaults in your code when reading parameters:

function getTheme(params) {
  return params.get('theme') || 'light'; // Default handled in code
}

Debouncing URL Updates

For high-frequency updates (like search-as-you-type), debounce URL changes:

import { debounce } from 'lodash';
const updateSearchParam = debounce((value) => {
  const params = new URLSearchParams(window.location.search);
  if (value) {
    params.set('q', value);
  } else {
    params.delete('q');
  }
  window.history.replaceState({}, '', `?${params.toString()}`);
}, 300);
// Use replaceState instead of pushState to avoid flooding history

pushState vs. replaceState

When deciding between pushState and replaceState, think about how you want the browser history to behave. pushState creates a new history entry, which makes sense for distinct navigation actions like changing filters, pagination, or navigating to a new view — users can then use the Back button to return to the previous state. On the other hand, replaceState updates the current entry without adding a new one, making it ideal for refinements such as search-as-you-type or minor UI adjustments where you don’t want to flood the history with every keystroke.
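
In code, the difference is just which History API call you make (the URLs here are made up):

// Distinct navigation: give the new state its own history entry
history.pushState({}, '', '/products?category=laptops&page=2');

// Refinement of the current view: overwrite the current entry instead
history.replaceState({}, '', '/products?category=laptops&page=2&q=thinkpad');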

URLs as Contracts

When designed thoughtfully, URLs become more than just state containers. They become contracts between your application and its consumers. A good URL defines expectations for humans, developers, and machines alike.

Clear Boundaries

A well-structured URL draws the line between what’s public and what’s private, client and server, shareable and session-specific. It clarifies where state lives and how it should behave. Developers know what’s safe to persist, users know what they can bookmark, and machines know what’s worth indexing.

URLs, in that sense, act as interfaces: visible, predictable, and stable.

Communicating Meaning

Readable URLs explain themselves. Consider the difference between the two URLs below.

https://example.com/p?id=x7f2k&v=3
https://example.com/products/laptop?color=silver&sort=price

The first one hides intent. The second tells a story. A human can read it and understand what they’re looking at. A machine can parse it and extract meaningful structure.

Jim Nielsen calls these “examples of great URLs”. URLs that explain themselves.

Caching and Performance

URLs are cache keys. Well-designed URLs enable better caching strategies:

  • Same URL = same resource = cache hit
  • Query params define cache variations
  • CDNs can cache intelligently based on URL patterns

You can even visualize a user’s journey without any extra tracking code:

/products => selects category => /products?category=laptops => adds price filter => /products?category=laptops&price=500-1000

Your analytics tools can track this flow without additional instrumentation. Every URL parameter becomes a dimension you can analyze.

Versioning and Evolution

URLs can communicate API versions, feature flags, and experiments:

  • ?v=2                   // API version
  • ?beta=true             // Beta features
  • ?experiment=new-ui     // A/B test variant

This makes gradual rollouts and backwards compatibility much more manageable.

Anti-Patterns to Avoid

Even with the best intentions, it’s easy to misuse URL state. Here are common pitfalls:

“State Only in Memory” SPAs

The classic single-page app mistake:

// User hits refresh and loses everything
const [filters, setFilters] = useState({});

If your app forgets its state on refresh, you’re breaking one of the web’s fundamental features. Users expect URLs to preserve context. I remember a viral video from years ago where a Reddit user vented about an e-commerce site: every time she hit “Back,” all her filters disappeared. Her frustration summed it up perfectly. If users lose context, they lose patience.

Sensitive Data in URLs

This one seems obvious, but it’s worth repeating:

// NEVER DO THIS
?password=secret123

URLs are logged everywhere: browser history, server logs, analytics, referrer headers. Treat them as public.

Inconsistent or Opaque Naming

// Unclear and inconsistent
?foo=true&bar=2&x=dark
// Self-documenting and consistent
?mobile=true&page=2&theme=dark

Choose parameter names that make sense. Future you (and your team) will thank you.

Overloading URLs with Complex State

?config=eyJtZXNzYWdlIjoiZGlkIHlvdSByZWFsbHkgdHJpZWQgdG8gZGVjb2RlIHRoYXQ_IiwiZmlsdGVycyI6eyJzdGF0dXMiOlsiYWN0aXZlIiwicGVuZGluZyJdLCJwcmlvcml0eSI6WyJoaWdoIiwibWVkaXVtIl0sInRhZ3MiOlsiZnJvbnRlbmQiLCJyZWFjdCIsImhvb2tzIl0sInJhbmdlIjp7ImZyb20iOiIyMDI0LTAxLTAxIiwidG8iOiIyMDI0LTEyLTMxIn19LCJzb3J0Ijp7ImZpZWxkIjoiY3JlYXRlZEF0Iiwib3JkZXIiOiJkZXNjIn0sInBhZ2luYXRpb24iOnsicGFnZSI6MSwibGltaXQiOjIwfX0==

If you need to base64-encode a massive JSON object, the URL probably isn’t the right place for that state.

URL Length Limits

Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters) but the reality is more nuanced. As this detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you’re bumping against them, it’s a sign you need to rethink your approach.

Breaking the Back Button

// Replacing state incorrectly
history.replaceState({}, '', newUrl); // Used when pushState was needed

Respect browser history. If a user action should be “undoable” via the back button, use pushState. If it’s a refinement, use replaceState.

Closing Thought

That PrismJS URL reminded me of something important: good URLs don’t just point to content. They describe a conversation between the user and the application. They capture intent, preserve context, and enable sharing in ways that no other state management solution can match.

We’ve built increasingly sophisticated state management libraries like Redux, MobX, Zustand, Recoil and others. They all have their place but sometimes the best solution is the one that’s been there all along.

In my previous article, I wrote about the hidden costs of bad URL design. Today, we’ve explored the flip side: the immense value of good URL design. URLs aren’t just addresses. They’re state containers, user interfaces, and contracts all rolled into one.

If your app forgets its state when you hit refresh, you’re missing one of the web’s oldest and most elegant features.

Developer access to Pipi is coming

Mike's Notes

The data is precise on this one. Unfortunately, the current developer interest in NZ and Australia is 3% and 0%, respectively. I also can't find anyone in NZ who has the slightest technical understanding of what I'm doing. But there are plenty overseas, especially in MLOps. We speak the same language, even if the architecture and algorithms are radically different. Also, top-grade mathematicians get it. Internally, Pipi 9 uses a lot of maths.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

05/12/2025

Developer access to Pipi is coming

By: Mike Peters
On a Sandy Beach: 02/12/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

The problem

To launch Pipi 9 with limited resources in 2026, the effort needs to be highly focused on developers who build large enterprise systems and have experienced failure, cost overruns, and staggering complexity. This is for them.

Waste

The staggering annual global cost of IT failures on big projects is around $US 3 trillion. How many schools or knee replacements would that be?

  • 15% succeed
  • 25% make no difference
  • 60% fail

Web traffic stats

The steadily growing web traffic statistics of public interest in Pipi 9 are becoming very clear.

Since early 2019, the total traffic stats by country are:

  • Singapore 19%
  • United States 17%
  • Hong Kong 13%
  • Brazil 12%

Developer Accounts

The initial paid Developer Accounts will be restricted to experienced DevOps teams with great internal culture in those four countries. That also affects the language, currency, hours of support availability, etc. Later, as interest and resources grow, that list of countries can be expanded.

Personal Accounts

The free Personal Accounts will initially use an English interface and will not be restricted by country of residence. They will get community support.

Enterprise Accounts

The initial paid Enterprise Accounts will be supported by their associated Developer Accounts, who can charge them whatever they want for that service. They will initially use an English interface and will not be restricted by country of residence. Developer Accounts will be able to translate UI and documentation into any language and writing system.

Pipi 9 is in hiding

This engineering blog and the many other Ajabbi documentation websites are deliberately hidden from search engines. People are visiting because they are curious, as I write notes to myself and build, learning as I go. It is not easy to find the technical documentation unless you are really interested, clever and very determined. That has helped me find some early, keen technical fans who provide testing and feedback. It has also protected me from being overwhelmed by enquiries.

Communication constraints

I am a very slow writer, using assistive technology, have hearing problems and prefer video chats with people who speak good, clear English. I don't speak any other language apart from tiny bits of Maori, French and Spanish.

SEO and GEO

The SEO/GEO settings will be fixed when

  • Pipi 9 matures and becomes ready for public use
  • Community support is in place
  • Bugs fixed
  • Enough self-help documentation to help people get started.

Developer Account waitlist

There will be a signup queue to control demand, so scaling is steady with a positive resource feedback loop to solve the chicken-and-egg problem. The small queue is growing now. I will pick the best candidates with the highest chance of success. They will gain a first-mover advantage in building large, custom enterprise systems faster and at a much lower cost. The first ones will get free unlimited support. 

Relying on word-of-mouth recommendations

There will be no marketing or sales, just good, clear documentation, live demos, and regular bookable office hours (NZ daytime) for having a chat.

Inflexion point

In the future, as workspaces mature and Pipi 10 becomes even easier to work with, resource constraints will disappear, teams will grow, and an inflexion point will be reached. Anyone will then be able to sign up.

Workspaces for Research

Mike's Notes

This is where I will keep detailed working notes on creating Workspaces for Research. Eventually, these will become permanent, better-written documentation stored elsewhere. Hopefully, someone will come up with a better name than this working title.

This replaces coverage in Industry Workspace written on 13/10/2025.

Testing

The current online mockup is version 1 and will be updated frequently. If you are helping with testing, please remember to delete your browser cache so you see the daily changes. Eventually, a live demo version will be available for field trials.

Learning

Initially, Pipi 4 had a module called EcoTrack. It mirrored various existing paper tools for biodiversity sampling and measurement in NZ ecosystems. Basically, data storage of observations for any Ecological Restoration Site.

  • Trapping
  • Soil
  • Plant growth
  • Water quality
  • Microorganisms
  • Climate
  • Photo records
  • Etc

It was planned to join this data with ESRI mapping to help visualise it. ESRI provided a $NZ600,000 grant of their software for this project. This grant came after I gave a live demo at a NZ GIS User Conference, and the ESRI head of programming was in the audience. The software arrived on 2 pallets 2 weeks later.

Landcare Research, DOC and Eagle Technology were helping with this project.

Then there was a change in government, funding dried up, and the Christchurch earthquakes caused havoc. Pipi 4 died.

In 2016, when rebuilding Pipi from memory as Pipi 6, I was greatly influenced by David C. Hay's work on data models for laboratory tests. I then figured out how to add the Business Model testing experiments of Steve Blank and Alexander Osterwalder. Plus, I have a home lab for looking at bugs, running chemistry experiments, and making useful concoctions for art projects. Alex helped with how to provide literature references. Add in a catalogued reference library and seminars. So that's the origin story, starting at a basic level and slowly growing over time.

Why

Ajabbi Research will be the first user of this workspace to organise the research needed to support and improve Pipi for people to use. The workspace will also be used for testing the Researcher Account. Eventually, this workspace will be available to anyone.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

28/11/2025

Workspaces for Research

By: Mike Peters
On a Sandy Beach: 28/11/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Open-source

This open-source SaaS cloud system will be shared on GitHub and GitLab.

Dedication

This workspace is dedicated to the life and work of Jocelyn Bell Burnell, who at the age of 25 discovered Pulsars.

Bell Burnell in 2009

Source: https://en.wikipedia.org/wiki/Jocelyn_Bell_Burnell#/media/File:Launch_of_IYA_2009,_Paris_-_Grygar,_Bell_Burnell_cropped.jpg

"Dame Susan Jocelyn Bell Burnell (/bɜːrˈnΙ›l/; nΓ©e Bell; born 15 July 1943) is a Northern Irish physicist who, while conducting research for her doctorate, discovered the first radio pulsars in 1967. This discovery later earned the Nobel Prize in Physics in 1974, but she was not among the awardees.

Bell Burnell was president of the Royal Astronomical Society from 2002 to 2004, president of the Institute of Physics from October 2008 until October 2010, and interim president of the Institute following the death of her successor, Marshall Stoneham, in early 2011. She was Chancellor of the University of Dundee from 2018 to 2023.

In 2018, she was awarded the Special Breakthrough Prize in Fundamental Physics. Following the announcement of the award, she decided to use the $3 million (£2.3 million) prize money to establish a fund to help female, minority and refugee students to become research physicists. The fund is administered by the Institute of Physics.

In 2021, Bell Burnell became the second female recipient (after Dorothy Hodgkin in 1976) of the Copley Medal. In 2025, Bell Burnell's image was included on an An Post stamp celebrating women in STEM." - Wikipedia

Change Log

Ver 1 includes research.

Existing products


Features

This is a basic comparison of features in research software.

[TABLE]

Data Model

words

Database Entities

  • Facility
  • Party
  • etc

Standards

The workspace needs to comply with all international standards.

  • (To come)

Workspace navigation menu

This default outline needs a lot of work. The outline can be easily customised by future users using drag-and-drop and tick boxes to turn features off and on.

  • Enterprise Account
    • Applications
      • Facility
        • Accounts
        • Maintenance
        • Supplies
      • Library
        • Borrower
        • Collection
        • Loan
      • Publish
        • Presentation
        • Website
          • Blog
          • Wiki
      • Research Program
        • Experiment
        • Theory
    • Customer (v2)
      • Bookmarks
        • (To come)
      • Support
        • Contact
        • Forum
        • Live Chat
        • Office Hours
        • Requests
        • Tickets
      • (To come)
        • Feature Vote
        • Feedback
        • Surveys
      • Learning
        • Explanation
        • How to Guide
        • Reference
        • Tutorial
    • Settings (v3)
      • Account
      • Billing
      • Deployments
        • Workspaces
          • Modules
          • Plugins
          • Templates
            • Institute
            • Lab
            • Student
          • Users

          "You Don't Need Kafka, Just Use Postgres" Considered Harmful

          Mike's Notes

          Note

          Resources

          References

          • Reference

          Repository

          • Home > Ajabbi Research > Library >
          • Home > Handbook > 

          Last Updated

          01/12/2025

          "You Don't Need Kafka, Just Use Postgres" Considered Harmful

          By: Gunnar Morling
          Random Musings on All Things Software Engineering: 03/11/2025

          Gunnar Morling is an open-source software engineer in the Java and data streaming space. He currently works as a Technologist for Confluent. In his past role at Decodable he focused on developer outreach and helped them build their stream processing platform based on Apache Flink. Prior to that, he spent ten years at Red Hat, where he led the Debezium project, a platform for change data capture.

          Looking to make it to the front page of HackerNews? Then writing a post arguing that "Postgres is enough", or why "you don’t need Kafka at your scale" is a pretty failsafe way of achieving exactly that. No matter how often it has been discussed before, this topic is always doing well. And sure, what’s not to love about that? I mean, it has it all: Postgres, everybody’s most favorite RDBMS—​check! Keeping things lean and easy—​sure, count me in! A somewhat spicy take—​bring it on!

          The thing is, I feel all these articles kinda miss the point; Postgres and Kafka are tools designed for very different purposes, and naturally, which tool to use depends very much on the problem you actually want to solve. To me, the advice "You Don’t Need Kafka, Just Use Postgres" is doing more harm than good, leading to systems built in a less than ideal way, and I’d like to discuss why this is in more detail in this post. Before getting started though, let me get one thing out of the way really quick: this is not an anti-Postgres post. I enjoy working with Postgres as much as the next person (for those use cases it is meant for). I’ve used it in past jobs, and I’ve written about it on this blog before. No, this is a pro-"use the right tool for the job" post.

          So what’s the argument of the "You Don’t Need Kafka, Just Use Postgres" posts? Typically, they argue that Kafka is hard to run or expensive to run, or a combination thereof. When you don’t have "big data", this cost may not be justified. And if you already have Postgres as a database in your tech stack, why not keep using this, instead of adding yet another technology?

          Usually, these posts then go on to show how to use SELECT ... FOR UPDATE SKIP LOCKED for building a… job queue. Which is where things already start to make a bit less sense to me. The reason being that queuing just is not a typical use case for Kafka to begin with. It requires message-level consumer parallelism, as well as the ability to acknowledge individual messages, something Kafka historically has not supported. Now, the Kafka community actually is working towards queue support via KIP-932, but this is not quite ready for primetime yet (I took a look at that KIP earlier this year). Until then, the argument boils down to not using Kafka for something it has not been designed for in the first place. Hm, yeah, ok?
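
          For readers who have not seen it, here is a minimal sketch of the SELECT ... FOR UPDATE SKIP LOCKED pattern those posts describe, using the node-postgres client; the jobs table and column names are my own assumptions, not something from the original post.

          const { Pool } = require('pg');
          const pool = new Pool();

          // Claim one pending job, process it, and mark it done, all in one transaction.
          // Rows locked by other workers are skipped, which is what makes this usable as a queue.
          async function processNextJob(handler) {
            const client = await pool.connect();
            try {
              await client.query('BEGIN');
              const { rows } = await client.query(
                `SELECT id, payload FROM jobs
                 WHERE status = 'pending'
                 ORDER BY id
                 LIMIT 1
                 FOR UPDATE SKIP LOCKED`
              );
              if (rows.length === 0) {
                await client.query('COMMIT');
                return false; // nothing to do
              }
              await handler(rows[0].payload);
              await client.query(`UPDATE jobs SET status = 'done' WHERE id = $1`, [rows[0].id]);
              await client.query('COMMIT');
              return true;
            } catch (err) {
              await client.query('ROLLBACK');
              throw err;
            } finally {
              client.release();
            }
          }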

          That being said, building a robust queue on top of Postgres is actually harder than it may sound. Long-running transactions by queue consumers can cause MVCC bloat and WAL pile-up; Postgres' vacuum process not being able to keep up with the rate of changes can quickly become a problem for this use case. So if you want to go down that path, make sure to run representative performance tests, for a sustained period of time. You won’t find out about issues like this by running two minute tests.

          So let’s actually take a closer look at the "small scale" argument, as in "with such a low data volume, you just can use Postgres". But to use it for what exactly? What is the problem you are trying to solve? After all, Postgres and Kafka are tools designed for addressing specific use cases. One is a database, the other is an event streaming platform. Without knowing and talking about what one actually wants to achieve, the conversation boils down to "I like this tool better than that" and is pretty meaningless.

          Kafka enables a wide range of use cases such as microservices communication and data exchange, ingesting IoT sensor data, click streams, or metrics, log processing and aggregation, low-latency data pipelines between operational databases and data lakes/warehouses, and realtime stream processing, for instance for fraud detection and recommendation systems.

          So if you have one of those use cases, but at a small scale (low volume of data), could you then use Postgres instead of Kafka? And if so, does it make sense? To answer this, you need to consider the capabilities and features you get from Kafka which make it such a good fit for these applications. And while scalability indeed is one of Kafka’s core characteristics, it has many other traits which make it very attractive for event streaming applications:

          • Log semantics: At its core, Kafka is a persistent ordered event log. Records are not deleted after processing, instead they are subject to time-based retention policies or key-based compaction, or they could be retained indefinitely. Consumers can replay a topic from a given offset, or from the very beginning. If needed, consumers can work with exactly-once semantics. This goes way beyond simple queue semantics and replicating it on top of Postgres will be a substantial undertaking.
          • Fault tolerance and high availability (HA): Kafka workloads are scaled out in clusters running on multiple compute nodes. This is done for two reasons: increasing the throughput the system can handle (not relevant at small scale) and increasing reliability (very much relevant also at small scale). By replicating the data to multiple nodes, instance failures can be easily tolerated. Each node in the cluster can be a leader for a topic partition (i.e., receive writes), with another node taking over if the previous leader becomes unavailable.
          •         With Postgres in contrast, all writes go to a single node, while replicas only support read requests. A broker failover in Kafka will affect (in the form of increased latencies) only those partitions it is the leader for, whereas the failure of the Postgres primary node in a cluster is going to affect all writers. While Kafka broker failovers happen automatically, manual intervention is required in order to promote a Postgres replica to primary, or an external coordinator such as Patroni must be used. Alternatively, you might consider Postgres-compatible distributed databases such as CockroachDB, but then the conversation shifts quite a bit away from "Just use Postgres".
          • Consumer groups: One of the strengths of the Kafka protocol is its support for organizing consumers in groups. Multiple clients can distribute the load of reading the messages from a given topic, making sure that each message is processed by exactly one member of the group. Also when handling only a low volume of messages, this is very useful. For instance, consider a microservice which receives messages from another service. For the purposes of fault-tolerance, the service is scaled out to multiple instances. By configuring a Kafka consumer group for all the service instances, the incoming messages will be distributed amongst them.
          •         How would the same look when using Postgres? Considering the "small scale" scenario, you could decide that only one of the service instances should read all the messages. But which one do you select? What happens if that node fails? Some kind of leader election would be required. Ok, so let’s make each member of the application cluster consume from the topic then? For this you need to think about how to distribute the messages from the Postgres-based topic, how to handle client failures, etc. So your job now essentially is to re-implement Kafka’s consumer rebalance protocol. This is far from trivial and it certainly goes against the initial goal of keeping things simple.
          • Low latency: Let’s talk about latency, i.e. the time it takes from sending a message to a topic until it gets processed by a consumer. Having a low data volume doesn’t necessarily imply that you do not want low latency. Think about fraud detection, for example. Also when processing only a handful of transactions per second, you want to be able to spot fraudulent patterns very quickly and take action accordingly. Or a data pipeline from your operational data store to a search index. For a good user experience, search results should be based on the latest data as much as possible. With Kafka, latencies in the milli-second range can be achieved for use cases like this. Trying to do the same with Postgres would be really tough, if possible at all. You don’t want to hammer your database with queries from a herd of poll-based queue clients too often, while LISTEN/NOTIFY is known to suffer from heavy lock contention problems.
          • Connectors: One important aspect which is usually omitted from all the "Just use Postgres" posts is connectivity. When implementing data pipelines and ETL use cases, you need to get data out of your data source and put it into Kafka. From there, it needs to be propagated into all kinds of data sinks, with the same dataset oftentimes flowing into multiple sinks at once, such as a search index and a data lake. Via Kafka Connect, Kafka has a vast ecosystem of source and sink connectors, which can be combined, mix-and-match style. Taking data from MySQL into Iceberg? Easy. Going from Salesforce to Snowflake? Sure. There’s ready-made connectors for pretty much every data system under the sun.
          •         Now, what would this look like when using Postgres instead? There’s no connector ecosystem for Postgres like there is for Kafka. This makes sense, as Postgres never has been meant to be a data integration platform, but it means you’ll have to implement bespoke source and sink connectors for all the systems you want to integrate with.
          • Clients, schemas, developer experience: One last thing I want to address is the general programming model of a "Just use Postgres" event streaming solution. You might think of using SQL as the primary interface for producing and consuming messages. That sounds easy enough, but it’s also very low level. Building some sort of client will probably make sense. You may need consumer group support, as discussed above. You’ll need support for metrics and observability ("What’s my consumer lag?"). How do you actually go about converting your events into a persistent format? Some kind of serializer/deserializer infrastructure will be needed, and while at it, you probably should have support for schema management and evolution, too. What about DLQ support? With Kafka and its ecosystem, you get battle-proven clients and tooling, which will help you with all that, for all kinds of programming languages. You could rebuild all this, of course, but it would take a long time and essentially equate to recreating large parts of Kafka and its ecosystem.
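
          Coming back to the consumer-group point above, here is a minimal sketch using the kafkajs client: run several instances of this process with the same groupId and Kafka distributes the topic's partitions, and therefore the messages, among them. The topic and group names are made up; this is not from the original post.

          const { Kafka } = require('kafkajs');

          const kafka = new Kafka({ clientId: 'order-service', brokers: ['localhost:9092'] });
          const consumer = kafka.consumer({ groupId: 'order-service' }); // same groupId on every instance

          async function run() {
            await consumer.connect();
            await consumer.subscribe({ topic: 'orders', fromBeginning: true });
            await consumer.run({
              // Each message is handled by exactly one member of the group;
              // if an instance dies, its partitions are rebalanced to the others.
              eachMessage: async ({ topic, partition, message }) => {
                console.log(`${topic}[${partition}] @${message.offset}: ${message.value.toString()}`);
              }
            });
          }

          run().catch(console.error);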

          So where does all that leave us? Should you use Postgres as a job queue then? I mean, why not, if it fits the bill for you, go for it. Don’t build it yourself though, use an existing extension like pgmq. And make sure to understand the potential implications on MVCC bloat and vacuuming discussed above.

          Now, when it comes to using Postgres instead of Kafka as an event streaming platform, this proposition just doesn’t make an awful lot of sense to me, no matter what the volume of the data is going to be. There’s so much more to event streaming than what’s typically discussed in the "Just use Postgres" posts; while you might be able to punt some of the challenges for some time, you’ll eventually find yourself in the business of rebuilding your own version of Kafka, on top of Postgres. But what’s the point of recreating and maintaining the work already done by hundreds of contributors in the course of many years? What starts as an effort to "keep things simple" actually creates a substantial amount of unnecessary complexity. Solving this challenge might sound like a lot of fun purely from an engineering perspective, but for most organizations out there, it’s probably just not the right problem they should focus on.

          Another problem of the "small scale" argument is that what’s a low data volume today may be a much bigger volume next week. This is a trade-off, of course, but a common piece of advice is to build your systems for the current and the next order of magnitude of load: you should be able to sustain 10x of your current load and data volume as your business grows. This will be easily doable with Kafka which has been designed with scalability at its core, but it may be much harder for a queue implementation based on Postgres. It is single-writer as discussed above, so you’d have to look at scaling up, which becomes really expensive really quickly. So you might decide to migrate to Kafka eventually, which will be a substantial effort when thinking of migrating data, moving your applications from your home-grown clients to Kafka, etc.

          In the end, it all comes down to choosing the right tool for the job. Use Postgres if you want to manage and query a relational data set. Use Kafka if you need to implement realtime event streaming use cases. Which means, yes, oftentimes, it actually makes sense to work with both tools as part of your overall solution: Postgres for managing a service’s internal state, and Kafka for exchanging data and events with other services. Rather than trying to emulate one with the other, use each one for its specific strengths. How to keep both Postgres and Kafka in sync in this scenario? Change data capture, and in particular the outbox pattern can help there. So if there is a place for "Postgres over Kafka", it is actually here: for many cases it makes sense to write to Kafka not directly, but through your database, and then to emit events to Kafka via CDC, using tools such as Debezium. That way, both resources are (eventually) consistent, keeping things very simple from an application developer perspective.
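
          A minimal sketch of the outbox write described above, again with node-postgres and made-up table names: the business row and the event row are committed in one transaction, and a CDC tool such as Debezium streams the outbox table into Kafka.

          // Write the state change and the event in the same transaction;
          // Debezium (or another CDC tool) then publishes the outbox rows to Kafka.
          async function placeOrder(client, order) {
            await client.query('BEGIN');
            try {
              await client.query(
                'INSERT INTO orders (id, customer_id, total) VALUES ($1, $2, $3)',
                [order.id, order.customerId, order.total]
              );
              await client.query(
                'INSERT INTO outbox (aggregate_id, event_type, payload) VALUES ($1, $2, $3)',
                [order.id, 'OrderPlaced', JSON.stringify(order)]
              );
              await client.query('COMMIT');
            } catch (err) {
              await client.query('ROLLBACK');
              throw err;
            }
          }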

          This approach also has the benefit of decoupling (and protecting) your operational datastore from the potential impact of downstream event consumers. You probably don’t want to be at the risk of increased tail latencies of your operational REST API because there’s a data lake ingest process, perhaps owned by another team, which happens to reread an entire topic from a table in your service’s database at the wrong time. Adhering to the idea of the synchrony budget, it makes sense to separate the systems for addressing these different concerns.

          What about the operational overhead then? While this definitely warrants consideration, I believe that oftentimes that concern is overblown. Running Kafka for small data sets really isn’t that hard. With the move from ZooKeeper to KRaft mode, running a single Kafka instance is trivial for scenarios not requiring fault tolerance. Managed services make running Kafka a very uneventful experience (pun intended) and should be the first choice, in particular when setting out with low scale use cases. Cost will be manageable kinda by definition by virtue of having a low volume of data. Plus, the time and effort for solving all the issues with a custom implementation discussed above should be part of the TCO consideration to be useful.

          So yes, if you want to make it to the front page of HackerNews, arguing that "Postgres is enough" may get you there; but if you actually want to solve your real-world problems in an effective and robust way, make sure to understand the sweet spots and limitations of your tools and use the right one for the job.