"Collaboration" is bullshit

Mike's Notes

Hell, this reminds me of innovation theatre. 😎

I love the honesty.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Amazing CTO
  • Home > Handbook > 

Last Updated

13/04/2026

"Collaboration" is bullshit

By: J.A. Westenberg
Westenberg: 22/03/2026

I'm JA Westenberg. I publish a weekly column on technology, culture, philosophy and what it means to be a human being.

In 1944, the Wehrmacht launched Hitler’s last-ditch effort to save the Third Reich. The Battle of the Bulge was a doomed campaign and a doomed gamble from a doomed regime, but its brutality was a true second test of the US Army on the Western Front. During the battle, Army historian S.L.A. Marshall began interviewing infantry companies that had been baptised in combat. Published three years later in his 1947 book, Men Against Fire, Marshall’s research showed that just 15-20% of riflemen in active combat positions ever fired their weapons; most kept their heads down. They moved when they were ordered, they held their positions, and they mimicked the outward appearance of a soldier in battle - but shoot, they did not. By any standard organisational metric, the men were present and accounted for, yet four out of five never pulled the trigger.

You can debate the extent of Marshall’s numbers, and you can debate his methodology, but his ratio shows up again and again. IBM stumbled onto it in the ‘60s when they discovered that 80% of computer usage came from 20% of the system’s features. The pattern recurs because it describes something real about how effort is distributed inside groups, where a fraction of the people do most of the work and the rest provide what you might charitably call “structural support.”

Anyone who has worked in any large organisation knows exactly what I’m talking about. 

The modern tech industry looked at the problem of human coordination and participation and decided the solution was “collaboration.” If only 20% of us are operating with a “killer instinct” we need to be better at managing the shared instincts of the other 80%. And so collaboration became our shared obsession. We pursue “teamwork” as a holy grail. 

The teamwork revolution, if you can call it that, gave us Notion for our documents, ClickUp for our tasks, Slack for our conversations, Jira for our tickets, Monday for our boards, Teams for the calls that should have been emails, emails for the things we couldn’t squeeze in anywhere else, and now agents attempting to re-invent the whole stack. The average knowledge worker maintains accounts across system after system, switching between applications hundreds of times per day. And they produce, in aggregate, a staggering amount of coordinated and collaborative activity that never actually becomes anything resembling output.

When you strip away the product marketing and the dev relations and the blog posts and the funding rounds and the fuckery-upon-fuckery of it all, we’re left with a simulation of collective engagement - but very little else. Transparency got confused with progress, visibility got confused with accountability, and being included in the thread became the same thing, socially and organizationally, as owning the outcome.

Once that confusion set in at the cultural level it became nearly impossible to dislodge. The feeling of collaboration is pleasant in a way that personal accountability can never be. Owning something means you, specifically and visibly you, can fail at it, specifically and visibly, in ways that attach to your name.

Collaborating means the failure belongs to the process.

So everyone chose collaboration, and we called it culture.

Marshall's riflemen were ordinary people responding to the diffusion of responsibility that happens inside any group. Maximilien Ringelmann measured the same phenomenon with ropes in 1913, long before there were Slack workspaces to offer an emoji-react to it. Individual effort drops predictably as group size increases. The presence of others dissolves the sense of personal responsibility in a way that feels, to everyone experiencing it, entirely reasonable. You're part of a team, you're contributing, you're also (measurably) pulling less hard than you would if the rope were yours alone. Every single person on the rope is doing this simultaneously, which is why the total force never adds up the way the headcount says it should.

Frederick Brooks identified the same dynamic in software development in 1975, watching IBM's System/360 project illustrate his emerging thesis that adding people to a late project makes it later. Communication overhead grows faster than headcount, coordination costs compound, and every new person contributes their capacity along with their relationships to everyone else. Those relationships require maintenance and produce misalignment and generate the need for more meetings to address the misalignment those meetings created.

Brooks might as well have described your company's Q3 roadmap planning cycle and your startup's sprint retrospective, both of which have gotten longer every year and produced, relative to their investment, less.

The collaboration industry has spent a fortune obscuring a dirty truth: most complex, high-quality work is done by individuals or very small groups operating with clear authority and sharp accountability, then rationalized into the language of teamwork afterward. Dostoevsky wrote _The Brothers Karamazov_ alone. The Apollo Guidance Computer came from a team at MIT small enough to have real ownership, hierarchical enough that Margaret Hamilton's name could go on the error-detection routines she personally designed.

Communication matters, and shared context matters. But there’s a huge difference between communication and collaboration as infrastructure to support individual, high-agency ownership, and communication and collaboration as the primary activity of an organisation. Which, if we’re honest, is what most collaboration-first cultures have actually built. They’ve constructed extraordinarily sophisticated machinery for the social management of work, without actually doing the work they’re socialising about. 

If and when it exists, ownership looks like an individual who deeply gives a shit, making a call without waiting for group consensus. That individual will be right sometimes, and they’ll be wrong other times, and they’ll own it. They won’t sit around waiting to find out who has the authority to move a card from one column to another and post about it in the #celebrations channel.

But being that person sucks when “collaboration” is the reigning value, because every unilateral decision gets read as a cultural violation and a signal that you aren’t a team player. Collaboration-as-ideology has made ownership and responsibility feel antisocial, which is a hell of a thing, given that ownership is the only mechanism that gets anything across the finish line. 

You can see this excess everywhere. Standups where people announce their busywork and, as long as everyone’s “on the same page”, nobody changes course. Documents that are written to perform thinking so somebody else can perform thinking, with no decision in sight. Retros, and kickoffs, and WIP meetings that spawn their own retros, kickoffs and WIP meetings like cells dividing and re-dividing, with zero connection to the work they’re nominally organising around.

Every project now seems to carry more coordination overhead than execution time, and when it fails the postmortem just recommends more collaboration...

At some point (and I think that point was fucking yesterday) we have to ask ourselves - what are we actually producing and who is actually responsible for producing it? 

Because at some level, the answer for “who is responsible for X” has to be one single person, no matter how much the collaborative apparatus layered over modern work has been engineered to make that person invisible and dissolve accountability. 

We need to find some path back to trusting that individuals will do their jobs, without every responsibility being visible to an entire organisation, without follow-ups being scheduled by a cadre of overpaid managers with their overfed metrics. 

Maybe - just maybe - we could make our lives a little easier. Maybe we could let human beings keep their own lists of tasks, and we could let them sink or swim by how they manage those tasks, and we could assign blame to them and to them alone when they fuck up. Maybe we could do it without needing to have team-level views of every Kanban, calendar and task list. And maybe - if we let go of the warm, expensive fiction of collective endeavour - we could make it a little easier to see who among us are pulling the trigger and who are just keeping their heads down. 

Changes to the Pipi System Engine (sys) data model

Mike's Notes

The next job is to write the 6 configuration files for each Pipi Nest.

A big thanks is owed to Ben Nadel for the sample code he shared on his CFML blog, which explained some ways to do this.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

13/04/2026

Changes to the Pipi System Engine (sys) data model

By: Mike Peters
On a Sandy Beach: 12/04/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Changes this week to how Pipi is organised in the data centre continue.

This job is to write the 6 base configuration files for each Pipi Nest. Because the recent code tests were successful, most of the time will be spent documenting. This should only need to be done once. The 6 files can be done sequentially, starting from the root.

Pipi Nest

The fundamental organising principle is to use a uniquely named "Pipi Nest" directory to host Pipi.

Every nest has these properties:

  • One Pipi major version
  • One Pipi edition
  • One account type
  • Can host either
    • One or more accounts, OR
    • One codebase
      • with one or more Pipi instances.

Account

Every customer or user has an account, which is opened when they sign up.

A customer account has these properties:

  • One account type.
  • Contains one or more deployments.

Pipi Instance

Each Pipi instance has these properties:

  • One account name, e.g., "pipiupdate123".
  • Shares a codebase, e.g., "loki".

Deployment

A deployment has these properties:

  • One deployment tenancy type.
  • One language.
  • Contains one or more deployment objects.
  • Can contain other deployments to create global settings for an account (Enterprise or DevOps).

Revised Configuration Hierarchy

Child files inherit properties from parent files and can also override them.

  • Pipi Nest > Account > Deployment > Deployment Object > Publication > Website > Workspace.
  • Pipi Nest > Codebase > Pipi Instance.
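The inherit-then-override rule can be sketched with plain Python dictionaries. This is an illustrative model only, not Pipi's actual CFML mechanism, and the level and setting names used here are hypothetical:

```python
from collections import ChainMap

# One dict per level of the hierarchy; the setting names are invented examples.
nest       = {"os": "linux", "java": "21", "datasource": "nest_default"}
account    = {"datasource": "acct_pg"}   # child overrides the nest's value
deployment = {"language": "en-NZ"}       # child adds a deployment-level setting

# ChainMap resolves keys child-first, so a child value shadows its parents.
effective = ChainMap(deployment, account, nest)

print(effective["datasource"])  # "acct_pg" - overridden at the account level
print(effective["os"])          # "linux"   - inherited from the nest
```

Lookup is child-first, so a deployment setting shadows an account setting, which in turn shadows the nest default.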

Configuration files

Pipi uses a hierarchy of CFML configuration files to set system properties.

<pipi nest>/
  • pipi/
    • Application.cfc [1]
    • pipi_nest.cfm
    • <name>/
      • Application.cfc [2]
      • pipi_account.cfm
      • pip/
        • Application.cfc [3]
        • pipi_xxx.cfm

Configuration notes

The 3 different Application.cfc files are common across all Pipi Nests and don't ever change.

Application.cfc [1] defines Pipi Nest variables

  • OS
  • Java environment
  • Platform-appropriate absolute physical path
  • Nest datasources

Application.cfc [2] defines Name variables

  • Account name
  • Deployments
  • Pipi Instances

Application.cfc [3] defines Pipi Codebase variables

  • Version
  • Edition
  • State

The .cfm files contain specific local configuration variables that can be directly edited by Pipi.

  • pipi_nest.cfm
  • pipi_account.cfm
  • pipi_xxx.cfm

    Example

    This is the list of 6 configuration files for the Pipi Nest 9cc/

    • 9cc/pipi/Application.cfc
    • 9cc/pipi/pipi_nest.cfm
    • 9cc/pipi/loki/Application.cfc
    • 9cc/pipi/loki/pipi_account.cfm
    • 9cc/pipi/loki/pip/Application.cfc
    • 9cc/pipi/loki/pip/pipi_xxx.cfm

    Pipi Nest

    Mike's Notes

    Thursday, 09/04/2026 (NZ time) was the longest day. I couldn't make a single mistake, so I turned off the phone and the internet to concentrate from 8am to 9pm, then slept like a log. I was a walking zombie yesterday. I'm glad this will never need to be done again.

    This is the successful culmination of months of mentally challenging preparatory work for the new Pipi Core data centre.

    A typical day for me is

    • 50% learning
    • 40% thinking
    • 10% doing

    The less I do, the more productive the solutions. The faster the progress.

    Speed will come from Pipi automation, not Mike working harder.

    Resources

    References

    • Reference

    Repository

    • Home > Ajabbi Research > Library >
    • Home > Handbook > 

    Last Updated

    11/04/2026

    Pipi Nest

    By: Mike Peters
    On a Sandy Beach: 10/04/2026

    Mike is the inventor and architect of Pipi and the founder of Ajabbi.

    Inside the Pipi Core data centre, the careful migration and reorganisation of hundreds of thousands of Pipi-related files was begun and completed yesterday. Each file or directory was placed in only one Pipi Nest, and then the historical Pipi Nests were converted into zip archives.

    It is a good solution, but it took a month of trial and error to figure out. It had to be 100% correct to enable rapid, reliable data centre automation, self-managed by Pipi.

    It was fascinating to watch the recent Lex Fridman interview with NVIDIA CEO Jensen Huang, who described NVIDIA's large-scale problem-solving process.

    Problems to solve

    A large number of problems had to be solved in parallel. Each problem affected the others.

    • Where to put self-organising Pipi swarms.
    • Pipi instances
      • Naming
      • Self-evolving
      • Versioning
    • How to use DevOps automation with all of the above.
    • The knock-on effects on
      • Namespaces
      • Backups
      • Replication
      • Accounts
      • Workspaces
      • i18n
      • Web URLs
      • Developer documentation
      • Training
      • etc.
    • Ensuring 100% security and privacy
    • Cross-platform and environment portability

    This has caused some changes to the underlying Pipi System Engine (sys) data model, which I will write about tomorrow.

    Pipi Nest

    The fundamental organising principle is to use a uniquely named directory, now called a "Pipi Nest".

    The unique name is a string combination made of

    • <pipi major version> (integer)
    • <pipi edition> (single lowercase letter)
    • <account type> (single lowercase letter)

    Data Centre / DevOps

    Here are some examples.

    • 6pg/
    • 9ae/
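    Composing and splitting such a name can be sketched as follows; a hypothetical helper that assumes the layout is exactly leading digits followed by two lowercase letters, as in the examples above:

```python
import re

def make_nest_name(major_version: int, edition: str, account_type: str) -> str:
    """Compose a Pipi Nest directory name, e.g. (9, "a", "e") -> "9ae"."""
    if not (len(edition) == 1 and edition.islower()):
        raise ValueError("edition must be a single lowercase letter")
    if not (len(account_type) == 1 and account_type.islower()):
        raise ValueError("account type must be a single lowercase letter")
    return f"{major_version}{edition}{account_type}"

def parse_nest_name(name: str) -> tuple[int, str, str]:
    """Split a nest name back into (major version, edition, account type)."""
    m = re.fullmatch(r"(\d+)([a-z])([a-z])", name)
    if m is None:
        raise ValueError(f"not a valid Pipi Nest name: {name!r}")
    return int(m.group(1)), m.group(2), m.group(3)
```

    For example, parse_nest_name("6pg") yields major version 6, edition "p", account type "g".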

    Backups

    Customer backup examples.

    • pipi_9ae_ajabbi_data_pg_20260411
    • pipi_9ae_ajabbi_pipi_www_learn.ajabbi.com_20260411

    Archiving

    Here are some examples.

    • pipi_6pg_20180219.zip
    • pipi_9ae_20241211.zip

    Next job

    Now, I can start on configuring the files and settings for each Pipi Nest. The necessary code has already been successfully tested. Eventually, this process will be completely automated.

    • Application.cfc
    • server.xml
    • Datasources
    • OS environment
    • Java
    • Application server
    • Cloud platform
    • REPL

    Speed is King

    Then, each Pipi will be back in business and can be left running 24x7 in its own Pipi Nest, thereby increasing DevOps speed by at least 10x. The priority now is to increase Pipi's Data Centre speed by 1000x using automation and keep going. So what previously took a year can be done in an hour. The deadline is June 2026.

    Future customers

    The increase in speed will also directly benefit all future customers using the SaaS workspace applications. Deployments, configuration, and updates will also get the same 1000x speed increases at no extra cost.

    Google Researchers Propose Bayesian Teaching Method for Large Language Models

    Mike's Notes

    This is a rather cool idea.

    Resources

    References

    • Reference

    Repository

    • Home > Ajabbi Research > Library > Subscriptions > InfoQ
    • Home > Handbook > 

    Last Updated

    10/04/2026

    Google Researchers Propose Bayesian Teaching Method for Large Language Models

    By: Daniel Dominguez
    InfoQ: 14/03/2026

    Daniel is the Managing Partner at SamXLabs, an AWS Partner Network company. He has over 13 years of experience in software product development for startups and Fortune 500 companies. Daniel holds a degree in Engineering and a Machine Learning specialisation from the University of Washington. He is passionate about leveraging AI and cloud computing to create innovative solutions. As an AWS Community Builder in the Machine Learning tier, Daniel is committed to sharing knowledge and driving innovation in software products.

    Google Researchers have proposed a training method that teaches large language models to approximate Bayesian reasoning by learning from the predictions of an optimal Bayesian system. The approach focuses on improving how models update beliefs as they receive new information during multi-step interactions.

    The study examines how language models update beliefs when interacting with users over time. In many real-world applications, such as recommendation systems, models need to infer user preferences gradually based on new information. Bayesian inference provides a mathematical framework for updating probabilities as new evidence becomes available. The researchers investigated whether language models behave in ways consistent with Bayesian belief updates and explored training methods to improve that behavior.

    To evaluate this, the team created a simulated flight recommendation task. In the experiment, a model interacted with a simulated user for five rounds. In each round, the assistant and user were shown three flight options defined by departure time, duration, number of stops, and price. Each simulated user had hidden preferences for these attributes. After each recommendation, the user indicated whether the assistant selected the correct option and revealed the preferred flight. The assistant was expected to use this feedback to improve future recommendations.

    The researchers compared several language models with a Bayesian assistant that maintains a probability distribution over possible user preferences and updates it using Bayes’ rule after each interaction. In the experiment, the Bayesian assistant reached about 81% accuracy in selecting the correct option. Language models performed worse and often showed limited improvement after the first interaction, suggesting that they did not effectively update their internal estimates of user preferences.
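    The assistant's update loop is ordinary Bayesian filtering over a hypothesis set. A minimal sketch, assuming a small discrete set of candidate preference vectors and a softmax choice model (both are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

# Candidate user-preference hypotheses: weight vectors over the four flight
# attributes (departure time, duration, stops, price). The specific values
# and the softmax choice model are illustrative assumptions.
HYPOTHESES = np.array([
    [0.7, 0.1, 0.1, 0.1],  # user mostly cares about departure time
    [0.1, 0.7, 0.1, 0.1],  # ... about duration
    [0.1, 0.1, 0.7, 0.1],  # ... about number of stops
    [0.1, 0.1, 0.1, 0.7],  # ... about price
])

def likelihood(chosen, options, w, beta=5.0):
    """P(user picks option `chosen` | preferences w), via softmax over utilities."""
    utilities = options @ w                  # one scalar utility per option
    p = np.exp(beta * utilities)
    return (p / p.sum())[chosen]

def update(belief, chosen, options):
    """One Bayes-rule step: posterior is proportional to likelihood times prior."""
    post = belief * np.array([likelihood(chosen, options, w) for w in HYPOTHESES])
    return post / post.sum()

def recommend(belief, options):
    """Pick the option with the highest expected utility under the current belief."""
    mean_w = belief @ HYPOTHESES             # belief-weighted preference vector
    return int(np.argmax(options @ mean_w))
```

    After each round, the assistant calls update() with the flight the user revealed as preferred, then recommend() for the next round. Early recommendations can be wrong simply because belief is still spread across hypotheses, which is exactly the uncertainty-aware behaviour Bayesian teaching asks the language model to imitate.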

    The study then tested a training approach called Bayesian teaching. Instead of learning only from correct answers, models were trained to imitate the predictions of the Bayesian assistant during simulated interactions. In early rounds, the Bayesian assistant sometimes made incorrect recommendations due to uncertainty about the user’s preferences, but its decisions reflected probabilistic reasoning based on the available evidence.

    The image below shows the recommendation accuracy of Gemma and Qwen after fine-tuning on user interactions with the Bayesian assistant or with an oracle.

    The training data for supervised fine-tuning consisted of simulated conversations between users and the Bayesian assistant. For comparison, the researchers tested a method in which the model learned from an assistant that always selected the correct option because it had perfect knowledge of the user’s preferences.
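    Schematically, Bayesian teaching is supervised fine-tuning whose targets come from the Bayesian assistant rather than from an oracle. A hypothetical sketch of the dataset construction (the field names are invented for illustration):

```python
def build_sft_examples(simulated_rounds):
    """Turn simulated interaction rounds into (prompt, completion) SFT pairs.

    Each round is a dict holding the dialogue so far and the flight the
    Bayesian assistant chose at that point; both field names are invented.
    """
    examples = []
    for rnd in simulated_rounds:
        examples.append({
            "prompt": rnd["history"] + "\nRecommend a flight:",
            # The target may be *wrong* in early rounds; the model is meant to
            # imitate the assistant's uncertainty-aware choice, not the
            # eventually-correct answer.
            "completion": rnd["bayesian_choice"],
        })
    return examples
```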

    Both fine-tuning approaches improved model performance, but Bayesian teaching produced better results. Models trained with this method made predictions that more closely matched those of the Bayesian assistant and demonstrated stronger improvement across multiple interaction rounds. The trained models also showed higher agreement with the Bayesian system when evaluating user choices.

    Community reactions to the Google Research post were largely positive, with commenters highlighting improved probabilistic reasoning and multi-turn adaptation in LLMs. 

    Software developer Yann Kronberg commented:

    People talk about reasoning benchmarks, but this is basically about belief updates. We know that most LLMs don’t revise their internal assumptions well after new information arrives, so @GoogleResearch teaching them to approximate Bayesian inference could matter a lot for long-running agents.

    Some also questioned the use of supervised fine-tuning rather than reinforcement learning to approximate Bayesian inference.

    Researcher Aidan Li asked:

    Why did the authors use SFT instead of RL to train the model to approximate probabilistic inference? There is a wealth of work relating RL and probabilistic inference, even for LLMs. Maybe I'm missing something but RL seems like the obvious choice.

    The researchers describe the method as a form of model distillation in which a neural network learns to approximate the behaviour of a symbolic system implementing Bayesian inference. The results suggest that language models can acquire probabilistic reasoning skills through post-training on demonstrations of optimal decision strategies during sequential interactions.

    A Comprehensive Analysis of Palantir’s Forward Deployed Engineering Model

    Mike's Notes

    A fascinating article by Diogo Santo, about Palantir, with many lessons for implementing enterprise systems.

    I figured out a very long time ago that large system complexity had to be embraced, not minimised, and proceeded with building Pipi on that basis.

    As it turns out, Pipi and Palantir have some similarities due to convergent evolution. For example, Pipi is also ontology-driven and a multi-industry platform.

    I agree 100% with what Diogo writes.

    "Enterprise software fails because software vendors refuse to become students of the institutions they're trying to change

    The FDE model is not a service delivery strategy that happens to look like product development. It is a product development strategy that looks like services from the outside.

    The real insight is that institutional complexity is not a problem to be minimized. It is the environment that the product has to live in, and the only way to build something that functions in that environment is to understand it from the inside. The gravel road to paved highway is not about customization, it is about ground truth. The Echo/Delta team formation is not about coverage, it’s about complementary perspectives on action and reality. The meritocracy of outcomes is not a culture value, it is a selection mechanism for the specific kind of intelligence that institutional embedding requires." - Diogo Santo

    Resources

    References

    • Reference

    Repository

    • Home > Ajabbi Research > Library > Subscriptions > Vertical AI Advantage
    • Home > Handbook > 

    Last Updated

    09/04/2026

    A Comprehensive Analysis of Palantir’s Forward Deployed Engineering Model

    By: Diogo Santo
    Vertical AI Advantage: 07/04/2026

    Helping senior consultants build specialized consulting practices that outcompete the big firms | Author | Senior Director of Data & AI @ Fujitsu.

    Most enterprise AI is still trying to solve from the outside what Palantir figured out can only be solved from within.

    Note to the Reader:

    If this article feels extensive or you’re short on time, you can skip to the Key Takeaways at the end for a concise summary.

    At its core, this piece explores how Palantir transformed enterprise software delivery by embedding engineers directly inside complex institutions. It shows why traditional product discovery often fails, how team structure and talent selection drive real impact, and what SaaS companies or consulting firms can learn about building solutions that actually work in the messy realities of organizations. The lessons here are about turning field experience into product insight, creating sustainable transformation, and bridging the gap between technical capability and operational reality.

    In September 2001, the U.S. intelligence community had more data than it had ever collected in its history. It had analysts who were extraordinarily skilled at interpreting fragments of information. What it did not have was the ability to make those two things talk to each other. The CIA had its systems. The NSA had its systems. The FBI had its systems. None of them could see what the others were seeing.

    Palantir was founded, in part, to solve that specific problem. And the solution they arrived at was not a better algorithm or a cleaner data model. It was a kind of person. An engineer who would go inside, stay for months or years, and not leave until the institution’s data reflected its operational reality.

    The rest of the software industry looked at this and called it services. Palantir looked at it and called it product discovery.

    That gap in interpretation is still producing winners and losers twenty years later.

    The reality about enterprise software is that most of it does not actually get used.

    Enterprise software is not used in the scenarios the product managers imagined. Somewhere between the demo environment and the production floor, between the quarterly business review and the analyst’s actual morning, the software encounters the institution, an organization with its own dynamics, and the institution wins.

    This is the problem everyone in enterprise software knows. We all know, as well, that the current model of software delivery is structurally broken.

    The conventional delivery model works roughly like this: a product team defines requirements through interviews and usage data, builds a product in a controlled environment, and deploys it to a customer who is then responsible for adoption. The product team is intelligent and well-intentioned. The customer is usually paying significant money and is sincerely motivated to adopt. And yet the gap between capability and actual use remains wide, across every sector, every company size, every level of leadership commitment.

    The surface explanation is change management. The real explanation is deeper than that.

    The product team, building from the outside, has an approximate model of its customer. They know what their customer says it does. They know what their customer believes it does. And although all of that might be true, the operational reality tends to drift from that picture, because of existing culture, legacy systems, undocumented errors and code, and misaligned incentives, among other things.

    Palantir’s founders understood this because their first customer, the CIA, made the conventional discovery process not just impractical but structurally impossible. Analysts couldn’t describe their workflows to an external vendor. They couldn’t share their data. Requirements couldn’t be fixed because threats evolved daily. And none of this could be managed by signing an NDA.

    How could they build an impactful solution and deliver real outcomes in such conditions?

    To solve this, Palantir built a model that placed the engineer inside the illegibility. Not to study it from a safe distance, but to operate from within. To build under actual constraints rather than imagined ones. To treat operational complexity not as a blocker to be engineered around, but as the environment in which the product had to live.

    By 2016, Palantir employed more forward-deployed engineers than traditional software engineers. That ratio was not just strategy, but a profound realization that without truly understanding the day-to-day operational complexities of their customers, the success of their product could not be guaranteed.

    The System Behind Palantir’s Scalable Deployment Engine

    The first structural insight was the Field-Driven Productization. Forward-deployed engineers built rough, tactical, client-specific solutions. Quick fixes on unstable pipelines. Workarounds for undocumented APIs. Hacks for data schemas that are not fully mapped. The priority was not architectural elegance. The priority was that the analyst (their customer) could do her job today, under the actual conditions of her actual job.

    Meanwhile, the core engineering team was looking for patterns. When entity resolution appeared as a problem at one government agency and then at a pharmaceutical company and then at a financial institution, in different forms, with different surface characteristics, but with the same structural core, it got abstracted into a reusable primitive and pulled upstream into the platform.

    The field work existed to generate the product, not just to generate revenue.

    That distinction matters more than it might appear. A consulting firm builds something once for one client and bills for the hours. Palantir builds something once for one client, watches it fail in interesting ways, and turns the failure into platform infrastructure. Every engagement was, functionally, an R&D investment that paid in operational insight. The deployment cost per customer declined as the platform matured. The advantage compounded.

    The second structural insight was the team formation. Palantir didn’t send just a single engineer into a client environment. It sent two distinct profiles:

    • The Delta — the Forward Deployed Engineer — writes production-grade code. Data pipelines. Ontology modeling. AI agent design. They pass the same technical interview as Palantir’s core product architects and engineers. They are not solution consultants. They are engineers who happen to be working inside a customer instead of a corporate campus. They have the profile of a scrappy startup CTO: technically deep, comfortable with ambiguity, able to navigate a broken data environment and produce something that functions. A Delta might spend three months designing a pipeline that routes unknown fields to a dataset and alerts on contract violations just because they’ve spent enough time inside organizations to understand what happens when this isn’t there. They understand that foundational work is key to making downstream work function.
    • The Echo — the Deployment Strategist — is usually not a software engineer. They are former military officers. Former clinicians. Former forensic accountants. People with specific domain knowledge. People who understand how institutions actually work. Which departments carry unspoken adversarial relationships, which data is politically untouchable, which workflow has been broken for a decade and quietly worked around by everyone who knows better than to escalate it. The Echo translates mission reality into technical requirements. They own the relationship. They own adoption. They own the long-term durability of what the Delta has built. When the Delta’s pipeline is technically complete, the Echo is the one who understands why three departments won’t use it and what change management it will take to make them start.

    The tension between these two profiles is the point. A Delta left alone builds something technically correct and operationally irrelevant. An Echo left alone generates beautifully aligned strategy with nothing tangible or concrete behind it. This team of two is designed so that both pressures are always present, always competing, always correcting each other.

    The third structural insight is about what kind of person makes this model work at all. Palantir got particularly interested in free thinkers and independents motivated by the problem rather than the org chart. The willingness to eat pain — to stay inside a broken institution long enough to actually understand it — is not a competency that traditional consulting careers develop or reward. The meritocracy is built around outcomes, not credentials, and that selection effect ripples through everything the company builds.

    You cannot hire your way into this model with standard enterprise talent. The profile that makes it work is specific, somewhat contrarian, and deeply uncomfortable in environments that measure outputs rather than outcomes. Most companies that have tried to replicate the FDE approach fail not at the structural level, but at the hiring level. They send consultants who are limited in how far they can challenge the status quo—people who prioritize keeping the client comfortable, echoing what the customer wants rather than addressing what will actually move the needle. Palantir, by contrast, hires engineers and operators who are pragmatic, willing to confront messy realities, and focused on delivering real transformation for clients who are ready to change, not just preserving political niceties or operating within a scripted theater.

    Enterprise software fails because software vendors refuse to become students of the institutions they're trying to change

    The FDE model is not a service delivery strategy that happens to look like product development. It is a product development strategy that looks like services from the outside.

    The real insight is that institutional complexity is not a problem to be minimized. It is the environment that the product has to live in, and the only way to build something that functions in that environment is to understand it from the inside. Turning the gravel road into a paved highway is not about customization; it is about ground truth. The Echo/Delta team formation is not about coverage; it is about complementary perspectives on action and reality. The meritocracy of outcomes is not a culture value; it is a selection mechanism for the specific kind of intelligence that institutional embedding requires.

    What Palantir built was not a software platform with an unusual go-to-market. It built an institutional learning machine that happens to produce software as its primary output. The software improves because the learning compounds. The differentiator isn’t the platform; it is the institutional understanding encoded in the platform, and every new deployment makes that understanding deeper.

    None of this is to say the model is easy to replicate or without genuine costs.

    Forward deployment at Palantir’s standard requires a talent profile that is hard to find and hard to train: engineers who can write production-grade code and navigate institutional politics simultaneously, and domain experts who understand both the mission and the messy data architecture that serves it.

    Lessons for SaaS and Consulting in Enterprise AI

    For SaaS product companies, the lesson isn’t just “embed your engineers.” The lesson is that product discovery cannot be safely delegated just to customer interviews, usage analytics, and quarterly business reviews. Those tools are adequate for understanding a market from a comfortable distance. They are not adequate for understanding how work actually gets done inside a complex institution — the decisions that happen outside any documented workflow, the data that never makes it into the system of record, the workaround that has been load-bearing for years without anyone acknowledging it.

    For founders building enterprise AI products, the practical version of this is the Bootcamp — and Palantir’s execution of it beginning in 2023 is worth studying carefully. One to five days. Working on your data. Your actual operational problem. A functioning capability at the end, not a slide deck with next steps. U.S. commercial revenue grew 137% year-over-year by Q4 2025. The fastest way to overcome an institution’s uncertainty about whether AI can work for them is to show them a piece of it already working inside their own environment. You are not selling a platform. You are selling a glimpse of their own operational reality, improved. Skepticism dissolves faster than any roadmap could dissolve it.

    The second lesson is about sustainability, and it also applies to SaaS product companies. From the beginning, Palantir structured its engagements to end with a customer who no longer needs Palantir to operate the platform. Palantir built that self-sufficiency in as the growth mechanism. Customers who own their platform build more on it. Customers who depend on vendor engineers remain cautious about expanding scope, because every expansion means another engagement they can't control. Self-sufficiency is the condition that makes the relationship valuable enough to deepen. Customers invest more per year because they have learned how to operate the platform, not because Palantir continues to operate it for them.

    The third lesson applies to consulting firms. The FDE model is reshaping the consulting industry into distinct categories. The market is splitting into three layers:

    1. Strategy consultancies (such as McKinsey, BCG, Bain, Roland Berger, Oliver Wyman) doing high-level transformation architecture, operating model redesign. The deck that frames the problem before the technology conversation begins. I believe this layer is not going away. It is, however, becoming progressively decoupled from the implementation work that follows it, because the gap between a transformation roadmap and a functioning production system is widening faster than these firms are moving to close it;
    2. Industrial-scale integrators (such as Accenture, Deloitte, IBM, Capgemini) operating as primary delivery partners for enterprise AI platforms. These firms will own the middle of the market, the deployments that are large enough to need coordinated delivery but standardised enough not to require genuine institutional embedding;
    3. And a third category that currently has no clean name — forward-deployed engineering teams that wire AI into live systems, govern it in production, and remain accountable for what happens six months after the platform vendor has moved on. It does not yet have a standard business model, a recognised category name, or a talent pipeline that trains people for it deliberately.

    The third category is going to become, in my opinion, the most valuable layer of the three, because it is the only one willing to operate inside the institutional complexity that neither the strategists nor the integrators are prepared to enter. Most boutique consulting firms are currently sitting in the first or second bucket by default.

    To conclude: what the Palantir story tells us is that growth is not directly tied to the platform’s value. SaaS product growth happens when an institution discovers that a piece of technology understands its specific operational reality — not in the abstract way that a platform demo understands it, but in the way that only comes from someone sitting inside the mess for months, learning the undocumented APIs, mapping the workflows that don't appear in any org chart, staying until the gravel road becomes passable.

    Most enterprise AI is currently being sold as a destination: buy the platform, complete the implementation, arrive at transformation. But transformation is not a destination; it is a process of continuous institutional learning.

    Key Takeaways

    1. Product Success Requires Immersion in Operational Reality

    • Enterprise software often fails because product teams rely on interviews, usage data, or quarterly reviews, rather than understanding day-to-day workflows inside the institution.
    • Palantir’s solution: Forward-Deployed Engineers (FDEs) operate inside the customer environment, building under real constraints rather than imagined ones.
    • Software becomes effective when it reflects ground truth, not assumptions or polished demos.

    2. Field-Driven Productisation Compounds Advantage

    • Initial deployments are tactical, client-specific, and often messy (“quick fixes on unstable pipelines”).
    • Core engineering observes patterns across clients to abstract reusable primitives.
    • Every deployment acts as R&D, reducing future customer deployment costs and improving the platform.
    • Palantir converts operational failures into platform infrastructure, creating compounding advantage.

    3. Team Structure Matters: Delta + Echo

    • Delta (Engineer): writes production-grade code, navigates broken data, implements solutions.
    • Echo (Strategist): understands institutional politics, workflow realities, and adoption barriers.
    • The tension between the two ensures both operational functionality and strategic alignment.

    4. Talent Selection is Critical

    • Success is not just about hiring experienced enterprise consultants; it’s about contrarian, outcome-focused people willing to endure operational friction.
    • Palantir hires engineers and operators who can confront messy realities and drive real transformation, not just maintain client comfort or political correctness.
    • The meritocracy of outcomes selects for the type of intelligence capable of navigating institutional complexity.

    5. Transformation is a Process, Not a Destination

    • Software must be embedded in institutional learning, not treated as a one-off implementation.
    • Institutional complexity is not a problem to bypass; it is the environment in which the product must function.
    • Palantir’s approach turns product delivery into continuous operational learning, where each engagement deepens institutional insight.

    6. Lessons for SaaS and Consulting Firms

    • SaaS: Product discovery cannot rely solely on distant analytics; short, hands-on engagements (“Bootcamps”) reveal real operational impact.
    • Sustainability: Design hand-off and internal capability development into every engagement. Customers who can operate independently expand platform adoption more aggressively.
    • Consulting Industry: A new category of “forward-deployed engineering teams” is emerging, bridging gaps between strategy consultancies and large integrators by embedding in complex institutional systems.

    7. Strategic Implication

    • Firms that succeed in enterprise AI are those willing to operate inside institutional complexity, show immediate proof of value, and build customer capability, rather than just delivering slides or managing relationships.
    • Value is created through learning inside the client’s reality, not by selling a destination or platform alone.

    Real-World AI Coding Agent Exercise

    Mike's Notes

    An open-source version of Virtuoso will be installed in the data centre to explore possibilities. Kingsley Uyi Idehen's articles are all fascinating. I first came across him via the Ontology Forum.

    The original post on LinkedIn has more links.

    Resources

    References

    • Reference

    Repository

    • Home > Ajabbi Research > Library >
    • Home > Handbook > 

    Last Updated

    08/04/2026

    Real-World AI Coding Agent Exercise

    By: Kingsley Uyi Idehen
    LinkedIn: 07/02/2026

    Founder & CEO at OpenLink Software | Driving GenAI-Based AI Agents | Harmonising Disparate Data Spaces (Databases, Knowledge Bases/Graphs, and File System Documents).

    This post explains how I used Claude Code (Pro level, powered by Opus 4.5) and Mistral Vibe (whenever Claude Code rate limits kicked in) to modernize the aesthetics of our uniquely powerful faceted search and browsing interface for knowledge graph exploration—essentially giving the UI around the core engine a facelift.

    At OpenLink Software, we strongly believe that LLM-powered AI Agents are exactly the right tools for tackling this long-standing challenge—provided the RDF-based knowledge graph runs on a platform that isn’t constrained by dataset size. In other words, one that scales naturally in linear fashion—such as our Virtuoso multi-model platform for managing data spaces spanning databases, knowledge bases, filesystems, and APIs.

    Situation Analysis

    As with many aspects of RDF (Resource Description Framework), challenges in tooling creation often stem from general misunderstandings about the framework itself. RDF-based knowledge graph representation is one such area, and linear visualization is another.

    In the case of linear visualization, the goal is to present the description of an entity of interest (the subject) along with its associated attributes—i.e., predicate–object pairings that express attribute names and values.

    Compounding the difficulty is the fact that, although faceted search UI/UX patterns have long served as the conceptual foundation, implementing them at the scale typical of RDF-based knowledge graphs remains extremely challenging. This challenge is further amplified by the complexity of delivering such interfaces in HTML leveraging CSS and JavaScript.

    Virtuoso's Faceted Search & Browsing System Workings

    Fundamentally, this system allows you to perform text search, attribute-name lookup, or entity-identifier–based exploration across one or more knowledge graphs hosted in a Virtuoso instance. It provides a property-sheet–style interface that presents entities (subjects) alongside their associated attributes (relationship predicates) and values (objects).

    Thanks to Linked Data principles, hyperlink-based denotation of entities, attributes, and values (optionally) creates a Web of data. This enables the same “click and explore” experience—also known as the follow-your-nose exploration pattern—that users enjoy when interacting with web pages through a browser.
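The property-sheet presentation described above — one subject with its predicate–object pairings collected as attribute rows — can be illustrated with a minimal sketch. This is not Virtuoso's implementation; the function name and the example triples are mine, for illustration only.

```python
# Illustrative sketch (not Virtuoso's implementation): group RDF triples into
# the property-sheet shape described above — one subject, with its
# predicate-object pairs collected as attribute name/value rows.

from collections import defaultdict


def property_sheet(triples, subject):
    """Return {predicate: [objects]} for one subject: a linear view of its description."""
    sheet = defaultdict(list)
    for s, p, o in triples:
        if s == subject:
            sheet[p].append(o)
    return dict(sheet)


# A tiny, hypothetical triple set using abbreviated CURIEs:
triples = [
    ("ex:Virtuoso", "rdf:type", "schema:SoftwareApplication"),
    ("ex:Virtuoso", "schema:name", "Virtuoso"),
    ("ex:OpenLink", "schema:name", "OpenLink Software"),
]
sheet = property_sheet(triples, "ex:Virtuoso")
```

In a Linked Data UI, each object value in the sheet that is itself an entity identifier becomes a hyperlink, which is what enables the follow-your-nose exploration pattern the paragraph above describes.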

    Old System

    Here are screenshots depicting the UI/UX for a simple search sequence along the following lines:

    • Text Search Input: Virtuoso.

    • Initial matches presented in a list, sorted by text score and entity rank (think: page rank for data).

    • Add filters by attributes such as "type" (rdf:type) and "name" (schema:name) where the name value is "Virtuoso".

    • Click on an item in the results to obtain a description of Virtuoso via the entity-description property-sheet page.

    New System

    Original resource: New Virtuoso Faceted Browser Showcase Screencast

    Here's the same demonstration sequence, but experienced via the revamped UI/UX.

    • Text Search Input: Virtuoso.

    • Initial matches presented in a list, sorted by text score and entity rank (think: page rank for data).

    • Add filters by attributes such as "type" (rdf:type) and "name" (schema:name) where the name value is "Virtuoso".

    • Click on an item in the results to obtain a description of Virtuoso via the entity-description property-sheet page.

    Additional Notable Faceted Search & Browsing Features

    These include the following:

    1. Handling the fact that many attributes (thousands or more) can be associated with an entity, sourced from a massive collection of source knowledge graphs, during the filtering stage, via a sticky scrollable paging control
    2. Handling the fact that a selected entity description can also comprise many attributes (thousands or more), again via a sticky scrollable paging control
    3. A spreadsheet-like table (with resizable, moveable, and sortable columns) for handling query results from filtering or when presenting entity descriptions
    4. Ability to export the description of an entity in a variety of formats (JSON-LD, RDF-Turtle, RDF/XML, N-Triples, RDF/JSON, CSV, etc.)
    5. Permalinking for sharing interaction state, e.g., a filter page or entity description page
    6. Ability to reveal the underlying SPARQL query that drives filtering
    7. Metadata identifying the source named graph(s) from which attributes and values have been sourced for an entity description, by way of the entity’s (subject) or value’s (object) role in the source graph triples
    8. Metadata that automatically identifies coreferences, whether explicit (via owl:sameAs attribute values) or implied by uniquely identifying, i.e., inverse-functional, attributes (e.g., email addresses, or any other attribute with the inverse-functional designation in a loaded ontology)
    9. Settings for enabling or disabling reasoning and inference informed by built-in or custom inference rules
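Feature 6 in the list above mentions revealing the underlying SPARQL query that drives filtering. As a rough illustration of what a filter sequence like "type + name" can reduce to, here is a query builder in plain SPARQL 1.1. Virtuoso's actually generated queries differ (it has its own full-text search and entity-ranking machinery); the function name and query shape are my assumptions.

```python
# Hypothetical sketch: the kind of SPARQL 1.1 query a "filter by type and
# name" interaction could reduce to. Virtuoso's generated SPARQL differs;
# this is plain, standard SPARQL for illustration only.


def filter_query(text: str, rdf_type: str) -> str:
    """Build a SPARQL query matching entities of a type whose name contains text."""
    return f"""
SELECT DISTINCT ?s ?name
WHERE {{
  ?s a <{rdf_type}> ;
     <http://schema.org/name> ?name .
  FILTER (CONTAINS(LCASE(STR(?name)), LCASE("{text}")))
}}
LIMIT 50
""".strip()


query = filter_query("Virtuoso", "http://schema.org/SoftwareApplication")
```

Exposing the generated query like this is what lets advanced users step from the faceted UI down to the raw SPARQL layer when they need more control.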

    The Refactoring Process

    I achieved this very difficult refactoring task, alongside my other daily duties, by prompting Claude Code and Mistral. Claude Code (using the Opus 4.5 model) handled most of the heavy refactoring and planning work through the initial working ports.

    I brought Mistral on board once rate limits kicked in, which became an important part of the experiment—namely, determining what’s possible with the Pro edition of Claude Code.

    That said, Mistral also impressed me to the point where I see it as the closest rival to Claude Code in the battle for coding agent market dominance. Its CLI aesthetics and assistance mode are top-notch.

    Live Demonstration Instances

    URIBurner

    This Virtuoso instance has been live since 2008, functioning as a Linked Data utility showcase and a bridge to the massive Linked Open Data Cloud Knowledge Graph collective.

    1. Text Search: Virtuoso
    2. Entity Types associated with text pattern: Virtuoso
    3. Attribute Filtering on Type, Name, and Value: Virtuoso Universal Server (Row Store & Cluster Server Edition)
    4. Selected Entity Description Page

    Conclusion

    Software development is evolving before our eyes. True power now comes from pairing capable AI Agents with human expertise—letting judgment guide automation, producing dependable outcomes, and delivering real-world value. The age of AI isn’t just about smarter tools; it’s about amplifying what humans do best.

    Pipi naming pattern

    Mike's Notes

    It was helpful talking over automation with Alex a few nights ago.

    There are hundreds of thousands of files to be put somewhere in the new data centre using automation.

    Resources

    References

    • Reference

    Repository

    • Home > Ajabbi Research > Library >
    • Home > Handbook > 

    Last Updated

    11/04/2026

    Pipi naming pattern

    By: Mike Peters
    On a Sandy Beach: 07/04/2026

    Mike is the inventor and architect of Pipi and the founder of Ajabbi.

    This is the new naming pattern for every version, edition, and account type of Pipi since 1997. These names are ASCII lowercase for compatibility with Windows, Linux, Solaris, macOS, etc. This scheme is now named the Pipi Nest.

    It is highly deterministic, enabling safe, reliable automation.

    Assumptions

    Pipi versions 1-8 had no editions or account types, so these default names will be used.

    • Edition = "p" (Pipi)
    • Account Type = "g" (General)

    Archive

    All the existing files, including research material, experiments, sample code, web crawls and notes, will be moved to these locations. The archive format is a ZIP file with a date string appended. An example would be:

    • pipi_7pg_20201230.zip

    Production

    The Pipi Nest name also serves as the root deployment directory. An example would be:

    • pipi/9ae/


    Major Version | Edition | Account Type | Pipi Nest
    1             | p       | g            | 1pg
    2             | p       | g            | 2pg
    3             | p       | g            | 3pg
    4             | p       | g            | 4pg
    5             | p       | g            | 5pg
    6             | p       | g            | 6pg
    7             | p       | g            | 7pg
    8             | p       | g            | 8pg
    9             | a       | a            | 9aa
    9             | a       | d            | 9ad
    9             | a       | e            | 9ae
    9             | a       | p            | 9ap
    9             | a       | r            | 9ar
    9             | a       | s            | 9as
    9             | a       | t            | 9at
    9             | c       | c            | 9cc
    9             | i       | i            | 9ii
    9             | r       | b            | 9rb
    10            | a       | a            | 10aa
    10            | a       | d            | 10ad
    10            | a       | e            | 10ae
    10            | a       | p            | 10ap
    10            | a       | r            | 10ar
    10            | a       | s            | 10as
    10            | a       | t            | 10at
    10            | c       | c            | 10cc
    10            | i       | i            | 10ii
    10            | r       | b            | 10rb
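The determinism the pattern claims is easy to demonstrate in code: every Nest name, archive filename, and production path is a pure function of major version, edition letter, and account-type letter. The function names below are mine, not part of Pipi.

```python
# Sketch of the Pipi Nest naming pattern as code (function names are
# illustrative, not part of Pipi itself). A Nest name is the major version
# followed by the edition letter and the account-type letter; archives
# append a date string; production paths nest under pipi/.


def nest_name(major: int, edition: str, account_type: str) -> str:
    """e.g. nest_name(9, "a", "e") -> "9ae" """
    return f"{major}{edition}{account_type}"


def archive_name(major: int, edition: str, account_type: str, yyyymmdd: str) -> str:
    """e.g. archive_name(7, "p", "g", "20201230") -> "pipi_7pg_20201230.zip" """
    return f"pipi_{nest_name(major, edition, account_type)}_{yyyymmdd}.zip"


def production_path(major: int, edition: str, account_type: str) -> str:
    """e.g. production_path(9, "a", "e") -> "pipi/9ae/" """
    return f"pipi/{nest_name(major, edition, account_type)}/"
```

Because the mapping is deterministic and purely lexical, automation can both generate a destination for any file and parse an existing name back into its parts without a lookup table, which is what makes bulk migration of hundreds of thousands of files safe.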