A Comprehensive Analysis of Palantir’s Forward Deployed Engineering Model

Mike's Notes

A fascinating article by Diogo Santo, about Palantir, with many lessons for implementing enterprise systems.

I figured out a very long time ago that large system complexity had to be embraced, not minimised, and proceeded with building Pipi on that basis.

As it turns out, Pipi and Palantir have some similarities due to convergent evolution. For example, Pipi is also ontology-driven and a multi-industry platform.

I agree 100% with what Diogo writes.

"Enterprise software fails because software vendors refuse to become students of the institutions they're trying to change

The FDE model is not a service delivery strategy that happens to look like product development. It is a product development strategy that looks like services from the outside.

The real insight is that institutional complexity is not a problem to be minimized. It is the environment that the product has to live in, and the only way to build something that functions in that environment is to understand it from the inside. The gravel road to paved highway is not about customization, it is about ground truth. The Echo/Delta team formation is not about coverage, it’s about complementary perspectives on action and reality. The meritocracy of outcomes is not a culture value, it is a selection mechanism for the specific kind of intelligence that institutional embedding requires." - Diogo Santo

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Vertical AI Advantage
  • Home > Handbook > 

Last Updated

09/04/2026

A Comprehensive Analysis of Palantir’s Forward Deployed Engineering Model

By: Diogo Santo
Vertical AI Advantage: 07/04/2026

Helping senior consultants build specialized consulting practices that outcompete the big firms | Author | Senior Director of Data & AI @ Fujitsu.

Most enterprise AI is still trying to solve from the outside what Palantir figured out can only be solved from within.

Note to the Reader:

If this article feels extensive or you’re short on time, you can skip to the Key Takeaways at the end for a concise summary.

At its core, this piece explores how Palantir transformed enterprise software delivery by embedding engineers directly inside complex institutions. It shows why traditional product discovery often fails, how team structure and talent selection drive real impact, and what SaaS companies or consulting firms can learn about building solutions that actually work in the messy realities of organizations. The lessons here are about turning field experience into product insight, creating sustainable transformation, and bridging the gap between technical capability and operational reality.

In September 2001, the U.S. intelligence community had more data than it had ever collected in its history. It had analysts who were extraordinarily skilled at interpreting fragments of information. What it did not have was the ability to make those two things talk to each other. The CIA had its systems. The NSA had its systems. The FBI had its systems. None of them could see what the others were seeing.

Palantir was founded, in part, to solve that specific problem. And the solution they arrived at was not a better algorithm or a cleaner data model. It was a kind of person. An engineer who would go inside, stay for months or years, and not leave until the institution’s data reflected its operational reality.

The rest of the software industry looked at this and called it services. Palantir looked at it and called it product discovery.

That gap in interpretation is still producing winners and losers twenty years later.

The reality about enterprise software is that most of it does not actually get used.

Enterprise software is not used in the scenarios the product managers imagined. Somewhere between the demo environment and the production floor, between the quarterly business review and the analyst’s actual morning, the software encounters the institution, an organization with its own dynamics — and the institution wins.

This is the problem everyone in enterprise software knows. We all know, too, that the current model of software delivery is structurally broken.

The conventional delivery model works roughly like this: a product team defines requirements through interviews and usage data, builds a product in a controlled environment, and deploys it to a customer who is then responsible for adoption. The product team is intelligent and well-intentioned. The customer is usually paying significant money and is sincerely motivated to adopt. And yet the gap between capability and actual use remains wide, across every sector, every company size, every level of leadership commitment.

The surface explanation is change management. The real explanation is deeper than that.

The product team, building from the outside, has only an approximate model of its customer. They know what the customer says it does. They know what the customer believes it does. And although all of that might be true, operational reality tends to drift from that picture, because of existing culture, legacy systems, undocumented errors and code, misaligned incentives, among other things.

Palantir’s founders understood this because their first customer, the CIA, made the conventional discovery process not just impractical but structurally impossible. Analysts couldn’t describe their workflows to an external vendor. They couldn’t share their data. Requirements couldn’t be fixed because threats evolved daily. And none of this could be managed by signing an NDA.

How could they build an impactful solution and deliver real outcomes in such conditions?

To solve this, Palantir built a model that placed the engineer inside the illegibility. Not to study it from a safe distance, but to operate from within. To build under actual constraints rather than imagined ones. To treat operational complexity not as a blocker to be engineered around, but as the environment in which the product had to live.

By 2016, Palantir employed more forward-deployed engineers than traditional software engineers. That ratio was not just strategy, but a profound realization that without truly understanding the day-to-day operational complexities of its customers, the success of its product could not be guaranteed.

The System Behind Palantir’s Scalable Deployment Engine

The first structural insight was Field-Driven Productization. Forward-deployed engineers built rough, tactical, client-specific solutions. Quick fixes on unstable pipelines. Workarounds for undocumented APIs. Hacks for data schemas that were not fully mapped. The priority was not architectural elegance. The priority was that the analyst (their customer) could do her job today, under the actual conditions of her actual job.

Meanwhile, the core engineering team was looking for patterns. When entity resolution appeared as a problem at one government agency and then at a pharmaceutical company and then at a financial institution, in different forms, with different surface characteristics, but with the same structural core, it got abstracted into a reusable primitive and pulled upstream into the platform.

The field work existed to generate the product, not just to generate revenue.

That distinction matters more than it might appear. A consulting firm builds something once for one client and bills for the hours. Palantir builds something once for one client, watches it fail in interesting ways, and turns the failure into platform infrastructure. Every engagement was, functionally, an R&D investment that paid in operational insight. The deployment cost per customer declined as the platform matured. The advantage compounded.

The second structural insight was the team formation. Palantir didn’t send just a single engineer into a client environment. It sent two distinct profiles:

  • The Delta — the Forward Deployed Engineer — writes production-grade code. Data pipelines. Ontology modeling. AI agent design. They pass the same technical interview as Palantir’s core product architects and engineers. They are not solution consultants. They are engineers who happen to be working inside a customer instead of a corporate campus. They have the profile of a scrappy startup CTO: technically deep, comfortable with ambiguity, able to navigate a broken data environment and produce something that functions. A Delta might spend three months designing a pipeline that routes unknown fields to a dataset and alerts on contract violations, just because they’ve spent enough time inside organizations to understand what happens when this isn’t there (a sketch of the pattern follows this list). They understand that foundational work is key to making downstream work function.
  • The Echo — the Deployment Strategist — is usually not a software engineer. They are former military officers. Former clinicians. Former forensic accountants. People with specific domain knowledge. People who understand how institutions actually work. Which departments carry unspoken adversarial relationships, which data is politically untouchable, which workflow has been broken for a decade and quietly worked around by everyone who knows better than to escalate it. The Echo translates mission reality into technical requirements. They own the relationship. They own adoption. They own the long-term durability of what the Delta has built. When the Delta’s pipeline is technically complete, the Echo is the one who understands why three departments won’t use it and what change management it will take to make them start.
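
The defensive pattern in the Delta example above is concrete enough to sketch. A minimal Python illustration, with hypothetical names throughout: the field lists, the quarantine dataset, and the alert hook are all invented for this sketch, not Palantir APIs.

import json

# Hypothetical contract: these field sets are illustrative, not from any real schema.
KNOWN_FIELDS = {"id", "timestamp", "amount", "counterparty"}
REQUIRED_FIELDS = {"id", "timestamp"}

def process(records, sink, quarantine, alert):
    """Route unknown fields to a quarantine dataset; alert on contract violations."""
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            # Contract violation: alert instead of silently corrupting downstream tables.
            alert(f"contract violation: missing {sorted(missing)} in record {rec.get('id')}")
            quarantine.append(rec)
            continue
        unknown = rec.keys() - KNOWN_FIELDS
        if unknown:
            # Keep the known part flowing; park the rest for a human to map later.
            quarantine.append({k: rec[k] for k in unknown} | {"id": rec["id"]})
        sink.append({k: v for k, v in rec.items() if k in KNOWN_FIELDS})

sink, quarantine = [], []
process(
    [{"id": 1, "timestamp": "2026-04-01T00:00:00Z", "amount": 10, "vendor_ref": "x"}],
    sink, quarantine, alert=print,
)
print(json.dumps({"sink": sink, "quarantine": quarantine}, indent=2))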

The tension between these two profiles is the point. A Delta left alone builds something technically correct and operationally irrelevant. An Echo left alone generates beautifully aligned strategy with nothing tangible or concrete. This team of two is designed so that both pressures are always present, always competing, always correcting each other.

The third structural insight is about what kind of person makes this model work at all. Palantir got particularly interested in free thinkers and independents motivated by the problem rather than the org chart. The willingness to eat pain — to stay inside a broken institution long enough to actually understand it — is not a competency that traditional consulting careers develop or reward. The meritocracy is built around outcomes, not credentials, and that selection effect ripples through everything the company builds.

You cannot hire your way into this model with standard enterprise talent. The profile that makes it work is specific, somewhat contrarian, and deeply uncomfortable in environments that measure outputs rather than outcomes. Most companies that have tried to replicate the FDE approach fail not at the structural level, but at the hiring level. They send consultants who are limited in how far they can challenge the status quo—people who prioritize keeping the client comfortable, echoing what the customer wants rather than addressing what will actually move the needle. Palantir, by contrast, hires engineers and operators who are pragmatic, willing to confront messy realities, and focused on delivering real transformation for clients who are ready to change, not just preserving political niceties or operating within a scripted theater.

Enterprise software fails because software vendors refuse to become students of the institutions they're trying to change

The FDE model is not a service delivery strategy that happens to look like product development. It is a product development strategy that looks like services from the outside.

The real insight is that institutional complexity is not a problem to be minimized. It is the environment that the product has to live in, and the only way to build something that functions in that environment is to understand it from the inside. The gravel road to paved highway is not about customization, it is about ground truth. The Echo/Delta team formation is not about coverage, it’s about complementary perspectives on action and reality. The meritocracy of outcomes is not a culture value, it is a selection mechanism for the specific kind of intelligence that institutional embedding requires.

What Palantir built was not a software platform with an unusual go-to-market. It built an institutional learning machine that happens to produce software as its primary output. The software improves because the learning compounds. The differentiator isn’t the platform; it is the institutional understanding encoded in the platform, and every new deployment makes it deeper.

None of this is to say the model is easy to replicate or without genuine costs.

Forward deployment at Palantir’s standard requires a talent profile that is hard to find and hard to train: engineers who can write production-grade code and navigate institutional politics simultaneously, and domain experts who understand both the mission and the messy data architecture that serves it.

Lessons for SaaS and Consulting in Enterprise AI

For SaaS product companies, the lesson isn’t just “embed your engineers.” The lesson is that product discovery cannot be safely delegated just to customer interviews, usage analytics, and quarterly business reviews. Those tools are adequate for understanding a market from a comfortable distance. They are not adequate for understanding how work actually gets done inside a complex institution — the decisions that happen outside any documented workflow, the data that never makes it into the system of record, the workaround that has been load-bearing for years without anyone acknowledging it.

For founders building enterprise AI products, the practical version of this is the Bootcamp — and Palantir’s execution of it beginning in 2023 is worth studying carefully. One to five days. Working on your data. Your actual operational problem. A functioning capability at the end, not a slide deck with next steps. U.S. commercial revenue grew 137% year-over-year by Q4 2025. The fastest way to overcome an institution’s uncertainty about whether AI can work for them is to show them a piece of it already working inside their own environment. You are not selling a platform. You are selling a glimpse of their own operational reality, improved. Skepticism dissolves faster than any roadmap could dissolve it.

The second lesson is about sustainability, and it also applies to SaaS product companies. From the beginning, Palantir structured its engagements to end with a customer who no longer needs Palantir to operate the platform. Palantir built that self-sufficiency in as the growth mechanism. Customers who own their platform build more on it. Customers who depend on vendor engineers remain cautious about expanding scope, because every expansion means another engagement they can't control. Self-sufficiency is the condition that makes the relationship valuable enough to deepen. Customers invest more per year because they learn how to operate the platform, not because Palantir continues to operate it for them.

The third lesson applies to consulting firms. Enterprise AI is reshaping the consulting industry into distinct categories; the market is splitting into three layers:

  1. Strategy consultancies (such as McKinsey, BCG, Bain, Roland Berger, Oliver Wyman) doing high-level transformation architecture, operating model redesign. The deck that frames the problem before the technology conversation begins. I believe this layer is not going away. It is, however, becoming progressively decoupled from the implementation work that follows it, because the gap between a transformation roadmap and a functioning production system is widening faster than these firms are moving to close it;
  2. Industrial-scale integrators (such as Accenture, Deloitte, IBM, Capgemini) operating as primary delivery partners for enterprise AI platforms. These firms will own the middle of the market, the deployments that are large enough to need coordinated delivery but standardised enough not to require genuine institutional embedding;
  3. And a third category that currently has no clean name — forward-deployed engineering teams that wire AI into live systems, govern it in production, and remain accountable for what happens six months after the platform vendor has moved on. It does not yet have a standard business model, a recognised category name, or a talent pipeline that trains people for it deliberately.

The third category is going to become, in my opinion, the most valuable layer of the three, because it is the only one willing to operate inside the institutional complexity that neither the strategists nor the integrators are prepared to enter. Most boutique consulting firms are currently sitting in the first or second bucket by default.

To conclude: what the Palantir story tells us is that growth is not directly tied to a platform’s value. SaaS product growth happens when an institution discovers that a piece of technology understands its specific operational reality — not in the abstract way that a platform demo understands it, but in the way that only comes from someone sitting inside the mess for months, learning the undocumented APIs, mapping the workflows that don't appear in any org chart, staying until the gravel road becomes passable.

Most enterprise AI is currently being sold as a destination. Buy the platform, complete the implementation, arrive at transformation. But transformation is a process of continuous institutional learning.

Key Takeaways

1. Product Success Requires Immersion in Operational Reality

  • Enterprise software often fails because product teams rely on interviews, usage data, or quarterly reviews, rather than understanding day-to-day workflows inside the institution.
  • Palantir’s solution: Forward-Deployed Engineers (FDEs) operate inside the customer environment, building under real constraints rather than imagined ones.
  • Software becomes effective when it reflects ground truth, not assumptions or polished demos.

2. Field-Driven Productisation Compounds Advantage

  • Initial deployments are tactical, client-specific, and often messy (“quick fixes on unstable pipelines”).
  • Core engineering observes patterns across clients to abstract reusable primitives.
  • Every deployment acts as R&D, reducing future customer deployment costs and improving the platform.
  • Palantir converts operational failures into platform infrastructure, creating compounding advantage.

3. Team Structure Matters: Delta + Echo

  • Delta (Engineer): writes production-grade code, navigates broken data, implements solutions.
  • Echo (Strategist): understands institutional politics, workflow realities, and adoption barriers.
  • The tension between the two ensures both operational functionality and strategic alignment.

4. Talent Selection is Critical

  • Success is not just about hiring experienced enterprise consultants; it’s about contrarian, outcome-focused people willing to endure operational friction.
  • Palantir hires engineers and operators who can confront messy realities and drive real transformation, not just maintain client comfort or political correctness.
  • The meritocracy of outcomes selects for the type of intelligence capable of navigating institutional complexity.

5. Transformation is a Process, Not a Destination

  • Software must be embedded in institutional learning, not treated as a one-off implementation.
  • Institutional complexity is not a problem to bypass; it is the environment in which the product must function.
  • Palantir’s approach turns product delivery into continuous operational learning, where each engagement deepens institutional insight.

6. Lessons for SaaS and Consulting Firms

  • SaaS: Product discovery cannot rely solely on distant analytics; short, hands-on engagements (“Bootcamps”) reveal real operational impact.
  • Sustainability: Design hand-off and internal capability development into every engagement. Customers who can operate independently expand platform adoption and partnerships more aggressively.
  • Consulting Industry: A new category of “forward-deployed engineering teams” is emerging, bridging gaps between strategy consultancies and large integrators by embedding in complex institutional systems.

7. Strategic Implication

  • Firms that succeed in enterprise AI are those willing to operate inside institutional complexity, show immediate proof of value, and build customer capability, rather than just delivering slides or managing relationships.
  • Value is created through learning inside the client’s reality, not by selling a destination or platform alone.

Realworld AI Coding Agent Exercise

Mike's Notes

An open-source version of Virtuoso will be installed in the data centre to explore possibilities. Kingsley Uyi Idehen's articles are all fascinating. I first came across him via the Ontology Forum.

The original post on LinkedIn has more links.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

08/04/2026

Realworld AI Coding Agent Exercise

By: Kingsley Uyi Idehen
LinkedIn: 07/02/2026

Founder & CEO at OpenLink Software | Driving GenAI-Based AI Agents | Harmonising Disparate Data Spaces (Databases, Knowledge Bases/Graphs, and File System Documents).

This post explains how I used Claude Code (Pro level, powered by Opus 4.5) and Mistral Vibe (whenever Claude Code rate limits kicked in) to modernize the aesthetics of our uniquely powerful faceted search and browsing interface for knowledge graph exploration—essentially giving the UI around the core engine a facelift.

At OpenLink Software, we strongly believe that LLM-powered AI Agents are exactly the right tools for tackling this long-standing challenge—provided the RDF-based knowledge graph runs on a platform that isn’t constrained by dataset size. In other words, one that scales naturally in linear fashion—such as our Virtuoso multi-model platform for managing data spaces spanning databases, knowledge bases, filesystems, and APIs.

Situation Analysis

As with many aspects of RDF (Resource Description Framework), challenges in tooling creation often stem from general misunderstandings about the framework itself. RDF-based knowledge graph representation is one such area, and linear visualization is another.

In the case of linear visualization, the goal is to present the description of an entity of interest (the subject) along with its associated attributes—i.e., predicate–object pairings that express attribute names and values.

Compounding the difficulty is the fact that, although faceted search UI/UX patterns have long served as the conceptual foundation, implementing them at the scale typical of RDF-based knowledge graphs remains extremely challenging. This challenge is further amplified by the complexity of delivering such interfaces in HTML leveraging CSS and JavaScript.

How Virtuoso's Faceted Search & Browsing System Works

Fundamentally, this system allows you to perform text search, attribute-name lookup, or entity-identifier–based exploration across one or more knowledge graphs hosted in a Virtuoso instance. It provides a property-sheet–style interface that presents entities (subjects) alongside their associated attributes (relationship predicates) and values (objects).

Thanks to Linked Data principles, hyperlink-based denotation of entities, attributes, and values (optionally) creates a Web of data. This enables the same “click and explore” experience—also known as the follow-your-nose exploration pattern—that users enjoy when interacting with web pages through a browser.
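
For readers who want to try this directly: the property-sheet description of an entity is exactly what a SPARQL DESCRIBE query returns, and any SPARQL-protocol endpoint can serve it over HTTP. A minimal Python sketch follows; the endpoint URL and entity IRI are illustrative placeholders, not values taken from this post.

import urllib.parse
import urllib.request

# Illustrative placeholders, not from the post.
ENDPOINT = "https://example.org/sparql"
ENTITY = "http://dbpedia.org/resource/Virtuoso_Universal_Server"

# DESCRIBE returns the entity's predicate-object pairings, i.e., the raw
# material behind a property-sheet-style entity description page.
query = f"DESCRIBE <{ENTITY}>"
url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query})
req = urllib.request.Request(url, headers={"Accept": "text/turtle"})

with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))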

Old System

Here are screenshots depicting the UI/UX for a simple search sequence along the following lines:

  • Text Search Input: Virtuoso.

  • Initial matches presented in a list, sorted by text score and entity rank (think: page rank for data).

  • Add filters by attributes such as "type" (rdf:type) and "name" (schema:name) where the name value is "Virtuoso".

  • Click on an item in the results to obtain a description of Virtuoso via the property-sheet-based entity description page.

New System

Original resource: New Virtuoso Faceted Browser Showcase Screencast

Here's the same demonstration sequence, but experienced via the revamped UI/UX.

  • Text Search Input: Virtuoso.

  • Initial matches presented in a list, sorted by text score and entity rank (think: page rank for data).

  • Add filters by attributes such as "type" (rdf:type) and "name" (schema:name) where the name value is "Virtuoso".

  • Click on an item in the results to obtain a description of Virtuoso via the property-sheet-based entity description page.

Additional Notable Faceted Search & Browsing Features

These include the following:

  1. Handling the fact that lots of attributes (thousands or more) can be associated with an entity sourced from a massive collection of source knowledge graphs during the filtering stage, via a sticky scrollable paging control
  2. Handling the fact that a selected entity description can also comprise lots of attributes (thousands or more), via a sticky scrollable paging control
  3. Spreadsheet-like table (with resizable, movable, and sortable columns) for handling query results from filtering or when presenting entity descriptions
  4. Ability to export the description of an entity in a variety of formats (JSON-LD, RDF-Turtle, RDF/XML, N-Triples, RDF/JSON, CSV, etc.) (see the sketch after this list)
  5. Permalinks for sharing interaction state, e.g., a filter page or an entity description page
  6. Ability to reveal the underlying SPARQL query that drives filtering
  7. Metadata identifying the source named graph(s) from which the attributes and values of an entity description have been sourced, by way of the entity (subject) or value (object) role in the source graph triples
  8. Metadata that automatically identifies explicit coreferences (via owl:sameAs attribute values) and implicit ones (via values of uniquely identifying, i.e., inverse-functional, attributes such as email addresses, or any other attribute with the inverse-functional designation in a loaded ontology)
  9. Settings for enabling or disabling reasoning and inference informed by built-in or custom inference rules
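
On feature 4, those export formats map naturally onto HTTP content negotiation as practiced in Linked Data: dereference the entity IRI with a different Accept header for each serialization. A minimal sketch, with an illustrative IRI; servers differ in which MIME types they honour.

import urllib.request

# Illustrative placeholder IRI, not from the post.
ENTITY = "http://dbpedia.org/resource/Virtuoso_Universal_Server"

FORMATS = {
    "JSON-LD": "application/ld+json",
    "RDF-Turtle": "text/turtle",
    "RDF/XML": "application/rdf+xml",
    "N-Triples": "application/n-triples",
}

for label, mime in FORMATS.items():
    req = urllib.request.Request(ENTITY, headers={"Accept": mime})
    # urllib follows the Linked Data 303 redirect to the data document.
    with urllib.request.urlopen(req) as resp:
        print(f"{label}: {resp.headers.get('Content-Type')}")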

The Refactoring Process

I achieved this very difficult refactoring task, alongside my other daily duties, by prompting Claude Code and Mistral. Claude Code (using the Opus 4.5 model) handled most of the heavy refactoring and planning work through the initial working ports.

I brought Mistral on board once rate limits kicked in, which became an important part of the experiment—namely, determining what’s possible with the Pro edition of Claude Code.

That said, Mistral also impressed me to the point where I see it as the closest rival to Claude Code in the battle for coding agent market dominance. Its CLI aesthetics and assistance mode are top-notch.

Live Demonstration Instances

URIBurner

This is a live Virtuoso instance, running since 2008, that functions as a Linked Data utility showcase and a bridge to the massive Linked Open Data Cloud knowledge graph collective.

  1. Text Search: Virtuoso
  2. Entity Types associated with text pattern: Virtuoso
  3. Attribute Filtering on Type, Name, and Value: Virtuoso Universal Server (Row Store & Cluster Server Edition)
  4. Selected Entity Description Page

Conclusion

Software development is evolving before our eyes. True power now comes from pairing capable AI Agents with human expertise—letting judgment guide automation, producing dependable outcomes, and delivering real-world value. The age of AI isn’t just about smarter tools; it’s about amplifying what humans do best.

Pipi naming pattern

Mike's Notes

It was helpful talking over automation with Alex a few nights ago.

There are hundreds of thousands of files to be put somewhere in the new data centre using automation.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

07/04/2026

Pipi naming pattern

By: Mike Peters
On a Sandy Beach: 07/04/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

This is the new naming pattern for every version, edition and account type of Pipi since 1997. These names are ASCII lowercase for compatibility with Windows, Linux, Solaris, MacOS, etc.

It is highly deterministic, enabling safe, reliable automation.

Assumptions

Pipi versions 1-8 had no editions or account types, so these default names will be used.

  • Edition = "p" (Pipi)
  • Account Type = "g" (General)

Archive

All the existing files, including research material, experiments, sample code, web crawls and notes, will be moved to these locations. The archive format is a ZIP file with a date string appended. An example would be:

  • pipi_7pg_20201230.zip

Production

The same pattern also names the root deployment directory. An example would be:

  • pipi/9ae/

Major Version   Edition   Account Type   Archive
1               p         g              1pg
2               p         g              2pg
3               p         g              3pg
4               p         g              4pg
5               p         g              5pg
6               p         g              6pg
7               p         g              7pg
8               p         g              8pg
9               a         a              9aa
9               a         d              9ad
9               a         e              9ae
9               a         p              9ap
9               a         r              9ar
9               a         s              9as
9               a         t              9at
9               c         c              9cc
9               i         i              9ii
9               r         b              9rb
10              a         a              10aa
10              a         d              10ad
10              a         e              10ae
10              a         p              10ap
10              a         r              10ar
10              a         s              10as
10              a         t              10at
10              c         c              10cc
10              i         i              10ii
10              r         b              10rb
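
A minimal Python sketch of the pattern, derived from the table and examples above; the function names are illustrative, not part of Pipi:

from datetime import date

def pipi_name(version: int, edition: str = "p", account_type: str = "g") -> str:
    """Compose the short name, e.g. 9ae. Defaults follow the version 1-8 assumption."""
    name = f"{version}{edition}{account_type}"
    assert name == name.lower() and name.isascii(), "names must be ASCII lowercase"
    return name

def archive_name(version: int, edition: str, account_type: str, d: date) -> str:
    """ZIP archive name with date string appended, e.g. pipi_7pg_20201230.zip."""
    return f"pipi_{pipi_name(version, edition, account_type)}_{d:%Y%m%d}.zip"

def deploy_path(version: int, edition: str, account_type: str) -> str:
    """Root deployment directory, e.g. pipi/9ae/."""
    return f"pipi/{pipi_name(version, edition, account_type)}/"

print(archive_name(7, "p", "g", date(2020, 12, 30)))  # pipi_7pg_20201230.zip
print(deploy_path(9, "a", "e"))                       # pipi/9ae/

Because the mapping is a pure function of version, edition, account type and date, the same names can be regenerated anywhere, which is what makes the automation safe.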

Pipi Core directory structure

Mike's Notes

Here are some recent changes, and no doubt there will be many more on the road to autonomous automated production. This has been driven by:

  • Application.cfc configuration
  • Cross-platform deployment
  • Cost reduction
  • Security
  • Privacy
  • Archiving
  • Import previous versions
  • Generate future versions

Editions

Setting up the data centre has resulted in Pipi 9 being explicitly split into four editions this week, each with a different role. They all work together to provide socially useful critical infrastructure services.

  • Pipi Application
  • Pipi Core
  • Pipi IaC
  • Pipi Robot

Pipi Application

SaaS applications that run in the cloud (AWS, Azure, GCP, IBM, Oracle, etc).

Pipi Core

The mother ship runs in the isolated data centre.

Pipi IaC

A slave of Pipi Core, it uses Infrastructure as Code (IaC) to deploy into the cloud.

Pipi Robot

A slave of Pipi Core, it runs in the cloud to super-administer all SaaS applications.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

06/04/2026

Pipi Core directory structure

By: Mike Peters
On a Sandy Beach: 07/04/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

This is the current default directory configuration for the Pipi Core accounts in a Windows data centre.

Pipi can automatically edit the configuration file to easily change the location of these key directories for a particular host, such as Windows or Linux.

  • Linux
  • MacOS
  • Windows
  • etc.

Each major build version contains a different directory structure. This configuration file is for version 9.

Configuration file

9cc/

  • data/
    • db2/
    • msaccess/
    • mssql/
    • oracle/
    • pg/
      • 18/
  • pipi/
    • <name>/
      • pip/
        • i18n/
        • log/
        • <pipi version>.txt
        • sys/
        • temp/
        • template/
          • _include/
          • _layout/
          • _log/
    • work/
      • backup/
      • install/
      • project/

Descriptions

Directory            Example                 Description
9cc                  9cc/                    Major version = 9; Edition = c (Pipi Core); Account type = c (Core).
<name>               loki/                   Pipi instance name (retired deity, lowercase, unique ASCII).
pip                  pip/                    Pipi Core goes here.
i18n                 i18n/                   Translation files.
log                  log/                    Log files.
<pipi version>.txt   pipi.9.1.2.34567.txt    ASCII text file (current build).
sys                  sys/                    Application.
temp                 temp/
template             template/
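
As an illustration of how deterministic this layout is, here is a hypothetical Python sketch that materialises the version-9 skeleton above for one instance. The function and its arguments are invented for this sketch, not part of Pipi, and the placement of work/ follows the reconstructed listing above.

from pathlib import Path

SKELETON = [
    "data/db2", "data/msaccess", "data/mssql", "data/oracle", "data/pg/18",
    "pipi/{name}/pip/i18n", "pipi/{name}/pip/log", "pipi/{name}/pip/sys",
    "pipi/{name}/pip/temp", "pipi/{name}/pip/template/_include",
    "pipi/{name}/pip/template/_layout", "pipi/{name}/pip/template/_log",
    "pipi/work/backup", "pipi/work/install", "pipi/work/project",
]

def create_skeleton(root: str, name: str, build: str) -> None:
    """Create the 9cc directory tree for one instance under root."""
    base = Path(root) / "9cc"
    for rel in SKELETON:
        (base / rel.format(name=name)).mkdir(parents=True, exist_ok=True)
    # ASCII text file recording the current build, e.g. pipi.9.1.2.34567.txt
    (base / "pipi" / name / "pip" / f"{build}.txt").touch()

create_skeleton(".", name="loki", build="pipi.9.1.2.34567")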

The Memristor

Mike's Notes

Pipi 11 (2029-) will take a deep dive into hybrid analogue-digital computing. These include:

  • Reservoir computing
  • Memristors

This article by Brian Hayes is an excellent introduction to memristors. Memristor kits are available from Knowm.

Conceptual symmetries of resistor, capacitor, inductor, and memristor - Wikipedia

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook >

Last Updated

05/04/2026

The Memristor

By: Brian Hayes
American Scientist: 03/2011

Brian Hayes is a former editor and columnist for American Scientist. His most recent book is Foolproof, and Other Mathematical Meditations (MIT Press, 2017).

The first new passive circuit element since the 1830s might transform computer hardware

When Bell Telephone Laboratories announced the invention of the transistor in 1948, the press release boasted that “more than a hundred of them can easily be held in the palm of the hand.” Today, you can hold more than 100 billion transistors in your hand. What’s more, those transistors cost less than a dollar per billion, making them the cheapest and most abundant manufactured commodity in human history. Semiconductor fabrication lines churn out far more transistors than the world’s farmers grow grains of wheat or rice.

In this thriving transistor monoculture, can a new circuit element find a place to take root and grow? That’s the question posed by the memristor, a device first discussed theoretically 40 years ago and finally implemented in hardware in 2008. The name is a contraction of “memory resistor,” which offers a good clue to how it works.

Memristor enthusiasts hope the device will bring a new wave of innovation in electronics, packing even more bits into smaller volumes. Memristors would not totally supplant transistors but would supplement them in computer memories and logic circuits, and might also bring some form of analog information processing back into the world of computing. Farther out on the horizon is a vision of “neuromorphic” computers, modeled on animal nervous systems, where the memristor would play the role of the synapse.

Whether the memristor will ultimately fulfill all these hopes remains to be seen. The history of invention is littered with promising novelties that failed to dislodge an incumbent technology. On the other hand, there is now widespread agreement that some fundamental shift in circuit design will be needed if computer hardware is to remain a growth industry. The memristor looks like a strong candidate.

Titania

The device that has sparked all the recent excitement over memristors was created in 2008 by R. Stanley Williams and several colleagues at Hewlett-Packard Laboratories. The Williams memristor consists of two metal electrodes separated by a thin film of titanium dioxide, or TiO2. This substance, also known as titania, is familiar to artists as a white pigment and to beachgoers as an ingredient of sunscreen.

The memristor is a new building block for electrical circuits, an addition to the family of “passive” devices that also includes the resistor, the capacitor and the inductor. A version of the memristor invented at Hewlett-Packard in 2008 has a layer of insulating titanium dioxide, TiO2, sandwiched between two platinum (Pt) electrodes. Part of the TiO2 layer is “doped”: It has oxygen vacancies (orange disks) that act as positive ions and liberate electrons (purple dots) that can carry an electric current. The overall resistance of the device depends on the position of the boundary between the doped and the undoped regions; moreover, this boundary can be moved by an applied electric field. The diagrams above show several possible states of the memristor. Proceeding upward from the middle of the sequence, a forward bias voltage expands the doped region and thereby lowers the resistance; proceeding downward from the middle, a reverse bias voltage leaves most of the layer undoped, raising the resistance.

Illustration by Brian Hayes.

In its natural form titania is an electrical insulator, presenting very high resistance to the flow of electric current. In the memristor, part of the titania layer retains this natural insulating character, but the rest is altered during deposition by restricting the amount of oxygen available. The resulting oxygen vacancies in the crystal lattice reduce the resistance of the material by supplying mobile electrons that can carry a current. The oxygen-starved layer is said to be “doped.” (This term usually refers to added impurity atoms, but the effect of the oxygen deficiency is the same.)

An electric current passing through the memristor has to cross both the doped and the undoped regions, and so the total resistance is the sum of contributions from the two layers. The total depends on the relative thickness of the layers, or in other words on the position of the boundary between them. What gives the memristor its special traits is that this boundary can move.

Consider what happens inside the titania film when a voltage is applied to the terminals of the memristor, so that a current flows through it. The current is carried by conduction electrons—mostly electrons liberated by the oxygen vacancies. Electrons have a negative charge, and so they are repelled by the negative terminal and attracted to the positive one. In the background, meanwhile, another process is going on. The oxygen vacancies also have an electric charge; they act as positive ions, which drift toward the negative electrode. Movement of the vacancies requires physical rearrangement of the crystal lattice, and so it is much slower than the flow of electrons.

The relatively slow drift of the oxygen vacancies makes no significant contribution to the electric current flowing through the memristor, but by shifting the boundary line between doped and undoped layers, it alters the overall resistance of the device. Depending on the polarity of the applied voltage, the resistance can either increase (if the doped region is squeezed into a narrower layer) or decrease (if the doped region expands to include more of the total thickness). When the external voltage is removed, the boundary line stays put in its new position.

It is the migrating boundary between doped and undoped regions that gives the memristor its memory. And it’s not hard to see how this property can be put to work for information storage. One simple scheme defines a low-resistance state as a binary 0 and high-resistance state as a binary 1. To write a bit into the memory cell, apply a strong voltage pulse of the appropriate polarity, thereby setting the resistance either high or low. To read the stored state of the cell, use a lower voltage or a briefer pulse, which can measure the resistance without appreciably altering it.

A notable advantage of the memristor is that it can be made very small. As a matter of fact, it must be small, at least along one dimension—the thickness of the TiO2 film. The ratio of maximum to minimum resistance varies inversely as the square of this thickness. In practical devices the film might be as thin as 10 nanometers, which is just 25 or 30 atomic diameters.
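
A sketch of where that inverse-square dependence comes from, using the linear dopant-drift model of Strukov et al. 2008 (see the bibliography); here w is the width of the doped region, D the film thickness, and μ_V the dopant mobility:

M = R_{\mathrm{ON}}\,\frac{w}{D} + R_{\mathrm{OFF}}\left(1 - \frac{w}{D}\right),
\qquad
\frac{dw}{dt} = \mu_V\,\frac{R_{\mathrm{ON}}}{D}\,i(t)

\Rightarrow\;
M(q) \approx R_{\mathrm{OFF}}\left(1 - \frac{\mu_V R_{\mathrm{ON}}}{D^{2}}\,q(t)\right)
\quad\text{for } R_{\mathrm{ON}} \ll R_{\mathrm{OFF}}

The 1/D² factor is why a thinner film gives a larger swing between maximum and minimum resistance.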

It’s also notable that the memristor offers nonvolatile storage: The device retains its memory even when the power is turned off.

Lost and Found

There is a long tradition of explaining electric circuits by hydraulic analogies. Thus a conductor is compared to a pipe; electric current is analogous to the flow of water through the pipe; and voltage is like the pressure difference that drives the flow. In this imaginary world of electrical plumbing, a resistor is a small orifice that restricts the flow of water through a pipe. Similarly, a diode (or rectifier) might be likened to a one-way check valve, with a flap that the water pushes open when flowing in the right direction; pressure in the opposite direction closes the flap, which prevents any flow.

What is the hydraulic equivalent of a memristor? The closest analogy I can think of is a sand filter, an item of apparatus used in water-purification plants. As contaminated water flows through a bed of sand and gravel, sediment gradually clogs the pores of the filter and thereby increases resistance. Reversing the flow flushes out the sediment and reduces resistance. Note that this behavior differs from that of a check valve. Although in both cases the direction of flow is what controls the state of the device, at any given instant the resistance of the sand filter is the same in both directions. The memristor, too, is symmetric in this sense.

Plumbing analogies offer intuition about how a component works, but engineers need more—they need a predictive mathematical theory. The memristor has such a theory. It was formulated by Leon O. Chua of the University of California, Berkeley, in the early 1970s—when he had no physical device to which the theory applied. Chua’s 1971 paper on the subject was titled “Memristor—the missing circuit element.” Williams and his colleagues titled their 2008 announcement “The missing memristor found.”

Chua’s theory has nothing to say about oxygen vacancies or other details of materials and structures. It is framed in terms of the basic equations of electric circuits. Those equations link four quantities: voltage (v), current (i), charge (q) and magnetic flux (φ). Each equation establishes a relation between two of these variables. For example, the best-known equation is Ohm’s Law, v=Ri, which says that voltage is proportional to current, with the constant of proportionality given by the resistance R. If a current of i amperes is flowing through a resistance of R ohms, then the voltage measured across the resistance will be v volts. A graph of current versus voltage for an ideal resistor is a straight line whose slope is R.

Equations of circuit theory led to a prediction of the memristor almost 40 years before the device was discovered. The equations state relations among four variables: charge (q), current (i), voltage (v) and magnetic flux (φ). Taking these variables in pairs, there are six possible combinations, but only five equations were known. Leon O. Chua of the University of California, Berkeley, showed that the missing sixth equation, which links q and φ, defines the property he named memristance. In this matrix each equation appears in two forms, one the inverse of the other. Four paired equations (colored squares) are associated with basic circuit elements: resistance, capacitance, inductance and memristance. The remaining equations (gray squares) define charge as the time integral of current and voltage as the time derivative of flux.

Illustration by Brian Hayes.

Equations of the same form but with different pairs of variables describe two more basic electrical properties, capacitance and inductance. And two more equations define current and voltage in terms of charge and flux. That makes a total of five equations, which bring together various pairings of the four variables v, i, q and φ. Chua observed that four things taken two at a time yield six possible combinations, and so a sixth equation could be formulated. The missing equation would connect charge q and magnetic flux φ and would describe a new circuit element, joining the resistor, the capacitor and the inductor. Those three devices had all been known since the 1830s, so the new element would be a very late and unexpected addition to the family. Chua named it the memristor.
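
In equation form, using the article's variables, the two definitions and the four element laws (each a differential relation between one pair of variables) are:

q(t) = \int_{-\infty}^{t} i(\tau)\,d\tau,
\qquad
\varphi(t) = \int_{-\infty}^{t} v(\tau)\,d\tau

\text{resistor: } dv = R\,di,
\qquad
\text{capacitor: } dq = C\,dv,
\qquad
\text{inductor: } d\varphi = L\,di,
\qquad
\text{memristor: } d\varphi = M\,dq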

No law of physics demanded that such a device exist, but no law forbade it either; the existing theory of circuits with resistance, capacitance and inductance could be augmented in a straightforward way to include memristance as well. Chua argued for the plausibility of the memristor on grounds of symmetry and completeness, suggesting an analogy with Dmitri Mendeleev’s construction of the periodic table. Nature is not required to fill every square of this table, but a blank spot is certainly a good place to look for a new chemical element—or a new circuit element.

What would a device linking charge and flux look like? Framing the question in this way may be part of the reason it took so long to identify a physical memristor. The variables q and φ invite visions of electric and magnetic fields interacting in some conspicuous way. But the memristor invented at Hewlett-Packard has no obvious connection with magnetic phenomena. Instead it works as a special kind of variable resistor. How can this device be described in terms of q and φ?

Chua’s answer is that q and φ are more important as mathematical variables than as physical quantities. The charge q is the time integral of an electric current: The current is a rate of flow—the number of electrons per second passing some point in the circuit—whereas the charge is the total number of electrons passing that point. A similar relation defines voltage in terms of magnetic flux. By making use of these definitions, we can describe the action of the memristor in terms of voltage and current instead of charge and flux.

The simplest form of the memristor equation is just a variant of Ohm’s Law: v=M(q)i. Where Ohm’s Law has a constant, R, representing resistance, the memristor equation has a function, M(q). M is not a constant; instead it varies as a function of the quantity of charge that has passed through the device. This functional dependence allows memristance to be controlled in ways that ordinary resistance cannot. (Nevertheless, memristance is expressed in the same unit of measure as resistance, namely the ohm.)
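
The step connecting the charge-flux relation to this equation is worth spelling out: if the device imposes a relation φ = f(q), then differentiating with respect to time and using the definitions of current and voltage given above yields

v(t) = \frac{d\varphi}{dt}
     = \frac{d\varphi}{dq}\cdot\frac{dq}{dt}
     = M(q)\,i(t),
\qquad
M(q) \equiv \frac{d\varphi}{dq}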

Long before Williams announced the TiO2 memristor, there were reports of “anomalous” resistance effects that can now be understood in terms of memristance. Chua has compiled a list of examples going back to 1976, and Williams himself had been exploring such phenomena since 1997. What changed in 2008 was the recognition that Chua’s memristor theory could be applied to these devices. The connection between theory and experiment is more than a formality; it allows memristors to be modeled in circuit-simulation software, an essential in the design of large-scale systems.

Hysteresis and Memory

The resistor, the capacitor, the inductor and the memristor are all described as “passive” circuit elements, to distinguish them from “active” devices such as transistors, which can amplify signals and inject power into circuits. But the memristor differs from the other passive components in a crucial way: It is necessarily a nonlinear device. In an ideal resistor, as mentioned above, the relation between current and voltage is one of simple proportionality, and so the graph of this relation is a straight line of slope R. The equivalent graph for an ideal memristor is not a line but a curve, where the slope varies from place to place.

Plotting the relation of current to voltage highlights the difference between a resistor (above) and a memristor (see next illustration). In these graphs, voltage and current are each plotted separately as a function of time, and then combined in a current-voltage curve that shows the evolving state of the system. (The separate voltage and current graphs are turned 90 degrees to each other so that their axes match those of the current-voltage graph.) The input voltage is a sine wave. For an ideal resistor, current is proportional to voltage, and so the current is also sinusoidal. The current-voltage curve is a straight line whose slope is the resistance, a constant.

Illustration by Brian Hayes.

In the TiO2 memristor it’s easy to see where the nonlinearity comes from. Suppose the device is connected to a source of constant voltage. As current passes through the memristor in the “forward” direction—enlarging the conductive, doped, layer—the memristance decreases; this allows more current to pass, which further reduces the memristance, and so on. Reversing the polarity of the voltage source leads to the opposite kind of feedback loop, where increasing memristance causes still further increases.

The memristor current-voltage curve takes the form of a “pinched hysteresis loop.” Because of the shifting boundary between doped and undoped regions, memristance is not a constant. Current initially grows slower than voltage, then speeds up and continues increasing even after voltage has reached its peak. The loop closes again at the origin: Whenever the voltage is zero, so is the current. The colored dots are meant to help in tracing the trajectory of the system along the current-voltage curve. They identify five corresponding points in the first half-cycle of the sine wave; the sequence progresses from yellow through orange to red.

Illustration by Brian Hayes.

The nature of the nonlinearity can be seen clearly by tracing the response of the device to a sinusoidal signal—a smoothly alternating voltage. The plot starts at zero volts and zero amperes. As the voltage steadily increases, so does the current, at an accelerating rate reflecting the nonlinear memristance. Then, after the voltage reaches its maximum and starts to fall again, the current continues to rise briefly because the resistance of the TiO2 film is still diminishing. When the current finally does retreat, the descending branch of the curve does not retrace the path of the ascending branch. Instead it forms a loop, called a hysteresis loop (a term borrowed from the study of magnetic systems). Specifically, the memristor’s curve is a “pinched” hysteresis loop, because the two branches cross at the origin. It’s a characteristic of the memristor that whenever the voltage is zero, so is the current, and vice versa. This fact implies that the memristor stores no energy, not even briefly. (The same is true of resistors, but not of capacitors or inductors.)

Hysteresis creates a fundamental distinction between resistors and memristors. In a resistor, current is a simple, single-valued function of voltage; the same voltage always elicits the same current. The hysteresis curve of a memristor driven by a sinusoidal input signal implies that the same voltage can yield two different currents. More generally, when we consider inputs other than simple sine waves, a given voltage can correspond to many different values of current. What value is observed depends on the internal state of the memristor, which in turn depends on its history. This is just another way of saying that the memristor retains a memory of its own past.
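
The pinched loop is easy to reproduce numerically. Below is an illustrative Python sketch of the linear dopant-drift model described above, driven by a sine wave; the parameter values are chosen to make the loop visible and are not taken from the article or from a real device.

import numpy as np

# Illustrative TiO2-style memristor simulation (linear drift model,
# Strukov et al. 2008, see bibliography). x = w/D is the normalized
# position of the doped/undoped boundary.
R_ON, R_OFF = 100.0, 16_000.0  # fully doped / fully undoped resistance (ohms)
K = 10_000.0                   # drift coefficient mu_V * R_ON / D^2 (1/coulomb)
x = 0.1                        # initial boundary position

dt = 1e-4
t = np.arange(0.0, 1.0, dt)                # one second of simulated time
v = 1.0 * np.sin(2.0 * np.pi * 2.0 * t)    # 2 Hz, 1 V amplitude

i = np.empty_like(t)
for n, vn in enumerate(v):
    m = R_ON * x + R_OFF * (1.0 - x)       # instantaneous memristance
    i[n] = vn / m
    x += K * i[n] * dt                     # boundary drifts with the charge
    x = min(max(x, 0.0), 1.0)              # boundary cannot leave the film

# Plotting i against v traces a pinched hysteresis loop: the ascending and
# descending branches differ, yet both pass through the origin (v=0 -> i=0).

Note that the loop is pinched by construction: the current is always v divided by the instantaneous memristance, so zero voltage forces zero current regardless of the device's history.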

Switching in Time

The transistor is a three-terminal device, with three connections to the rest of the circuit. It acts as a switch or amplifier, with a voltage applied to one terminal controlling a current flowing between the other two terminals. No such design is possible with memristors, which have just two terminals. But memristors can nonetheless be used to build both memory and digital logic. The key is to exploit the memristor’s built-in sense of history: A signal applied at one instant can affect another signal that later travels the same path. The first signal exerts this control by setting the internal state of the memristor to either high or low resistance.

The favored layout for memristor memory is a crossbar structure, where perpendicular rows and columns of fine metal conductors are separated by a thin, partially doped layer of TiO2. In this way a memristor is formed at every point where a column crosses a row. Each bit in the memory is individually addressable by selecting the correct combination of column and row conductors. A signal pulse applied to these conductors can write information by setting the resistive state of the TiO2 junction. A later pulse on the same pair of conductors reads the recorded information by measuring the resistance.
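
The write/read scheme described here is simple enough to model as a toy. A Python sketch, with illustrative two-state resistance values that are not from the article:

# Toy model of the crossbar scheme: a memristor sits at each row/column
# crossing; a strong pulse writes a resistance state, a weak pulse reads it.
R_LOW, R_HIGH = 100.0, 16_000.0  # illustrative binary 0 / binary 1 states

class Crossbar:
    def __init__(self, rows: int, cols: int) -> None:
        self.r = [[R_HIGH] * cols for _ in range(rows)]

    def write(self, row: int, col: int, bit: int) -> None:
        # Strong voltage pulse of the appropriate polarity sets the state.
        self.r[row][col] = R_HIGH if bit else R_LOW

    def read(self, row: int, col: int) -> int:
        # Weak pulse measures resistance without appreciably altering it.
        return 1 if self.r[row][col] > (R_LOW + R_HIGH) / 2 else 0

xbar = Crossbar(4, 4)
xbar.write(2, 3, 1)
print(xbar.read(2, 3))  # -> 1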

A near-term role for memristor crossbar arrays might be as a competitor for “flash” memory, the nonvolatile storage technology used in cell phones, cameras and many other devices. Each cell in a flash memory is a single transistor, modified for long-term storage of electric charge. The memristor structure is simpler and requires only two connections, so it might be made smaller than the flash-memory transistor. Thus there’s the possibility of higher density and lower cost.

Building logic circuits out of memristors would be a somewhat greater departure from current practice. In the early years of solid-state electronics, a technology called resistor-transistor logic had a brief vogue; the idea was to minimize the number of expensive transistors and maximize the number of cheap resistors. But with the coming of integrated circuits the economic incentives changed. With potential advantages in size and power consumption, memristors might shift the balance back toward a technology that combines active and passive devices. Williams and his colleagues have demonstrated a set of memristor logic gates that are computationally complete—they can implement any Boolean logic function.

Active components such as transistors would still be needed even if most information processing were done by memristors. The reason is that signals are reduced in amplitude by every passive circuit element, and at some point they must be restored to full strength. This requires a transistor or some other active device. Methods of fabricating hybrid circuits that combine transistors and memristors on the same substrate are an active area of investigation.

In binary digital circuits, memristors would operate as switches, toggling between maximum and minimum resistance. In this mode, the state of a memristor encodes one bit of information. If several intermediate resistances could be distinguished reliably, then the information density could be raised to two or three bits per device. The writing and reading processes would have to be calibrated to resolve four or eight levels of resistance. (Some flash memory chips already achieve this.) The end point of this evolution is to let the resistance vary continuously, and operate the memristor as an analog device.

One intriguing way to exploit analog memristors would be to build a machine modeled on the nervous system. In biological neural networks, each nerve cell communicates with other cells through thousands of synapses; adjustments to the strength of the synaptic connections are thought to be one mechanism of learning. In an artificial neural network, synapses must be small, simple structures if they are to be provided in realistic numbers. The memristor meets those requirements. Moreover, its native mode of operation—changing its resistance in response to the currents that flow through it—suggests a direct way of modeling the adjustment of synaptic strength.

Empire Building

Will the memristor turn out to be a transformative technology, the key to putting hundreds of trillions of devices in the palm of your hand? Or will we be asking, a few years from now, “Whatever happened to the memristor?”

The empire of the transistor has fended off many other rivals and would-be invaders. A memory technology based on magnetic “bubbles” floating on a garnet crystal once held great promise, but you can read about it now on the website of the Vintage Technology Association. The charge-coupled device, or CCD, was another candidate for main memory and mass storage; it failed to gain a foothold in that role, although it did find another niche, as the image sensor of digital cameras. And there were wilder flights of fancy, such as superconducting computers and photonic data processing.

This roster of defeated challengers might lead one to conclude that no innovation has a chance of displacing an entrenched technology. However, the transistor itself offers the obvious refutation. At its debut in 1948 it had to compete with the vacuum tube, which had dominated the electronics industry for 30 years. Although the transistor took more than a decade to establish itself, in the end the vacuum tube became a quaint collector’s item.

Today the TiO2 memristor is just one of many contending new technologies. Considering only the realm of switched-resistance memory elements, there are several other candidates, including devices based on phase changes, on magnetic fields and on electron spin. (Chua argues that all these devices should be classified as memristors.) To evaluate the long-term prospects of such technologies, one would have to go beyond basic principles of operation to questions of reliability, longevity, uniformity, cost of manufacturing and dozens of other details.

In a telephone conversation I asked Williams why he believes the memristor will be the technology that prevails. He offered several substantive arguments, but he also added, candidly: “It’s the one I’m working on. I have to believe in it.” In a sense this is the strongest endorsement anyone can give. As a bystander, I have the luxury of waiting on the sidelines to see how the contest comes out. But someone has to make choices, take risks and commit resources, or nothing new will ever be created.

©Brian Hayes

Bibliography

  • Borghetti, Julien, Gregory S. Snider, Philip J. Kuekes, J. Joshua Yang, Duncan R. Stewart and R. Stanley Williams. 2010. ‘Memristive’ switches enable ‘stateful’ logic operations via material implication. Nature 464:873–876.
  • Chua, Leon O. 1971. Memristor—the missing circuit element. IEEE Transactions on Circuit Theory 18:507–519.
  • Chua, Leon O., and Sung Mo Kang. 1976. Memristive devices and systems. Proceedings of the IEEE 64(2):209–223.
  • Chua, Leon. 2011. Resistance switching memories are memristors. Applied Physics A 102(4) (In press).
  • Joglekar, Yogesh N., and Stephen J. Wolf. 2009. The elusive memristor: properties of basic electrical circuits. European Journal of Physics 30:661–675.
  • Keyes, Robert W. 2009. The long-lived transistor. American Scientist 97:134–141.
  • Li, Hai, and Yiran Chen. 2010. Emerging non-volatile memory technologies. In Proceedings of the 53rd Midwest Symposium on Circuits and Systems. doi:10.1109/MWSCAS.2010.5548590.
  • Rose, Garrett S. 2010. Overview: Memristive devices, circuits and systems. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS 2010), pp. 1955–1958.
  • Strukov, Dmitri B., Gregory S. Snider, Duncan R. Stewart and R. Stanley Williams. 2008. The missing memristor found. Nature 453:80–83.
  • Strukov, D. B., D. R. Stewart, J. Borghetti, X. Li, M. Pickett, G. Medeiros Ribeiro, W. Robinett, G. Snider, J. P. Strachan, W. Wu, Q. Xia, J. Joshua Yang and R. S. Williams. 2010. Hybrid CMOS/memristor circuits. In Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS 2010), pp. 1967–1970.
  • Versace, Massimiliano, and Ben Chandler. 2010. The brain of a new machine. IEEE Spectrum 47(12):30–37.
  • Williams, R. Stanley. 2008. How we found the missing memristor. IEEE Spectrum 45(12):28–35.