Laws of Software Engineering

Mike's Notes

Dr Milan Milanovic has written a 300-page book about the laws of software engineering. It looks great. There is also a website, a newsletter and a printable poster. It is a very useful collection of 56 laws for future reference.

Missing Laws

  • Glass's Law: For every 25% increase in the complexity of the problem space, there is a 100% increase (a doubling) in the complexity of the solution space.
  • Sessions' Laws:
    • The Law of Exponential Complexity: As you add more components to a system, the complexity increases exponentially, not linearly.
    • Equivalence Relations & Partitioning: Sessions argues that the only way to manage massive IT complexity is through partitioning—breaking a large system into smaller, independent "Snowman" units that do not share state or data directly.
    • Mathematical Predictability: He uses these laws to calculate the probability of IT project failure based on the number of interconnected components.
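
Taken together, Glass's and Sessions' observations imply exponential growth. A minimal Python sketch (the doubling rule is read directly from Glass's wording; the function name and numbers are illustrative, not from either book):

```python
# Illustrative sketch of Glass's Law: each 25% increase in problem
# complexity doubles solution complexity. The doubling-per-25% reading
# is an assumption drawn from the law's wording, not an exact formula.

def solution_complexity(problem_growth: float) -> float:
    """Relative solution complexity after the problem space grows by
    problem_growth (e.g. 0.5 means 50% more complex)."""
    return 2 ** (problem_growth / 0.25)

for growth in (0.0, 0.25, 0.5, 1.0):
    print(f"problem +{growth:.0%} -> solution x{solution_complexity(growth):g}")
# A 100% harder problem yields a 16x more complex solution under this reading.
```

This is exactly why Sessions argues for partitioning: several small independent systems are exponentially cheaper than one large interconnected one.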

Heuristics

While the name suggests rigid scientific "laws," the collection is more accurately described as a set of heuristics: mental shortcuts and empirical observations that help teams understand how software projects actually behave.

Complex Adaptive System (CAS)

I knew about some of these heuristics and designed Pipi as a CAS to break many of them. πŸ˜Ž Laws of Software Engineering gives me a clear target to smash.

Website

On the Laws of Software Engineering website, each law links to a page with sections covering:

  • Takeaways
  • Overview
  • Examples
  • Origin
  • Further Reading

I copied the 56 laws here from the website, subscribed to the newsletter, and printed off the poster. I must get the book. Very cool.

Thank you, Dr Milan Milanovic. 😎

From Amazon

"Conway's Law. Brooks's Law. Goodhart's Law. Hyrum's Law. If you've been writing software long enough, you've recognized these rules even before you knew their names. If you had an opportunity to see a failed rewrite, or a time that is getting bigger but slower, you hit into these rules.

These patterns have been showing up in software projects for over fifty years. Experienced engineers know them, but they learned them the hard way. These lessons were never collected in one place where you could read them. They lived in academic papers from the 1960s, in blog posts that get shared once and forgotten, and in discussions and code reviews.

This book puts all of them in one place. 63+ laws and principles, each with its own chapter, with forewords by Dr. Rebecca Parsons (CTO Emerita at Thoughtworks) and Addy Osmani (Engineering Director at Google Cloud AI).

Each chapter covers where the law came from, how it actually works, what it looks like in the real world, and how it connects to other laws in the book. That last part matters. Some of these principles reinforce each other, and others directly conflict. The "Related Laws" sections throughout the book show you where those connections are. This is important because in practice, engineering is about tradeoffs, not rules.

The book is organized into seven standalone parts:

    • Part I: Architecture & Complexity. 8 laws, including Gall's Law, CAP Theorem, and Hyrum's Law.
    • Part II: People, Teams & Organizations. 10 laws, including Conway's Law, Brooks's Law, and the Peter Principle.
    • Part III: Time, Estimation & Planning. 6 laws, including Hofstadter's Law and Goodhart's Law.
    • Part IV: Quality, Maintenance & Evolution. 10 laws including Technical Debt, Lehman's Laws, and Testing Pyramid.
    • Part V: Scale, Performance & Growth. 4 laws, including Amdahl's Law and Metcalfe's Law.
    • Part VI: Coding & Design Principles. 6 laws including SOLID, DRY, and YAGNI.
    • Part VII: Decision-Making & Biases. 13 laws, including Occam's Razor, Dunning-Kruger Effect, and Pareto Principle.

Many chapters also cover companion laws within them: George Box's Law, The Spotify Model, Two Pizza Rule, The Dead Sea Effect, The Cobra Effect, Impostor Syndrome, and more. There's more in here than the table of contents suggests.

You don't need to read it front to back. Each chapter stands alone. Building a distributed system? Start with Part I. Team problems? Part II. Estimation problems? Part III. Codebase issues? Part IV. Or just start at page one. That works too.

Half the book covers people, organizations, and how we make decisions, not just technology. Remember that this is not a coding tutorial and it won't teach you a programming language or framework.

If you've been in the industry for a while, you'll probably recognize some of these laws. Here you will learn about the others, how they connect, and how they affect each other. You will also come to understand them in depth, so they become useful for your current or next project. And if you're earlier in your career, this book gives you the vocabulary your senior colleagues already have." - Amazon

Resources

References

  • Laws of Software Engineering by Dr Milan Milanovic (2026).

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Amazing CTO
  • Home > Ajabbi Research > Library > Subscriptions > Techworld with Milan
  • Home > Handbook > 

Last Updated

27/04/2026

Laws of Software Engineering

By: Dr Milan Milanovic
Laws of Software Engineering: 27/04/2026

I’m a software engineer and CTO with over 20 years of experience, from startups to large enterprises, including big tech as a contractor. I hold a Ph.D. in Computer Science, have authored over 20 scientific publications with 440+ citations, and am recognized as a Microsoft MVP for Developer Technologies.

With each new role, from developer to architect to CTO, I encountered the same patterns. The same struggles. The same failures. The same hard-won insights that engineers keep rediscovering, often at great cost.

I started collecting these laws and principles for myself, then for my teams. When I started my newsletter, I shared them with the world. Today, my writing on software, architecture, and leadership reaches over 400,000 engineers.

This book represents everything I wish I’d had when I started. I hope it spares you some of the pain I went through.

A collection of principles and patterns that shape software systems, teams, and decisions.

Part I: Architecture & Complexity (Architecture)

1. Gall's Law

A complex system that works is invariably found to have evolved from a simple system that worked.

2. The Law of Leaky Abstractions

All non-trivial abstractions, to some degree, are leaky. 

+ George Box's Law

3. Tesler's Law (Conservation of Complexity)

Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated.

4. CAP Theorem

A distributed system can guarantee only two of: consistency, availability, and partition tolerance.

5. Hyrum's Law

With a sufficient number of API users, all observable behaviors of your system will be depended on by somebody.

6. Second-System Effect

Small, successful systems tend to be followed by overengineered, bloated replacements.

7. Fallacies of Distributed Computing

A set of eight false assumptions that new distributed system designers often make.

8. Law of Unintended Consequences

Whenever you change a complex system, expect surprises.

9. Zawinski's Law

Every program attempts to expand until it can read mail.

Part II: People, Teams & Organizations (Teams)

10. Conway's Law

Organizations design systems that mirror their own communication structure. 

+ The Spotify Model

11. Brooks's Law

Adding manpower to a late software project makes it later. 

+ Little's Law

12. Dunbar's Number

There is a cognitive limit of about 150 stable relationships one person can maintain.

13. The Ringelmann Effect

Individual productivity decreases as group size increases. 

+ The Two-Pizza Rule

14. Price's Law

The square root of the total number of participants does 50% of the work.
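
The arithmetic behind Price's Law is easy to check. A minimal sketch (illustrative only, treating the law as a rough heuristic rather than an exact result):

```python
import math

# Price's Law sketch: in a group of n contributors, roughly sqrt(n)
# of them account for half the output. Heuristic, not an exact law.
def core_contributors(n: int) -> int:
    return round(math.sqrt(n))

for n in (9, 100, 10000):
    print(f"{n} people: ~{core_contributors(n)} do 50% of the work")
# The bigger the group, the smaller the *fraction* doing half the work.
```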

15. Putt's Law

Those who understand technology don't manage it, and those who manage it don't understand it.

16. Peter Principle

In a hierarchy, every employee tends to rise to their level of incompetence.

17. Bus Factor

The minimum number of team members whose loss would put the project in serious trouble. 

+ The Dead Sea Effect

18. Dilbert Principle

Companies tend to promote incompetent employees to management to limit the damage they can do.

Part III: Time, Estimation & Planning (Planning)

19. Hofstadter's Law

It always takes longer than you expect, even when you take into account Hofstadter's Law. 

20. Parkinson's Law

Work expands to fill the time available for its completion.

21. The Ninety-Ninety Rule

The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.

22. Goodhart's Law

When a measure becomes a target, it ceases to be a good measure. 

+ The Cobra Effect

23. Gilb's Law

Anything you need to quantify can be measured in some way that is superior to not measuring it at all. 

24. Premature Optimization (Knuth's Optimization Principle)

Premature optimization is the root of all evil.

Part IV: Quality, Maintenance & Evolution (Quality)

25. Murphy's Law / Sod's Law

Anything that can go wrong will go wrong.

26. Postel's Law

Be conservative in what you do, be liberal in what you accept from others.

27. Broken Windows Theory

Don't leave broken windows (bad designs, wrong decisions, or poor code) unrepaired. 

28. The Boy Scout Rule

Leave the code better than you found it.

29. Technical Debt

Technical Debt is everything that slows us down when developing software.

30. Linus's Law

Given enough eyeballs, all bugs are shallow.

31. Kernighan's Law

Debugging is twice as hard as writing the code in the first place.

32. Testing Pyramid

A project should have many fast unit tests, fewer integration tests, and only a small number of UI tests. 

+ The BeyoncΓ© Rule

33. Pesticide Paradox

Repeatedly running the same tests becomes less effective over time.

34. Lehman's Laws of Software Evolution

Software that reflects the real world must evolve, and that evolution has predictable limits.

35. Sturgeon's Law

90% of everything is crap.

Part V: Scale, Performance & Growth (Scale)

36. Amdahl's Law

The speedup from parallelization is limited by the fraction of work that cannot be parallelized.

37. Gustafson's Law

It is possible to achieve significant speedup in parallel processing by increasing the problem size.
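
Both laws are usually written as simple formulas: Amdahl's fixed-size speedup is 1 / ((1 - p) + p/n) and Gustafson's scaled speedup is (1 - p) + p*n, where p is the parallelizable fraction and n the processor count. A quick sketch of both (standard textbook forms, not taken from this book):

```python
# Amdahl's vs Gustafson's speedup, in their standard textbook forms.
# p = parallelizable fraction of the work, n = number of processors.

def amdahl(p: float, n: int) -> float:
    """Fixed-size speedup: capped by the serial fraction, 1 / (1 - p)."""
    return 1 / ((1 - p) + p / n)

def gustafson(p: float, n: int) -> float:
    """Scaled speedup: grow the problem size along with the processors."""
    return (1 - p) + p * n

print(amdahl(0.95, 1024))     # ~19.6: never exceeds 1 / (1 - 0.95) = 20
print(gustafson(0.95, 1024))  # ~972.9: keeps growing with n
```

The contrast is the whole point: with a 5% serial fraction, Amdahl caps you near 20x no matter how many processors you add, while Gustafson's scaled view keeps improving as the problem grows.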

38. Metcalfe's Law

The value of a network is proportional to the square of the number of users. 

+ Sarnoff's & Reed's Laws
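
Putting the three network-value heuristics side by side makes the differences obvious. A small illustrative sketch (proportionality constants set to 1; these are heuristics, not measured quantities):

```python
# Three network-value heuristics, with the constant of proportionality
# set to 1 for illustration.
def sarnoff(n: int) -> int:
    return n           # broadcast network: value ~ audience size

def metcalfe(n: int) -> int:
    return n ** 2      # pairwise connections: value ~ n^2

def reed(n: int) -> int:
    return 2 ** n      # group-forming network: value ~ 2^n subgroups

for n in (10, 100):
    print(n, sarnoff(n), metcalfe(n), reed(n))
```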

Part VI: Coding & Design Principles (Design)

39. DRY (Don't Repeat Yourself)

Every piece of knowledge must have a single, unambiguous, authoritative representation.

40. KISS (Keep It Simple, Stupid)

Designs and systems should be as simple as possible.

41. YAGNI (You Aren't Gonna Need It)

Don't add functionality until it is necessary. 

42. SOLID Principles

Five main guidelines that enhance software design, making code more maintainable and scalable. 

43. Law of Demeter

An object should only interact with its immediate friends, not strangers.

44. Principle of Least Astonishment

Software and interfaces should behave in a way that least surprises users and other developers.

Part VII: Decision-Making & Biases (Decisions)

45. Dunning-Kruger Effect

The less you know about something, the more confident you tend to be. 

+ Impostor Syndrome

46. Hanlon's Razor

Never attribute to malice that which is adequately explained by stupidity or carelessness.

47. Occam's Razor

The simplest explanation is often the most accurate one.

48. Sunk Cost Fallacy

Sticking with a choice because you've invested time or energy in it, even when walking away would serve you better.

49. The Map Is Not the Territory

Our representations of reality are not the same as reality itself.

50. Confirmation Bias

A tendency to favor information that supports our existing beliefs or ideas.

51. The Hype Cycle & Amara's Law

We tend to overestimate the effect of a technology in the short run and underestimate the impact in the long run.

52. The Lindy Effect

The longer something has been in use, the more likely it is to continue being used.

53. First Principles Thinking

Breaking a complex problem into its most basic blocks and then building up from there.

54. Inversion

Solving a problem by considering the opposite outcome and working backward from it.

55. Pareto Principle (80/20 Rule)

80% of the problems result from 20% of the causes.

56. Cunningham's Law

The best way to get the correct answer on the Internet is not to ask a question, it's to post the wrong answer.

20 Engines

Mike's Notes

While I finish off testing the Pipi System Engine (sys), here is the plan for the next stage.

Update 27/04/2026

Today, I have also been setting up some new twin 27" monitors to help with coding. I need larger 16pt Arial or Noto Sans font sizes these days. 😎

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

27/04/2026

20 Engines

By: Mike Peters
On a Sandy Beach: 26/04/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Project

The project is to import the approximately 20 engines necessary for Pipi to self-manage with a minimal set of internal features, no adaptation or self-evolution. That will come later, when many more engines are imported and contained inside other engines.

Variables

All of these engines have worked for years (some with origins dating back 26 years) and are mature and stable, but have been migrated from a laptop to a data centre, where the host environment differs. I forgot to sort that out first. 😎 The Nest Engine (nst) was rapidly invented to solve that problem, but the variables generated by the nest are also causing name clashes.
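
Pipi's code isn't shown here, but the host-environment difference described (the DevOps log below mentions Linux vs Windows path delimiters) is the classic case for building paths from parts instead of hard-coding a delimiter. A generic Python illustration using pathlib; this is not Pipi code, and every name in it is hypothetical:

```python
from pathlib import PurePosixPath, PureWindowsPath

# Generic illustration (not Pipi code): composing paths with pathlib
# keeps the same template portable between a Windows laptop and a
# Linux data-centre host, with no hard-coded "/" or "\\" delimiters.
def engine_log_path(root: str, engine: str, flavour: str = "posix"):
    cls = PureWindowsPath if flavour == "windows" else PurePosixPath
    return cls(root, "engines", engine, "logs")

print(engine_log_path("/srv/pipi", "nst"))                    # /srv/pipi/engines/nst/logs
print(engine_log_path(r"C:\pipi", "nst", flavour="windows"))  # C:\pipi\engines\nst\logs
```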

Process

Each engine needs the same minor tweak as it is imported. I'm carefully checking all spelling in each engine. Each engine is then left running while I watch the logs. Once the log engine is imported, logs can be visualised in Mission Control using third-party open-source tools. This will speed things up a lot. Eventually, the logs will feed feedback loops as this beast scales.

20 Engine import list

In no particular order.

  1. System Engine (sys)
  2. Nest Engine (nst)
  3. Namespace Engine (nsp)
  4. Variables Engine (var)
  5. Render Engine (rnd)
  6. Template Engine (tem)
  7. Log Engine (log)
  8. Data Engine (dta)
  9. Configuration Engine (cnf)
  10. Versioning Engine (ver)
  11. Code Engine (cde)
  12. Conductor Engine (cnd)
  13. Directory Engine (dir)
  14. Node Engine (nde)
  15. CMS Engine (cms)
  16. Core Engine (cor)
  17. Factory Engine (fac)
  18. Page Engine (pge)
  19. DevOps Engine (dvp)

Pipi is the IDE

I built Pipi using Pipi, so I'm stuck in chicken-and-egg land at the moment. The more engines are imported, the easier this will get. Luckily, I have a rough idea of the import order, and I can use Synethesia to run simulations, which are usually very fast and accurate. After all, it's how I build everything.

Pipi Editions

Each engine is like a different kind of Lego brick, and each Pipi is built out of hundreds of these bricks. The 4 Editions are built with the same bricks, combined in different ways. The Instances of any Edition are built exactly the same, but their databases with the same data models will store different histories, weights, parameters, etc., as they adapt and evolve.

DevOps Engine (dvp)

The DevOps log below is currently maintained manually, but will be generated automatically once the DevOps Engine (dvp) is back up and running. These logs will be automatically published on the Ajabbi Developer website using an interactive format with more detail.


DevOps log

A record of work done.

NZ DateTime Action Engine Status

  • Test System Engine (sys)
  • Create Nest Engine (nst) - code for Linux vs Windows path delimiters.
  • Test Nest Engine (nst)
  • Test Namespace Engine (nsp)
  • Test Variables Engine (var)
  • Test System Engine (sys)
  • Import Render Engine (rnd)
  • Edit Render Engine (rnd) - variables.
  • Edit Render Engine (rnd) - spelling.
  • Test Render Engine (rnd)
  • Test Render Engine (rnd) - run Nest Engine (nst) templates to create a nest.
  • Test System Engine (sys)
  • Import Template Engine (tem)
  • Edit Template Engine (tem) - variables.
  • Edit Template Engine (tem) - spelling.
  • Test Template Engine (tem)
  • Import Template Engine (tem) - import temporary nest templates.
  • Test System Engine (sys)
  • Test Render Engine (rnd) + Template Engine (tem) + Nest Engine (nst)
  • Test System Engine (sys)
  • Create temporary web form UI for each engine.
  • Test temporary web form UI for each engine.
  • Import Log Engine (log)
  • Edit Log Engine (log) - variables.
  • Edit Log Engine (log) - spelling.
  • Test Log Engine (log) - CRUD log file formats.
  • Render Log Engine (log) - logs.
  • Import graphing library for logs.
  • Create embedded visualisations on Mission Control web pages.
  • Test System Engine (sys)
  • Import Data Engine (dta)
  • Edit Data Engine (dta) - variables.
  • Edit Data Engine (dta) - spelling.
  • Test Data Engine (dta)
  • Test System Engine (sys)
  • Import Configuration Engine (cnf)
  • Edit Configuration Engine (cnf) - variables.
  • Edit Configuration Engine (cnf) - spelling.
  • Test Configuration Engine (cnf)
  • Test System Engine (sys)
  • Import Versioning Engine (ver)
  • Edit Versioning Engine (ver) - variables.
  • Edit Versioning Engine (ver) - spelling.
  • Test Versioning Engine (ver) - automatic versioning.
  • Render Versioning Engine (ver) - version logs.
  • Test System Engine (sys)

Live 4K video of Earth and space: 24/7 Livestream of Earth by Sen’s 4K video cameras on the ISS

Mike's Notes

Just beautiful.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

25/04/2026

Live 4K video of Earth and space: 24/7 Livestream of Earth by Sen’s 4K video cameras on the ISS

By: Sen
Sen: 25/04/2026

Sen’s vision is to democratise space through video to inform, educate, inspire, and benefit all of humanity. Sen’s mission is to stream real-time video from space to billions of people, gathering news and information about Earth and space and making the data universally accessible and useful.

Watch Earth Live and in 4K from Sen's video cameras on the International Space Station, downlinked via NASA. This is the world’s first continuous 4K livestream from space, empowering you to see our planet like astronauts do.

"Intention is what we need": Neeru Khosla on the Future of Education and Learning with AI

Mike's Notes

I love the multi-modality of CK12 education. Works for me. A lot better than what I got at school.

Thanks, Ksenia, for this wonderful interview.

Turing Post is a fantastic newsletter to subscribe to.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Turing Post
  • Home > Handbook > 

Last Updated

24/04/2026

"Intention is what we need": Neeru Khosla on the Future of Education and Learning with AI

By: Ksenia Se
Turing Post: 18/04/2026

Over two decades in tech, with the last seven years focused on ML and AI. Our analysis stays precise and grounded. Our educational series walk you through the foundations and help you explore the deeper layers. We trace the arc of AI – its past, its present, and the direction it’s pulling us toward. We track the research that matters, the systems being built, and the ideas that define how AI actually works. And we break it down with clarity, so you can make better decisions. Join 100,000+ professionals who rely on Turing Post.

Neeru Khosla is the co-founder of CK-12 Foundation, a nonprofit focused on free, adaptive learning materials for students and teachers. In this interview, she explains why AI in education matters less as an “answer machine” and more as a way to understand how students think: what misconception they hold, where they are confused, and what kind of support can help them learn more deeply. CK-12’s AI tutor, Flexi, is built around that idea, using AI to support personalized learning, knowledge tracing, and student curiosity at scale.

How the CK-12 co-founder thinks about using AI to rethink, personalize, and gradually change the education system

“Attention is what taught the machines. But now, in this AI world, we need to have intention and judgment. Intention is what we need.”

I loved that conversation with Neeru Khosla, co-founder of the CK-12 Foundation, a nonprofit focused on free, adaptive learning materials for students and teachers. She articulated one of the most important shifts now unfolding in education. For years, digital learning has focused on access: more content, more devices, more formats, more scale. But Neeru’s point is that the real shift brought by AI is the possibility of understanding how a student is actually thinking – where they are confused, what misconception they hold, what question they are really trying to ask, and what kind of help might move them forward.

Khosla has been working on this problem for nearly two decades. CK-12 started with an idea to make high-quality learning materials free, flexible, and adaptable to the needs of individual students. Long before the current AI boom, the foundation was already building multimodal learning tools, adaptive systems, and concept-based content designed to meet learners where they are. But as Neeru describes it, the missing layer was always insight. Schools could see whether a student got something right or wrong. They could not easily see why.

That is where the last three years changed everything. In Neeru’s view, AI is not most interesting as an answer machine.

“Just knowing that the answer is right or wrong doesn’t tell anybody anything.”

It is most useful when it helps surface the learner’s internal process. A question like “How does the sun burn if there’s no oxygen in space?” is not just a cute moment of curiosity – it reveals a specific misunderstanding that a good tutor can work with. This is the logic behind Flexi, CK-12’s AI tutor, which Neeru says has already been used by tens of millions of students and has processed more than 150 million questions. No one needs automation for its own sake. New tools give us the ability to trace learning more deeply than standardized testing ever could.

But there is a lot of fear in the system. Our conversation moves well beyond product and platform design. We talk about why schools often resist free tools, why they now resist AI tools, why teachers need open-mindedness more than technical fluency, why AI literacy may become as foundational as reading and writing, and why creativity, critical thinking, collaboration, communication, and community remain the real non-negotiables. Khosla also makes a broader argument: education is one of the few places where society cannot afford to treat access as optional. If AI is going to matter in the classroom, it has to help more children learn more deeply – not just help the most privileged students move faster. She argues that if the US wants to maintain its lead, it needs to rethink education and how it is delivered.

There is also something personal and unusually grounded in the way she talks about all this. Neeru came to education from molecular biology, then spent years in classrooms watching children learn one by one before building CK-12. That background gives her a practical lens on both technology and human development. She is enthusiastic about AI, but not dazzled by it. Again and again, she returns to the same principle: tools matter, but what matters more is whether they help us show up – for students, for teachers, and for one another.

Neeru believes in two AIs: Augmented Intelligence and Amplified Intention, and I 100 percent agree with her.


As I keep saying, AI literacy is extremely important, and this conversation is a great way to learn how to move from fear to understanding. Please watch it and share it →

YouTube video by Turing Post

AI Could Change Education Forever – Neeru Khosla Explains Why


Edited interview transcript

Ksenia:

Today I’m very honored to speak with Neeru Khosla. She is an entrepreneur and philanthropist known for her work in education and social impact. She is the co-founder of CK-12 Foundation, which provides free, customizable learning materials to students and teachers worldwide.

I’m very passionate about education, especially in this new era of AI, so thank you so much for joining me.

Neeru:

Thank you for having me.

Ksenia:

This podcast is called Inference – because we’re trying to make sense of what’s happening and draw conclusions from it.

Neeru:

Thank you for inviting me to Inference. I love talking to people about what I care deeply about, because education is one of the most important things we can give back to society. We can create all kinds of financial freedom, but that can only happen if we truly help young students understand what they need to know – and how to use that knowledge.

Education in the AI Era

Ksenia:

The last three years have been pretty disruptive for many industries. How has that changed your perspective on education and on the CK-12 Foundation?

Neeru:

Let me go back a little.

When we started CK-12, we were trying to answer a simple question: how might we make education available and meaningful for all children? At the time, textbooks were very one-sided. You got what you got. But true learning only happens when students can make their own connections to what they are learning.

As technology began to evolve in the early 2000s, I knew it was going to go far. I thought this would be one way to reach many more students and help them. So from the beginning, we made CK-12 free.

Technology gave us capabilities that learning really needs – especially multimodality. You cannot always learn effectively just by reading a textbook or just by watching a video. A concept has to be presented in the modality that works best for the learner.

Take physics, for example. If you are learning about electricity, reading in a textbook about how electrons move doesn’t really make it come alive. Now we have simulations. We have ways for students to actually see and interact with what they’re learning.

That was powerful. But even then, we still didn’t really know what was going on in the minds of students. We didn’t know whether they truly understood what we were trying to teach.

It was only with the arrival of AI in the last three years that we realized there was a real opportunity there. It wasn’t perfect – and no technology ever arrives 100 percent ready. Technology matures through iteration, and that requires all of us to participate.

When OpenAI suddenly reached enormous adoption in a very short time, it was clear something major was happening. A lot of people were afraid: students will cheat, machines will replace us, and so on. I was afraid as well. But I also noticed something important: prompting itself is a learning tool.

Ksenia:

Absolutely. It’s the skill of asking questions.

Neeru:

Exactly. That’s why I kept telling people not to be afraid. If students can prompt a machine in a way that gets them where they want to go, they are thinking. They are thinking about concepts, about application, about whether something sounds right or wrong, about whether they want to go deeper.

They may not describe it that way, but they do know when an answer doesn’t sound right. Then they keep prompting. If we can teach children to question – to keep asking why, the five whys – that itself is a huge learning process.

So over the last three years, we built on top of everything we had already done in learning science, concept maps, and knowledge connections. Two and a half years ago, we introduced Flexi, our student tutor. Today we have over 50 million students using it.

Ksenia:

Wow. And how are the results?

Neeru:

We’ve had over 150 million questions asked.

From the beginning, I told the team that we had to categorize and tag the questions students were asking. Many are procedural: help me solve this, define this, help me understand this. Those are fine.

But the questions that really matter are the ones that reveal thinking. For example: How does the sun burn when there’s no oxygen in space? That’s a wonderful question from a young learner. You immediately see the misconception: the child is assuming the sun “burns” through combustion, when in fact it’s nuclear fusion.

Those are the moments where you can really understand what a student is thinking and help them deeply.

That is what we built: a platform that connects concepts and content. Our FlexBooks were always designed to be flexible and personalized – to each learner’s level, language, standards, and classroom needs. But until AI, we couldn’t really see inside the learning process at this level.

And that changes everything. Standardized testing only tells you whether an answer was right or wrong. It doesn’t tell you why a learner got something wrong. Now we can do deep knowledge tracing and understand where the misunderstanding happened.

Do We Need to Rethink Education From the Ground Up?

Ksenia:

I have five children, and they’re still in public school. I can see how early we still are with AI because public schools just can’t catch up that fast. Do we need to rethink the whole education system from the ground up?

Neeru:

When I started, I actually avoided trying to get adoption directly in public schools. Partly because they didn’t want to adopt something free. We are still free, almost 20 years later, and we intend to remain free.

What surprised me early on was that schools, districts, and states had money – but they didn’t want to lose the systems around that money. That’s the legacy system we were dealing with. So yes, politics is part of it, but also incentives.

We took more of a Trojan horse approach. We built adaptive systems very early on so we could tell where students were missing things and support them in ways that were individualized.

But the reality is that schools still define success primarily through standardized testing. That is deeply embedded in the thinking of teachers, schools, districts, and systems.

So we started working not only with teachers but more directly with students as well – because I felt that maybe we had to disrupt the system from the learner side. But changing an entire system is hard. You have to move carefully. You take one step, stabilize, then take the next.

What has changed in the last three years is that AI finally gives us tools that make deeper change more possible. We now have a teacher assistant that shows educators where each student is and how they are likely to progress into the next concept. That kind of support can help create systemic change.

Ksenia:

Do you follow what teachers are asking for? What they want from AI?

Neeru:

Yes, very much. We work with teachers all the time, and we get many requests. But teachers are very different from one another. Some can imagine what’s possible and are ready to move. Others are afraid, tired, or not ready to take risks. So you have to work with many different profiles.

That’s one reason we’ve been around this long. Change in education takes a long-term view.

If you ask me whether the whole system can change, I would say yes – but not overnight. I don’t think it will take another 25 years. I think within five years we’ll start to see deeper systemic change. But we have to change the system.

Ksenia:

At the systemic level.

Neeru:

Yes. There is still a lot of fear. But if people can see that the change is real and not scary, adoption can happen faster.

And of course there will always be bad actors. But that’s true of any technology. When we created content, we had domain experts and human review. We constantly checked quality – what the systems were saying, how students were interacting with them. That quality control still matters. Now automation can help, but responsibility still matters.

What Skills Matter Most Now?

Ksenia:

What skills do you think are essential in this AI world – for teachers and for kids?

Neeru:

Teachers need open minds. They have to be willing to think about what’s possible.

Every time a new tool appears, people panic. When calculators arrived, people said students would never learn math. When the internet arrived, people said students would just cheat. The same story continues.

Of course students still need to know how to think, compute, read, and reason. But some tasks don’t need to be done by hand forever. That doesn’t make the underlying understanding less important – it makes it more important.

Teachers need to understand how these systems work. Right now, many of them don’t. But they are smart. They will learn.

Students, meanwhile, still need literacy and numeracy. They need to write. They need to understand language. They need math. What’s interesting now is that language has become even more powerful. We once thought English majors might not be especially practical. Now prompt quality and language fluency are central to how people use AI.

We also need to help students connect their interests to deeper knowledge. If a child says, “I want to be a stage designer, so I don’t need math or science,” we can show them that they absolutely do – through lighting, structure, area, form, efficiency. Creativity and technical understanding are not opposites.

In the end, students need creativity, critical thinking, collaboration, communication, and community.

What EdTech Often Gets Wrong

Ksenia:

What do you think companies offering AI in education misunderstand about kids and teachers?

Neeru:

One big misconception is the belief that AI is enough by itself – that you can just ask a question and get what you need.

What systems like large language models do is generate the most likely fit probabilistically. But education has non-negotiables. Learning has structure. Development matters. Misconceptions matter. Prior knowledge matters. Current AI systems are not ready to handle all of that on their own.

That’s why I think every vertical needs to be taken seriously on its own terms. Education is not the same as medicine. These are different domains with different responsibilities. You cannot simply take a general model and assume that is enough.

Ksenia:

And when you say verticals, you mean domains like education and medicine?

Neeru:

Exactly. Those are different verticals built on top of these foundational models. And I don’t think we even fully know what AGI means yet, in practical terms.

Kids, Curiosity, and Learning by Doing

Ksenia:

One thing I keep thinking is that AI is becoming as important as reading and writing – a new literacy. And maybe we should also follow kids a little more, because they approach technology with imagination and curiosity.

Neeru:

I saw that even when my own children were in middle school. Kids have always had their own social graph – their own way of learning from each other, teaching each other, figuring out who knows what.

I spent more than 15 years deeply involved in education. Originally I was a molecular biologist doing research at Stanford on oncogenes. Then I became pregnant with my first child and decided I couldn’t stay around radioactivity. Later, after having four children, I started thinking seriously about education.

I found what I thought was the best school for my children – a very child-centered school. I got deeply involved. I spent about ten years in classrooms, watching what teachers and students were doing.

Sometimes I would sit down next to a child who was off in a corner alone and just ask, “What’s going on?” And often that child didn’t need someone to do the work for them. They just needed someone to say: “You’re doing fine. Keep going. Let me help if you’re stuck.”

That, to me, was powerful. I thought: why can’t every child have that?

That was really the genesis of CK-12.

Ksenia:

And what does CK-12 stand for?

Neeru:

K-12 is obvious. But the “C” stands for every child – and also for the core C’s: creativity, critical thinking, collaboration, communication. Those are non-negotiables for deep learning.

Ksenia:

That’s interesting, because those skills were always important – but now they feel even more essential.

Neeru:

Exactly.

Why Free Is Hard in Education

Ksenia:

You said something interesting earlier – that because CK-12 was free, schools had trouble adopting it. That’s such a paradox. Free is so important, but it’s also hard to make work in education.

Neeru:

Yes. Most free products don’t last. We’ve lasted almost 20 years. But any system – nonprofit or for-profit – needs sustainable funding.

That’s why my husband Vinod and I have stood behind this. We fund it because we believe in funding every child.

Ksenia:

And how do children get access to it?

Neeru:

They can just join. It’s open and free. If they’re under 13, they need teacher or parent permission. That’s one thing that breaks my heart a little – because if you’re under 13, there are more barriers to independent learning on platforms like this.

Ksenia:

That is a real barrier. There’s also a big difference between open-source culture in software – where people help each other build things – and education, where openness still feels unusual.

Neeru:

Yes. When we started, there was more conversation around OER – Open Educational Resources. But not many survived.

Wikipedia was there, of course, but it wasn’t foundational for children. Most things online were not at the right level for kids. They didn’t account for prior knowledge, which is a major non-negotiable in learning. You have to figure out what the child already knows. Otherwise, if something is too hard or too easy, they lose interest.

That’s why the cold-start problem matters so much in learning systems.

Philanthropy, Access, and Human Potential

Ksenia:

If we go back to philanthropy, how do we make it more impactful in systems that are so hard to penetrate?

Neeru:

Education is one of those things we simply cannot ignore. It is a human rights issue. If the US – or any country – wants to remain strong, it has to make sure the next generation is equipped.

And I think this can happen fast, because in countries like India and China, children are hungry to learn. In many places, children understand that education is their path forward. If you are born into hardship, your family may not be able to help you study – but we can.

That is how you do it.

Ksenia:

My big hope is that AI can really accelerate that. Do you feel the same way?

Neeru:

Absolutely. AI lets us deeply trace where a child misunderstood something and help them recover faster. The current education system often expects a child to make up years of missing knowledge in one year. That’s nearly impossible without support.

That is why Flexi matters. It is there all the time.

Raising Grounded Children

Ksenia:

If I can ask something a little more personal – you really strike me as a grounded and modest person. With all the resources you have, how do you raise children to stay humble and grateful? What are your core values?

Neeru:

For me, the core values are about caring – for people, for animals, for the world around you. My children have inherited that.

I also don’t think privilege should be something you flaunt. We always encouraged our children not to chase the fanciest places or the most prestigious labels, but to explore. We’d go to places like the Amazon or the Galápagos – not for status, but for discovery.

When they were in middle school, we organized trips with other families and a teacher to South America, where the children lived in villages and helped with practical things – building toilets, helping where help was needed. We wanted them to see how most people actually live.

Those are the values that matter: help where you can, whether through money, mentorship, or simply showing up.

And I often think about how many educated women in the world have skills and talent that go underused. What if more of them helped mentor children around them? What if they became available to teach, guide, and support? That could go a very long way.

Ksenia:

I see some of that in my community too, but we live in a small town. It feels harder in big cities.

Neeru:

Maybe. But schools are still there. Parents can volunteer. In my children’s school, many parents brought their expertise into the classroom and helped teachers. You can do that. Show up.

Ksenia:

So it’s more about showing than telling.

Neeru:

Yes. Show up. All of us have to show up.

Intention, Not Just Attention

Neeru:

One more thing. In machine learning, people say “attention is all you need.” That’s what taught the machines. But for humans – for education, for society – attention is not enough.

We need intention. We need judgment.

Why am I learning this? Why does it matter? It cannot just be “let AI do it.” That’s what I would say.

Hope and Concern in the AI Age

Ksenia:

What is most exciting for you in this AI world right now?

Neeru:

Two things.

First, I’m going to become a grandmother for the first time.

Ksenia:

Congratulations!

Neeru:

Thank you. But I mean something larger too: humanity continues. Even though our time is limited, human beings continue. That matters.

Second, I really believe that if we use AI as augmented intelligence – something that helps us – we can do wonders. Don’t be afraid of it.

That excites me.

Ksenia:

And what are your concerns?

Neeru:

Fear is a real concern. And sometimes fear is grounded in reality. Human beings fight for dominance, for advantage, for shortcuts. There will always be people who try to game the system. We’ve seen that in every technological wave – from crypto scams to identity theft.

Those are real problems. But again, no technology arrives fully ready. Not even a baby comes ready-made. We have to stay involved and keep making systems better.

My concern is when a small number of people capture all the benefits and hoard them. That worries me.

Books That Shaped Neeru

Ksenia:

I always end with a question about books. What’s a book that significantly influenced you – either recently or from childhood?

Neeru:

There are several.

I learned to read English through Dr. Seuss and Archie comics. Later, The Hobbit influenced me a lot. And then Man’s Search for Meaning by Viktor Frankl had a very deep impact on me.

That book, with its reflections on concentration camps and how people kept their sanity in terrible circumstances, really stayed with me. It made me think about how we help children understand that life can be hard – but that people are also deeply adaptable and resilient.

Ksenia:

Yes. People are very adaptable, and we want them to adapt toward something better.

Neeru:

Exactly.

And I also love music. I love Hamilton. That line – I’m not throwing away my shot – really resonates with me. I’m still not throwing away my shot. I’m going to keep taking it.

People say learning is lifelong, and I really believe that. I was around 50 when I went back to college for another degree. I was 52 when I started working on CK-12.

Ksenia:

What’s next for you to learn?

Neeru:

So much. One area I keep thinking about is economics – especially the economics of AI.

I’m not worried that AI will simply eliminate all work. Every major technological shift has opened new kinds of work while removing some forms of labor people no longer need to suffer through. AI will do some of that too. But people will have to reskill.

That’s exactly what we want our children to learn: learn broadly, learn quickly, keep adapting – but truly learn, and apply what you know.

This interview has been edited and condensed for clarity.

Further Reading

cfmlFiddle - Compare ColdFusion, Lucee, and BoxLang Side-by-Side

Mike's Notes

Thank you, James. This will help with code testing.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

23/04/2026

cfmlFiddle - Compare ColdFusion, Lucee, and BoxLang Side-by-Side

By: James Moberg
myCFML: 16/04/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

I built a local CFML playground that runs 10 engines at once

I've been writing CFML for a long time. Long enough to remember when testing something meant editing a file, refreshing the browser, and hoping the server hadn't crashed. We have better tools now, but I kept running into the same problem: I'd want to check how something behaves on CF2021 vs CF2025, or whether Lucee handles a date function differently than Adobe, and there wasn't a good way to do that without maintaining a bunch of separate installs.

The online tools help. CFFiddle.org, TryCF.com, and Try BoxLang are all useful when you need a quick test. But I kept hitting limitations. CFFiddle requires a social login to use some features. TryCF doesn't indicate which patch version it's running against. Neither lets you compare engines side-by-side. And both go offline sometimes, usually right when I need them. :(

So I built cfmlFiddle.

What it is

cfmlFiddle is a self-hosted CFML playground. It runs on your machine through CommandBox. You write code in an Ace Editor, pick an engine (or all of them), click Run, and see the output. If you picked multiple engines, you see all the results stacked, side by side, or tabbed.

cfmlFiddle screenshot

The default config ships with 10 server definitions:

  • Adobe ColdFusion 2016, 2021, 2023, 2025
  • Lucee 5, 6, 7
  • BoxLang (native, Adobe compat, Lucee compat)

You start whichever ones you need. Most of the time I run two or three.

The part I actually wanted

My main gripe with the online tools was never being able to pin a version. If I'm debugging something a client reported on CF2021.0.14, I need to know what CF2021.0.14 does, not whatever patch the hosted service happens to be running this week.

With cfmlFiddle, each engine version is a CommandBox server.json file. You control which version is installed, down to the patch. You can keep CF2021.0.14 around for months if you need it, or install CF2025 the day it drops.
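As an illustration, a minimal server.json pinned to a specific patch might look like this. The `name` and `port` values here are made up; `app.cfengine` is the standard CommandBox key for selecting an engine and version:

```json
{
    "name": "acf2021-fiddle",
    "app": {
        "cfengine": "adobe@2021.0.14"
    },
    "web": {
        "http": {
            "port": 8521
        }
    }
}
```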

The other thing: running code that the hosted services block. File operations, HTTP calls, Java objects, custom tags. cfmlFiddle doesn't restrict anything. It's your machine.

How the comparison works

Click "Run All Online" and cfmlFiddle sends your code to every running engine simultaneously via cfhttp. Each engine executes the same temp file from a shared webroot. The results come back with timing info, the engine name, and the actual patch version, so you can see exactly what ran where.
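The fan-out itself is a standard pattern: fire the same request at every engine in parallel and record each response alongside its timing. Here is a language-agnostic sketch in Python; cfmlFiddle does this in CFML via cfhttp, and the function and parameter names below are illustrative, not its actual API:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def run_on_engines(path, engines, timeout=10.0):
    """Request the same path from every engine at once.

    `engines` maps an engine label to a base URL, e.g.
    {"lucee6": "http://127.0.0.1:8561"}. Returns a list of
    (label, response body, elapsed seconds) tuples.
    """
    def run_one(item):
        name, base = item
        start = time.perf_counter()
        with urlopen(base + path, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", "replace")
        return name, body, time.perf_counter() - start

    # One worker per engine so every request is in flight simultaneously.
    with ThreadPoolExecutor(max_workers=len(engines) or 1) as pool:
        return list(pool.map(run_one, engines.items()))
```

Because each result carries its own elapsed time, slow engines show up immediately when the outputs are laid side by side.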

There's also an "Append" mode. Check the box and each run stacks on top of the previous results, so you can tweak your code and compare iterations without losing the earlier output.

Interactive mode

I wrote a test script with a <form> and immediately realized the form couldn't post back to itself. The result was static HTML in a div. The form action had nowhere to go.

cfmlFiddle now auto-detects forms in your output. When it finds one (or you check the Interactive box), it renders the result in a sandboxed iframe pointing directly at the payload file on the target engine. The form posts back to itself, processes the data, and returns the result. Multi-step scripts just work.

Under the hood

The status bar uses Server-Sent Events instead of polling. The heartbeat checks all engines with raw TCP socket connections (~50ms total for 10 servers) and streams updates to the browser in real time. It falls back to polling if SSE doesn't work on a particular engine.
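A raw-socket liveness check like that heartbeat is cheap to sketch. Assuming a plain TCP connect is enough to count an engine as "up" (the function name and timeout are illustrative, not cfmlFiddle's code):

```python
import socket

def engine_up(host, port, timeout=0.05):
    """Return True if something is accepting TCP connections on host:port.

    A bare connect is far cheaper than a full HTTP request, which is
    why ~10 local servers can be swept in tens of milliseconds.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```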

Config lives in a JSON file above the webroot. You can change settings without editing CFML code.

All the frontend libraries (Ace, jQuery, SweetAlert2, jQuery contextMenu) ship locally in an assets/vendor/ directory. No CDN dependency by default, though you can flip a config switch if you prefer CDN.

Server management is built into the UI. Click the status bar to start, stop, or inspect engines. Left-click any server for a context menu with direct links to its admin panel, homepage, and documentation.

Session management

Every time you run code, cfmlFiddle saves the payload file with a timestamp. Click the Session button in the toolbar to see a list of everything you've run. Click any entry to reload it into the editor. When you're done, Archive All zips everything up and clears the working directory.

I kept losing track of what I'd tested ten minutes ago. Now I just open the session list and pick it.

The smaller stuff

There's a light/dark theme toggle. It picks up your OS preference by default, and the Ace editor switches to match. I bounce between light and dark depending on the time of day, so this was mostly for me.

You can import code from a GitHub Gist URL. Paste the link, it pulls the first file and drops it in the editor. Useful when someone shares a snippet and you want to see what it does on three engines before replying.

Snippets work the other direction too. Save whatever's in the editor as a named file, reload it later from the dropdown.

Each result card has a refresh button that re-executes and updates the timing, plus a dismiss button to toss results you don't need. Small thing, but it adds up when you're iterating.

We also put some work into keyboard accessibility: skip link, visible focus indicators, arrow keys on the splitter, ARIA roles on the toolbar and status bar.

Getting it

cfmlFiddle is open source under the MIT license.

Website: cfmlFiddle.com Source: GitHub

You need CommandBox installed. Clone the repo, edit config.json with your box.exe path, run box task run launchCFMLFiddle, and pick an engine. It opens in your browser.

cfmlFiddle is a myCFML.com project, sponsored by SunStar Media.