Ajabbi slide talks

Mike's Notes

I will have to give many talks in future, so here is a rough plan building on some earlier notes from a month ago. Feedback on it is welcome.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

09/03/2026

Ajabbi slide talks

By: Mike Peters
On a Sandy Beach: 09/03/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Background

I previously gave hundreds of slide talks to conferences, workshops, and public meetings in NZ. I enjoyed explaining complex things so that anyone could understand. Everything from 30 seconds to 2½ hours. But that was years ago.

As Pipi continues to develop and international interest grows, I will have to give many talks to different audiences in the future. 

The need is now becoming urgent, with requests and opportunities arriving.

What works

I'm highly visual and prefer to talk about something I can see, like a map, chart, video, gadget or drawing, rather than read out a speech. So I am starting at the beginning.

What is needed

Audiences

  • Software developers
  • Scientists
  • Pitching to a panel
  • Training users
  • Radio or TV interviews
  • Contractors
  • Volunteers and staff
  • Open office-hours
  • etc

Format

  • In-person
  • Remote
  • Hybrid

Delivery

  • Live
  • Pre-recorded

Modular

  • Short & consice
  • Combined for longer talks.

What will happen

This is what I thought might work.

  • Design a structured outline of topics to cover.
  • Design a uniform slide format.
  • Practice delivering some to the fortnightly research group I attend online and other groups I am a member of.
  • Start rapidly manufacturing slide talks based on the priority of need.
  • Reuse the notes and images from the 650 posts on this blog.
  • Create simple 1-page A4 summary notes for print, handouts, event notices and email invitations.
  • Record the talks with slides in advance and upload them to YouTube as a library.
  • Pre-record videos of live demos of using Pipi and upload them to YouTube as a library.
  • Save and share the slides via links as PowerPoint, Google Slides, or Adobe PDF, and on SlideShare.

Content reuse

The same material can be reused in many formats.

  • Developer Docs
  • User learning (Diataxis)
  • In-context workspace help
  • Newsletters
  • This blog
  • Whitepapers

Topic outline

This is a brain-dump draft and is subject to many changes because Pipi is a very large enterprise system that must be configured by users before it can be put into production. Suggestions are welcome.

Slide talks

  • Ajabbi
    • Intro
    • Who am I
    • Origin
    • Mission & values (why)
    • Organisational form
      • Ajabbi
        • Role and purpose
        • SaaS
        • Profit distribution
        • From customers to community-driven
        • Bootstrap Startup
        • Scaling
      • Ajabbi Research
        • Role and purpose
        • Income
        • Open handbook
          • Team culture
        • Open Research
          • Library
          • Projects
          • Publications
            • Newsletter
            • Friday Report
            • Seminars
            • arXiv
        • Open-source
          • GitLab
          • GitHub
        • Future Collaborations
          • Unicode
          • OMG
          • W3C WCAG
          • CNCF
      • Ajabbi Foundation
        • Role and purpose
        • Income
        • Open handbook
          • Team culture
        • Future support
          • Ortus
          • SIL KeyMan
          • User groups
          • Books
  • Pipi
    • Intro
    • Origin
    • History
      • Pipi 1-5 (1997-2008)
      • Pipi 6 (2017-2019)
      • Pipi 7 (2020)
      • Pipi 8 (2021-2022)
      • Pipi 9 (2023-2026)
    • Closed core & open-source
    • Closed-core
      • Roadmap
        • Pipi 9 (2023-2026)
        • Pipi 10 (2027-2029)
        • Pipi 11 (2030-)
    • Open-source
      • Roadmap
        • Pipi 9 (2023-2026)
        • Pipi 10 (2027-2029)
        • Pipi 11 (2030-)
    • Many parts
      • Gödel engine
      • Workspaces
        • Industries
          • Agriculture
          • Arts & Culture
            • GLAM
            • Screen
          • Built Infrastructure
          • Civil Defense
          • Data Centre
          • DevOps
          • Health
          • Nature Conservation
          • Transport
            • Aviation
            • Ports
            • Rail
          • Research
      • Modules
      • Ontologies
      • Workflows and state
      • Digital Twin
      • IaC
      • CMS
        • 101 Introduction
        • 102 Content Management System
        • 103 Publication
        • 104 Website
        • 105 Blog
        • 106 Wiki
        • 107 Docs
        • 108 Help
        • 109 Workspace
      • Agents & engines
        • How to use the engines
          • DevOps
          • Security
          • etc

Pre-recorded live demos

  • Pipi
    • Sign up
    • Username, password, and 2-factor authentication
    • Create a profile
      • Accessibility
      • Language
    • Choosing an account type
    • Personal Account
      • Account settings
        • Languages
        • Accessibility
        • Support
        • Training
    • Enterprise account
      • Account settings
        • Default Language
        • Billing
        • Support
        • Training
      • Creating a deployment
        • Language
        • SaaS configuration
      • Creating a workspace
      • Adding modules
      • Adding Plugins
      • Roles and Permissions
      • Adding Users
    • Developer account
      • Account settings
        • Default Language
        • Billing
        • Support
        • Training
      • Creating a deployment
        • Language
      • Creating a workspace
      • Adding modules
      • Adding Plugins
      • Roles and Permissions
      • Adding Users
    • Researcher account
      • Account settings
        • Default Language
        • Billing
        • Support
        • Training
      • Creating a workspace
      • Adding modules
      • Adding Plugins
      • Roles and Permissions
      • Adding Users
    • SME account
      • Account settings
        • Default Language
        • Billing
        • Support
        • Training
      • Choosing a module
      • Roles and Permissions
      • Adding Users

About time

Mike's Notes

If we are lucky, we get 4,000 weeks to live, love and contribute.

Make every week count.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

08/03/2026

About time

By: Chris Loy
Chris Loy: 27/12/2024

Hey, I'm Chris. Nice to meet you!

I spend my professional life writing code, training machine learning models, building software products and running tech teams. I'm a startup founder with a successful exit, and in my career I've been CTO, a developer and a data scientist. I've worked at companies big and small, mentored junior engineers and data scientists from across Europe, and served as an advisor for some companies. Throughout my career I have been lucky enough to work with and hire some amazingly smart people.

This is my website, where I write about tech, data and AI, as well as some of my other interests and hobbies. Those include music and photography, and I also use this site to document the books I read.

As I write this, I’m about 7 months away from my 40th birthday. That milestone, numerologically important to inhabitants of our particular planet who happened to evolve 10 fingers upon which to count, will come a little past the halfway point of the average life expectancy of males from my country of birth.

We are a couple of days away from the start of 2025, and it seems like a suitable moment to reflect on the passage of life.

4,000 weeks

Back in 2022, I read Oliver Burkeman’s thought-provoking book Four Thousand Weeks, named for the length of time that more or less corresponds with the average human lifetime. A worldview that acknowledges this surprisingly brief existence can be traced back to Roman stoicism - notably Seneca and Marcus Aurelius.

Fans of thinking about their own inevitable death sometimes create diagrams like this one:

My 4,000 weeks

This chart displays my life from birth, through to an expected death around 80 years old, with a few assumptions such as retirement at age 65. Each square is a week, each row is a year, and I have coloured it to indicate the major time demarcations of my life thus far. The current calendar year of 2024 is marked by a border, and as you can see lies around the midpoint.

Time moves fast and slow

The thin yellow sliver at row 31 represents my time studying Machine Learning at UCL. This period looks tiny on my diagram, but it looms large in my memory. In that year I switched careers, I studied a new field, and I found a new passion that led me into the world of AI. I also met my wife, and the co-founders of Datasine - my first startup that stands out in blue on the above chart. That little yellow line was a pivotal and momentous year.

In comparison, the first green slice (my time working for tech companies as an engineer) is more than 8 times the length, but now feels far less substantial in how I think of myself as a person.

The lesson is that 4,000 weeks can be a long time, or can be a short time. It is the meaning ascribed to each week that dictates the importance - not the length.

Age is biological

In recent years, I have become increasingly aware of the limitations placed upon us by biology. As my peers and I face the aging of our own parents, our growing awareness that we are not as young and fit as we used to be, and the shock of sudden severe illness and even death among our friends and colleagues, we start to think more seriously about our future health.

In the upper part of the diagram, there are a handful of hospital visits - a couple of broken bones and sprained ankles, an emergency appendectomy near the end of row 24. Several rows in the school years are marked by medications for lung issues that variously pushed my weight up and down, affected my skin, and so on.

Despite this smattering of issues here and there, the reality is that 90% of my health problems are ahead of me. If I want to live my fullest life through the bottom half of this diagram, I need to start thinking more seriously about my physical health and wellbeing, and preserving the state that I am in as long as possible.

Life happens

I started writing this blog some way into row 29. It isn’t the first website I’ve kept, but I remember deciding to break with a past that had included vague aspirations towards journalism, creative writing and other pursuits. “I’ll write a tech blog!”, I thought, and so I did.

But over time, all that other messy stuff - life - kept creeping back in: sporadically voracious reading habits to be documented, music to be made in the dead of night and leaked out into dark corners of the internet, cryptic crosswords to nerd over, and much more besides.

This is part of a wider pattern across my life. As I have got older, I have also got fuzzier. I can’t think of a better way of describing it. I have let the partitioned parts of my life blend and overlap in a way that when younger I would have found frankly embarrassing. Is this what acceptance is - a letting go of your ability to control the separation between the parts of your life that you try to optimise separately?

Even the choice of colour coding above feels increasingly arbitrary and artificial. Why demarcate a life through jobs and education? Why not relationships, or books read, or recipes learned, or jokes told?

At the top of the hill

I really don’t like the phrase “over the hill”. It is usually quite disparaging, implying a decline that has set in to the point at which a person’s actions are of little importance.

Nonetheless, I find myself feeling the urge to acknowledge that I am likely currently standing at the top of the hill - in the best possible position dispassionately to survey all the terrain I have covered so far, and to look ahead and see where the road might lead as I start my descent down the second half of my 4,000 weeks.

I believe that this is an urge I should resist. Does the second half of my life just involve winding my way down into the valley, trying not to stumble, tipping my cap to others travelling past me on their way to the summit? I certainly hope not.

My belief is that the second half of that diagram contains greater summits to scale. To reach them, and make the most of my remaining 2,000 weeks, I will need to be strong of body and of mind, more than a little bit lucky, and walking with my eyes fixed on the horizon.

SIL KeyMan

Mike's Notes

Recently, I received an email from SIL KeyMan requesting donations to support their excellent, free, open-source, multilingual keyboards. They had lost a major donor whose circumstances had changed. This article is about what followed on from that message.

Where I stand

"Every child has the right to be educated in the language of their people and of their birth. This (issue) is dedicated to those working tirelessly to record, strengthen or revive human languages." - The i18n Issue, Ajabbi Research.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > KeyMan Newsletter
  • Home > Handbook > 

Last Updated

07/03/2026

SIL KeyMan

By: Mike Peters
On a Sandy Beach: 07/03/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Background

I had long planned to use KeyMan with Pipi to enable people to use their choice of written language.

Tibetan Keyboard

After receiving an email from SIL Keyman requesting donations, I sent a message offering future donations from the yet-to-be-established Ajabbi Foundation. Ajabbi is a community-driven, bootstrapped, pre-revenue startup that solves some very big, expensive problems with Pipi. With no investors needed, the surplus income will go to a future foundation to support open-source, books, user groups, conferences, science, etc.

They replied via a developer in Australia who contacted me.

I then spent a bit of time reading their blog, learning about their 30-year effort, watching a seminar, and looking at their GitHub repository. They are very humble and very impressive.

Thinking more about it, they need more developers in their team working on this important project for humanity.

Future long-term sponsorship priorities of open-source

  1. Support Ortus in providing free CommandBox, BoxLang, etc., and outsource all related remote development to them.
  2. Support SIL KeyMan to provide free KeyMan access and ensure they can meet demand, scale, and focus on supporting KeyMan rather than raising funds. Everyone has the right to use their own language.

Memberships

Another option could be for Ajabbi Research to join Unicode as a paid member and contribute.

Pipi Keyboard Engine (kyb)

A new Pipi engine will be created to handle keyboards.

Pipi 10 (2027-)

The Pipi 10 roadmap includes support for multiple languages and scripts across all workspace User Interface (UI) components. The hundreds of agent engines, their internal databases and namespaces already support this.

One of the primary tools will be to embed a language-specific keyboard using KeyMan Engine for Web in the HTML of the workspace UI for individual users. This will be available to all users using a simple profile form.

Code Example


<script src='https://s.keyman.com/kmw/engine/18.0.246/keymanweb.js'></script>
<script src='https://s.keyman.com/kmw/engine/18.0.246/kmwuitoggle.js'></script>
<script>
  (function() {
    keyman.init({attachType:'auto'});
    keyman.addKeyboards('@en'); // Loads default English keyboard from Keyman Cloud (CDN)
    keyman.addKeyboards('@th'); // Loads default Thai keyboard from Keyman Cloud (CDN)
  })();
</script>
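
As a sketch of how the profile form mentioned above might drive this, the keyboard list could be loaded per user instead of being hard-coded. Only keyman.init and keyman.addKeyboards come from the KeymanWeb example; the /api/profile/keyboards endpoint and the loadUserKeyboards helper are hypothetical.

<script>
  // Hypothetical sketch: read the keyboards chosen in a user's profile form and
  // register them with KeymanWeb (assumes keymanweb.js is loaded and initialised
  // as in the example above). The profile endpoint and the response shape
  // (an array such as ["@en", "@th", "@bo"]) are assumptions, not a Pipi API.
  async function loadUserKeyboards() {
    const response = await fetch('/api/profile/keyboards'); // assumed Pipi endpoint
    const codes = await response.json();
    codes.forEach(function (code) {
      keyman.addKeyboards(code); // load each keyboard from the Keyman Cloud CDN
    });
  }
  loadUserKeyboards();
</script>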

KeyMan currently supports 1550 keyboards and 2650 languages. It uses CLDR/LDML to configure the keyboards. This is wonderful news and will make it very easy to integrate with Pipi 10.

There is a great Cloud Services API which will assist automation.
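
As a hedged sketch of what that automation might look like, the snippet below fetches keyboard metadata before offering a keyboard in a profile form. The endpoint path, the keyboard id, and the response fields are assumptions to be checked against the official Cloud Services API documentation.

// Illustrative only: query the Keyman Cloud Services API for keyboard metadata.
// The route and response shape below are assumptions; consult the official
// API documentation for the real endpoints before relying on this.
async function getKeyboardInfo(keyboardId) {
  const url = `https://api.keyman.com/keyboard/${keyboardId}`; // assumed endpoint
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Keyman Cloud API request failed: ${response.status}`);
  }
  return response.json(); // e.g. supported languages, version, author
}

// Example: look up a keyboard id (placeholder) before adding it to a user's profile.
getKeyboardInfo('example_keyboard_id').then((info) => console.log(info));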

Translation

The keyboards will assist with volunteer user input to translate the Pipi User Interface into many languages, much like OpenOffice and Wikipedia/MediaWiki.

Codes

Since Pipi 6 (2017-2019), standard codes have been used internally to make internationalisation possible for potentially every language.

ISO 639-3

ISO 639 gives comprehensive provisions for the identification and assignment of language identifiers to individual languages, and for the creation of new language code elements or for the modification of existing ones (Terms of Reference of the ISO639/MA). - ISO 639-3

*** 

It defines three-letter codes for identifying languages. The standard was published by the International Organisation for Standardisation (ISO) on 1 February 2007. As of 2023, this edition of the standard has been officially withdrawn and replaced by ISO 639:2023.

ISO 639-3 extends the ISO 639-2 alpha-3 codes with an aim to cover all known natural languages. The extended language coverage was based primarily on the language codes used in the Ethnologue (volumes 10–14) published by SIL International, which is now the registration authority for ISO 639-3. It provides an enumeration of languages as complete as possible, including living and extinct, ancient and constructed, major and minor, written and unwritten. However, it does not include reconstructed languages such as Proto-Indo-European.

ISO 639-3 is intended for use as metadata codes in a wide range of applications. It is widely used in computer and information systems, such as the Internet, in which many languages need to be supported. In archives and other information storage, it is used in cataloging systems, indicating what language a resource is in or about. The codes are also frequently used in the linguistic literature and elsewhere to compensate for the fact that language names may be obscure or ambiguous. - Wikipedia

Examples

  • eng (English)
  • fra (French)

ISO_3166-1_alpha-3

ISO 3166-1 alpha-3 codes are three-letter country codes defined in ISO 3166-1, part of the ISO 3166 standard published by the International Organization for Standardization (ISO), to represent countries, dependent territories, and special areas of geographical interest. They allow a better visual association between the codes and the country names than the two-letter alpha-2 codes (the third set of codes is numeric and hence offers no visual association). They were first included as part of the ISO 3166 standard in its first edition in 1974. - Wikipedia

Examples

  • ABW  (Aruba)
  • AFG  (Afghanistan)
  • AGO  (Angola)

Unicode

Unicode (also known as The Unicode Standard and TUS) is a character encoding standard maintained by the Unicode Consortium designed to support the use of text in all of the world's writing systems that can be digitized. Version 17.0 defines 159,801 characters and 172 scripts used in various ordinary, literary, academic and technical contexts. - Wikipedia

Examples

  • Latn (Latin)
  • Linb (Linear B)
  • Hebr (Hebrew)

CLDR/LDML

The Common Locale Data Repository (CLDR) is a project of the Unicode Consortium to provide locale data in XML format for use in computer applications. CLDR contains locale-specific information that an operating system will typically provide to applications. CLDR is written in the Locale Data Markup Language (LDML). - Wikipedia

Example

<?xml version="1.0" encoding="UTF-8" ?>
<ldml>
  <identity>
    <version number="1.1">ldml version 1.1</version>
    <generation date="2024-03-06"/>
    <language type="en"/>
    <territory type="US"/>
  </identity>
  <!-- other locale data sections follow -->
</ldml>

Localisation (L10N)

Language localisation (or language localization) is the process of adapting a product's translation to a specific country or region. It is the second phase of a larger process of product translation and cultural adaptation (for specific countries, regions, cultures or groups) to account for differences in distinct markets, a process known as internationalisation and localisation. - Wikipedia

***

Pipi automatically stores and uses 3-letter language codes, 4-letter Unicode script codes, and 3-letter country codes internally to define locales. Other code formats are also stored to enable interoperability. Many languages can be written in several scripts.

Examples

  • eng-Latn-NZL (New Zealand English)
  • eng-Latn-USA (United States English)

Customers can configure the options for their own websites.

Examples

  • en-NZ
  • en-uk
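
Putting the two sets of examples together, here is an illustrative sketch (the helper names and the small lookup tables are assumptions, not Pipi's actual schema) of how an internal locale identifier could be composed from these codes and mapped to a shorter customer-facing tag:

// Illustrative only: compose an internal Pipi-style locale from ISO 639-3,
// Unicode script, and ISO 3166-1 alpha-3 codes, then map it to a short tag.
// The lookup tables cover just the examples above and are not exhaustive.
const LANG_ALPHA2 = { eng: 'en', fra: 'fr' };     // ISO 639-3 -> ISO 639-1
const COUNTRY_ALPHA2 = { NZL: 'NZ', USA: 'US' };  // ISO 3166-1 alpha-3 -> alpha-2

// Internal form: language-Script-COUNTRY, e.g. "eng-Latn-NZL"
function internalLocale(language, script, country) {
  return `${language}-${script}-${country}`;
}

// Customer-facing form, e.g. "en-NZ" (the script is dropped when it is the default)
function customerTag(language, country) {
  return `${LANG_ALPHA2[language]}-${COUNTRY_ALPHA2[country]}`;
}

console.log(internalLocale('eng', 'Latn', 'NZL')); // "eng-Latn-NZL"
console.log(customerTag('eng', 'NZL'));            // "en-NZ"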

The Accessibility Issue

Mike's Notes

This is a copy of the February issue of Ajabbi Research.

It is about the history of the effort to create a fully accessible User Interface (UI) for Pipi.

Ajabbi Research is published on SubStack on the first Friday of each month, and subscriptions are free.

Each issue is a broad historical overview of a research topic, serving as an index to dozens of previously posted related articles. There are now 647 articles/posts.

This copy of the issue will be updated with additional information as it becomes available. Check the Last Updated date given below.

Eventually, each issue will be reused on the separate Ajabbi Research website as an introduction to a research area comprising multiple research projects.

Resources

References

  • Web Accessibility Initiative - Accessible Rich Internet Applications (WAI-ARIA) 1.2, W3C, 6 June 2023.
  • Web Content Accessibility Guidelines (WCAG) 2.2, W3C, 12 December 2024.
  • GOV.UK Design System.

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

6/03/2026

The Accessibility Issue

By: Mike Peters
Ajabbi Research: 6/02/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

This is the story of the effort to make Pipi fully accessible to all who need it. The steps taken have been part of Pipi's development since 2005, spanning 5 versions.

The NZERN Pipi 2003-2005 Development Plan started it all.

Pipi 4 (2005-2008)

The story starts with Pipi 4. It was a big, successful system that supported community-driven Ecological Restoration in NZ. Here is a history of that Pipi version.

During that time, much of New Zealand (NZ) was on dial-up modems. Many NZERN members were working farmers in rural areas, with very slow internet. Many were older, with low computer literacy. This was a major factor in determining what was possible. Web page sizes had to be kept under 16 KB.

Recently, an archive of Pipi 4 help documentation was discovered and is now available for viewing. It is incomplete, but it gives an idea of how it worked.

Here is the description taken from the Pipi 4 Help archive. The Screen Reader Edition was designed for the blind members of NZERN by their family members.

PIPI4 is available in four editions to meet the needs of different groups of NZERN members.

Basic Edition

    • Designed for the novice computer user who wants a simple cut-down system, with instructions built into every step.
    • Skill level required: Capable of using a simple program like Outlook Express
    • Availability: All members of NZERN

Standard Edition

    • Designed for the confident computer user who wants a system with help available with one click. The user can ask questions and report bugs to the help desk.
    • Skill level required: Capable of using a program like Microsoft Word
    • Availability: All members of NZERN

Screen Reader Edition

    • Designed for the confident computer user who uses a screen reader and wants help available with one click. The user can ask questions and report bugs to the help desk.
    • Skill level required: Capable of using a program like Microsoft Word
    • Availability: All members of NZERN

Professional Edition

    • Designed for the expert computer user capable of self-learning, who wants a fast system with detailed technical documentation available with one click. The user will provide support to other members as a member of the help desk.
    • Skill level required: Capable of using an advanced program like Adobe Photoshop
    • Availability: All members of the help desk

    Pipi 6 (2017-2019)

    When Pipi was rebuilt from memory, some work was done to prepare for a more modular, standardised model-driven User Interface (UI) approach. Metadata was added to every database table to enable future personalisation and meet accessibility requirements.

    Pipi 7 (2020)

    Small, simple, static HTML mockups of workspaces were created as experiments.

    Pipi 8 (2021-2022)

    System-wide namespaces were implemented to enable future complex automated interactions.

    Pipi 9 (2023-2026)

    In 2023, a year-long investigation into model-driven interfaces led to the reuse and hacking of several abandoned EU research efforts in Human-Computer Interaction (HCI).

    Putting a User Interface (UI) on the front end of Pipi 9 was challenging. It had to be:

    • Model-driven
    • Adaptive to the users' devices
    • Automated
    • Able to meet the individual needs of each logged-on user

    Resources used

    1. OMG Interaction Flow Modelling Language (IFML)
    2. The CAMELEON Reference Framework (CRF)
    3. User Interface Description Language (UIDL)
    4. W3C Model-based UI Incubator

      Model-driven UI

      The User Interface Description Language (UIDL) was an EU-funded project that was abandoned in 2010 after 10 years of excellent work. Its goal was to enable accessibility across different screens and devices. The research results were reverse-engineered to build a User Interface Engine (usi) that runs in reverse to generate accessibility solutions for Pipi. The CSS Engine (css) replaced some redundant components of the UIDL project. Additional engines for localisation and personalisation were created.
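
      As an illustrative sketch of the model-driven idea (this is not the actual usi engine; the model shape and function names are assumptions), an abstract UI model can be rendered into concrete, accessible HTML, so the same model can target different devices, languages, and assistive technologies:

      // Illustrative only: render a small abstract UI model into accessible HTML.
      const model = {
        type: 'form',
        label: 'Profile',
        fields: [
          { id: 'language', label: 'Language', widget: 'select', options: ['en-NZ', 'mi-NZ'] },
        ],
      };

      function renderField(field) {
        const options = field.options
          .map((value) => `<option value="${value}">${value}</option>`)
          .join('');
        return `<label for="${field.id}">${field.label}</label>` +
               `<select id="${field.id}" name="${field.id}">${options}</select>`;
      }

      function renderForm(form) {
        return `<form aria-label="${form.label}">${form.fields.map(renderField).join('')}</form>`;
      }

      console.log(renderForm(model));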


      UK Design System

      The UK Government has created a design system for building accessible websites. It includes many templates, components, tools, code, and guidance on achieving this. An excellent resource.

      These templates are being used for the Pipi-generated workspace User Interface (UI).

      Example


      A teaching customer

      Mr G, my coach from Startup Aotearoa, suggested finding a first customer who could be a teaching customer. What a good idea.

      As it happens, a new disability rights organisation emerged in response to funding cuts by the NZ government. The group led a national campaign that resulted in the Minister responsible for the cuts losing her job and the scale of the cuts being reduced. The group had no money and needed a large campaign website that was highly accessible for deaf and blind people. Helping lead this campaign and building the website were deep learning experiences for me.

      Pipi CMS Engine (cms)

      A decision was made early on to autogenerate a separate website for each language (English, Māori, NZ Sign Language, and AAC picture language). This was the simplest solution for the CMS and the users.

      Creating UI for each natural language, including sign languages (i18n), requires user requests and volunteer testers.

      The CMS uses a template engine that builds web pages from reusable components. This means that getting CSS correct only needs to be done once for each component, and so on.
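
      As a minimal sketch of the reusable-component idea (not the actual Pipi CMS engine; the component and its markup are illustrative), a component is written, and its CSS and accessibility checked, once, then reused on every generated page and in every language edition:

      // Illustrative only: a tiny reusable navigation component.
      function navLink({ href, label, lang }) {
        // Semantic, language-tagged markup so assistive technology announces it correctly.
        return `<a class="nav-link" lang="${lang}" href="${href}">${label}</a>`;
      }

      function nav(links) {
        return `<nav aria-label="Main">${links.map(navLink).join('')}</nav>`;
      }

      // The same component serves the English and Māori editions of a page.
      console.log(nav([{ href: '/about', label: 'About us', lang: 'en' }]));
      console.log(nav([{ href: '/mo-matou', label: 'Mō mātou', lang: 'mi' }]));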

      Sign Language

      A scheme was devised to embed NZ Relay Video Interpreting on any webpage. This is an ongoing experiment, driven by deaf people.

      Blind and Low Vision

      This didn't get far because blind volunteers in NZ were needed to help with testing. However, there are volunteers in other countries. This job depends on ongoing work on the CSS Engine (css). A particular challenge is catering for braille displays.

      Picture Language

      Professor Stephen Hawking used AAC via a computer-generated voice. There are many forms of AAC, including picture language. Providing this as a UI is being explored, with other AAC forms to follow. This is important for the millions of people with Cerebral Palsy and Motor Neurone Disease.

      Workspace personalisation

      The workspace settings will offer complete personalisation of the UI. Similar in purpose to the wonderful AccessWidget from Accessibe.com. Instead of a pop-up, it will use a personalisation form in account settings.

      Keeping it simple

      On paper, WAI-ARIA looks great. However, the reality is that the growing interactive complexity of websites and differences in how browsers work mean that web pages break for people using assistive technology.

      Modern websites present a large attack surface, making them vulnerable. Pipi's solution is to focus on functionality, usability, and maximum simplicity: small page sizes (KB), semantic structure, and minimal use of JavaScript.

      Standards

      The W3C Web Content Accessibility Guidelines (WCAG) set the international standard for providing full accessibility. Pipi will endeavour to meet WCAG 2.2 as a new conformance target for all languages.

      What's next

      Designing the workspaces has made accessibility through personalisation a top requirement. The ongoing 2026 workspace rollout will include further accessibility testing.

      The most useful and inspiring resource has been Smashing Magazine's weekly newsletter, which often covers accessible UI design in great depth.

      Dedication

      To those who have fought all their lives for a world where people with disabilities have equal rights and can participate fully, without barriers, to the extent they are able.

      I struggled to code with AI until I learned this workflow

      Mike's Notes

      A case for CodeRabbit.

      Resources

      References

      • Reference

      Repository

      • Home > Ajabbi Research > Library > Subscriptions > System Design Newsletter
      • Home > Handbook > 

      Last Updated

      05/03/2026

      I struggled to code with AI until I learned this workflow

      By: Neo Kim and Louis-François Bouchard
      System Design Newsletter: 02/02/2026

      Louis-François Bouchard: Making AI accessible. What's AI on YouTube. Co-founder at Towards AI. Ex-PhD student.

      Everyone talks about using AI to write code like it’s a vending machine:

      “Paste your problem, get a working solution.”

      The first time I tried it, I learned the hard way that this is not how it works in real projects…

      The model would confidently suggest code that called functions that didn’t exist, assumed libraries we weren’t using, or skipped constraints that felt obvious to me¹. The output looked polished.

      The moment I ran it… It fell apart.

      After enough trial and error, I stopped trying to “prompt better” and started working differently. What finally made AI useful wasn’t a magic tool or a clever prompt. It was a simple loop that kept the model on a short leash and kept me in the driver’s seat.

      This newsletter breaks that loop down step by step.

      It’s written for software engineers who are new to AI coding tools and want a practical starting point: not a tour of every product on the market, but a repeatable method you can use tomorrow.

      The core idea is simple:

      AI works best as an iterative loop, not a one-shot request. You steer. The model fills in the gaps. And because it does less guessing, you spend less time cleaning up confident mistakes.

      Onward.

      I want to introduce Louis-François Bouchard as a guest author.

      He focuses on making AI more accessible by helping people learn practical AI skills for the industry alongside 500k+ fellow learners.

      TL;DR

      If you’re new to using AI for coding, this is the set of habits that prevents most pain.

      • Treat AI output like a draft, not an answer. Models can sound certain while being completely wrong, so anything that matters still gets reviewed and verified.
      • Start with context, the way you’d brief a teammate. If you don’t share constraints, library versions, project rules, and intended behavior, the model will ‘happily’ invent them for you.
      • Ask for a plan before you ask for code. Plans are cheap to change. Code is expensive to unwind. I’ll usually approve the approach first, then ask for small, step-by-step changes.
      • Use reviews and tests as a safety net. I still do a normal pull request² review and rely on tests to verify behavior and catch edge cases³.

      Quick Glossary

      Before we dive in, here’s the small vocabulary I’ll use throughout.

      It’s not exhaustive; it’s just enough to keep the rest of the article readable:

      • AI Editor (e.g., Cursor, VS Code + GitHub Copilot) is a code editor with AI built in. It can suggest completions, refactor functions⁴, and generate code using your project files as context.
      • Chat model (e.g., ChatGPT, Claude, or Gemini) is a conversational AI you interact with in plain language. It’s useful when you’re still figuring out what to do: brainstorming approaches, explaining an error, comparing trade-offs, or sanity-checking a design before you write code.
      • AI code review tools (e.g., CodeRabbit) automatically review pull requests using AI, posting summaries and line-by-line suggestions.
      • Search assistant (e.g., Perplexity) combines chat with web search. It’s what you reach for when you need to verify that a suggested API call is real, that a library feature exists in the version you’re using, or that you’re not about to copy-paste something that expired two releases ago.

      The Mental Model

      Before the workflow, it helps to be honest about what AI coding assistants are and aren’t.

      They’re fantastic when the problem is well-scoped and sitting right in front of them. They’re unreliable the moment you assume they “know” what you didn’t explicitly provide. The workflow is basically a way to stay in the first zone and avoid the second.

      When you give clear requirements, AI is great at drafting functions, refactoring code, scaffolding tests, and talking through error messages.

      But it has a hard boundary: it only knows what it can see in the current context. It doesn’t remember your last chat; it doesn’t know your architecture or conventions, and it won’t reliably warn you when it’s guessing. It just keeps going confidently.

      I’ve seen AI call library functions that don’t exist, use syntax from the wrong version, and ignore constraints I assumed were obvious. The pattern was always the same: the AI didn’t know what I hadn’t told it, so it filled the gaps by inventing something plausible.

      Once I understood this, three principles shaped how I work:

      1. Give more context than you think you need. Just like I’d brief a colleague who just joined the project, I brief the AI every time. If I don’t share the details, it invents them.
      2. Guide it with specific steps. AI struggles with “build me a web app,” but does well with “add input validation for these fields, return a clear error message, and write a test that proves invalid input is rejected.” The more specific my request, the better the output.
      3. If it matters, verify it. Whenever the AI produces security-sensitive logic, a database migration, or an algorithm that must be correct, I review it myself and add tests that prove the behavior.

      A good way to hold all of this in your head is:

      AI is a smart teammate who joined your project five minutes ago.

      They can write quickly, but they don’t know your architecture, your conventions, or your constraints unless you tell them.

      That’s why the mistakes look so predictable: the model isn’t “being dumb,” it’s filling in gaps you didn’t realise you left open.

      Once I started seeing it that way, the fix wasn’t a better one-shot prompt⁵.

      It was a repeatable loop that forced me to brief the model, force clarity early, and keep changes small enough to verify.

      I’m not sure if you're aware of this…

      When you open a pull request, CodeRabbit can generate a summary of code changes for the reviewer. It helps them quickly understand complex changes and assess the impact on the codebase.

      The Workflow

      The loop is the same whether I’m fixing a bug, adding a feature, or cleaning up a messy module.

      It keeps the AI from freelancing, and it keeps me from treating “code that looks plausible” as “code that’s ready to ship.”

      Here’s the loop:

      1. Context: I share project background, constraints, and the relevant code so the AI isn’t guessing.
      2. Plan: I ask for a strategy before any code gets written.
      3. Code: I generate or edit code one step at a time, so changes stay small and reviewable.
      4. Review: I carefully check the output and often use AI-assisted pull request reviews as a second set of eyes.
      5. Test: I run tests, and I’ll often have AI generate new tests that lock in the intended behavior.
      6. Iterate: I debug failures, refine the request, and repeat until the change is solid.

      I use different tools at different points in the loop.

      Each one is good at a specific job:

      An editor is good at working inside a repo,

      A chat model is good at thinking in plain language,

      And review/testing tools are good at catching things I’d miss when I’m tired.

      The rest of this newsletter breaks down each step.

      The most important step is the first one:

      If the model is guessing about your setup, everything downstream becomes cleanup. So the workflow starts with context.

      Step 1: Context

      Most AI mistakes in code have the same root cause.

      The model is guessing in a vacuum. Someone pastes a function, types “fix this,” and acts surprised when the suggestion ignores half the system.

      “Fix this” is the fastest way to make the model hallucinate…

      Without a project background and constraints, it has no choice but to fill gaps with whatever sounds right: ‘functions that don’t exist, syntax from the wrong version, solutions that break conventions elsewhere in the repo’.

      So, for anything that is not small, I flip the default: documentation and rules go first. Code goes second.

      This is easiest with an AI editor that can automatically pull in files.

      I use Cursor, which lets me highlight code, pull in other files from my project, and ask the AI to do specific work with all of that as context. The pleasant part is I can swap models on the fly: a fast one for quick edits, a heavier reasoning model when I need to solve a tricky bug.

      VS Code with Copilot or Claude Code offers similar features if you prefer to stay in that ecosystem.

      When a task is even moderately complex, I load three kinds of context:

      1. Project background

      I keep an updated README⁶ for each project and start most AI sessions by attaching it with a simple opener:

      Read the README below to understand the project. Then I will give you a specific task.

      If the change touches something sensitive (like payments), I include the key files in that first message too. By the time I describe the change, the assistant has already seen the neighborhood.

      2. Rules and constraints

      I keep a rules file (sometimes called AGENTS.md or CLAUDE.md)⁷ that bundles project scope, coding style, version constraints (for example, “this service runs on Django 4.0”), and a few hard rules (“never call this external API in development,” “all dates must be UTC”).

      Some tools support “rules” or “custom instructions” that help me avoid repeating myself in every session.

      3. Relevant source and signals

      For bugs or features, I paste the function or file involved along with stack traces⁸ or logs.

      A single error line is like a screenshot of one pixel. The assistant needs more than that if I want real reasoning instead of optimistic guessing.

      Here’s a reusable prompt pattern:

      Read @README to understand the project scope, architecture, and constraints.

      Read @AGENTS.md to learn the coding style, rules, and constraints for this codebase.

      Then read @main.py, @business_logic_1.py, and @business_logic_2.py carefully.

      Your task is to update @business_logic_2.py to implement the following changes:

      1. <change 1>

      2. <change 2>

      3. <change 3>

      Follow the conventions in the README and AGENTS file.

      Do not modify other files unless strictly necessary and explain any extra changes you make.

      The structure stays the same every time: context, then rules, then a precise task.

      I swap out the filenames and the change list, but the pattern holds.

      One thing I learned the hard way: more text isn’t always better. The best briefings are short and focused. They explain what the project is for, how the main pieces fit together, and which rules actually matter. If I notice I’m pasting more than a human would reasonably read before starting work, I cut it down.

      One final detail that matters: context should be curated… not dumped.

      The best briefings are short and decisive: enough to prevent guessing, but not so much that the model loses the signal.

      Step 2: Plan Before You Code

      Context answers “where am I?”

      It doesn’t answer “what should I build?”

      That’s where things usually go sideways.

      If you let AI write code immediately, it often picks a strange approach, optimizes the wrong thing, or quietly ignores constraints.

      I’ve learned to force a two-step process: plan first, then code.

      I usually do the planning step in a chat model like Claude, ChatGPT, or Gemini. ChatGPT works well when the problem is fuzzy and I need structured thinking. Once the design feels reasonable, I switch to an AI editor like Cursor or Claude Code in VS Code, where the implementation happens with the repo open.

      First: Ask for a plan only

      For any non-trivial change, I first describe the feature or bug in plain language. That initial exchange is just about getting the idea into a workable shape:

      Here is the feature I want to build and some context.

      Help me design it.

      What needs to change?

      Which modules are involved?

      What are the main steps?

      The key is to stop the AI from jumping straight into code. I’ll often say explicitly, “Do not write any code until I say approved.”

      Then: Approve and implement in small steps

      Once the plan looks reasonable, I approve it and ask the AI to implement one step at a time.

      This is where I usually switch from a chat model to an AI editor like Cursor or VS Code with Copilot, since the implementation happens inside the actual codebase. For each step, I ask the AI to explain what it’s about to change and propose the code for that step only.

      Small steps are easier to review and easier to undo if something goes wrong.

      Here’s a prompt template I reuse:

      You are a senior engineer helping me with a new change.

      First, read the description of the feature or bug:

      <insert feature or bug description and any relevant context>

      Step 1 — Plan only:

      • Think step by step and outline a clear plan.
      • List the main steps you would take.
      • Call out important decisions or tradeoffs.
      • Mention edge cases we should keep in mind.

      Stop after the plan. Do not write any code until I say “approved.”

      Step 2 — Implement:

      Once I say “approved,” implement the plan one step at a time:

      • For each step, explain what you are about to change.
      • Propose the code changes for that step only.
      • Write tests for that step where it makes sense.

      If the AI recommends a library or function I’ve never seen, I’ll verify it actually exists using a search assistant or official docs. Models sometimes hallucinate APIs that sound plausible but don’t exist.

      This pattern is especially useful when I’m working in a new stack or unfamiliar codebase. Instead of reading docs for hours, I ask the AI to explain the stack, sketch a design, and then help me implement it. The AI explains before it writes, so I learn as I go.

      It also helps when a change touches multiple parts of the system, since a plan lets me see the full scope before I make edits everywhere.

      Same with subtle bugs I don’t fully understand. For a slow database query, instead of asking “make this faster,” I ask the AI to reason through why it might be slow and what options exist. Only after that reasoning do I ask for the actual fix.

      Fixing a plan is cheaper than fixing a pile of code. The “approved” step forces me to agree with the approach before the AI starts typing.

      Step 3: Lightweight Multi-Agent Coding

      Once I got comfortable with planning before coding, I started using a simple trick that makes AI output more reliable: I split the work into roles.

      This isn’t a complex ‘agent system⁹.’ Most of the time, it’s the same AI model, just prompted differently for each job.

      Sometimes I use different models for different roles:

      • Claude or ChatGPT for the Planner role (where reasoning matters),
      • Then, a faster model for the Implementer role (where the task is already well-defined and speed matters more).

      In Cursor, I can switch models mid-task, which makes this easy.

      The four roles I use:

      1. Planner: Breaks down the task into steps and calls out edge cases. (This is what we covered in Step 2.)

      2. Implementer: Writes code strictly based on the approved plan. I prompt it with something like: “Follow the approved plan. Change only the files I list. Keep the change small. If something is unclear, ask before coding.”

      3. Tester: Writes tests and edge cases. I prompt it with: “Write a unit test¹⁰ for the happy path¹¹. Write at least two edge case tests¹². If this were a bug fix, write a regression test that would fail before the fix.”

      4. Explainer: Summarizes what changed and why. I prompt it with: “Summarize changes by file. Explain the logic in plain language. List what could break and how the tests cover it.”

      Big prompts encourage messy answers.

      When I ask the AI to plan, implement, test, and explain all at once, the output gets tangled. When I split roles, I get a checklist, then a small change, then tests, then an explanation. Each piece is easier to review.

      Long chats also drift. After enough back-and-forth, the AI forgets earlier context or recycles bad ideas. Short, focused threads stay sharp.

      Practical tip: summarise between steps.

      When I finish one role, I ask for a short summary before moving to the next. Then I paste that summary into the next prompt. This keeps each step focused and prevents context from getting lost across a long conversation.

      Step 4: Review the Output

      AI-generated code needs extra review.

      The model is confident even when it’s wrong, and subtle bugs hide easily in code that looks plausible. This is where I add a layer of automated review before merging anything.

      One way to do this is with an AI code review tool like CodeRabbit, which integrates with GitHub and GitLab. When you open a pull request, it automatically reviews the diff¹⁴ and posts comments directly in the PR thread. This kind of tool catches issues that slip past manual reviews, especially when you’re tired or rushing.

      A tool like CodeRabbit typically gives you two things:

      • First, a summary of what changed, often with a file-by-file walkthrough. This helps confirm the pull request matches your intent before looking at the details.
      • Second, line-by-line comments with suggestions. These often flag missing error handling, edge cases, potential security issues, and logic bugs like off-by-one errors. It can also run the code through linters and security analyzers during the review.

      When you push more commits to the same PR, it reviews the new changes incrementally rather than repeating the entire review.

      An example pull request flow

      Here’s what a typical flow looks like:

      • Open a PR with a small, focused change.
      • The AI review tool automatically posts comments.
      • Read the comments, fix real issues, and reply to anything that’s noise or missing context.
      • Then do a final human pass before merging.

      Not every comment requires action. Sort them into two buckets:

      • Must-fix: logic errors, missing error handling, security issues
      • Worth considering: style preferences, naming suggestions, alternative approaches

      If you’re unsure whether something matters, ask yourself:

        • Would this likely cause a bug?
        • Or would this confuse someone reading the code later?

      If yes to either, fix it or add a test.

      AI review tools have the same limitations as other AI tools.

      They sometimes flag things that aren’t problems or suggest patterns that don’t match the codebase. The goal is to catch obvious problems early, not to treat every comment as a mandate.

      Always do a final human pass before merging.

      Step 5: Test the Change

      Tests are part of the flow, not a later chore.

      After any change that isn’t small, I ask for tests immediately. I don’t wait until the feature is complete. Tests serve both as verification and as documentation. If the AI can’t write a sensible test for the code it just produced, that’s often a sign the code itself is unclear…

      I request different tests depending on the situation.

      For new functions, I ask for unit tests that cover the happy path and edge cases. When I used AI to build a React component in a stack I barely knew, my immediate follow-up was, “Now write unit tests for this component.” The tests showed me what the component was supposed to do and how it handled different inputs.

      For bug fixes, I ask for a regression test that would have failed before the fix. This proves the fix works and helps prevent the bug from returning later. For changes that touch multiple components or an endpoint, I ask for one minimal integration or end-to-end test¹⁵.

      I paste a short feature description and ask for a realistic user flow and a few edge cases.

      Prompt templates I reuse

      For unit tests:

      Write unit tests for this function.

      Cover the happy path and at least two edge cases.

      For regression tests:

      Write a regression test for this bug.

      The test should fail before the fix and pass after.

      For integration or end-to-end tests:

      Write a minimal integration test for this feature.

      Include one realistic user flow and a few edge cases.

      For reviewing existing tests:

      Review these tests.

      Are there obvious edge cases missing or any weak assertions?

      When I first started using AI for code, I would generate a function and move on.

      Tests came later, if at all. Bugs shipped. And I didn’t always understand what the code was doing. Now I ask for tests right after the code. Reading the test often teaches me more than reading the function. It shows the inputs, the expected outputs, and the edge cases the code is supposed to handle.

      If the generated test doesn’t make sense, I treat that as a signal. Either the code is unclear, or my prompt was incomplete. Either way, I go back before moving forward.
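
      As a concrete illustration (the function under test is made up for this example; it is not from any project mentioned above), this is the kind of small test I expect back: a happy path plus a couple of edge cases that double as documentation:

      // Illustrative only: a tiny helper and the tests an assistant might generate for it.
      const assert = require('node:assert');

      function formatPrice(cents) {
        if (!Number.isInteger(cents) || cents < 0) {
          throw new TypeError('cents must be a non-negative integer');
        }
        return `$${(cents / 100).toFixed(2)}`;
      }

      // Happy path
      assert.strictEqual(formatPrice(1999), '$19.99');

      // Edge cases
      assert.strictEqual(formatPrice(0), '$0.00');
      assert.throws(() => formatPrice(-1), TypeError);   // negative input rejected
      assert.throws(() => formatPrice(10.5), TypeError); // non-integer rejected

      console.log('all tests passed');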

      Step 6: Debug and Iterate

      When something breaks… I don’t just paste an error and hope.

      I give the model the same information I’d give a colleague: the error, the function, and enough context to reason through the problem.

      A single error line is rarely enough. The model needs more than that to produce a useful diagnosis.

      Here is what I include:

      • Error message or stack trace.
      • Function where the error occurs.
      • Relevant surrounding code or types.
      • What I expected to happen and what actually happened.

      I avoid pasting only the error with no code, dumping an entire file without pointing to the relevant section, or just saying “it doesn’t work” without describing the failure.

      The prompt I use for debugging (I usually ask for both the explanation and the fix in one request):

      Here is the function and the error message.

      Explain why this is happening.

      Then rewrite the function using best practices, while keeping it efficient and readable.

      Asking for both gives me a diagnosis and a fix in one shot. It also helps me learn what went wrong, not just how to patch it.

      If a fix doesn’t work and I keep saying “try again” in the same thread, the suggestions usually get worse. The model circles the same wrong idea with slightly different words.

      My rule: if I’ve asked twice and the answers are getting repetitive or worse, I stop.

      I start a fresh chat, restate the problem with better context, and narrow the question.

      For example, instead of “fix this function,” I ask, “under what conditions could this variable be null here?” Fresh context plus a smaller question beats a tired thread most of the time.

      Sometimes I realize I don’t understand the problem well enough to evaluate the fix. When that happens, I stop asking for code and start asking for an explanation:

      Do not fix anything yet.

      Explain what this function does, step by step.

      Then list the most likely failure cases.

      Once I understand the logic, I go back to asking for a targeted fix.

      This avoids the loop of accepting fixes I don’t understand and hoping one of them works.

      Common Failure Modes and Guardrails

      After enough cycles, I started noticing the same failures repeating.

      Here’s a short checklist I keep in mind:

      Context drift in long chats

      Long conversations cause the model to forget earlier decisions.

      The fix: keep conversations short and scoped. One chat for design, one for part A, one for part B. When a thread feels messy, ask the model to summarize where you are, then start a fresh chat with that summary at the top.

      Wrong API or version

      Models are trained on data up to a certain point.

      They sometimes write code for an older version of a library or generate methods that don’t exist. For anything new or fast-moving, I assume the suggestion might be wrong and verify against official docs. I also ask the model to state its assumptions: “Which version are you assuming?”

      If the answer doesn’t match my setup, I rewrite it myself.

      Off-rails debugging loops

      Once a model gets stuck on a bad idea, it tends to dig deeper. It proposes variations of the same broken fix, sometimes reintroducing bugs from earlier attempts.

      Code quality drift

      AI rarely produces well-structured code by default.

      It’s good at “something that runs,” less good at “something I’ll want to maintain in three months.”

      I fix this by baking quality into the request: ask for tests, ask for a summary of what changed and why, and nudge toward structure (“refactor this into smaller functions,” “follow the pattern in file X”).

      Over-reliance

      This one has nothing to do with the model and everything to do with me.

      If I let AI handle every decision, my own instincts start to dull. I push back by keeping important decisions human-owned, occasionally doing small tasks without AI, and asking the model to teach as well as do: explain its reasoning, compare approaches, and talk through trade-offs.

      The goal is not just “ship faster” but “ship faster and understand what I shipped.”

      Closing Thoughts

      The workflow I use comes back to a simple loop:

      Context → Plan → Code → Review → Test → Iterate

      • I give the model enough context to see the real problem.
      • I ask it to plan before writing code.
      • I generate and edit in small steps.
      • I review the output, often with AI-assisted tools.
      • I ask for tests right away.

      And when something breaks, I debug, refine, and repeat until it works.

      Tools and models will change. Pricing will change. New products will appear. What survives is your method: how you give context, how you break work into steps, when to use a model, and when to rely on yourself.

      If this newsletter did its job, you now have a clearer picture of what coding with AI looks like in practice.

      Some days it’s a sprint… Some days it’s a wrestling match. But it has changed how I work. I ship features I wouldn’t have attempted before, and I feel less stuck when learning a new stack or working through an unfamiliar codebase.

      The goal is not just to ship faster, but to ship faster and understand what I shipped.