The Accessibility Issue

Mike's Notes

This is a copy of the February issue of Ajabbi Research.

It is about the history of the effort to create a fully accessible User Interface (UI) for Pipi.

Ajabbi Research is published on Substack on the first Friday of each month, and subscriptions are free.

Each issue is a broad historical overview of a research topic, serving as an index to dozens of previously posted related articles. There are now 647 articles/posts.

This copy of the issue will be updated with additional information as it becomes available. Check the Last Updated date given below.

Eventually, each issue will be reused on the separate Ajabbi Research website as an introduction to a research area comprising multiple research projects.

Resources

References

  • Web Accessibility Initiative - Accessible Rich Internet Applications (WAI-ARIA) 1.2, W3C, 6 June 2023.
  • Web Content Accessibility Guidelines (WCAG) 2.2, W3C, 12 December 2024.
  • GOV.UK Design System.

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

6/03/2026

The Accessibility Issue

By: Mike Peters
Ajabbi Research: 6/02/2026

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

This is the story of the effort to make Pipi fully accessible to all who need it. The steps taken have been part of Pipi's development since 2005, spanning 5 versions.

The NZERN Pipi 2003-2005 Development Plan started it all.

Pipi 4 (2005-2008)

The story starts with Pipi 4. It was a big, successful system that supported community-driven Ecological Restoration in NZ. Here is a history of that Pipi version.

During that time, most of New Zealand (NZ) was still on dial-up modems. Many NZERN members were working farmers in rural areas with very slow internet, and many were older, with low computer literacy. This was a major factor in determining what was possible: web page sizes had to be kept under 16 KB.

Recently, an archive of Pipi 4 help documentation was discovered and is now available for viewing. It is incomplete, but it gives an idea of how it worked.

Here is the description taken from the Pipi 4 Help archive. The Screen Reader Edition was designed for the blind members of NZERN by their family members.

PIPI4 is available in four editions to meet the needs of different groups of NZERN members.

Basic Edition

    • Designed for the novice computer user who wants a simple cut-down system, with instructions built into every step.
    • Skill level required: Capable of using a simple program like Outlook Express
    • Availability: All members of NZERN

Standard Edition

    • Designed for the confident computer user who wants a system with help available with one click. The user can ask questions and report bugs to the help desk.
    • Skill level required: Capable of using a program like Microsoft Word
    • Availability: All members of NZERN

Screen Reader Edition

    • Designed for the confident computer user who uses a screen reader and wants help available with one click. The user can ask questions and report bugs to the help desk.
    • Skill level required: Capable of using a program like Microsoft Word
    • Availability: All members of NZERN

Professional Edition

    • Designed for the expert computer user capable of self-learning, who wants a fast system with detailed technical documentation available with one click. The user will provide support to other members as a member of the help desk.
    • Skill level required: Capable of using an advanced program like Adobe Photoshop
    • Availability: All members of the help desk

    Pipi 6 (2017-2019)

    When Pipi was rebuilt from memory, some work was done to prepare for a more modular, standardised model-driven User Interface (UI) approach. Metadata was added to every database table to enable future personalisation and meet accessibility requirements.

    Pipi 7 (2020)

    Small, simple, static HTML mockups of workspaces were created as experiments.

    Pipi 8 (2021-2022)

    System-wide namespaces were implemented to enable future complex automated interactions.

    Pipi 9 (2023-2026)

    In 2023, a year-long investigation into model-driven interfaces led to the reuse and hacking of several abandoned EU research efforts in Human-Computer Interaction (HCI).

    Putting a User Interface (UI) on the front end of Pipi 9 was challenging. It had to be:

    • Model-driven
    • Adaptive to the users' devices
    • Automated
    • Able to meet the individual needs of each logged-on user

    Resources used

    1. OMG Interaction Flow Modelling Language (IFML)
    2. The CAMELEON Reference Framework (CRF)
    3. User Interface Description Language (UIDL)
    4. W3C Model-based UI Incubator

      Model-driven UI

      The User Interface Description Language (UIDL) was an EU-funded project that was abandoned in 2010 after 10 years of excellent work. Its goal was to enable accessibility across different screens and devices. The research results were reverse-engineered to build a User Interface Engine (usi) that runs in reverse to generate accessibility solutions for Pipi. The CSS Engine (css) replaced some redundant components of the UIDL project, and additional engines were created for localisation and personalisation.


      UK Design System

      The UK Government has created a design system for building accessible websites. It includes many templates, components, tools, code, and guidance on achieving this. An excellent resource.

      These templates are being used for the Pipi-generated workspace User Interface (UI).

      A teaching customer

      Mr G, my coach from Startup Aotearoa, suggested finding a first customer who could be a teaching customer. What a good idea.

      As it happens, a new disability rights organisation emerged in response to funding cuts by the NZ government. The group led a national campaign that resulted in the Minister responsible for the cuts losing her job and the scale of the cuts being reduced. The group had no money and needed a large campaign website that was highly accessible for deaf and blind people. Helping lead this campaign and building the website were deep learning experiences for me.

      Pipi CMS Engine (cms)

      A decision was made early on to autogenerate a separate website for each language (English, Māori, NZ Sign Language, and AAC picture language). This was the simplest solution for the CMS and the users.

      Creating UI for each natural language, including sign languages (i18n), requires user requests and volunteer testers.

      The CMS uses a template engine that builds web pages from reusable components. This means that getting the CSS correct only needs to be done once for each component, and the fix then applies everywhere that component is used.
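As a sketch of that idea (not Pipi's actual code; the component names and markup here are invented for illustration), a component-based page builder keeps each piece of markup, with its classes and accessibility attributes, defined in exactly one place:

```python
# Each component is defined once, including its accessibility
# attributes; pages are assembled from those components, so a CSS
# or markup fix in a component propagates to every page.

def nav(items):
    """Render a navigation component from (label, href) pairs."""
    links = "".join(f'<li><a href="{href}">{label}</a></li>'
                    for label, href in items)
    return f'<nav aria-label="Main"><ul>{links}</ul></nav>'

def heading(text):
    """Render the page's main heading component."""
    return f"<h1>{text}</h1>"

def build_page(title, components):
    """Assemble a full page from pre-rendered components."""
    body = "\n".join(components)
    return (f'<!DOCTYPE html><html lang="en"><head>'
            f"<title>{title}</title></head><body>{body}</body></html>")

page = build_page("Home", [
    nav([("Home", "/"), ("About", "/about")]),
    heading("Welcome"),
])
```

Because the `nav` component carries its `aria-label` internally, every generated page gets it for free; there is no per-page accessibility work to forget.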

      Sign Language

      A scheme was devised to embed NZ Relay Video Interpreting in any webpage. This is an ongoing experiment, driven by deaf people.

      Blind and Low Vision

      This didn't get far, because it needs blind people in NZ to volunteer to help with testing; however, there are volunteers in other countries. The job also depends on ongoing work on the CSS Engine (css). A particular challenge is catering for braille machines.

      Picture Language

      Professor Stephen Hawking used AAC via a computer-generated voice. There are many forms of AAC, including picture language. Providing picture language as a UI is being explored, with other forms of AAC to follow. This is important for the millions of people with Cerebral Palsy and Motor Neurone Disease.

      Workspace personalisation

      The workspace settings will offer complete personalisation of the UI, similar in purpose to the wonderful AccessWidget from Accessibe.com. Instead of a pop-up, it will use a personalisation form in account settings.

      Keeping it simple

      On paper, WAI-ARIA looks great. However, the reality is that the growing interactive complexity of websites and differences in how browsers work mean that web pages break for people using assistive technology.

      Modern websites present a large attack surface, making them vulnerable. Pipi's solution is to focus on functionality, usability, and maximum simplicity: small page sizes (a few KB), semantic structure, and minimal use of JavaScript.

      Standards

      The W3C Web Content Accessibility Guidelines (WCAG) set the international standard for accessibility. Pipi will endeavour to meet WCAG 2.2 as a new conformance target for all languages.

      What's next

      Designing the workspaces has made accessibility through personalisation a top requirement. The ongoing 2026 workspace rollout will include further accessibility testing.

      The most useful and inspiring resource has been Smashing Magazine's weekly newsletter, which often covers accessible UI design in great depth.

      Dedication

      To those who have fought all their lives for a world where people with disabilities have equal rights and can fully participate, without barriers, to the extent they are capable.

      I struggled to code with AI until I learned this workflow

      Mike's Notes

      A case for CodeRabbit.

      Resources

      Repository

      • Home > Ajabbi Research > Library > Subscriptions > System Design Newsletter
      • Home > Handbook > 

      Last Updated

      05/03/2026

      I struggled to code with AI until I learned this workflow

      By: Neo Kim and Louis-François Bouchard
      System Design Newsletter: 02/02/2026

      Louis-François Bouchard: Making AI accessible. What's AI on YouTube. Co-founder at Towards AI. Ex-PhD student.

      Everyone talks about using AI to write code like it’s a vending machine:

      “Paste your problem, get a working solution.”

      The first time I tried it, I learned the hard way that this is not how it works in real projects…

      The model would confidently suggest code that called functions that didn’t exist, assumed libraries we weren’t using, or skipped constraints that felt obvious to me¹. The output looked polished.

      The moment I ran it… It fell apart.

      After enough trial and error, I stopped trying to “prompt better” and started working differently. What finally made AI useful wasn’t a magic tool or a clever prompt. It was a simple loop that kept the model on a short leash and kept you in the driver’s seat:

      This newsletter breaks that loop down step by step.

      It’s written for software engineers who are new to AI coding tools and want a practical starting point: not a tour of every product on the market, but a repeatable method you can use tomorrow.

      The core idea is simple:

      AI works best as an iterative loop, not a one-shot request. You steer. The model fills in the gaps. And because it does less guessing, you spend less time cleaning up confident mistakes.

      Onward.

      I want to introduce Louis-François Bouchard as a guest author.

      He focuses on making AI more accessible by helping people learn practical AI skills for the industry alongside 500k+ fellow learners.

      TL;DR

      If you’re new to using AI for coding, this is the set of habits that prevents most pain.

      • Treat AI output like a draft, not an answer. Models can sound certain while being completely wrong, so anything that matters still gets reviewed and verified.
      • Start with context, the way you’d brief a teammate. If you don’t share constraints, library versions, project rules, and intended behavior, the model will ‘happily’ invent them for you.
      • Ask for a plan before you ask for code. Plans are cheap to change. Code is expensive to unwind. I’ll usually approve the approach first, then ask for small, step-by-step changes.
      • Use reviews and tests as a safety net. I still do a normal pull request² review and rely on tests to verify behavior and catch edge cases³.

      Quick Glossary

      Before we dive in, here’s the small vocabulary I’ll use throughout.

      It’s not exhaustive; it’s just enough to keep the rest of the article readable:

      • AI Editor (e.g., Cursor, VS Code + GitHub Copilot) is a code editor with AI built in. It can suggest completions, refactor functions⁴, and generate code using your project files as context.
      • Chat model (e.g., ChatGPT, Claude, or Gemini) is a conversational AI you interact with in plain language. It’s useful when you’re still figuring out what to do: brainstorming approaches, explaining an error, comparing trade-offs, or sanity-checking a design before you write code.
      • AI code review tools (e.g., CodeRabbit) automatically review pull requests using AI, posting summaries and line-by-line suggestions.
      • Search assistant (e.g., Perplexity) combines chat with web search. It’s what you reach for when you need to verify that a suggested API call is real, that a library feature exists in the version you’re using, or that you’re not about to copy-paste something that expired two releases ago.

      The Mental Model

      Before the workflow, it helps to be honest about what AI coding assistants are and aren’t.

      They’re fantastic when the problem is well-scoped and sitting right in front of them. They’re unreliable the moment you assume they “know” what you didn’t explicitly provide. The workflow is basically a way to stay in the first zone and avoid the second.

      When you give clear requirements, AI is great at drafting functions, refactoring code, scaffolding tests, and talking through error messages.

      But it has a hard boundary: it only knows what it can see in the current context. It doesn’t remember your last chat; it doesn’t know your architecture or conventions, and it won’t reliably warn you when it’s guessing. It just keeps going confidently.

      I’ve seen AI call library functions that don’t exist, use syntax from the wrong version, and ignore constraints I assumed were obvious. The pattern was always the same: the AI didn’t know what I hadn’t told it, so it filled the gaps by inventing something plausible.

      Once I understood this, three principles shaped how I work:

      1. Give more context than you think you need. Just like I’d brief a colleague who just joined the project, I brief the AI every time. If I don’t share the details, it invents them.
      2. Guide it with specific steps. AI struggles with “build me a web app,” but does well with “add input validation for these fields, return a clear error message, and write a test that proves invalid input is rejected.” The more specific my request, the better the output.
      3. If it matters, verify it. Whenever the AI produces security-sensitive logic, a database migration, or an algorithm that must be correct, I review it myself and add tests that prove the behavior.

      A good way to hold all of this in your head is:

      AI is a smart teammate who joined your project five minutes ago.

      They can write quickly, but they don’t know your architecture, your conventions, or your constraints unless you tell them.

      That’s why the mistakes look so predictable: the model isn’t “being dumb,” it’s filling in gaps you didn’t realise you left open.

      Once I started seeing it that way, the fix wasn’t a better one-shot prompt⁵.

      It was a repeatable loop that forced me to brief the model, force clarity early, and keep changes small enough to verify.

      I’m not sure if you're aware of this…

      When you open a pull request, CodeRabbit can generate a summary of code changes for the reviewer. It helps them quickly understand complex changes and assess the impact on the codebase.

      The Workflow

      The loop is the same whether I’m fixing a bug, adding a feature, or cleaning up a messy module.

      It keeps the AI from freelancing, and it keeps me from treating “code that looks plausible” as “code that’s ready to ship.”

      Here’s the loop:

      1. Context: I share project background, constraints, and the relevant code so the AI isn’t guessing.
      2. Plan: I ask for a strategy before any code gets written.
      3. Code: I generate or edit code one step at a time, so changes stay small and reviewable.
      4. Review: I carefully check the output and often use AI-assisted pull request reviews as a second set of eyes.
      5. Test: I run tests, and I’ll often have AI generate new tests that lock in the intended behavior.
      6. Iterate: I debug failures, refine the request, and repeat until the change is solid.

      I use different tools at different points in the loop.

      Each one is good at a specific job:

      An editor is good at working inside a repo,

      A chat model is good at thinking in plain language,

      And review/testing tools are good at catching things I’d miss when I’m tired.

      The rest of this newsletter breaks down each step.

      The most important step is the first one:

      If the model is guessing about your setup, everything downstream becomes cleanup. So the workflow starts with context.

      Step 1: Context

      Most AI mistakes in code have the same root cause.

      The model is guessing in a vacuum. Someone pastes a function, types “fix this,” and acts surprised when the suggestion ignores half the system.

      “Fix this” is the fastest way to make the model hallucinate…

      Without project background and constraints, it has no choice but to fill gaps with whatever sounds right: functions that don’t exist, syntax from the wrong version, solutions that break conventions elsewhere in the repo.

      So, for anything that is not small, I flip the default: documentation and rules go first. Code goes second.

      This is easiest with an AI editor that can automatically pull in files.

      I use Cursor, which lets me highlight code, pull in other files from my project, and ask the AI to do specific work with all of that as context. The pleasant part is I can swap models on the fly: a fast one for quick edits, a heavier reasoning model when I need to solve a tricky bug.

      VS Code with Copilot or Claude Code offers similar features if you prefer to stay in that ecosystem.

      When a task is even moderately complex, I load three kinds of context:

      1. Project background

      I keep an updated README⁶ for each project and start most AI sessions by attaching it with a simple opener:

      Read the README below to understand the project. Then I will give you a specific task.

      If the change touches something sensitive (like payments), I include the key files in that first message too. By the time I describe the change, the assistant has already seen the neighborhood.

      2. Rules and constraints

      I keep a rules file (sometimes called AGENTS.md or CLAUDE.md)⁷ that bundles project scope, coding style, version constraints (for example, “this service runs on Django 4.0”), and a few hard rules (“never call this external API in development,” “all dates must be UTC”).

      Some tools support “rules” or “custom instructions” that help me avoid repeating myself in every session.
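As an illustration, a rules file of this kind might look like the sketch below. The contents are invented (a hypothetical service), built around the examples already mentioned: a Django 4.0 version constraint, a no-external-API rule for development, and UTC-only dates.

```markdown
# AGENTS.md (illustrative example)

## Scope
Billing service: invoices and payment webhooks only.

## Stack and versions
- Python 3.11, Django 4.0 (do not use features from later Django versions)

## Hard rules
- Never call the external payment API in development; use the stub client.
- All dates and times must be stored and compared in UTC.

## Style
- Follow the existing module layout; small functions; type hints everywhere.
```

The point is not the exact headings but that the file is short, decisive, and cheap to paste at the top of every session.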

      3. Relevant source and signals

      For bugs or features, I paste the function or file involved along with stack traces⁸ or logs.

      A single error line is like a screenshot of one pixel. The assistant needs more than that if I want real reasoning instead of optimistic guessing.

      Here’s a reusable prompt pattern:

      Read @README to understand the project scope, architecture, and constraints.

      Read @AGENTS.md to learn the coding style, rules, and constraints for this codebase.

      Then read @main.py, @business_logic_1.py, and @business_logic_2.py carefully.

      Your task is to update @business_logic_2.py to implement the following changes:

      1. <change 1>

      2. <change 2>

      3. <change 3>

      Follow the conventions in the README and AGENTS file.

      Do not modify other files unless strictly necessary and explain any extra changes you make.

      The structure stays the same every time: context, then rules, then a precise task.

      I swap out the filenames and the change list, but the pattern holds.
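Because the structure is fixed, the briefing can even be assembled by a script. Here is a hypothetical helper (the function name and file layout are placeholders, not part of any tool) that builds the prompt in the same order: context, then rules, then a precise task:

```python
from pathlib import Path

def build_prompt(readme, rules, sources, target, changes):
    """Assemble a context-first briefing from real project files."""
    parts = [
        f"Read the README to understand the project:\n{Path(readme).read_text()}",
        f"Coding style, rules, and constraints:\n{Path(rules).read_text()}",
    ]
    # Relevant source files go after the rules, before the task.
    for src in sources:
        parts.append(f"--- {src} ---\n{Path(src).read_text()}")
    task = "\n".join(f"{i}. {c}" for i, c in enumerate(changes, 1))
    parts.append(
        f"Your task is to update {target} to implement:\n{task}\n"
        "Do not modify other files unless strictly necessary, "
        "and explain any extra changes you make."
    )
    return "\n\n".join(parts)
```

A helper like this mostly guards against the human failure mode: forgetting to paste the rules file on the fifth session of the day.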

      One thing I learned the hard way: more text isn’t always better. Context should be curated, not dumped. The best briefings are short and decisive: they explain what the project is for, how the main pieces fit together, and which rules actually matter, with enough detail to prevent guessing but not so much that the model loses the signal. If I notice I’m pasting more than a human would reasonably read before starting work, I cut it down.

      Step 2: Plan Before You Code

      Context answers “where am I?”

      It doesn’t answer “what should I build?”

      That’s where things usually go sideways.

      If you let AI write code immediately, it often picks a strange approach, optimizes the wrong thing, or quietly ignores constraints.

      I’ve learned to force a two-step process: plan first, then code.

      I usually do the planning step in a chat model like Claude, ChatGPT, or Gemini. ChatGPT works well when the problem is fuzzy, and I need structured thinking. Once the design feels reasonable, I switch to an AI editor like Cursor or Claude Code in VS Code, where the implementation happens with the repo open.

      First: Ask for a plan only

      For any non-trivial change, I first describe the feature or bug in plain language. That initial exchange is just about getting the idea into a workable shape:

      Here is the feature I want to build and some context.

      Help me design it.

      What needs to change?

      Which modules are involved?

      What are the main steps?

      The key is to stop the AI from jumping straight into code. I’ll often say explicitly, “Do not write any code until I say approved.”

      Then: Approve and implement in small steps

      Once the plan looks reasonable, I approve it and ask the AI to implement one step at a time.

      This is where I usually switch from a chat model to an AI editor like Cursor or VS Code with Copilot, since the implementation happens inside the actual codebase. For each step, I ask the AI to explain what it’s about to change and propose the code for that step only.

      Small steps are easier to review and easier to undo if something goes wrong.

      Here’s a prompt template I reuse:

      You are a senior engineer helping me with a new change.

      First, read the description of the feature or bug:

      <insert feature or bug description and any relevant context>

      Step 1 — Plan only:

      • Think step by step and outline a clear plan.
      • List the main steps you would take.
      • Call out important decisions or tradeoffs.
      • Mention edge cases we should keep in mind.

      Stop after the plan. Do not write any code until I say “approved.”

      Step 2 — Implement:

      Once I say “approved,” implement the plan one step at a time:

      • For each step, explain what you are about to change.
      • Propose the code changes for that step only.
      • Write tests for that step where it makes sense.

      If the AI recommends a library or function I’ve never seen, I’ll verify it actually exists using a search assistant or official docs. Models sometimes hallucinate APIs that sound plausible but don’t exist.

      This pattern is especially useful when I’m working in a new stack or unfamiliar codebase. Instead of reading docs for hours, I ask the AI to explain the stack, sketch a design, and then help me implement it. The AI explains before it writes, so I learn as I go.

      It also helps when a change touches multiple parts of the system, since a plan lets me see the full scope before I make edits everywhere.

      Same with subtle bugs I don’t fully understand. For a slow database query, instead of asking “make this faster,” I ask the AI to reason through why it might be slow and what options exist. Only after that reasoning do I ask for the actual fix.

      Fixing a plan is cheaper than fixing a pile of code. The “approved” step forces me to agree with the approach before the AI starts typing.

      Step 3: Lightweight Multi-Agent Coding

      Once I got comfortable with planning before coding, I started using a simple trick that makes AI output more reliable: I split the work into roles.

      This isn’t a complex ‘agent system⁹.’ Most of the time, it’s the same AI model, just prompted differently for each job.

      Sometimes I use different models for different roles:

      • Claude or ChatGPT for the Planner role (where reasoning matters),
      • Then, a faster model for the Implementer role (where the task is already well-defined and speed matters more).

      In Cursor, I can switch models mid-task, which makes this easy.

      The four roles I use:

      1. Planner: Breaks down the task into steps and calls out edge cases. (This is what we covered in Step 2.)

      2. Implementer: Writes code strictly based on the approved plan. I prompt it with something like: “Follow the approved plan. Change only the files I list. Keep the change small. If something is unclear, ask before coding.”

      3. Tester: Writes tests and edge cases. I prompt it with: “Write a unit test¹⁰ for the happy path¹¹. Write at least two edge case tests¹². If this were a bug fix, write a regression test that would fail before the fix.”

      4. Explainer: Summarizes what changed and why. I prompt it with: “Summarize changes by file. Explain the logic in plain language. List what could break and how the tests cover it.”

      Big prompts encourage messy answers.

      When I ask the AI to plan, implement, test, and explain all at once, the output gets tangled. When I split roles, I get a checklist, then a small change, then tests, then an explanation. Each piece is easier to review.

      Long chats also drift. After enough back-and-forth, the AI forgets earlier context or recycles bad ideas. Short, focused threads stay sharp.

      Practical tip: summarise between steps.

      When I finish one role, I ask for a short summary before moving to the next. Then I paste that summary into the next prompt. This keeps each step focused and prevents context from getting lost across a long conversation.

      Step 4: Review the Output

      AI-generated code needs extra review.

      The model is confident even when it’s wrong, and subtle bugs hide easily in code that looks plausible. This is where I add a layer of automated review before merging anything.

      One way to do this is with an AI code review tool like CodeRabbit, which integrates with GitHub and GitLab. When you open a pull request, it automatically¹³ reviews the diff¹⁴ and posts comments directly in the PR thread. This kind of tool catches issues that slip past manual reviews, especially when you’re tired or rushing.

      A tool like CodeRabbit typically gives you two things:

      • First, a summary of what changed, often with a file-by-file walkthrough. This helps confirm the pull request matches your intent before looking at the details.
      • Second, line-by-line comments with suggestions. These often flag missing error handling, edge cases, potential security issues, and logic bugs like off-by-one errors. It can also run the code through linters and security analyzers during the review.

      When you push more commits to the same PR, it reviews the new changes incrementally rather than repeating the entire review.

      An example pull request flow

      Here’s what a typical flow looks like:

      • Open a PR with a small, focused change.
      • The AI review tool automatically posts comments.
      • Read the comments, fix real issues, and reply to anything that’s noise or missing context.
      • Then do a final human pass before merging.

      Not every comment requires action. Sort them into two buckets:

      • Must-fix: logic errors, missing error handling, security issues
      • Worth considering: style preferences, naming suggestions, alternative approaches

      If you’re unsure whether something matters, ask yourself:

        • Would this likely cause a bug?
        • Or would this confuse someone reading the code later?

      If yes to either, fix it or add a test.

      AI review tools have the same limitations as other AI tools.

      They sometimes flag things that aren’t problems or suggest patterns that don’t match the codebase. The goal is to catch obvious problems early, not to treat every comment as a mandate.

      Always do a final human pass before merging.

      Step 5: Test the Change

      Tests are part of the flow, not a later chore.

      After any change that isn’t small, I ask for tests immediately. I don’t wait until the feature is complete. Tests serve both as verification and as documentation. If the AI can’t write a sensible test for the code it just produced, that’s often a sign the code itself is unclear…

      I request different tests depending on the situation.

      For new functions, I ask for unit tests that cover the happy path and edge cases. When I used AI to build a React component in a stack I barely knew, my immediate follow-up was, “Now write unit tests for this component.” The tests showed me what the component was supposed to do and how it handled different inputs.

      For bug fixes, I ask for a regression test that would have failed before the fix. This proves the fix works and helps prevent the bug from returning later. For changes that touch multiple components or an endpoint, I ask for one minimal integration or end-to-end test¹⁵.

      I paste a short feature description and ask for a realistic user flow and a few edge cases.

      Prompt templates I reuse

      For unit tests:

      Write unit tests for this function.

      Cover the happy path and at least two edge cases.

      For regression tests:

      Write a regression test for this bug.

      The test should fail before the fix and pass after.

      For integration or end-to-end tests:

      Write a minimal integration test for this feature.

      Include one realistic user flow and a few edge cases.

      For reviewing existing tests:

      Review these tests.

      Are there obvious edge cases missing or any weak assertions?

      When I first started using AI for code, I would generate a function and move on.

      Tests came later, if at all. Bugs shipped. And I didn’t always understand what the code was doing. Now I ask for tests right after the code. Reading the test often teaches me more than reading the function. It shows the inputs, the expected outputs, and the edge cases the code is supposed to handle.

      If the generated test doesn’t make sense, I treat that as a signal. Either the code is unclear, or my prompt was incomplete. Either way, I go back before moving forward.
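To make the test-request pattern concrete, here is what "happy path, two edge cases, and a regression test" might look like for a small function. Both the function and the bug are invented for illustration; the shape of the tests is the point:

```python
def parse_port(value, default=8080):
    """Parse a TCP port from a string, falling back to a default."""
    if value is None or value.strip() == "":
        return default  # hypothetical bug fix: used to crash on None
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_happy_path():
    assert parse_port("8000") == 8000

def test_edge_blank_uses_default():
    assert parse_port("   ") == 8080

def test_edge_out_of_range_rejected():
    try:
        parse_port("70000")
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_regression_none_returns_default():
    # Would have failed before the fix that handles None input.
    assert parse_port(None) == 8080
```

Reading these four tests tells you more about the function's contract (inputs, fallback behavior, rejection rules) than the function body does on its own.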

      Step 6: Debug and Iterate

      When something breaks… I don’t just paste an error and hope.

      I give the model the same information I’d give a colleague: the error, the function, and enough context to reason through the problem.

      A single error line is rarely enough. The model needs more than that to produce a useful diagnosis.

      Here is what I include:

      • Error message or stack trace.
      • Function where the error occurs.
      • Relevant surrounding code or types.
      • What I expected to happen and what actually happened.

      I avoid pasting only the error with no code, dumping an entire file without pointing to the relevant section, or just saying “it doesn’t work” without describing the failure.

      The prompt I use for debugging (I usually ask for both the explanation and the fix in one request):

      Here is the function and the error message.

      Explain why this is happening.

      Then rewrite the function using best practices, while keeping it efficient and readable.

      Asking for both gives me a diagnosis and a fix in one shot. It also helps me learn what went wrong, not just how to patch it.

      If a fix doesn’t work and I keep saying “try again” in the same thread, the suggestions usually get worse. The model circles the same wrong idea with slightly different words.

      My rule: if I’ve asked twice and the answers are getting repetitive or worse, I stop.

      I start a fresh chat, restate the problem with better context, and narrow the question.

      For example, instead of “fix this function,” I ask, “under what conditions could this variable be null here?” Fresh context plus a smaller question beats a tired thread most of the time.

      Sometimes I realize I don’t understand the problem well enough to evaluate the fix. When that happens, I stop asking for code and start asking for an explanation:

      Do not fix anything yet.

      Explain what this function does, step by step.

      Then list the most likely failure cases.

      Once I understand the logic, I go back to asking for a targeted fix.

      This avoids the loop of accepting fixes I don’t understand and hoping one of them works.

      Common Failure Modes and Guardrails

      After enough cycles, I started noticing the same failures repeating.

      Here’s a short checklist I keep in mind:

      Context drift in long chats

      Long conversations cause the model to forget earlier decisions.

      The fix: keep conversations short and scoped. One chat for design, one for part A, one for part B. When a thread feels messy, ask the model to summarize where you are, then start a fresh chat with that summary at the top.

      Wrong API or version

      Models are trained on data up to a certain point.

      They sometimes write code for an older version of a library or generate methods that don’t exist. For anything new or fast-moving, I assume the suggestion might be wrong and verify against official docs. I also ask the model to state its assumptions: “Which version are you assuming?”

      If the answer doesn’t match my setup, I rewrite it myself.

      Off-rails debugging loops

Once a model gets stuck on a bad idea, it tends to dig deeper. It proposes variations of the same broken fix, sometimes reintroducing bugs from earlier attempts.

The fix: the two-strikes rule from Step 6. Stop, start a fresh chat, restate the problem with better context, and ask a narrower question.

      Code quality drift

      AI rarely produces well-structured code by default.

      It’s good at “something that runs,” less good at “something I’ll want to maintain in three months.”

      I fix this by baking quality into the request: ask for tests, ask for a summary of what changed and why, and nudge toward structure (“refactor this into smaller functions,” “follow the pattern in file X”).
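As a sketch of what the "refactor this into smaller functions" nudge buys you (every name here is hypothetical), a monolithic report function might come back split into testable steps:

```python
# Each step is extracted so it can be tested and reused on its own,
# instead of living inside one function that parses, filters, and formats.

def parse_lines(text: str) -> list[str]:
    """Split raw input into non-empty, stripped lines."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def keep_errors(lines: list[str]) -> list[str]:
    """Keep only lines that look like error entries."""
    return [line for line in lines if line.startswith("ERROR")]

def format_report(lines: list[str]) -> str:
    """Render a count header followed by one line per entry."""
    return f"{len(lines)} errors\n" + "\n".join(lines)

def error_report(text: str) -> str:
    """The original entry point, now a thin pipeline."""
    return format_report(keep_errors(parse_lines(text)))

print(error_report("INFO ok\nERROR disk full\n\nERROR timeout\n"))
```

Each small function is something I can ask for a test on, which loops back to Step 5: structure and testability reinforce each other.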

      Over-reliance

      This one has nothing to do with the model and everything to do with me.

      If I let AI handle every decision, my own instincts start to dull. I push back by keeping important decisions human-owned, occasionally doing small tasks without AI, and asking the model to teach as well as do: explain its reasoning, compare approaches, and talk through trade-offs.

      The goal is not just “ship faster” but “ship faster and understand what I shipped.”

      Closing Thoughts

      The workflow I use comes back to a simple loop:

      Context → Plan → Code → Review → Test → Iterate

      • I give the model enough context to see the real problem.
      • I ask it to plan before writing code.
      • I generate and edit in small steps.
      • I review the output, often with AI-assisted tools.
      • I ask for tests right away.

      And when something breaks, I debug, refine, and repeat until it works.

      Tools and models will change. Pricing will change. New products will appear. What survives is your method: how you give context, how you break work into steps, when to use a model, and when to rely on yourself.

      If this newsletter did its job, you now have a clearer picture of what coding with AI looks like in practice.

      Some days it’s a sprint… Some days it’s a wrestling match. But it has changed how I work. I ship features I wouldn’t have attempted before, and I feel less stuck when learning a new stack or working through an unfamiliar codebase.

      The goal is not just to ship faster, but to ship faster and understand what I shipped.

      Neuroscience has a species problem

      Mike's Notes

      A great example of how science could advance.

      Resources

      Repository

      • Home > Ajabbi Research > Library > Subscriptions > The Transmitter
      • Home > Handbook > 

      Last Updated

      04/03/2026

      Neuroscience has a species problem

      By: Nanthia Suthana
      The Transmitter: 16/02/2026

Nanthia Suthana is professor of neurosurgery, biomedical engineering and neurobiology at Duke University. Her lab studies the neural mechanisms of human memory, emotion and spatial navigation using intracranial recordings, neuromodulation and wearable technologies during real-world behavior. Her work bridges basic neuroscience and clinical translation, with the goal of developing novel treatments for neurological and psychiatric disorders. Suthana earned her B.S. and Ph.D. at the University of California, Los Angeles. She has led interdisciplinary research programs integrating neuroscience, engineering and clinical practice, with an emphasis on studying brain function in naturalistic settings.

      If our field is serious about building general principles of brain function, cross-species dialogue must become a core organizing principle rather than an afterthought.

      Neuroscience has never been richer in data. Laboratories now generate detailed recordings of neural activity, behavior and physiology across species at scales unimaginable a decade ago. In rodents, researchers can monitor thousands of neurons simultaneously across distributed circuits during behavior. In humans, they can record from deep brain structures during ambulatory, real-world behavior, integrated with wearable sensors and linked to clinical symptoms and subjective experience. The field has access to neural signals spanning orders of magnitude in space, time and biological complexity.

      Yet despite this abundance, neuroscience remains deeply organized along species lines. Animal and human researchers often operate within separate conceptual frameworks, attend different conferences and develop theories that rarely confront data across species. This separation is no longer a minor inconvenience but a growing liability. The problem is not simply that cross-species translation is difficult; it is that the field has largely accepted this difficulty rather than treating it as a central scientific challenge. Neuroscience has also struggled to confront the fact that different species often tell different stories.

      As a result, neuroscience’s primary limitation today is not a lack of data or tools, but persistent fragmentation across model systems, recording modalities and analytic traditions. Findings are typically interpreted within species- and technique-specific frameworks, with little pressure to explain when, how or why neural principles should generalize across organisms. Researchers acknowledge differences but rarely use them to constrain or revise theory.

If neuroscience is serious about building general principles of brain function, cross-species dialogue must become a core organizing principle rather than an afterthought. Differences between species should be treated as informative constraints that refine theory, not as inconsistencies to be explained away. Overcoming this divide won’t be trivial, but there are steps we can take now to begin changing our culture.

A major source of the field’s fragmentation lies in how it treats different neural signals. Researchers focused on single-unit activity often prioritize spikes as the fundamental currency of computation, treating population-level signals, such as local field potentials, as secondary or ambiguous. Others emphasize population dynamics and view single-neuron activity as overly local or insufficiently informative for translational applications. Similar divisions exist across recording and manipulation modalities, from electrophysiology and calcium imaging to hemodynamic and electrical stimulation-based approaches. Though these distinctions reflect real technical constraints, they have hardened into conceptual boundaries that shape which questions are asked and which forms of evidence are considered explanatory.

      These boundaries persist across species, even as many of the technological constraints that once justified them have faded. As a researcher studying the human brain using both single-unit and local field potential recordings, I am acutely aware that these signals offer distinct and complementary views of neural activity, each with its own strengths and limitations. In humans, it’s now possible to directly record brain activity during behaviors such as walking and natural navigation, enabling experiments similar to those in animals. Single-unit sampling in humans is sparse, however, so field potentials are often the primary signal available for linking neural activity to ethologically relevant behavior. 

      Differences between species should be treated as informative constraints that refine theory, not as inconsistencies to be explained away.

      High-density single-unit recordings in animal models are therefore essential for understanding how population-level signals relate to single-neuron activity. Yet even when spikes and field potentials are recorded simultaneously in animal studies, researchers often prioritize single-unit analyses, reflecting long-standing theoretical preferences. These preferences limit opportunities to connect neural activity across scales and species. Rather than optimizing theories around a single signal or model organism, the field would benefit from frameworks designed to link signals across scales, using the strengths of each system to offset the limitations of others.

      Theta oscillations, a brain rhythm typically defined as 4 to 8 hertz, provide a clear example of how this fragmentation plays out in practice. The details of theta matter less here than what its cross-species differences reveal about how the field handles disagreement. In rodents, hippocampal theta activity during locomotion appears to be largely continuous, a regularity that has shaped decades of influential models of navigation, memory encoding and temporal organization. In humans, however, hippocampal theta activity occurs in brief, intermittent bouts, often linked to specific behavioral or cognitive events rather than ongoing movement. These findings have been replicated across laboratories and tasks and are supported by converging evidence from bats and nonhuman primates. 

      When these findings emerged, they were initially met with skepticism. Rather than asking what the differences might imply for theory, the dominant response was to question whether the signals were truly comparable. As evidence accumulated over time, skepticism softened. But theories that attempt to meaningfully integrate the two types of theta are still largely lacking. 

      Nearly a decade later, rodent-derived models continue to assume sustained oscillatory structure, although bat, nonhuman primate and human findings are treated as species-specific implementation details rather than as constraints on general principles. For the most part, scientists have not tried to uncover why different species recruit theta in distinct ways, what computational roles these patterns serve, or whether continuous and intermittent theta reflect complementary solutions to shared navigational and memory demands, or distinct modes of environmental sampling, such as whisking, echolocation or eye movements. 

      This pattern illustrates a broader issue in neuroscience. With enough evidence, researchers tend to accept cross-species differences, but they rarely use these differences to refine or revise theory. Instead of asking why hippocampal theta is continuous in rodents but burst-like in nonhuman primates and humans, or what computational advantages these different regimes might confer, the field has largely compartmentalized the findings, enabling parallel literatures to proceed with little pressure to reconcile them.

      Yet these differences are precisely where theoretical progress should occur. Intermittent hippocampal theta suggests a fundamentally different mode of coordinating neural activity, one in which rhythmic structure is recruited transiently to gate information, mark boundaries between events, or coordinate distributed circuits at specific moments rather than continuously. Ignoring these implications does not preserve existing theories; it limits their scope and explanatory power. 

      Cultural asymmetries within the field reinforce this divide, a pattern I observe as a researcher who studies the human brain. When human data align with animal model data, they are welcomed as validation. When they do not, they face higher evidentiary thresholds and greater skepticism. This skepticism is often justified by appeals to sample size, even though nonhuman primate studies, long viewed as theoretically foundational, have historically relied on similarly small cohorts. Such asymmetries insulate animal-derived theories from challenge and weaken the role of human research as a source of theoretical insight rather than mere applied confirmation.

      For much of my career, I have watched this divide only perpetuate and deepen. I have attended conferences where animal research overwhelmingly shaped the agenda and human work was treated as secondary. At human-focused meetings, the reverse was true, with few researchers whose primary work involved non-primate species having influence over the event. These experiences shape not only which conversations happen but which questions young scientists learn to ask. The result has been the emergence of parallel scientific cultures that rarely engage deeply with each other.

      Overcoming this divide, and developing theories that incorporate contrasting data, will require shifts in how scientists are trained, how conferences are structured and how cross-species work is valued within academic culture. It will also require theoretical frameworks and models that are explicitly tested and revised across species rather than optimized within a single model system. Finally, funding, review and publication practices must reward work that treats cross-species differences as opportunities for insight rather than liabilities to be minimized.

      The Unbearable Joy of Sitting Alone in A Café

      Mike's Notes

      Me too. 😀

      Resources

      Repository

      • Home > Ajabbi Research > Library >
      • Home > Handbook > 

      Last Updated

      03/03/2026

      The Unbearable Joy of Sitting Alone in A Café

      By: Candost Dagdeviren
      Candost: 05/01/2026

      Writer, Software Engineering Leader, Problem Solver.

      It’s contradictory to sit alone in a café. It’s against the reason cafés exist.

      They are designed as meeting spaces. There is no table with a single chair. Even the ones placed right by the window with high seating are big tables with many chairs.

      Cafés are community spaces. Most go there to see their loved ones, friends, or colleagues.

      You find only a few people sitting alone. Most are buried in their laptops, working hard to make a living in their own worlds, whatever world they have.

      I rarely do that.

When I took time off from work, I chose a staycation, unlike most of my friends, who visited Japan in 2025.

      When I heard their experiences, I was jealous. When I told them my staycation plans of doing nothing for four weeks, they were jealous.

      While off work, I wanted to slow time down as much as I could. The best way to freeze time, I read somewhere, is to get a dog. Luckily, I have one already. So, I took long walks with my dog.

      What used to feel like 10 minutes between breakfast and lunch while working became a full-blown day. Even though I was spending two hours walking my dog instead of a 30-40 minute rush, it felt like an eternity. A peaceful eternity.

      On the second day, I decided to leave my phone at home, so I lived those two hours to the fullest. I didn’t take any device that could connect me to the internet or to other people.

      I was nervous.

      But all the anxiety evaporated after 30 minutes.

      I felt free, so to speak.

      It wasn’t that nobody could reach out to me that felt like an escape; it was that I couldn’t reach out to anyone or anything that caused the turmoil.

      I had no possibility to text anyone. No possibility to watch or read. No chance to look up anything to fulfill my curiosity.

      My mind was alone after a long time.

There were a few moments when I reached into my pocket to take out my phone and look up something I was curious about. My phone wasn’t there.

      I smiled. Every. Single. Time.

      On the second day, I randomly walked into a neighborhood café. I ordered an americano with a double shot of espresso.

      Sipping a hot americano feels different when you are in a rush to catch a subway. Its purpose is to wake you up. A sip from that little hole in a single-use cap burns my tongue every time. I despise that.

      With a porcelain cup, you don’t have that. Coffee changes its purpose. It becomes a pleasure.

      I sat down with a proper cup of americano. My dog crawled under the table.

      I was sitting alone in a café with a dog that had crawled under the table without any electronics that could distract me.

      Distract me from, basically, nothing.

      It was pure delight. Every element. Or rather, the non-existence of any element. No phone. No headphones. No tablet. No laptop.

      My mind was just drifting with the chatter in the café. I left myself to the flow.

      When you let your thoughts wander, they take you on a journey you’ll never think possible. You reflect on the smallest details of your fast life. Your brain absorbs all the mistakes you’ve made. You accept that you can’t change failures anymore, as much as you feel guilty.

      You might as well not worry about them and focus on what you can change: what you do now. And what you will do next.

      Nothing else.

      The next day, I left my phone at home again and decided to stop by the same café. I was lucky; I sat down at the same table.

      Sitting alone in a café without distractions reveals a lot about people. The same people you pass by in a split second while rushing from home to work, from a meeting to a meeting. The invisible suddenly appears right in front of you. People don’t go away in two seconds. They stay. They sip a coffee. They talk with others, laugh, cry, and worry. Oh, worry.

      Worry is only visible in people’s eyes. Eyes are the channel of the heart. You have to close your ears and look at people’s eyes to see their hearts.

      You realize that looking into eyes is frightening—both for you and the other person. You try to avoid it, but eventually make eye contact because nobody is physically moving anywhere.

      As none of you are passing by in a second, you mimic looking at something else. They continue their conversation. But you saw their worry, and you can’t help but try to understand.

      You leave the café to avoid making things awkward.

      I went there the next day. This time, my table was occupied. I don’t know when it became my table. But it felt like that. I found another one. It was closer to the staff.

      Sitting alone in a café without distractions shows you how a café works. You never contemplate how they operate behind that giant coffee machine while you’re waiting for your coffee before you run to catch the next bus, tram, subway, or taxi. You never ruminate when you sip from a single-use cup and burn your tongue.

      You notice how the staff circulates porcelain cups, from dirty to clean, to the top of the coffee machine. You observe the staff’s reactions to each customer. You try to analyze if someone is a regular by noticing how the staff talks.

You wonder whether they consider you a regular, since you’ve been there for the last couple of days. Or whether they call you a creepy guy with a dog. You will never know. You’re not fine with never knowing.

      You promise yourself to come the next day to observe how the staff talks to you.

I again went to the same café. Unlucky me. Different staff were working that day. Yet I ordered the same: a cup of americano with a double shot of espresso.

      Sitting alone in a café without distractions, with a dog that had crawled under the table, brings a light to a truth: you can’t control or influence other people’s thoughts and feelings, no matter what you do. Staff may think of you as a weirdo with a dog; your friends might want to be in your place; your family might be nervous because they can’t reach out to you.

      You know you can’t change any of those unless you change who you are. It makes you feel alone and powerless.

      You are alone and powerless. You encounter a deep challenge.

      The next day, I didn’t go to the café. I instead took an even longer walk. I went there the following day, knowing I had faced that challenge in my longest walk.

      Sitting alone in a café without distractions shows everyone you’re alone.

      It’s an alone act.

      A scary but powerful one.

Many avoid it at all costs. That’s why everybody looks at you with wondering eyes. They are afraid of your powerful joy. They can’t grasp why someone would do this to themselves. They are hesitant but are thinking of doing the same.

      Then you realize you’re planting thoughts in people’s minds that you can’t control. Feelings are feelings. Thoughts are thoughts.

Just at the moment you think you are alone again, you see another weirdo across the café sitting alone without distractions. That weirdo is looking at your dog, curled croissant-shaped under the table. The weirdo is enjoying the moment, while your dog is on an adventure in her second dream.

You smile. You know you’re not alone. You are one weirdo sitting at a distance from another. You know there are many.

Maybe one of them is reading this and feeling heard. Perhaps one will never see this and will always feel alone. But it takes only one look around. You glance over the café and leave with a smile.

      The next day, I went there again. This time, I put in an intentional distraction. A good one.

      Sitting alone in a café without distractions only gets better when there is something to write on. Not with a keyboard. You must use your single hand to write, not two. Ideally, with a pen on paper.

      The pen is meant to slow you down. The words shouldn’t land on paper at the speed of thinking or even talking.

      The writing must hurt your wrist or hand. It must turn into a burden. That pain is a signal telling you that you have written long enough. Maybe you wrote only five lines. Perhaps one thousand.

      It doesn’t matter.

      You take a break.