Pipi Engines to build, deploy and manage in the cloud

Mike's Notes

Finally got onto this job. Should be fun. 😊

Update 1

This is Infrastructure as Code (IaC). Get Pipi to generate the Terraform script.

Update 2

Don't use HCP Terraform Free Tier; it is being discontinued by IBM. Use an open-source product that Pipi can use internally without restriction to orchestrate cloud infrastructure.

  • OpenTofu + DIY Pipeline (Cloud Native Computing Foundation)
  • Terramate

The criticisms raised by Terramate

"Adopting Infrastructure as Code creates a new world of challenges

Terraform and OpenTofu lack standard code organization patterns, leading to code complexity, long-running pipelines, poor collaboration, drift, and high blast radii. Most vendors at best only partially mitigate these issues.

    • Poor environment management
    • Large blast radius
    • Config sprawl
    • Complex pipelines
    • Countless drift
    • Lack of observability"

Pipi 9

The issues raised by Terramate could be handled natively by a yet-to-be-built Pipi Engine. Pipi excels at using standard code organisation patterns.


Last Updated

25/01/2026

Pipi Engines to build, deploy and manage in the cloud

By: Mike Peters
On a Sandy Beach: 30/11/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

The problem

The open-source workspaces under development are designed to be shared on GitHub/GitLab and hosted in production on various cloud platforms. Eventually, private cloud and on-premises deployments will be included, provided Pipi 9 can obtain secure access. In that case, hybrid clouds should also be fine.

Pipi 9 needs to be able to automatically build, deploy, and manage this process, either directly or via third-party tools such as GitHub Actions.
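As a sketch of what the third-party route might look like, here is a minimal GitHub Actions workflow that checks out a repository and runs an OpenTofu plan. The workflow name, trigger branch, and job layout are illustrative assumptions, not anything Pipi currently emits; actions/checkout and opentofu/setup-opentofu are the published actions.

```yaml
# Hypothetical workflow sketch; names and triggers are placeholders.
name: pipi-deploy
on:
  push:
    branches: [main]

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # fetch the workspace repository
      - uses: opentofu/setup-opentofu@v1   # install the tofu CLI
      - run: tofu init                     # download provider plugins
      - run: tofu plan -out=tfplan         # compute the change set; nothing is applied
```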

Agent Engines

An early Pipi 6 module from 2016 that catalogued cloud services was converted to a Pipi 7 microservice in 2018. Yesterday, this was imported into Pipi 9 and is being used to create these agents, which act as autonomous engines.

  • Platform Engine (plt) - this one is the commander on the battlefield. I will get this finished first.

A dedicated agent engine has been created for each cloud platform. They have yet to be differentiated.

  • Apple Engine (ale)
  • AWS Engine (aws)
  • Azure Engine (azu)
  • Digital Ocean Engine (dgo)
  • Google Cloud Engine (ggc)
  • IBM Engine (ibm)
  • Meta Engine (met)
  • Oracle Engine (ora)
  • (More will be added later; all are welcome)

Then I remembered: Pipi 9 also has some self-deployment capacity, so generalising the existing capacity for building, deploying, and managing to share with the other engines makes sense.

  • Pipi Engine (pip) - for deploying to the closed data centre in a Boxlang/JRE host environment.

How

Most agents start like a Stem Cell. They are then modified to perform a specific job and can evolve over time. That's what I'm doing now.

I started last night in the GCP console, looking into how to reverse-engineer the APIs and build a data model to drive the API calls. Looks straightforward. The Gemini 3 chat is a big help and saves significant time.
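To illustrate the data-model idea, the sketch below renders a catalogue entry into a concrete API request. ServiceCall and build_request are hypothetical names invented for this example; they are not Pipi internals or Google's client library, and the endpoint shown is just the public Compute Engine REST path shape.

```python
# Hypothetical sketch: a declarative catalogue entry drives a generic API call.
from dataclasses import dataclass

@dataclass
class ServiceCall:
    provider: str   # e.g. "gcp"
    endpoint: str   # REST path template with named placeholders
    method: str     # HTTP verb
    params: dict    # values substituted into the path template

def build_request(call: ServiceCall) -> dict:
    """Render a catalogue entry into a concrete request description."""
    return {
        "method": call.method,
        "url": call.endpoint.format(**call.params),
    }

req = build_request(ServiceCall(
    provider="gcp",
    endpoint="/compute/v1/projects/{project}/zones/{zone}/instances",
    method="GET",
    params={"project": "demo", "zone": "us-central1-a"},
))
print(req["url"])  # /compute/v1/projects/demo/zones/us-central1-a/instances
```

The point of the shape is that adding a new service becomes a data entry in the catalogue, not new code.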

But wait, there's more.

Each agent engine is complex and can incorporate other agents like LEGO bricks. Example: API Engine (api) and YAML Engine (yml). Just as in a living biological cell, everything is structured, in flux and self-regulating in response to its environment and internal processes. Other agent types, like primitives, are not complex.
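A rough sketch of the LEGO-brick idea, assuming a hypothetical Engine class (not Pipi's actual internals): a complex engine holds nested engines and can enumerate everything it is built from.

```python
# Illustrative sketch of engine composition; the class is hypothetical.
class Engine:
    def __init__(self, code, parts=None):
        self.code = code
        self.parts = parts or []  # nested agent engines, LEGO-style

    def codes(self):
        """Flatten this engine and all nested engines into a list of codes."""
        out = [self.code]
        for part in self.parts:
            out.extend(part.codes())
        return out

# An AWS engine incorporating an API engine and a YAML engine, as in the text.
aws = Engine("aws", parts=[Engine("api"), Engine("yml")])
print(aws.codes())  # ['aws', 'api', 'yml']
```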

Free-tier experiments

The Engines will initially play in the free tier of the different cloud providers. The first experiments could use GitHub Actions, leveraging the sample code identified and building on it.

Known available free tiers (more to come)

  • Alibaba Cloud
  • AWS
  • Azure
  • Cloudflare
  • Container Hosting Service
  • Couchbase
  • DigitalOcean
  • Google Cloud
  • Hetzner Cloud
  • IBM Cloud
  • Linode
  • Netlify
  • OpenShift
  • Oracle Cloud
  • OVHcloud
  • Render
  • Salesforce
  • Tencent Cloud
  • Vercel
  • Wasabi
  • Zeabur

Cost $$$$$$$ 😎😎

The free-tier usage limits need to be locked to prevent Pipi from burning through lots of cash.
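A minimal sketch of such a lock, with made-up limit names and values (real free-tier quotas vary by provider and change over time):

```python
# Hypothetical guard; limits are placeholders, not any provider's real quota.
FREE_TIER_LIMITS = {"compute_hours": 750, "storage_gb": 5}

def within_free_tier(usage: dict) -> bool:
    """Refuse any action whose projected usage exceeds a free-tier limit."""
    return all(usage.get(key, 0) <= limit for key, limit in FREE_TIER_LIMITS.items())

print(within_free_tier({"compute_hours": 100, "storage_gb": 3}))  # True
print(within_free_tier({"compute_hours": 800}))                   # False
```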

Cloud credits

If Ajabbi can obtain cloud credits, it would be possible to exercise the more expensive resources and confirm everything works for future customers. It would also let customers choose their preferred cloud provider without barriers.

No Series B

Ajabbi is a bootstrap start-up for public good (with a future foundation) and will have no investors, so there will be no Series B. Unfortunately, these cloud providers are obsessed with giving more credits only to Series B start-ups. Go figure.

Stock numbers

Use more agent engines as the workload increases. If, for example, the IBM Engine (ibm) can handle 1,000 enterprise customers, and 10,000 enterprise customers want IBM cloud setups, then the Platform Engine (plt) can get the Factory Engine (fac) to breed more IBM Engines (ibm) to nibble on the work. I won't know the actual stocking ratio until field testing under load. However, it won't be a problem. Some huge customers will need a dedicated agent engine or two each. All of this is possible.
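The stocking arithmetic above is a simple ceiling division; the function name is illustrative:

```python
import math

def engines_needed(customers: int, capacity_per_engine: int) -> int:
    """How many engine instances the Factory Engine would need to breed."""
    return math.ceil(customers / capacity_per_engine)

print(engines_needed(10_000, 1_000))  # 10
```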

Developer Accounts

The Workspaces for Developers, currently under development, will enable developers to help configure these agent engines and keep them up to date. They will also allow any platform to add itself by submitting a request for a dedicated agent engine, along with developers who will help with configuration and user documentation.


From Gemini 3

A Terraform script, more accurately called a Terraform configuration file, uses the declarative HashiCorp Configuration Language (HCL) or JSON to define infrastructure that Terraform will provision, manage, and version across various cloud and on-premises providers. Terraform is an infrastructure as code (IaC) tool, not a programming or scripting language in the traditional, procedural sense. 

Core Concepts

  • Declarative Language: You describe the desired end state of your infrastructure, and Terraform automatically determines the steps required to achieve that state, including dependencies between resources.
  • Providers: Terraform interacts with cloud providers (like AWS, Azure, Google Cloud, etc.) and other services through "providers," which are plugins that wrap existing APIs.
  • Resources: These represent infrastructure objects, such as virtual machines, subnets, or DNS entries, and are defined within resource blocks in the configuration file.
  • State: Terraform maintains a terraform.tfstate file to track the current state of your managed infrastructure, which is crucial for determining what changes to make during an apply operation. 

A Basic Terraform Configuration Example (AWS)

The configuration below defines a simple AWS EC2 instance. This would typically be saved as a .tf file (e.g., main.tf): 

hcl
# Define the required provider and minimum Terraform version
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  required_version = ">= 1.7"
}
# Configure the AWS provider with a specific region
provider "aws" {
  region = "us-west-2"
}
# Define an AWS VPC resource
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "example-vpc"
  }
}
# Define an output value to display the VPC ID after creation
output "vpc_id" {
  description = "The ID of the created VPC"
  value       = aws_vpc.example.id
}


Key Workflow Commands

Once you have a configuration file, you use the Terraform CLI to manage your infrastructure: 

  • terraform init: Prepares your working directory, downloading the necessary provider plugins.
  • terraform plan: Shows a detailed execution plan of what changes Terraform will make to your infrastructure to match your configuration.
  • terraform apply: Executes the planned actions to create, update, or delete infrastructure resources.
  • terraform destroy: Deletes all the infrastructure resources managed by your Terraform configuration. 

For more information, refer to the official HashiCorp Terraform documentation. 
