Types of AI agents

Mike's Notes

I only built one type of AI agent. I didn't realise there were other types. Again, learned from the MLOps community.

By the way, I like how IBM organise their website information. Very clean and tidy, and no crap in their HTML when I copied it to this post. Very impressive and unusual.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

21/11/2025

Types of AI agents

By: Cole Stryker
IBM: Accessed 21/11/2025

Staff Editor, AI Models, IBM Think. Cole Stryker is an editor and writer based in Los Angeles, California. He's been telling stories about AI with IBM since 2017.

Artificial intelligence (AI) has transformed the way machines interact with the world, enabling them to perceive, reason and act intelligently. At the core of many AI systems are intelligent agents, autonomous entities that make decisions and perform tasks based on their environment.

These agents can range from simple rule-based systems to advanced learning systems powered by large language models (LLMs) that adapt and improve over time.

AI agents are classified based on their level of intelligence, decision-making processes and how they interact with their surroundings to reach desired outcomes. Some agents operate purely on predefined rules, while others use learning algorithms to refine their behavior.

There are 5 main types of AI agents: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents and learning agents. Each type has distinct strengths and applications, ranging from basic automated systems to highly adaptable AI models.

All 5 types can be deployed together as part of a multi-agent system, with each agent specializing in the part of the task for which it is best suited.

Simple reflex agents

A simple reflex agent is the most basic type of AI agent, designed to operate based on direct responses to environmental conditions. These agents follow predefined rules, known as condition-action rules, to make decisions without considering past experiences or future consequences.

Reflex agents perceive the current state of the environment through sensors and take action based on a fixed set of rules.

For example, a thermostat is a simple reflex agent that turns on the heater if the temperature drops below a certain threshold and turns it off when the desired temperature is reached. Similarly, an automatic traffic light system changes signals based on traffic sensor inputs, without remembering past states.
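
To make the condition-action idea concrete, here is a minimal Python sketch of a thermostat-style reflex agent (my illustration, not IBM's; the temperature thresholds are arbitrary assumptions):

# A simple reflex agent: a thermostat.
# The action depends only on the current percept (temperature);
# the agent keeps no memory of past states.

def thermostat_agent(temperature_c: float) -> str:
    """Map the current percept straight to an action via fixed rules."""
    if temperature_c < 18.0:   # assumed lower threshold
        return "heater_on"
    if temperature_c > 22.0:   # assumed upper threshold
        return "heater_off"
    return "no_op"             # within the comfort band: do nothing

print(thermostat_agent(15.5))  # heater_on
print(thermostat_agent(23.0))  # heater_off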

Simple reflex agents are effective in structured and predictable environments where the rules are well-defined. However, they struggle in dynamic or complex scenarios that require memory, learning or long-term planning.

Because they do not store past information, they can repeatedly make the same mistakes if the predefined rules are insufficient for handling new situations.


Model-based reflex agents

A model-based reflex agent is a more advanced version of the simple reflex agent. While it still relies on condition-action rules to make decisions, it also incorporates an internal model of the world. This model helps the agent track the current state of the environment and understand how past interactions may have affected it, enabling it to make more informed decisions.

Unlike simple reflex agents, which respond solely to current sensory input, model-based reflex agents use their internal model to reason about the environment's dynamics and make decisions accordingly.

For instance, a robot navigating a room might not just react to obstacles in its immediate path but also consider its previous movements and the locations of obstacles that it has already passed.
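
As a rough sketch of that idea (my own illustration, with invented names), the robot can record obstacle positions in an internal model and consult that model alongside the current percept:

class ModelBasedRobot:
    """Condition-action rules plus an internal model of obstacle positions."""

    def __init__(self):
        self.known_obstacles = set()  # the internal world model

    def act(self, ahead, obstacle_ahead):
        if obstacle_ahead:
            self.known_obstacles.add(ahead)  # update the model from the percept
        if ahead in self.known_obstacles:    # decide from memory, not just the percept
            return "turn"
        return "forward"

robot = ModelBasedRobot()
print(robot.act((0, 1), obstacle_ahead=True))   # turn (obstacle seen now)
print(robot.act((0, 1), obstacle_ahead=False))  # turn (obstacle remembered)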

This ability to track past states enables model-based reflex agents to function more effectively in partially observable environments. They can handle situations where the context needs to be remembered and used for future decisions, making them more adaptable than simpler agents.

However, while model-based agents improve flexibility, they still lack the advanced reasoning or learning capabilities required for truly complex problems in dynamic environments.

Goal-based agents

A goal-based agent extends the capabilities of a simple reflex agent by incorporating a proactive, goal-oriented approach to problem-solving.

Unlike reflex agents that react to environmental stimuli with predefined rules, goal-based agents consider their ultimate objectives and use planning and reasoning to choose actions that move them closer to achieving their goals.

These agents operate by setting a specific goal, which guides their actions. They evaluate different possible actions and select the one most likely to help them reach that goal.

For instance, a robot designed to navigate a building might have a goal of reaching a specific room. Rather than reacting to immediate obstacles only, it plans a path that minimizes detours and avoids known obstacles, based on a logical assessment of available choices.
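
One way to sketch this kind of planning (my illustration, with an invented toy building) is breadth-first search over a map of rooms, returning the shortest path to the goal:

# A goal-based agent plans a sequence of actions toward an explicit goal,
# instead of reacting step by step.
from collections import deque

def plan_path(start, goal, neighbours):
    """Breadth-first search: return a shortest path from start to goal."""
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        room, path = frontier.popleft()
        if room == goal:
            return path
        for nxt in neighbours(room):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # goal unreachable

# A toy building: rooms and corridors as an adjacency map.
corridors = {"lobby": ["hall"], "hall": ["lobby", "lab", "office"],
             "lab": ["hall"], "office": ["hall"]}
print(plan_path("lobby", "office", lambda r: corridors[r]))
# ['lobby', 'hall', 'office']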

The goal-based agent's ability to reason allows it to act with greater foresight compared to simpler reflex agents. It considers future states and their potential impact on reaching the goal.

However, goal-based agents can still be relatively limited in complexity compared to more advanced types, as they often rely on preprogrammed strategies or decision trees for evaluating goals.

Goal-based agents are widely used in robotics, autonomous vehicles and complex simulation systems where reaching a clear objective is crucial, but real-time adaptation and decision-making are also necessary.

Utility-based agents

A utility-based agent goes beyond simple goal achievement by using a utility function to evaluate and select actions that maximize overall benefit.

While goal-based agents choose actions based on whether they fulfill a specific objective, utility-based agents consider a range of possible outcomes and assign a utility value to each, helping them determine the optimal course of action. This allows for more nuanced decision-making, particularly in situations where multiple goals or tradeoffs are involved.

For example, a self-driving car might face a decision to choose between speed, fuel efficiency and safety when navigating a route. Instead of just aiming to reach the destination, it evaluates each option based on utility functions, such as minimizing travel time, maximizing fuel economy or ensuring passenger safety. The agent selects the action with the highest overall utility score.
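
A minimal sketch of that utility calculation (my own; the routes and weights are invented for illustration) scores each candidate and picks the one with the highest score:

# A utility-based agent: score every option, then take the argmax.
routes = [
    {"name": "motorway", "time_min": 25, "fuel_l": 3.0, "risk": 0.2},
    {"name": "city",     "time_min": 40, "fuel_l": 2.2, "risk": 0.1},
]

def utility(route):
    # Higher is better: penalise travel time, fuel use and risk.
    # The weights are assumptions, not measured values.
    return -1.0 * route["time_min"] - 2.0 * route["fuel_l"] - 50.0 * route["risk"]

best = max(routes, key=utility)
print(best["name"])  # motorway: -41.0 beats city's -49.4 under these weights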

An e-commerce company might employ a utility-based agent to optimize pricing and recommend products. The agent evaluates various options, such as sales history, customer preferences and inventory levels to make informed decisions on how to price items dynamically.

Utility-based agents are effective in dynamic and complex environments, where simple binary goal-based decisions might not be sufficient. They help balance competing objectives and adapt to changing conditions, ensuring more intelligent, flexible behavior.

However, creating accurate and reliable utility functions can be challenging, as it requires careful consideration of multiple factors and their impact on decision outcomes.

Learning agents

A learning agent improves its performance over time by adapting to new experiences and data. Unlike other AI agents, which rely on predefined rules or models, learning agents continuously update their behavior based on feedback from the environment. This allows them to enhance their decision-making abilities and perform better in dynamic and uncertain situations.

Learning agents typically consist of 4 main components:

  • Performance element: Makes decisions based on a knowledge base.
  • Learning element: Adjusts and improves the agent's knowledge based on feedback and experience.
  • Critic: Evaluates the agent's actions and provides feedback, often in the form of rewards or penalties.
  • Problem generator: Suggests exploratory actions to help the agent discover new strategies and improve its learning.

For example, in reinforcement learning, an agent might explore different strategies, receiving rewards for correct actions and penalties for incorrect ones. Over time, it learns which actions maximize its reward and refines its approach.
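
As a toy illustration of the four components working together (my sketch, not from the article; the reward probabilities are invented), here is an epsilon-greedy bandit learner:

# A tiny learning agent: an epsilon-greedy two-armed bandit.
# q is the performance element's knowledge base; the running-mean update
# is the learning element; the numeric reward plays the critic's role;
# the occasional random pick plays the problem generator's role.
import random

q = {"a": 0.0, "b": 0.0}            # estimated value of each action
counts = {"a": 0, "b": 0}
true_reward = {"a": 0.3, "b": 0.7}  # hidden from the agent

for _ in range(1000):
    if random.random() < 0.1:               # explore (problem generator)
        action = random.choice(list(q))
    else:                                    # exploit (performance element)
        action = max(q, key=q.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0  # critic
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]  # learning element

print(max(q, key=q.get))  # 'b' with high probability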

Learning agents are highly flexible and capable of handling complex, ever-changing environments. They are useful in applications such as autonomous driving, robotics and virtual assistants that help human agents in customer support.

The ability to learn from interactions makes learning agents valuable in areas such as persistent chatbots and social media, where natural language processing (NLP) analyzes user behavior to predict and optimize content recommendations.

Multi-agent systems

As AI systems become more intricate, the need for hierarchical agents arises. These agents are designed to break down complex problems into smaller, manageable subtasks, making them easier to handle in real-world scenarios. Higher-level agents focus on overarching goals, while lower-level agents handle more specific tasks.

An AI orchestration layer that integrates the different types of AI agents can make for a highly intelligent and adaptive multi-agent system capable of managing complex tasks across multiple domains.

Such a system can operate in real time, responding to dynamic environments while continuously improving its performance based on past experiences.

For example, in a smart factory, the management system might involve reflexive autonomous agents handling basic automation by responding to sensor inputs with predefined rules. These agents help ensure that machinery reacts instantly to environmental changes, such as shutting down a conveyor belt if a safety hazard is detected.

Meanwhile, model-based reflex agents maintain an internal model of the world, tracking the internal state of machines and adjusting their operations based on past interactions, such as recognizing maintenance needs before failure occurs.

At a higher level, goal-based agents drive the factory’s specific goals, such as optimizing production schedules or reducing waste. These agents evaluate possible actions to determine the most effective way to achieve their objectives.

Utility-based agents further refine this process by considering multiple factors, such as energy consumption, cost efficiency and production speed, selecting actions that maximize expected utility.

Finally, learning agents continuously improve factory operations through reinforcement learning and machine learning (ML) techniques. They analyze data patterns, adapt workflows and suggest innovative strategies to optimize manufacturing efficiency.

By integrating all 5 types of AI agents, this AI-powered orchestration enhances decision-making processes, streamlines resource allocation and minimizes human intervention, leading to a more intelligent and autonomous industrial system.
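
A minimal sketch of such an orchestration (my own, with invented rules and weights): a reflex safety agent gets first say, a goal-based planner proposes candidate actions, and a utility function chooses among them:

# A tiny multi-agent hierarchy: reflex safety check, then goal-based
# planning, then utility-based selection. All names are illustrative.

def safety_reflex(sensors):
    # Condition-action rule: hazard means immediate stop, no deliberation.
    return "stop_conveyor" if sensors["hazard"] else None

def goal_based_plans(goal):
    # Would normally come from a planner; hard-coded candidates here.
    return [{"action": "run_fast", "throughput": 120, "energy_kwh": 9.0},
            {"action": "run_eco",  "throughput": 90,  "energy_kwh": 5.0}]

def utility(plan):
    return plan["throughput"] - 5.0 * plan["energy_kwh"]  # assumed weights

def orchestrate(sensors, goal):
    reflex_action = safety_reflex(sensors)
    if reflex_action:                       # low-level agent overrides
        return reflex_action
    return max(goal_based_plans(goal), key=utility)["action"]

print(orchestrate({"hazard": False}, "meet_daily_quota"))  # run_fast
print(orchestrate({"hazard": True}, "meet_daily_quota"))   # stop_conveyor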

As agentic AI continues to evolve, advancements in generative AI (gen AI) will enhance the capabilities of AI agents across various industries. AI systems are becoming increasingly adept at handling complex use cases and improving customer experiences.

Whether in e-commerce, healthcare or robotics, AI agents are optimizing workflows, automating processes and enabling organizations to solve problems faster and more efficiently.

Techsplainers Audio

Types of AI Agents

14/11/2025

DESCRIPTION

In this episode of "Techsplainers", host Alice explains the five main types of AI agents: simple reflex agents (like thermostats), model-based reflex agents (like robot vacuums), goal-based agents (like navigation robots), utility-based agents (like self-driving cars), and learning agents (like reinforcement learning systems). Each type is discussed in detail, highlighting its capabilities, applications, and limitations. The episode concludes by discussing the benefits of deploying multiple types of agents within a single system, emphasizing their potential in diverse industries for automation, optimization, and improved customer experiences.

Find more information at https://www.ibm.com/think/podcasts/techsplainers

Narrated by Alice Gomstyn

https://listen.casted.us/public/95/Techsplainers-by-IBM-28b0cf76/d0bbb98a

Using GitHub Actions to CLI JFrog, AWS, GCP

Mike's Notes

What I'm learning today. I'm learning fast as I go. It's all new :)

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

21/11/2025

Using GitHub Actions to CLI JFrog, AWS, GCP

By: Mike Peters
On a Sandy Beach: 21/11/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

I finally figured out how to implement CI/CD so Pipi can autonomously manage all remote cloud platforms.

  • AWS
  • Azure
  • GCP
  • IBM
  • etc

I was watching a video from the MLOps community email that led me to JFrog (very useful), which led me to GitHub Actions. I had been looking for a way to enable Pipi 9 to autonomously control any cloud platform, but I did not know the correct technical terms, so I was asking the wrong questions. It's one of the disadvantages of being completely self-taught.

Use GitHub Actions

According to Google AI:

GitHub Actions can effectively control both Google Cloud Platform (GCP) and Amazon Web Services (AWS) Command Line Interfaces (CLIs) within your CI/CD workflows. This enables automation of cloud resource management, deployments, and other cloud-related tasks directly from your GitHub repositories.

Controlling AWS CLI with GitHub Actions:

Configure AWS Credentials:

Store your AWS Access Key ID and Secret Access Key as GitHub Secrets in your repository settings.

Use the aws-actions/configure-aws-credentials action to configure the AWS CLI with these secrets within your workflow. This action handles the secure setup of credentials for subsequent AWS CLI commands.

Execute AWS CLI Commands:

Once credentials are configured, you can use the run step in your workflow to execute any AWS CLI command.

Example:

Code

        - name: Configure AWS Credentials
          uses: aws-actions/configure-aws-credentials@v1
          with:
            aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
            aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
            aws-region: us-east-1

        - name: List S3 Buckets
          run: aws s3 ls

Controlling GCP CLI (gcloud) with GitHub Actions:

Authenticate to GCP:

Store your GCP Service Account Key (JSON format) as a GitHub Secret.
Use the google-github-actions/auth action to authenticate your workflow to GCP using this service account key.

Setup gcloud CLI:

Use the google-github-actions/setup-gcloud action to install and configure the gcloud CLI within your workflow. You can specify the desired gcloud version and project ID.

Execute gcloud Commands:

After authentication and gcloud setup, you can use the run step to execute gcloud commands.

Example:

Code

        - name: Authenticate to GCP
          uses: google-github-actions/auth@v1
          with:
            credentials_json: ${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}

        - name: Setup gcloud CLI
          uses: google-github-actions/setup-gcloud@v1
          with:
            project_id: your-gcp-project-id

        - name: List GCS Buckets
          run: gcloud storage ls

Key Considerations:
  • Security: Always use GitHub Secrets to store sensitive credentials and implement the principle of least privilege for your cloud service accounts/IAM roles. Consider using OpenID Connect (OIDC) for enhanced security with AWS and GCP.
  • Actions Marketplace: Leverage pre-built actions from the GitHub Marketplace for common tasks like credential configuration and CLI setup, as demonstrated above.
  • Error Handling: Include error handling and logging in your workflows for better debugging and reliability.
  • Idempotency: Design your cloud operations to be idempotent, ensuring that running the workflow multiple times produces the same desired state without unintended side effects.

JFrog

JFrog looks great. Not cheap, but no one is better at security than the Israelis. They are the best in the world. So using their kit is a no-brainer.

There is no free tier, so plan for future use.

Next Question

  • Pipi can use CFML to easily output any of the code listed above.
  • How does that generated code then get into GitHub Actions? (One possible route is sketched below.)
  • So Pipi 9 can autonomously control GitHub Actions. (or GitLab, etc)
  • Would BoxLang do the job?
  • Am I using the correct technical terms?
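
On the second question, one possible route (a sketch only, assuming a personal access token with repo scope; not necessarily how Pipi will do it) is to commit the generated YAML into .github/workflows/ via GitHub's REST "create or update file contents" endpoint. Once the file lands on the default branch, Actions picks the workflow up automatically.

Code

import base64
import requests  # third-party: pip install requests

def push_workflow(owner, repo, token, path, yaml_text, message):
    """Create a new file in the repo. Updating an existing file would
    additionally require passing its current blob sha."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    headers = {"Authorization": f"Bearer {token}",
               "Accept": "application/vnd.github+json"}
    body = {"message": message,
            "content": base64.b64encode(yaml_text.encode()).decode()}
    resp = requests.put(url, json=body, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["commit"]["sha"]

# Hypothetical usage (names are placeholders):
# push_workflow("ajabbi", "pipi", token, ".github/workflows/deploy.yml",
#               generated_yaml, "Add generated deploy workflow")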

Interesting examples

# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Build and Deploy to GKE

on:
  push:
    branches:
      - main

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  GKE_CLUSTER: cluster-1    # Add your cluster name here.
  GKE_ZONE: us-central1-c   # Add your cluster zone here.
  DEPLOYMENT_NAME: gke-test # Add your deployment name here.
  IMAGE: static-site

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
    - name: Checkout
      uses: actions/checkout@v5

    # Setup gcloud CLI
    - uses: google-github-actions/setup-gcloud@1bee7de035d65ec5da40a31f8589e240eba8fde5
      with:
        service_account_key: ${{ secrets.GKE_SA_KEY }}
        project_id: ${{ secrets.GKE_PROJECT }}

    # Configure Docker to use the gcloud command-line tool as a credential
    # helper for authentication
    - run: |-
        gcloud --quiet auth configure-docker

    # Get the GKE credentials so we can deploy to the cluster
    - uses: google-github-actions/get-gke-credentials@db150f2cc60d1716e61922b832eae71d2a45938f
      with:
        cluster_name: ${{ env.GKE_CLUSTER }}
        location: ${{ env.GKE_ZONE }}
        credentials: ${{ secrets.GKE_SA_KEY }}

    # Build the Docker image
    - name: Build
      run: |-
        docker build \
          --tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
          --build-arg GITHUB_SHA="$GITHUB_SHA" \
          --build-arg GITHUB_REF="$GITHUB_REF" \
          .

    # Push the Docker image to Google Container Registry
    - name: Publish
      run: |-
        docker push "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA"

    # Set up kustomize
    - name: Set up Kustomize
      run: |-
        curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
        chmod u+x ./kustomize

    # Deploy the Docker image to the GKE cluster
    - name: Deploy
      run: |-
        ./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA
        ./kustomize build . | kubectl apply -f -
        kubectl rollout status deployment/$DEPLOYMENT_NAME
        kubectl get services -o wide

What Founders Want

Mike's Notes

This is a useful resource for startups in NZ. Organised like a structured catalogue. It has a weekly mailing list for frequent updates.

Below is an index.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

21/11/2025

What Founders Want

By: 
What Founders Want: 21/11/2025

The single source of truth for New Zealand startup founders.

From raising capital and government grants to legal templates and hiring tools — built on what founders are actively searching for, but can’t find.

What’s New

  • Expert Editions (Updated Monthly)
  • Case Studies (Real Founder Journeys)

Save Money

  • NZ legal & financial templates 
  • Grants, R&D credits, co-funding
  • $1M+ in startup perks & discounts

Raise Money

  • Complete NZ capital directory
  • Angels, VCs, and alt-finance options
  • Curated contacts to save weeks of research

Make Money

  • AI Power Stack for NZ founders
  • Hiring & automation tools
  • Overseas easy-entry sales channels

Dead Startups

  • Hiring or being hired

What Founders Want Directories

  • Get Funded
  • Government Support for Startups in NZ
  • Startup Accelerators in New Zealand
  • NZ Startup Ecosystem Directory
  • Startup Perks & Credits (NZ-Verified)
  • Start a Startup in New Zealand — Step by Step
  • Free Legal & Financial Templates

Agents in Production was excellent

Mike's Notes

Some initial reflections. I will add to this post over the coming week.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

21/11/2025

Agents in Production was excellent

By: Mike Peters
On a Sandy Beach: 20/11/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

I attended the online MLOps event "Agents in Production" held yesterday. It was hosted in the Netherlands. It started at 3am NZ time, so I missed most of it. But all the videos will be available soon, and I'll watch them all.

It was fantastic. The 30 speakers were leaders of engineering teams at every major AI company: NVIDIA, Google, Meta, OpenAI, Microsoft, Redis, Databook, Prosus, etc.

All talking about agents in depth. Lots of architecture here.

The audience was even broader, as great questions came through.

I could follow along and understand what they were talking about. This is in sharp contrast to NZ, where nobody is interested in this stuff. Nobody understands what I'm talking about, even in the NZ AI tech group, which focuses on using AI to build apps. This was about building AI itself and solving its problems.

It was fascinating. 

One thing I learned, which really surprised me, is that none of the AI agents described in the talks can evolve.

All the agents in Pipi 9 evolve.

I also found a solution to a big problem for Ajabbi.

I can't find anyone who can help do this work. In this room, there were plenty of people. Now I know where to look.

It was a lucky moment. I don't recall how I found out about this community and event. Maybe I got an invite. Next event, I will be better prepped and have questions to post in the Q&A. I will also figure out how to contact other participants. I look forward to meeting them.

Future

Maybe I will get to give one of these talks sometime. I find it really useful to bounce ideas in an open discussion. Always come back with more ideas. Listening is more important than talking.

Bolt on

I also had another epiphany last night while sleeping. I could use some of these LLMs as input into Pipi 9, combining the strengths of both. This also confirms what I have been learning from testing Krobar.

Tristan, I might have a job for you. :)

I had been thinking about outputting to an LLM, but it never occurred to me to go the other way until I watched these talks and worked it out visually.

Workspaces for Screen

Mike's Notes

This is where I will keep detailed working notes on creating Workspaces for Screen. Eventually, these will become permanent documentation stored elsewhere. This replaces the coverage in Industry Workspace written on 13/10/2025.

Testing

The current online mockup is version 3 and will be updated frequently. If you are helping with testing, please remember to delete your browser cache so you see the daily changes. Eventually, a live demo version will be available for field trials.

Learning

Years ago, I was the "hands" of NZ sculptor Neil Dawson, later becoming a commercial sculptor. Then, I was a set builder and set engineer at The Court Theatre set workshop. I was producer and then director of a series of natural history interview videos and had to build a production management system for logistics. I also crewed on short films, documentaries, and TV live broadcasts. All lots of fun and learning on the job.

I also did most of the Physical Effects courses at the Stan Winston School of Character Arts, the best school on the planet. My happy place is being in a workshop, making stuff. I will use all these experiences, combined with learning from MovieLabs, to build out Workspaces for Screen for film crews, especially the art department.

Why

Ajabbi will be the first user of this workspace, using it to produce video training content and record online interviews with authors, many of whose writings are reproduced on this blog. Later, it will be used to recreate historic moments in the discovery of science. The modules will therefore be completed to meet Ajabbi's needs first; later, the workspace will be made available to other users and expanded in scope.

Resources

References

  • MovieLabs

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

21/11/2025

Workspaces for Screen

By: Mike Peters
On a Sandy Beach: 20/11/2025

Mike is the inventor and architect of Pipi and the founder of Ajabbi.

Open-source

This open-source SaaS cloud system will be shared on GitHub and GitLab.

Dedication

This workspace is dedicated to the life and work of Dick Smith, who pioneered much of makeup special effects and generously coached so many others. A real gentleman.

Dick Smith

Source: https://web.archive.org/web/20160502184445im_/http://dicksmithmake-up.com/wp-content/uploads/2016/03/Dick-Smith.jpg

"Richard Emerson Smith (June 26, 1922 – July 30, 2014) was an American special make-up effects artist and author, (nicknamed "The Godfather of Make-Up") known for his work on such films as Little Big Man (1970), The Godfather (1972), The Exorcist (1973), Taxi Driver (1976), Scanners (1981) and Death Becomes Her (1992). He won a 1985 Academy Award for Best Makeup for his work on Amadeus and received a 2012 Academy Honorary Award for his career's work." - Wikipedia

MovieLabs

"MovieLabs is an independent non-profit organization founded by Disney, Fox, Paramount, Sony, Universal, and Warner Bros. to advance research and development in motion picture distribution and protection. It maintains project engineering, technology market analysis and standards development/evangelism among its core areas of focus and partners with leading universities, corporations, technology startups, service providers, and standards bodies to further explore innovative technologies in the field of digital media.

Key publications and standards available through MovieLabs include:

  • Entertainment ID Registry (EIDR)
  • Common Metadata
  • Content Availability Metadata (Avails)
  • Common Metadata Ratings
  • Next Generation/HDR Video
  • Enhanced Content Protection (ECP)
  • Creative Works Ontology

"- Wikipedia

MovieLabs Digital Distribution Framework

Source: https://movielabs.com/md/images/mddf-workflow-201906.png

Asset Ordering, Delivery and Tracking

Source: https://movielabs.com/md/delivery/OrderingDelivery.png

Narrative Element

Source: https://movielabs.com/wp-content/uploads/2022/09/omc_diagram.png

Change Log

Ver 3 includes development, preproduction, production, post-production, and distribution.

Existing products

This is a basic comparison of features found in screen production software.

Yamdu

Basic

  • Cast and Crew Management
  • Script Import and Breakdown
  • Distribution
  • Shooting Scheduling
  • Call Sheet Builder
  • Production Calendar
  • Budgeting (Beta)
  • Shot List and Storyboard
  • Sustainability
  • Mobile app for iOS and Android

Extra

  • Episodic Feature Set
  • Restrict Access to Sensitive Information
  • Personnel Master Data
  • Units
  • Time cards
  • Credits Generator
  • PDF and Video Watermarks
  • Travel Management
  • Resource Planning
  • Customized File Sharing
  • Advanced Email Sending Options
  • Story Management
  • Timesheets
  • Activity Logs
  • CO₂e calculation with Klimaktiv
  • Brand White Labeling
  • SSO / SAML
  • Dedicated Account Manager

[TABLE]

Data Model

words

Database Entities

  • Facility
  • Party
  • etc

Standards

The workspace needs to comply with relevant international standards, including:

  • MovieLabs

Schema

An XML schema needs to be created for scripts; none currently exists, although different scriptwriting applications can import from and export to each other. To import a script into Workspaces for Screen, it would be easiest to convert everything to XML and hide the conversion behind the user interface.
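
As a rough illustration of the convert-to-XML idea, here is a minimal Python sketch (my own; the element names are invented placeholders, not a proposed standard) that builds a script fragment as XML with the standard library:

# Represent screenplay elements as XML using only the standard library.
# Tag names (script, scene, slugline, action, dialogue) are illustrative.
import xml.etree.ElementTree as ET

script = ET.Element("script", title="Untitled")
scene = ET.SubElement(script, "scene", number="1")
ET.SubElement(scene, "slugline").text = "INT. WORKSHOP - DAY"
ET.SubElement(scene, "action").text = "MIKE studies a scale model of the set."
line = ET.SubElement(scene, "dialogue", character="MIKE")
line.text = "This will work."

ET.indent(script)  # pretty-print; Python 3.9+
print(ET.tostring(script, encoding="unicode"))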

Variables

Source: Krombar.ai simulation platform (beta)

Node Name Type Estimates / Formula
Location Scouting Cost per Project Input Variable Normal(μ=12500.0, σ=11119.516638792014)
Cost per Edit Input Variable Normal(μ=1750.0, σ=1853.2527731320024)
Shooting Days Input Variable discrete_normal distribution
Number of Sets Input Variable discrete_normal distribution
Target Markets Input Variable discrete_normal distribution
Campaigns Input Variable discrete_normal distribution
Planning Staff Input Variable discrete_normal distribution
Script Development Cost Input Variable Normal(μ=100000.0, σ=30395.136778115502)
Storyboard Artists per Project Input Variable discrete_normal distribution
Budget & Schedule Finalization Calculation Step round(Projects in Pre-Production * Available Budget / Average Budget per Project)
Design HODs Cost per Project Input Variable Normal(μ=40000.0, σ=29652.04437011204)
Cost per VFX Shot Input Variable Normal(μ=8500.0, σ=9636.914420286412)
Set Complexity Factor Input Variable Normal(μ=2.0, σ=0.60790273556231)
Creative Assets per Campaign Input Variable discrete_normal distribution
Weeks of Campaign Planning Input Variable discrete_normal distribution
Number of Scripts Input Variable discrete_normal distribution
Rights Acquisition Cost Input Variable Normal(μ=62500.0, σ=22796.352583586628)
Projects in Pre-Production Input Variable discrete_normal distribution
Cast & Crew per Project Input Variable discrete_normal distribution
Market Research & Campaign Planning Calculation Step round(Target Markets * Campaigns per Market * Planning Staff * Weeks of Campaign Planning)
Net Profit Calculation Step Total Revenue - sum(Development Costs,Pre-Production Costs,Production Costs,Post-Production Costs,Marketing & Distribution Costs)
Storyboard/TechScout Cost per Project Input Variable Normal(μ=10000.0, σ=7413.01109252801)
Cost per Sound Edit Input Variable Normal(μ=1250.0, σ=1111.9516638792015)
Unit Set Construction Cost Input Variable Normal(μ=95000.0, σ=15197.568389057751)
Campaigns per Market Input Variable discrete_normal distribution
Press Kits per Campaign Input Variable discrete_normal distribution
Creative Staff Input Variable discrete_normal distribution
Shooting Days per Script Input Variable discrete_normal distribution
Budget/Finance Cost Input Variable Normal(μ=200000.0, σ=60790.273556231004)
Available Budget Input Variable Normal(μ=275000.0, σ=333585.49916376045)
Insurance Policies per Project Input Variable discrete_normal distribution
ROI % Calculation Step 100*Net Profit/sum(Development Costs,Pre-Production Costs,Production Costs,Post-Production Costs,Marketing & Distribution Costs)
Location Scouting & Permits Calculation Step round(max(0, Projects) * Locations per project)
Development & Greenlight Calculation Step round(max(0, Projects) * Development approval rate)
Casting/Crew Contracts Cost per Project Input Variable Normal(μ=80000.0, σ=59304.08874022408)
Cost per Score Input Variable Normal(μ=6000.0, σ=5930.408874022408)
Script Pages per Day Input Variable Normal(μ=6.0, σ=2.43161094224924)
Festival Submissions per Campaign Input Variable discrete_normal distribution
Weeks of Creative Work Input Variable discrete_normal distribution
Packaging/Casting Cost Input Variable Normal(μ=250000.0, σ=91185.41033434651)
Average Budget per Project Input Variable Normal(μ=55000.0, σ=66717.0998327521)
Storyboarding, Shot Listing, Tech Scout Calculation Step round(Projects in Pre-Production * Storyboard Artists per Project)
Set Construction & Prep Calculation Step round(Number of Sets * Set Complexity Factor * Unit Set Construction Cost)
Insurance/Compliance Cost per Project Input Variable Normal(μ=16500.0, σ=12602.118857297617)
Cost per Master Input Variable Normal(μ=1750.0, σ=1853.2527731320024)
Publicity Events per Campaign Input Variable discrete_normal distribution
Press Kits per Release Input Variable discrete_normal distribution
Greenlight Approval Cost Input Variable Normal(μ=35000.0, σ=9118.54103343465)
Designers per Project Input Variable discrete_normal distribution
Casting & Crew Hiring/Contracts Calculation Step round(Projects in Pre-Production * Cast & Crew per Project)
Daily Shooting Calculation Step round(Shooting Days per Script * Number of Scripts + Shooting Days + Script Pages per Day * Number of Scripts)
Cost per Test Screening Input Variable Normal(μ=11000.0, σ=13343.419966550418)
Distribution Contracts per Campaign Input Variable discrete_normal distribution
Number of Releases Input Variable discrete_normal distribution
Insurance & Compliance Calculation Step round(Projects in Pre-Production * Insurance Policies per Project)
Set, Costume, Makeup, Props Design Calculation Step round(Projects in Pre-Production * Designers per Project)
Budget/Schedule Finalization Cost Calculation Step
Ops per Campaign Input Variable discrete_normal distribution
Number of Festivals Input Variable discrete_normal distribution
Picture Editing & Lock Calculation Step round(max(0, Projects) * Edits per project)
Location Scouting Cost Calculation Step Location Scouting & Permits * Location Scouting Cost per Project
Sales Ops per Campaign Input Variable discrete_normal distribution
Submissions per Festival Input Variable discrete_normal distribution
VFX/Animation Calculation Step round(max(0, Projects) * VFX shots per project)
Design HODs Cost Calculation Step Set, Costume, Makeup, Props Design*Design HODs Cost per Project
Junket Events per Release Input Variable discrete_normal distribution
Trailer, Teaser, Poster Creative Calculation Step round(Campaigns * Creative Assets per Campaign * Creative Staff * Weeks of Creative Work)
Sound Editing, ADR, Foley Calculation Step round(max(0, Projects) * Sound edits per project)
Storyboard/TechScout Cost Calculation Step Storyboarding, Shot Listing, Tech Scout * Storyboard/TechScout Cost per Project
Distribution Deals per Market Input Variable discrete_normal distribution
Press Kit, Screener, Critics Setup Calculation Step round(Campaigns * Press Kits per Campaign * Press Kits per Release * Number of Releases)
Scoring & Recording Calculation Step round(max(0, Projects) * Scores per project)
Casting/Crew Contracts Cost Calculation Step Casting & Crew Hiring/Contracts*Casting/Crew Contracts Cost per Project
Number of Markets Input Variable discrete_normal distribution
Festival & Award Submissions Calculation Step round(Campaigns * Festival Submissions per Campaign * Number of Festivals * Submissions per Festival)
Color Grade, Titles, Mastering Calculation Step round(max(0, Projects) * Masters per project)
Insurance/Compliance Cost Calculation Step Insurance & Compliance*Insurance/Compliance Cost per Project
DCPs per Release Input Variable discrete_normal distribution
Publicity/Junket & Media Interviews Calculation Step round(Campaigns * Publicity Events per Campaign * Junket Events per Release * Number of Releases)
Test Screening/Studio Review Calculation Step round(max(0, Projects) * Test screenings per project)
Set Construction Cost Calculation Step
Subtitling Ops per Release Input Variable discrete_normal distribution
Daily Shooting Cost Input Variable Normal(μ=875000.0, σ=926626.3865660012)
Distribution Contracts Calculation Step round(Campaigns * Distribution Contracts per Campaign * Distribution Deals per Market * Number of Markets)
Censorship Ops per Release Input Variable discrete_normal distribution
Onset FX/Stunts Cost Input Variable Normal(μ=175000.0, σ=185325.27731320023)
DCP/Subtitling/Censorship Ops Calculation Step round(Campaigns * Ops per Campaign * DCPs per Release * Subtitling Ops per Release * Censorship Ops per Release)
Sales Ops Staff Input Variable discrete_normal distribution
Dailies Review Cost Input Variable Normal(μ=30000.0, σ=29652.04437011204)
Intl & Domestic Sales Ops Calculation Step round(Campaigns * Sales Ops per Campaign * Sales Ops Staff * Weeks of Sales Ops)
Weeks of Sales Ops Input Variable discrete_normal distribution
Prod Office Ops Cost Input Variable Normal(μ=62500.0, σ=55597.583193960076)
Picture Editing Cost Calculation Step Picture Editing & Lock*Cost per Edit
VFX Cost Calculation Step VFX/Animation*Cost per VFX Shot
Sound Editing Cost Calculation Step Sound Editing, ADR, Foley*Cost per Sound Edit
Music Scoring Cost Calculation Step Scoring & Recording*Cost per Score
Color/Mastering Cost Calculation Step Color Grade, Titles, Mastering*Cost per Master
Test Screening Cost Calculation Step Test Screening/Studio Review*Cost per Test Screening
Marketing Planning Cost Calculation Step
Creative Asset Cost Calculation Step
Press/Publicity Cost Calculation Step
Marketing & Distribution Costs Calculation Step sum(Marketing Planning Cost, Creative Asset Cost, Press/Publicity Cost, Festivals Cost, PR Junket Cost, Distribution Deal Cost, Delivery/Compliance Cost, Sales Ops Cost)
Festivals Cost Input Variable Normal(μ=62500.0, σ=55597.583193960076)
PR Junket Cost Input Variable Normal(μ=62500.0, σ=55597.583193960076)
Distribution Deal Cost Input Variable Normal(μ=62500.0, σ=55597.583193960076)
Delivery/Compliance Cost Input Variable Normal(μ=62500.0, σ=55597.583193960076)
Sales Ops Cost Input Variable Normal(μ=62500.0, σ=55597.583193960076)
Domestic Box Office Input Variable Normal(μ=17500000.0, σ=18532527.731320024)
International Box Office Input Variable Normal(μ=11500000.0, σ=12602118.857297616)
Digital/VOD Revenue Input Variable Normal(μ=4500000.0, σ=5189107.764769607)
Home Video Revenue Input Variable Normal(μ=2250000.0, σ=2594553.8823848036)
Merchandising Revenue Input Variable Normal(μ=1050000.0, σ=1408472.1075803218)
Licensing Revenue Input Variable Normal(μ=1550000.0, σ=2149773.216833123)
Development Costs Calculation Step sum(Script Development Cost, Rights Acquisition Cost, Budget/Finance Cost, Packaging/Casting Cost, Greenlight Approval Cost)
Pre-Production Costs Calculation Step sum(Budget/Schedule Finalization Cost, Location Scouting Cost, Design HODs Cost, Storyboard/TechScout Cost, Casting/Crew Contracts Cost, Insurance/Compliance Cost)
Production Costs Calculation Step sum(Set Construction Cost,Daily Shooting Cost,Onset FX/Stunts Cost,Dailies Review Cost,Prod Office Ops Cost)
Post-Production Costs Calculation Step sum(Picture Editing Cost,VFX Cost,Sound Editing Cost,Music Scoring Cost,Color/Mastering Cost,Test Screening Cost)
Total Revenue Calculation Step sum(Domestic Box Office, International Box Office, Digital/VOD Revenue, Home Video Revenue, Merchandising Revenue, Licensing Revenue)
Total Footage Hours Input Variable Normal(μ=60.0, σ=24.3161094224924)
Edits per Hour Input Variable Normal(μ=6.0, σ=5.930408874022408)
Total Shots Input Variable discrete_normal distribution
VFX Shot Percentage Input Variable beta distribution
Sound Edits per Hour Input Variable Normal(μ=17.5, σ=18.532527731320023)
Total Minutes of Score Input Variable Normal(μ=75.0, σ=66.71709983275208)
Minutes per Score Input Variable Normal(μ=3.0, σ=2.965204437011204)
Hours per Master Input Variable Normal(μ=5.0, σ=4.447806655516806)
Test Screenings Required Input Variable discrete_normal distribution
Picture Editors Input Variable discrete_normal distribution
Weeks of Editing Input Variable discrete_normal distribution
VFX Shots per Sequence Input Variable discrete_normal distribution
Number of Sequences Input Variable discrete_normal distribution
Sound Editors Input Variable discrete_normal distribution
Weeks of Sound Editing Input Variable discrete_normal distribution
Scores per Film Input Variable discrete_normal distribution
Number of Films Input Variable discrete_normal distribution
Masters per Film Input Variable discrete_normal distribution
Test Screenings per Film Input Variable discrete_normal distribution
Sets per Script Input Variable discrete_normal distribution
Release & Revenue Collection per Project Input Variable log_normal distribution
Development Opportunities Input Variable Normal(μ=30.0, σ=29.65204437011204)
Script Approval Rate Input Variable beta distribution
Scripts Input Variable Normal(μ=17.5, σ=18.532527731320023)
Rights Acquisition Rate Input Variable beta distribution
Projects with Rights Secured Input Variable Normal(μ=11.0, σ=13.343419966550417)
Budget Approval Rate Input Variable beta distribution
Budgeted Projects Input Variable Normal(μ=8.0, σ=10.378215529539213)
Packaging Success Rate Input Variable beta distribution
Packaged Projects Input Variable Normal(μ=5.5, σ=6.671709983275209)
Greenlight Approval Rate Input Variable beta distribution
Available Locations Input Variable discrete_normal distribution
Locations per Project Input Variable discrete_normal distribution
Available Designers Input Variable discrete_normal distribution
Available Storyboard Artists Input Variable discrete_normal distribution
Available Cast & Crew Input Variable discrete_normal distribution
Available Insurance Policies Input Variable discrete_normal distribution
Number of Action Sequences Input Variable discrete_normal distribution
FX Complexity Factor Input Variable Normal(μ=2.0, σ=1.482602218505602)
Review Sessions per Day Input Variable Normal(μ=2.5, σ=2.223903327758403)
Prep Days Input Variable discrete_normal distribution
Wrap Days Input Variable discrete_normal distribution
Project Pitches Input Variable discrete_normal distribution
Greenlight Rate Input Variable beta distribution
Pre-Production Start Rate Input Variable beta distribution
Production Start Rate Input Variable beta distribution
Post-Production Start Rate Input Variable beta distribution
Marketing & Distribution Rate Input Variable beta distribution
SFX Sequences per Script Input Variable discrete_normal distribution
Dailies Reviews per Day Input Variable discrete_normal distribution
Office Staff Input Variable discrete_normal distribution
Production Weeks Input Variable discrete_normal distribution
Projects Input Variable discrete_normal distribution
Scripts per Project Input Variable discrete_normal distribution
Box Office Revenue Input Variable log_normal distribution
Ancillary Revenue Input Variable log_normal distribution
Streaming Revenue Input Variable log_normal distribution
Locations per project Input Variable discrete_normal distribution
Development approval rate Input Variable beta distribution
Edits per project Input Variable discrete_normal distribution
VFX shots per project Input Variable discrete_normal distribution
Sound edits per project Input Variable discrete_normal distribution
Scores per project Input Variable discrete_normal distribution
Masters per project Input Variable discrete_normal distribution
Test screenings per project Input Variable discrete_normal distribution
On-set FX sequences per project Input Variable discrete_normal distribution
Dailies reviews per project Input Variable discrete_normal distribution
Production office days per project Input Variable discrete_normal distribution
Scripts per project Input Variable discrete_normal distribution
Legal reviews per project Input Variable discrete_normal distribution
Budgeting sessions per project Input Variable discrete_normal distribution
Packaging sessions per project Input Variable discrete_normal distribution
Greenlight approval rate Input Variable beta distribution
Distribution fees Input Variable beta distribution
Preproduction rate Input Variable beta distribution
Production rate Input Variable beta distribution
Postproduction rate Input Variable beta distribution
Marketing rate Input Variable beta distribution
On-Set SFX/VFX/Stunts Calculation Step round(max(0, Projects) * On-set FX sequences per project)
Dailies & Production Review Calculation Step round(max(0, Projects) * Dailies reviews per project)
Production Office Operations Calculation Step round(max(0, Projects) * Production office days per project)
Script Development Calculation Step round(max(0, Projects) * Scripts per project)
Rights Acquisition & Legal Calculation Step round(max(0, Projects) * Legal reviews per project)
Budgeting & Finance Calculation Step round(max(0, Projects) * Budgeting sessions per project)
Project Packaging & Casting Calculation Step round(max(0, Projects) * Packaging sessions per project)
Studio Greenlight Calculation Step round(max(0, Projects) * Greenlight approval rate)
Release & Revenue Collection Calculation Step Total Revenue - Distribution fees
Pre-Production Calculation Step round(max(0, Development & Greenlight) * Preproduction rate)
Principal Photography Calculation Step round(max(0, Pre-Production) * Production rate)
Post-Production Calculation Step round(max(0, Principal Photography) * Postproduction rate)
Marketing & Distribution Calculation Step round(max(0, Post-Production) * Marketing rate)

Simulation notes

Workspace navigation menu

This default outline needs a lot of work. The outline can be easily customised by future users using drag-and-drop and tick boxes to turn features off and on.

Developers can build plugins and add integrations.

  • Enterprise Account
    • Applications
      • Screen (v3)
        • Development
          • Casting
          • Script
        • Preproduction
          • Budget
          • Location
          • Previsualisation
          • Schedule
          • Story Board
        • Production
          • Craft
            • Animals
            • Atmosphere
            • Costume
            • Greens
            • Makeup & Hair
            • Miniature
            • Prop
            • Set
            • Vehicles
            • Wardrobe
          • Technical
            • Audio
            • Camera
              • Shot
            • Grips
            • Lighting
        • Post Production
          • Editing
          • Music
          • Subtitle
          • Visual Effects
        • Distribution
          • (To come)
    • Customer (v2)
      • Bookmarks
        • (To come)
      • Support
        • Contact
        • Forum
        • Live Chat
        • Office Hours
        • Requests
        • Tickets
      • (To come)
        • Feature Vote
        • Feedback
        • Surveys
      • Learning
        • Explanation
        • How to Guide
        • Reference
        • Tutorial
    • Settings (v3)
      • Account
      • Billing
      • Deployments
        • Workspaces
          • Modules
          • Plugins
          • Templates
            • Client/Agency
            • Episodic TV
            • Feature Film
            • Short Film
          • Users