What Is Information? The Answer Is a Surprise.

Mike's Notes

A great summary introduction from Quanta Magazine.

Resources

References

  • The Mathematical Theory of Communication (1949). Claude E. Shannon and Warren Weaver.

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Fundamentals from Quanta Magazine
  • Home > Handbook > 

Last Updated

06/06/2025

What Is Information? The Answer Is a Surprise.

By: Ben Brubaker
Fundamentals from Quanta Magazine: 03/06/2025

Ben Brubaker is a staff writer covering computer science for Quanta Magazine. He previously covered physics as a freelance journalist, and his writing has also appeared in Scientific American, Physics Today, and elsewhere. He has a Ph.D. in physics from Yale University and conducted postdoctoral research at the University of Colorado, Boulder.

It’s often said that we live in the information age. But what exactly is information? It seems a more nebulous resource than iron, steam and other key substances that have powered technological transformations. Indeed, information didn’t have a precise meaning until the work of the computer science pioneer Claude Shannon in the 1940s. 

Shannon was inspired by a practical problem: What’s the most efficient way to transmit a message over a communication channel like a telephone line? To answer that question, it’s helpful to reframe it as a game. I choose a random number between 1 and 100, and your goal is to guess the number as quickly as possible by asking yes-or-no questions. “Is the number greater than zero?” is clearly a bad move — you already know that the answer will be yes, so there’s no point in asking. Intuitively, “Is the number greater than 50?” is the best opening move. That’s because the two possible answers are equally likely: Either way, you’ll learn something you couldn’t have predicted. 
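The halving strategy above is just binary search, and it pins down any number between 1 and 100 in at most 7 questions (the ceiling of log2(100)). A quick sketch in Python (the helper name is mine):

```python
def questions_needed(secret, lo=1, hi=100):
    """Count the yes/no questions "is the number > m?" needed
    to pin down `secret` by repeatedly halving the range."""
    count = 0
    while lo < hi:
        mid = (lo + hi) // 2
        count += 1
        if secret > mid:
            lo = mid + 1
        else:
            hi = mid
    return count

# Halving the range each time needs at most ceil(log2(100)) = 7 questions.
worst_case = max(questions_needed(n) for n in range(1, 101))
print(worst_case)  # 7
```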

In his famous 1948 paper “A Mathematical Theory of Communication,” Shannon devised a formula that translated this intuition into precise mathematical terms, and he showed how the same formula can be used to quantify the information in any message. Roughly speaking, the formula defines information as the number of yes-or-no questions needed to determine the contents of a message. More predictable messages, by this measure, contain less information, while more surprising ones are more informative. Shannon’s information theory laid the mathematical foundation for data storage and transmission methods that are now ubiquitous (including the error correction techniques that I discussed in the August 5, 2024, issue of Fundamentals). It also has more whimsical applications. As Patrick Honner explained in a 2022 column, information theory can help you win at the online word-guessing game Wordle.
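Shannon's formula is H = −Σ p·log2(p), measured in bits. A small Python sketch makes the "surprise" reading concrete: a fair coin flip carries a full bit of information, while a predictable, biased coin carries less:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally surprising: 1 bit per flip.
print(entropy([0.5, 0.5]))   # 1.0
# A biased coin is more predictable, so it carries less information.
print(entropy([0.9, 0.1]))   # ≈ 0.469
```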

In a 2020 essay for Quanta, the electrical engineer David Tse reflected on a curious feature of information theory. Shannon developed his iconic formula to solve a real-world engineering problem, yet the underlying mathematics is so elegant and pervasive that it increasingly seems as if he hit upon something more fundamental. “It’s as if he discovered the universe’s laws of communication, rather than inventing them,” Tse wrote. Indeed, Shannon’s information theory has turned out to have unexpected connections to many different subjects in physics and biology.

What’s New and Noteworthy

The first surprising link between information theory and physics was already present in Shannon’s seminal paper. Shannon had previously discussed his theory with the legendary mathematician John von Neumann, who observed that Shannon’s formula for information resembled the formula for a mysterious quantity called entropy that plays a central role in the laws of thermodynamics. Last year, Zack Savitsky traced the history of entropy from its origins in the physics of steam engines to the nanoscale “information engines” that researchers are developing today. It’s a beautiful piece of science writing that also explores the philosophical implications of introducing information — an inherently subjective quantity — into the laws of physics.

Such philosophical questions are especially relevant for researchers studying quantum theory. The laws of quantum physics were devised in the 1920s to explain the behavior of atoms and molecules. But in the past few decades, researchers have realized that it’s possible to derive all the same laws from principles that don’t seem to have anything to do with physics — instead, they’re based on information. In 2017, Philip Ball explored what researchers have learned from these attempts to rebuild quantum theory.

Physics isn’t the only field influenced by ideas from information theory. Soon after Shannon’s paper, information became central to the way researchers think about genetics. More recently, some researchers have brought principles from information theory to bear on some of the thorniest questions in biology. In a 2015 Q&A with Kevin Hartnett, the biologist Christoph Adami described how he uses information theory to explore the origins of life. In April, Ball wrote about a new effort to reframe biological evolution as a special case of a more fundamental “functional information theory” that drives the emergence of complexity in the universe. This theory is still speculative, but it illustrates the striking extent of information theory’s influence.

As the astrobiologist Michael Wong told Ball, “Information itself might be a vital parameter of the cosmos, similar to mass, charge and energy.” One thing seems certain: Researchers studying information can surely expect more surprises in the coming years.

Unix Mindset: MCP Is Unix Pipes for AI

Mike's Notes

"The Model Context Protocol (MCP) is an open standard, open-source framework introduced by Anthropic to standardize the way artificial intelligence (AI) models like large language models (LLMs) integrate and share data with external tools, systems, and data sources. Technology writers have dubbed MCP “the USB-C of AI apps”, underscoring its goal of serving as a universal connector between language-model agents and external software. Designed to standardize context exchange between AI assistants and software environments, MCP provides a model-agnostic universal interface for reading files, executing functions, and handling contextual prompts. It was officially announced and open-sourced by Anthropic in November 2024, with subsequent adoption by major AI providers including OpenAI and Google DeepMind." - Wikipedia

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library >
  • Home > Handbook > 

Last Updated

05/06/2025

Unix Mindset: MCP Is Unix Pipes for AI

By: Kingsley Uyi Idehen
LinkedIn: 01/06/2025

Founder & CEO at OpenLink Software | Driving GenAI-Based AI Agents | Harmonizing Disparate Data Spaces (Databases, Knowledge Bases/Graphs, and File System Documents).

The “Unix mindset applied to AI” is a compelling paradigm that draws from Unix’s foundational design philosophy and applies it to AI systems architecture. It’s a powerful insight I came across while digesting a recent presentation by Reuven Cohen.

Here’s how Unix pipes and the Model Context Protocol (MCP) embody this approach:

Unix Philosophy Core Tenets

The Unix philosophy revolves around several key principles:

  1. Do one thing well – Build small, focused tools instead of monolithic applications
  2. Composability – Chain simple tools to create complex workflows
  3. Universal interface – Use text streams as a common data format
  4. Modularity – Favor loosely coupled components that can be mixed and matched

Unix Pipes as the Model

Unix pipes (|) are the quintessential example of this philosophy. You can write:

cat data.txt | grep "error" | sort | uniq -c | head -10

Each tool (cat, grep, sort, uniq, head) performs one task exceptionally well. Together, they form a powerful, composable data-processing pipeline. The magic lies in composition, not in any single tool.

MCP Protocol: Pipes for AI

The Model Context Protocol extends this Unix mindset into the world of AI:

  1. Standardized Interfaces – Just as Unix pipes use text, MCP uses standardized JSON-RPC protocols for AI-tool communication—providing a universal interface for AI systems to interact with services and data.
  2. Composable AI Workflows – Rather than building monolithic AI systems, you can compose:
     • Data connectors (to databases, APIs, file systems)
     • Processing tools (calculators, web scrapers, code interpreters)
     • Specialized models (e.g., for vision, reasoning, code generation)
     • Output formatters (to generate documents, charts, or dashboards)
  3. Tool Interoperability – Vendors can create MCP-compatible tools that work together seamlessly—just like Unix tools from different sources pipe into one another without friction.
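For concreteness, a JSON-RPC 2.0 request of the kind MCP exchanges might be built like this in Python (the method and parameter names below are illustrative placeholders, not quoted from the MCP specification):

```python
import json

# A minimal JSON-RPC 2.0 request, the wire format MCP builds on.
# The method and params below are illustrative placeholders, not
# verbatim from the MCP specification.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "web_search", "arguments": {"query": "unix pipes"}},
}
payload = json.dumps(request)
print(payload)
```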

Practical Applications

This enables streamlined, reusable AI workflows such as:

Document → Semantic Analysis → Content Transformation → Data Lookup → Formatting → Publication 

Each step is a focused component that adheres to MCP protocols. You’re not locked into a single vendor’s ecosystem—you’re free to mix the best tools for the job.

The Broader Vision

This marks a shift from AI as a black box to AI as composable infrastructure—where intelligence is modular, interoperable, and infinitely reusable, just like the Unix tools that have underpinned computing for decades.

BTW — Google Gemini’s Canvas now includes an HTML-based infographic generator, which I used to create an interactive visual version of this concept. It also includes rich metadata, offering yet another showcase of the powerful symbiosis between recent Large Language Model (LLM) innovations and the long-established (and now increasingly appreciated) power of structured data representation—rooted in the same conceptual tenets that gave rise to the World Wide Web: Linked Data Principles (where entities and entity relationship types are named using hyperlinks).

View of the Infographic version of this article using the OpenLink Data Sniffer Browser extension for discovering and visualizing document metadata

You can view the Infographic by clicking on the link below:

  • Unix Mindset, Pipes & MCP for AI: An Infographic

This is the kind of flexibility and power our Virtuoso platform delivers—seamlessly combining Data Spaces with a full-featured Web Application Server.

If you haven’t yet explored Virtuoso—or its new OpenLink AI Layer (OPAL) add-on—you’re missing a direct path to harnessing the transformative potential of AI that’s redefining the future of software.

Why wait? The future is composable, interoperable, and agentic—and Virtuoso gets you there, faster.

How Math is Visual - by Scientific American

Mike's Notes

A great demonstration of visual thinking as a discovery tool in maths.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Authors > Benoit Mandelbrot
  • Home > Handbook > 

Last Updated

04/06/2025

How Math is Visual - by Scientific American

By: Marissa Fessenden
Scientific American: 03/01/2013

Papers from Benoit Mandelbrot's office offer a peek into the mathematician's thinking process. His work and that of his contemporaries show how images can inform theory and discovery.

Markov Chain Monte Carlo: Made Simple Once and For All

Mike's Notes

A bit of helpful background.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Subscriptions > Towards Data Science
  • Home > Handbook > 

Last Updated

03/06/2025

Markov Chain Monte Carlo: Made Simple Once and For All

By: Pol Marin
Towards Data Science: 01/03/2024

Introduction to MCMC, dividing it into its simplest terms.

I recently posted an article where I used Bayesian Inference and Markov chain Monte Carlo (MCMC) to predict the CL round of 16 winners. There, I tried to explain Bayesian Statistics in relative depth, but I didn’t say much about MCMC to avoid making the post excessively long. The post:

  • Using Bayesian Modelling to Predict The Champions League

So I decided to dedicate a full post to introduce Markov Chain Monte Carlo methods for anyone interested in learning how they work mathematically and when they prove to be useful.

To tackle this post, I’ll adopt the divide-and-conquer strategy: divide the term into its simplest terms and explain them individually to then solve the big picture. So this is what we’ll go through:

  • Monte Carlo methods
  • Stochastic processes
  • Markov Chain
  • MCMC

Monte Carlo Methods

A Monte Carlo method or simulation is a type of computational algorithm that uses repeated random sampling to obtain numerical results, in the form of the likelihood of a range of possible outcomes.

In other words, a Monte Carlo simulation is used to estimate or approximate the possible outcomes or distribution of an uncertain event.

A simple example to illustrate this is rolling two dice and adding their values. We could easily compute the probability of each outcome analytically, but we could also use Monte Carlo methods to simulate 5,000 dice rolls (or more) and recover the underlying distribution.
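A minimal Python sketch of that dice simulation (the function name and seed are my choices):

```python
import random
from collections import Counter

def dice_sum_distribution(n_rolls=5000, seed=42):
    """Estimate the distribution of the sum of two dice by simulation."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) + rng.randint(1, 6)
                     for _ in range(n_rolls))
    return {s: counts[s] / n_rolls for s in range(2, 13)}

dist = dice_sum_distribution()
# The estimated probability of a 7 should be close to the true 6/36 ≈ 0.167.
print(max(dist, key=dist.get))
```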

Stochastic Processes

Wikipedia’s definition is "A stochastic or random process can be defined as a collection of random variables that is indexed by some mathematical set"[1].

In more readable terms: "it’s any mathematical process that can be modeled with a family of random variables".[2]

Let’s use a simple example to understand the concept. Imagine you set up a video camera in your favorite store and, once every 2 minutes, check how many visitors there are. We define X(0) as the initial state: the number of visitors seen at t=0. Then, 2 minutes later, we observe X(1), and so on.

The state space is the set of values that our random variables X(i) can take, ranging from 1 up to the maximum store capacity.

One of the properties of a stochastic process is that what happens at a given moment is conditioned on what happened in the preceding moments. Keeping up with our example, if we have 100 visitors at t=0, the probability of having 100 ± 20 at t=1 is greater than the probability of the count dropping to 10 (if no unexpected event happens). Therefore, these X variables aren’t independent.

Markov Chains

A Markov chain is a sequence of random values in which each value depends only on the previous one.

So it’s a stochastic process with one peculiarity: knowing the current state is as good as knowing the entire history. In mathematical terms, we say that a stochastic process is Markovian if X(t+1), conditioned on x(1), x(2), …, x(t), depends only on x(t):

P(X(t+1) = x | X(1) = x(1), …, X(t) = x(t)) = P(X(t+1) = x | X(t) = x(t))

Keeping up with our example, for it to be considered a Markov chain we would need the number of visitors in a given time – t – to only depend on the number of visitors we saw in the previous instant – t-1. That’s not true in real life but imagine it is, then we define the transition probability as the probability of going from state i to state j in a specific instant:

p(i, j) = P(X(t+1) = j | X(t) = i)

And, if that probability is time-independent, we say it’s stationary.

With this transition probability, we now define the transition matrix, which is just a matrix with all transition probabilities:

Markovian transition matrix – Image by the author

This matrix comes in handy when we want to compute the probabilities of transitioning from one state to another in n steps, which is achieved mathematically with power operations on the matrix:


P(X(t+n) = j | X(t) = i) = (Pⁿ)(i, j)

Let’s now define a new – and dumb – example, in which we consider that a striker’s probability of scoring a goal in a football (soccer) match depends only on whether he/she scored in the previous game. Because we also suppose it’s time-independent – when the match is played doesn’t matter – we are working with stationary transition probabilities.

Concretely, if a player scored in the previous match, we assume the probability of scoring again in the next game is 70% (the player is hypermotivated to keep the streak going). If the player doesn’t score, this probability drops to 40%.

Let’s put that into the transition matrix:

Transition matrix for our example:

P = | 0.7  0.3 |
    | 0.4  0.6 |

The proper way to read it is: we have two possible outcomes (goal or no goal). Row 1 defines the next game probabilities for the case in which the player has scored; row 2 does the same but for the case in which he/she hasn’t scored. Columns are read similarly: the first one relates to the probabilities of scoring and the second to the probabilities of not scoring.

So, for example, 0.7 is the probability of scoring after having scored in the previous game.

Now, what are the chances that a certain player scores in game n = 2, knowing that he/she hasn’t scored today?

Transition matrix for n=2:

P² = | 0.61  0.39 |
     | 0.52  0.48 |

If the player hasn’t scored today, we focus on the second row. As we’re interested in the chances of scoring, we focus on the first column. Where the two intersect we find 0.52: the probability of scoring two games from now is 52%.
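That n-step computation is just matrix multiplication. A pure-Python sketch (no libraries assumed) reproduces the 0.52 figure:

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Rows/columns: index 0 = "scores", index 1 = "doesn't score".
P = [[0.7, 0.3],
     [0.4, 0.6]]

P2 = mat_mul(P, P)
# Probability of scoring two games from now, given no goal today:
print(round(P2[1][0], 2))  # 0.52
```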

We might also want to work out the marginal distribution of each X(t), which we can do using the initial conditions with which the chain was initialized: X(0).

Keeping up with the example, the question would now be: knowing that the player has a 50–50 chance of scoring in the first game, what are the chances that he/she scores two games later?

Marginal distribution after two steps:

p0 · P² = (0.5, 0.5) · P² = (0.565, 0.435)

The answer is 0.565, or 56.5%.
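The marginal is the initial distribution multiplied by the n-step transition matrix. A pure-Python sketch reproduces the 56.5% figure:

```python
def vec_mat(v, m):
    """Row vector times 2x2 matrix."""
    return [sum(v[k] * m[k][j] for k in range(2)) for j in range(2)]

P = [[0.7, 0.3],
     [0.4, 0.6]]
p0 = [0.5, 0.5]          # 50-50 chance of scoring in the first game

# Marginal distribution two steps later: p0 · P²
p2 = vec_mat(vec_mat(p0, P), P)
print(round(p2[0], 3))  # 0.565
```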

What’s curious about Markov Chains is that, independently of which values we choose for p0, we might end up with the same distribution after a certain number of iterations. That’s called a stationary distribution, and this is key for MCMC.
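A quick pure-Python sketch (using the striker example's transition matrix) shows this convergence: two very different choices of p0 end up at the same stationary distribution, (4/7, 3/7) ≈ (0.571, 0.429):

```python
def step(v, m):
    """One Markov step: row vector times 2x2 transition matrix."""
    return [sum(v[k] * m[k][j] for k in range(2)) for j in range(2)]

P = [[0.7, 0.3],
     [0.4, 0.6]]

# Iterate from two very different starting distributions:
a, b = [1.0, 0.0], [0.0, 1.0]
for _ in range(50):
    a, b = step(a, P), step(b, P)

# Both converge to the same stationary distribution (4/7, 3/7).
print([round(x, 4) for x in a])  # [0.5714, 0.4286]
print([round(x, 4) for x in b])  # [0.5714, 0.4286]
```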

Markov Chain Monte Carlo (MCMC)

Now it’s time to combine both methods together.

MCMC methods are Monte Carlo simulations in which the samples are drawn as a Markov chain whose stationary distribution is the probability distribution we want to sample from. In the case of Bayesian modelling, this stationary distribution is the posterior distribution.

Simulating the chain for a given number of steps (what’s called the burn-in phase) gets us to the desired distribution. Successive samples are dependent on each other, but if we keep only one sample every few iterations and discard the rest (thinning), the retained samples are almost independent.

MCMC comes in handy when we want to perform inference for probability distributions where independent samples from the distribution cannot be easily drawn.

Regarding the different MCMC algorithms that exist, we’ll focus on the two more common ones:

Gibbs Sampling: this algorithm samples from the full conditional distributions. We sample each variable from its distribution conditioned on the current values of all the other variables, and repeat this process iteratively. For example, in a case with 3 variables, for each t in 1…N iterations we would update:


x1(t) ~ p(x1 | x2(t-1), x3(t-1))
x2(t) ~ p(x2 | x1(t), x3(t-1))
x3(t) ~ p(x3 | x1(t), x2(t))
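A hedged sketch of Gibbs sampling in pure Python; the two-variable bivariate-normal target, correlation value, and function name are my illustrative choices, not from the article:

```python
import random

def gibbs_bivariate_normal(rho=0.8, n=20000, seed=0):
    """Gibbs sampler for a bivariate standard normal with correlation rho.
    Each variable is drawn from its distribution conditional on the other:
    x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y | x."""
    rng = random.Random(seed)
    sd = (1 - rho**2) ** 0.5
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        samples.append((x, y))
    return samples[n // 10:]  # discard the burn-in phase

samples = gibbs_bivariate_normal()
# The empirical E[xy] should approach the true correlation rho = 0.8.
corr = sum(x * y for x, y in samples) / len(samples)
print(round(corr, 2))
```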

Metropolis-Hastings: usually the alternative to Gibbs when simulating the complete conditionals isn’t possible (i.e. when we cannot sample a variable conditioned on all the other ones). It works by proposing a candidate for the next step in the Markov chain – x(cand) – sampled from a simple distribution – q – built around x(t-1). We then accept the candidate with a determined probability (if it’s not accepted, the chain doesn’t change). This acceptance probability is defined by:

α = min{ 1, [ p(x(cand)) · q(x(t-1) | x(cand)) ] / [ p(x(t-1)) · q(x(cand) | x(t-1)) ] }
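A hedged Metropolis-Hastings sketch in pure Python; the standard-normal target, step size, and function name are my illustrative choices. With a symmetric random-walk proposal, the q terms cancel and the acceptance probability reduces to min(1, p(cand)/p(x)):

```python
import math
import random

def metropolis_hastings(log_target, n=20000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings. The proposal q is a normal
    centered on the current state, so the acceptance probability
    simplifies to min(1, target(cand) / target(x))."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n):
        cand = rng.gauss(x, step)
        delta = log_target(cand) - log_target(x)
        # Accept with probability min(1, p(cand)/p(x)); otherwise stay put.
        if rng.random() < math.exp(min(0.0, delta)):
            x = cand
        samples.append(x)
    return samples[n // 10:]  # drop the burn-in phase

# Target: standard normal (log-density up to a constant).
samples = metropolis_hastings(lambda z: -0.5 * z * z)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```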

Conclusion

In short, MCMC methods draw random samples conditioned only on the previous value/step, potentially deciding whether to keep each one, and repeat this many times until the chains are formed.

To schematize, let’s define the algorithm as a set of steps:

  1. Get/Assign the initial values.
  2. For each iteration:
     a) Sample the candidates from a distribution that depends only on the previous value (Markov chain).
     b) If we’re using the Metropolis-Hastings algorithm, decide whether to accept or reject the candidates by computing and using the acceptance probability.
     c) Update/Store the new values.

Resources

[1] Wikipedia contributors. (2024, January 8). Stochastic process. In Wikipedia, The Free Encyclopedia. Retrieved 19:43, February 24, 2024, from https://en.wikipedia.org/w/index.php?title=Stochastic_process&oldid=1194369849

[2] Christopher Kazakis. (2021, January 8). See the Future with Stochastic Processes. Retrieved 19:43, February 24, 2024, from https://towardsdatascience.com/stochastic-processes-a-beginners-guide-3f42fa9941b5

Review Standish Group – CHAOS 2020: Beyond Infinity

Mike's Notes

I have been researching and reviewing the material referenced by Roger Sessions. This is about the Standish Group.

There is now a collection of Standish reference material in the library as part of the Roger Sessions collection.

Resources

References

  • Standish Group – CHAOS Report 2020

Repository

  • Home > Ajabbi Research > Library > Authors > Roger Sessions
  • Home > Handbook > 

Last Updated

01/06/2025

Review Standish Group – CHAOS 2020: Beyond Infinity

By: Henny Portman
Henny Portman's Blog: 06/01/2021

A few weeks ago, I received the latest report from the Standish Group – CHAOS 2020: Beyond Infinity – written by Jim Johnson. Every two years the Standish Group publish a new CHAOS Report.

These reports include classic CHAOS data in different forms with many charts. Most of the charts come from the CHAOS database of over 50,000 in-depth project profiles of the previous 5 years. You have probably seen some of those yellow-red-green charts showing e.g., challenged, failed and successful project percentages. 

The book contains ten sections and an epilogue:

Section I:

Factors of Success describes the three factors (good sponsor, good team and good place) that the Standish Group has determined most seriously affect the outcome of a software project. Specific attention is given to how poor decision latency and emotional maturity levels affect outcomes, and to the success ladder benchmark.

Section II:

Classic CHAOS provides the familiar charts and information generally found in CHAOS reports. E.g., resolution by traditional measurement, modern measurements, pure measurements and “Bull’s Eye” measurements.

Section III:

Type and styles of projects breaks down project resolution by measurement types and styles of delivery method.

In the next three sections we get an overview of the principles for the good sponsor, the good team and the good place. Each principle is explained in detail, including the skills required to improve on it and a related chart showing the resolution of all software projects broken down by poorly skilled, moderately skilled, skilled and very skilled.

Section IV:

The Good Sponsor discusses the skills needed to be a good sponsor. The good sponsor is the soul of the project. The sponsor breathes life into a project, and without the sponsor there is no project. Improving the skills of the project sponsor is the number-one factor of success – and also the easiest to improve upon, since each project has only one. Principles for a good sponsor are: 

  • The Decision Latency principle
  • The Vision Principle
  • The Work Smart Principle
  • The Daydream Principle
  • The Influence Principle 
  • The Passionate Principle
  • The People Principle
  • The Tension Principle 
  • The Torque Principle
  • The Progress Principle.

Section V:

The Good Team discusses the skills involved in being a good team. The good team is the project’s workhorse. They do the heavy lifting. The sponsor breathes life into the project, but the team takes that breath and uses it to create a viable product that the organization can use and from which it derives value. Since we recommend small teams, this is the second easiest area to improve. Principles for a good team are: 

  • The Influential Principle
  • The Mindfulness Principle
  • The Five Deadly Sins Principle
  • The Problem-Solver Principle
  • The Communication Principle
  • The Acceptance Principle
  • The Respectfulness Principle
  • The Confrontationist Principle
  • The Civility Principle
  • The Driven Principle.

Section VI:

The Good Place covers what’s needed to provide a good place for projects to thrive. The good place is where the sponsor and team work to create the product. It’s made up of the people who support both sponsor and team. These people can be helpful or destructive. It’s imperative that the organization work to improve their skills if a project is to succeed. This area is the hardest to mitigate, since each project is touched by so many people. Principles for a good place are: 

  • The Decision Latency Principle
  • The Emotional Maturity Principle
  • The Communication Principle
  • The User Involvement Principle
  • The Five Deadly Sins Principle
  • The Negotiation Principle
  • The Competency Principle
  • The Optimization Principle
  • The Rapid Execution Principle
  • The Enterprise Architecture Principle.

Section VII:

Overview of the CHAOS Database explains the process of creating project cases and adjudicating them for inclusion in the CHAOS database.

Section VIII:

New Resolution Benchmark offers an overview of this new benchmark, which will replace the original in the CHAOS database. The Project Resolution Benchmark is a self-service instrument that uses a three-step method to help benchmark your organization against similar organizations on the basis of size, industry, project mix, types, and capability.

Section IX:

The Dutch Connection describes and celebrates the contributions made by our colleagues in the Netherlands and Belgium and their effect on our research.

Section X:

Myths and Illusions debunks some typical beliefs about “project improvement” using data points from the database. The busted myths are:

  • Successful projects have a highly skilled project manager
  • Project management tools help project success
  • All projects must have clear business objectives
  • Incomplete requirements cause challenged and failed projects.

Epilogue

The Epilogue takes a look at 60 years of software development. The Standish Group identifies four distinct evolutionary periods of developing software. The first period, which ran roughly from 1960 to 1980, is called “the Wild West”. The Waterfall Period ran from 1980 to about 2000. The Agile Period started around the year 2000 – and their prediction is that it will end shortly. They are now seeing the beginning of what they call the Infinite Flow Period, which they imagine will last at least 20 years.

In the Flow Period, there will be no project budgets, project plans, project managers, or Scrum masters. There will be a budget for the pipeline, which is a pure direct cost of the output, and a cost to manage the pipeline, which will reduce current project overhead costs by as much as 90%. This will be accomplished by reducing and eliminating most current project management activities. Functional descriptions of work will come into the pipeline and emerge fully usable. Change will happen continuously, but in small increments that keep everything current, useful, and more acceptable to users, rather than startling them with a “big bang boom” result (in a next blog I will dive into some details of the Flow method).

Conclusion:

CHAOS stands for the Comprehensive Human Appraisal for Originating Software. It’s all about the human factor. If you are looking for areas of improvement in your organizational project management skills (good sponsor, good team and good place), this guide gives a great overview of where you could get the highest benefits from your investments. It gives excellent insights into the root causes of project failure or success.

It’s a pity that this is the last CHAOS report (there will be an updated version in 2021, but that will not be a completely new CHAOS report). Given that the Standish Group recommends moving to Flow, they state that there is no need for them to continue to research software projects. I would say that not all projects are software projects. Why not collect data points from non-software projects too, start building a database, and analyze the impact of the good sponsor, the good team and the good place for those projects as well?

To order CHAOS 2020: Beyond Infinity

Ten Laws That Govern Enterprise Architecture

Mike's Notes

Another excellent article by Roger Sessions.

Resources

References

  • Reference

Repository

  • Home > Ajabbi Research > Library > Authors > Roger Sessions
  • Home > Handbook > 

Last Updated

01/06/2025

Ten Laws That Govern Enterprise Architecture

By: Roger Sessions
LinkedIn: 24/06/2016

Lead Architect of the IT Simplification Initiative (ITSI), leveraging mathematics to build simpler IT.

Every engineering discipline is governed by specific mathematical laws. Bridge designers must understand the laws of Tension and Compression. Any bridge designed in violation of these laws will collapse. Rocket ship designers must understand Newton’s Laws of Motion. Any rocket ship designed in violation of Newton’s laws will destruct. Aqueduct designers must understand the Laws of Hydraulics. Any aqueduct designed in violation of these laws will block.

How many people would knowingly drive on bridges that violate the laws of Tension and Compression? How many would ride a rocket ship that ignores Newton’s Laws of Motion? How many would hook up their toilet to aqueducts designed by people who poo-poo the laws of Hydraulics?

Like these engineering disciplines, enterprise architecture is governed by specific mathematical laws. However in marked contrast to these other disciplines, few enterprise architects understand the laws that govern their field. Even fewer executives demand that the high cost designs they are funding take into account the most fundamental laws that will determine their success.

This article is a Call to Action. It is a call to enterprise architects to start designing systems that conform to those laws rather than flouting them. It is a call to executives to start demanding that all designs be subject to a proof of conformity to these laws. To design a large IT system that violates the Laws of Complexity is every bit as negligent as designing a bridge that violates the Laws of Tension and Compression.

As a starting point for this critical discussion, here are what I believe are the ten most important Laws of Complexity:

  1. Complexity = Functional Complexity + Dependency Complexity
  2. Viability = c / Complexity
  3. Value = Useful Functionality / Complexity
  4. Complexity increases exponentially
  5. Capacity to Manage Complexity increases linearly
  6. When partitioning independent elements, partition complexity is driven by subset size.
  7. When partitioning dependent elements, partition complexity is driven by element assignment.
  8. |Non-Optimal Partitions (NOPs)| >> |Optimal Partitions (OPs)|
  9. Complexity (NOPs) >> Complexity (OPs)
  10. OPs can only be found with directed methodologies.
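Law 8 is, at bottom, a counting statement: the space of possible partitions explodes while optimal ones stay rare. As an illustrative sketch (my choice of illustration, not a computation from Sessions' article), the Bell number counts the ways n elements can be partitioned, and it grows explosively:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def bell(n):
    """Bell number: how many ways n elements can be partitioned
    into non-empty subsets. Uses B(n) = sum(C(n-1, k) * B(k))."""
    if n == 0:
        return 1
    return sum(comb(n - 1, k) * bell(k) for k in range(n))

# The search space an architect faces grows explosively with system size:
print(bell(10))   # 115975
print(bell(20))   # 51724158235372
```

Since only a tiny fraction of these partitions are optimal, random or ad hoc partitioning will almost always land on a non-optimal one, which is exactly why law 10 calls for directed methodologies.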

I will write about these laws in more detail in upcoming articles. Stay tuned!