A problem observed by scientific thinkers from Aristotle to Galileo is that modeling the real world is, well, nearly impossible. While dot-diagrams, differential equations & vector calculus bring theory closer & closer to reality, it’s clear that an additional degree of complexity lurks.

A fascinating branch of discrete math that attempts to explain the innate randomness & noise of complex systems is known as Cellular Automata (CA). The core question behind this branch is:

*Can We Generate Complexity From Simple Systems?*

In the 1940s, the eminent John von Neumann sought an answer to a standing question: was it theoretically (meaning mathematically) possible to design a robot that would replicate itself? Known as self-replication, this is the property of a dynamical system that yields construction of an identical copy of itself; common examples found IRL are biological cells (through cell division) & computer viruses (through host replication). Von Neumann shared the problem with famed Los Alamos Laboratory colleague Stanislaw Ulam, who suggested working with a lattice, or grid-like, setup, since non-equilibrium dynamics are not well modeled by classic equations. Attempting to model dynamic systems, they derived a framework of cells within grids; each cell, with multiple possible states, is a function of its neighboring cells. And thus the cellular automaton was born:

*Cellular Automaton* (CA) is a collection of cells, with states, on a grid of specified shape that evolves through several discrete time steps according to a set of rules based on the states of neighboring cells. The rules are then applied iteratively for as many time steps as desired.

To provide a beautiful sneak preview, take a look at the visual result of a one-dimensional grid with binary cells; even with strict constraints, it’s confirmed that complexity can indeed stem from simplicity:

Every CA is made up of three different modules: a grid with cells, cells with a set of states, & a ruleset, which is a neighborhood function over all neighboring cells. The key component here is the innate system dependence: **the state of each cell is defined by the result of some function of its neighboring cells.**

A **grid** is exactly what it sounds like: simply a table of cells of some specified length & width.

**What is the simplest possible scenario we can imagine?**

Simplest possible agent, simplest possible neighbors, simplest possible set of rules — with this extreme simplicity, can complexity still arise?

Stephen Wolfram, credited with extending CA into this sub-branch, heavily researched this simplest possible CA scenario. Subsequently named *elementary*, it is a specific, special case of CA that’s at the crux of this article. An elementary CA, the simplest possible scenario, is defined by the following three modules found in any CA:

**Grid of Cells** — One dimension (array/row)
**Set of States** — 0 or 1 (binary/shaded in)
**Ruleset** — Nearest neighbor (left/right adjacent cells)

Elementary cellular automata are the ‘simplest’ case of the family of computationally created cellular automata. Simplest here meaning that they are *one-dimensional, binary & generated by a ruleset that takes the state of nearest neighbors as its input.*

The cells in our elementary one-dimensional case will be squares, placed side by side in one horizontal row. There is no set number of cells that we must choose for our row, although the row will be bounded in a ‘viewing box’ sense, most likely a computer screen or paper printout.

The grid above contains seven (7) observable squares, or, more appropriately, cells. Next, we have to set (or observe) all possible states for each cell; since we’re optimizing for simplicity here, the smallest meaningful number of states is *two*. Which, again, points to binary cells; commonly transcribed as “0” or “1,” we’ll instead represent this property by shading a square. For example:

As stated above, the simplest neighborhood consists of the adjacent cells to the left & the right of a middle cell. In the example below, the state of the middle cell is based on *some function *(a ruleset) of its current state, the state of the cell to the left, & the state of the cell to the right.

We *still* need to define our rule set for generating a second row; however, we’ve finally collected enough numerical properties to start deriving the domain of all possible rule sets. For example, we now know that in the simplest CA there are three (3) neighbors per neighborhood; with each cell having two possible states, this leads to a total of 2³ = **8** possible neighborhood configurations:
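As a quick sanity check, these eight configurations can be enumerated in a few lines of Python (a minimal sketch; the variable names are my own, not from any ECA library):

```python
from itertools import product

# Every neighborhood is 3 binary cells, so there are 2**3 = 8 configurations.
# Listed from 111 down to 000, the conventional (Wolfram) ordering.
configs = list(product([1, 0], repeat=3))

print(len(configs))  # 8
for left, middle, right in configs:
    print(left, middle, right)
```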

Each of these eight different configurations will output the **state of a single middle cell.** Since we’re considering the simplest CA possible, with a binary state, these eight (8) configurations could each output either a 0 or a 1. For example, let’s say we wanted all eight configurations to generate a blank (or “0”) cell — we could represent this function with the following array: [0,0,0,0,0,0,0,0]. In elementary CA, a rule set is exactly such an eight-element array.

With this array domain derived, we can calculate that there are a total of 2⁸ = 256 possible ways the eight-bit array can be arranged; therefore, there’s a grand total of only 256 ways an elementary CA could be defined, **256 possible rule sets**. Another of the core reasons why Stephen Wolfram is credited with “inventing” this field is that *he was the first to map out every single one of these 256 rules*. Before we show a diagram of a few of these rules fully mapped out, let’s take a second to fully comprehend how a CA is generated.
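The mapping between a rule’s number & its eight-bit array is just the number written out in binary. A minimal Python sketch (the function name is my own, hypothetical choice):

```python
def rule_to_array(rule_number):
    """Expand a rule number (0-255) into its eight output bits,
    ordered from neighborhood 111 down to 000 (Wolfram's convention)."""
    return [int(bit) for bit in format(rule_number, "08b")]

print(rule_to_array(54))   # [0, 0, 1, 1, 0, 1, 1, 0]
print(rule_to_array(255))  # [1, 1, 1, 1, 1, 1, 1, 1]
```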

Let’s use **Rule 54**, for example, which maps the eight configurations above (left-to-right, then second row) to [0,0,1,1,0,1,1,0]; use the previous diagram with this array to double-check: all cells shaded in generates a blank (0) cell, the first two cells shaded generates a blank (0) cell, the two outer cells shaded generates a shaded (1) cell, the first cell shaded generates a shaded (1) cell, etc… Traditionally, an ECA starts with a single, middle cell shaded in. This will be our initial state, G0:

Since we’re generating the next discrete time-step, G1, for the seven different cells, we outline the seven pertinent neighborhoods (assuming any overflow cells are also blank). Still in G0, these are the seven neighborhoods observed:

In G0, the four outer cells all have the exact same neighborhood: all blank cells. This is why, though we have seven cells, we only drew four distinct neighborhoods above. We now turn to our rule set, Rule 54 ([0,0,1,1,0,1,1,0]), to generate the next discrete step in our ECA, G1. Referring back to the “Rule Set Map” visual & the Rule 54 array, we can work out the cell state for each of the four neighborhoods above:

**Three Blank Cells** = 0 (blank)
**First Cell Shaded** = 1 (shaded)
**Middle Cell Shaded** = 1 (shaded)
**Last Cell Shaded** = 1 (shaded)
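The lookup above can be sketched in Python. Here `next_state` is a hypothetical helper (not from the article) that reads the neighborhood as a 3-bit number & indexes into the Rule 54 array:

```python
RULE_54 = [0, 0, 1, 1, 0, 1, 1, 0]  # outputs for neighborhoods 111 down to 000

def next_state(left, middle, right, rule=RULE_54):
    # The neighborhood read as a 3-bit number: 111 -> 7, 000 -> 0.
    # The rule array is ordered 111..000, so index from the other end.
    return rule[7 - (left * 4 + middle * 2 + right)]

print(next_state(0, 0, 0))  # 0 - three blank cells
print(next_state(1, 0, 0))  # 1 - first cell shaded
print(next_state(0, 1, 0))  # 1 - middle cell shaded
print(next_state(0, 0, 1))  # 1 - last cell shaded
```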

We can now go ahead & fully generate the next time-step, G1! Each iteration is carried out in parallel, meaning all state updates happen simultaneously.

Take note: you don’t read this like an X/Y graph, left-to-right; instead, you read from *top-to-bottom*, each row indicative of another time-step. From here, ECAs aren’t drawn out manually but rather computed through your favorite programming language; though out of the scope of this article, there are a few great tutorials on programming the logic derived here. The visualizations that create ECAs’ famous, patterned aesthetic are a result of these iterations generated over time. For example, Rule 54 generated 18 times yields the following diagram:
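For the curious, here is one minimal way such a diagram could be computed in Python (a sketch under my own assumptions: the `evolve` helper, the 37-cell width & the characters used for shading are illustrative choices, not from the article):

```python
def evolve(row, rule):
    # Pad with blank cells so edge cells see blank overflow neighbors.
    padded = [0] + row + [0]
    return [rule[7 - (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])]
            for i in range(1, len(padded) - 1)]

rule_54 = [int(bit) for bit in format(54, "08b")]  # [0,0,1,1,0,1,1,0]

width, steps = 37, 18
row = [0] * width
row[width // 2] = 1  # G0: a single shaded middle cell

for _ in range(steps):
    print("".join("█" if cell else "·" for cell in row))
    row = evolve(row, rule_54)
```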

From a one-dimensional row, a single shaded cell, & a byte’s worth of data (the rule set), quite the captivating imagery emerges! Wolfram, as mentioned many times by now, was the first to fully generate all 256 rule sets. If you peek on over to the Wolfram website, you’ll find the following visual, which provides further stunning ECA generations:

The patterns that these 256 rule sets generate over time are the ECA equivalent of the limiting behavior of functions as they approach infinity; essentially, each of these 256 rule sets converges to a common pattern. In another fine contribution, Wolfram eventually categorized these patterns of convergence.

As Wolfram states in his legendary *A New Kind of Science*, he noticed a handful of common convergences:

Even though each pattern is different in detail, the number of fundamentally different types of patterns is very limited. Indeed, it seems that the patterns which arise can almost always be assigned quite easily to one of just four basic shapes:

These four classes are conveniently ordered in terms of increasing complexity, yet they each have additional unique features:

**Uniformity** — Almost all initial conditions lead to the same final state
**Oscillation** — Many possible final states, but all consist of a structure that either remains the same or repeats
**Random** — Seems almost random, but triangles & similar simple structures are present at some levels
**Complexity** — Intricate balance of order & randomness; localized structures are produced but move around to interact in complex ways

To clarify, Wolfram acknowledges that this classification system is by no means absolute or strict — it’s simply a visual categorization based on looking at thousands of generations with varying initial conditions across all 256 rule sets. The curious thing here, however, is that the deeper Wolfram studied the more-detailed properties of ECAs, the clearer it became that:

Most of these properties were closely correlated with the classes already identified. Indeed, in trying to predict detailed properties of a particular ECA, it was often enough just to know what class the ECA was in.

It seems counter-intuitive, particularly in math, to essentially judge a book by its cover; however, a few real-world examples exist that support this heuristic: the classification of materials into solids, liquids & gases, or of living organisms into plants & animals. These classifications are made purely on general appearance, yet under the microscope they indeed confirm similar deeper, detailed properties.

Cellular automata are an excellent example of a dynamic recursive process: each iteration is a function of the previous one, so every generation ultimately depends on all the iterations before it. Once an evolution is generated, there is a web of interdependence that’s practically impossible to unravel. Perhaps the strangest attribute of the behavior seen in an ECA is that you would assume it to be continuous, yet the very foundation of cellular automata is that the process is entirely discrete & non-continuous. Most of cellular automata’s wow-power comes from the fact that simple rules, the configurations, produce very complex behavior. Some of it is uniform, some of it is fractal, some of it is chaotic in nature & some of it is in a category all of its own, producing structures within the chaos. With this window into computing complexity, we open the path to further ECA findings: could we, given a final state of some unknown configuration, deconstruct it, undo the process, & find what the initial rules were?