Busy beaver function

Rado's sigma function, or busy beaver function, \(\Sigma(n)\) is equal to the maximum number of ones that can be written (in the finished tape) by an n-state, 2-color Turing machine, starting from a blank tape, before halting. It is also commonly denoted \(BB(n)\). It is one of the fastest-growing functions ever to arise in professional mathematics. In googology, only a handful of significant functions surpass it, such as the Rayo function and the xi function.

Turing machines that produce these numbers are called busy beavers.

Rado showed that the function grows faster than any computable function, and thus it is uncomputable: no algorithm that terminates after a finite number of steps can compute \(\Sigma(n)\) for arbitrary \(n\), and determining whether a given Turing machine is a busy beaver cannot, in general, be done in finitely many steps. Rado's sigma function marks the limit of recursion, which is the foundation of classical googology.

The Robot of Eternity Inn
Imagine an endless row of hotel rooms, and each room contains a lightbulb and a switch that controls it. Initially, all the rooms are dark. A robot starts at one of the rooms, and has the ability to operate switches and move to adjacent rooms.

The robot has several states that it can be in, and each state determines what it should do based on whether the current room is light or dark. For example, a robot's rules could include these states:


 * The "scared" state:
   * If the room is dark, turn on the light and move to the room to the left.
   * If the room is light, do nothing and go to the "normal" state.
 * The "normal" state:
   * If the room is light, turn off the light and move to the room on the right.
   * Otherwise, go to the "scared" state.

One special state is the "stop" state. When the robot finds itself in this state, the process is complete.

Suppose a robot has n states (not including the "stop" state), and it stops. What is the maximum number of light rooms at this point?

This system is a direct allegory of Turing machines. The hotel is the tape, the robot is the Turing machine, and dark rooms and light rooms are 0 and 1 cells.

Example
Suppose n = 3. It turns out that the following ruleset wins:


 * State "normal" (initial state):
   * If the room is dark, turn it on and move to the right. Then go to state "beware".
   * If the room is light, move left and go to state "frivolous".
 * State "beware":
   * If the room is dark, turn it on and move to the left. Then go to state "normal".
   * If the room is light, move right. (Stay in state "beware".)
 * State "frivolous":
   * If the room is dark, turn it on and move to the left. Then go to state "beware".
   * If the room is light, stop.
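As a sanity check, this ruleset can be simulated directly. A minimal Python sketch (the encoding of rules as a dictionary is my own choice):

```python
# Simulate the three-state "Eternity Inn" robot above.
# Each rule maps (state, room is light?) to (new light value, step, next state).
RULES = {
    ("normal", False):    (True, +1, "beware"),     # dark: light it, go right
    ("normal", True):     (True, -1, "frivolous"),  # light: leave it, go left
    ("beware", False):    (True, -1, "normal"),
    ("beware", True):     (True, +1, "beware"),
    ("frivolous", False): (True, -1, "beware"),
    ("frivolous", True):  (True, None, "stop"),     # light: stop
}

def run(rules, start="normal"):
    rooms, pos, state = {}, 0, start    # rooms maps position -> light on?
    while state != "stop":
        light, step, state = rules[(state, rooms.get(pos, False))]
        rooms[pos] = light
        if step is not None:
            pos += step
    return sum(rooms.values())          # number of light rooms at the end

print(run(RULES))  # 6, matching the claim that this ruleset wins for n = 3
```

The robot halts after 13 moves with 6 rooms lit, consistent with \(\Sigma(3) = 6\).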

Uncomputability
With the right rules, Turing machines (TMs) can perform any computable operation despite their apparent simplicity. If a computer can calculate something with finite time and space, a TM can also do it with finite time and space. From a computability perspective, TMs and Intel processors are one and the same, although the former is much simpler. This important fact is known as the Church-Turing thesis.

It is trivially easy to simulate a TM, but it is much harder to oversee a TM (read: observe it and determine its output). This is because some TMs never halt, and the only way to test for that is to A) simulate them forever, or B) find and prove a pattern. Option A is impossible. Option B is difficult (how does a computer recognize arbitrary patterns?) and also problematic because many TMs exhibit chaotic behavior and have no simple patterns.

The underlying issue here is that computers are no more powerful than TMs. To oversee arbitrary TMs, we need something more powerful than a TM. But, by the Church-Turing thesis, the computers we have today are computationally equivalent to TMs! Overseeing them, and thus computing BB(n), is impossible.

The Deity of Eternity Inn
The set of all TM rule sets is countably infinite: TMs can be mapped one-to-one onto the natural numbers. In fact, let's label them TM #1, TM #2, TM #3, ... The exact mapping we use is unimportant, since they are all computationally equivalent anyway.

Note that each of the TMs in this sequence can have any finite number of states. There are \((4n + 4)^{2n}\) Turing machines with \(n\) states, so TM #1 might be the single zero-state machine, TMs #2-65 the 64 one-state machines, TMs #66-20801 the 20736 two-state machines, etc.
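A quick computation confirms these index ranges under the formula \((4n+4)^{2n}\); a sketch:

```python
def tm_count(n):
    """Number of n-state, 2-color Turing machines: (4n+4)^(2n)."""
    return (4 * n + 4) ** (2 * n)

# Cumulative labelling: machines with fewer states come first.
first = 1
for n in range(3):
    last = first + tm_count(n) - 1
    print(f"{n}-state machines: TM #{first} through TM #{last}")
    first = last + 1
# 0-state machines: TM #1 through TM #1
# 1-state machines: TM #2 through TM #65
# 2-state machines: TM #66 through TM #20801
```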

We augment the Robot of Eternity Inn by adding in a god. Three new special states are introduced: "ask," "yes," and "no." If the robot enters the "ask" state, it contacts Triakula, the Goddess of Large Numbers. Triakula then does the following steps:


 * She counts all the light rooms, and calls the result x.
 * She simulates TM #x (which is impossible for anyone but a deity).
 * If TM #x halts, she puts the robot in the "yes" state. Otherwise, she puts it in the "no" state.

We ask the same question as before: given access to the god, what is the maximum number of light rooms possible when the robot halts? This is a function of n, the number of states the robot has (excluding "ask," "yes," "no," and "halt").

This new extended situation is an allegory for oracle Turing machines. There are several equivalent definitions for them; some use an additional tape for the oracle, and some use different states than "ask," "yes," and "no."
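The "ask/yes/no" mechanics can be sketched in code. The oracle here is just a stand-in callable (a real halting oracle is uncomputable), and the toy machine and its state names are invented for illustration:

```python
def run_oracle_robot(rules, oracle, start):
    """Simulate a robot whose special 'ask' state consults an oracle.
    rules maps (state, light?) to (new light value, step, next state)."""
    rooms, pos, state = {}, 0, start
    while state != "halt":
        light, step, state = rules[(state, rooms.get(pos, False))]
        rooms[pos] = light
        pos += step
        if state == "ask":
            x = sum(rooms.values())           # count the light rooms
            state = "yes" if oracle(x) else "no"
    return sum(rooms.values())

# Toy machine: light one room, ask the oracle, then act on the answer.
rules = {
    ("start", False): (True, +1, "ask"),
    ("yes", False):   (False, +1, "halt"),
    ("no", False):    (True, +1, "halt"),
}
toy_oracle = lambda x: x % 2 == 0   # stand-in only, NOT a real halting oracle
print(run_oracle_robot(rules, toy_oracle, "start"))  # 2
```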

Higher Deities of Eternity Inn
In the true spirit of googology, you can always go a step further. We can once again transcend oracle Turing machines by enumerating them TM2 #1, TM2 #2, TM2 #3, ..., where TM2 stands for "level-2 Turing machine," that is, an oracle Turing machine. Transcending those, we get higher successive tiers of Turing machines: TM3, then TM4, then TM5, ...

What do we do after all these levels? We can create a new "level-ω Turing machine," or TMω, that has access to all these levels of Turing machines. Picture a table whose rows are the levels TM, TM2, TM3, ... and whose columns are the indices #1, #2, #3, ... within each level.

Although this table is infinite in two dimensions instead of just one, we can still label its entries with natural numbers by walking through it diagonal by diagonal (or using any equivalent pairing).

We now have the definition of TMω #n, so we have successfully transcended all finite-order Turing machines.
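One standard way to do the diagonal labelling is the Cantor pairing function; a sketch:

```python
def label(level, index):
    """Cantor pairing: maps (level, index) pairs one-to-one onto the
    natural numbers, walking the table diagonal by diagonal."""
    d = level + index
    return d * (d + 1) // 2 + index

# Every pair gets a distinct label, so the 2-D table is countable.
seen = {label(a, n) for a in range(50) for n in range(50)}
print(len(seen))  # 2500 distinct labels for 2500 pairs
```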

By the way, the maximum output of a TMa with n states is denoted \(\Sigma_a(n)\). Adam P. Goucher suggested that \(\Sigma_\omega\), the maximum output of a TMω, is comparable to Rayo's function, though this has since been disproved.

Computing values
It is very hard to prove whether a specific TM is a busy beaver or not. There are \((4n+4)^{2n}\) TMs with n states to verify. This can be reduced by a few orders of magnitude using the formula \((2n-1) \cdot (4n)^{2n-2}\), which immediately eliminates the most trivial TMs.

The sigma function is uncomputable, but it is possible to evaluate \(\Sigma(n)\) for some small values of \(n\). It is easy to prove that \(\Sigma(1) = 1\): if a 1-state TM does not halt on its very first step, it will keep writing the same symbol and moving in the same direction forever, never halting. With 2, 3, and 4 states the behaviour becomes somewhat complicated, but it has been proved that \(\Sigma(2) = 4\), \(\Sigma(3) = 6\), and \(\Sigma(4) = 13\). No other values are exactly known, but some lower bounds are \(\Sigma(5) \geq 4098\) and \(\Sigma(6) \geq 3.514 \cdot 10^{18276}\).

Milton Green proved that \(\Sigma(2n) \gg 3 \uparrow^{n-2} 3\), yielding the following lower bounds:

\[\Sigma(6) > 3 \uparrow 3 = 27\]

\[\Sigma(8) > 3 \uparrow\uparrow 3 = 7625597484987\]

\[\Sigma(10) > 3 \uparrow\uparrow\uparrow 3\]

\[\Sigma(12) > 3 \uparrow\uparrow\uparrow\uparrow 3\]

\(\Sigma(6)\) is already known to be much greater than 27, so these lower bounds are very weak.
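The up-arrow values in these bounds can be checked directly for the small cases; a sketch:

```python
def up(a, k, b):
    """Knuth's up-arrow notation: a with k arrows applied to b.
    One arrow is exponentiation; more arrows iterate the level below."""
    if k == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, k - 1, up(a, k, b - 1))

print(up(3, 1, 3))  # 27
print(up(3, 2, 3))  # 7625597484987, i.e. 3^(3^3) = 3^27
```

Computing \(3 \uparrow\uparrow\uparrow 3\) this way is hopeless, of course: it is a power tower of 7625597484987 threes.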

The function's growth rate is comparable to \(f_{\omega^\text{CK}_1}(n)\) in the fast-growing hierarchy, where \(\omega^\text{CK}_1\) is the Church-Kleene ordinal, the supremum of all recursive ordinals.

\(\Sigma(64)\) has been proven to be larger than Graham's number.

Suppose we define an accelerated version of the fast-growing hierarchy:

\begin{eqnarray*} h_0(n) &=& n+1 \\ h_{\alpha+1}(n) &=& h_{\alpha}^{n+2}(n+4) \\ h_{\alpha}(n) &=& h_{\alpha[n+1]}^{n+5}(n+7) \quad \text{(for limit ordinals } \alpha\text{)} \\ \end{eqnarray*}

Then \(\Sigma(64) > h_{\omega+1}(h_{\omega+1}(4)) > \lbrace 4,3,2,2 \rbrace \gg G\) (where \(\{\}\) denotes BEAF). Bird's proof also tells us that \(\Sigma(64) > 3 \rightarrow 3 \rightarrow 3 \rightarrow 3\) in chained arrow notation.
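Small values of this hierarchy can be computed directly. The sketch below handles ordinals up to \(\omega\) only, using the standard fundamental sequence \(\omega[m] = m\); anything past the first few inputs explodes immediately:

```python
def h(alpha, n):
    """Accelerated fast-growing hierarchy from the definition above.
    alpha is a natural number, or the string 'omega' for the ordinal w
    with fundamental sequence w[m] = m."""
    if alpha == 0:
        return n + 1
    if alpha == 'omega':          # limit case: h_w(n) = h_{w[n+1]}^{n+5}(n+7)
        x = n + 7
        for _ in range(n + 5):
            x = h(n + 1, x)
        return x
    x = n + 4                     # successor case: h_{a+1}(n) = h_a^{n+2}(n+4)
    for _ in range(n + 2):
        x = h(alpha - 1, x)
    return x

print(h(1, 1))        # 8: three applications of h_0 starting from 5
print(h('omega', 0))  # 410: five applications of h_1 starting from 7
```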

Deedlit11 proved that \(\Sigma(25,2) > G\). This reduces the record for beating Graham's number from 64 states down to 25 states.

LittlePeng9 also proved that \(\Sigma(150,10) \gg N\), where N is the number that Chris Bird defined at the end of his 7-part paper on his array notation.

Moreover, it is likely (per a discussion between LittlePeng9 and Ikosarakt1) that \(\Sigma(160,2)\) exceeds the final output of Loader.c.

Note that if \(f(n)\) is any computable function, \(\Sigma(n)\) will eventually overtake it. The tricky question is when the sigma function overtakes it, and this is difficult to answer without actually evaluating \(\Sigma(n)\).

Constructing Green's TMs
Turing machines discovered by Milton Green are known as Class M Turing machines, and the kth Class M Turing machine has 2k states. The first Class M Turing machine (which simply increments the original number) uses the following rules (each line reads: state, read symbol, write symbol, move, next state):

0 _ 1 r 1
0 1 1 l 0
1 _ _ l halt
1 1 1 r 1

Next, the nth Class M Turing machine, which has 2n states, is constructed as follows:

0 _ _ l 2
0 1 1 l 0
T
2n-1 _ _ l halt
2n-1 1 _ l 2

Here T indicates the table of the (n-1)th Class M Turing machine, except that every state number in its rules is increased by one, and its halting rule is replaced with 2n-2 _ 1 r 2n-1
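The recursive construction can be sketched in code. The tuple encoding is my own; the asserts only check structural properties (state and rule counts), since verifying what the machines actually compute is the hard part:

```python
def class_m(k):
    """Rule table of the k-th Class M machine, following the construction
    above, as (state, read, write, move, next) tuples. States start at 0."""
    if k == 1:
        return [(0, '_', 1, 'r', 1), (0, 1, 1, 'l', 0),
                (1, '_', '_', 'l', 'halt'), (1, 1, 1, 'r', 1)]
    shift = lambda s: s if s == 'halt' else s + 1
    inner = []
    for (s, rd, w, m, nxt) in class_m(k - 1):
        if nxt == 'halt':                       # replace the old halting rule
            inner.append((2 * k - 2, rd, 1, 'r', 2 * k - 1))
        else:
            inner.append((shift(s), rd, w, m, shift(nxt)))
    return ([(0, '_', '_', 'l', 2), (0, 1, 1, 'l', 0)] + inner
            + [(2 * k - 1, '_', '_', 'l', 'halt'), (2 * k - 1, 1, '_', 'l', 2)])

# The k-th machine has 2k states and hence 4k rules (two per state).
print(len(class_m(3)))  # 12
```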

Finding lower bounds
It is exceedingly difficult to compute exact values of \(\Sigma(n)\), but some lower bounds can be found.

This algorithm works for smaller values of \(n\):


 * Run through all possible Turing machines with \(n\) states.
 * If a machine does not halt within t steps, for some large limit t, it is assumed to be running indefinitely and is ignored.
 * Find the machine that writes the most ones.

And this algorithm works for larger values of \(n\):


 * Take some quickly growing function \(f\).
 * Simulate the computation of \(f\) on a large input using a Turing machine with \(n\) states.

For example, if we can build a 100-state machine that computes the Goodstein function \(G\) at some large argument \(m\) and writes the result in ones, then \(\Sigma(100) \geq G(m)\).

Max shifts function
Another function with a comparable growth rate is \(S(n)\), the maximum finite number of steps that can be performed by an n-state, 2-color Turing machine before halting. Some authors refer to this function as the busy beaver function.

It has been proven that \(S(1) = 1\), \(S(2) = 6\), \(S(3) = 21\), and \(S(4) = 107\). Some lower bounds are \(S(5) \geq 47176870\) and \(S(6) \geq 7.412 \cdot 10^{36534}\).

Trivial variations
There are some other variations on the busy beaver and max shifts functions. For example, define \(\Sigma(n,m)\) as the maximum number of colored symbols that can be written (in the finished tape) with an n-state, m-color Turing machine before halting, and define \(S(n,m)\) analogously. These grow even faster than the regular \(\Sigma(n)\) and \(S(n)\), and the number of TMs relevant to \(\Sigma(n,m)\) is given by the formula \((2m(n+1))^{mn}\).
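As a consistency check, at \(m = 2\) this formula reduces to the earlier count \((4n+4)^{2n}\); a sketch:

```python
def tm_count(n, m):
    """Number of n-state, m-color Turing machines: (2m(n+1))^(mn).
    Each of the n*m table entries picks one of m symbols to write,
    one of 2 moves, and one of n+1 next states (including halt)."""
    return (2 * m * (n + 1)) ** (m * n)

# With m = 2 colors this matches (4n+4)^(2n).
for n in range(1, 6):
    assert tm_count(n, 2) == (4 * n + 4) ** (2 * n)
print(tm_count(2, 2))  # 20736
```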

The function can also be generalized to more dimensions (such as turmites, a 2-D variation) and/or to more than one Turing machine.

Another variation is obtained if we allow the Turing machine to either write a symbol or move in each step, but not both. These are called quadruple Turing machines. We can also add a "do not move" option to any of these types of Turing machines.

None of these extensions provides a significant improvement over the original function.

Higher order busy beaver functions
It is impossible for a Turing machine (or anything less powerful) to compute \(\Sigma(n)\) for arbitrary \(n\). However, we can create a second order Turing machine, or oracle Turing machine, that has access to a black box ("oracle") that can determine when an ordinary Turing machine halts. The maximum number of ones that can be written with an n-state, two-color oracle Turing machine is denoted \(\Sigma_2(n)\), the second order busy beaver function.

Although a second order Turing machine can solve the halting problem for first order Turing machines, it cannot predict when it itself halts. Thus \(\Sigma_2(n)\) is uncomputable even for an oracle Turing machine.

In general, \(\Sigma_x(n)\) can be computed by an order-\((x + 1)\) Turing machine, but not by any lower-order machine.

The order-\(x\) busy beaver function has growth rate comparable to \(f_{\omega^\text{CK}_x}(n)\) in the fast-growing hierarchy.

Unfortunately, there is no single, standard definition for these oracle Turing machines, so the higher-order sigma functions do not have any standard definition.

Pseudocode
This is an "escape time algorithm" for finding lower bounds: if a Turing machine does not halt after t steps, it is assumed to go on infinitely. Setting t to infinity would yield the correct value of BB(n), but this obviously cannot be done on a computer.

function BB(n, t):
    max := 0
    for each rule set r:
        simulate a Turing machine with rule set r for t steps
        if the machine halts:
            if number of 1's printed > max:
                max := number of 1's printed
    return max
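For tiny n this is actually runnable. A Python sketch of the same algorithm (the rule encoding is my own; halting transitions write and move before stopping, per the usual convention):

```python
from itertools import product

def bb(n, t):
    """Escape-time lower bound for BB(n): try every n-state, 2-color
    machine for at most t steps and return the most 1s any halting
    machine leaves on the tape. Machines still running after t steps
    are assumed to run forever and are ignored."""
    # An action: (symbol to write, head move, next state or 'halt').
    actions = [(w, m, s) for w in (0, 1) for m in (-1, 1)
               for s in list(range(n)) + ['halt']]
    best = 0
    # A rule set assigns one action to each (state, read symbol) pair.
    for rules in product(actions, repeat=2 * n):
        table = {(s, r): rules[2 * s + r] for s in range(n) for r in (0, 1)}
        tape, pos, state = {}, 0, 0
        for _ in range(t):
            write, move, state = table[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            if state == 'halt':
                best = max(best, sum(tape.values()))
                break
    return best

print(bb(1, 10))  # 1, the known value of Sigma(1)
print(bb(2, 10))  # 4, the known value of Sigma(2) (S(2) = 6, so t = 10 suffices)
```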

This method was used to compute \(\Sigma(3)\) and \(\Sigma(4)\) as well as the bounds for \(\Sigma(5)\) and \(\Sigma(6)\).

The "bottom-up" method of finding bounds, constructing Turing machines with large outputs, is a tricky problem. So far it has been done mostly by humans, and computer-assisted proofs bounding \(\Sigma(n)\) for large \(n\) remain largely unexplored; one example of such a program is Skelet's BBprover.