Busy beaver function

The busy beaver function (a.k.a. BB function or Radó's sigma function, denoted \(\Sigma(n)\) or \(\text{BB}(n)\)) is a fast-growing function from computability theory. It is defined as the maximum number of ones that an n-state, 2-color Turing machine can leave on the tape, starting from a blank tape, before halting.

Radó showed that the function eventually dominates all computable functions, and thus it is uncomputable. That is, no algorithm that terminates after a finite number of steps can compute \(\Sigma(n)\) for arbitrary \(n\). However, it is computable by an oracle Turing machine with an oracle for the halting problem.

Turing machines that achieve these maximum values are called busy beavers. The busy beaver function is one of the fastest-growing functions ever to arise in professional mathematics. In googology, only a handful of significant functions surpass it, such as the xi function, the Rayo function, and derivatives of the latter such as the FOOT function.

The Robot of Eternity Inn
Imagine an endless row of hotel rooms, and each room contains a lightbulb and a switch that controls it. Initially, all the rooms are dark. A robot starts at one of the rooms, and has the ability to operate switches and move to adjacent rooms.

The robot has several states that it can be in, and each state determines what it should do based on whether the current room is light or dark. For example, a robot's rules could include these states:


 * The "scared" state:
   * If the room is dark, turn on the light and move to the room to the left.
   * If the room is light, do nothing and go to the "normal" state.
 * The "normal" state:
   * If the room is light, turn off the light and move to the room to the right.
   * Otherwise, go to the "scared" state.

One special state is the "stop" state. When the robot finds itself in this state, the process is complete.

Suppose a robot has n states (not including the "stop" state), and it stops. What is the maximum number of light rooms at this point?

This system is a direct allegory for Turing machines. The hotel is the tape, the robot is the Turing machine, and dark rooms and light rooms are 0 and 1 cells.

Example
Suppose n = 3. It turns out that the following ruleset wins:


 * State "normal" (the initial state):
   * If the room is dark, turn on the light and move to the right. Then go to state "beware".
   * If the room is light, move left and go to state "frivolous".
 * State "beware":
   * If the room is dark, turn on the light and move to the left. Then go to state "normal".
   * If the room is light, move right. (Stay in state "beware".)
 * State "frivolous":
   * If the room is dark, turn on the light and move to the left. Then go to state "beware".
   * If the room is light, stop.
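This ruleset can be checked by direct simulation. A minimal sketch in Python (the state names come from the text; the encoding of the rules as a dictionary is mine):

```python
# Rules: (state, room brightness) -> (new brightness, move, next state).
# Brightness 0 = dark, 1 = light; move -1 = left, +1 = right, 0 = stay.
RULES = {
    ("normal", 0):    (1, +1, "beware"),
    ("normal", 1):    (1, -1, "frivolous"),
    ("beware", 0):    (1, -1, "normal"),
    ("beware", 1):    (1, +1, "beware"),
    ("frivolous", 0): (1, -1, "beware"),
    ("frivolous", 1): (1,  0, "stop"),
}

def run(rules, start="normal", limit=1000):
    """Run the robot from an all-dark hotel; return (light rooms, steps)."""
    rooms, pos, state, steps = {}, 0, start, 0
    while state != "stop" and steps < limit:
        light, move, state = rules[(state, rooms.get(pos, 0))]
        rooms[pos] = light
        pos += move
        steps += 1
    return sum(rooms.values()), steps

print(run(RULES))  # (6, 13): six light rooms
```

Running it confirms that the robot stops with six light rooms, the best possible for three states (\(\Sigma(3) = 6\)).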

Uncomputability
With the right rules, Turing machines (TMs) can perform any computable operation despite their apparent simplicity. If a computer can calculate something in finite time and space (even in theory), a TM can also do it in finite (though probably longer) time and space. From a computability perspective, TMs and Intel processors are one and the same, although the former are much simpler. This important fact is known as the Church–Turing thesis. (Several other simple devices, as well as some programming languages, are computationally equivalent.)

It is easy to simulate a TM, but it is much harder to predict the behavior of a TM. This is because some TMs never halt, and the only ways to test for that are to (a) simulate them forever, or (b) find and prove a pattern. Option (a) is physically impossible, and option (b) is difficult (how does a computer recognize arbitrary patterns?) and also problematic because many TMs exhibit chaotic behavior and follow no simple pattern.

The underlying issue here is that computers are no more powerful than TMs. To oversee arbitrary TMs, we need something more powerful than a TM. But, by the Church–Turing thesis, the computers we have today are computationally equivalent to TMs! Overseeing them, and thus computing BB(n), is impossible.

The Deity of Eternity Inn
The set of all TM rule sets is countably infinite: TMs can be mapped one-to-one onto the natural numbers. In fact, let's label them TM #1, TM #2, TM #3, ... The exact mapping we use is unimportant, as long as some TM can recover the ruleset from the machine's number.

Note that each of the TMs in this sequence can have any finite number of states. There are \((4n + 4)^{2n}\) Turing machines with \(n\) states, so TMs #1-64 might be the one-state machines, TMs #65-20800 the two-state machines, TMs #20801-16798016 the three-state machines, etc.
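The count \((4n + 4)^{2n}\) arises because each of the \(2n\) (state, symbol) pairs independently chooses a symbol to write (2 options), a direction (2), and a next state (\(n + 1\), counting the halt state). A quick sketch of the resulting numbering (the function name is mine):

```python
# Each of the 2n (state, symbol) pairs picks: written symbol (2 options),
# direction (2 options), next state (n + 1 options, counting "halt"),
# for (4n + 4)^(2n) rule sets in total.
def machine_count(n):
    return (4 * n + 4) ** (2 * n)

first = 1
for n in (1, 2, 3):
    last = first + machine_count(n) - 1
    print(f"{n}-state machines: TM #{first}-#{last}")
    first = last + 1
```

This prints the ranges #1-64, #65-20800, and #20801-16798016.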

We augment the Robot of Eternity Inn with a deity. Three new special states are introduced: "ask", "yes", and "no". If the robot enters the "ask" state, it contacts Triakula, the Goddess of Large Numbers. Triakula then does the following:


 * She counts all the light rooms, and calls the result x.
 * She simulates TM #x (which is impossible for anyone but a deity).
 * If TM #x halts, she puts the robot in the "yes" state. Otherwise, she puts it in the "no" state.

We ask the same question as before. Given access to the goddess, what is the maximum possible number of light rooms when the robot halts? This is a function of n, the number of states the robot has (excluding "ask", "yes", "no", and "stop").

This new extended situation is an allegory for oracle Turing machines. There are several equivalent definitions for them; some use an additional tape for the oracle, and some use different states than "ask", "yes", and "no".

Higher Deities of Eternity Inn
In the true spirit of googology, you can always go a step further. We can once again transcend oracle Turing machines by enumerating them TM2 #1, TM2 #2, TM2 #3, ..., where TM2 stands for "level-2 Turing machine," that is, an oracle Turing machine. Transcending those, we get higher successive tiers of Turing machines: TM3, then TM4, then TM5, ...

What do we do after all these levels? We can create a new "level-ω Turing machine", or TMω, that has oracle access to all these levels of Turing machines. Picture an infinite table whose rows are the levels TM, TM2, TM3, ... and whose columns are the machine indices #1, #2, #3, ...

Although this table is infinite in two dimensions instead of just one, we can still label its entries with natural numbers by walking along its anti-diagonals (or using any equivalent pattern).

We now have the definition of TMω #n, so we have successfully transcended the Turing machines of every finite level.
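One concrete choice of labeling is the Cantor pairing function, which walks the table along its anti-diagonals. A sketch (the exact mapping is unimportant, as noted above):

```python
def pair(level, index):
    """Cantor pairing: map (level, index) to a unique natural number
    by enumerating the table along anti-diagonals (level + index constant)."""
    s = level + index
    return s * (s + 1) // 2 + index

# Every cell of a finite corner of the infinite table gets a distinct label:
labels = {pair(a, n) for a in range(50) for n in range(50)}
print(len(labels))  # 2500: no two cells collide
```

Since the mapping is computable in both directions, any TMω can recover (level, index) from a single number.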

By the way, the maximum number of ones that a halting TMa with n states can write is denoted \(\Sigma_a(n)\).

Beyond
By adding an oracle for TMω we can continue with what could be called a "level-ω+1 Turing machine", or TMω+1; adding another level of oracle gives TMω+2, and so on. After defining TMω+a for all positive integers a, we can use a trick similar to the one in the previous section to construct a "level-ω+ω Turing machine", which can solve the halting problem for every TMω+a.

With the help of ordinal numbers we can continue this method further, defining TMα for any countable ordinal number α. However, due to the increasing complexity of encoding high-order oracles, this becomes very impractical to work with, which is why mathematicians have considered different ways of extending the notion of computability. One worth mentioning is the infinite time Turing machine devised by J. D. Hamkins and A. Lewis.

Background


It is very hard to prove whether a specific TM is a busy beaver or not. There exist \((4n+4)^{2n}\) TMs with n states to verify. This can be reduced to \((2n-1)\cdot(4n)^{2n-2}\), which immediately eliminates the most trivial TMs.

It is exceedingly difficult to compute exact values of \(\Sigma(n)\), but some lower bounds can be found. There are two general approaches to the problem:


 * Bottom-up Technique (for small \(n\))
   * Run through all possible Turing machines with \(n\) states.
   * If a machine runs for more than X steps, for some large limit X, it is assumed to be running indefinitely and is ignored.
   * Out of the machines we did not ignore, find the one that writes the most ones.


 * Top-down Technique (for large \(n\))
   * Take some quickly growing function \(f\).
   * Simulate the computation of \(f\) on a Turing machine with \(n\) states.

For example, if we can compute the Goodstein function \(G\) on a machine with 100 states, then it is very likely that \(\Sigma(100) > G(n)\) for relatively large \(n\).

The first of these methods could potentially be used to evaluate \(\Sigma(n)\) exactly for small \(n\), but it quickly becomes infeasible. For this reason, the top-down technique is generally used.

Note that if \(f(n)\) is any computable function, \(\Sigma(n)\) will eventually outgrow it. The tricky question is at what point the sigma function outgrows it, and this is difficult to answer without actually evaluating \(\Sigma(n)\).

The function's growth rate is believed to be comparable to \(f_{\omega_1^\text{CK}}(n)\) in the fast-growing hierarchy, where \(\omega_1^\text{CK}\) is the Church-Kleene ordinal, the smallest non-recursive ordinal.

Small values
Using the bottom-up method, the following values and bounds have been found:

\(\Sigma(1) = 1\), \(\Sigma(2) = 4\), \(\Sigma(3) = 6\), \(\Sigma(4) = 13\), \(\Sigma(5) \geq 4098\).

For \(\Sigma(1)\), the proof is trivial: single-state Turing machines either halt after the first step or keep moving in one direction forever. For 2, 3, and 4 states, the behavior becomes more complicated, but it is still possible to prove the values. Beyond \(\Sigma(4)\), no exact values are known. It is suspected that \(\Sigma(5) = 4098\), but this has not yet been proven.

Using the 6-state record holder, it has been proved that \(\Sigma(7) > 10^{10^{10^{10^{18705352}}}}\).

Green's machines
Milton Green proved that \(\Sigma(2n) \gg 3 \uparrow^{n-2} 3\), so e.g.:

\begin{eqnarray} \Sigma(6) \gg 3 \uparrow 3 = 27 \\ \Sigma(8) \gg 3 \uparrow\uparrow 3 = 7625597484987 \\ \Sigma(10) \gg 3 \uparrow\uparrow\uparrow 3 \\ \Sigma(12) \gg 3 \uparrow\uparrow\uparrow\uparrow 3 \\ \end{eqnarray}

\(\Sigma(6)\) is already known to be much, much greater than 27, so these lower bounds are very weak.
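The first two of Green's bounds can be checked with a direct implementation of Knuth's up-arrow notation (a sketch; the function name is mine):

```python
def up(a, n, b):
    """Knuth's up-arrow a ^(n) b: with one arrow it is exponentiation,
    and a ^(n) b = a ^(n-1) (a ^(n) (b - 1)), with a ^(n) 0 = 1."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 27
print(up(3, 2, 3))  # 7625597484987
```

Evaluating up(3, 3, 3) this way is hopeless, since \(3 \uparrow\uparrow\uparrow 3\) is a power tower of height 7625597484987.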

Turing machines discovered by Milton Green are known as Class M Turing machines; the k-th Class M Turing machine has 2k states. The first Class M Turing machine (which simply increments a block of ones) has the following rules (columns: state, symbol read, symbol written, direction, next state):

0 _ 1 r 1
0 1 1 l 0
1 _ _ l halt
1 1 1 r 1
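Reading each rule as (state, symbol read) → (symbol written, direction, next state), this first Class M machine can be checked by simulation. A sketch, assuming the head starts on the leftmost 1 of a block of ones (the blank "_" is encoded as 0):

```python
# First Class M machine, transcribed rule by rule.
RULES = {
    (0, 0): (1, +1, 1),       # 0 _ 1 r 1
    (0, 1): (1, -1, 0),       # 0 1 1 l 0
    (1, 0): (0, -1, "halt"),  # 1 _ _ l halt
    (1, 1): (1, +1, 1),       # 1 1 1 r 1
}

def run(tape_ones, rules, limit=10_000):
    """Start in state 0 on the leftmost 1 of a block of `tape_ones` ones."""
    tape = {i: 1 for i in range(tape_ones)}
    pos, state = 0, 0
    for _ in range(limit):
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == "halt":
            return sum(tape.values())
    raise RuntimeError("step limit exceeded")

print(run(3, RULES))  # 4: the machine turned 3 ones into 4
```

The machine walks left, appends one extra 1, scans back to the right end, and halts: it increments the block.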

Next, the nth Class M Turing machine has 2n states and is constructed as follows:

0 _ _ l 2
0 1 1 l 0
T
2n-1 _ _ l halt
2n-1 1 _ l 2

Here T indicates the table for the (n-1)th Class M Turing machine, except that every state number in its rules is increased by one, and its halting rule is replaced with 2n-2 _ 1 r 2n-1.

Graham's number vs. BB(n)
\(\Sigma(64)\) has been proven to be larger than Graham's number. Suppose we define an accelerated version of the fast-growing hierarchy:

\begin{eqnarray*} h_0(n) &=& n+1 \\ h_{\alpha+1}(n) &=& h_{\alpha}^{n+2}(n+4) \\ h_{\alpha}(n) &=& h_{\alpha[n+1]}^{n+5}(n+7) \text{ (for limit ordinals } \alpha) \\ \end{eqnarray*}
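As a worked check of these recurrences (a sketch using only the displayed rules), the first two levels collapse to closed forms:

\begin{eqnarray*} h_1(n) &=& h_0^{n+2}(n+4) = (n+4) + (n+2) = 2n+6 \\ h_2(n) &=& h_1^{n+2}(n+4) = 2^{n+2}(n+10) - 6 \end{eqnarray*}

where the second line uses \(h_1^k(x) = 2^k(x+6) - 6\). So \(h_2\) already grows exponentially, \(h_3\) grows like a power tower, and \(h_\omega\) is roughly at the level of the Ackermann function.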

Then \(\Sigma(64) > h_{\omega+1}(h_{\omega+1}(4)) > \lbrace 4,3,2,2 \rbrace \gg G\) (where \(\lbrace\rbrace\) denotes BEAF). Bird's proof also tells us that \(\Sigma(64) > 3 \rightarrow 3 \rightarrow 3 \rightarrow 3\) in chained arrow notation.

A Googology Wiki user proved that \(\Sigma(25) > G\). Another user, building on Deedlit11's results, proved that \(\Sigma(24) > G\), reducing the record for beating Graham's number from 64 states to 24. Over several years, further improvements brought the bound down to \(\Sigma(18) > G\).

Larger values
Wythagoras proved \(\Sigma(38) > f_{\omega 2}(167)\), \(\Sigma(64) > f_{\omega^2}(4098)\), and \(\Sigma(85) > f_{\varepsilon_0}(1907)\).

Wythagoras also showed that \(\Sigma(61, 6) \gg H(3)\) and \(\Sigma(134, 7) \gg U(3)\), where H and U are Chris Bird's functions, and that \(\Sigma(38,3) \gg f_{\varepsilon_0}(374676379)\).

\(\text{BB}(1919)\) is the smallest value of the busy beaver sequence currently known to be independent of ZFC. The first such TM constructed had 7,918 states.

Tape changes
Below are pictures illustrating the tape changes for the two-, three-, and four-state busy beavers and for the five-state record holder.

Frantic frog function

 * Main article: Frantic frog function

Another function with a comparable growth rate is \(S(n)\), the maximum finite number of steps that an n-state, 2-color Turing machine can perform before halting. Some authors refer to this function as the busy beaver function. For all \(n\), \(S(n) \geq \Sigma(n)\), because a machine that prints \(\Sigma(n)\) ones must take at least \(\Sigma(n)\) steps: each step writes at most one 1.

Higher order busy beaver functions
It is impossible for a Turing machine (or anything less powerful) to compute \(\Sigma(n)\) for arbitrary \(n\). However, we can create a second order Turing machine, or oracle Turing machine, that has access to a black box (an "oracle") that can determine whether an ordinary Turing machine halts. The maximum number of ones that can be written by an n-state, two-color oracle Turing machine is denoted \(\Sigma_2(n)\): the second order busy beaver function. Unlike the simple variations above, this new function has a sizeable advantage over the original function.

Although a second order Turing machine can solve the halting problem for first order Turing machines, it cannot solve the halting problem for machines of its own order. Thus \(\Sigma_2(n)\) is uncomputable even by an oracle Turing machine.

In general, \(\Sigma_x(n)\) can be computed by an order-\((x + 1)\) Turing machine, but not by any lower-order machine.

Unfortunately, there is no single, standard definition for these oracle Turing machines, so the higher-order sigma functions do not have any standard definition.

Pseudocode
This is an "escape time algorithm" for finding lower bounds: if a Turing machine does not halt after t steps, it is assumed to run forever. Setting t to infinity would yield the correct value of BB(n), but this obviously cannot be done on a computer.

function BB(n, t):
    max := 0
    for each rule set r:
        simulate a Turing machine with ruleset r for t steps
        if the machine halts:
            if number of 1's printed > max:
                max := number of 1's printed
    return max
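The pseudocode can be sketched as runnable Python for the two-state case (the function names are mine; with the cutoff t = 100 it reproduces \(\Sigma(2) = 4\)):

```python
from itertools import product

def run(rules, t):
    """Simulate a 2-symbol TM from a blank tape for at most t steps.
    Return the number of 1s on the tape if it halts, else None."""
    tape, pos, state = {}, 0, 0
    for _ in range(t):
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == "halt":
            return sum(tape.values())
    return None  # assumed to run forever (the escape-time cutoff)

def bb_lower_bound(n, t):
    """Escape-time search: most 1s over all n-state machines halting within t steps."""
    keys = [(s, c) for s in range(n) for c in (0, 1)]
    # Each (state, symbol) pair picks: write (2), direction (2), next state (n + 1).
    options = list(product((0, 1), (-1, +1), list(range(n)) + ["halt"]))
    best = 0
    for choice in product(options, repeat=2 * n):
        ones = run(dict(zip(keys, choice)), t)
        if ones is not None and ones > best:
            best = ones
    return best

print(bb_lower_bound(2, 100))  # 4, the exact value of Sigma(2)
```

The search over all \(12^4 = 20736\) two-state rule sets takes a few seconds; for three states, \(16^6\) rule sets already make this approach slow, which is why the published searches rely on heavy pruning.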

This method was used to compute \(\Sigma(3)\) and \(\Sigma(4)\) as well as the bounds for \(\Sigma(5)\) and \(\Sigma(6)\).

The "top-down" method of finding bounds by constructing Turing machines with large outputs is a tricky problem. So far it has been carried out mostly by humans, and computer-assisted proofs bounding \(\Sigma(n)\) for large \(n\) remain largely unexplored. One early step in this direction is Skelet's BBprover program.