Busy beaver function

The busy beaver function (a.k.a. BB function or Radó's sigma function, denoted \(\Sigma(n)\) or \(\text{BB}(n)\)) is a distinctive fast-growing function from computability theory. It is the most well-known of the uncomputable functions. It is defined as the maximum number of ones that can be left on the finished tape by an n-state, 2-color Turing machine that halts when started from a blank tape.

Radó showed that this function eventually dominates all computable functions, and thus it is uncomputable. That is, no algorithm that terminates after a finite number of steps can compute \(\Sigma(n)\) for arbitrary \(n\). However, it is computable by an oracle Turing machine with an oracle for the halting problem.

Turing machines that attain these maxima are called busy beavers. The busy beaver function is one of the fastest-growing functions to arise in professional mathematics. In googology, only a handful of significant functions surpass it; examples include the xi function, the ITTM busy beaver function, and the Rayo function.

Generally, \(\Sigma(n,m)\) denotes the maximum number of non-blank symbols that can be left on the finished tape by an n-state, m-color Turing machine that halts when started from a blank tape.

\(\Sigma(n)\) can also be defined as the largest number in the set \(T = \{n_1,n_2,\cdots,n_k\}\) of outputs of all halting Turing machines with \(n\) states and 2 colors. The size of \(T\) is not larger than \((4n+4)^{2n}\) (the total number of Turing machines with \(n\) states), which shows how small a portion of the natural numbers the values of \(\Sigma\) can pick out.

The Robot of Eternity Inn
Imagine an endless row of hotel rooms, and each room contains a lightbulb and a switch that controls it. Initially, all the rooms are dark. A robot starts at one of the rooms and has the ability to operate switches and move to adjacent rooms.

The robot has several states that it can be in, and each state determines what it should do based on whether the current room is light or dark. For example, a robot's rules could include these states:


 * The "scared" state:
  * If the room is dark, turn on the light and move to the room to the left.
  * If the room is light, do nothing and go to the "normal" state.
 * The "normal" state:
  * If the room is light, turn off the light and move to the room on the right.
  * Otherwise, go to the "scared" state.

One special state is the "stop" state. When the robot finds itself in this state, the process is complete.

Suppose a robot has n states (not including the "stop" state), and it stops. What is the maximum number of light rooms at this point?

This system is a direct allegory of Turing machines. The hotel is the tape, the robot is the Turing machine, and dark rooms and light rooms are 0 and 1 cells.

Example
Suppose n = 3. It turns out that the following ruleset wins:


 * State "normal" (initial state):
  * If the room is dark, turn it on and move to the right. Then go to state "beware".
  * If the room is light, move left and go to state "frivolous".
 * State "beware":
  * If the room is dark, turn it on and move to the left. Then go to state "normal".
  * If the room is light, move right. (Stay in state "beware".)
 * State "frivolous":
  * If the room is dark, turn it on and move to the left. Then go to state "beware".
  * If the room is light, stop.
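The ruleset above can be checked directly. The following sketch encodes the rooms as a dictionary of 0/1 cells and runs the robot until it stops (the encoding of rules as tuples is our own choice, not part of the article):

```python
# Simulation of the 3-state example ruleset above, with dark/light
# rooms as 0/1 tape cells. State names follow the article's rules.
RULES = {
    # (state, cell): (new_cell, move, next_state); -1 = left, +1 = right
    ("normal", 0):    (1, +1, "beware"),
    ("normal", 1):    (1, -1, "frivolous"),
    ("beware", 0):    (1, -1, "normal"),
    ("beware", 1):    (1, +1, "beware"),
    ("frivolous", 0): (1, -1, "beware"),
    ("frivolous", 1): (1, 0, "stop"),
}

def run(rules, start="normal"):
    tape, pos, state, steps = {}, 0, start, 0
    while state != "stop":
        cell = tape.get(pos, 0)
        new_cell, move, state = rules[(state, cell)]
        tape[pos] = new_cell
        pos += move
        steps += 1
    return sum(tape.values()), steps

light_rooms, steps = run(RULES)
print(light_rooms)  # → 6, matching Sigma(3) = 6
```

Running it leaves 6 rooms lit after 13 steps, so this ruleset indeed achieves \(\Sigma(3) = 6\).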

Uncomputability
With the right rules, Turing machines (TMs) can perform any computable operation despite their apparent simplicity. If a computer can calculate something in finite time and space (even in theory), a TM can also do it in finite (though probably longer) time and space. From a computability perspective, TMs and Intel processors are one and the same, although the former is much simpler; this important fact is known as the Church–Turing thesis.

It is easy to simulate a TM, but it is much harder to predict the behavior of a TM. This is because some TMs never halt, and the only way to test for that is to A) simulate them infinitely, or B) find and prove a pattern. Option A is physically impossible, and option B is difficult — how does a computer recognize arbitrary patterns? — and also problematic because many TMs exhibit chaotic behavior and do not have simple patterns.

The underlying issue here is that computers are no more powerful than TMs. To oversee arbitrary TMs, we need something more powerful than a TM. But, by the Church–Turing thesis, the computers we have today are computationally equivalent to TMs! Overseeing them, and thus computing \(\Sigma(n)\), is impossible.

The Deity of Eternity Inn
The set of all TM rule sets is countably infinite: TMs can be mapped one-to-one onto the natural numbers. In fact, let's label them TM #1, TM #2, TM #3, ... The exact mapping we use is unimportant, as long as some TM can recover the ruleset given the machine's number.

Note that each of the TMs in this sequence can have any finite number of states. There are \((4n + 4)^{2n}\) Turing machines with \(n\) states, so TMs #1–64 might be the one-state machines, TMs #65–20800 all the two-state machines, TMs #20801–16798016 all the three-state machines, etc.
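These block boundaries follow directly from the counting formula; a quick sketch:

```python
# Each block of the numbering has (4n+4)^(2n) entries, the number
# of distinct n-state, 2-color Turing machines.
def machine_count(n):
    """Number of distinct n-state, 2-color Turing machines."""
    return (4 * n + 4) ** (2 * n)

start = 1
for n in (1, 2, 3):
    end = start + machine_count(n) - 1
    print(f"{n}-state machines: TM #{start} to TM #{end}")
    start = end + 1
```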

We augment the Robot of Eternity Inn by adding in a god. Three new special states are introduced: "ask", "yes", and "no". If the robot enters the "ask" state, it contacts Triakula, the Goddess of Large Numbers. Triakula then does the following steps:


 * She counts all the light rooms and calls the result x.
 * She simulates TM #x (which is impossible for anyone but a deity).
 * If TM #x halts, she puts the robot in the "yes" state. Otherwise, she puts it in the "no" state.
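Triakula's three steps can be sketched in code. A real halting oracle is uncomputable, so the `halts` parameter below is a hypothetical stand-in, and the toy oracle used in the example is explicitly fake:

```python
# Sketch of the goddess's procedure. `halts` stands in for the
# (uncomputable) halting oracle and is supplied as a parameter.
def consult_goddess(tape, halts):
    x = sum(tape.values())               # 1. count the light rooms
    # 2-3. "simulate" TM #x via the oracle and report the verdict
    return "yes" if halts(x) else "no"

# Toy illustration only: pretend the even-numbered machines halt.
fake_oracle = lambda x: x % 2 == 0
print(consult_goddess({0: 1, 3: 1}, fake_oracle))  # x = 2 → "yes"
```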

We ask the same question as before. Given access to the goddess, what is the maximum number of light rooms possible when the robot halts? This is a function of n, the number of states the robot has (excluding "ask", "yes", "no", and "stop").

This new extended situation is an allegory for oracle Turing machines. There are several equivalent definitions for them; some use an additional tape for the oracle, and some use different states than "ask", "yes", and "no".

Higher Deities of Eternity Inn
In the true spirit of googology, you can always go a step further. We can once again transcend oracle Turing machines by enumerating them TM2 #1, TM2 #2, TM2 #3, ..., where TM2 stands for "level-2 Turing machine," that is, an oracle Turing machine. Transcending those, we get higher successive tiers of Turing machines: TM3, then TM4, then TM5, ...

What do we do after all these levels? We can create a new "level-ω Turing machine", or TMω, that has access to all these levels of Turing machines. Take a table whose rows are the levels TM1, TM2, TM3, ... and whose columns number the machines within each level.

Although this table extends infinitely in two dimensions instead of just one, complete enumeration using the naturals is entirely possible using a winding pattern along the diagonals (although alternative schemes are okay provided they enumerate all cell entries).
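One concrete winding scheme is Cantor's pairing function; the specific choice below is ours, since any bijection between table cells and naturals works:

```python
def pair(level, index):
    """Cantor pairing: map a cell (level, index), both >= 0, to a
    single natural number by winding diagonally through the table."""
    s = level + index
    return s * (s + 1) // 2 + index

def unpair(z):
    """Inverse of pair(): recover (level, index) from z."""
    s = 0
    while (s + 1) * (s + 2) // 2 <= z:   # find the diagonal z lies on
        s += 1
    index = z - s * (s + 1) // 2
    return s - index, index

# Every (level, index) cell gets exactly one number, and vice versa.
print([pair(a, b) for a in range(3) for b in range(3)])
```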

We now have the definition of TMω #n, so we have successfully transcended all order-n Turing machines.

By the way, the maximum output of a level-\(a\) Turing machine (TMa) with \(n\) states is denoted \(\Sigma_a(n)\).

Beyond
By adding an oracle for TMω, we can continue with what could be called a "level-ω+1 Turing machine", or TMω+1; adding another level of oracle gives TMω+2, and so on. After we define TMω+a for all positive integers a, we can use a trick similar to the one in the previous section to construct a "level-ω+ω Turing machine", which can solve the halting problem for all TMω+a.

With the help of ordinal numbers we can continue this method further, defining TMα for any countable ordinal number α. However, due to the increasing complexity of encoding higher-order oracles, this becomes very impractical to work with, which is why mathematicians have considered different ways of extending the notion of computability. One worth mentioning is the infinite time Turing machine devised by J. D. Hamkins and A. Lewis.

Background


It is very hard to prove whether a specific TM is a busy beaver or not. There exist \((4n+4)^{2n}\) TMs with n states to verify. This can be reduced to \((2n-1)(4n)^{2n-2}\), which immediately eliminates the most trivial TMs.

It is exceedingly difficult to compute exact values of \(\Sigma(n)\), but some lower bounds can be found. There are two general approaches to the problem:


 * Bottom-up Technique (for small \(n\))
 * Run through some set of possible Turing machines with \(n\) states.
 * If a machine runs for more than X steps, for some large limit X, it is assumed to be running indefinitely and is ignored.
 * Out of the machines we didn't ignore, find the machine that writes the most ones.


 * Top-down Technique (for large \(n\))
 * Take some quickly growing computable function \(f\).
 * Simulate the computation of \(f(n)\) using a Turing machine.

For example, if we can compute the Goodstein function on a machine with 100 states, then it is very likely that \(\Sigma(100) > G(n)\) for relatively large n.

The first of these methods could potentially be used to evaluate \(\Sigma(n)\) exactly for small \(n\), but it quickly becomes infeasible. For this reason, the top-down technique is generally used for large \(n\).

Note that if \(f(n)\) is any computable function, \(\Sigma(n)\) will eventually outgrow it. The tricky question is when does the sigma function outgrow it, and this is difficult to answer without actually evaluating \(\Sigma(n)\).

The function's growth rate is believed to be comparable to \(f_{\omega_1^{\text{CK}}}(n)\) in the fast-growing hierarchy associated to Kleene's \(\mathcal{O}\) with respect to a system of fundamental sequences, where \(\omega_1^{\text{CK}}\) is the Church–Kleene ordinal, the supremum of all recursive ordinals. But it is unknown which function is faster if the enumeration of Turing machines in the definition of Kleene's \(\mathcal{O}\) is chosen to be generic.

Small values
Using the bottom-up method, the following values and bounds have been found:

\begin{eqnarray} \Sigma(1) &=& 1 \\ \Sigma(2) &=& 4 \\ \Sigma(3) &=& 6 \\ \Sigma(4) &=& 13 \\ \Sigma(5) &\geq& 4,098 \\ \Sigma(6) &\geq& 3.514 \cdot 10^{18,267} \\ \end{eqnarray}

For \(\Sigma(1)\), the proof is trivial. Single-state Turing machines either halt after the first step or continue moving out in one direction infinitely. For 2, 3, and 4, behavior becomes more complicated, but it is possible to prove the values. Beyond \(\Sigma(4)\), no exact values are known. It is suspected that \(\Sigma(5) = 4,098\), but this has not yet been proven.

Using the 6-state record holder, it has been proved that \(\Sigma(7) > 10^{10^{10^{10^{18,705,352}}}}\).

Green's machines
Milton Green proved that \(\Sigma(2n) \gg 3 \uparrow^{n-2} 3\), so e.g.:

\begin{eqnarray} \Sigma(6) \gg 3 \uparrow 3 = 27 \\ \Sigma(8) \gg 3 \uparrow\uparrow 3 = 7,625,597,484,987 \\ \Sigma(10) \gg 3 \uparrow\uparrow\uparrow 3 \\ \Sigma(12) \gg 3 \uparrow\uparrow\uparrow\uparrow 3 \\ \end{eqnarray}

\(\Sigma(6)\) is already known to be much, much greater than 27, so these lower bounds are very weak.
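The small up-arrow values quoted above can be computed directly (only the single- and double-arrow cases fit in memory, of course):

```python
def arrow(a, k, b):
    """Knuth's up-arrow a ↑^k b: a ↑ b = a**b, and for k > 1,
    a ↑^k b unfolds as a ↑^(k-1) (a ↑^k (b-1)), with a ↑^k 0 = 1."""
    if k == 1:
        return a ** b
    result = 1
    for _ in range(b):          # unfold the recursion b times
        result = arrow(a, k - 1, result)
    return result

print(arrow(3, 1, 3))  # → 27
print(arrow(3, 2, 3))  # → 7625597484987
```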

Turing machines discovered by Milton Green are known as Class M Turing machines; the k-th Class M Turing machine has 2k states. The first Class M Turing machine (which simply increments the original number) uses the following rules (state, read symbol, write symbol, move, next state):

0 _ 1 r 1
0 1 1 l 0
1 _ _ l halt
1 1 1 r 1

Next, the nth Class M Turing machine (for n > 1) has 2n states and is constructed as follows:

0 _ _ l 2
0 1 1 l 0
T
2n-1 _ _ l halt
2n-1 1 _ l 2

Here T indicates the table for the (n-1)th Class M Turing machine, except that every state number in its rules is increased by one, and the halting rule is replaced with 2n-2 _ 1 r 2n-1

Using Green's machines, Shawn Ligocki showed the better bounds \(\Sigma(9) > 10 \uparrow\uparrow 28\) and \(\Sigma(11) > 3 \uparrow\uparrow\uparrow 720,618,962,331,271\).

Graham's number vs. Σ(n)
\(\Sigma(64)\) has been proven to be larger than Graham's number. Suppose we define an accelerated version of the fast-growing hierarchy:

\begin{eqnarray*} h_0(n) &=& n+1 \\ h_{\alpha+1}(n) &=& h_{\alpha}^{n+2}(n+4) \\ h_{\alpha}(n) &=& h_{\alpha[n+1]}^{n+5}(n+7) \quad (\alpha \text{ a limit ordinal}) \\ \end{eqnarray*}

Then \(\Sigma(64) > h_{\omega+1}(h_{\omega+1}(4)) > \lbrace 4,3,2,2 \rbrace \gg G\) (where \(\{\}\) represents BEAF). Bird's Proof also tells us that \(\Sigma(64) >\) \(3 \rightarrow 3 \rightarrow 3 \rightarrow 3\) in chained arrow notation.

A Googology Wiki user proved that \(\Sigma(25) > G\). Another user, using Deedlit11's results, proved that \(\Sigma(24) > G\). This reduces the record for beating Graham's number from 64 states down to 24 states. Over several years, multiple further improvements brought the bound down to \(\Sigma(18) > G\).

Larger values
Wythagoras proved \(\Sigma(38) > f_{\omega 2}(167)\), \(\Sigma(64) > f_{\omega^2}(4,098)\), and \(\Sigma(85) > f_{\varepsilon_0}(1,907)\).

Wythagoras also showed that \(\Sigma(61, 6) \gg H(3)\) and proved \(\Sigma(134, 7) \gg U(3)\) where H and U are Chris Bird's functions. Wythagoras has also shown that \(\Sigma(38,3) \gg f_{\varepsilon_0}(374,676,379)\).

Tape changes
Below are pictures illustrating tape changes for two, three and four state busy beavers and the record holder for five.

Maximum shifts function
Another function that has a comparable growth rate is \(S(n)\), the maximum finite number of steps that an n-state, 2-color Turing machine can perform. Some authors refer to this function as the busy beaver function. For all \(n\), \(S(n) \geq \Sigma(n)\) because a machine that prints \(\Sigma(n)\) ones must undergo at least as many state transitions.

Analogously, \(S(n,m)\) is the maximum finite number of steps that an n-state, m-color Turing machine can perform.

\(S(1919)\) is the smallest known value of the maximum shifts function that is independent of ZFC set theory, as long as ZFC is consistent. The first such TM constructed had 7,918 states. Thus \(S(1919)\) is the smallest known number generated by the maximum shifts function whose value is unprovable, in the sense that the equality \(\underbrace{1+ \cdots +1}_m = S(1919)\) is unprovable in ZFC set theory for any meta-theoretic natural number \(m\), as long as ZFC is consistent. Presumably \(\Sigma(1919)\) is independent of ZFC as well, but whether the machine in question leaves a large number of ones on the tape has not been determined.

Trivial extensions
One can naturally define a function \(f\) so that \(f(n) = \Sigma^n(n)\), and then even construct a recursive hierarchy around it. However, to go sufficiently beyond \(\Sigma(n)\), we need a more sophisticated paradigm.
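The iterate \(f(n) = \Sigma^n(n)\) is easy to express; since \(\Sigma\) itself is uncomputable, the sketch below takes a hypothetical stand-in for it as a parameter:

```python
# f(n) = Sigma applied n times starting at n. `sigma` is a stand-in
# for the (uncomputable) busy beaver function, passed in explicitly.
def f(sigma, n):
    x = n
    for _ in range(n):
        x = sigma(x)
    return x

# Toy stand-in for illustration: doubling instead of Sigma.
print(f(lambda x: 2 * x, 3))  # → 24  (3 → 6 → 12 → 24)
```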

Higher-order busy beaver functions
It is impossible for a Turing machine (or anything less powerful) to compute \(\Sigma(n)\) for arbitrary \(n\). However, we can create a second-order Turing machine, or oracle Turing machine, that has access to a black box ("oracle") that can determine whether an ordinary Turing machine halts. The maximum number of ones that can be written with an n-state, two-color oracle Turing machine is denoted \(\Sigma_2(n)\) — the second-order busy beaver function. Unlike the simple variations above, this new function has a sizeable advantage over the original function.

Although a second-order Turing machine can solve the halting problem for first-order Turing machines, it cannot decide halting for machines of its own kind. Thus \(\Sigma_2(n)\) is uncomputable even with respect to an oracle Turing machine.

In general, if one defines the notion of an \(x\)-th order Turing machine in a suitable way for a recursive ordinal \(x\), \(\Sigma_x(n)\) can be computed by an order-\((x + 1)\) Turing machine, but not by any lower-order machine.

Unfortunately, there is no single, standard definition for these oracle Turing machines, so the higher-order sigma functions do not have any standard definition. At least, under any reasonable formulation of the notion of a higher-order Turing machine, well-orderings of ordinal type \(\omega_1^{\textrm{CK}}\) are uncomputable even with respect to an \(x\)-th order oracle Turing machine for any recursive ordinal \(x\). In particular, the fast-growing hierarchy associated to Kleene's \(\mathcal{O}\) with respect to a system of fundamental sequences is uncomputable even with respect to such a higher-order Turing machine. Therefore it is not at all obvious whether higher-order busy beaver functions surpass the fast-growing hierarchy associated to Kleene's \(\mathcal{O}\), although googologists tend to believe, without rigorous justification, that the first-order busy beaver function is comparable to it and hence that the second-order busy beaver function significantly surpasses it.

Pseudocode
This is an "escape time algorithm" for finding lower bounds: if a Turing machine does not halt after t steps, it is assumed to run forever. Setting t to infinity would yield the correct value of BB(n), but this obviously cannot be done on a computer.

function BB(n, t):
    max := 0
    for each rule set r:
        simulate a Turing machine with ruleset r for t steps
        if the machine halts:
            if number of 1's printed > max:
                max := number of 1's printed
    return max
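A runnable version of this escape-time search is feasible only for n = 1 or 2, since the rule-set space grows as \((4n+4)^{2n}\); the encoding below is our own sketch:

```python
from itertools import product

def bb_lower_bound(n, t):
    """Escape-time search: enumerate every n-state, 2-color rule set,
    run each for at most t steps from a blank tape, and return the
    most 1s left by any machine that halted."""
    HALT = n                      # states 0..n-1 are real states
    # One rule per (state, symbol): (write, move -1/+1, next state)
    rules = list(product((0, 1), (-1, 1), range(n + 1)))
    best = 0
    for table in product(rules, repeat=2 * n):   # (4n+4)^(2n) tables
        tape, pos, state = {}, 0, 0
        for _ in range(t):
            write, move, state = table[2 * state + tape.get(pos, 0)]
            tape[pos] = write
            pos += move
            if state == HALT:
                best = max(best, sum(tape.values()))
                break
    return best

print(bb_lower_bound(2, 50))  # → 4, the true value of Sigma(2)
```

For n = 2 the search covers all 20,736 tables in well under a second; the champion halts quickly, so a modest step limit already recovers \(\Sigma(2) = 4\).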

This method was used to compute \(\Sigma(3)\) and \(\Sigma(4)\) as well as the bounds for \(\Sigma(5)\) and \(\Sigma(6)\).

The "top-down" method of finding bounds by constructing Turing machines with large outputs is a tricky problem. So far it has been carried out only by humans; computer-assisted proofs bounding \(\Sigma(n)\) for large \(n\) are yet to be explored, although programs such as Skelet's BBprover exist for verifying the behavior of individual machines.

Monotonicity
\(\Sigma(n)\) is clearly monotonic. Indeed, suppose we have an \(n\)-state Turing machine that halts with \(m\) ones on the tape while reading symbol 0. For an \((n+1)\)-state Turing machine, we can replace the halting rule with a rule that goes to the new state \(n+1\); in state \(n+1\), the machine writes a 1 and halts, leaving \(m+1\) ones and proving that \(\Sigma(n+1) \geq m+1\). If the \(n\)-state machine instead halts while reading symbol 1, we do the same, except that in state \(n+1\) the head first moves to the left or right edge of the block of 1's, then writes a 1 and halts.
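A slight variant of this construction (the new state uniformly scans right for the first blank cell, which covers both the halt-on-0 and halt-on-1 cases) can be checked in code; the table encoding is our own:

```python
# Encoding: (state, symbol) -> (write, move -1/+1, next state),
# with 'H' denoting the halt state.
def run(table, max_steps=10**6):
    tape, pos, state = {}, 0, 0
    for _ in range(max_steps):
        if state == 'H':
            return sum(tape.values())
        write, move, state = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
    raise RuntimeError("did not halt within the step limit")

def add_one_state(table, n):
    """Build an (n+1)-state machine leaving exactly one more 1:
    redirect every halt into new state n, which scans right past
    any 1s, writes a 1 on the first blank cell, and halts."""
    new = {k: (w, m, n if nxt == 'H' else nxt)
           for k, (w, m, nxt) in table.items()}
    new[(n, 1)] = (1, +1, n)    # skip over the block of 1s
    new[(n, 0)] = (1, +1, 'H')  # write the extra 1, then halt
    return new

# Worked check with the 2-state champion (Sigma(2) = 4):
champ2 = {(0, 0): (1, +1, 1), (0, 1): (1, -1, 1),
          (1, 0): (1, -1, 0), (1, 1): (1, +1, 'H')}
print(run(champ2), run(add_one_state(champ2, 2)))  # → 4 5
```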