Ackermann function

The Ackermann function \(A(x,y)\) is a recursive function originally invented by Wilhelm Ackermann and later simplified by Rózsa Péter and then by Raphael M. Robinson. The exact definition of the Ackermann function varies slightly between authors.

Robinson's definition
Robinson's version is the most commonly referred-to rendition of the Ackermann function and is defined as follows:

 * \(A(x,y) = y + 1\) if \(x = 0\),
 * \(A(x,y) = A(x - 1,1)\) if \(y = 0\),
 * \(A(x,y) = A(x - 1,A(x,y - 1))\) otherwise.

This function grows at a rate comparable to the lesser-known Sudan function.
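Robinson's three-case definition translates directly into code. A minimal sketch in Python (note that the recursion depth grows quickly, so only small inputs are practical):

```python
def ackermann(x, y):
    """Robinson's two-argument Ackermann function.

    A(0, y) = y + 1
    A(x, 0) = A(x - 1, 1)
    A(x, y) = A(x - 1, A(x, y - 1))
    """
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))
```

For example, `ackermann(2, 2)` returns 7, matching the step-by-step expansion shown below.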

The expansion of \(A(2,2)\) using Robinson's definition is given below as an example:

\[ \begin{array}{cclclcl} A(2,2) &=& A(1,A(2,1))\\ &=& A(1,A(1,A(2,0)))&&\\ &=& A(1,A(1,A(1,1)))\\ &=& A(1,A(1,A(0,A(1,0))))\\ &=& A(1,A(1,A(0,A(0,1))))\\ &=& A(1,A(1,A(0,2)))\\&=& A(1, A(1, 3))\\ &=& A(1, A(0, A(1, 2)))\\ &=& A(1, A(0, A(0, A(1, 1))))\\ &=& A(1, A(0, A(0, A(0, A(1, 0)))))\\ &=& A(1, A(0, A(0, A(0, A(0, 1)))))\\ &=& A(1, A(0, A(0, A(0, 2))))\\ &=& A(1, A(0, A(0, 3)))\\ &=& A(1, A(0, 4))\\ &=& A(1, 5)\\ &=& A(0, A(1, 4))\\ &=& A(0, A(0, A(1, 3)))\\ &=& A(0, A(0, A(0, A(1, 2))))\\ &=& A(0, A(0, A(0, A(0, A(1, 1)))))\\ &=& A(0, A(0, A(0, A(0, A(0, A(1, 0))))))\\ &=& A(0, A(0, A(0, A(0, A(0, A(0, 1))))))\\ &=& A(0, A(0, A(0, A(0, A(0, 2)))))\\ &=& A(0, A(0, A(0, A(0, 3))))\\ &=& A(0, A(0, A(0, 4)))\\ &=& A(0, A(0, 5))\\ &=& A(0, 6)\\ &=& 7 \end{array} \]

The Ackermann function is related to Knuth's up-arrow notation and Hyper operators in the following way:

\(A(x,y) = 2\uparrow^{x-2}(y+3)-3 = 2[x](y+3)-3\).

It is thus capable of extremely fast growth. For example, \(A(4,2)\) evaluates to the following value:

\(A(4,2) = (2 \uparrow^{2}5) -3 = 2 \uparrow\uparrow 5 -3 = 2 \uparrow 2 \uparrow 2 \uparrow 2 \uparrow 2 -3 = 2^{2^{2^{2^{2 }}} }-3 \approx 2.003529930 \times 10^{19,728}\)
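The up-arrow identity can be spot-checked numerically. The sketch below defines a hypothetical `up(a, k, b)` helper for \(a \uparrow^{k} b\) (with \(k = 0\) taken as multiplication, matching the identity at \(x = 2\)), verifies \(A(x,y) = 2\uparrow^{x-2}(y+3)-3\) for small inputs, and then computes \(A(4,2) = 2^{2^{16}} - 3\) exactly via Python's big integers, since the recursive definition could never reach it:

```python
def ackermann(x, y):
    # Robinson's definition.
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))

def up(a, k, b):
    """a ↑^k b: k = 0 is multiplication, k = 1 exponentiation, and so on."""
    if k == 0:
        return a * b
    if b == 0:
        return 1
    return up(a, k - 1, up(a, k, b - 1))

# Spot-check A(x,y) = 2 ↑^(x-2) (y+3) - 3 for small inputs.
assert all(ackermann(x, y) == up(2, x - 2, y + 3) - 3
           for x in (2, 3) for y in range(4))

# A(4,2) = 2^(2^16) - 3, computed directly as a big integer.
a42 = 2 ** (2 ** 16) - 3
```

The resulting `a42` has 19,729 digits and begins 2003529930…, in agreement with the approximation \(2.003529930 \times 10^{19{,}728}\) above.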

Friedman's definition
Harvey Friedman defined the Ackermann function like so:

 * \(A(x,y) = 1\) (if \(y=0\))
 * \(A(x,y) = 2y\) (if \(x=1\))
 * \(A(x,y) = A(x-1,A(x,y-1))\) (otherwise)

According to this definition, \(A(x,y)\) is thus equal to \(2 \uparrow^{x-1} y\).
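Friedman's variant can be sketched the same way, with the cases checked in the order listed:

```python
def friedman(x, y):
    """Friedman's variant: A(x,0) = 1, A(1,y) = 2y,
    otherwise A(x,y) = A(x-1, A(x, y-1))."""
    if y == 0:
        return 1
    if x == 1:
        return 2 * y
    return friedman(x - 1, friedman(x, y - 1))
```

For positive \(y\) this agrees with \(2 \uparrow^{x-1} y\): for instance, `friedman(3, 3)` gives \(2 \uparrow\uparrow 3 = 16\).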

Goucher's definition
A.P. Goucher proposed the following definition of the Ackermann function in a blog post:

 * \(f_1(n)=n+2\)
 * \(f_{m+1}(n) = f_m^n(2)\)
 * \(A(n) = f_n(n)\)
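Since \(f_m^n\) denotes \(n\)-fold iteration, Goucher's hierarchy can be sketched as follows:

```python
def f(m, n):
    """Goucher's hierarchy: f_1(n) = n + 2,
    f_{m+1}(n) = f_m iterated n times starting from 2."""
    if m == 1:
        return n + 2
    x = 2
    for _ in range(n):
        x = f(m - 1, x)
    return x

def goucher_A(n):
    # The single-argument diagonal A(n) = f_n(n).
    return f(n, n)
```

The first levels are easy to check by hand: \(f_2(n) = 2n + 2\) and \(f_3(n) = 2^{n+2} - 2\), so for example \(A(3) = f_3(3) = 30\).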

The post describes yet another variant of this function, related to the following problem:

Given a row of boxes with some number of coins in each, we can pick one box, say box \(n\), and apply one of the following rules:

 * Remove one coin from box \(n\) and add two coins to box \(n+1\).
 * Remove one coin from box \(n\) and swap the contents of boxes \(n+1\) and \(n+2\).

We can choose a strategy of picking boxes and applying rules to them. Let \(f(n)\) be the largest number of coins that can end up in the rightmost box, starting from a row of \(n\) boxes with one coin in each, when every other box is left empty. Computing exact values of this function can be tricky, but it is easy to see that \(f(1) = 1\), \(f(2) = 3\), and \(f(3) = 7\). Proving that \(f(4) = 28\) is slightly harder.
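The small values can be confirmed by exhaustive search. The sketch below (reading the second rule as swapping the contents of the two following boxes) explores every reachable configuration and records the best rightmost count among states whose other boxes are all empty:

```python
from collections import deque

def max_final_coins(n):
    """Brute-force the coin game: from n boxes with one coin each,
    find the most coins attainable in the last box with all earlier
    boxes empty."""
    start = (1,) * n
    seen = {start}
    queue = deque([start])
    best = 0
    while queue:
        state = queue.popleft()
        if all(c == 0 for c in state[:-1]):
            best = max(best, state[-1])
        for i in range(n - 1):
            if state[i] == 0:
                continue
            # Rule 1: remove a coin from box i, add two to box i+1.
            s = list(state)
            s[i] -= 1
            s[i + 1] += 2
            t = tuple(s)
            if t not in seen:
                seen.add(t)
                queue.append(t)
            # Rule 2: remove a coin from box i, swap boxes i+1 and i+2.
            if i + 2 < n:
                s = list(state)
                s[i] -= 1
                s[i + 1], s[i + 2] = s[i + 2], s[i + 1]
                t = tuple(s)
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return best
```

This reproduces \(f(1) = 1\), \(f(2) = 3\), and \(f(3) = 7\); the state space for \(n = 4\) is considerably larger.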

Other trivia
The Ackermann function is related to the Ackermann numbers, as the two exhibit equivalent growth rates.

In googology, the Ackermann function may alternatively refer to the single-argument function \(A(n) = A(n, n)\) which diagonalizes over the two-variable counterpart. \(A(n)\) is also known as the gag function.

The inverse of the single-argument Ackermann function, \(\alpha(n)\), is called the inverse Ackermann function. Since \(A(n)\) grows extremely rapidly, the inverse Ackermann function grows extremely slowly. It has seen applications in computational complexity theory, most famously in the amortized running time of the union-find (disjoint-set) data structure.

Wilhelm Ackermann's original function was devised to show the existence of computable functions that are not primitive recursive, which was an open question at the time.