Consider the problem of computing the exponential of a given number. We would like a function that takes as arguments a base $b$ and a positive integer exponent $n$ and computes $b^n$. One way to do this is via the recursive definition

$$b^n = b \cdot b^{n-1}, \qquad b^0 = 1,$$

which translates readily into the recursive function
>>> def exp(b, n):
        if n == 0:
            return 1
        return b * exp(b, n-1)
This is a linear recursive process that requires $\Theta(n)$ steps and $\Theta(n)$ space. Just as with factorial, we can readily formulate an equivalent linear iteration that requires a similar number of steps but constant space.
>>> def exp_iter(b, n):
        result = 1
        for _ in range(n):
            result = result * b
        return result
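As a quick sanity check (not part of the original text), we can confirm that the recursive and iterative definitions agree with each other, and with Python's built-in `pow`:

```python
def exp(b, n):
    """Return b raised to the power n, computed recursively."""
    if n == 0:
        return 1
    return b * exp(b, n - 1)

def exp_iter(b, n):
    """Return b raised to the power n, computed iteratively."""
    result = 1
    for _ in range(n):
        result = result * b
    return result

# Both definitions agree with each other and with the built-in pow.
for b in (2, 3, 10):
    for n in range(8):
        assert exp(b, n) == exp_iter(b, n) == pow(b, n)
print(exp_iter(2, 10))  # → 1024
```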
We can compute exponentials in fewer steps by using successive squaring. For instance, rather than computing $b^8$ as

$$b \cdot (b \cdot (b \cdot (b \cdot (b \cdot (b \cdot (b \cdot b)))))),$$

we can compute it using three multiplications:

$$b^2 = b \cdot b, \qquad b^4 = b^2 \cdot b^2, \qquad b^8 = b^4 \cdot b^4.$$

This method works fine for exponents that are powers of 2. We can also take advantage of successive squaring in computing exponentials in general if we use the recursive rule

$$b^n = \begin{cases} (b^{\frac{1}{2}n})^2 & \text{if $n$ is even} \\ b \cdot b^{n-1} & \text{if $n$ is odd} \end{cases}$$
We can express this method as a recursive function as well:
>>> def square(x):
        return x*x

>>> def fast_exp(b, n):
        if n == 0:
            return 1
        if n % 2 == 0:
            return square(fast_exp(b, n//2))
        else:
            return b * fast_exp(b, n-1)
>>> fast_exp(2, 100)
1267650600228229401496703205376
The process evolved by fast_exp grows logarithmically with $n$ in both space and number of steps. To see this, observe that computing $b^{2n}$ using fast_exp requires only one more multiplication than computing $b^n$. The size of the exponent we can compute therefore doubles (approximately) with every new multiplication we are allowed. Thus, the number of multiplications required for an exponent of $n$ grows about as fast as the logarithm of $n$ base 2. The process has $\Theta(\log n)$ growth. The difference between $\Theta(\log n)$ growth and $\Theta(n)$ growth becomes striking as $n$ becomes large. For example, fast_exp for $n$ of 1000 requires only 15 multiplications instead of 1000.
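To see the logarithmic growth concretely, here is an instrumented sketch (not from the original text) that returns the result together with the number of multiplications performed:

```python
def fast_exp_counted(b, n):
    """Return (b ** n, number of multiplications performed)."""
    if n == 0:
        return 1, 0
    if n % 2 == 0:
        half, count = fast_exp_counted(b, n // 2)
        return half * half, count + 1   # one multiplication to square
    else:
        rest, count = fast_exp_counted(b, n - 1)
        return b * rest, count + 1      # one multiplication by b

value, count = fast_exp_counted(2, 1000)
print(count)  # → 15, versus the 1000 multiplications exp(2, 1000) performs
```

The exponent 1000 is halved on even steps and decremented on odd ones, so the chain 1000, 500, 250, 125, 124, ..., 1, 0 takes only 15 multiplications in total.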