CS240 -- Lecture Notes: Recursion

Daisy Tang



This lecture introduces recursion, a very useful technique in programming. Please read Chapter 8 of our textbook.
Recursion

Generally speaking, recursion is the concept of well-defined self-reference. It is the determination of a succession of elements by operating on one or more preceding elements according to a rule or a formula involving a finite number of steps.

In computer science, recursion is a programming technique in which a function or algorithm calls itself one or more times until a specified condition is met, at which point the rest of each repetition is processed, from the last call back to the first.

For example, let's look at a recursive definition of a person's ancestors:

  • One's parents are one's ancestors
  • The parents of any ancestor are also ancestors of the person under consideration

We can write pseudocode to determine whether somebody is someone's ancestor.

FUNCTION isAncestor(Person x, Person y):
    IF x is y's parent, THEN:
        return true
    ELSE:
        return isAncestor(x, y's mom) OR isAncestor(x, y's dad)

This is a recursive function that calls itself. Notice that there is a case in which the function does not call itself recursively; otherwise, the function would keep calling itself and would never stop to return a value. Thus, a recursive function usually has a certain structure: (1) a base case, which does not call the function itself; and (2) a recursive step, which calls the function itself and moves closer to the base case.

Even with the right structure, we still need to guard against infinite recursion. You may have noticed that the function isAncestor above still has a problem: what if x is not an ancestor of y? Then the program keeps asking whether x is an ancestor of y's parents, and so on, going further and further back, and it never reaches the base case. The program never stops. The problem is that the base case is not complete. We should add a new base case:

FUNCTION isAncestor(Person x, Person y):
    IF x is y's parent, THEN:
        return true
    ELSE IF x was not born before y was born, THEN:
        return false
    ELSE:
        return isAncestor(x, y's mom) OR isAncestor(x, y's dad)

Important: Every recursion must have at least one base case, at which the recursion does not recur (i.e., does not refer to itself).

More examples of recursion:

  • Russian Matryoshka dolls. Each doll is made of solid wood or is hollow and contains another Matryoshka doll inside it.
  • Modern operating systems define file system directories recursively. A file system consists of a top-level directory, and the contents of this directory consist of files and other directories.
  • Much of the syntax in modern programming languages is defined in a recursive way. For example, an argument list consists of either (1) an argument or (2) an argument list followed by a comma and an argument.

Practice: give a recursive definition of the following data structures:

  • A linked list
  • Number of nodes in a linked list

Defining Problems in Ways That Facilitate Recursion

To design a recursive algorithm for a given problem, it is useful to think of the different ways we can subdivide this problem to define problems that have the same general structure as the original problem. This process sometimes means we need to redefine the original problem to facilitate similar-looking subproblems.

Some observations: (1) where possible, avoid recursive functions that make multiple overlapping calls to themselves, since this leads to exponential complexity; and (2) repetition in code can be achieved through recursion.

In the following examples, you should always ask yourself what the base case and the recursive step are, note the naturalness of the implementation, understand how recursion replaces the loop, and perhaps think about the running time and space usage.

 

Example 1: Factorial Calculation

We know that the factorial of n (n >= 0) is calculated by n! = n * (n-1) * (n-2) * ... * 2 * 1.  Note that the product of (n-1) * (n-2) * ... * 2 * 1 is exactly (n-1)!. Thus we can write the expression as n! = n * (n-1)!, which is the recursive expression of the factorial calculation. 

What is the base case? What is the recursive step?

public class RecursiveFactorial {
    public static void main (String[] args) {
        for (int i = 1; i < 10; i++)
            System.out.println(i + "\t" + factorial(i));
    } 
    static int factorial (int n) {              
        if (n < 2) return 1;                     // base case
        else return n * factorial(n-1);    // recursive case
    }
}

The above recursion is called a linear recursion since it makes one recursive call at a time. The loop equivalent:

public static int factorial(int n) {
    int result = 1;
    for (int i = 2; i <= n; i++) 
        result *= i;
    return result;
}

Recursion and Stacks

Let's take a closer look at the mechanism by which the compiler actually implements a recursive program. In the previous example, we saw how a recursion executes its forward and backing-out phases. The order in which the recursive process backs out is the reverse of the order in which it goes forward, so some action may be performed that involves recalling something stored during the forward phase. The compiler uses a stack to implement recursion.

  • In the forward phase, the values of local variables and parameters, and the return address, are pushed on the stack for every level of the recursion
  • In the backing-out phase, the stacked address is popped and used to return to executing the rest of the code in the calling level, and the stacked local variables and parameters are popped and used to restore the state of that call
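The two phases can be made visible by instrumenting the factorial example to print on the way down and on the way back up; a small sketch (the indent parameter is added purely for display):

```java
// Prints the forward (call) and backing-out (return) phases of the
// recursive factorial, illustrating the stack behavior described above.
public class StackTraceDemo {
    static int factorial(int n, String indent) {
        System.out.println(indent + "call factorial(" + n + ")"); // forward phase
        if (n < 2) {
            System.out.println(indent + "return 1");
            return 1;
        }
        int result = n * factorial(n - 1, indent + "  ");
        System.out.println(indent + "return " + result);          // backing-out phase
        return result;
    }

    public static void main(String[] args) {
        factorial(4, "");
    }
}
```

Running it shows the calls nesting deeper and deeper, then the returns unwinding in exactly the reverse order, which is precisely the push/pop behavior of the stack.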

Exercise

  • Exponentiation. Calculate x^n using both iteration and recursion. (Assume x > 0 and n >= 0)

 

Example 2: Reversing an Array

Let us consider the problem of reversing the n elements of an array, A, so that the first element becomes the last, the second element becomes the second to the last, and so on. We can solve this problem using linear recursion, by observing that the reversal of an array can be achieved by swapping the first and last elements and then recursively reversing the remaining elements in the array.

Algorithm ReverseArray(A, i, j):
    Input: An array A and nonnegative integer indices i and j
    Output: The reversal of the elements in A starting at index i and ending at j
    if i < j then
        Swap A[i] and A[j]
        ReverseArray(A, i+1, j-1)
    return
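The pseudocode above translates directly into Java; a minimal sketch:

```java
// Recursive array reversal: swap the ends, then recur on the middle.
public class ReverseDemo {
    static void reverseArray(int[] a, int i, int j) {
        if (i < j) {                       // base case: i >= j, nothing to do
            int tmp = a[i];                // swap A[i] and A[j]
            a[i] = a[j];
            a[j] = tmp;
            reverseArray(a, i + 1, j - 1); // recur on the remaining elements
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        reverseArray(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [5, 4, 3, 2, 1]
    }
}
```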

Exercises

  • Summing the elements of an array recursively
  • Finding the maximum element in an array A of n elements using recursion
Example 3: Towers of Hanoi

This is a standard problem for which the recursive implementation is simple, but a non-recursive implementation is quite difficult.

In the Towers of Hanoi puzzle, we are given a platform with three pegs, a, b, and c, sticking out of it. On peg a is a stack of n disks, each smaller than the one below it, so that the smallest is on the top and the largest is on the bottom. The puzzle is to move all the disks from peg a to c, moving one disk at a time, so that we never place a larger disk on top of a smaller one. The following figures give an example of the starting position and the ending position of the disks with n = 4. Let's look at an example of moving 4 disks.

                              

[Figure: the starting position, with all four disks on peg a, and the ending position, with all four disks on peg c; a = source, b = spare, c = dest]

Think about what is the base case? What is the recursive step?

At the top level, we want to move 4 disks from peg a to c, with a spare peg b. We can break the problem of moving 4 disks into three steps:

  1. Move disk 3 and smaller from peg a to b, using c as a spare peg. This can be done by recursively calling the same procedure but with 3 disks instead. After this procedure, we will have 3 smaller disks on peg b.
  2. Move disk 4 from peg a to peg c. After this procedure, we will have 3 smaller disks on peg b, disk 4 on peg c, and peg a empty.
  3. Move disk 3 and smaller from peg b to c, using a as spare peg. Again, this can be done by recursively calling the same procedure on 3 disks with different source and destination. After this procedure, we will have all the disks on peg c without breaking the rules.

The pseudocode looks like the following. We call this function to move 4 disks by MoveDisk(4, a, c, b).

Algorithm MoveDisk(disk, source, dest, spare) {
    if (disk == 1) then
        move disk from source to dest
    else
        MoveDisk(disk-1, source, spare, dest)    // step 1 above
        move disk from source to dest            // step 2 above
        MoveDisk(disk-1, spare, dest, source)    // step 3 above
}
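The pseudocode maps directly onto Java. This sketch also counts the moves, so we can check the M(n) analysis below against an actual run:

```java
// Towers of Hanoi: prints each move and counts them, so the total can be
// compared against the closed form M(n) = 2^n - 1.
public class HanoiDemo {
    static int moves = 0;

    static void moveDisk(int disk, char source, char dest, char spare) {
        if (disk == 1) {                                 // base case
            System.out.println("move disk 1 from " + source + " to " + dest);
            moves++;
        } else {
            moveDisk(disk - 1, source, spare, dest);     // step 1
            System.out.println("move disk " + disk + " from " + source + " to " + dest); // step 2
            moves++;
            moveDisk(disk - 1, spare, dest, source);     // step 3
        }
    }

    public static void main(String[] args) {
        moveDisk(4, 'a', 'c', 'b');
        System.out.println("total moves: " + moves);     // 15 = 2^4 - 1
    }
}
```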

Let's trace our solution. To visualize the recursive calling process, we generate a call tree. This is a call tree for moving 3 disks from peg a to c.

Notice that each MoveDisk call branches into two recursive calls unless it is the base case. If we want to move n disks, how many moves do we need with this recursive function?

Assume M(i) represents the number of moves needed for i disks; let's calculate how long it takes to move n disks.

  • M(1) = 1
  • M(2) = 2M(1) + 1 = 3
  • M(3) = 2M(2) + 1 = 7
  • M(4) = 2M(3) + 1 = 15 
  • ...
  • We can guess M(n) = 2^n - 1

This can be verified by plugging it into our function.

  • M(1) = 2^1 - 1
  • M(n) = 2M(n-1) + 1 = 2[2M(n-2) + 1] + 1 = ... = 2^k M(n-k) + 2^(k-1) + 2^(k-2) + ... + 2 + 1
  • M(n) = 2^n - 1 when k = n-1 (stopping at the base case)

According to legend, a 64-disk version of the puzzle lies in a Hanoi monastery, where monks work continuously toward solving it. When they complete the puzzle, the world will come to an end. Now you know the answer. How long will the world last? At one move per second, roughly 585 billion years. The universe is currently about 13.7 billion years old.

 

Example 4: Fibonacci Sequence

The Fibonacci sequence is the sequence of numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, .... The first two numbers of the sequence are both 1, while each succeeding number is the sum of the two numbers before it. We can define a function F(n) that calculates the nth Fibonacci number.

First, the base cases are: F(0) = 1 and F(1) = 1.

Now, the recursive case: F(n) = F(n-1) + F(n-2). 

Write the recursive function and the call tree for F(5).

Algorithm Fib(n) {
    if (n < 2) return 1
    else return Fib(n-1) + Fib(n-2)
}

The above recursion is called binary recursion since it makes two recursive calls instead of one. How many calls are needed to compute the kth Fibonacci number? Let nk denote the number of calls performed in the execution of Fib(k).

  • n0 = 1
  • n1 = 1
  • n2 = n1 + n0 + 1 = 3 > 2^1
  • n3 = n2 + n1 + 1 = 5 > 2^2
  • n4 = n3 + n2 + 1 = 9 > 2^3
  • n5 = n4 + n3 + 1 = 15 > 2^3
  • ...
  • nk > 2^(k/2)

This means that the Fibonacci recursion makes a number of calls that is exponential in k. In other words, using binary recursion to compute Fibonacci numbers is very inefficient. Compare this with binary search, which is very efficient at searching for items: why is this binary recursion inefficient? The main problem with the approach above is that it makes multiple overlapping recursive calls.

We can compute F(n) much more efficiently using linear recursion. One way to accomplish this conversion is to define a recursive function that computes a pair of consecutive Fibonacci numbers F(n) and F(n-1) using the convention F(-1) = 0.

Algorithm LinearFib(n) {
    Input: A nonnegative integer n
    Output: Pair of Fibonacci numbers (F(n), F(n-1))
    if (n <= 1) then
        return (1, n)        // (F(0), F(-1)) = (1, 0); (F(1), F(0)) = (1, 1)
    else
        (i, j) <-- LinearFib(n-1)
        return (i + j, i)
}
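Translated to Java, this looks as follows. A sketch: the pair is returned as an int array, and the base case is written to follow this document's convention F(0) = F(1) = 1 with F(-1) = 0.

```java
// Linear-recursive Fibonacci: each call returns the pair (F(n), F(n-1)),
// so only one recursive call is needed per level.
public class LinearFibDemo {
    static int[] linearFib(int n) {
        if (n <= 1) return new int[]{1, n};  // (F(0), F(-1)) or (F(1), F(0))
        int[] p = linearFib(n - 1);          // p = (F(n-1), F(n-2))
        return new int[]{p[0] + p[1], p[0]}; // (F(n), F(n-1))
    }

    public static void main(String[] args) {
        System.out.println(linearFib(5)[0]); // prints 8
    }
}
```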

Since each recursive call to LinearFib decreases the argument n by 1, the original call results in a series of n-1 additional calls. This performance is significantly faster than the exponential time needed by the binary recursion. Therefore, when using binary recursion, we should first try to fully partition the problem in two, or we should be sure that overlapping recursive calls are really necessary. Usually, we can eliminate overlapping recursive calls by using more memory to keep track of previous values. In fact, this approach is a central part of a technique called dynamic programming. Let's use iteration to generate the Fibonacci numbers. What's the complexity of this algorithm?

public static int IterationFib(int n) {
    if (n < 2) return 1;           // F(0) = F(1) = 1
    int f0 = 1, f1 = 1, f2 = 1;
    for (int i = 2; i <= n; i++) {
        f2 = f0 + f1;              // next number is the sum of the previous two
        f0 = f1;
        f1 = f2;
    }
    return f2;
}
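The overlapping calls can also be eliminated while keeping the binary-recursive shape, by caching previously computed values. This is the memoization idea behind the dynamic programming technique mentioned above; a sketch, again using the convention F(0) = F(1) = 1:

```java
// Memoized Fibonacci: each F(i) is computed once and cached, turning the
// exponential binary recursion into a linear-time computation.
public class MemoFib {
    static long fib(int n, long[] memo) {
        if (n < 2) return 1;                           // base cases
        if (memo[n] != 0) return memo[n];              // cached: no recomputation
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo); // compute once, remember
        return memo[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(50, new long[51]));     // fast even for n = 50
    }
}
```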

 

Example 5: Binary Search

What's the base case? What's the recursive case?

public class TestBinarySearch {
    TestBinarySearch() {
        int[] arr = {22, 33, 44, 55, 66, 77, 88, 99};
        System.out.println("search(" + 55 + "): " + BinarySearch(arr, 0, arr.length-1, 55));
        System.out.println("search(" + 50 + "): " + BinarySearch(arr, 0, arr.length-1, 50));
    }
    
    public static void main(String[] args) {
        new TestBinarySearch();
    }

    int BinarySearch(int[] arr, int start, int end, int x) {
        if (start > end) return -1;          // base case: empty range, not found
        int mid = (start + end) / 2;
        if (arr[mid] == x) return mid;       // base case: found
        if (arr[mid] < x) return BinarySearch(arr, mid+1, end, x);
        else return BinarySearch(arr, start, mid-1, x);
    }
}

Exercise

  • Summing the elements in an array using the binary recursion

Drawbacks of Recursion

Recursion consumes stack space. Every recursive method call produces a new instance of the method, with a new set of local variables. The total stack space used depends on the depth of recursive nesting and on the number of local variables and parameters per call.

Recursion may perform redundant computations. Consider the recursive computation of the Fibonacci sequence.

In sum, one has to weigh the simplicity of the code delivered by recursion against its drawbacks as described above. When a relatively simple iterative solution is possible, it is definitely a better alternative.

 

Tail Recursion

We can convert a recursive algorithm into a non-recursive algorithm, and there are some instances when we can do this conversion more easily and efficiently. Specifically, we can easily convert algorithms that use tail recursion. An algorithm uses tail recursion if it uses linear recursion and makes its recursive call as its very last operation; the recursive call must be absolutely the last thing the method does. For example, examples 2 and 5 are tail recursive and can be easily implemented using iteration. (Example 1, as written, is not strictly tail recursive, since the multiplication by n happens after the recursive call returns, but it too converts easily to the loop shown earlier.)
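Factorial can be made strictly tail recursive by carrying the partial product in an accumulator parameter; a sketch (the extra acc parameter is introduced for this purpose):

```java
// Tail-recursive factorial: the recursive call is the very last operation,
// so the call maps directly onto a loop (nothing is left to do on return).
public class TailFactorial {
    static int factorial(int n, int acc) {
        if (n < 2) return acc;            // base case: accumulator holds the result
        return factorial(n - 1, acc * n); // tail call: multiply before recursing
    }

    public static void main(String[] args) {
        System.out.println(factorial(5, 1)); // prints 120
    }
}
```

Calling factorial(5, 1) performs the same multiplications as the loop version, just expressed as parameter updates instead of variable updates.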

 

Exercises

Either write the pseudocode or the Java code for the following problems. Draw the recursion trace of a simple case. What is the running time and space requirement?

  1. Recursively searching a linked list
  2. Forward printing a linked list
  3. Reverse printing a linked list

 


Last updated: Mar.  2013