Recursion 
Generally speaking, recursion is the concept of well-defined self-reference: a succession of elements is determined by operating on one or more preceding elements according to a rule or formula involving a finite number of steps. In computer science, recursion is a programming technique in which a function or algorithm calls itself one or more times until a specified condition is met, at which point the rest of each repetition is processed from the last call back to the first. For example, let's look at a recursive definition of a person's ancestors:
We can write pseudocode to determine whether somebody is someone's ancestor. FUNCTION isAncestor(Person x, Person y): This is a recursive function that calls itself. Notice that there is a case in which the function does not call itself recursively; otherwise, the function would keep calling itself and would never stop to return a value. Thus, a recursive function usually has a certain structure: (1) a base case, which does not call the function itself; and (2) a recursive step, which calls the function itself and moves closer to the base case. Even with the right structure, we still need to guard against infinite recursion. You may have noticed that the above isAncestor function still has a problem. What if x is not an ancestor of y? Then the program keeps asking whether x is an ancestor of y's parents, and so on: it never reaches the base case, because the chain of parents goes ever further back, and the program never stops. The problem is that the base case is incomplete. We should add a new base case: FUNCTION isAncestor(Person x, Person y): Important: every recursion must have at least one base case, at which the recursion does not recur (i.e., does not refer to itself). More examples of recursion:
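As a sketch of the corrected function in Java, assuming a hypothetical Person class whose parents list is empty when no parents are known (the class and its fields are our invention for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Person class for illustration only.
class Person {
    final List<Person> parents = new ArrayList<>();
}

public class Ancestry {
    // x is an ancestor of y if x is one of y's parents,
    // or if x is an ancestor of one of y's parents.
    public static boolean isAncestor(Person x, Person y) {
        if (y.parents.isEmpty()) {
            return false;                  // new base case: no known parents, so x cannot be an ancestor
        }
        for (Person p : y.parents) {
            if (p == x || isAncestor(x, p)) {   // base case match, or recursive step up the family tree
                return true;
            }
        }
        return false;
    }
}
```

The added base case guarantees termination: each recursive call climbs one generation, and the recursion stops as soon as a person with no recorded parents is reached.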
Practice: give a recursive definition of the following data structures:
Defining Problems in Ways That Facilitate Recursion
To design a recursive algorithm for a given problem, it is useful to think of the different ways we can subdivide the problem into subproblems that have the same general structure as the original. This sometimes means we need to redefine the original problem to facilitate similar-looking subproblems. Some observations: (1) where possible, avoid recursive functions that make multiple overlapping calls to themselves, which leads to exponential complexity; and (2) repetition in code can be achieved through recursion. In the following examples, you should always ask yourself what the base case and the recursive step are, note the naturalness of the implementation, understand how the loop-replacement feature of recursion is involved, and perhaps think about the running time and space usage.

Example 1: Factorial Calculation 
We know that the factorial of n (n >= 0) is calculated by n! = n * (n-1) * (n-2) * ... * 2 * 1. Note that the product (n-1) * (n-2) * ... * 2 * 1 is exactly (n-1)!. Thus we can write the expression as n! = n * (n-1)!, which is the recursive expression of the factorial calculation. What is the base case? What is the recursive step? public class RecursiveFactorial { The above recursion is called a linear recursion, since it makes at most one recursive call each time it is invoked. The loop equivalent: public static int factorial(int n) {
Recursion and Stacks
Let's take a closer look at the mechanism by which a recursive program is actually implemented by the compiler. In the previous example, we saw how a recursion executes its forward and backing-out phases. The order in which the recursive process backs out is the reverse of the order in which it goes forward; thus some action may be performed that involves recalling something stored during the forward phase. The compiler uses a stack to implement recursion.
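A sketch of the two versions referenced above (the class and method names follow the notes; the bodies are our reconstruction, assuming inputs small enough that int does not overflow):

```java
public class RecursiveFactorial {
    // Recursive version: linear recursion with one call per invocation.
    public static int factorial(int n) {
        if (n == 0) {
            return 1;                     // base case: 0! = 1
        }
        return n * factorial(n - 1);      // recursive step: n! = n * (n-1)!
    }

    // Loop equivalent of the same computation.
    public static int factorialLoop(int n) {
        int result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }
}
```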
Exercise

Example 2: Reversing an Array 
Let us consider the problem of reversing the n elements of an array A, so that the first element becomes the last, the second element becomes the second to last, and so on. We can solve this problem using linear recursion, by observing that the reversal of an array can be achieved by swapping the first and last elements and then recursively reversing the remaining elements. Algorithm ReverseArray(A, i, j):
Exercises
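The algorithm might be sketched in Java like this, where i and j are the indices of the first and last elements of the range still to be reversed:

```java
public class ReverseArray {
    // Reverse A[i..j] in place: swap the two ends, then recurse inward.
    public static void reverseArray(int[] A, int i, int j) {
        if (i < j) {                      // base case: i >= j means 0 or 1 elements remain
            int tmp = A[i];
            A[i] = A[j];
            A[j] = tmp;
            reverseArray(A, i + 1, j - 1);  // recursive step on the inner subarray
        }
    }
}
```

The whole array is reversed by calling reverseArray(A, 0, A.length - 1).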

Example 3: Towers of Hanoi 
This is a standard problem for which the recursive implementation is trivial but a non-recursive implementation is notoriously difficult. In the Towers of Hanoi puzzle, we are given a platform with three pegs, a, b, and c, sticking out of it. On peg a is a stack of n disks, each smaller than the one beneath it, so that the smallest is on the top and the largest is on the bottom. The puzzle is to move all the disks from peg a to peg c, moving one disk at a time, so that we never place a larger disk on top of a smaller one. [Figure: starting and ending positions for n = 4, with pegs labeled a (source), b (spare), c (dest).] Think about: what is the base case? What is the recursive step? At the top level, we want to move 4 disks from peg a to c, with b as a spare peg. We can break the problem of moving 4 disks into three steps:
The pseudocode looks like the following. We call this function to move 4 disks by MoveDisk(4, a, c, b). Algorithm MoveDisk(disk, source, dest, spare) {
Let's trace our solution. To visualize the recursive calling process, we generate a call tree. This is the call tree for moving 3 disks from peg a to c. Notice that each MoveDisk call branches into two further calls unless it is the base case. If we want to move n disks, how many moves does this recursive function need? Let M(n) denote the number of moves for n disks. Each call moves n - 1 disks aside, moves one disk, and moves the n - 1 disks again, so M(n) = 2M(n - 1) + 1 with M(1) = 1, which solves to M(n) = 2^n - 1.
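A Java sketch of MoveDisk, with a move counter added so the 2^n - 1 count can be checked (the counter is our addition, not part of the notes' pseudocode):

```java
public class Hanoi {
    static int moves = 0;   // counts moves so the closed form can be verified

    static void moveDisk(int disk, char source, char dest, char spare) {
        if (disk == 1) {
            // base case: a single disk moves directly to the destination
            System.out.println("Move disk 1 from " + source + " to " + dest);
            moves++;
        } else {
            moveDisk(disk - 1, source, spare, dest);  // move n-1 disks out of the way
            System.out.println("Move disk " + disk + " from " + source + " to " + dest);
            moves++;                                   // move the largest disk
            moveDisk(disk - 1, spare, dest, source);  // move n-1 disks onto it
        }
    }

    public static void main(String[] args) {
        moveDisk(4, 'a', 'c', 'b');   // the 4-disk instance from the notes
    }
}
```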
This can be verified by plugging it into our function.
A 64-disk version of the puzzle lies in a Hanoi monastery, where monks continuously work toward solving the puzzle. When they complete the puzzle, the world will come to an end. Now you know the answer. How long will the world last? At one move per second, 2^64 - 1 moves take roughly 585 billion years. The universe is currently about 13.7 billion years old.

Example 4: Fibonacci Sequence 
The Fibonacci sequence is the sequence of numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, .... The first two numbers of the sequence are both 1, while each succeeding number is the sum of the two numbers before it. We can define a function F(n) that calculates the nth Fibonacci number. First, the base cases: F(0) = 1 and F(1) = 1. Now, the recursive case: F(n) = F(n-1) + F(n-2). Write the recursive function and the call tree for F(5). Algorithm Fib(n) { The above recursion is called binary recursion, since it makes two recursive calls instead of one. How many calls are needed to compute the kth Fibonacci number? Let n_k denote the number of calls performed in the execution.
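Under the notes' convention F(0) = F(1) = 1, the binary recursion can be sketched as:

```java
public class Fib {
    // Binary recursion: two recursive calls per invocation,
    // directly mirroring F(n) = F(n-1) + F(n-2).
    public static int fib(int n) {
        if (n <= 1) {
            return 1;                     // base cases: F(0) = F(1) = 1
        }
        return fib(n - 1) + fib(n - 2);   // recursive case
    }
}
```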
This means that the Fibonacci recursion makes a number of calls that is exponential in k. In other words, using binary recursion to compute Fibonacci numbers is very inefficient. Compare this problem with binary search, which is very efficient at searching for items: why is this binary recursion inefficient? The main problem with the approach above is that it makes multiple overlapping recursive calls. We can compute F(n) much more efficiently using linear recursion. One way to accomplish this conversion is to define a recursive function that computes a pair of consecutive Fibonacci numbers, F(n) and F(n-1), using the convention F(-1) = 0. Algorithm LinearFib(n) { Since each recursive call to LinearFib decreases the argument n by 1, the original call results in a series of n - 1 additional calls. This performance is significantly faster than the exponential time needed by the binary recursion. Therefore, when using binary recursion, we should first try to fully partition the problem in two, or we should be sure that overlapping recursive calls are really necessary. Usually, we can eliminate overlapping recursive calls by using more memory to keep track of previous values; in fact, this approach is a central part of a technique called dynamic programming. Let's use iteration to generate the Fibonacci numbers. What's the complexity of this algorithm? public static int IterationFib(int n) {
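A sketch of both versions, keeping the notes' conventions (the pair returned is {F(n), F(n-1)} with F(-1) taken as 0, and the iterative method is named after the header in the notes):

```java
public class LinearFib {
    // Linear recursion: one recursive call, returning the pair {F(n), F(n-1)}.
    public static int[] linearFib(int n) {
        if (n == 0) {
            return new int[] {1, 0};           // base case: {F(0), F(-1)}
        }
        int[] prev = linearFib(n - 1);          // single recursive call
        return new int[] {prev[0] + prev[1], prev[0]};
    }

    // Iterative version: O(n) time, O(1) extra space, no call stack growth.
    public static int iterationFib(int n) {
        int a = 1, b = 1;                       // F(0), F(1)
        for (int i = 2; i <= n; i++) {
            int next = a + b;
            a = b;
            b = next;
        }
        return n == 0 ? a : b;
    }
}
```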

Example 5: Binary Search 
What's the base case? What's the recursive case? public class TestBinarySearch { Exercise
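Since the code body is not shown here, the following is a sketch of a recursive binary search on a sorted int array (the class name follows the notes; the method signature is our assumption):

```java
public class TestBinarySearch {
    // Returns the index of target within the sorted range A[lo..hi], or -1 if absent.
    public static int binarySearch(int[] A, int target, int lo, int hi) {
        if (lo > hi) {
            return -1;                                     // base case: empty range, not found
        }
        int mid = lo + (hi - lo) / 2;                      // avoids overflow of (lo + hi)
        if (A[mid] == target) {
            return mid;                                    // base case: found
        } else if (A[mid] < target) {
            return binarySearch(A, target, mid + 1, hi);   // recursive case: right half
        } else {
            return binarySearch(A, target, lo, mid - 1);   // recursive case: left half
        }
    }
}
```

Each call discards half the remaining range, so the depth of recursion (and the running time) is O(log n).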

Drawbacks of Recursion 
Recursion consumes stack space. Every recursive method call produces a new instance of the method, with a new set of local variables. The total stack space used depends on the depth of the recursion and on the number of local variables and parameters. Recursion may also perform redundant computations: consider the recursive computation of the Fibonacci sequence. In sum, one has to weigh the simplicity of the code delivered by recursion against these drawbacks. When a relatively simple iterative solution is possible, it is definitely a better alternative.

Tail Recursion 
We can convert a recursive algorithm into a non-recursive algorithm, and there are some instances in which we can do this conversion easily and efficiently. Specifically, we can easily convert algorithms that use tail recursion. An algorithm uses tail recursion if it uses linear recursion and makes its recursive call as its very last operation; the recursive call must be absolutely the last thing the method does. For example, Examples 2 and 5 use tail recursion and can easily be implemented using iteration. Note that the factorial of Example 1 is not tail recursive as written, because the multiplication by n happens after the recursive call returns; it becomes tail recursive if the partial product is passed along as an extra parameter.
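As a sketch, here is a tail-recursive variant of factorial: the partial product is carried in an accumulator argument, so the recursive call is the method's final action and nothing remains to be done when it returns.

```java
public class TailFactorial {
    // Tail-recursive helper: acc holds the product accumulated so far.
    private static int factorial(int n, int acc) {
        if (n == 0) {
            return acc;                        // base case: the product is complete
        }
        return factorial(n - 1, n * acc);      // tail call: the very last operation
    }

    public static int factorial(int n) {
        return factorial(n, 1);                // start with an empty product
    }
}
```

Because the call is in tail position, it can be replaced mechanically by a loop that updates n and acc in place, which is exactly the conversion described above. (Note that the Java compiler itself does not perform this tail-call optimization; the conversion must be done by hand.)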

Exercises 
Either write the pseudocode or the Java code for the following problems. Draw the recursion trace of a simple case. What are the running time and space requirements?
