Mergesort Analysis

Mergesort, it turns out, takes time proportional to N log₂ N, where N is the number of items in L. It may help to refer to Figure 11.9.

The first part of the function, creating the list of one-item lists, takes N iterations, one for each item.

The second loop, in which we continually merge lists, will take some care to analyze. We'll start with the very last iteration, in which we merge two lists of roughly N/2 items each, about N items in total. As we've seen, function merge copies each element into its result exactly once, so with these two lists, this final merge step takes roughly N steps.
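As a concrete reference for this analysis, here is a minimal sketch of a bottom-up mergesort of the kind described above. It is only a sketch: the actual function in Figure 11.9 may differ in its details (for example, it may modify L in place rather than returning a new list).

def merge(left, right):
    """Merge two sorted lists into one sorted list.
    Each element is copied into the result exactly once."""
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])    # copy any leftover items
    result.extend(right[j:])
    return result

def mergesort(L):
    """Return a new list containing the items of L in sorted order."""
    # First part: one iteration per item to build the list of one-item lists.
    workspace = [[item] for item in L]
    # Second part: repeatedly merge the two lists at the front and append
    # the result at the end, until only one list remains.
    while len(workspace) > 1:
        merged = merge(workspace[0], workspace[1])
        workspace = workspace[2:] + [merged]
    return workspace[0] if workspace else []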

On the previous iteration, there are two lists of size N/4 to merge into one of the two lists of size N/2, and on the iteration before that there are another two lists of size N/4 to merge into the second list of size N/2. Each of these two merges takes roughly N/2 steps, so the two together take roughly N steps in total.

Figure 11.9: Steps in mergesort

On the iterations before that, there are a total of eight lists of size N/8 to merge into the four lists of size N/4. These four merges together also take roughly N steps.

We can subdivide a list with N items a total of log₂ N times, using an analysis much like the one we used for binary search. Since at each "level" there are a total of N items to be merged, each of these log₂ N levels takes roughly N steps. Hence, mergesort takes time proportional to N log₂ N.
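One way to check this conclusion empirically is to count how many copy steps the merging actually performs. The sketch below reuses the merge and mergesort sketch above, instrumented with a counter; the names merge_counted, mergesort_counted, and copy_count are introduced here purely for illustration. For each N, the number of copies printed should be close to N log₂ N.

import math
import random

copy_count = 0    # total number of elements copied by merging

def merge_counted(left, right):
    """Like merge, but tally every element copied into the result."""
    global copy_count
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
        copy_count += 1
    copy_count += (len(left) - i) + (len(right) - j)
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def mergesort_counted(L):
    """Bottom-up mergesort using the counting merge."""
    workspace = [[item] for item in L]
    while len(workspace) > 1:
        merged = merge_counted(workspace[0], workspace[1])
        workspace = workspace[2:] + [merged]
    return workspace[0] if workspace else []

for N in [1000, 2000, 4000, 8000]:
    copy_count = 0
    mergesort_counted([random.random() for _ in range(N)])
    print(N, copy_count, round(N * math.log2(N)))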

11.6 Summary

In this chapter, we learned the following:

• Linear search is the simplest way to find a value in a list, but on average, the time required is directly proportional to the length of the list. (A sketch of both linear and binary search appears after this summary list.)

• Binary search is much faster—the average time is proportional to the logarithm of the list's length—but it works only if the list is in sorted order.

Big-Oh and All That

Our method of analyzing the performance of searching and sorting algorithms might seem like hand-waving, but there is actually a well-developed mathematical theory behind it. If f and g are functions, then the expression f(x) = O(g(x)) is read "f is big-oh of g" and means that for sufficiently large values of x, f(x) is bounded above by some constant multiple of g(x), or equivalently that the function g gives us an upper bound (up to a constant factor) on the values of the function f. Computer scientists use this to group algorithms into families, such as those sorting functions that execute in N² time and those that execute in N log₂ N time.
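As a small worked example of this definition (the particular function here is invented for illustration): if f(N) = 3N² + 10N + 7, then f(N) ≤ 4N² for every N ≥ 11, so f(N) = O(N²). The lower-order terms 10N + 7 are eventually swamped by a single extra multiple of N².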

These distinctions have important practical applications. In particular, one of the biggest puzzles in theoretical computer science today is whether two families of algorithms (called P and NP for reasons that we won't go into here) are the same or not. Almost everyone thinks they are not, but no one has been able to prove it (despite the offer of a million-dollar prize for the first correct proof). If it turns out that they are the same, then many of the algorithms used to encrypt data in banking and military applications (as well as on the Web) will be much more vulnerable to attack than expected.

• Similarly, the average running time of simple sorting algorithms like selection sort is proportional to the square of the input size N, while the running time of more complex sorting algorithms like mergesort grows as N log₂ N.

• Looking at how the running time of an algorithm grows as a function of the size of its inputs is the standard way to analyze and compare algorithms' efficiency.
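As mentioned in the first summary point, here is a minimal sketch of the two searching approaches; the names linear_search and binary_search are used for illustration, and the chapter's own versions may differ in detail.

def linear_search(L, value):
    """Return the index of value in L, or -1 if it is absent.
    Examines each item in turn, so time grows in proportion to len(L)."""
    for i in range(len(L)):
        if L[i] == value:
            return i
    return -1

def binary_search(L, value):
    """Return an index of value in sorted list L, or -1 if it is absent.
    Halves the search range each step, so time grows as log2(len(L))."""
    low, high = 0, len(L) - 1
    while low <= high:
        mid = (low + high) // 2
        if L[mid] == value:
            return mid
        elif L[mid] < value:
            low = mid + 1
        else:
            high = mid - 1
    return -1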
