Because Heapsort requires only a fixed amount of extra storage space, it is a true in-place algorithm. The "siftdown" routine given below uses tail recursion, which consumes stack space in naive interpreters and compilers, but a good language implementation (such as any Scheme implementation) or a competent programmer can straightforwardly convert the algorithm to an iterative form.
Heapsort vs. Quicksort

Heapsort primarily competes with Quicksort, another efficient, nearly-in-place, comparison-based sorting algorithm. Quicksort is typically somewhat faster, but its worst-case running time is O(n²), which becomes unacceptable for large data sets. See Quicksort for a detailed discussion of this problem and possible solutions. The Quicksort algorithm also requires Ω(log n) extra storage space, so it is not strictly an in-place algorithm. This typically does not pose a problem except on the smallest embedded systems. Obscure constant-space variations on Quicksort exist but are never used in practice due to their extra complexity. In situations where very limited extra storage space is available, Heapsort is the sorting algorithm of choice. Thus, because of the O(n log n) upper bound on Heapsort's running time and the constant upper bound on its auxiliary storage, embedded systems with real-time constraints often use Heapsort.
Explanation

Heapsort works by first finding the largest element in the array and putting it in the last position, then finding the next-largest element and putting it in the next-to-last position, and so on. A naive implementation of this idea would result in selection sort. Heapsort is more efficient because it uses a data structure called a heap.
To explain Heapsort, we will assume that we are sorting an array whose index values run from 1 to n. This will later be generalised to arbitrary index ranges. The array element with index i will be referred to as a[i].
Construct the heap by superimposing a binary tree structure on the array: element a[i] is the parent of the two children a[2i] and a[2i+1]. The heap property holds for index i if a[i] >= a[2i] and a[i] >= a[2i+1] (for those children whose indices are at most n).
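This property is easy to check mechanically. The following snippet is an illustration of our own (not part of the original article), representing the 1-based array as a Python list with an unused entry at position 0:

```python
def is_heap(a, n):
    """Check the heap property for a 1-based array of n elements
    (a[0] is an unused placeholder): every parent a[i] must be >=
    its children a[2i] and a[2i+1], whenever those indices are <= n."""
    return all(a[i] >= a[c]
               for i in range(1, n // 2 + 1)
               for c in (2 * i, 2 * i + 1) if c <= n)

print(is_heap([None, 9, 5, 8, 3, 4, 7], 6))  # a valid heap
print(is_heap([None, 1, 2, 3], 3))           # a[1] < a[2]: not a heap
```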
We need only one function to establish and maintain the heap. Suppose that the heap property holds for the indices b, b+1, ..., e. The sift-down function extends the heap property to b-1, b, b+1, ..., e. Only index i = b-1 can violate the heap property. Let j be the index of the largest child of a[i] within the range b, ..., e. (If no such index exists because 2i > e then the heap property holds for the newly extended range and nothing needs to be done.) By swapping the values a[i] and a[j] the heap property for position i is established. The only problem now is that the heap property might not hold for index j. The sift-down function is applied tail-recursively to index j until the whole heap property is established.
The sift-down function is fast. In each step it needs only two comparisons and one swap. The index value it works on doubles in each iteration, so at most log₂ e steps are required.
The heapsort algorithm is now easy to explain. First the heap is established: elements with index greater than n/2 have no children, so the heap property trivially holds for them, and applying sift-down to the indices n/2, n/2-1, ..., 1 in turn extends it to the whole array. Then the largest element, a[1], is swapped with a[n], the heap is shrunk by one element, and sift-down is applied to index 1 to restore the heap property; repeating this until only one element remains leaves the array sorted.
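Under the 1-based indexing used above, the whole algorithm can be sketched as follows. This is an illustration of ours rather than the article's own code; the Python list carries a dummy entry at position 0 so that a[1] is the root:

```python
def siftdown(a, i, n):
    # Restore the heap property at index i, assuming it already
    # holds for all indices i+1 .. n.
    while 2 * i <= n:                        # while a[i] has a child
        j = 2 * i                            # left child
        if j + 1 <= n and a[j + 1] > a[j]:
            j += 1                           # right child is larger
        if a[i] >= a[j]:
            return                           # heap property holds
        a[i], a[j] = a[j], a[i]              # move the larger child up
        i = j                                # continue from the child

def heapsort(a, n):
    # Build the heap: elements beyond n//2 have no children.
    for i in range(n // 2, 0, -1):
        siftdown(a, i, n)
    # Repeatedly move the largest element past the end of the
    # shrinking heap.
    for end in range(n, 1, -1):
        a[1], a[end] = a[end], a[1]
        siftdown(a, 1, end - 1)

data = [None] + [5, 2, 9, 1, 7]   # None is the unused a[0] slot
heapsort(data, 5)
print(data[1:])                   # [1, 2, 5, 7, 9]
```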
For arrays that do not start at 1 the definition of the child index has to be adjusted. If an array starts with index value 0, then the children of a[i] are a[2i+1] and a[2i+2]. It is easy to verify that this imposes the same binary tree structure as the original definition on arrays starting at 1. Similar adaptations can be made for any other array range.
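The verification can be sketched in a few lines (our own sanity check, not from the article): converting a 0-based parent p to its 1-based counterpart i = p + 1, taking the children 2i and 2i+1, and shifting those back down by one yields exactly 2p+1 and 2p+2:

```python
# Children of 1-based index i are 2i and 2i+1; subtracting 1 from
# every index turns parent i into p = i - 1 with children 2p+1, 2p+2.
for p in range(1000):
    i = p + 1                            # corresponding 1-based index
    assert 2 * i - 1 == 2 * p + 1        # left child, shifted to 0-based
    assert (2 * i + 1) - 1 == 2 * p + 2  # right child, shifted to 0-based
```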
Although not widely known, it is possible to define a ternary Heapsort. Instead of each element having two children, each element has three. Ternary Heapsort is somewhat more complicated to program, but it is potentially faster. Each step in a ternary heap requires three comparisons and one swap vs. two comparisons and one swap in a binary heap. The ternary heap can do two steps in less time than the binary heap requires for three steps. But two steps of a ternary tree multiply the index by a factor of 9, which is more than the factor 8 of three binary steps. Ternary Heapsort is about 12% faster than binary Heapsort.
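For illustration, a ternary sift-down might look like this (a 0-based sketch under our own naming, not code from the article; each node's children sit at 3i+1, 3i+2 and 3i+3):

```python
def siftdown3(a, i, n):
    """Sift-down in a 0-based ternary max-heap of the first n elements."""
    while True:
        largest = i
        # Compare the node with each of its (up to three) children.
        for c in (3 * i + 1, 3 * i + 2, 3 * i + 3):
            if c < n and a[c] > a[largest]:
                largest = c
        if largest == i:
            return                        # heap property holds here
        a[i], a[largest] = a[largest], a[i]
        i = largest                       # continue from the child's slot

def heapsort3(a):
    n = len(a)
    for i in range(n // 3, -1, -1):       # sift down every internal node
        siftdown3(a, i, n)
    for end in range(n - 1, 0, -1):       # extract the maximum repeatedly
        a[0], a[end] = a[end], a[0]
        siftdown3(a, 0, end)

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
heapsort3(data)
print(data)                               # [0, 1, 2, ..., 9]
```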
Here's an implementation in the Python programming language:
def reverserange(n):
    return range(n - 1, -1, -1)

def siftdown(array, heaplen, index):
    lc = 2 * index + 1   # left child
    rc = lc + 1          # right child
    # If the left child is inside the heap and the max of the 3, move it up:
    if (lc < heaplen and array[lc] > array[index]
            and (rc >= heaplen or array[rc] <= array[lc])):
        array[index], array[lc] = array[lc], array[index]
        siftdown(array, heaplen, lc)
    # Else if the right child is, move it up:
    elif (rc < heaplen and array[rc] > array[index]
            and array[rc] > array[lc]):
        array[index], array[rc] = array[rc], array[index]
        siftdown(array, heaplen, rc)

def makeheap(array):
    for i in reverserange(len(array) // 2):
        siftdown(array, len(array), i)

def heapsort(array):
    makeheap(array)
    for i in reverserange(len(array)):
        array[i], array[0] = array[0], array[i]
        siftdown(array, i, 0)
An alternative algorithm, also in Python:
def bubbledown(x, l, sz):
    "Bubbles element x in heap l down as far as it can go."
    lc = 2 * x + 1   # left child
    rc = lc + 1      # right child
    # Still inside the heap and left child is bigger than current element?
    if lc < sz and l[x] < l[lc]:
        if rc < sz and l[lc] < l[rc]:
            # The right child is bigger, use it instead.
            lc = rc
        # Swap the smaller element downwards
        l[lc], l[x] = l[x], l[lc]
        # Continue the trip
        bubbledown(lc, l, sz)
    elif rc < sz and l[x] < l[rc]:
        l[rc], l[x] = l[x], l[rc]
        bubbledown(rc, l, sz)

def makeheap(l):
    "Makes l into a heap."
    # Works by bubbling down each element of the first half of the array
    # in reverse order.
    for i in range(len(l) // 2 - 1, -1, -1):
        bubbledown(i, l, len(l))

def heapsort(l):
    "Sorts l *IN PLACE* with the Heap Sort algorithm."
    makeheap(l)
    for i in range(len(l) - 1, -1, -1):
        l[i], l[0] = l[0], l[i]
        bubbledown(0, l, i)
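As noted at the top of the article, the tail recursion in either routine is straightforward to convert to a loop. One possible iterative form (a 0-based sketch with names of our choosing, not the article's own code):

```python
def siftdown_iter(a, root, size):
    # Iterative sift-down on a 0-based max-heap of the first `size` elements.
    while True:
        child = 2 * root + 1                     # left child
        if child >= size:
            return                               # root has no children
        if child + 1 < size and a[child + 1] > a[child]:
            child += 1                           # right child is larger
        if a[root] >= a[child]:
            return                               # heap property holds
        a[root], a[child] = a[child], a[root]
        root = child                             # loop instead of recursing

def heapsort_iter(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):          # build the heap
        siftdown_iter(a, i, n)
    for end in range(n - 1, 0, -1):              # extract maxima
        a[0], a[end] = a[end], a[0]
        siftdown_iter(a, 0, end)

data = [3, 1, 4, 1, 5, 9, 2, 6]
heapsort_iter(data)
print(data)   # [1, 1, 2, 3, 4, 5, 6, 9]
```

This version uses no stack space at all, which matters precisely in the memory-constrained settings where Heapsort is most attractive.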