Sorting can be done faster than quadratic time. Two such sorting algorithms are based on the principle of divide and conquer. Divide and conquer is a strategy that simplifies solving a complex problem: it divides the problem into sub-problems, and keeps dividing until the sub-problems are trivial to solve. In the end, the outcomes of all sub-problems are assembled back up, which eventually gives the final result of the whole problem.
Divide and conquer
Divide and conquer looks like magic to me. By simply dividing a problem into sub-problems, it can be solved with a lower time complexity than intuition suggests. Of course, if we parallelized the sub-problems, this would make perfect sense. But when analyzing algorithms, we usually don’t assume any special parallel hardware is available.
In fact, the time complexity of a divide-and-conquer operation depends on the time complexity of each sub-operation. The formula, known as the master theorem, is not intuitive to understand, so let’s just treat it as magic. Generally, given a problem of size \(n\), we divide it into \(a\) sub-problems, each of size \(n/b\), and then assemble the results of the sub-problems in \(O(n^d)\) time, where \(a > 0\), \(b > 1\), and \(d \ge 0\). The final time complexity follows the formula below:
$$ T(n)=\begin{cases} O(n^d) &\text{if } d > \log_b a \\ O(n^d\log n) &\text{if } d = \log_b a \\ O(n^{\log_b a}) &\text{if } d < \log_b a . \end{cases} $$
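As a quick example of how to apply the formula (my illustration, not from the original): binary search divides a problem of size \(n\) into \(a=1\) sub-problem of size \(n/2\) (so \(b=2\)) and combines results in constant time (\(d=0\)). Since \(\log_2 1 = 0 = d\), the middle case applies, giving \(T(n)=O(n^0\log n)=O(\log n)\), which matches the well-known complexity of binary search.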
Merge sort
Let’s look at one typical example of divide and conquer: merge sort. The basic idea of merge sort is to divide a data set into two partitions, sort each of them, and then merge the two sorted partitions into one sorted sequence. Merge sort applies this process recursively until the sub-problems become trivial to solve (i.e., sorting a single data element, which is a no-op).
// a[0] to a[n-1] is the array to sort.
// The result is MergeSort(a, 0, n-1).
void MergeSort(int a[], int head, int tail)
{
    if (head >= tail)
        return;  // Zero or one element: already sorted.
    int mid = head + (tail - head) / 2;
    MergeSort(a, head, mid);      // Sort the left half.
    MergeSort(a, mid+1, tail);    // Sort the right half.
    Merge(a, head, mid, tail);    // Merge a[head..mid] and a[mid+1..tail].
}
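The Merge step is only sketched above. Here is one possible implementation (a sketch of mine, assuming the Merge(a, head, mid, tail) signature used in the call above): it merges the sorted halves a[head..mid] and a[mid+1..tail] through a temporary buffer.

#include <stdlib.h>  // For malloc/free.

void Merge(int a[], int head, int mid, int tail)
{
    int n = tail - head + 1;
    int *buf = malloc(n * sizeof(int));
    int i = head, j = mid + 1, k = 0;
    while (i <= mid && j <= tail)
    {
        // "<=" takes equal elements from the left partition first,
        // which is exactly what keeps merge sort stable.
        if (a[i] <= a[j])
            buf[k++] = a[i++];
        else
            buf[k++] = a[j++];
    }
    while (i <= mid)
        buf[k++] = a[i++];
    while (j <= tail)
        buf[k++] = a[j++];
    for (k = 0; k < n; ++k)
        a[head + k] = buf[k];
    free(buf);
}

Allocating one buffer up front and reusing it across recursive calls would avoid repeated malloc/free, but the version above keeps the sketch self-contained.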
With the divide-and-conquer formula, we have \(a=2, b=2, d=1\), and since \(d = \log_2 2\), the time complexity is \(O(n^d\log n)\), that is, \(O(n\log n)\). The performance of merge sort is consistent: the best-case, worst-case, and average performance are all \(O(n\log n)\). In fact, \(O(n\log n)\) is the optimal time complexity that can be achieved by any comparison-based sorting algorithm.
To merge two sorted partitions, we need additional \(O(n)\) space to hold the merged result; the buffer needs to be at least as large as the two sorted partitions combined. Because of that, merge sort can’t be done in-place over arrays. Merge sort is stable: when merging two sorted partitions, we always take equal data elements from the left partition first, so elements of the same value keep their original relative order.
The process of merge sort is at least as interesting as its result. From the perspective of each individual data element, the process of merge sort essentially moves every smaller data element that appears after it to a position in front of it. A classic problem that can be solved by this very process is counting the inversions in an array, i.e., the number of pairs of elements that appear in the wrong order.
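Here is a sketch of that idea (my illustration; the SortAndCount name and the shared buffer are assumptions, not from the original). Whenever an element of the right half is merged before the remaining elements of the left half, it forms one inversion with each of them, so all inversions can be counted as a side effect of sorting:

// Sorts a[head..tail] and returns the number of inversions in it,
// i.e., pairs (i, j) with i < j but a[i] > a[j]. buf must be at
// least as large as a.
long long SortAndCount(int a[], int buf[], int head, int tail)
{
    if (head >= tail)
        return 0;
    int mid = head + (tail - head) / 2;
    long long count = SortAndCount(a, buf, head, mid)
                    + SortAndCount(a, buf, mid + 1, tail);
    int i = head, j = mid + 1, k = head;
    while (i <= mid && j <= tail)
    {
        if (a[i] <= a[j])
            buf[k++] = a[i++];
        else
        {
            // a[j] moves in front of a[i..mid]: each of those
            // mid-i+1 larger elements forms an inversion with it.
            count += mid - i + 1;
            buf[k++] = a[j++];
        }
    }
    while (i <= mid)
        buf[k++] = a[i++];
    while (j <= tail)
        buf[k++] = a[j++];
    for (k = head; k <= tail; ++k)
        a[k] = buf[k];
    return count;
}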
Quicksort
Quicksort first selects one data element from the unsorted data set as the pivot. It then partitions the other data elements into two groups, such that one partition contains the data elements smaller than the pivot and the other partition contains the data elements greater than (or equal to) the pivot. It further sorts the two partitions respectively following the same process; the two sorted partitions with the pivot between them then directly form one sorted sequence, so no merge step is needed. Quicksort applies this process recursively until the sub-problems become trivial to solve (i.e., sorting a single data element, which is a no-op).
// a[0] to a[n-1] is the array to sort.
// The result is Quicksort(a, 0, n-1).
void Quicksort(int a[], int head, int tail)
{
    if (head >= tail)
        return;  // Zero or one element: already sorted.
    // Partitioning (the Lomuto scheme): a[tail] is the pivot, and
    // a[head..i-1] holds the elements found so far that are smaller.
    int pivot = a[tail];
    int i = head;
    for (int j = head; j < tail; ++j)
    {
        if (a[j] < pivot)
        {
            swap(a[i], a[j]);
            i++;
        }
    }
    swap(a[i], a[tail]);  // Put the pivot into its final position.
    Quicksort(a, head, i-1);
    Quicksort(a, i+1, tail);
}
The performance of quicksort depends on the selection of the pivot. In the best case, we always split the data elements besides the pivot into two partitions of equal size (or of sizes differing by one if the partition contains an even number of elements). Then the same formula as merge sort applies, with \(a=2, b=2, d=1\), and thus the time complexity is \(O(n\log n)\). The average performance of quicksort is also \(O(n\log n)\). However, the worst-case performance of quicksort degrades to \(O(n^2)\). For example, if we always choose the maximum data element as the pivot, every step produces one single partition that includes all the other data elements, and we are basically doing a quadratic-time operation. Different from merge sort, quicksort can be done in-place over arrays by leveraging swapping.
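A common remedy (a sketch of mine, not the author’s code) is to randomize the pivot: pick a uniformly random element, move it to the tail, and then partition exactly as above. This makes the \(O(n^2)\) worst case extremely unlikely on any input. The sketch assumes the same swap(x, y) helper as the Quicksort above:

#include <stdlib.h>  // For rand().

void RandomizedQuicksort(int a[], int head, int tail)
{
    if (head >= tail)
        return;
    // Move a uniformly random element to the tail to serve as the pivot.
    int r = head + rand() % (tail - head + 1);
    swap(a[r], a[tail]);
    // From here on, this is the same partitioning as in Quicksort above.
    int pivot = a[tail];
    int i = head;
    for (int j = head; j < tail; ++j)
    {
        if (a[j] < pivot)
        {
            swap(a[i], a[j]);
            i++;
        }
    }
    swap(a[i], a[tail]);
    RandomizedQuicksort(a, head, i-1);
    RandomizedQuicksort(a, i+1, tail);
}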
Also different from merge sort, quicksort is not stable. For example, suppose a data set contains three elements of identical value. If we choose the second one as the pivot, the other two will both end up in the partition after the pivot, so the pivot jumps ahead of the first element and the original relative order of these three elements is broken. However, quicksort can be made stable if we implement it not in-place, but with additional \(O(n)\) space for partitioning. Such a buffer allows the data set to be partitioned in a stable manner.
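For instance, here is a sketch of such a stable partition (my illustration; the StablePartition name and its tie-breaking rule are assumptions, not from the original). Equal elements keep their original order because those located before the pivot’s position go to the left partition and those after it go to the right, with each pass scanning in original order:

#include <stdlib.h>  // For malloc/free.
#include <string.h>  // For memcpy.

// Stably partitions a[head..tail] around the element at pivotIndex,
// using an O(n) buffer. Returns the pivot's new index.
int StablePartition(int a[], int head, int tail, int pivotIndex)
{
    int n = tail - head + 1;
    int *buf = malloc(n * sizeof(int));
    int pivot = a[pivotIndex];
    int k = 0;
    // First pass: elements that belong left of the pivot, in original order.
    for (int j = head; j <= tail; ++j)
        if (a[j] < pivot || (a[j] == pivot && j < pivotIndex))
            buf[k++] = a[j];
    int newPivotIndex = head + k;
    buf[k++] = pivot;
    // Second pass: elements that belong right of the pivot, in original order.
    for (int j = head; j <= tail; ++j)
        if (a[j] > pivot || (a[j] == pivot && j > pivotIndex))
            buf[k++] = a[j];
    memcpy(a + head, buf, n * sizeof(int));
    free(buf);
    return newPivotIndex;
}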