Although countless PhDs have been awarded on sorting algorithms, it’s not a topic that seems to come up much in embedded systems (or at least the kind of embedded systems that I work on). Thus it was with some surprise recently that I found myself needing to sort an array of integers. The array wasn’t very large (about twenty entries) and I was eager to move on to the real problem at hand, so I just dropped in a call to the standard C library routine qsort(). I didn’t give it a great deal of thought because I ‘knew’ that a Quicksort algorithm is in general fast and well behaved, and with so few entries to sort I wasn’t too concerned about it being ‘optimal’. Anyway, with the main task at hand solved, on a whim I decided to take another look at qsort(), just to make sure that I wasn’t being too cavalier in my approach. Boy, did I get a shock! My call to qsort() was increasing my code size by 1500 bytes and it wasn’t giving very good sort times either. For those of you programming big systems, this may seem acceptable. In my case, the target processor had 16K of memory, so 1500 bytes was a huge hit.
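For context, the call in question was nothing exotic – just the standard library qsort() with an integer comparator, along these lines (array contents and names are illustrative):

```c
#include <stdlib.h>

/* Comparator for signed ints. Note that returning (a - b) directly
   can overflow, so the comparison is done explicitly. */
static int cmp_int(const void *pa, const void *pb)
{
    int a = *(const int *)pa;
    int b = *(const int *)pb;

    return (a > b) - (a < b);
}

/* Sort an array of n ints in place using the standard library. */
static void sort_ints(int *data, size_t n)
{
    qsort(data, n, sizeof *data, cmp_int);
}
```

The convenience is obvious; the cost in code size is what the rest of this article is about.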
Surely there had to be a better solution? Well there’s always a better solution, but in my case in particular, and for embedded systems in general, what is the optimal sorting algorithm?
Well, after thinking about it for a while, I think the optimal sorting algorithm for embedded systems has these characteristics:
- It must sort in place.
- The algorithm must not be recursive.
- Its best, average and worst case running times should be of similar magnitude.
- Its code size should be commensurate with the problem.
- Its running time should increase linearly or logarithmically with the number of elements to be sorted.
- Its implementation must be ‘clean’ – i.e. free of breaks and returns in the middle of a loop.
Sort In Place
This is an important criterion, not just because it saves memory, but most importantly because it obviates the need for dynamic memory allocation. In general, dynamic memory allocation should be avoided in embedded systems because of problems with heap fragmentation and allocation performance. If you aren’t aware of these problems, then read this series of articles by Dan Saks on the topic.
Recursion
Recursion is beautiful and solves certain problems amazingly elegantly. However, it’s not fast and it can easily lead to problems of stack overflow. As a result, it should never be used in embedded systems.
Running Time Variability
Even the softest of real-time systems have some time constraints that need to be met. As a result, a function whose execution time varies enormously with the input data can often be problematic. Thus I prefer code whose execution time is nicely bounded.
Code Size
This is often a concern. Suffice to say that the code size should be reasonable for the target system.
Data Size Dependence
Sorting algorithms are usually classified using ‘Big O notation’ to denote how sensitive they are to the amount of data to be sorted. If N is the number of elements to be sorted, then an algorithm whose running time is N log N is usually preferred to one whose running time is N². However, as you shall see, for small N the advantage of the more sophisticated algorithms can be lost in the overhead of the sophistication.
Clean Implementation
I’m a great proponent of ‘clean’ code. Thus code where one exits from the middle of a loop isn’t as acceptable as code where everything proceeds in an orderly fashion. Although this is a personal preference of mine, it is also codified in, for example, the MISRA C guidelines, to which many embedded systems are built.

Anyway, to determine the optimal sorting algorithm, I went to the Wikipedia page on sorting algorithms and initially selected the following for comparison against the built-in qsort(): Comb, Gnome, Selection, Insertion, Shell & Heap sorts. All of these are sort-in-place algorithms. I originally eschewed the Bubble & Cocktail sorts as they really have nothing to commend them. However, several people posted comments asking that I include them – so I did. As predicted, they have nothing to commend them. In all cases, I used the Wikipedia code pretty much as is, optimized for maximum speed. (I recognize that the implementations in Wikipedia may not be optimal – but they are the best I have.) For each algorithm, I sorted arrays of 8, 32 & 128 signed integers. In every case I sorted the same random array, together with a sorted array and an inverse-sorted array. First, the code sizes in bytes:
| Name | Code size (bytes) |
|------|-------------------|
| qsort() | 1538 |
| Gnome() | 76 |
| Selection() | 130 |
| Insertion() | 104 |
| Shell() | 242 |
| Comb() | 190 |
| Heap() | 200 |
| Bubble() | 104 |
| Cocktail() | 140 |
Clearly, anything is a lot better than the built-in qsort(). However, this isn’t an apples-to-apples comparison, because qsort() is a general-purpose routine, whereas the others are designed explicitly to sort integers. Leaving aside qsort(), the Gnome, Insertion and Bubble sorts are clearly the code-size leaders. Having said that, in most embedded systems 100 bytes here or there is irrelevant, and so we are free to choose based upon other criteria.
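To give a sense of why the Gnome sort is so tiny, here is a sketch in the style of the Wikipedia implementation (my transcription, not the exact code I measured):

```c
#include <stddef.h>

/* Gnome sort: step forward while adjacent elements are in order;
   on an out-of-order pair, swap them and step back. O(N^2)
   comparisons, but only a handful of instructions of code. */
static void gnome_sort(int a[], size_t n)
{
    size_t i = 1;

    while (i < n)
    {
        if ((i == 0) || (a[i - 1] <= a[i]))
        {
            i++;
        }
        else
        {
            int tmp = a[i];

            a[i] = a[i - 1];
            a[i - 1] = tmp;
            i--;
        }
    }
}
```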
Execution times for the 8 element array
| Name | Random | Sorted | Inverse sorted |
|------|--------|--------|----------------|
| qsort() | 3004 | 832 | 2765 |
| Gnome() | 1191 | 220 | 2047 |
| Selection() | 1120 | 1120 | 1120 |
| Insertion() | 544 | 287 | 756 |
| Shell() | 1233 | 1029 | 1425 |
| Comb() | 2460 | 1975 | 2480 |
| Heap() | 1265 | 1324 | 1153 |
| Bubble() | 875 | 208 | 1032 |
| Cocktail() | 1682 | 927 | 2056 |
In this case, the Insertion sort is the clear winner. Not only is it dramatically faster in almost all cases, it also has reasonable variability and almost the smallest code size. Notice that the Bubble sort, for all its vaunted simplicity, consumes as much code as the Insertion sort and runs considerably slower. Notice also that the Selection sort’s running time is completely consistent – and not too bad when compared to the other methods.
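For reference, an insertion sort in the style of the Wikipedia code is only a few lines (again my sketch, not the exact code I timed):

```c
#include <stddef.h>

/* Insertion sort: grow a sorted prefix one element at a time,
   sliding each new element back to its proper position. Very cheap
   when the data are already nearly sorted; O(N^2) when they are
   inverse sorted - hence the spread in the timings. */
static void insertion_sort(int a[], size_t n)
{
    for (size_t i = 1; i < n; i++)
    {
        int key = a[i];
        size_t j = i;

        while ((j > 0) && (a[j - 1] > key))
        {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}
```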
Execution times for the 32 element array
| Name | Random | Sorted | Inverse sorted |
|------|--------|--------|----------------|
| qsort() | 23004 | 3088 | 19853 |
| Gnome() | 17389 | 892 | 35395 |
| Selection() | 14392 | 14392 | 14392 |
| Insertion() | 5588 | 1179 | 10324 |
| Shell() | 6589 | 4675 | 6115 |
| Comb() | 10217 | 8638 | 10047 |
| Heap() | 8449 | 8607 | 7413 |
| Bubble() | 13664 | 784 | 16368 |
| Cocktail() | 17657 | 3807 | 27634 |
In this case, the winner isn’t so clear-cut. Although the Insertion sort still performed well, it is now showing a very large variation in running time. By contrast, the Shell sort posts decent times with small variability. The Gnome, Bubble and Cocktail sorts show huge variability in execution time (with a very bad worst case), while the Selection sort remains completely consistent. On balance, I’d go with the Shell sort in most cases.
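The Shell sort is essentially an insertion sort run over progressively finer ‘gaps’, which is why its worst case is so much better behaved. A sketch using the simple N/2, N/4, … gap sequence (other gap sequences perform somewhat better; this is an illustration, not the code I measured):

```c
#include <stddef.h>

/* Shell sort: insertion sort across elements 'gap' apart, with the
   gap halving on each pass. The final pass (gap == 1) is a plain
   insertion sort over nearly sorted data, so it finishes quickly. */
static void shell_sort(int a[], size_t n)
{
    for (size_t gap = n / 2; gap > 0; gap /= 2)
    {
        for (size_t i = gap; i < n; i++)
        {
            int key = a[i];
            size_t j = i;

            while ((j >= gap) && (a[j - gap] > key))
            {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = key;
        }
    }
}
```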
Execution times for the 128 element array
| Name | Random | Sorted | Inverse sorted |
|------|--------|--------|----------------|
| qsort() | 120772 | 28411 | 77896 |
| Gnome() | 316550 | 3580 | 577747 |
| Selection() | 217420 | 217420 | 217420 |
| Insertion() | 88475 | 4731 | 158020 |
| Shell() | 41661 | 25611 | 34707 |
| Comb() | 50858 | 43523 | 48568 |
| Heap() | 46959 | 49215 | 43314 |
| Bubble() | 231294 | 3088 | 262032 |
| Cocktail() | 271821 | 15327 | 422266 |
In this case the winner is either the Shell sort or the Heap sort, depending on how you weigh raw performance against performance variability. The Gnome, Bubble and Cocktail sorts are hopelessly outclassed. So what to make of all this? Well, in any comparison like this there are a myriad of variables that one should take into account, and so I don’t believe these data should be treated as gospel. What is clear to me is that:
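For completeness, a heap sort can be written without recursion; the sketch below (mine, not the Wikipedia code I timed) uses a flag in the sift-down loop rather than an early return, to stay within my ‘clean implementation’ criterion:

```c
#include <stddef.h>

/* Restore the max-heap property for the subtree rooted at 'root',
   considering only elements a[0..end-1]. */
static void sift_down(int a[], size_t root, size_t end)
{
    int done = 0;

    while (!done && ((2 * root + 1) < end))
    {
        size_t child = 2 * root + 1;

        /* Pick the larger of the two children, if both exist. */
        if (((child + 1) < end) && (a[child] < a[child + 1]))
        {
            child++;
        }
        if (a[root] < a[child])
        {
            int tmp = a[root];

            a[root] = a[child];
            a[child] = tmp;
            root = child;
        }
        else
        {
            done = 1;
        }
    }
}

/* Heap sort: build a max-heap, then repeatedly move the largest
   element to the end of the shrinking unsorted region. */
static void heap_sort(int a[], size_t n)
{
    for (size_t start = n / 2; start > 0; start--)
    {
        sift_down(a, start - 1, n);
    }
    for (size_t end = n; end > 1; end--)
    {
        int tmp = a[0];

        a[0] = a[end - 1];
        a[end - 1] = tmp;
        sift_down(a, 0, end - 1);
    }
}
```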
- Being a general purpose routine, qsort() is unlikely to be the optimal solution for an embedded system.
- For many embedded applications, a shell sort has a lot to commend it – decent code size, fast running time, well behaved and a clean implementation. Thus if you don’t want to bother with this sort of investigation every time you need to sort an array, then a shell sort should be your starting point. It will be for me henceforth.
A quick update: Check out this link for a very cool visualization of different sorting algorithms.