Recently I was examining some statistical data on the performance of a sensor against temperature. The data were from a number of sensors and I was interested in determining a mathematical model that most closely described the sensors’ performance. Using the regression tools built into Excel, I was looking at the various models, from a ‘goodness of fit’ perspective. After playing around for a while, I came to the conclusion that a quadratic polynomial really was the best fit, and should be the model to adopt. At this point, I turned to the issue of computational efficiency.

Now, it turns out that there is a relatively well known algorithm for evaluating polynomials, called Horner’s rule. I say relatively well known, because I’d say about half the time I see a polynomial evaluated, it doesn’t use Horner’s rule, but instead evaluates the polynomial directly. Thus in an effort to increase the use of Horner’s rule, I thought I’d mention it here.

OK, so what is it? Well it’s based on simply refactoring a polynomial expression:

a_{n}x^{n} + a_{n-1}x^{n-1} + … + a_{1}x + a_{0} = (…(a_{n}x + a_{n-1})x + …)x + a_{0}

Thus a polynomial of order n requires exactly n multiplications and n additions.
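As a sketch of how the refactored form maps to code (the function name and coefficient-ordering convention here are my own, purely for illustration), a loop-based Horner evaluation looks like this:

```c
#include <stddef.h>

/* Evaluate a polynomial of the given degree at x using Horner's rule.
   coeffs[] holds the coefficients from highest order down to a0,
   i.e. coeffs[0] = a_n, ..., coeffs[degree] = a_0.
   (Illustrative names; not from the original post.) */
double poly_eval(const double coeffs[], size_t degree, double x)
{
    double y = coeffs[0];
    size_t i;

    for (i = 1; i <= degree; i++)
    {
        y = y * x + coeffs[i];   /* exactly one multiply and one add per term */
    }

    return y;
}
```

The loop body makes the cost claim visible: one multiplication and one addition per coefficient after the first, for n of each in total.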

For example:

23.1x^{2} - 45.6x + 12.3 = (23.1x - 45.6)x + 12.3

In this case a quadratic, i.e. a polynomial of order 2, requires 2 multiplications and 2 additions to evaluate using Horner’s rule, versus the direct approach, which (computing x^{2} as x·x) requires 3 multiplications and 2 additions. The saving grows with the order of the polynomial.
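The two forms of the example quadratic can be written side by side (the helper names here are mine, for illustration):

```c
#include <math.h>

/* Direct evaluation of 23.1x^2 - 45.6x + 12.3:
   3 multiplications, 2 additions. */
double quad_direct(double x)
{
    return 23.1 * x * x - 45.6 * x + 12.3;
}

/* Horner evaluation of the same polynomial:
   2 multiplications, 2 additions. */
double quad_horner(double x)
{
    return (23.1 * x - 45.6) * x + 12.3;
}
```

Both compute the same polynomial; only the operation count (and the intermediate values) differ.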

For those of you who are just looking for code to use, this snippet will work. It is for a cubic polynomial; COEFFN is the coefficient of x^{N}.

y = x * COEFF3;
y += COEFF2;
y *= x;
y += COEFF1;
y *= x;
y += COEFF0;

The recurrence relation for higher-order polynomials should be obvious. Note that, unlike most implementations, I write the code in-line rather than using a loop.
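For completeness, here is the in-line cubic snippet wrapped in a compilable function. The coefficient values are arbitrary, chosen purely for illustration:

```c
/* Arbitrary example coefficients for 2x^3 - x^2 + 3x + 5
   (illustrative values; substitute your own). */
#define COEFF3 (2.0)
#define COEFF2 (-1.0)
#define COEFF1 (3.0)
#define COEFF0 (5.0)

/* In-line Horner evaluation of the cubic: 3 multiplies, 3 adds. */
double cubic(double x)
{
    double y;

    y  = x * COEFF3;
    y += COEFF2;
    y *= x;
    y += COEFF1;
    y *= x;
    y += COEFF0;

    return y;
}
```

The in-line form avoids loop overhead and lets the compiler keep everything in registers, which is often what you want on a small embedded target.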

It should be noted that as well as being more computationally efficient, Horner’s rule is also more accurate. This comes about in two ways:

- The very act of using fewer floating-point operations leads to fewer rounding errors
- Higher-order polynomials generate very large intermediate values in a hurry. Horner’s method significantly reduces the magnitude of the intermediate values, minimizing the problems associated with adding / subtracting floating-point numbers that differ greatly in magnitude

Although Horner’s rule is a nice tool to have at one’s disposal, I think there is a larger point to be made here. Whenever you need to perform any sort of calculation, there is nearly always a method superior to the obvious direct evaluation. Sometimes it requires algebraic manipulation, as with Horner’s rule. Other times it’s an approximation method, and sometimes it’s just a flat-out really neat algorithm (see, for example, my posting on Crenshaw’s square root code). The bottom line: next time you write code to perform some sort of numerical calculation, take a step back and investigate possibilities other than direct computation. You’ll probably be glad you did.

#### Update

There is a highly relevant addendum to this posting here.

Thanks for this, seriously. Makes more sense, as I feel the CLRS algorithms book gets a bit away from explaining Horner’s rule in layman’s terms.

Glad you found it useful.

Nice explanation.

Dear Nigel Jones, thank you for sharing your thoughts on Horner’s rule. It is a great reminder that there are often more efficient methods for performing calculations than the obvious direct approach. The refactoring of polynomial expressions provided by Horner’s rule not only increases computational efficiency, but it also reduces the occurrence of rounding errors, making it a more accurate method. Thanks again for the helpful insight!