embedded software boot camp

Horner’s rule addendum

Sunday, February 15th, 2009 by Nigel Jones

A few weeks ago I wrote about using Horner’s rule to evaluate polynomials. Today I’m following up on that post, because I made a classic mistake when I implemented it. On the premise that one learns more from one’s mistakes than from one’s successes, I thought I’d share it with you. First, some background. I had some experimental data on the behavior of a sensor over temperature. I needed to fit a regression curve through the data, and after some experimentation I settled on a quadratic polynomial fit. This is what the data and the curve looked like:
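For reference, the quadratic is evaluated exactly as in the earlier post, by rewriting a2·t² + a1·t + a0 as (a2·t + a1)·t + a0. A minimal sketch in C; the coefficients below are placeholders, not the actual regression output:

```c
/* Hypothetical coefficients from the curve fit -- placeholders only;
 * the real values came from the regression and are not shown here. */
#define A2 (-0.013f)
#define A1 (0.9f)
#define A0 (3.2f)

/* Evaluate A2*t*t + A1*t + A0 via Horner's rule:
 * two multiplies and two adds, instead of three multiplies. */
float poly_eval(float t)
{
    return (A2 * t + A1) * t + A0;
}
```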

On the face of it, everything looks OK. However, if you look carefully, you will notice two things:

  • The bulk of the experimental data covers the temperature range of 5 – 48 degrees.
  • There is a very slight hook on the right-hand side of the graph.

So where’s the mistake? Well actually I made two mistakes:

  • I assumed that my experimental data covered the entire expected operating temperature range.
  • I failed to check at run time that the temperature was indeed bounded to the experimental input range.

Why is this important? Well, what happened was that in some circumstances the sensor would experience temperatures somewhat higher than anything seen when the experimental data was gathered, e.g. 55 degrees. That doesn’t sound too bad – until you take the polynomial and extend it out a bit. This is what it looks like:

You can see that at 55 degrees the polynomial generates a value that is about the same as at 25 degrees. Needless to say, things didn’t work too well! So what advice can I offer?

  • When fitting a polynomial to experimental data, ensure that the data covers the entire range of values that can be physically realized.
  • Always plot the polynomial to see how it performs outside your range of interest. In particular, if it ‘takes off’ in a strange manner, then treat it very warily.
  • At run time, ensure that the data you are feeding into the polynomial is bounded to the range over which the polynomial is known to be valid.
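The last point is cheap to enforce: clamp the input before evaluating the polynomial. A sketch, assuming the fit is valid over the 5 – 48 degree span of the experimental data (the names here are illustrative, not from the original code):

```c
/* Range over which the polynomial fit is known to be valid --
 * i.e. the span of the experimental data. */
#define TEMP_MIN (5.0f)
#define TEMP_MAX (48.0f)

/* Clamp the measured temperature so that the polynomial
 * is never evaluated outside its validated range. */
float clamp_temperature(float t)
{
    if (t < TEMP_MIN)
    {
        return TEMP_MIN;
    }
    if (t > TEMP_MAX)
    {
        return TEMP_MAX;
    }
    return t;
}
```

With this in place, a 55 degree reading is treated as 48 degrees – still wrong, but predictably wrong, rather than silently reading like 25 degrees.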

The maddening thing about this for me was that I ‘learned’ this lesson about polynomial fits many years ago. I just chose to ignore it this time.

Before I leave this topic, I’d like to offer one other insight. If you search for Horner’s rule, you’ll find a plethora of articles. The more detailed ones will opine on topics such as evaluation stability, numeric overflow issues and so on. However, you’ll rarely find anything posted about this kind of problem – the validity of the polynomial itself. I think it’s because we tend to get wrapped up in the details of the algorithm while losing sight of the underlying mathematics of what is going on. The bottom line: the next time you find a neat algorithm posted on the web for ‘solving’ your problem, take a big step back and think hard about what is really going on and what the inherent weaknesses are in what you are doing.

6 Responses to “Horner’s rule addendum”

  1. Heya just wanted to give you a brief heads up and let you know a few of
    the pictures aren’t loading properly. I’m not sure why, but I think it’s a linking issue. I’ve tried it in two different browsers and both show the same outcome.

  2. Todd says:

    I was under the impression that RISC architectures (at least MIPS) return the remainder in a secondary register after division, so shouldn’t % be a single-instruction operation across all RISC CPUs? If so, perhaps the significant difference in results across architectures is more down to the architectures themselves and/or a sub-optimal compiler.

    In any case, it should be a compiler’s duty to optimise the code. Here I am in 2012 and programmers still have to do the work our programs should be helping us do…

  3. oliver says:

    While having a 32-bit int to count the ‘epoch’ has its great advantages (aren’t we using 64 bits now?), I’m extremely curious what the results would be if the program were completely rewritten to store seconds, hours, days, etc. separately. Yes, it would increase the logic; several things come to mind: loops for each item, etc., or one big loop where you count the other items up based on some ifs.

    I know multiplications and divisions are very expensive (on 8-bitters; PCs have special hardware for them), so eliminating all of those and replacing them with simple ifs and additions (only a few instructions each) would mean more complex code, but could improve efficiency quite a bit, I would think?
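The counter cascade described in this comment might look something like the following sketch (invented names, not code from the original post): on each one-second tick the seconds field is incremented and carries into the larger units, so no division or modulo is ever performed.

```c
/* Time-of-day kept as separate fields, as the comment suggests.
 * Each one-second tick increments the seconds field and carries
 * into the larger units -- no division or modulo needed. */
static unsigned char seconds, minutes, hours;
static unsigned int  days;

void tick(void)
{
    if (++seconds >= 60u)
    {
        seconds = 0u;
        if (++minutes >= 60u)
        {
            minutes = 0u;
            if (++hours >= 24u)
            {
                hours = 0u;
                ++days;
            }
        }
    }
}
```

Calling tick() once per second keeps the fields current for the cost of a few compares and increments, which is exactly the trade the comment proposes: slightly more logic in exchange for no expensive arithmetic.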
