Archive for November, 2009

Keeping your EEPROM data valid through firmware updates

Thursday, November 26th, 2009 Nigel Jones

Back when embedded systems used EPROM (no, that is not a typo, for the benefit of my younger readers) rather than Flash, the likelihood of the code being updated in the field was close to nil. Today, however, it is common for embedded systems to contain mechanisms that allow the code to be updated easily. Like most people, I embraced this feature enthusiastically. However, after I’d implemented a few systems that were field upgradable, I discovered that the ability to update in the field had an unexpected impact on my EEPROM data. To see what I mean, read on…

Most of the embedded systems I work on contain EEPROM. One of the prime uses for this EEPROM is for storing configuration / calibration information for the system. As a result, I often store data in EEPROM as a series of data structures at fixed locations, with gaps in between them. Thus, my EEPROM map might look something like this:

#define CAL_DATA_LOCATION     0x0010
#define CONFIG_DATA_LOCATION  0x0200
...
#define SYSTEM_PARAMS_LOCATION  0x1000
typedef struct
{
 uint32_t param1;
 uint16_t param2;
 ...
 uint8_t  spare[10];
} CALIBRATION_DATA;
__eeprom CALIBRATION_DATA Cal_Data @ CAL_DATA_LOCATION;
__eeprom CONFIGURATION_DATA Config_Data @ CONFIG_DATA_LOCATION;
...
__eeprom SYSTEM_DATA System_Data @ SYSTEM_PARAMS_LOCATION;

As you can see, I was smart enough to allow room for growth within the structure via the spare[] array. (I have intentionally omitted corruption-detection support to avoid complicating the issue at hand.) As a result, I thought I was all set if at some point a software update required me to store more parameters in a given EEPROM structure. Well, I went along in this blissful state of ignorance for a few years until the real world intruded in a rather ugly way. Here’s what happened. The firmware upgrade didn’t require me to add any new parameters to the EEPROM per se, but it did require that the data type of some of the parameters be changed. For example, my CALIBRATION_DATA structure might have to change to this:

typedef struct
{
 float     param1;
 uint16_t  param2;
 ...
 uint8_t   spare[10];
} CALIBRATION_DATA;

Thus param1 has changed from a uint32_t to a float. When the new code powered up, it had to read param1 as a uint32_t, convert it to a float, and write it back to the EEPROM. This was quite straightforward. The problem came the next time the system powered up. I realized that without some sort of logic in place, I would re-read param1, treat it as a uint32_t (even though it was now a float), ‘convert’ it to a float, and write it back to EEPROM, corrupting it. Clearly I needed some method of signaling that I had already performed the requisite upgrade. As I pondered this problem, I realized that it was even more complicated. Let us denote the two versions of CALIBRATION_DATA as version 1 and version 2 respectively. Furthermore, let’s assume that in version 3 of the code, param1 gets changed to a double (thus shifting all the other parameters down and consuming some of the spare allocation). That is, it looks like this:
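Incidentally, the read-as-one-type, write-back-as-another step is worth seeing concretely. Below is a minimal sketch of the version 1 to version 2 conversion, using memcpy on an ordinary RAM buffer to stand in for the compiler-specific EEPROM access; the function name is mine, not part of any real system:

```c
#include <stdint.h>
#include <string.h>

/* Upgrade step sketch: the 4 bytes at 'e' were written by version 1
   firmware as a uint32_t. Read them back as that integer, value-convert
   to float, and store the float's bytes back in place. From this point
   on, the bytes at this location represent a float, not a uint32_t. */
float upgrade_param1(uint8_t *e)
{
    uint32_t raw;
    memcpy(&raw, e, sizeof raw);   /* read the version 1 representation */
    float f = (float)raw;          /* convert the *value*, not the bits */
    memcpy(e, &f, sizeof f);       /* write the version 2 representation */
    return f;
}
```

Run this a second time without a version check and it would reinterpret the float’s bit pattern as an integer, which is precisely the corruption described above.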

typedef struct
{
 double     param1;
 uint16_t   param2;
 ...
 uint8_t    spare[6];
} CALIBRATION_DATA;

In this case, we must not only be able to handle the upgrade from version 2 to version 3, but also directly from version 1 to version 3. (You could of course require that users perform all upgrades in order. While I recognize that sometimes this is unavoidable, I suspect that most times it’s because the developer has backed themselves into the sort of corner I describe here.)

Anyway, with this insight in hand, I realized that I needed a generic system for tagging an EEPROM structure with the version of software that created it, together with a means of performing arbitrary updates. This is how I do it.

Step 1.
Make the first location of each EEPROM structure a version field. This version field contains the firmware version that created the structure. By making it the first location in the EEPROM data structure, you ensure that you can always read it regardless of what else happens to the structure. Thus my CALIBRATION_DATA structure now looks something like this:

typedef struct
{
 uint16_t version;
 uint32_t param1;
 uint16_t param2;
 ...
 uint8_t  spare[10];
} CALIBRATION_DATA;

Step 2.
Add code to handle the upgrades. This code must be called before any parameters are used from EEPROM. The code looks something like this:

void eeprom_Update(void)
{
 if (Cal_Data.version != SW_VERSION)
 {
  switch (Cal_Data.version)
  {
   case 0x100:
    /* Do necessary steps to upgrade version 1 data */
   break;
   case 0x200:
    /* Do necessary steps to upgrade version 2 data */
   break;
   default:
   break;
  }
  Cal_Data.version = SW_VERSION;  /* Update the EEPROM version number */
 }
}

Incidentally, I find that this is often one of those cases where falling through case statements is really useful. Of course, doing so is usually banned, and so one ends up with much more clumsy code than would otherwise be required.
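For what it’s worth, here is a sketch of that fall-through form, fleshed out just enough to compile: each case performs one upgrade step and deliberately falls into the next, so a version 1 image automatically receives the v1→v2 and v2→v3 steps in order. The structure is cut down to two fields, the EEPROM is simulated in RAM, and the upgrade helpers are hypothetical stand-ins that merely record that they ran:

```c
#include <stdint.h>

#define SW_VERSION 0x300

typedef struct
{
    uint16_t version;
    uint32_t param1;
} CALIBRATION_DATA;

/* RAM stand-in for the EEPROM-resident structure */
static CALIBRATION_DATA Cal_Data;

/* Hypothetical upgrade steps; here they just record that they ran */
static int v1_to_v2_ran, v2_to_v3_ran;
static void upgrade_v1_to_v2(void) { v1_to_v2_ran = 1; }
static void upgrade_v2_to_v3(void) { v2_to_v3_ran = 1; }

void eeprom_Update(void)
{
    if (Cal_Data.version != SW_VERSION)
    {
        switch (Cal_Data.version)
        {
            case 0x100:
                upgrade_v1_to_v2();
                /* deliberate fall-through: a v1 image also needs v2->v3 */
            case 0x200:
                upgrade_v2_to_v3();
                /* deliberate fall-through */
            default:
                break;
        }
        Cal_Data.version = SW_VERSION;
    }
}
```

A version 2 image entering at case 0x200 skips the v1→v2 step, which is exactly the behavior the break-laden version needs extra code to achieve.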

An Apology
Regular readers will no doubt have noticed that this is my first post in a while. A deadly combination of vacation and urgent projects with tight deadlines conspired to prevent me from blogging at my usual pace.


Eye, Aye I!

Friday, November 6th, 2009 Nigel Jones

Today’s post should probably be called ‘Thoughts on non-descriptive variable names’, but once in a while I have to let my creative side out!

Anyway, the motivation for today’s post is actually Michael Barr’s latest blog posting concerning analysis of the source code for a breathalyzer. Since I do expert witness work as well as develop products, I was keen to see what the experts in this case had to say. One snippet from the expert for the plaintiffs caught my eye. In appendix B of their report, the following statement was made concerning general issues with the Draeger code:

Non descriptive variable names – i, j, dummy and temp

This touched upon something where I seem to be at odds with the conventional wisdom. I’ll illustrate what I mean. Consider initializing an array to zero (I’ll ignore that we could use a library function for this). I would code it like this:

uint8_t buffer[BUFSIZE];
uint8_t i;
for (i = 0; i < BUFSIZE; i++)
{
 buffer[i] = 0;
}

This code would be rejected by many coding standards (and would apparently have drawn the experts’ criticism), as the loop variable ‘i’ is not descriptive. To be ‘correct’, I should instead code it like this:

uint8_t buffer[BUFSIZE];
uint8_t buffer_index;
for (buffer_index = 0; buffer_index < BUFSIZE; buffer_index++)
{
 buffer[buffer_index] = 0;
}

So for me, the question is: does the second approach buy me anything, or indeed cost me anything? Clearly, this is a matter of opinion. However, I’d make the following observations:

  1. I think my code is clear, concise, and easily understood by even the most unskilled programmer.
  2. Is the variable name ‘buffer_index’ clearer? Yes, but only to a native English speaker, and it’s my experience that there are a lot of non-native English speakers in the industry.
  3. Personally, I find the use of similar words in close proximity (buffer[buffer_index]) a bit harder to read, and very easy to misread if there are other variables around prefixed with buffer.

I’d also make the observation that many coding standards require variable names to be at least 3 characters long, and as a result I’ve seen code that looks like this:

uint8_t buffer[BUFSIZE];
uint8_t iii;
for (iii = 0; iii < BUFSIZE; iii++)
{
 buffer[iii] = 0;
}

Clearly in this case, the person is addressing the letter of the standard (if you’ll pardon the pun), but not the spirit. Where the standard requires the variable names to be meaningful, I’ve also seen this done:

uint8_t buffer[BUFSIZE];
uint8_t idx;
for (idx = 0; idx < BUFSIZE; idx++)
{
 buffer[idx] = 0;
}

This code meets the letter of the standard, and arguably the spirit. But is it really any more understandable than my original code? I don’t think so, but I’ll be interested to get your comments.


Lowering power consumption tip #3 – Using Relays

Monday, November 2nd, 2009 Nigel Jones

This is the third in a series of tips on lowering power consumption in embedded systems. Today’s topic concerns relays. It may be just the markets that I operate in, but relays seem to crop up in a very large percentage of the designs that I work on. If this is true for you, then today’s tip should be very helpful in reducing the power consumption of your system.

I’ll start by observing that relays consume a lot of power, at least in comparison to silicon-based components, and thus anything that can be done to minimize their power consumption typically has a large impact on the overall consumption of the system. That being said, usually the single biggest reduction in a relay’s power consumption comes from simply using a latching relay. (A latching relay is designed to maintain its state once power is removed from its coil, and thus only consumes power when switching, much like a CMOS gate.) However, latching relays cannot be used in circumstances where it is important that the relays revert to a known state in the event of a loss of power, and most embedded systems that I work on require the relays to have this property. In these cases, what can be done to minimize the relay’s power consumption?

If you look at the data sheet for a relay, you will see a plethora of parameters. However, the one of most interest is the operating current. (Relays are current-operated devices: it is the current flowing through the relay coil that generates a magnetic field, which in turn produces the magnetomotive force that moves the relay armature.) This current is the current required to actuate (pull in) the relay, and not much can be done about it. However, once a relay is actuated, the current required to hold it in that state, the holding current, is typically anywhere between a third and two thirds less than the pull-in current, and may or may not appear on the data sheet. Despite the fact that the holding current is so much less than the pull-in current, almost every design I see (including many of mine, I might add) eschews the power savings that are up for grabs and instead simply puts the pull-in current through the relay the whole time the relay is activated.

So why is this? Well, the answer is that it turns out it isn’t trivial to switch from the pull-in current to the holding current. To see what I mean – read on!

The typical hardware to drive a relay consists of a microcontroller port pin connected to the gate of an N-channel FET (BJTs are also used, but if you are interested in reducing power, a FET is the way to go). The FET in turn is connected to the relay coil. Thus to turn the relay on, one need only configure the microcontroller port pin as an output and drive it high, a trivial exercise.

To use the holding current approach, you need to do the following.

  1. Connect the FET to a microcontroller port pin that can generate a PWM waveform. The hardware is otherwise unchanged.
  2. To turn the relay on, drive the port pin high as before.
  3. Delay for the pull-in time of the relay. The pull-in time is typically of the order of 10 – 100 ms.
  4. Switch the port pin over to a PWM output. The PWM duty cycle dictates the effective current through the relay, and this is how you set the holding current. The other important parameter is the PWM frequency: its period should be at most one tenth of the pull-in time. For example, a relay with a pull-in time of 10 ms would require a PWM period of no more than 1 ms, giving a PWM frequency of at least 1 kHz. You can of course use higher frequencies, but then you are burning unnecessary power charging and discharging the gate of the FET.
  5. To turn the relay off, you must disable the PWM output and then drive the port pin low.
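The arithmetic in steps 3 and 4 is easy to get wrong in a hurry, so here is a small sketch that derives the PWM settings from the relay’s data-sheet figures. The one-tenth rule comes from step 4 above; the function names, and the example currents in the usage note, are my own assumptions rather than anything from a real data sheet:

```c
#include <stdint.h>

/* Derive PWM settings from relay data-sheet figures, per the steps
   above: PWM period no more than one tenth of the pull-in time, and
   duty cycle set by the ratio of holding current to pull-in current. */

/* Maximum allowable PWM period (in us) for a given pull-in time (in ms) */
uint32_t max_pwm_period_us(uint32_t pull_in_time_ms)
{
    return (pull_in_time_ms * 1000u) / 10u;
}

/* Minimum PWM frequency (in Hz) implied by that maximum period */
uint32_t min_pwm_freq_hz(uint32_t pull_in_time_ms)
{
    return 1000000u / max_pwm_period_us(pull_in_time_ms);
}

/* Holding duty cycle in percent, from the two data-sheet currents */
uint32_t holding_duty_pct(uint32_t hold_ma, uint32_t pull_in_ma)
{
    return (hold_ma * 100u) / pull_in_ma;
}
```

For the 10 ms relay in the example above, max_pwm_period_us(10) yields 1000 us, i.e. a minimum PWM frequency of 1 kHz; a relay holding at 30 mA against a 90 mA pull-in current would run at roughly 33% duty. In practice you would add some margin to the duty cycle so the relay holds reliably over temperature and supply variation.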

Looking at this, it really doesn’t seem too hard. However, compared to simply setting and clearing a port pin, it’s certainly a lot more work. Given that management doesn’t normally award points for reducing the power consumption of an embedded system, but does reward getting the system delivered on time, it’s hardly surprising that most systems don’t use this technique. Perhaps this post will start a tiny movement towards rectifying this situation.
