embedded software boot camp

Lowering power consumption tip #1 – Avoid zeros on the I2C bus

July 17th, 2009 by Nigel Jones

I already have a series of tips on efficient C and another on effective C. Today I’m introducing a third series of tips – this time centered on lowering the power consumption of embedded systems. Beyond the environmental benefits of reducing the power consumption of an embedded system, there is a plethora of other advantages, including reduced stress on regulators, extended battery life (for portable systems) and, of course, reduced EMI.

Notwithstanding these benefits, reducing power consumption is a topic that simply doesn’t get enough coverage. Indeed, when I first started working on portable systems twenty years ago there was almost nothing on this topic beyond ‘use the microprocessor’s power-saving modes’. Unfortunately I can’t say it has improved much beyond that!

So in an effort to remedy the situation I’ll be sharing with you some of the things I’ve learned over the last twenty years concerning reducing power consumption. Hopefully you’ll find it useful.

Anyway, enough preamble. Today’s posting concerns the ubiquitous I2C bus. The I2C bus is found in a very large number of embedded systems for the simple reason that it’s very good at solving certain types of problems. However, it’s not exactly a low power consumption interface. The reason is that its open-drain architecture requires a fairly stiff pull-up resistor on the clock (SCL) and data (SDA) lines. Typical values for these pull-up resistors are 1 kΩ – 5 kΩ. As a result, every time SCL or SDA goes low, you’ll be pulling several milliamps. Conversely, when SCL or SDA is high you consume essentially nothing. Now you can’t do much about the clock line (it has to go up and down in order to, well, clock the data) – but you can potentially do something about the data line. To illustrate my point(s) I’ll use as an example the ubiquitous 24LC series of I2C EEPROMs such as the 24LC16, 24LC32, 24LC64 and so on. For the purposes of this exercise I’ll use the 24LC64 from Microchip.

The first thing to note is that these EEPROMs have the most significant four I2C address bits (1010b) encoded in silicon – but the other three bits are set by strapping pins on the IC high or low. Now I must have seen dozens of designs that use these serial EEPROMs – and in every case the address lines were strapped low. Thus all of these devices were addressed at 1010000b. Simply strapping the three address lines high would change the device’s address to 1010111b – thus minimizing the number of zeros needed every time the device is addressed.

The second thing to note is that the memory address space for these devices is 16 bits. That is, after sending the I2C address, it is necessary to send 16 bits of information that specify the memory address to be accessed. Now in the case of the 24LC64, the three most significant address bits are ‘don’t care’. Again, in every example I’ve ever looked at, people do the ‘natural’ thing and set these bits to zero. Set them to 1 and you’ll get an immediate power saving on every address that you send.
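
In code, this amounts to OR-ing the don’t-care bits into the address before splitting it into the two address bytes. A minimal sketch (the function and macro names are my own, not from Microchip’s datasheet):

```c
#include <stdint.h>

/* 24LC64: 8 KB => 13 address bits; the top 3 bits of the 16-bit
   address field are "don't care", so force them high to save power. */
#define EE_DONT_CARE_MASK  0xE000u

static void ee_split_address(uint16_t addr, uint8_t *msb, uint8_t *lsb)
{
    addr |= EE_DONT_CARE_MASK;       /* set don't-care bits to 1 */
    *msb = (uint8_t)(addr >> 8);
    *lsb = (uint8_t)(addr & 0xFFu);
}
```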

As easy as this is, there’s still more that can be done in this area. In most applications I have ever looked at, the serial EEPROM is not completely used. Furthermore, the engineer again does the ‘natural’ thing, and allocates memory starting at the lowest address and works upwards. If instead you allocate memory from the top down, and particularly if you locate the most frequently accessed variables at the top of the memory, then you will immediately increase the average preponderance of ‘1s’ in the address field, thus minimizing power. (Incidentally if you find accessing the correct location in EEPROM hard enough already, then I suggest you read this article I wrote a few years ago. It has a very nifty technique for accessing serial EEPROMs courtesy of the offsetof() macro).
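
Combining top-down allocation with the offsetof() technique might look something like the following sketch. The struct, its field names and sizes are all invented for illustration; the idea is simply to anchor the layout at the top of the device, with the most frequently accessed fields placed last in the struct so they land at the highest (one-rich) addresses:

```c
#include <stddef.h>
#include <stdint.h>

#define EE_SIZE 8192u   /* 24LC64 is 8 KB */

/* EEPROM layout. Rarely accessed fields first, frequently accessed
   fields last, so the hot fields get the highest addresses. */
typedef struct {
    uint8_t  serial_no[16]; /* written once at manufacture */
    uint16_t error_count;
    uint16_t run_hours;     /* updated constantly */
} EE_LAYOUT;

/* EEPROM address of a field, with the layout anchored at the top
   of the device rather than at address zero. */
#define EE_ADDR(field) \
    (uint16_t)(EE_SIZE - sizeof(EE_LAYOUT) + offsetof(EE_LAYOUT, field))
```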

Finally we come to the data itself that gets stored in the EEPROM. If you examine the data that are stored in the EEPROM and analyze the distribution of the number of zero bits in each byte, then I think you’ll find that in many (most?) cases the results are heavily skewed towards the typical data byte having more zero bits than one bits. If this is the case for your data, then it points to a further power optimization – namely invert all bytes before writing them to EEPROM, and then invert them again when you read them back. With a little care you can build this into the low level driver such that the results are completely transparent to the higher levels of the application.
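
A sketch of such a transparent inversion layer follows. The ee_raw_* routines stand in for the real I2C byte transfers; here they are backed by a RAM buffer purely so the example is self-contained:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the real EEPROM: a RAM buffer. In a real driver
   these two routines would perform the I2C transactions. */
static uint8_t ee_mem[8192];
static void    ee_raw_write(uint16_t addr, uint8_t b) { ee_mem[addr] = b; }
static uint8_t ee_raw_read(uint16_t addr)             { return ee_mem[addr]; }

/* Invert on write, invert again on read: callers never see the
   complemented data, but zero-heavy bytes travel as one-heavy bytes. */
void ee_write(uint16_t addr, const uint8_t *src, size_t len)
{
    while (len--) {
        ee_raw_write(addr++, (uint8_t)~*src++);
    }
}

void ee_read(uint16_t addr, uint8_t *dst, size_t len)
{
    while (len--) {
        *dst++ = (uint8_t)~ee_raw_read(addr++);
    }
}
```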

If you put all these tips together, then the power savings can be substantial. To drive home the point, consider writing zero to address 0 with the 24LC64 located at I2C address 1010000b. Using the ‘normal’ methodology, you would send the following bytes:

10100000 //I2C address byte = 1010000b with R/W = 0
00000000 //Memory address MSB = 0x00
00000000 //Memory address LSB = 0x00
00000000 //Datum = 0x00

Using the amended methodology suggested herein, the 24LC64 would be addressed at 1010111b, the 3 most significant don’t care bits of the address would be set to 111b, the datum would be located at some higher order address, such as xxx11011 11001100b, and the datum would be inverted. Thus the bytes written would be:

10101110 //I2C Address byte = 1010111 with R/W = 0
11111011 //Memory address MSB = 0xFB
11001100 //Memory address LSB = 0xCC
11111111 //Datum = 0xFF

Thus using this slightly extreme example, the percentage of zeros in the bit stream has been reduced from 30/32 to 8/32 – a dramatic reduction in power.
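
If you want to check the arithmetic, a quick zero-bit count over the two byte streams confirms the numbers (the helper below is generic, not from any particular library):

```c
#include <stdint.h>
#include <stddef.h>

/* Count the zero bits in a buffer, MSB first in each byte. */
static unsigned count_zero_bits(const uint8_t *p, size_t n)
{
    unsigned zeros = 0;
    while (n--) {
        for (uint8_t mask = 0x80u; mask != 0u; mask >>= 1) {
            if ((*p & mask) == 0u) {
                zeros++;
            }
        }
        p++;
    }
    return zeros;
}
```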

Obviously with other I2C devices such as an ADC you will not always have quite this much flexibility. Conversely if you are talking to another microprocessor you’ll have even more flexibility in how you encode the data. The point is, with a little bit of thought you can almost certainly reduce the power consumption of your I2C interface.

As a final note. I mentioned that you can’t do much about the clock line. Well that’s not strictly correct. What you can do is run the clock at a different frequency. I’ll leave it for another posting to consider the pros and cons of changing the clock frequency.


Debugging with cell phones

July 11th, 2009 by Nigel Jones

If you walk in the door of a doctor’s office here in the USA, the chances are there will be a sign admonishing you to turn off your phone. Most people probably assume this has something to do with common courtesy – and I’m sure that’s part of it. However the larger issue is the fact that cell phone transmissions can play havoc with an EKG.

What’s this got to do with embedded systems? Well yesterday I was trying to debug a piece of code – only to be faced with a debug environment that would just randomly crash, taking down the debugger with it. Naturally my first thought was that I had made a stupid coding error. However, after some serious head scratching I noticed that I had placed my Blackberry down next to the ribbon cable leading from the emulator to the target. If a cell phone can mess up an EKG being performed 10 m away, I’m sure it can really do a number on a high speed debugger interface when it’s a mere 10 cm away. In short, not a smart idea. Removal of the cell phone solved the problem.

What’s the lesson here? Well the obvious one is that cell phones have no business in a laboratory. However, upon reflection there is a larger issue. I take great effort to make my code as hygienic as possible. However, my workbench is usually a disaster area with extraneous stuff all over the place. Maybe it’s time I literally cleaned my act up in this department. If I had I’d have noticed the phone a lot sooner.


Effective C Tip #4 – Prototyping static functions

July 4th, 2009 by Nigel Jones

This is the fourth in a series of tips on writing effective C.

I have previously talked about the benefits of static functions. Today I’m addressing where to place static functions in a module. This posting is motivated by the fact that I’ve recently spent a considerable amount of time wading through code that locates its static functions at the top of the file. That is, the code looks like this:

static void fna(void){...}

static void fnb(uint16_t a){...}

...

static uint16_t fnc(void){...}

void fn_public(void)
{
    uint16_t t;

    fna();
    t = fnc();
    fnb(t);
    ...
}

In this approach (which unfortunately seems to be the more common), all of the static functions are defined at the top of the module, and the public functions appear at the bottom. I’ve always strongly disliked this approach because it forces someone who is browsing the code to wade through all the minutiae of the implementation before they get to the big picture public functions. This can be very tedious in a file with a large number of static functions. The problem is compounded by the fact that it’s very difficult to search for a non-static function. Yes, I’m sure I could put together a regular expression search to do it – but it requires what I consider to be unnecessary work.

A far better approach is as follows. Prototype (declare) all the static functions at the top of the module. Then follow the prototypes with the public functions (thus making them very easy to locate) and then place the static functions out of the way at the end of the file. If I do this, my code example now looks like this:

static void fna(void);
static void fnb(uint16_t a);
static uint16_t fnc(void);

void fn_public(void)
{
    uint16_t t;

    fna();
    t = fnc();
    fnb(t);
    ...
}

static void fna(void){...}

static void fnb(uint16_t a){...}

...

static uint16_t fnc(void){...}

If you subscribe to the belief that we only write source code so that someone else can read it, then this simple change to your coding style can have immense benefits for the person who has to maintain your code (including a future version of yourself).

Update: There’s a very interesting discussion in the comments section – I recommend taking a look.

Thoughts on BCCs, LRCs, CRCs and being experienced

June 20th, 2009 by Nigel Jones

Those of us who have been working in this field for a long time are referred to as ‘experienced’. Experienced is taken to mean that we have been doing this for long enough that we have encountered many of the problems common to embedded systems and thus know how to solve them. Although this is true for many things, I think there is a downside to it – namely that, because we’ve successfully solved a particular problem a number of times, we fall into the trap of thinking that our solution is optimal. In order to guard against this it is essential to be proactive in seeking out new solutions to old problems. To illustrate my point, I’ll take you on an abbreviated trip through the memory lane of my career when it comes to that most prosaic of problems – transmitting serial data between microcontrollers.

Back when I was a lad I was by definition naive and so I just transmitted the data without any thought to how to detect errors beyond the use of a parity bit on each byte. Well it didn’t take me long to work out that a simple parity bit wasn’t exactly a robust way of detecting errors, and so I started appending a simple additive checksum to the message.

Well that worked for a while until the day it dawned on me that an additive checksum without an initial seed value was vulnerable to a stuck channel (e.g. all zeros). From that day on I started seeding my checksum computations with initial values. I tended to favour 0x2B (with apologies to Hamlet).
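
A seeded additive checksum is only a few lines of C. The sketch below uses my favourite seed; the function name is mine, and the point is simply that an all-zero (stuck-low) message no longer checks out as valid:

```c
#include <stdint.h>
#include <stddef.h>

#define CHECKSUM_SEED 0x2Bu  /* any non-zero value will do */

/* Modulo-256 additive checksum with a non-zero initial seed. */
uint8_t checksum(const uint8_t *p, size_t len)
{
    uint8_t sum = CHECKSUM_SEED;
    while (len--) {
        sum += *p++;
    }
    return sum;
}
```

Because the seed is non-zero, a stuck-at-zero channel produces a checksum of 0x2B rather than 0x00, so the fault is detected.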

Somewhere along the road I switched from performing an additive checksum to using an XOR operation. I can’t remember why I did this – but it just seemed ‘better’.

This approach served me well for many years until I started investigating cyclic redundancy checks (CRCs). I’d known about CRCs for a long time of course. However all the ones I knew about used 16- or 32-bit values and had certain wondrous but rather unspecified properties for detecting certain classes of errors. To put it bluntly they seemed like complete overkill for sending a short message between two microprocessors – and so I didn’t entertain them. However this all changed the day I came across an 8-bit CRC. This changed my perspective dramatically. An 8-bit CRC designed for protecting small messages – excellent! Thus henceforth I eschewed the use of an LRC and instead opted for an 8-bit CRC to protect my messages.

Well this continued for a number of years. I learned more about CRCs and I got older, until one day I decided to ask myself the question – is the 8-bit CRC I am using optimal? For regular readers of this blog, you’ll probably have noticed that ‘optimal solutions’ is a recurring theme. Anyway, with this thought in mind, I set off on a hunt to determine whether in fact the 8-bit CRC I was using to protect small messages was indeed optimal. That’s when I came across this paper by Koopman and Chakravarty. It’s entitled ‘Cyclic Redundancy Code (CRC) Polynomial Selection for Embedded Networks’. It’s a highly readable and informative paper. They essentially investigate what constitutes ‘optimal’ for a CRC polynomial and then exhaustively explore optimal polynomials for different data lengths and different polynomial lengths. Most interestingly they slay some sacred cows along the way, including the popular CRC-8 polynomial (x^8+x^7+x^6+x^4+x+1).

Having read the paper, I discovered that the CRC I was using (the so-called ATM-8 polynomial, x^8+x^2+x+1) wasn’t bad for my application – but it wasn’t optimal. Upon reflection this was hardly surprising, since I had essentially selected it on the basis that it was designed for a similar application to mine – and thus must be decent. However as Koopman shows – this can be a very foolhardy assumption. I just got lucky.
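
For reference, a bitwise implementation of that ATM-8 CRC is shown below. This is the standard textbook formulation (the function name is mine); a table-driven version would trade 256 bytes of ROM for speed:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-8 over the ATM HEC polynomial x^8+x^2+x+1 (0x07),
   initial value 0x00, no reflection, no final XOR. */
uint8_t crc8_atm(const uint8_t *p, size_t len)
{
    uint8_t crc = 0x00;
    while (len--) {
        crc ^= *p++;                      /* fold in the next byte */
        for (uint8_t i = 0; i < 8; i++) {
            crc = (crc & 0x80u) ? (uint8_t)((crc << 1) ^ 0x07u)
                                : (uint8_t)(crc << 1);
        }
    }
    return crc;
}
```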

More importantly from my perspective is that using Koopman’s paper I now have a logical methodology for determining the optimal CRC for any application. Thus after close to 30 years of doing this I think I’m finally homing in on the truly optimal solution to this problem.

Of course, the larger lesson to be learned here is that having done something a certain way for many years means nothing unless you know that it is the optimal way of doing it. That’s when you are truly ‘experienced’.

Do I have the technical skills to be a consultant?

June 17th, 2009 by Nigel Jones

My previous post on being a consultant addressed the issue of how to market yourself. Today I’ll look at something a little more prosaic – how can you tell if you have the necessary technical skills to be a consultant? This post was motivated by an email I received from Victor Johns, who basically asked the aforementioned question.

Before I answer this question, I should note that while technical skills are essential to being a successful consultant, they are by no means sufficient. I’ll leave it to another day to discuss the sales and business skills required to run a consulting business.

Anyway – on to the answer. Well my first and rather sardonic observation is that you don’t need to be technically competent at all. Just about every engineer I have ever met has unfortunately experienced the case of the clueless consultant – that is, someone who does more harm than good. While these individuals do of course exist, they are by no means ‘successful’, as they have to spend an inordinate amount of time winning new business because no one ever hires them a second time.

If we ignore the aforementioned clueless consultant, then I think my answer depends a bit on what sort of consultant you want to be. Some consultants are specialists and others are generalists. If you are a specialist, then essentially you are marketing yourself as the ‘go to guy’ in a narrow field. A good example might be Bluetooth. If you are promoting yourself as a Bluetooth expert then you had better know pretty much all there is to know about Bluetooth. However, what about the majority of consultants who are more generalists? In their case absolute knowledge is not as important as the ability to learn fast and to apply skills learned in one field to the field they are currently in. The reason I say this is because no sensible client will expect you to know ‘everything’ needed to do a particular job. Rather they expect that you have the fundamental skills upon which you can rapidly build in order to solve the problem. It’s for this reason that my ideal project is one with 30% ‘new stuff’. That is, I know exactly how to do 70% of the project, whereas the remaining 30% will require me to learn new tools and skills.

This of course brings up the issue of how one stays up to date. While there are many ways of doing this, I find textbooks to offer the best bang for the buck. Simply put, a $100 textbook that saves me an hour on a project is a good investment. One that saves me a day is an outstanding investment. It’s for this reason that I have a stellar technical library.

As a parting comment I’ll note that we have all run into the occasional engineer who ‘knows’ they know it all – while actually being pedestrian. In my experience it’s the engineers that have a lot of confidence in their ability – but still realize that they can’t hope to ‘know it all’ that ultimately will succeed in this business. I’m talking about you Victor!