
Lowering power consumption tip #1 – Avoid zeros on the I2C bus

Friday, July 17th, 2009 Nigel Jones

I already have a series of tips on efficient C and another on effective C. Today I’m introducing a third series of tips – this time centered on lowering the power consumption of embedded systems. Beyond the environmental benefits of reducing the power consumption of an embedded system, there is a plethora of other advantages, including reduced stress on regulators, extended battery life (for portable systems) and, of course, reduced EMI.

Notwithstanding these benefits, reducing power consumption is a topic that simply doesn’t get enough coverage. Indeed, when I first started working on portable systems twenty years ago there was almost nothing on this topic beyond ‘use the microprocessor’s power-saving modes’. Unfortunately I can’t say it has improved much beyond that!

So in an effort to remedy the situation I’ll be sharing with you some of the things I’ve learned over the last twenty years concerning reducing power consumption. Hopefully you’ll find it useful.

Anyway, enough preamble. Today’s posting concerns the ubiquitous I2C bus. The I2C bus is found in a very large number of embedded systems for the simple reason that it’s very good at solving certain types of problems. However, it’s not exactly a low power consumption interface. The reason is that its open-drain architecture requires a fairly stiff pull-up resistor on the clock (SCL) and data (SDA) lines. Typical values for these pull-up resistors are 1 kΩ – 5 kΩ. As a result, every time SCL or SDA goes low, you’ll be pulling several milliamps. Conversely, when SCL or SDA is high you consume essentially nothing. Now you can’t do much about the clock line (it has to go up and down in order to, well, clock the data) – but you can potentially do something about the data line. To illustrate my point(s) I’ll use as an example the popular 24LC series of I2C EEPROMs, such as the 24LC16, 24LC32, 24LC64 and so on. For the purposes of this exercise I’ll use the 24LC64 from Microchip.
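
To put a rough number on the pull-up current: assuming a 3.3 V supply and a 2.2 kΩ pull-up, a line that is held low sinks about 3.3 V / 2.2 kΩ ≈ 1.5 mA – so a bus that spends most of its time at zero can easily account for a couple of milliamps of average current all by itself.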

The first thing to note is that these EEPROMs have the most significant four I2C address bits (1010b) encoded in silicon – but the other three bits are set by strapping pins on the IC high or low. Now I must have seen dozens of designs that use these serial EEPROMs – and in every case the address lines were strapped low. Thus all of these devices were addressed at 1010000b. Simply strapping the 3 address lines high would change the device’s address to 1010111b – thus minimizing the number of zeros needed every time the device is addressed.
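
As a minimal sketch of the firmware side (the constant names are mine, not from the data sheet – the strapping itself is of course a hardware change):

/* 24LC64 slave address: the upper four bits are fixed at 1010b in
 * silicon; the lower three (A2:A0) follow the chip's address pins.
 * Strapping A2:A0 high gives the one-rich address 1010111b.
 */
#define EEPROM_I2C_ADDR_ALL_LOW    0x50u    /* 1010000b - pins strapped low  */
#define EEPROM_I2C_ADDR_ALL_HIGH   0x57u    /* 1010111b - pins strapped high */

#define EEPROM_I2C_ADDR            EEPROM_I2C_ADDR_ALL_HIGH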

The second thing to note is that the memory address space for these devices is 16 bits. That is, after sending the I2C address, it is necessary to send 16 bits of information that specify the memory address to be accessed. Now in the case of the 24LC64, the three most significant address bits are ‘don’t care’. Again, in every example I’ve ever looked at, people do the ‘natural’ thing and set these bits to zero. Set them to 1 instead and you’ll get an immediate power saving on every address that you send.
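
A sketch of how this might be folded into the address phase of a driver (i2c_write_byte() is a placeholder for whatever low-level routine your driver actually provides):

#include <stdint.h>

extern void i2c_write_byte(uint8_t byte);      /* assumed low-level primitive */

/* The 24LC64 is an 8K x 8 array, so only the low 13 bits of the 16-bit
 * memory address are significant. Forcing the three don't-care bits high
 * removes three zeros from every address phase at no cost.
 */
#define EEPROM_DONT_CARE_MASK   0xE000u

static void eeprom_send_address(uint16_t addr)
{
    addr |= EEPROM_DONT_CARE_MASK;             /* set don't-care bits to 1 */
    i2c_write_byte((uint8_t)(addr >> 8));      /* memory address MSB */
    i2c_write_byte((uint8_t)(addr & 0xFFu));   /* memory address LSB */
}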

As easy as this is, there’s still more that can be done in this area. In most applications I have ever looked at, the serial EEPROM is not completely used. Furthermore, the engineer again does the ‘natural’ thing, and allocates memory starting at the lowest address and works upwards. If instead you allocate memory from the top down, and particularly if you locate the most frequently accessed variables at the top of the memory, then you will immediately increase the average preponderance of ‘1s’ in the address field, thus minimizing power. (Incidentally if you find accessing the correct location in EEPROM hard enough already, then I suggest you read this article I wrote a few years ago. It has a very nifty technique for accessing serial EEPROMs courtesy of the offsetof() macro).
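
The two ideas combine neatly: describe the EEPROM contents as a struct, anchor it at the top of the array, and let offsetof() compute the addresses. A sketch, with made-up field names, might look like this:

#include <stddef.h>   /* offsetof() */
#include <stdint.h>

/* Layout of the EEPROM expressed as a struct that is never instantiated
 * in RAM - it exists only so that offsetof() can compute addresses.
 * Placing the most frequently accessed fields LAST puts them at the
 * highest (one-rich) addresses once the image is anchored at the top
 * of the 8 KB array.
 */
typedef struct
{
    uint8_t  rarely_used_log[256];  /* accessed once in a blue moon */
    uint32_t serial_number;         /* written once at manufacture  */
    uint16_t run_hours;             /* updated frequently           */
    uint8_t  user_settings[16];     /* read on every power-up       */
} EEPROM_LAYOUT;

/* Anchor the layout so that it ends at the top of the 24LC64 (0x1FFF). */
#define EEPROM_SIZE        0x2000u
#define EEPROM_BASE_ADDR   (EEPROM_SIZE - sizeof(EEPROM_LAYOUT))

#define EEPROM_ADDR_OF(field)  ((uint16_t)(EEPROM_BASE_ADDR + offsetof(EEPROM_LAYOUT, field)))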

Finally we come to the data itself that gets stored in the EEPROM. If you examine the data that are stored in the EEPROM and analyze the distribution of the number of zero bits in each byte, then I think you’ll find that in many (most?) cases the results are heavily skewed towards the typical data byte having more zero bits than one bits. If this is the case for your data, then it points to a further power optimization – namely invert all bytes before writing them to EEPROM, and then invert them again when you read them back. With a little care you can build this into the low level driver such that the results are completely transparent to the higher levels of the application.
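
Building the inversion into the lowest level of the driver keeps it invisible to the rest of the application. A sketch, assuming byte-level read and write primitives exist elsewhere in the driver (the names are illustrative):

#include <stdint.h>

/* Low-level primitives assumed to be provided by the rest of the driver. */
extern void    eeprom_raw_write(uint16_t addr, uint8_t datum);
extern uint8_t eeprom_raw_read(uint16_t addr);

/* Public interface: data are inverted on the way in and out, so a
 * zero-heavy data set is stored as mostly ones on the bus and in the
 * array. Callers never see the inversion.
 */
void eeprom_write(uint16_t addr, uint8_t datum)
{
    eeprom_raw_write(addr, (uint8_t)~datum);
}

uint8_t eeprom_read(uint16_t addr)
{
    return (uint8_t)~eeprom_raw_read(addr);
}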

If you put all these tips together, then the power savings can be substantial. To drive home the point, consider writing zero to address 0 with the 24LC64 located at I2C address 1010000b. Using the ‘normal’ methodology, you would send the following bytes:

10100000 //I2C Address byte = 1010000 with R/W = 0
00000000 //Memory address MSB = 0x00
00000000 //Memory address LSB = 0x00
00000000 //Datum = 0x00

Using the amended methodology suggested herein, the 24LC64 would be addressed at 1010111b, the 3 most significant don’t care bits of the address would be set to 111b, the datum would be located at some higher order address, such as xxx11011 11001100b, and the datum would be inverted. Thus the bytes written would be:

10101110 //I2C Address byte = 1010111 with R/W = 0
11111011 //Memory address MSB = 0xFB
11001100 //Memory address LSB = 0xCC
11111111 //Datum = 0xFF

Thus, using this slightly extreme example, the proportion of zeros in the bit stream has been reduced from 30/32 to 8/32 – a dramatic reduction in power.

Obviously with other I2C devices such as an ADC you will not always have quite this much flexibility. Conversely if you are talking to another microprocessor you’ll have even more flexibility in how you encode the data. The point is, with a little bit of thought you can almost certainly reduce the power consumption of your I2C interface.

As a final note, I mentioned that you can’t do much about the clock line. Well, that’s not strictly correct. What you can do is run the clock at a different frequency. I’ll leave it for another posting to consider the pros and cons of changing the clock frequency.
