Archive for the ‘Low Power Design’ Category

Tools to help lower power consumption

Tuesday, June 29th, 2010 Nigel Jones

Regular readers will know that low power designs are an interest of mine. Indeed, one of the very first blog posts I made lamented how difficult it is to ascertain how much energy it takes to perform the various tasks typical of an embedded system. Thus it was a pleasant surprise to receive an IAR newsletter today announcing a tool (‘Power debugging’) that is explicitly designed to help one lower a system’s power consumption. The tool isn’t available yet, but if the propaganda is to be believed it should be a very interesting adjunct to the debugging arsenal. The sign-up procedure to beta test the tool doesn’t seem to work properly, but on the assumption that I made it onto the beta tester list I will post a review once I get my hands on it.

BTW I have to admit I found the name of the article / tool (‘Power debugging’) a bit confusing in the sense that I interpreted power in the vernacular sense (e.g. ‘power walking’, ‘power breakfast’) rather than the engineering sense. I guess I’m just a victim of so much marketing hyperbole that I can’t recognize plain talk any more. Oh well!

Lowering power consumption tip #4 – transmitting serial data

Thursday, May 20th, 2010 Nigel Jones

This is the fourth in a series of tips on lowering power consumption in embedded systems. For this post I thought I’d delve into the common task of transmitting serial data. I compare open loop, polling and interrupt driven approaches, and show how a hybrid approach can sometimes be optimal.

Almost every embedded system I have ever worked on has contained serial links. At its most abstract level, a serial link takes in parallel data and converts it to a serial stream. This serialization inherently takes longer than the write to the register that holds the data, and thus to send multiple bytes back to back there is an inevitable delay between writes. The process thus looks like this:

Store data to be transmitted
Wait for data to be sent out
Store data to be transmitted
Wait for data to be sent out
...

Store data to be transmitted
Wait for data to be sent out

From a power consumption perspective, the question is – how best to wait for the data to be sent out? Well, you have four basic approaches – open loop, polling, interrupting or a hybrid combination. In assessing them, what I look at is how many non-useful clock cycles I have to execute in order to transmit a byte of data.

Open Loop

I use the term open loop to describe a technique whereby you make use of the properties of a synchronous link to know (or, more accurately, presume) that it is safe to send the next byte. This technique is only of use when the transmit frequency is very high in comparison to the CPU speed. For example, consider an SPI link between a CPU and a peripheral. In many cases this link may be clocked at up to half the CPU clock frequency, in which case it takes a mere 16 CPU clocks to shift out an 8-bit datum. As a result one can simply delay 16 clock cycles between writing successive bytes. The code looks something like this:

SBUF = datum[0];
delay(16 - LOAD_TIME);
SBUF = datum[1];
delay(16 - LOAD_TIME);
...

LOAD_TIME is a constant that takes into account the number of cycles required to get the next datum from memory and write it to SBUF. Thus the number of non-useful clock cycles per byte is (16 - LOAD_TIME).

Now most of you are probably thinking that I’m nuts for advocating this approach – and I’d tend to agree with you! It’s a technique I’ve only used a few times – and then only when I had to get the data out with the least possible latency and with the least amount of power consumed. I much prefer the next technique which can be almost as efficient – but a lot safer.

Polling

Polling differs from the open loop approach in that one polls a status register to determine when it is safe to write the next byte. This can be quite power efficient as long as, as in the previous example, the transmit speed is very high in comparison to the CPU speed. Thus the SPI link given in the open loop example is also a good candidate for this approach. The code looks something like this:

SBUF = datum[0];
wait_for_sbuf_empty();
SBUF = datum[1];
wait_for_sbuf_empty();
...

The key to making this approach as efficient as possible is to code the wait function so that you read the status register on the first clock after you expect SBUF to become available. In other words you still use a pre-calculated delay, but you throw in a check of the status register just to make sure before you load the next byte. By pre-fetching the next byte to be loaded and doing some other tweaking it’s often possible to get this approach almost as efficient as the open loop method. Notwithstanding these optimizations, the number of non-useful polling clock cycles will be greater than the number of CPU clocks required to transmit the data.
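
By way of illustration, here is a minimal sketch of what wait_for_sbuf_empty() might look like. The status register and flag names (SPI_STATUS, SPI_TXE) are placeholders for whatever your part’s header file defines, and LOAD_TIME is the same constant used in the open loop example:

#define SPI_TXE 0x01u                        /* Hypothetical 'TX empty' flag */

extern volatile unsigned char SPI_STATUS;    /* Hypothetical status register */
extern void delay(unsigned char cpu_clocks); /* Cycle-counted delay */

static void wait_for_sbuf_empty(void)
{
    /* Burn the clocks the transfer is known to take, so that the first
       read of the status register lands just as SBUF should free up */
    delay(16 - LOAD_TIME);

    /* Confirm the buffer really is empty before the caller reloads it.
       In the normal case this loop falls straight through */
    while ((SPI_STATUS & SPI_TXE) == 0)
    {
    }
}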

Interrupting

When the transmit frequency starts to slow down with respect to the CPU frequency, the number of non-useful clock cycles quickly starts to rise if one uses a polling method. The classic example of this is of course asynchronous serial links running at standard baud rates. In these cases the transmit time is a large fraction of a millisecond, and a polling approach consumes a huge number of CPU cycles (and hence power). The solution here is of course to turn to an interrupt driven approach. In this case the overhead of the ISR is the ‘non-useful’ clock cycles. As I showed in this article, the overhead of even a simple looking ISR can be quite significant. Notwithstanding this, for asynchronous serial links an interrupt based approach is nearly always the most efficient.
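
For reference, a minimal sketch of the classic interrupt driven transmitter appears below. SBUF, the __interrupt keyword and the mechanics of enabling the transmit interrupt are all toolchain and part specific, so treat the names here as placeholders; a real driver would also guard against starting a new transmission while one is still in progress:

static unsigned char const * volatile tx_ptr; /* Next byte to send */
static volatile unsigned char tx_count;       /* Bytes remaining */

void serial_send(unsigned char const *data, unsigned char len)
{
    if (len == 0)
    {
        return;
    }
    tx_ptr = data;
    tx_count = len - 1;
    SBUF = *tx_ptr++;    /* Prime the transmitter; the ISR sends the rest */
}

__interrupt void serial_tx_isr(void)
{
    if (tx_count != 0)
    {
        SBUF = *tx_ptr++;
        tx_count--;
    }
}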

Hybrid

The final methodology is what I term the hybrid approach. It’s typically the most power efficient and is well suited to medium to fast serial links. The code for it looks like this:

SBUF = datum[0];   /* Start transmission of the first byte */
__sleep();         /* Stop the CPU until the TX interrupt fires */
SBUF = datum[1];   /* Awake again - send the next byte */
__sleep();
...

__interrupt void sbuf_tx_isr(void)
{
    /* Deliberately empty - its only job is to wake the CPU */
}

In this approach, I enable the transmit interrupt, but have no code in the interrupt handler. After each write to SBUF I execute a sleep instruction, effectively stopping op-code processing. Once SBUF has emptied, it generates an interrupt. The processor vectors to the empty ISR, returns immediately, and then processes the next instruction, which stores the next byte in SBUF. In this case the overhead is the number of clock cycles to enter and exit sleep mode, plus the number of cycles to vector to an ISR and return. Depending upon your processor architecture this can be anything from almost nothing to quite a lot. However it is always less than a full-blown interrupt handler approach and is, in my experience, often less than the polling or open loop methods.
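
Pulling the pieces together, the transmit routine might look something like the sketch below. SBUF_TX_IE is a stand-in for whatever transmit interrupt enable bit your part provides, and __sleep() is whatever intrinsic your compiler offers for entering sleep mode:

void send_block(unsigned char const *datum, unsigned char len)
{
    unsigned char i;

    SBUF_TX_IE = 1;        /* Hypothetical: enable (only) the TX interrupt */

    for (i = 0; i < len; i++)
    {
        SBUF = datum[i];   /* Start the transfer */
        __sleep();         /* Stop the CPU; the empty ISR wakes it */
    }

    SBUF_TX_IE = 0;        /* Done - disable the TX interrupt */
}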

Notwithstanding the above, this method has several weaknesses:

  1. It should be obvious that the only interrupt that can be enabled is the SBUF transmit interrupt. (Actually it’s more accurate to say that the only interrupt that can cause the processor to exit sleep mode is the SBUF transmit interrupt. The MSP430, for example, allows one to do this).
  2. While I don’t consider this a kludge, it’s certainly not crystal clear what is going on. Thus clear documentation is a must.

Summary

  1. If you feel the need for the utmost efficiency then go open loop. It’s a bit like drag-racing in that it’s fast, furious and undoubtedly gets you from A to B ASAP. Just don’t be surprised if you blow up in the process.
  2. If open-loop isn’t for you then polling may make sense provided you can crank up the transmit speed high enough. This makes for the simplest code – and that’s always a plus in my book.
  3. If you have an asynchronous link, then an interrupt based approach is the right answer 99% of the time.
  4. If you have a medium to high speed link, then the hybrid approach has much to commend it. Once you’ve seen it done a few times it becomes less weird looking.


Lowering power consumption tip #3 – Using Relays

Monday, November 2nd, 2009 Nigel Jones

This is the third in a series of tips on lowering power consumption in embedded systems. Today’s topic concerns relays. It may be just the markets that I operate in, but relays seem to crop up in a very large percentage of the designs that I work on. If this is true for you, then today’s tip should be very helpful in reducing the power consumption of your system.

I’ll start by observing that relays consume a lot of power – at least in comparison to silicon based components, and thus anything that can be done to minimize their power consumption typically has a large impact on the overall consumption of the system. That being said, usually the thing that will reduce a relay’s power consumption the most is to simply use a latching relay. (A latching relay is designed to maintain its state once power is removed from its coil. Thus it only consumes power when switching – much like a CMOS gate). However, latching relays cannot be used in circumstances where it is important that the relays revert to a known state in the event of a loss of power. Most embedded systems that I work on require the relays to have this property. Thus in these cases, what can be done to minimize the relay’s power consumption?

If you look at the data sheet for a relay, you will see a plethora of parameters. However, the one of most interest is the operating current. (Relays are current operated devices. That is, it is the presence of current flowing through the relay coil that generates a magnetic field, which in turn produces the magnetomotive force that moves the relay armature.) This current is the current required to actuate (pull in) the relay. Not much can be done about this. However, once a relay is actuated, the current required to hold the relay in this state is typically anywhere between a third and two thirds less than the pull-in current. This current is called the holding current – and may or may not appear on the data sheet. Despite the fact that the holding current is so much less than the pull-in current, almost every design I see (including many of mine I might add) eschews the power savings that are up for grabs and instead simply puts the pull-in current through the relay the whole time the relay is activated.

So why is this? Well, the answer is that it turns out it isn’t trivial to switch from the pull-in current to the holding current. To see what I mean – read on!

The typical hardware to drive a relay consists of a microcontroller port pin connected to the gate of an N-channel FET (BJTs are also used, but if you are interested in reducing power, a FET is the way to go). The FET in turn is connected to the relay coil. Thus to turn the relay on, one need only configure the microcontroller port pin as an output and drive it high – a trivial exercise.

To use the holding current approach, you need to do the following (a code sketch appears after the list).

  1. Connect the FET to a microcontroller port pin that can generate a PWM waveform. The hardware is otherwise unchanged.
  2. To turn the relay on, drive the port pin high as before.
  3. Delay for the pull-in time of the relay. The pull-in time is typically of the order of 10 – 100 ms.
  4. Switch the port pin over to a PWM output. The PWM depth of course dictates the effective current through the relay, and this is how you set the holding current. The other important parameter is the PWM frequency. Its period should be at most one tenth of the pull-in time. For example, a relay that has a pull-in time of 10 ms would require a PWM period of no more than 1 ms, giving a PWM frequency of at least 1 kHz. You can of course use higher frequencies – but then you are burning unnecessary power in charging and discharging the gate of the FET.
  5. To turn the relay off, you must disable the PWM output and then drive the port pin low.
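
Here is a rough sketch of how this might look in code. The gpio_, pwm_ and delay_ms functions are stand-ins for your particular drivers, and the numbers are illustrative – the real pull-in time and holding current must come from the relay’s data sheet:

/* Hypothetical HAL - substitute your part's GPIO, PWM and timer drivers */
extern void gpio_drive_high(unsigned char pin);
extern void gpio_drive_low(unsigned char pin);
extern void pwm_start(unsigned char pin, unsigned int freq_hz, unsigned char duty_pct);
extern void pwm_stop(unsigned char pin);
extern void delay_ms(unsigned int ms);

#define RELAY_PIN        3u     /* Port pin driving the FET gate */
#define PULL_IN_TIME_MS  20u    /* From the relay data sheet */
#define HOLD_DUTY_PCT    40u    /* PWM depth that yields the holding current */
#define PWM_FREQ_HZ      1000u  /* Period is <= 1/10 of the pull-in time */

void relay_on(void)
{
    gpio_drive_high(RELAY_PIN);    /* Full pull-in current */
    delay_ms(PULL_IN_TIME_MS);     /* Wait for the armature to pull in */
    pwm_start(RELAY_PIN, PWM_FREQ_HZ, HOLD_DUTY_PCT); /* Drop to holding current */
}

void relay_off(void)
{
    pwm_stop(RELAY_PIN);           /* Disable the PWM output... */
    gpio_drive_low(RELAY_PIN);     /* ...and drive the pin low */
}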

Looking at this, it really doesn’t seem too hard. However compared to simply setting and clearing a port pin, it’s certainly a lot of work. Given that management doesn’t normally award points for reducing the power consumption of an embedded system, but does reward getting the system delivered on time, it’s hardly surprising that most systems don’t use this technique. Perhaps this post will start a tiny movement towards rectifying this situation.


Lowering power consumption tip #2 – modulate LEDs

Tuesday, September 22nd, 2009 Nigel Jones

This is the second in a series of tips on lowering power consumption in embedded systems.

LEDs are found on a huge percentage of embedded systems. Furthermore their current consumption can often be a very large percentage of the overall power budget for a system. As such, reducing the power consumption of LEDs can have a dramatic impact on the overall system power consumption. So how can this be done, you ask? Well, it turns out that LEDs are highly amenable to high power strobing. That is, pulsing an LED at, say, 100 mA with a 10% on-time (average current 10 mA) will cause it to appear as bright as an LED that is being statically powered at 20 mA. However, like most things, this tradeoff does not come for free; to take advantage of it, you have to be aware of the following:

  • LEDs are very prone to overheating failures. Thus putting a constant 100 mA through a 20 mA LED will rapidly lead to its failure. Thus any system that intentionally puts 100 mA through a 20 mA LED needs to be designed such that it can never allow 100 mA to flow for more than a few milliseconds at a time. Be aware that this limit can easily be exceeded when halted at a debugger breakpoint – so design the circuit accordingly!
  • The eye is very sensitive to flicker, and so the modulation frequency needs to be high enough that it is imperceptible.
  • You can’t sink these large currents into a typical microcontroller port pin. Thus an external driver is essential.
  • If the LED current is indeed a large portion of the overall power budget, then you have to be aware that the pulsed 100 mA current can put tremendous strain on the power supply.

Clearly then, this technique needs to be used with care. However, if you plan to do this from the start, then the hardware details are not typically that onerous and the firmware implementation details are normally straightforward. What I do is drive the LED off a spare PWM output. I typically set the frequency at about 1 kHz, and then set the PWM depth to obtain the desired current flow. Doing it this way imposes no overhead on the firmware and requires just a few setup instructions to get working. Furthermore a software crash is unlikely to freeze the PWM output in the on condition. Incidentally, as well as lowering your overall power consumption, this technique has two other benefits (a code sketch follows the list):

  • You get brightness control for free. Indeed by modulating the PWM depth you can achieve all sorts of neat effects. I have actually used this to convey multiple state information on a single LED. My experience is that it’s quite easy to differentiate between four states (off, dim, on, bright). Thus next time you need to get more mileage out of the ubiquitous debug LED, consider adding brightness control to it.
  • It can allow you to run LEDs off unregulated power. Thus as the supply voltage changes, you can simply adjust the PWM depth to compensate, thus maintaining quasi-constant brightness. This actually gives you further power savings because you are no longer having to accept the efficiency losses of the power supply.
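
By way of illustration, the setup might look something like the sketch below. The pwm_ calls are placeholders for your part’s timer setup, and the duty cycle must of course be chosen against your LED’s absolute maximum ratings:

/* Hypothetical PWM driver - substitute your part's timer setup */
extern void pwm_start(unsigned char pin, unsigned int freq_hz, unsigned char duty_pct);
extern void pwm_set_duty(unsigned char pin, unsigned char duty_pct);

#define LED_PIN      5u     /* PWM pin feeding the external LED driver */
#define LED_FREQ_HZ  1000u  /* Fast enough that the eye sees no flicker */

void led_init(void)
{
    /* 10% on-time at a 100 mA peak gives a 10 mA average, which looks
       about as bright as a steady 20 mA drive */
    pwm_start(LED_PIN, LED_FREQ_HZ, 10u);
}

/* Brightness control comes for free: e.g. off, dim, on, bright */
void led_set_level(unsigned char duty_pct)
{
    pwm_set_duty(LED_PIN, duty_pct);
}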

Anyway, give it a try on your next project. I think you’ll like it.

Lowering power consumption tip #1 – Avoid zeros on the I2C bus

Friday, July 17th, 2009 Nigel Jones

I already have a series of tips on efficient C and another on effective C. Today I’m introducing a third series of tips – this time centered on lowering the power consumption of embedded systems. As well as the environmental benefits of reducing the power consumption of an embedded system, there is also a plethora of other advantages, including reduced stress on regulators, extended battery life (for portable systems) and, of course, reduced EMI.

Notwithstanding these benefits, reducing power consumption is a topic that simply doesn’t get enough coverage. Indeed when I first started working on portable systems twenty years ago there was almost nothing on this topic beyond ‘use the microprocessor’s power-saving modes’. Unfortunately I can’t say it has improved much beyond that!

So in an effort to remedy the situation I’ll be sharing with you some of the things I’ve learned over the last twenty years concerning reducing power consumption. Hopefully you’ll find it useful.

Anyway, enough preamble. Today’s posting concerns the ubiquitous I2C bus. The I2C bus is found in a very large number of embedded systems for the simple reason that it’s very good for solving certain types of problems. However, it’s not exactly a low power consumption interface. The reason is that its open-drain architecture requires a fairly stiff pull-up resistor on the clock (SCL) and data (SDA) lines. Typical values for these pull-up resistors are 1 kΩ – 5 kΩ. As a result, every time SCL or SDA goes low, you’ll be pulling several milliamps (for example, 3.3 V across a 1 kΩ pull-up is 3.3 mA). Conversely when SCL or SDA is high you consume essentially nothing. Now you can’t do much about the clock line (it has to go up and down in order to, well, clock the data) – but you can potentially do something about the data line. To illustrate my point(s) I’ll use as an example the ubiquitous 24LC series of I2C EEPROMs such as the 24LC16, 24LC32, 24LC64 and so on. For the purposes of this exercise I’ll use the 24LC64 from Microchip.

The first thing to note is that these EEPROMs have the most significant four I2C address bits (1010b) encoded in silicon – but the other three bits are set by strapping pins on the IC high or low. Now I must have seen dozens of designs that use these serial EEPROMs – and in every case the address lines were strapped low. Thus all of these devices were addressed at 1010000b. Simply strapping the 3 address lines high would change the device’s address to 1010111b – thus minimizing the number of zeros needed every time the device is addressed.

The second thing to note is that the memory address space for these devices is 16 bits. That is, after sending the I2C address, it is necessary to send 16 bits of information that specify the memory address to be accessed. Now in the case of the 24LC64, the three most significant address bits are ‘don’t care’. Again, in every example I’ve ever looked at, people do the ‘natural’ thing and set these bits to zero. Set them to 1 and you’ll get an immediate power saving on every address that you send.

As easy as this is, there’s still more that can be done in this area. In most applications I have ever looked at, the serial EEPROM is not completely used. Furthermore, the engineer again does the ‘natural’ thing, and allocates memory starting at the lowest address and works upwards. If instead you allocate memory from the top down, and particularly if you locate the most frequently accessed variables at the top of the memory, then you will immediately increase the average preponderance of ‘1s’ in the address field, thus minimizing power. (Incidentally if you find accessing the correct location in EEPROM hard enough already, then I suggest you read this article I wrote a few years ago. It has a very nifty technique for accessing serial EEPROMs courtesy of the offsetof() macro).

Finally we come to the data itself that gets stored in the EEPROM. If you examine the data that are stored in the EEPROM and analyze the distribution of the number of zero bits in each byte, then I think you’ll find that in many (most?) cases the results are heavily skewed towards the typical data byte having more zero bits than one bits. If this is the case for your data, then it points to a further power optimization – namely invert all bytes before writing them to EEPROM, and then invert them again when you read them back. With a little care you can build this into the low level driver such that the results are completely transparent to the higher levels of the application.
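
A sketch of how the inversion – together with the don’t care address bits discussed earlier – might be buried in the low level driver is shown below. The ee_raw_write()/ee_raw_read() routines stand in for whatever byte-level I2C code you already have:

#include <stdint.h>

/* Hypothetical low-level I2C EEPROM routines */
extern void    ee_raw_write(uint16_t addr, uint8_t datum);
extern uint8_t ee_raw_read(uint16_t addr);

/* The 24LC64 ignores the top three bits of the 16-bit address, so
   force them high to put fewer zeros on the bus */
#define EE_DONT_CARE_BITS 0xE000u

void ee_write(uint16_t addr, uint8_t datum)
{
    /* Invert on the way in... */
    ee_raw_write(addr | EE_DONT_CARE_BITS, (uint8_t)~datum);
}

uint8_t ee_read(uint16_t addr)
{
    /* ...and invert again on the way out, so callers never notice */
    return (uint8_t)~ee_raw_read(addr | EE_DONT_CARE_BITS);
}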

If you put all these tips together, then the power savings can be substantial. To drive home the point, consider writing zero to address 0 with the 24LC64 located at I2C address 1010000b. Using the ‘normal’ methodology, you would send the following bytes:

10100000 //I2C Address byte = 1010000 with R/W = 0
00000000 //Memory address MSB = 0x00
00000000 //Memory address LSB = 0x00
00000000 //Datum = 0x00

Using the amended methodology suggested herein, the 24LC64 would be addressed at 1010111b, the 3 most significant don’t care bits of the address would be set to 111b, the datum would be located at some higher order address, such as xxx11011 11001100b, and the datum would be inverted. Thus the bytes written would be:

10101110 //I2C Address byte = 1010111 with R/W = 0
11111011 //Memory address MSB = 0xFB
11001100 //Memory address LSB = 0xCC
11111111 //Datum = 0xFF

Thus using this slightly extreme example, the proportion of zeros in the bit stream has been reduced from 30/32 to 8/32 – a dramatic reduction in power.

Obviously with other I2C devices such as an ADC you will not always have quite this much flexibility. Conversely if you are talking to another microprocessor you’ll have even more flexibility in how you encode the data. The point is, with a little bit of thought you can almost certainly reduce the power consumption of your I2C interface.

As a final note, I mentioned that you can’t do much about the clock line. Well, that’s not strictly correct. What you can do is run the clock at a different frequency. I’ll leave it for another posting to consider the pros and cons of changing the clock frequency.
