Archive for the ‘Hardware’ Category

Lowering power consumption tip #2 – modulate LEDs

Tuesday, September 22nd, 2009 Nigel Jones

This is the second in a series of tips on lowering power consumption in embedded systems.

LEDs are found on a huge percentage of embedded systems, and their current consumption can often be a very large fraction of the overall power budget. As such, reducing the power consumption of LEDs can have a dramatic impact on overall system power consumption. So how can this be done, you ask? Well, it turns out that LEDs are highly amenable to high-power strobing. That is, pulsing an LED at, say, 100 mA with a 10% on time (average current 10 mA) will cause it to appear as bright as an LED that is statically powered at 20 mA. However, like most things, this tradeoff does not come for free; to take advantage of it, you have to be aware of the following:

  • LEDs are very prone to overheating failures. Putting a constant 100 mA through a 20 mA LED will rapidly lead to its failure. Thus any system that intentionally puts 100 mA through a 20 mA LED must be designed such that it can never allow 100 mA to flow for more than a few milliseconds at a time. Be aware that this limit can easily be exceeded when halted at a breakpoint in a debugger – so design the circuit accordingly!
  • The eye is very sensitive to flicker, and so the modulation frequency needs to be high enough that it is imperceptible.
  • You can’t sink these large currents into a typical microcontroller port pin. Thus an external driver is essential.
  • If the LED current is indeed a large portion of the overall power budget, then be aware that the pulsed 100 mA current can put tremendous strain on the power supply.

Clearly then, this technique needs to be used with care. However, if you plan for it from the start, then the hardware details are typically not that onerous and the firmware implementation is normally straightforward. What I do is drive the LED off a spare PWM output. I typically set the frequency at about 1 kHz, and then set the PWM depth to obtain the desired current flow (a minimal setup sketch appears after the list below). Doing it this way imposes no overhead on the firmware and requires just a few setup instructions to get working. Furthermore, a software crash is unlikely to freeze the PWM output in the on condition. Incidentally, as well as lowering your overall power consumption, this technique has two other benefits:

  • You get brightness control for free. Indeed, by modulating the PWM depth you can achieve all sorts of neat effects. I have actually used this to convey multiple state information on a single LED. My experience is that it’s quite easy to differentiate between four states (off, dim, on, bright). Thus next time you need to get more mileage out of the ubiquitous debug LED, consider adding brightness control to it.
  • It can allow you to run LEDs off unregulated power. As the supply voltage changes, you can simply adjust the PWM depth to compensate, thus maintaining quasi-constant brightness. This actually gives you a further power saving, because you are no longer having to accept the efficiency losses of the power supply.
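To make this concrete, here is a minimal sketch of the PWM setup, assuming an ATmega-class AVR, avr-gcc style headers and a build-supplied F_CPU; the register names, output pin and prescaler choice will vary with your part:

#include <stdint.h>
#include <avr/io.h>

void led_pwm_init(uint8_t duty_percent)
{
    DDRB |= (1u << PB1);                                         /* OC1A pin as an output */

    ICR1 = (uint16_t)(F_CPU / 8UL / 1000UL - 1UL);               /* TOP for ~1 kHz with clk/8 */
    OCR1A = (uint16_t)(((uint32_t)ICR1 * duty_percent) / 100UL); /* PWM depth */

    /* Fast PWM, TOP = ICR1, non-inverting drive on OC1A, prescaler = 8 */
    TCCR1A = (1u << COM1A1) | (1u << WGM11);
    TCCR1B = (1u << WGM13) | (1u << WGM12) | (1u << CS11);
}

Once configured, the timer hardware runs the LED with no further CPU involvement – which is precisely why a firmware crash is unlikely to leave the output stuck on.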

Anyway, give it a try on your next project. I think you’ll like it.

FRAM in embedded systems

Friday, September 18th, 2009 Nigel Jones

In a previous post I mentioned that I had recently attended a seminar put on by TI. One of the things mentioned briefly in the seminar was that TI will soon be releasing members of its popular MSP430 line containing ferroelectric RAM, or FRAM as it is usually referred to. There’s an informative but poorly produced video on the TI website that describes FRAM’s properties. (To view it, just enter the search term ‘FRAM’ at ti.com. You have to register first, otherwise I’d give you the direct link). Alternatively, Wikipedia has a nice write-up as well.

The basic properties of FRAM are quite tantalizing – non-volatile, fast and symmetric read / write times, very low power, and essentially immune to light, radiation, magnetic fields, etc. Although its speed and density aren’t yet good enough to replace other memory types at the high end, the same is not true for MSP430-class microcontrollers.

From what was said at the seminar, it seems likely that TI will soon introduce versions of the MSP430 that contain only FRAM, and that you, the engineer, will be able to partition it as you see fit between code and data storage. Furthermore, since the data storage is inherently non-volatile, it can presumably be further divided between scratch storage and configuration parameters.

This is all very interesting, but what are the advantages of FRAM over today’s typical configuration of Flash + SRAM + EEPROM? Well TI has identified what they consider to be several key areas, namely:

  • Data logging applications. They point out (quite correctly) that with FRAM there is no need to worry about wear-leveling algorithms, and that data can be stored (written) 1000 times faster than with Flash or EEPROM. While this is all true, I’m actually a bit skeptical that this will be a huge game changer. Why? Well, if I can write data 1000 times faster, then I’m going to fill the memory 1000 times faster as well. To put it another way, all the data logging systems I’ve ever worked on that use low end processors (such as the MSP430) have logged no more than about a dozen datums, no faster than a couple of times a second. In short, high write speeds aren’t important. However, I do concede that obviating the need for wear-leveling algorithms is very nice.
  • High security applications. One of the fields that I work in is smartcards. Smartcards are used extensively in the fields of access control, conditional access systems for pay TV, smart purses and so on. The key feature of smart cards is their security. One way to attack a smart card is via differential power analysis (DPA). The basic idea is that by measuring the cycle-by-cycle change in the power consumption of the card, it is possible to determine what it’s doing. Given that FRAM consumes essentially the same (and very low) power whether it is being read or written, it is very hard to perform a DPA attack on it. However, for most general purpose applications, this benefit is zero.
  • Low power. For me this is a huge benefit. The ability to write to FRAM at less than 2V will undoubtedly allow me to extend the battery life of some of the systems that I design. Furthermore, the amount of energy required to write a byte of FRAM is minuscule compared to Flash or EEPROM. I think TI should be commended for their relentless pursuit of low power in their MSP430 line.
  • Lack of data corruption. Yes folks, believe it or not, TI is actually claiming that FRAM eliminates the possibility of data corruption that is associated with other non-volatile memories. Upon hearing this I couldn’t make up my mind whether to blame the marketing department or the hardware guys. Regardless, it’s clearly not true. While I concede that the fast write times significantly reduce the probability of data corruption occurring, they most certainly do not eliminate it. Until the silicon vendors come up with a mechanism for guaranteeing that an arbitrarily sized block of data can be written atomically regardless of what the power supply is doing, memory will always be prone to corruption.

So do I see any downsides to FRAM usage in microcontrollers? Not really. However, I do expect that it will reveal weaknesses in a lot of code (which is of course a good thing). I expect this will come about because today, when a system powers up, the contents of RAM are quasi-random. Code that relies on a location not being a certain value on start up thus has a high probability of working. With FRAM, however, that location will contain whatever you last wrote to it – with all that that implies. As a result, I expect people writing for FRAM systems will get religion in a hurry about data initialization. Anyway, once some parts are out, I hope to have a play with them. If I do, I’ll undoubtedly write about my experiences.
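To illustrate the sort of explicit initialization I have in mind, here is a hedged sketch; the magic-number scheme and all the names are mine, and it assumes the linker places the structure in FRAM and the startup code is configured not to zero it:

#include <stdint.h>

#define CONFIG_MAGIC 0x5AA5u  /* arbitrary marker chosen for this example */

typedef struct
{
    uint16_t magic;      /* marks the block as having been initialized */
    uint16_t baud_rate;
} CONFIG;

static CONFIG config;    /* assumed to persist in FRAM across power cycles */

void config_init(void)
{
    /* Don't rely on the power-up contents being random garbage - test explicitly */
    if (CONFIG_MAGIC != config.magic)
    {
        config.baud_rate = 9600u;  /* defaults for first boot */
        config.magic = CONFIG_MAGIC;
    }
}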


Lowering power consumption tip #1 – Avoid zeros on the I2C bus

Friday, July 17th, 2009 Nigel Jones

I already have a series of tips on efficient C and another on effective C. Today I’m introducing a third series of tips – this time centered on lowering the power consumption of embedded systems. As well as the environmental benefits of reducing the power consumption of an embedded system, there is also a plethora of other advantages, including reduced stress on regulators, extended battery life (for portable systems) and, of course, reduced EMI.

Notwithstanding these benefits, reducing power consumption is a topic that simply doesn’t get enough coverage. Indeed, when I first started working on portable systems twenty years ago there was almost nothing on this topic beyond ‘use the microprocessor’s power-saving modes’. Unfortunately I can’t say it has improved much beyond that!

So in an effort to remedy the situation I’ll be sharing with you some of the things I’ve learned over the last twenty years concerning reducing power consumption. Hopefully you’ll find it useful.

Anyway, enough preamble. Today’s posting concerns the ubiquitous I2C bus. The I2C bus is found in a very large number of embedded systems for the simple reason that it’s very good for solving certain types of problems. However, it’s not exactly a low power interface. The reason is that its open-drain architecture requires a fairly stiff pull-up resistor on the clock (SCL) and data (SDA) lines. Typical values for these pull-up resistors are 1 kΩ – 5 kΩ. As a result, every time SCL or SDA goes low, you’ll be pulling several milliamps; conversely, when SCL or SDA is high you consume essentially nothing. Now you can’t do much about the clock line (it has to go up and down in order to, well, clock the data) – but you can potentially do something about the data line. To illustrate my point(s) I’ll use as an example the ubiquitous 24LC series of I2C EEPROMs such as the 24LC16, 24LC32, 24LC64 and so on. For the purposes of this exercise I’ll use the 24LC64 from Microchip.

The first thing to note is that these EEPROMs have the most significant four I2C address bits (1010b) encoded in silicon – the other three bits are set by strapping pins on the IC high or low. Now I must have seen dozens of designs that use these serial EEPROMs – and in every case the address lines were strapped low, so that all of these devices were addressed at 1010000b. Simply strapping the 3 address lines high would change the device’s address to 1010111b – thus minimizing the number of zeros needed every time the device is addressed.

The second thing to note is that the memory address space for these devices is 16 bits. That is, after sending the I2C address, it is necessary to send 16 bits of information that specify the memory address to be accessed. Now, in the case of the 24LC64, the three most significant address bits are ‘don’t care’. Again, in every example I’ve ever looked at, people do the ‘natural’ thing and set these bits to zero. Set them to 1 and you’ll get an immediate power saving on every address that you send.

As easy as this is, there’s still more that can be done in this area. In most applications I have ever looked at, the serial EEPROM is not completely used. Furthermore, the engineer again does the ‘natural’ thing, and allocates memory starting at the lowest address and works upwards. If instead you allocate memory from the top down, and particularly if you locate the most frequently accessed variables at the top of the memory, then you will immediately increase the average preponderance of ‘1s’ in the address field, thus minimizing power. (Incidentally if you find accessing the correct location in EEPROM hard enough already, then I suggest you read this article I wrote a few years ago. It has a very nifty technique for accessing serial EEPROMs courtesy of the offsetof() macro).
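To combine this top-down allocation with the offsetof() technique, here is a minimal sketch; the layout, field names and base address are illustrative, and it assumes the 24LC64’s 8 KB (0x0000 – 0x1FFF) address space:

#include <stddef.h>
#include <stdint.h>

/* Describe the EEPROM contents with a struct that is never instantiated
   in RAM; offsetof() then gives each field's position within the layout. */
typedef struct
{
    uint16_t run_hours;   /* frequently written */
    uint8_t  contrast;
    uint8_t  spare[13];
} EEPROM_LAYOUT;

/* Anchor the layout at the very top of the 24LC64's address space, so
   every field resolves to a high address containing mostly 1 bits. */
#define EE_BASE         (0x2000u - sizeof(EEPROM_LAYOUT))
#define EE_ADDR(field)  ((uint16_t)(EE_BASE + offsetof(EEPROM_LAYOUT, field)))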

Finally we come to the data itself that gets stored in the EEPROM. If you examine the data that are stored in the EEPROM and analyze the distribution of the number of zero bits in each byte, then I think you’ll find that in many (most?) cases the results are heavily skewed towards the typical data byte having more zero bits than one bits. If this is the case for your data, then it points to a further power optimization – namely invert all bytes before writing them to EEPROM, and then invert them again when you read them back. With a little care you can build this into the low level driver such that the results are completely transparent to the higher levels of the application.
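A minimal sketch of that transparent inversion layer follows; ee_write_raw() and ee_read_raw() are hypothetical stand-ins for your existing byte-level EEPROM routines:

#include <stdint.h>

extern void    ee_write_raw(uint16_t addr, uint8_t data);  /* existing low-level driver */
extern uint8_t ee_read_raw(uint16_t addr);

void ee_write(uint16_t addr, uint8_t data)
{
    ee_write_raw(addr, (uint8_t)~data);   /* store inverted: more 1s on SDA */
}

uint8_t ee_read(uint16_t addr)
{
    return (uint8_t)~ee_read_raw(addr);   /* invert back - callers never know */
}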

If you put all these tips together, then the power savings can be substantial. To drive home the point, consider writing zero to address 0 with the 24LC64 located at I2C address 1010000b. Using the ‘normal’ methodology, you would send the following bytes:

10100000 //I2C Address byte = 1010000b with R/W = 0
00000000 //Memory address MSB = 0x00
00000000 //Memory address LSB = 0x00
00000000 //Datum = 0x00

Using the amended methodology suggested herein, the 24LC64 would be addressed at 1010111b, the 3 most significant don’t care bits of the address would be set to 111b, the datum would be located at some higher order address, such as xxx11011 11001100b, and the datum would be inverted. Thus the bytes written would be:

10101110 //I2C Address byte = 1010111 with R/W = 0
11111011 //Memory address MSB = 0xFB
11001100 //Memory address LSB = 0xCC
11111111 //Datum = 0xFF

Thus using this slightly extreme example, the percentage of zeros in the bit stream has been reduced from 30/32 to 8/32 – a dramatic reduction in power.

Obviously with other I2C devices such as an ADC you will not always have quite this much flexibility. Conversely if you are talking to another microprocessor you’ll have even more flexibility in how you encode the data. The point is, with a little bit of thought you can almost certainly reduce the power consumption of your I2C interface.

As a final note, I mentioned that you can’t do much about the clock line. Well, that’s not strictly correct. What you can do is run the clock at a different frequency. I’ll leave it for another posting to consider the pros and cons of changing the clock frequency.


Checking the fuse bits in an Atmel AVR at run time

Friday, May 15th, 2009 Nigel Jones

In general I try and post on topics that have broad appeal in the embedded world. Today I’m going to partially break with that tradition to show how to check the fuse bits in an Atmel AVR class processor. However, before I do so, I’d like to discuss my motivations for wanting to do this.

The AVR processor family, together with the PIC and other processor families, contains fuse / configuration bits. These bits are settable only at program time and are used to configure the behavior of the processor at run time. Typical parameters that are configured are oscillator types, brown-out voltage detect levels and memory partitioning. Now, as I lamented in this post, there is no great way of communicating to the production staff how you want these fuse bits programmed. As a result I consider there to be a very high probability that a mistake will be made in production – and that all my efforts at crafting perfect code will thus be for naught. While it is much better to prevent mistakes, if you can’t do so, then the next best thing is to detect them. As a result, on one of the products that I am working on, I have as one of the startup tests a check to ensure that the fuse bits are indeed what they are supposed to be. While I recognize that if the fuse settings are dreadfully wrong it is unlikely that my code will run at all, I’m actually more concerned with the case where the fuse bits are set mostly correct – and thus the code works most of the time.

So how do I do this on an AVR? Well if you are using an IAR compiler the work is mostly done for you. Here it is:

#include <intrinsics.h>
#include <stdint.h>

/* Macros to read the various fuse bytes */
#define _SPM_GET_LOW_FUSEBITS()       __AddrToZByteToSPMCR_LPM((void __flash*)0x0000U, 0x09U)
#define _SPM_GET_HIGH_FUSEBITS()      __AddrToZByteToSPMCR_LPM((void __flash*)0x0003U, 0x09U)
#define _SPM_GET_EXTENDED_FUSEBITS()  __AddrToZByteToSPMCR_LPM((void __flash*)0x0002U, 0x09U)

/* Structure to store the fuse bytes */
typedef struct
{
    uint8_t fuse_low;      /* The low fuse setting */
    uint8_t fuse_high;     /* The high fuse setting */
    uint8_t fuse_extended; /* The extended fuse setting */
    uint8_t lockbits;      /* The lockbits */
} FUSE_SETTINGS;

/* Storage for the fuse settings will be in EEPROM. FUSE_VALUES is the
   EEPROM address of the structure, defined elsewhere in the project. */
static __eeprom __no_init FUSE_SETTINGS Fuse_Settings @ FUSE_VALUES;

void fuses_Read(void)
{
    FUSE_SETTINGS value;

    value.fuse_low = _SPM_GET_LOW_FUSEBITS();
    value.fuse_high = _SPM_GET_HIGH_FUSEBITS();
    value.fuse_extended = _SPM_GET_EXTENDED_FUSEBITS();
    value.lockbits = _SPM_GET_LOCKBITS();
    __no_operation();   /* barrier before writing EEPROM - see the note below */

    Fuse_Settings = value;
}

The macro __AddrToZByteToSPMCR_LPM() is defined in intrinsics.h. Essentially it takes care of all the necessary finicky register usage required to read the fuse bits. You’ll also notice that I have used a macro _SPM_GET_LOCKBITS() to read the lockbits. This macro is also found in intrinsics.h. The really observant reader may wonder why there isn’t a macro in intrinsics.h for reading the fuse bits. Well, there is – but it only reads the low fuse byte, which is all the early AVR processors had. I’ve pointed this out to IAR and they have promised to address it in the next release (thanks Steve!).

Before I leave this topic, I’ll also point out that I don’t read the fuse settings directly into EEPROM. Instead I read them into RAM and then copy the entire structure to EEPROM. I do this because writing to EEPROM messes with the same registers used for reading the fuse bits – and thus bad things happen. This also explains the __no_operation() statement before the data are copied to EEPROM.
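For completeness, here is a hedged sketch of the startup check itself; the EXPECTED_* values are placeholders for whatever fuse settings your design actually requires:

#include <stdbool.h>

/* Placeholder values - substitute the settings your design requires */
#define EXPECTED_FUSE_LOW       0xE2U
#define EXPECTED_FUSE_HIGH      0xD9U
#define EXPECTED_FUSE_EXTENDED  0xFFU

bool fuses_Ok(void)
{
    bool ok = true;

    if (_SPM_GET_LOW_FUSEBITS() != EXPECTED_FUSE_LOW)
    {
        ok = false;
    }
    if (_SPM_GET_HIGH_FUSEBITS() != EXPECTED_FUSE_HIGH)
    {
        ok = false;
    }
    if (_SPM_GET_EXTENDED_FUSEBITS() != EXPECTED_FUSE_EXTENDED)
    {
        ok = false;
    }

    return ok;   /* false => the part was programmed incorrectly in production */
}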

Incidentally, I don’t know of a way to read the configuration bits of a PIC at run time. Chalk this up as one more reason why an AVR is superior to a PIC!


Dogging your watchdog

Tuesday, November 4th, 2008 Nigel Jones

Most embedded systems employ watchdog timers. It’s not my intention today to talk about why to use watchdog timers, or indeed how to use them. Rather I assume you know the answers to these questions. Instead, I’ll pass on some tips for how to track down those unexpected watchdog resets that can occur during the development process.

To track these down, it is essential to find out where the watchdog reset is occurring. Unfortunately, this isn’t easy, since by definition a watchdog reset resets the processor, typically destroying all the state information that could be used to debug the problem. To get around this, here are a few things you can try.

  1. Place a breakpoint on the (watchdog) reset vector. Although this will typically not stop the processor from being reset, it will ensure that none of your variables get initialized by your startup code. As a result, you should be able to use your debugger to examine these variables – which may give you an insight into what is going wrong.
  2. Certain processor architectures allow the action of the watchdog timer to be changed from a classic watchdog (where a timeout resets the processor) to a special form of timer, complete with its own interrupt vector. Although I rarely use this mode of operation in release code, it is very useful for debugging. Simply reconfigure the watchdog to generate an interrupt upon timeout, and place a breakpoint in the watchdog’s ISR. Then when the watchdog times out, your debugger will stop at the breakpoint. It’s then just a simple matter of stepping out of the ISR to return to the exact point in your code where the watchdog timeout occurred. (The first sketch after this list shows such a configuration for an AVR.)
  3. If neither of the above methods is available to you, and you are genuinely clueless as to where to start looking, then a painful but workable solution is to ‘instrument’ entry into each function. This essentially consists of some code that is placed at the start of every function. The code’s job is to record the ID of the function into some form of storage that will not be affected by a watchdog reset, so that you can identify the offending function after a watchdog reset has occurred. (The second sketch after this list shows the idea.) This isn’t quite as bad as it sounds, provided you are good with macros and a scripting language such as Perl, and are aware of common compiler vendor extensions such as the __FUNCTION__ macro. Of course, if you are that good, the chances are you won’t be clueless as to why you are taking a watchdog reset!
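First, a hedged sketch of running the watchdog in interrupt mode, assuming an ATmega328P-class AVR and avr-gcc; the register names and the timed change-enable sequence vary across the AVR family:

#include <avr/interrupt.h>
#include <avr/wdt.h>

ISR(WDT_vect)
{
    /* Place your breakpoint here, then step out to find the culprit */
}

void wdt_debug_mode(void)
{
    cli();
    wdt_reset();
    WDTCSR |= (1u << WDCE) | (1u << WDE);  /* start the timed change-enable sequence */
    WDTCSR = (1u << WDIE) | (1u << WDP3);  /* interrupt (not reset) on timeout, ~4 s */
    sei();
}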
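Second, a sketch of the instrumentation idea; the IAR-style __no_init keyword keeps the startup code from wiping the trace after a watchdog reset (use your compiler’s equivalent, such as a .noinit section), and all the names are illustrative:

#include <stdint.h>

#define TRACE_DEPTH 8u   /* must be a power of two for the masking below */

static __no_init const char *trace_buf[TRACE_DEPTH];  /* survives a watchdog reset */
static __no_init uint8_t trace_idx;

/* Paste (or script) this into the start of every function of interest */
#define FUNC_ENTRY() (trace_buf[trace_idx++ & (TRACE_DEPTH - 1u)] = __FUNCTION__)

void motor_control(void)
{
    FUNC_ENTRY();
    /* ... */
}

After the reset, examine trace_buf in the debugger: the most recently recorded names point at where the system was when the watchdog bit.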

I’ll leave it to another post to talk about the sort of code that often causes watchdog timeouts.
