Archive for September, 2009

The consultant's dilemma

Tuesday, September 29th, 2009 Nigel Jones

Today I’m going to talk about an interesting ethical dilemma that is faced by all engineers at various times in their careers but which consultants face much more frequently because of the nature of the work. The situation is as follows:

A (potential) client has a new project that they wish to pursue, and they have brought you in to discuss its feasibility, risk, development costs and so on. At a certain point in the discussion, the topic of CPU architecture comes up. In rare cases, there is only one CPU that makes sense for the job. However, in the majority of cases it’s clear that there are a number of potential candidates that could get the job done, and the client is interested in your opinion as to which way to go. In my experience you have the following options:

  1. Recommend your favorite architecture.
  2. Recommend that time be spent investigating the optimal architecture.
  3. Recommend the architecture that you are most interested in gaining experience on in order to develop your career.

Let’s take a look at these options:

Favorite Architecture
The advantage of going with your favorite architecture is that presumably you are highly experienced with the processor family and that you already have all the requisite tools in order to allow you to quickly and effectively develop the solution. The downsides to this approach are:

  1. It leads to antiquated architectures hanging around forever. The prime example of this is of course the 8051.
  2. It means that your skill set can stagnate over time.
  3. It may also mean that the client pays more for the hardware than they would with a better-suited part. This comes about when, for example, an ARM processor is used where an HC08 would have done quite nicely.

Architecture Investigation
With this approach you are essentially asking the client to pay you to work out the optimal solution to their problem. Sometimes this is just a few days’ work; other times it’s a lot more. This is often a tough sell, because clients expect the consultant to instantly know the best architecture for their application. Furthermore, at the end of the day the consultant may end up recommending an architecture with which they have little experience. Whether you think this is reasonable or not depends on how you view consultants.

Career Development
In the 25+ years I’ve been doing this, I’ve only come across a few blatant cases where it’s clear that an architecture was chosen because that’s what the lead engineer wanted to play with next. My experience is that engineers are way more likely to be too conservative and stick with their favorite architecture than they are to go this route. Nevertheless if you are in the position of asking an engineer (and particularly a consultant) for a CPU architecture recommendation, then you must be aware that this does go on. Your best defense against this is to closely question why a particular architecture is being recommended.

So what do I do when faced with this issue? Well, you’ll be pleased to know that I have never recommended an architecture in order to further my career. The decision whether to recommend my favorite architecture or to suggest an investigation comes down to cost. If the client will be building 500 of the widgets a year, then development costs will dwarf hardware costs and I’ll go with my favorite architecture. Conversely, if they will be building 10,000 widgets a year, then an investigation is a must. The middle ground is where it gets tricky!

I’d be interested in hearing how you have handled this dilemma.

Minimizing memory use in embedded systems Tip #3 – Don’t use printf()

Thursday, September 24th, 2009 Nigel Jones

This is the third in a series of tips on minimizing memory consumption in embedded systems.

If you are like me, the first C program you saw was K&R’s famous ‘hello, world’ code, reproduced below:

main()
{
    printf("hello, world\n");
}

In my opinion, this program has done incalculable harm to the realm of embedded systems programming! I appreciate that this is a rather extreme statement – but as is usual I have my reasons …

The interesting thing about this code is that it introduces printf() – and as such gives the impression that printf() is an important (and useful) part of the C language. Well, I suppose it is (or was) for those programming computers. However, for those programming embedded systems, printf() and its brethren (sprintf(), vsprintf(), scanf() etc.) are in general a disaster waiting to happen for the unwary. Here is why:

Code Size

The printf() functions are immensely sophisticated functions, and as such consume an incredible amount of code space. I have clear memories of an early 8051 compiler’s printf() function consuming 8K of code space (and this was at a time when an 8K program was a decent size). Since then, compiler vendors have put a lot of effort into addressing this issue. For example IAR allows you to specify the functionality (and hence size) of printf() as a library option. Notwithstanding this, if your available code space is less than 32K the chances are you really shouldn’t be using printf(). But what if you need some of the features of printf()? Well in that case I recommend you write your own formatting function. For example I often find that I have a small microcontroller project that needs to talk over a serial link using an ASCII protocol. In cases like these, the easy thing to do is to generate the requisite string using a complex format string with sprintf(). However, with a little bit of ingenuity you should be able to create the string using a series of calls to simple formatting routines. I can guarantee that you’ll end up with more compact code.
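To give a flavor of what I mean, here is a minimal sketch of the sort of helper routine I have in mind: an unsigned 16-bit to decimal ASCII converter that stands in for a sprintf("%u", ...) call. The function name and interface are purely illustrative.

#include <stdint.h>

/* Convert an unsigned 16-bit value to decimal ASCII. buf must be at
   least 6 bytes (up to five digits plus the terminating NUL).
   Returns the number of digits written. No field widths, padding or
   sign handling: that is the point. */
uint8_t u16_to_dec(uint16_t value, char *buf)
{
    char    tmp[5];
    uint8_t n = 0;
    uint8_t i;

    do
    {
        tmp[n++] = (char)('0' + (value % 10u));
        value /= 10u;
    } while (value > 0u);

    for (i = 0; i < n; i++)          /* digits were generated in reverse */
    {
        buf[i] = tmp[(n - 1u) - i];
    }
    buf[n] = '\0';

    return n;
}

String together a handful of routines like this and you can build most ASCII protocol messages with a fraction of the code space that sprintf() drags in.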

Stack Size

Barely a day goes by without someone ending up on this blog because they have a stack overflow caused by printf(), sprintf() or vsprintf(). Why is this? Well, if you are ever feeling bored one day, try to write the printf() function. If you do, you’ll soon find that it is not only difficult, but also that it requires a large amount of space for the function arguments, a lot of temporary buffer space for doing the formatting, and a large number of intermediate variables. In short, it needs a tremendous amount of stack space. Indeed, I have had embedded systems that need a mere 32 bytes of stack space prior to using printf() – and 200+ bytes after I’ve added in printf(). The bottom line is that for small embedded systems, formatted output needs a ridiculous amount of stack space, and as a result stack overflow is a real possibility.
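If you want to put numbers like these on your own system, the classic trick is to fill the stack with a known pattern at start up and then see how much of the pattern survives once the code (with and without printf()) has been exercised. Here is a minimal sketch. It assumes a stack that grows downwards, and the __stack_start / __stack_end symbols are placeholders for whatever your particular linker script actually provides.

#include <stdint.h>
#include <stddef.h>

/* Placeholder linker symbols bounding the stack region (lowest and
   highest addresses respectively). The real names depend on your
   toolchain. */
extern uint8_t __stack_start[];
extern uint8_t __stack_end[];

#define STACK_FILL_BYTE (0xA5u)

/* Call as early as possible: paints the unused part of the stack. */
void stack_paint(void)
{
    uint8_t  probe;                /* sits at roughly the current SP */
    uint8_t *p;

    for (p = __stack_start; p < &probe; p++)   /* skip live frames   */
    {
        *p = STACK_FILL_BYTE;
    }
}

/* Returns the number of stack bytes never used so far, i.e. the
   remaining headroom after the worst case to date. */
size_t stack_headroom(void)
{
    uint8_t const *p = __stack_start;
    size_t         unused = 0;

    while ((p < __stack_end) && (*p == STACK_FILL_BYTE))
    {
        unused++;
        p++;
    }

    return unused;
}

Run it once with your own formatting routines and once with printf() added, and the difference can be sobering.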

Variable length arguments

I’m sure most people use sprintf() etc. without fully appreciating that these functions take a variable length argument list. I’ll leave the full implications of this for another day. For now, just consider that MISRA bans the use of variable length argument lists – and take this as a strong hint to avoid these functions in embedded systems.
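To see part of the reason for the ban, consider the following deliberately broken fragment. Because printf() takes a variable length argument list, the compiler is not obliged to check that the arguments match the format string (many embedded compilers will not even warn), and the behavior is undefined:

#include <stdio.h>

int main(void)
{
    double voltage = 3.3;

    /* %d expects an int but a double is passed. The variadic
       prototype means this mismatch need not be diagnosed. */
    printf("Voltage: %d\n", voltage);

    return 0;
}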

Execution time

The execution time of printf() can be spectacularly long. For example, the ‘hello world’ program given in the introduction requires 1000 cycles on an AVR CPU. Changing it to the almost as trivial program shown below increases the execution time to 6371 cycles:

#include <stdio.h>

int main(void)
{
    int i = 89;

    printf("hello, world %d\n", i);

    return 0;
}

Lest you think this is an indictment of the AVR processor, the same code for a generic ARM processor still takes a whopping 1738 cycles. In short, printf() and its brethren can take a really long time to execute.

Now, does all of the above mean you should always eschew formatted output functions? No! Indeed, I recommend the use of vsprintf() for certain classes of problem. What I do recommend is that you think long and hard before using these functions, to ensure that you really understand what you are doing (and getting) when you use them.
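By way of illustration, one reasonable pattern is to hide the formatted output behind a small wrapper along the following lines. This is just a sketch: uart_send_string() is a stand-in for whatever output mechanism your system actually has, and the buffer size is something you must choose with the worst case format string (and your stack) in mind. Note that it uses the bounded vsnprintf() rather than vsprintf().

#include <stdarg.h>
#include <stdio.h>

/* Stand-in for your system's output routine. */
extern void uart_send_string(char const *s);

void debug_printf(char const *fmt, ...)
{
    char    buf[64];
    va_list args;

    va_start(args, fmt);
    (void)vsnprintf(buf, sizeof(buf), fmt, args);  /* bounded output */
    va_end(args);

    uart_send_string(buf);
}

All of the code size, stack and execution time costs described above still apply; the wrapper merely confines them to one place where they can be reasoned about (and, if need be, compiled out).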


Lowering power consumption tip #2 – modulate LEDs

Tuesday, September 22nd, 2009 Nigel Jones

This is the second in a series of tips on lowering power consumption in embedded systems.

LEDs are found on a huge percentage of embedded systems. Furthermore, their current consumption can often be a very large percentage of the overall power budget for a system. As such, reducing the power consumption of LEDs can have a dramatic impact on the overall system power consumption. So how can this be done, you ask? Well, it turns out that LEDs are highly amenable to high power strobing. That is, pulsing an LED at, say, 100 mA with a 10% on time (average current 10 mA) will cause it to appear as bright as an LED that is statically powered at 20 mA. However, like most things, this tradeoff does not come for free; to take advantage of it, you have to be aware of the following:

  • LEDs are very prone to overheating failures. Thus putting a constant 100 mA through a 20 mA LED will rapidly lead to its failure. Any system that intentionally puts 100 mA through a 20 mA LED therefore needs to be designed such that it can never allow 100 mA to flow for more than a few milliseconds at a time. Be aware that this limit can easily be exceeded when stopped at a breakpoint in a debugger – so design the circuit accordingly!
  • The eye is very sensitive to flicker, and so the modulation frequency needs to be high enough that it is imperceptible.
  • You can’t sink these large currents into a typical microcontroller port pin. Thus an external driver is essential.
  • If the LED current is indeed a large portion of the overall power budget, then you have to be aware that the pulsed 100 mA current can put tremendous strain on the power supply.

Clearly then, this technique needs to be used with care. However, if you plan for it from the start, the hardware details are typically not that onerous and the firmware implementation is normally straightforward. What I do is drive the LED off a spare PWM output. I typically set the frequency at about 1 kHz, and then set the PWM depth to obtain the desired current flow (a minimal setup sketch appears after the list below). Doing it this way imposes no overhead on the firmware and requires just a few setup instructions to get working. Furthermore, a software crash is unlikely to freeze the PWM output in the on condition. Incidentally, as well as lowering your overall power consumption, this technique has two other benefits:

  • You get brightness control for free. Indeed by modulating the PWM depth you can achieve all sorts of neat effects. I have actually used this to convey multiple state information on a single LED. My experience is that it’s quite easy to differentiate between four states (off, dim, on, bright). Thus next time you need to get more mileage out of the ubiquitous debug LED, consider adding brightness control to it.
  • It can allow you to run LEDs off unregulated power. Thus as the supply voltage changes, you can simply adjust the PWM depth to compensate, maintaining quasi-constant brightness. This actually gives you further power savings, because you are no longer having to accept the efficiency losses of the power supply.
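
For the sake of illustration, here is roughly what the setup looks like on an ATmega328P clocked at 8 MHz, driving the LED (through an external transistor, per the list above) from the OC1A pin. The part, clock frequency and pin choice are assumptions; adapt them to your own hardware.

#include <stdint.h>
#include <avr/io.h>

void led_pwm_init(void)
{
    DDRB |= (1u << DDB1);                        /* OC1A (PB1) as output */

    /* Timer 1, Fast PWM mode 14 (TOP = ICR1), non-inverting on OC1A. */
    TCCR1A = (1u << COM1A1) | (1u << WGM11);
    TCCR1B = (1u << WGM13) | (1u << WGM12) | (1u << CS11);   /* clk/8 */

    ICR1  = 999u;     /* 8 MHz / 8 / (999 + 1) = 1 kHz PWM frequency  */
    OCR1A = 100u;     /* roughly 10% on time to start with            */
}

/* Brightness (and hence average LED current) is then just a compare
   register write; the CPU is otherwise not involved at all. */
void led_set_brightness(uint16_t duty)
{
    OCR1A = duty;     /* 0 ... ICR1 */
}

Once this is running, the four state trick mentioned above is simply a matter of picking four duty cycle values.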

Anyway, give it a try on your next project. I think you’ll like it.

FRAM in embedded systems

Friday, September 18th, 2009 Nigel Jones

In a previous post I mentioned that I had recently attended a seminar put on by TI. One of the things mentioned briefly in the seminar was that TI will soon be releasing members of its popular MSP430 line containing Ferroelectric RAM, or FRAM as it is usually referred to. There’s an informative, if poorly produced, video on the TI website that describes FRAM’s properties. (To view it, just enter the search term ‘FRAM’ at ti.com. You have to register first, otherwise I’d give you the direct link.) Alternatively, Wikipedia has a nice write-up as well.

The basic properties of FRAM are quite tantalizing – non-volatile, fast and symmetric read / write times, very low power, and essentially immune to light, radiation, magnetic fields etc. Although its speed and density aren’t yet good enough to replace other memory types at the high end, the same is not true for MSP430 class microcontrollers.

From what was said at the seminar, it seems likely that TI will soon introduce versions of the MSP430 that contain only FRAM, and that you, the engineer, will be able to partition it as you see fit between code and data storage. Furthermore, since FRAM is inherently non-volatile, the data storage part can presumably be further divided between scratch storage and configuration parameters.

This is all very interesting, but what are the advantages of FRAM over today’s typical configuration of Flash + SRAM + EEPROM? Well TI has identified what they consider to be several key areas, namely:

  • Data logging applications. They point out (quite correctly) that with FRAM there is no need to worry about wear leveling algorithms, and that data can be stored (written) 1000 times faster than with Flash or EEPROM. While this is all true, I’m actually a bit skeptical that this will be a huge game changer. Why? Well, if I can write data 1000 times faster, then I’m going to fill the memory 1000 times faster as well. To put it another way, all the data logging systems I’ve ever worked on that use low end processors (such as the MSP430) have logged no more than a dozen or so values, no faster than a couple of times a second. In short, high write speeds aren’t important. However, I do concede that obviating the need for wear leveling algorithms is very nice.
  • High security applications. One of the fields that I work in is smartcards. Smartcards are used extensively in access control, conditional access systems for pay TV, smart purses and so on. The key feature of smart cards is their security. One way to attack a smart card is via differential power analysis (DPA). The basic idea is that by measuring the cycle by cycle change in the power consumption of the card, it is possible to determine what it’s doing. Given that FRAM consumes essentially the same (and very low) power whether it is being read or written, it is very hard to perform a DPA attack on it. However, for most general purpose applications, this benefit is zero.
  • Low power. For me this is a huge benefit. The ability to write to FRAM at less than 2 V will undoubtedly allow me to extend the battery life of some of the systems that I design. Furthermore, the amount of energy required to write a byte of FRAM is minuscule compared to Flash or EEPROM. I think TI should be commended for their relentless pursuit of low power in their MSP430 line.
  • Lack of data corruption. Yes folks, believe it or not, TI is actually claiming that FRAM eliminates the possibility of the data corruption that is associated with other non-volatile memories. Upon hearing this I couldn’t make up my mind whether to blame the marketing department or the hardware guys. Regardless, it’s clearly not true. While I concede that the fast write times significantly reduce the probability of data corruption occurring, they most certainly do not eliminate it. Until the silicon vendors come up with a mechanism for guaranteeing that an arbitrarily sized block of data can be written atomically regardless of what the power supply is doing, memory will always be prone to corruption.

So do I see any downsides to FRAM usage in microcontrollers? Not really. However, I do expect that it will reveal weaknesses in a lot of code (which is of course a good thing). I expect this will come about because today, when a system powers up, the contents of RAM are quasi-random. Code that relies on a location not being a certain value on start up thus has a high probability of working. However, with FRAM, that location will contain whatever you last wrote to it – with all that it implies. As a result, I expect people writing for FRAM systems will get religion in a hurry about data initialization. Anyway, once some parts are out, I hope to be able to have a play with them. If I do, I’ll undoubtedly write about my experiences.
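To make the point concrete, here is a minimal sketch of the sort of defensive initialization I mean. The layout and signature value are purely illustrative, and I’m assuming the linker has been told to place the structure in the FRAM region; the point is simply that the code must decide for itself whether whatever it last wrote is still meaningful, rather than assuming anything about the power-up state.

#include <stdint.h>
#include <string.h>

#define CONFIG_SIGNATURE (0xC0FFEE01u)

typedef struct
{
    uint32_t signature;      /* validity marker, written last     */
    uint16_t log_interval_s; /* illustrative application settings */
    uint16_t threshold;
} config_t;

static config_t config;      /* assumed to be located in FRAM     */

void config_init(void)
{
    if (config.signature != CONFIG_SIGNATURE)
    {
        /* First ever boot, or the stored data is suspect: establish
           known defaults rather than trusting whatever is there.   */
        memset(&config, 0, sizeof(config));
        config.log_interval_s = 60u;
        config.threshold      = 100u;
        config.signature      = CONFIG_SIGNATURE;
    }
}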


A ‘C’ Test: The 0x10 Best Questions for Would-be Embedded Programmers (reprised)

Tuesday, September 15th, 2009 Nigel Jones

In May 2000 Embedded Systems Programming magazine (now Embedded Systems Design) published an article I had written entitled “A ‘C’ Test: The 0x10 Best Questions for Would-be Embedded Programmers.”

A revised version is posted at: A ‘C’ Test: The 0x10 Best Questions for Would-be Embedded Programmers.

I received a lot of mail about it at the time (including a decent amount of hate mail), and, much to my amazement, I continue to get mail about it to this day. The article has been shamelessly copied all over the web and its title is a popular search term that drives people to this blog.

Be aware that this test has been widely publicized, so be very suspicious of someone who does really well on it! To illustrate my point: when I wrote the article I was doing some work at a large company and was sharing an office with a fellow consultant, Nelson. Naturally, I had Nelson proofread the article. Fast forward a few months to when Nelson went off to an interview with a new potential client. Well, it so happens that the interview occurred on the same day that my article was published, and the interviewer proceeded to use it verbatim on Nelson. Nelson, of course, aced the test, leaving the interviewer astounded. Needless to say, we both found this very amusing! Alas, no one has hit me with it in the intervening 9+ years. Maybe next week…

If you would like a version of this test in Word format, or could use some expertise from an embedded systems consultant, please contact me.