embedded software boot camp

FRAM in embedded systems

September 18th, 2009 by Nigel Jones

In a previous post I mentioned that I had recently attended a seminar put on by TI. One of the things mentioned briefly in the seminar was that TI will soon be releasing members of its popular MSP430 line containing Ferroelectric RAM, or FRAM as it is usually referred to. There’s an informative, if poorly produced, video on the TI website that describes FRAM’s properties. (To view it, just enter the search term ‘FRAM’ at ti.com. You have to register first, otherwise I’d give you the direct link.) Alternatively, Wikipedia has a nice write-up as well.

The basic properties of FRAM are quite tantalizing – non-volatile, fast and symmetric read/write times, very low power, and essentially immune to light, radiation, magnetic fields and so on. Although its speed and density aren’t yet good enough to replace other memory types at the high end, the same is not true for MSP430-class microcontrollers.

From what was said at the seminar, it seems likely that TI will soon introduce versions of the MSP430 that contain only FRAM, and that you, the engineer, will be able to partition it as you see fit between code and data storage. Furthermore, since the data storage is inherently non-volatile, it can presumably be further divided between scratch storage and configuration parameters.

This is all very interesting, but what are the advantages of FRAM over today’s typical configuration of Flash + SRAM + EEPROM? Well TI has identified what they consider to be several key areas, namely:

  • Data logging applications. They point out (quite correctly) that with FRAM there is no need to worry about wear-leveling algorithms, and that data can be written about 1000 times faster than to Flash or EEPROM. While this is all true, I’m actually a bit skeptical that this will be a huge game changer. Why? Well, if I can write data 1000 times faster, then I’m going to fill the memory 1000 times faster as well. To put it another way, every data logging system I’ve ever worked on that uses a low-end processor (such as the MSP430) has logged no more than about a dozen data items, no faster than a couple of times a second. In short, high write speeds aren’t important. However, I do concede that obviating the need for wear-leveling algorithms is very nice.
  • High security applications. One of the fields that I work in is smart cards. Smart cards are used extensively in access control, conditional access systems for pay TV, smart purses and so on. The key feature of smart cards is their security. One way to attack a smart card is via differential power analysis (DPA). The basic idea is that by measuring the cycle-by-cycle change in the power consumption of the card, it is possible to determine what it’s doing. Given that FRAM consumes essentially the same (and very low) power whether it is being read or written, a DPA attack against it becomes very hard to perform. However, for most general purpose applications, this benefit is zero.
  • Low power. For me this is a huge benefit. The ability to write to FRAM at less than 2 V will undoubtedly allow me to extend the battery life of some of the systems that I design. Furthermore, the amount of energy required to write a byte of FRAM is minuscule compared to Flash or EEPROM. I think TI should be commended for their relentless pursuit of low power in their MSP430 line.
  • Lack of data corruption. Yes folks, believe it or not, TI is actually claiming that FRAM eliminates the possibility of data corruption that is associated with other non-volatile memories. Upon hearing this I couldn’t make up my mind whether to blame the marketing department or the hardware guys. Regardless, it’s clearly not true. While I concede that the fast write times significantly reduce the probability of data corruption occurring, they most certainly do not eliminate it. Until the silicon vendors come up with a mechanism for guaranteeing that an arbitrarily sized block of data can be written atomically, regardless of what the power supply is doing, memory will always be prone to corruption.
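To make the atomicity point concrete: a power-safe update scheme remains the application’s responsibility, whatever the memory technology. Here’s a minimal sketch (hypothetical names, with the FRAM region modeled as ordinary globals) of a two-slot ‘ping-pong’ record protected by a sequence number and checksum. An individual write still isn’t atomic, but a torn write can only ever damage the older copy:

```c
#include <stdint.h>

/* Hypothetical record we want to survive power loss. */
typedef struct {
    uint32_t seq;       /* monotonically increasing write counter  */
    uint16_t value;     /* the application data itself             */
    uint16_t checksum;  /* integrity check over seq + value        */
} record_t;

/* Two slots emulating a region of FRAM; on a real part these would
   sit at fixed non-volatile addresses. */
static record_t fram_slot[2];

static uint16_t compute_checksum(const record_t *r)
{
    return (uint16_t)(r->seq ^ (r->seq >> 16) ^ r->value ^ 0xA5A5u);
}

/* Write to the *older* slot, so the newest valid copy is never
   touched. If power fails mid-write, the previous record survives. */
void record_write(uint16_t value)
{
    int target = (fram_slot[0].seq > fram_slot[1].seq) ? 1 : 0;
    int other  = target ^ 1;
    record_t r;

    r.seq      = fram_slot[other].seq + 1u;
    r.value    = value;
    r.checksum = compute_checksum(&r);
    fram_slot[target] = r;   /* still not atomic - but recoverable */
}

/* Return the newest slot whose checksum is intact; 1 on success. */
int record_read(uint16_t *value)
{
    int best = -1;
    uint32_t best_seq = 0;
    for (int i = 0; i < 2; i++) {
        if (fram_slot[i].checksum == compute_checksum(&fram_slot[i]) &&
            fram_slot[i].seq >= best_seq && fram_slot[i].seq != 0u) {
            best_seq = fram_slot[i].seq;
            best = i;
        }
    }
    if (best < 0) {
        return 0;
    }
    *value = fram_slot[best].value;
    return 1;
}
```

The recoverability here comes from the software scheme, not from the memory technology – which is exactly why the marketing claim rankles.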

So do I see any downsides to FRAM usage in microcontrollers? Not really. However, I do expect that it will reveal weaknesses in a lot of code (which is of course a good thing). I expect this will come about because today, when a system powers up, the contents of RAM are quasi-random. Code that relies on a location not holding a particular value at start-up thus has a high probability of working. With FRAM, however, that location will contain whatever you last wrote to it – with all that that implies. As a result, I expect people writing for FRAM systems will get religion in a hurry about data initialization. Anyway, once some parts are out, I hope to have a play with them. If I do, I’ll undoubtedly write about my experiences.
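The start-up discipline this implies can be sketched in a few lines (hypothetical names; the ‘FRAM’ variables are ordinary globals here, but on a real part they would persist across power cycles): check an explicit validity marker at boot rather than assuming anything about the memory’s contents.

```c
#include <stdint.h>

#define FRAM_MAGIC 0xC0FFEEu  /* arbitrary 'initialized' marker */

/* On a real FRAM device these would live at fixed non-volatile
   addresses; here they are ordinary globals for illustration. */
static uint32_t fram_magic;
static uint32_t boot_count;

/* Call once at start-up. Unlike SRAM, FRAM contents survive a power
   cycle, so the code must decide explicitly whether the stored data
   is valid - it cannot rely on the memory holding any value. */
void nv_init(void)
{
    if (fram_magic != FRAM_MAGIC) {
        /* First boot (or corrupted marker): initialize explicitly. */
        boot_count = 0;
        fram_magic = FRAM_MAGIC;
    }
    boot_count++;   /* persists across resets on an FRAM device */
}
```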

A ‘C’ Test: The 0x10 Best Questions for Would-be Embedded Programmers (reprised)

September 15th, 2009 by Nigel Jones

In May 2000 Embedded Systems Programming magazine (now Embedded Systems Design) published an article I had written entitled “A ‘C’ Test: The 0x10 Best Questions for Would-be Embedded Programmers.”

A revised version is posted at: A ‘C’ Test: The 0x10 Best Questions for Would-be Embedded Programmers.

I received a lot of mail about it at the time (including a decent amount of hate mail), and, much to my amazement, I continue to get mail about it to this day. The article has been shamelessly copied all over the web and its title is a popular search term that drives people to this blog.

Be aware that this test has been widely publicized, so be very suspicious of someone who does really well on it! To illustrate my point: when I wrote the article I was doing some work at a large company and was sharing an office with a fellow consultant, Nelson. Naturally I had Nelson proofread the article. Fast forward a few months, when Nelson went off to an interview with a new potential client. It so happens that the interview occurred on the same day that my article was published, and the interviewer proceeded to use it verbatim on Nelson. Nelson, of course, aced the test, leaving the interviewer astounded. Needless to say, we both found this very amusing! Alas, I’ve never had anyone in the intervening 9+ years hit me with it. Maybe next week…

If you would like a version of this test in Word format, or could use some expertise from an embedded systems consultant, please contact me.

Reader feedback

September 13th, 2009 by Nigel Jones

If I’m to believe the numbers for this blog, I’m getting both a large number of page views per day as well as a significant number of readers coming back on a regular basis to see what I have to say. While the page view statistics are nice, I actually value the returning reader far more than I do the one-time visitor who drops in looking for a solution to a particular problem. Thus I find myself in a bit of a quandary. While the page view statistics give me a very good idea about what is driving first time visitors to this site, I really don’t have a clue as to why anyone actually bothers to come back, or indeed what they are hoping to see on their next visit. Thus if you are a regular reader I’d be obliged if you could give me some feedback on what you (dis)like about this blog, and perhaps more importantly – what you’d like me to address in future postings. Feel free to use the comment section or to email me if you’d prefer your thoughts to be private. Thanks!

Observations on the relevance of C++ to embedded systems

September 10th, 2009 by Nigel Jones

My fellow blogger Mike Barr recently wrote an article entitled ‘Real men program in C’. Given that his blogs are cross-posted at embedded.com, it was soon picked up by reddit et al and the usual language wars started – with all that these wars usually entail. Personally I don’t get very worked up on this subject, so I didn’t participate. However, it did dovetail rather nicely with a conversation I had recently with Dan Saks. I had asked Dan for his thoughts on the difficulty (impossibility!) of inlining global functions in C. The conversation was interesting in its own right, but at the end Dan posed the question ‘Why don’t you program it in C++?’ (for the uninitiated, C++ allows you to quite nicely inline a class’s public functions). I’ll leave my response, and my thoughts on C++, for another day. However, it did get me thinking a lot about this issue.
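For the curious, the usual C-level workaround (my sketch, not Dan’s suggestion) is to give the small function internal linkage in a header, so that every translation unit that calls it can see the body and inline it. The function and constant names below are hypothetical:

```c
#include <stdint.h>

/* A function with external linkage normally compiles to a call
   across translation units: absent link-time optimization, the
   compiler cannot see its body at the call site. Defining it
   'static inline' in a header gives every including .c file its
   own visible copy to inline - roughly what C++ does for member
   functions defined inside the class. */

/* --- would live in timer.h --- */
static inline uint32_t ticks_to_ms(uint32_t ticks)
{
    return ticks / 32u;   /* assumes a hypothetical 32 ticks/ms clock */
}

/* --- any .c file including timer.h can now inline the call --- */
uint32_t elapsed_ms(uint32_t start_ticks, uint32_t end_ticks)
{
    return ticks_to_ms(end_ticks - start_ticks);
}
```

The cost of this approach is a potential copy of the code in every translation unit, which is precisely the kind of trade-off the C++ camp points to when advocating member functions instead.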

Now although I have many thoughts on this topic, the one I’d like to share with you today is my observation that there is an incredible dearth of example C++ code for embedded systems. What do I mean by this? Well, like most of you, I regularly download example code from vendors’ sites – and it’s nearly always written in C and not C++. I’d previously explained this away by assuming that it was because I do a lot of work in the 8/16-bit realm, and that smaller processors are more likely to be programmed in C than C++. However, yesterday I attended a seminar put on by TI. There were several things of interest in the seminar, including TI’s proprietary RF networking protocol SimpliciTI and also their recently acquired Cortex-M3 line from Luminary. The FAE encouraged us to look at the code that was available for both of these entities – and so I did.

What I found is that the SimpliciTI code is all written in C, as was all the Luminary code I looked at, including their impressive graphics library. Hmmmm, thought I – is this an aberration or is this the norm? For my next stop I went over to the Micrium website, where they offer a fine array of products including an RTOS, a variety of protocol stacks, a graphics library and so on. All the ones I looked at were written in C. Same story over at Segger. OK, thought I, what about the compiler vendors? A sampling of the code examples at the IAR and Keil websites (for their respective ARM product lines) showed them to be all in C. Finally I headed over to the Green Hills website to check out their enormous Networking and Communications product line. I chose half a dozen products at random. In all cases where the language was specified, it was ANSI C.

Is this a true random sample – of course not. However it does suggest to me that the industry hasn’t exactly embraced C++. Now it’s debatable whether the tool vendors and silicon suppliers should lead the industry or whether they should reflect reality. Regardless of your perspective on this, it’s clear to me that I’ll know C++ has been embraced by the embedded community only when the majority of the publicly available code is written in C++. Personally, if it hasn’t happened by now, I don’t think it’s going to.

Minimizing memory use in embedded systems tip#2 – Be completely consistent in your coding style

September 4th, 2009 by Nigel Jones

This is the second in a series of postings on how to minimize the memory consumption of an embedded system.

As the title suggests, you’ll often get a nice reduction in code size if you are completely consistent in your HLL coding style. To show how this works, it’s necessary to take a trip into assembly language.

When you write in assembly language you soon find that you perform the same series of instructions over and over again. For example, to add two numbers together, you might have pseudo assembly language code that looks something like this:

LD X, operand1 ; X points to operand 1
LD Y, operand2 ; Y points to operand 2
LD R0,X        ; Get operand 1
LD R1,Y        ; Get operand 2
ADD            ; R0 = R0 + R1
ST R0          ; Store the result from R0

After you have done this a few times, it becomes clear that the only thing that changes from use to use is the address of the operands. As a result, assembly language programmers would typically define a macro. The exact syntax varies from assembler to assembler, but it might look something like this:

MACRO ADD_BYTES(P1, P2)
LD X, P1  ; X points to parameter 1
LD Y, P2  ; Y points to parameter 2
LD R0,X   ; Get operand 1
LD R1,Y   ; Get operand 2
ADD       ; R0 = R0 + R1
ST R0     ; Store the result from R0
ENDM

Thereafter, whenever it is necessary to add two bytes together, one simply invokes the macro with the names of the operands of interest. However, after you have invoked the macro a few dozen times, it probably dawns on you that you are chewing up memory unnecessarily, and that you can save a lot by changing the macro to this:

MACRO ADD_BYTES(P1, P2)
LD X, P1  ; X points to parameter 1
LD Y, P2  ; Y points to parameter 2
CALL LDR0R1XY
ENDM

It is of course necessary to now define a subroutine ‘LDR0R1XY’ that looks like this:

LDR0R1XY:
LD R0,X  ; Get operand 1
LD R1,Y  ; Get operand 2
ADD      ; R0 = R0 + R1
ST R0    ; Store the result from R0
RET

Clearly this approach starts to save a few bytes per invocation, such that once one has used ADD_BYTES several times one achieves a net saving in memory usage. If one uses ADD_BYTES dozens of times then the savings can be substantial.

So how does this help if you are programming in a HLL? Well, decent compilers will do exactly the same optimization when told to perform full size optimization. However, in this case, the optimizer looks at all the code sequences generated by the compiler and identifies those code sequences that can be placed in a subroutine. A really good compiler will do this recursively in the sense that it will replace a code sequence with a subroutine call, and that subroutine call will in turn call another subroutine and so on. The results can be a dramatic reduction in code size – albeit at a potentially big increase in demand on the call stack.
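You can also apply the same factoring by hand at the C level: pull a repeated sequence into a helper function, trading a short call per use for a single shared body. A hypothetical sketch (the function names and the checksum scenario are mine, not from any particular project):

```c
#include <stdint.h>

uint8_t chk_a;
uint8_t chk_b;

/* Repeated pattern factored into one shared helper - the C-level
   analogue of the CALL LDR0R1XY trick in the assembly above. */
static uint8_t saturating_add(uint8_t a, uint8_t b)
{
    uint16_t sum = (uint16_t)a + b;
    return (sum > 0xFFu) ? 0xFFu : (uint8_t)sum;
}

void update_checksums(uint8_t x)
{
    /* Each call site costs a few bytes; the body exists once. */
    chk_a = saturating_add(chk_a, x);
    chk_b = saturating_add(chk_b, x);
}
```

The compiler optimization described above does the same thing automatically, and at the machine-code level, so it can factor sequences that never looked alike in the source.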

Now clearly in order to take maximal advantage of this compiler optimization, it’s essential that the compiler see the same code sequences over and over again. You can maximize the likelihood of this occurring by being completely consistent in your coding style. Some examples:

  • When making function calls, keep the parameter orders consistent. For example if you call a lot of functions with two parameters such as a uint8_t and a uint16_t, then ensure that all your functions declare the parameters in the same order.
  • If most of your variables are 16-bit, with just a handful being 8-bit, then you may find you get a code size reduction if you convert all your variables to 16 bits.
  • Don’t flip randomly between case statements and if-else-if chains.
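A hedged illustration of the first bullet (hypothetical function names): two unrelated setters that deliberately share the same parameter order and types, (uint8_t channel, uint16_t value). Identical calling sequences give a cross-call optimizer repeated instruction patterns that it can factor into one subroutine.

```c
#include <stdint.h>

uint16_t pwm_duty[4];
uint16_t dac_level[4];

/* Consistent signature: (uint8_t channel, uint16_t value). */
void pwm_set_duty(uint8_t channel, uint16_t duty)
{
    pwm_duty[channel & 3u] = duty;
}

/* Same parameter order and types as pwm_set_duty, even though the
   functions are unrelated - so every call site marshals its
   arguments into registers the same way. */
void dac_set_level(uint8_t channel, uint16_t level)
{
    dac_level[channel & 3u] = level;
}
```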

Notwithstanding the fact that being completely consistent can save you a lot of code space, I also think that code that is extremely consistent in its style has other merits as well, not the least of which is readability. As a final note, does anyone know the formal name for this type of optimization?
