Archive for the ‘Firmware Bugs’ Category

The Sad State of Embedded Software Process

Wednesday, September 15th, 2010 Michael Barr

Today VDC is sharing their 2010 Embedded System Engineering Survey Results.  I’m sad to say that the collective immaturity of the embedded software community continues to be on display.

Consider these two depressing statistics from the survey results:

  • Only 1 out of 5 embedded software developers is using a static analysis tool on their current project.  That’s about the same percentage as last year, so it’s not even like the trend is positive.  I find this unbelievable.  There are at least a dozen great static analysis tools on the market.  One of them, PC-Lint, costs just $389.  Yet 4 out of 5 firmware developers don’t use any such bug-killing tool!?!
  • Over 40% of embedded software developers either report following “no methodology” (27%) on their current project or say they “don’t know” which one they follow (15%).  Choosing and following a software development process is not brain surgery.  It can be as straightforward as the waterfall model (design before you code, code before you test) or as adaptive as agile methods.  But please pick something and follow it.  How do 4 out of 10 firmware developers even manage without one!?!

You can’t make this kind of stuff up!

It is 2010, my friends.  The first embedded system was designed almost 40 years ago.  We need to get our act together.

Check out my Firmware Code of Ethics, get yourself a static analysis tool, and start following a development process.

3 Things Every Programmer Should Know About RMA

Wednesday, August 18th, 2010 Michael Barr

This post was originally posted in the wrong blog.  I’m reposting it here.

Real-time systems design and RMA go together like peanut butter and jelly.  So why is it that wherever I go in the embedded community, engineers are developing real-time systems without applying RMA?  This is a dangerous situation, but one that is easily remedied by ensuring every programmer knows three things about RMA.

In case you are entirely unfamiliar with RMA, there’s a handy primer on the technique at http://www.netrino.com/Embedded-Systems/How-To/RMA-Rate-Monotonic-Algorithm/. I’ve tried to write this blog post in a way that you can read that before or after, at your option.

#1: RMA is Not Just for Academics

You have probably heard of RMA.  Maybe you can even expand the acronym.   Maybe you also know that the theoretical underpinnings of RMA were developed largely at Carnegie Mellon University’s Software Engineering Institute and/or that the technique has been known for about three decades.

If, however, you are like the vast majority of the thousands of firmware engineers I have communicated with on the subject during my years as a writer/editor, consultant, and trainer, you probably think RMA is just for academics.  I also thought that way years ago—but here’s the straight dope:

  • All of the popular commercial real-time operating systems (e.g., VxWorks, ThreadX, and MicroC/OS) are built upon fixed-priority preemptive schedulers.[1]
  • RMA is the optimal method of assigning fixed priorities to RTOS tasks.  That is to say that if a set of tasks cannot be scheduled using RMA, it can’t be scheduled using any fixed-priority algorithm.
  • RMA provides convenient rules of thumb regarding the percentage of CPU you can safely use while still meeting all deadlines.  If you don’t use RMA to assign priorities to your tasks, there is no rule of thumb that will ensure all of their deadlines will be met.[2]

A key feature of RMA is the ability to prove a priori that a given set of tasks will always meet its deadlines—even during periods of transient overload.  Dynamic-priority operating systems cannot make this guarantee.  Nor can fixed-priority RTOSes running tasks prioritized in other ways.

Too many of today’s real-time systems built with an RTOS are working by luck. Excess processing power may be masking design and analysis sins or the worst-case just hasn’t happened—yet.

Bottom line: You’re playing with fire if you don’t use RMA to assign priorities to critical tasks; it might be just a matter of time before your product’s users get burned.[3]

#2: RMA Need Not Be Applied to Every Task

As any programmer who’s already put RMA into practice will tell you, the hardest part of the analysis phase is establishing an upper bound for the worst-case execution time of each task. The CPU utilization of each task is computed as the ratio of its worst-case execution time to its worst-case period.[4]

There are three ways to place an upper bound on execution time: (1) by measuring the actual execution time during the tested worst-case scenario; (2) by performing a top-down analysis of the code in combination with a cycle-counter; or (3) by making an educated guess based on big-O notation.  I call these alternatives measuring, analyzing, and budgeting, respectively, and note that the decision of which to use involves tradeoffs of precision vs. level of effort. Measurement can be extremely precise, but requires the ability to instrument and test the actual working code—which must be remeasured after every code change.  Budgeting is easiest and can be performed even at the beginning of the project, but it is necessarily imprecise (in the conservative direction of requiring more CPU bandwidth than is actually required).
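For the measurement approach, a common pattern is to keep a running high-water mark of each task’s observed execution time. The sketch below assumes a hypothetical cycle_count() platform hook (on an ARM Cortex-M part this might read the DWT cycle counter); here it is a stand-in variable so the logic can be exercised anywhere. The names wcet_enter and wcet_exit are mine.

```c
#include <stdint.h>

/* Hypothetical platform hook: a free-running cycle counter. On real
 * hardware this would read a timer register; here it's a variable
 * the caller (or a test) can drive. */
static uint32_t fake_cycles;
static uint32_t cycle_count(void) { return fake_cycles; }

/* Per-task record of the worst execution time observed so far. */
typedef struct {
    uint32_t start;
    uint32_t worst;   /* high-water mark, in cycles */
} wcet_t;

static void wcet_enter(wcet_t *w) { w->start = cycle_count(); }

static void wcet_exit(wcet_t *w)
{
    /* Unsigned subtraction handles counter wraparound correctly. */
    uint32_t elapsed = cycle_count() - w->start;
    if (elapsed > w->worst)
        w->worst = elapsed;
}
```

Remember the caveat from the text: a measured figure is only as good as the worst case you managed to provoke, and it must be re-measured after every code change.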

But there is at least some good news about the analysis.  RMA need not be performed across the entire set of tasks in the system.  It is possible to define a smaller (often much smaller, in my experience) critical set of tasks on which RMA needs to be performed, with the remaining non-critical tasks simply assigned lower priorities.

This critical set of tasks should contain all of the tasks with deadlines that can’t be missed or else.  In addition, it should contain any other tasks with which the critical tasks share mutexes, or on which they depend for timely semaphore or message queue posts.  Every other task is considered non-critical.

RMA can be meaningfully applied to the critical set tasks only, so long as we ensure that all of the non-critical tasks have priorities below the entire critical set.  We then need only determine worst-case periods and worst-case execution times for the critical set.  Furthermore, we need only follow the rate monotonic algorithm for assignment of priorities within the critical set.
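The rate monotonic algorithm itself is just “shorter worst-case period gets higher priority.” As a sketch (the type and function names are mine, not any RTOS API), assigning priorities within the critical set amounts to a sort by period:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    const char *name;
    uint32_t    period_us;  /* worst-case period */
    uint8_t     priority;   /* filled in below: 1 = highest */
} task_info_t;

/* Rate monotonic assignment: sort the critical set by ascending
 * worst-case period, then number priorities in that order. A simple
 * insertion sort is plenty for the small task sets typical here. */
void rma_assign_priorities(task_info_t tasks[], size_t n)
{
    for (size_t i = 1; i < n; i++) {
        task_info_t key = tasks[i];
        size_t j = i;
        while (j > 0 && tasks[j - 1].period_us > key.period_us) {
            tasks[j] = tasks[j - 1];
            j--;
        }
        tasks[j] = key;
    }
    for (size_t i = 0; i < n; i++)
        tasks[i].priority = (uint8_t)(i + 1);
}
```

Non-critical tasks simply get priorities numerically below (i.e., less urgent than) everything assigned here, in any order you like.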

Bottom line: Anything goes at lower priorities where there are no deadlines.

#3: RMA Applies to Interrupt Service Routines Too

With few exceptions, books, articles, and papers that mention RMA describe it as a technique for prioritizing the tasks on a preemptive fixed-priority operating system.  But the technique is also essential for correctly prioritizing interrupt handlers.

Indeed, even if you have designed a real-time system that consists only of interrupt service routines (plus a do-nothing background loop in main), you should use the rate monotonic algorithm to prioritize them with respect to their worst-case frequency of occurrence.  Then you can use rate monotonic analysis to prove that they will all meet their real-time deadlines even during transient overload.

Furthermore, if you have a set of critical tasks in addition to interrupt service routines, the prioritization and analysis associated with RMA need to be performed across the entire set of those entities.[5] This can be complicated, as there is an arbitrary “priority boundary” imposed by the CPU hardware: even the lowest-priority ISR is deemed more important than the highest-priority task.

For example, consider the conflict in the set of ISRs and tasks in Table 1.  RMA dictates that the priority of Task A should be higher than the priority of the ISR, because Task A can occur more frequently.  But the hardware demands otherwise, by limiting our ability to move ISRs down in priority.  If we leave things as they are, we cannot simply sum the CPU utilization of this set of entities to see if they are below the schedulable bound for four entities.

Runnable Entity   Priority by RMA   Worst-Case Execution Time   Worst-Case Period   CPU Utilization
ISR               2                 500 us                      10 ms               5%
Task A            1                 750 us                      3 ms                25%
Task C            3                 300 us                      30 ms               1%
Task B            4                 8 ms                        40 ms               20%

Table 1.  A Misprioritized Interrupt Handler

So what should we do in a conflicted scenario like this?  There are two options.  Either we change the program’s structure, moving the ISR’s work into a polling loop that runs as a 10 ms task at priority 2, in which case total utilization is 51%.  Or we treat the ISR, for purposes of proof via rate monotonic analysis anyway, as though it actually has a worst-case period of 3 ms.  In the latter option, the ISR has an appropriate top priority by RMA, but the CPU bandwidth attributed to the ISR rises from 5% to 16.7%, bringing the new total up to 62.7%.  Either way, the full set is provably schedulable.[6]
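The arithmetic behind those two figures is easy to check. This small sketch (function names are mine) sums the per-entity utilizations from Table 1 under each option:

```c
/* Utilization of one runnable entity. */
static double util(double exec_us, double period_us)
{
    return exec_us / period_us;
}

/* Option 1: ISR work moves into a 10 ms polling task at priority 2,
 * so its utilization stays 500 us / 10 ms. */
double option_polling(void)
{
    return util(500, 10000)    /* polled ISR work: 5%  */
         + util(750, 3000)     /* Task A:         25%  */
         + util(300, 30000)    /* Task C:          1%  */
         + util(8000, 40000);  /* Task B:         20%  */
}

/* Option 2: analyze the ISR as if its period were 3 ms, so that its
 * hardware-forced top priority is consistent with RMA. */
double option_pessimistic_isr(void)
{
    return util(500, 3000)     /* ISR, pessimized: 16.7% */
         + util(750, 3000)
         + util(300, 30000)
         + util(8000, 40000);
}
```

Both totals (51% and about 62.7%) fall under the four-entity bound of roughly 75.7%, which is why either resolution is provably schedulable.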

Bottom line: Interrupt handlers must be considered part of the critical set, with RMA used to prioritize them in relation to the tasks they might steal the CPU away from.

Conclusion

Every programmer should know three key things about RMA.  First, RMA is a technique that should be used to analyze any preemptive system with deadlines; it is not just for academics after all.  Second, the amount of effort involved in RMA analysis can be reduced by ignoring tasks outside the critical set; non-critical tasks can be assigned an arbitrary pattern of lower priorities and need not be analyzed.  Finally, if interrupts can preempt critical set tasks or even just each other, RMA should be used to analyze those too.


[1] Schedulers that tweak task priorities dynamically, as desktop flavors of Windows and Linux do, may miss deadlines indiscriminately during transient overload periods.   They should thus not be used in the design of safety-critical real-time systems.

[2] For example, it is widely rumored that a system less than 50% loaded will always meet its deadlines.  Unfortunately, no such rule of thumb is correct.  By contrast, when you do use RMA there is a simple rule-of-thumb bound, ranging from a high of 82.8% for 2 tasks down toward ln 2 ≈ 69.3% as the number of tasks grows large.

[3] Perhaps your failure to use RMA to prioritize tasks and prove they’ll meet deadlines explains one or more of those “glitches” your customers have been complaining about?

[4] Establishing the worst-case period of a task is both easier and more stable.

[5] Note this is necessary even if one or more of the interrupts doesn’t have a real-time deadline of its own.  That’s because the interrupts may occur during the transient overload and thus prevent one or more critical set tasks from meeting its real-time deadline.

[6] However, switching the code to use polling actually consumes cycles that are only reserved for the worst-case in the other solution.  That could mean failing to find CPU time for low priority non-critical tasks in the average case.

Firmware-Specific Bug #5: Heap Fragmentation

Monday, March 15th, 2010 Michael Barr

Dynamic memory allocation is not widely used by embedded software developers—and for good reasons. One of those is the problem of fragmentation of the heap.

All data structures created via C’s malloc() standard library routine or C++’s new keyword live on the heap. The heap is a specific area in RAM of a pre-determined maximum size. Initially, each allocation from the heap reduces the amount of remaining “free” space by the same number of bytes. For example, the heap in a particular system might span 10 KB starting from address 0x20200000. An allocation of a pair of 4-KB data structures would leave 2 KB of free space.

The storage for data structures that are no longer needed can be returned to the heap by a call to free() or use of the delete keyword. In theory this makes that storage space available for reuse during subsequent allocations. But the order of allocations and deletions is generally at least pseudo-random—leading the heap to become a mess of smaller fragments.

To see how fragmentation can be a problem, consider what would happen if the first of the above 4-KB data structures is freed. Now the heap consists of one 4-KB free chunk and one 2-KB free chunk; because they are not adjacent, they cannot be combined. So our heap is already fragmented. Despite 6 KB of total free space, any allocation of more than 4 KB will fail.
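A toy model makes the point explicit: total free space and the largest satisfiable request are different numbers once the heap is fragmented. The chunk sizes below are taken from the scenario above; the function names are mine.

```c
#include <stddef.h>

/* The fragmented heap from the text: two non-adjacent free chunks
 * left after freeing the first 4-KB structure. */
static const size_t free_chunks[] = { 4096, 2048 };
static const size_t num_chunks =
    sizeof(free_chunks) / sizeof(free_chunks[0]);

/* Total free space: the sum of all free chunks. */
size_t heap_total_free(void)
{
    size_t total = 0;
    for (size_t i = 0; i < num_chunks; i++)
        total += free_chunks[i];
    return total;
}

/* A request succeeds only if some single chunk can hold it. */
int heap_can_alloc(size_t request)
{
    for (size_t i = 0; i < num_chunks; i++)
        if (free_chunks[i] >= request)
            return 1;
    return 0;
}
```

Here heap_total_free() reports 6 KB, yet heap_can_alloc() fails for anything over 4 KB.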

Fragmentation is similar to entropy: both increase over time. In a long running system (i.e., most every embedded system ever created), fragmentation may eventually cause some allocation requests to fail. And what then? How should your firmware handle the case of a failed heap allocation request?

Best Practice: Avoiding all use of the heap is a sure way of preventing this bug. But if dynamic memory allocation is either necessary or convenient in your system, there is an alternative way of structuring the heap that will prevent fragmentation. The key observation is that the problem is caused by variable-sized requests.

If all of the requests were of the same size, then any free block would be as good as any other, even if it happens not to be adjacent to any of the other free blocks. Thus it is possible to use multiple “heaps”, each serving allocation requests of a single fixed size, by way of a “memory pool” data structure.

If you like you can write your own fixed-sized memory pool API. You’ll just need three functions:

  • handle = pool_create(block_size, num_blocks) – to create a new pool (of size M chunks by N bytes);
  • p_block = pool_alloc(handle) – to allocate one chunk (from a specified pool); and
  • pool_free(handle, p_block) – to return a chunk (to the specified pool).

But note that many real-time operating systems (RTOSes) feature a fixed-size memory pool API. If you have access to one of those, use it instead of the compiler’s malloc() and free() or your own implementation.
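For the do-it-yourself route, here is a minimal sketch of one way to implement the three-function API above. It threads the free list through the free blocks themselves, so no separate bookkeeping array is needed; a real RTOS pool would add interrupt-safe locking and error checking on free.

```c
#include <stdlib.h>
#include <stddef.h>

typedef struct pool {
    void *free_list;   /* head of a chain threaded through free blocks */
} pool_t;

typedef pool_t *pool_handle_t;

/* Create a pool of num_blocks chunks, each block_size bytes. The
 * backing memory is never released: in an embedded system, pools
 * typically live for the life of the product. */
pool_handle_t pool_create(size_t block_size, size_t num_blocks)
{
    if (block_size < sizeof(void *))
        block_size = sizeof(void *);    /* room for the free-list link */

    pool_t *pool = malloc(sizeof(*pool));
    char *mem = malloc(block_size * num_blocks);
    if (!pool || !mem) {
        free(pool);
        free(mem);
        return NULL;
    }

    /* Push every block onto the free list. */
    pool->free_list = NULL;
    for (size_t i = 0; i < num_blocks; i++) {
        void *block = mem + i * block_size;
        *(void **)block = pool->free_list;
        pool->free_list = block;
    }
    return pool;
}

/* Pop one chunk, or return NULL if the pool is exhausted. */
void *pool_alloc(pool_handle_t pool)
{
    void *block = pool->free_list;
    if (block)
        pool->free_list = *(void **)block;
    return block;
}

/* Push a chunk back onto its pool's free list. */
void pool_free(pool_handle_t pool, void *block)
{
    *(void **)block = pool->free_list;
    pool->free_list = block;
}
```

Because every chunk in a given pool is the same size, any free chunk satisfies any request to that pool, and fragmentation simply cannot occur.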


Firmware-Specific Bug #4: Stack Overflow

Thursday, March 11th, 2010 Michael Barr

Every programmer knows that a stack overflow is a Very Bad Thing™. The effect of each stack overflow varies, though. The nature of the damage and the timing of the misbehavior depend entirely on which data or instructions are clobbered and how they are used. Importantly, the length of time between a stack overflow and its negative effects on the system depends on how long it is before the clobbered bits are used.

Unfortunately, stack overflow afflicts embedded systems far more often than it does desktop computers. This is for several reasons, including:

  1. embedded systems usually have to get by on a smaller amount of RAM;
  2. there is typically no virtual memory to fall back on (because there is no disk);
  3. firmware designs based on RTOS tasks utilize multiple stacks (one per task), each of which must be sized large enough to hold that task’s unique worst-case stack depth;
  4. and interrupt handlers may try to use those same stacks.

Further complicating this issue, no amount of testing can ensure that a particular stack is sufficiently large. You can test your system under all sorts of loading conditions, but you can only test it for so long. A stack overflow that only occurs “once in a blue moon” may not be witnessed by tests that run for only “half a blue moon.” Provided certain algorithmic limitations are observed (such as no recursion), it is possible to demonstrate that a stack overflow will never occur via a top-down analysis of the code’s control flow. But that analysis must be redone every time the code is changed.

Best Practice: On startup, paint an unlikely memory pattern throughout the stack(s). (I like to use hex 23 3D 3D 23, which looks like a fence ‘#==#’ in an ASCII memory dump.) At runtime, have a supervisor task periodically check that none of the paint above some pre-established high water mark has been changed. If something is found to be amiss with a stack, log the specific error (e.g., which stack and how high the flood) in non-volatile memory and do something safe for users of the product (e.g., controlled shut down or reset) before a true overflow can occur. This is a nice additional safety feature to add to the watchdog task.
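The paint-and-check practice above can be sketched in a few lines of portable C. This assumes a descending stack whose far end is at low addresses; the function names are mine, and a real implementation would get the stack bounds from the linker or the RTOS and run the check from the watchdog/supervisor task.

```c
#include <stdint.h>
#include <stddef.h>

#define STACK_PAINT 0x233D3D23u   /* '#==#' in an ASCII memory dump */

/* On startup: fill the whole stack area with the paint pattern.
 * 'words' points at the far (deepest) end of the stack region. */
void stack_paint(uint32_t *words, size_t num_words)
{
    for (size_t i = 0; i < num_words; i++)
        words[i] = STACK_PAINT;
}

/* Called periodically by a supervisor task: count how many paint
 * words at the deep end are still intact. If this falls below the
 * pre-established high-water mark, log the error in non-volatile
 * memory and fail safe before a true overflow can occur. */
size_t stack_headroom(const uint32_t *words, size_t num_words)
{
    size_t i = 0;
    while (i < num_words && words[i] == STACK_PAINT)
        i++;
    return i;   /* untouched paint words remaining */
}
```

A headroom reading of zero means the paint has been consumed all the way to the end; the stack may already have overflowed, so the threshold check should fire well before that.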


The Challenge of Debugging Cache Coherency Problems

Friday, February 19th, 2010 Michael Barr

The following is an example of a cache-related embedded software bug that is a real challenge to solve for several reasons, not the least of which is the fact that the actual problem was masked in the debugger’s view of memory.

One nasty bug that came up recently for us was the realization that we were not flushing the instruction cache after leaving the bootloader, which had a very confusing effect when running our application. In our design, our code pretty much runs out of flash. Our bootloader is in the lowest part of flash and our two images sit in their own higher memory ranges of flash. So we never realized we should do this.

Well, we had to copy a small piece of code into RAM for the purpose of allowing firmware upgrades to be written to flash. This piece of code would be executing when the actual erases and writes took place (i.e. we couldn’t execute from AND write to flash at the same time). This code would get copied out of flash both when the bootloader started execution AND when the image would start execution because they shared the startup code that we inherited from a board development kit (BDK).

Another thing we didn’t realize was that the RAM code was compiled and optimized differently for the bootloader image and the application images. The end result was that the instruction cache would, in certain cases, have a hit and return the wrong instructions to us. For instance, when we tried to perform an upgrade while running from our image, it would erase a completely different area of flash than we intended. To make things somewhat more confusing, it did NOT help to step through the code using the debugger. The debugger was not showing us that the instruction cache was providing different lines of code than the lines of source it was showing.

This was ultimately one of the more frustrating bugs we have chased recently. Imagine the confusion when sometimes a firmware upgrade would work fine and other times it would completely brick your board (they could be salvaged with a JTAG programmer at least).

Thanks to Richard von Lehe of Starkey Labs for sharing this.