
Firmware-Specific Bug #2: Non-Reentrant Function

Monday, February 15th, 2010 Michael Barr

Technically, the problem of a non-reentrant function is a special case of the problem of a race condition.  For that reason, the run-time errors caused by a non-reentrant function are similar and likewise don’t occur in a reproducible way—making them just as hard to debug.  Unfortunately, a non-reentrant function is also more difficult to spot in a code review than other types of race conditions.

The figure below shows a typical scenario.  Here the software entities subject to preemption are RTOS tasks.  But rather than manipulating a shared object directly, they do so by way of function call indirection.  For example, suppose that Task A calls a sockets-layer protocol function, which calls a TCP-layer protocol function, which calls an IP-layer protocol function, which calls an Ethernet driver.  In order for the system to behave reliably, all of these functions must be reentrant.

But the functions of the driver module manipulate the same global object in the form of the registers of the Ethernet Controller chip.  If preemption is permitted during these register manipulations, Task B may preempt Task A after the Packet A data has been queued but before the transmit is begun.  Then Task B calls the sockets-layer function, which calls the TCP-layer function, which calls the IP-layer function, which calls the Ethernet driver, which queues and transmits Packet B.  When control of the CPU returns to Task A, it finally requests its transmission.  Depending on the design of the Ethernet controller chip, this may either retransmit Packet B or generate an error.  Either way, Packet A’s data is lost and does not go out onto the network.

In order for the functions of this Ethernet driver to be callable from multiple RTOS tasks (near-)simultaneously, those functions must be made reentrant.  If each function uses only stack variables, there is nothing to do; each RTOS task has its own private stack.  But drivers and some other functions will be non-reentrant unless carefully designed.

The key to making functions reentrant is to suspend preemption around all accesses of peripheral registers, global variables (including static local variables), persistent heap objects, and shared memory areas.  This can be done either by disabling one or more interrupts or by acquiring and releasing a mutex; the specifics of the type of shared data usually dictate the best solution.

Best Practice: Create and hide a mutex within each library or driver module that is not intrinsically reentrant.  Make acquisition of this mutex a pre-condition for the manipulation of any persistent data or shared registers used within the module as a whole.  For example, the same mutex may be used to prevent race conditions involving both the Ethernet controller registers and a global (or static local) packet counter.  All functions in the module that access this data must follow the protocol of acquiring the mutex before manipulating these objects.
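As a sketch of this best practice—using POSIX threads to stand in for an RTOS mutex API, and with invented register and counter names—a driver module might hide its mutex like this:

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real Ethernet controller register and
 * the module's packet counter; the names are illustrative only. */
static uint32_t fake_tx_reg;
static uint32_t g_packet_count;

/* The mutex is hidden inside the driver module (file-scope static), so
 * callers can only reach the shared state through functions that
 * acquire it first. */
static pthread_mutex_t eth_mutex = PTHREAD_MUTEX_INITIALIZER;

void eth_send(uint32_t packet_data)
{
    pthread_mutex_lock(&eth_mutex);   /* pre-condition for shared access */
    fake_tx_reg = packet_data;        /* queue the packet (register write) */
    g_packet_count++;                 /* the same mutex guards the counter */
    pthread_mutex_unlock(&eth_mutex);
}

uint32_t eth_packet_count(void)
{
    pthread_mutex_lock(&eth_mutex);
    uint32_t n = g_packet_count;
    pthread_mutex_unlock(&eth_mutex);
    return n;
}
```

On a real RTOS the pthread calls would be replaced by the kernel’s own mutex primitives, but the pattern is the same: because the mutex is file-scope static, no caller can manipulate the registers or the counter without going through the protocol.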

Beware that non-reentrant functions may come into your code base as part of third party middleware, legacy code, or device drivers.  Disturbingly, non-reentrant functions may even be part of the standard C or C++ library provided with your compiler.  For example, if you are using the GNU compiler to build RTOS-based applications, take note that you should be using the reentrant “newlib” standard C library rather than the default.


Firmware-Specific Bug #1: Race Condition

Thursday, February 11th, 2010 Michael Barr

A race condition is any situation in which the combined outcome of two or more threads of execution (which can be either RTOS tasks or main() plus an ISR) varies depending on the precise order in which the instructions of each are interleaved.

For example, suppose you have two threads of execution in which one regularly increments a global variable (g_counter += 1;) and the other occasionally resets it (g_counter = 0;). There is a race condition here if the increment cannot always be executed atomically (i.e., in a single instruction cycle). A collision between the two updates of the counter variable may never or only very rarely occur. But when it does, the counter will not actually be reset in memory; its value is henceforth corrupt. The effect of this may have serious consequences for the system, though perhaps not until a long time after the actual collision.
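To see why the increment is dangerous, it helps to expand `g_counter += 1;` into the load/modify/store sequence most CPUs actually execute. The sketch below replays the unlucky interleaving deterministically in a single thread (no real concurrency is involved; the preemption point is simulated by hand):

```c
/* A deterministic replay of the collision described above.  The
 * increment g_counter += 1 is expanded into its three machine-level
 * steps (load, modify, store), with the other thread's reset landing
 * in the middle. */
static volatile int g_counter;

int simulate_lost_reset(void)
{
    g_counter = 5;

    /* Thread 1 begins g_counter += 1; */
    int temp = g_counter;   /* load:   temp = 5 */
    temp = temp + 1;        /* modify: temp = 6 */

    g_counter = 0;          /* Thread 2 preempts and resets the counter */

    g_counter = temp;       /* store: silently overwrites the reset */

    return g_counter;       /* 6, not 0 -- the reset never took effect */
}
```

The final value is 6 rather than 0: the reset happened, but its effect vanished when the store completed the interrupted increment.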

Best Practice: Race conditions can be prevented by surrounding the “critical sections” of code that must be executed atomically with an appropriate preemption-limiting pair of behaviors. To prevent a race condition involving an ISR, at least one interrupt signal must be disabled for the duration of the other code’s critical section. In the case of a race between RTOS tasks, the best practice is the creation of a mutex specific to that shared object, which each task must acquire before entering the critical section. Note that it is not a good idea to rely on the capabilities of a specific CPU to ensure atomicity, as that only prevents the race condition until a change of compiler or CPU.
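A minimal sketch of the ISR case follows, with hypothetical `DISABLE_INTERRUPTS()`/`ENABLE_INTERRUPTS()` macros. Here they are simulated with a flag so the pattern can run on a host PC; on real hardware they would map to the MCU’s global interrupt enable/disable instructions.

```c
#include <stdint.h>

/* Hypothetical interrupt-control macros, simulated for illustration. */
static int interrupts_enabled = 1;
#define DISABLE_INTERRUPTS()  (interrupts_enabled = 0)
#define ENABLE_INTERRUPTS()   (interrupts_enabled = 1)

/* Shared with an ISR, so every read-modify-write in main-line code
 * must be wrapped in a critical section. */
static volatile uint32_t g_counter;

void counter_increment(void)
{
    DISABLE_INTERRUPTS();   /* enter critical section */
    g_counter += 1;         /* now atomic with respect to the ISR */
    ENABLE_INTERRUPTS();    /* exit critical section */
}
```

Note that the critical section should be kept as short as possible: interrupts remain masked for its entire duration, which adds latency to every interrupt in the system.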

Shared data and the random timing of preemption are the culprits that cause race conditions. But the error might not always occur, making the tracking of such bugs from symptoms to root causes incredibly difficult. It is, therefore, important to be ever-vigilant about protecting all shared objects.

Best Practice: Name all potentially shared objects—including global variables, heap objects, or peripheral registers and pointers to the same—in a way that the risk is immediately obvious to every future reader of the code. Netrino’s Embedded C Coding Standard advocates the use of a ‘g_’ prefix for this purpose.

Locating all potentially shared objects is the first step in a code audit for race conditions.


Embedded Software is the Future of Product Quality and Safety

Monday, February 8th, 2010 Michael Barr

Last year a friend had a St. Jude pacemaker attached to his heart. When he reported an unexpected low battery reading (displayed on an associated digital watch) to his doctor a month later, he learned this was the result of a firmware bug known to the manufacturer. The battery was fine and would last on the order of a decade more. His new-model pacemaker’s firmware didn’t include a bug fix that had been provided the year before to wearers of the old model.

Another friend owns a Land Rover LR2 SUV with back-up sensors. When the car is in reverse and nearing an obstacle or another car, the driver is alerted via a beeping sound. Except that the back-up sensors don’t always work. Some “reboots” of the SUV don’t seem to have this feature enabled. He suspects there is a “race condition” during the software startup sequence.

Yet another friend has driven a Toyota Prius hybrid over 100,000 miles. He reports that the brakes very occasionally have an odd/different feel. But his older model Prius is not expected to be subject to the 2010 model year recall.

These are just a few of the personal anecdotes behind the headlines. Embedded software is everywhere now, with over 4 billion new devices manufactured each year. Increasingly, the quality and safety of products are a side effect of the quality and safety of the software embedded inside.

One important question is, can we trust future software updates any more than we can trust the existing firmware? How do we know that the Toyota Prius hybrids with upgraded braking firmware will be safer than those with the factory firmware?

Is Toyota’s Accelerator Problem Caused by Embedded Software Bugs?

Thursday, January 28th, 2010 Michael Barr

Last month I received an interesting e-mail in response to a column I wrote for Embedded Systems Design called The Lawyers are Coming! My column was partly about the poor state of embedded software quality across all industries, and my correspondent was writing to say my observations were accurate from his perch within the automotive industry. Included in his e-mail was this interesting tidbit:

I read something about the big Toyota recall being related to floor mats interfering with the accelerator, but I was told that the problem appears to be software (firmware) for the control-by-wire pedal.  Me thinks somebody probably forgot to check ranges, overflows, or stability properly when implementing the “algorithm”.

As background for those of you who have been working in SCIFs or other labs, the “big Toyota recall” was first announced in September 2009. It was said to concern removable floor mats causing the accelerator pedal to be pressed down. Some 3.8 million Toyota and Lexus vehicles were involved and owners were told to remove floor mats immediately.

This week several related major news events have transpired.

But none of the articles I’ve read have talked about software being a cause. And it’s not clear if the affected models are drive-by-wire. However, at least one article I read yesterday suggested that one fix being worked on is a software interlock to ensure that if both the brake and the gas pedal are depressed, the brake will override the accelerator. On the one hand, that seems to mean that software is already in the middle; on the other, I would be extremely surprised to learn that such an interlock wasn’t already present in a drive-by-wire system.

So what’s the story? Are embedded software bugs to blame for this massive recall? Do you know? Have you found any helpful articles pointing at software problems? Please share what you know in the comments below, or e-mail me privately.

Rate Monotonic Analysis and Round Robin Scheduling

Friday, January 22nd, 2010 Michael Barr

Rate Monotonic Analysis (RMA) is a way of proving a priori via mathematics (rather than post-implementation via testing) that a set of tasks and interrupt service routines (ISRs) will always meet their deadlines–even under worst-case timing.  In this blog, I address the issue of what to do if two or more tasks or ISRs have equal priority and whether round robin scheduling is necessary in an RTOS to deal with that special case.

First a little background.  In order for the schedulability analysis portion of the RMA mathematics to provide meaningful results, the following assumptions must hold:

Under RMA, the relative priorities are assigned according to a simple rule: “The more often a task or ISR runs (in the worst case), the higher its priority.” Put another way, the task or ISR with the longest period between iterations (interarrival time, if you prefer) is least important. This is because an infrequent task, if given high priority, could delay a more frequent task long enough to make it miss an entire iteration.

So what happens if you are using RMA to assign priorities and you wind up with two (or more) tasks or ISRs assigned equal priority? (Translation: they have the same worst-case interarrival times). Must they be assigned equal priority in the real system? What if the RTOS (in the case of tasks) or hardware (in the case of interrupts) doesn’t support round-robin scheduling–or even equal priorities with run-to-completion?

Interestingly, it turns out not to matter a bit whether you:

  1. Merge the two tasks into one (i.e., execute the code for Task A and then Task B).
  2. Give them equal priority, either with round robin or run-to-completion behavior.
  3. Give them adjacent unequal priorities (in either relative order).

If you run through the timing diagrams for each of the above scenarios, you’ll see that all three are equivalent, except that the equal-priority round-robin option potentially suffers a performance impact from unnecessary additional context switches.