Archive for the ‘Firmware Bugs’ Category

RTOS considered harmful

Monday, April 12th, 2010 Miro Samek

I have to confess that I’ve been experiencing a severe case of writer’s block lately. It’s not that I’m short of subjects to talk about, but I’m getting tired of circling around the issue that matters most to me and should matter most to any embedded software developer: the basic structure of the software.

Unfortunately, I find it impossible to talk about the truly important issues without stepping on somebody’s toes, which means picking a fight. So in this installment I decided to come out of the closet and say it openly: I consider RTOSes harmful, because they are a ticking time bomb.

The main reason I say so is that a conventional RTOS implies a certain programming paradigm, which leads to particularly brittle designs. I’m talking about blocking. Blocking occurs any time you wait explicitly in-line for something to happen. All RTOSes provide an assortment of blocking mechanisms, such as various semaphores, event flags, mailboxes, message queues, and so on. Every RTOS task, structured as an endless loop, must use at least one such blocking mechanism, or else it will consume all the CPU cycles. Typically, however, tasks block in many places scattered throughout the various functions called from the task routine (the endless loop). For example, a task can block and wait for a semaphore that indicates the end of an ADC conversion. In another part of the code, the same task might wait for a timeout event flag, and so on.
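
To make the pattern concrete, here is a minimal sketch of such a blocking task in C. The RTOS calls (rtos_sem_wait(), rtos_flag_wait()) and the driver functions are hypothetical stand-ins, not any particular RTOS API:

    #include <stdint.h>

    /* Hypothetical RTOS and driver API -- stand-ins, not a real RTOS. */
    typedef struct rtos_sem  rtos_sem_t;
    typedef struct rtos_flag rtos_flag_t;
    extern void rtos_sem_wait(rtos_sem_t *sem);                    /* blocks the caller */
    extern void rtos_flag_wait(rtos_flag_t *flag, uint32_t ticks); /* blocks the caller */
    extern void adc_start(void);
    extern int  adc_read(void);
    extern void process_sample(int sample);

    extern rtos_sem_t  *adc_done_sem;   /* signaled from the ADC ISR */
    extern rtos_flag_t *timeout_flag;   /* set by a timer            */

    void sensor_task(void *arg)         /* typical RTOS task: an endless loop */
    {
        (void)arg;
        for (;;) {
            adc_start();                          /* kick off a conversion       */
            rtos_sem_wait(adc_done_sem);          /* BLOCK until the ISR signals */
            process_sample(adc_read());

            /* BLOCK again, deeper in the flow; while waiting here the task
             * cannot react to any other event in the system.                */
            rtos_flag_wait(timeout_flag, 100U);
        }
    }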

Blocking is insidious, because it appears to work initially, but it quickly degenerates into an unmanageable mess. The problem is that while a task is blocked, it is not doing any other work and is not responsive to other events. Such a task cannot be easily extended to handle other events, not just because the system is unresponsive, but also because the whole structure of the code past the blocking call is designed to handle only the one event it was explicitly waiting for.

You might think that the difficulty of adding new features (events and behaviors) to such designs matters only later, when the original software is maintained or reused for the next similar project. I disagree. Flexibility is vital from day one. Any application of nontrivial complexity is developed over time by gradually adding new events and behaviors. The inflexibility prevents an application from growing that way, so the design degenerates in a process known as architectural decay. This in turn often makes it impossible to even finish the original application, let alone maintain it.

The mechanisms of architectural decay in RTOS-based applications are manifold, but perhaps the worst is the unnecessary proliferation of tasks. Designers, unable to add new events to unresponsive tasks, are forced to create new tasks, regardless of coupling and cohesion. Often the new feature uses the same data as another feature in a different task (we call such features cohesive). But placing the new feature in a separate task requires very careful sharing of the common data, so mutexes and other such mechanisms must be applied. The designer ends up spending most of the time not on the feature at hand, but on managing subtle, hairy, unintended side effects.
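
Here is a hedged sketch of where that time goes: one shared variable, two tasks, and a mutex that every access must now remember to take. All the names are illustrative only:

    /* Hypothetical mutex API and shared data -- illustrative names only. */
    typedef struct rtos_mutex rtos_mutex_t;
    extern void rtos_mutex_lock(rtos_mutex_t *m);
    extern void rtos_mutex_unlock(rtos_mutex_t *m);

    static rtos_mutex_t *sensor_lock;   /* guards latest_sample              */
    static int latest_sample;           /* now shared by sensor and UI tasks */

    void sensor_store(int sample)       /* called from the sensor task */
    {
        rtos_mutex_lock(sensor_lock);
        latest_sample = sample;
        rtos_mutex_unlock(sensor_lock);
    }

    int ui_read(void)                   /* called from the new UI task */
    {
        int copy;
        rtos_mutex_lock(sensor_lock);   /* forget this once and you have a race */
        copy = latest_sample;
        rtos_mutex_unlock(sensor_lock);
        return copy;
    }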

For decades embedded engineers have been taught to believe that the only two alternatives for structuring embedded software are a “superloop” (main+ISRs) or an RTOS. But this is of course not true. Other alternatives exist; specifically, event-driven programming with modern state machines is a much better way. It is not a silver bullet, of course, but after having used this method extensively for over a decade I will never go back to a raw RTOS. I plan to write more about this better way: why it is better and where it is still weak. Stay tuned.
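
To give a flavor of the alternative, here is a minimal, framework-agnostic sketch: the blocking task from above restructured as a non-blocking state machine that is fed events by an event loop. The event and state names are made up for illustration:

    /* Minimal event-driven sketch (no particular framework); names are illustrative. */
    extern void adc_start(void);
    extern void process_sample(int sample);

    typedef enum { EVT_BUTTON, EVT_ADC_DONE, EVT_TIMEOUT } signal_t;

    typedef struct {
        signal_t sig;
        int payload;                     /* e.g., the ADC sample */
    } event_t;

    typedef enum { ST_IDLE, ST_CONVERTING } state_t;
    static state_t state = ST_IDLE;

    /* Called by the event loop; runs to completion and never blocks. */
    void sensor_dispatch(event_t const *e)
    {
        switch (state) {
        case ST_IDLE:
            if (e->sig == EVT_BUTTON) {
                adc_start();             /* start the conversion, don't wait */
                state = ST_CONVERTING;
            }
            break;

        case ST_CONVERTING:
            if (e->sig == EVT_ADC_DONE) {
                process_sample(e->payload);
                state = ST_IDLE;
            }
            else if (e->sig == EVT_TIMEOUT) {
                state = ST_IDLE;         /* conversion timed out; recover */
            }
            break;
        }
    }

Because the handler always returns, the same state machine can be extended with new events (new case branches) without restructuring the waiting code, which is exactly what the blocking version cannot do.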

A nail for a fuse

Friday, November 27th, 2009 Michael Barr

If I were to search my soul, I’d have to admit that the use of assertions has helped me more than any other single technique, even more than my favorite state machines. But the use of assertions, simple as they are, is surrounded by so many misconceptions and misunderstandings that it’s difficult to know where to start. The discussion around Jack Ganssle’s recent article “The Use of Assertions” shows many of these misunderstandings.

I suppose that the main difficulty in understanding assertions lies in the fact that while the implementation of assertions is trivial, their effective use requires a paradigm shift in the view of software construction and of the nature of software errors in particular.

Perhaps the most important point to understand about assertions is that they neither handle nor prevent errors, in the same way that fuses in electrical circuits don’t prevent accidents or abuse. In fact, a fuse is an intentionally introduced weak spot in the circuit, designed to fail sooner than anything else, so the whole circuit with a fuse is actually less robust than it would be without it.

I believe that the analogy between assertions and fuses (which, by the way, was originally proposed by Niall Murphy in a private conversation at one of the Embedded Systems Conferences) is accurate and valuable, because it helps in making the paradigm shift in understanding many aspects of using assertions. Here I’d like to elaborate on just two of them.

First, the analogy to fuses correctly suggests that assertions work best in the “weakest” spots. Such “weak spots” are often found at the interfaces between components (e.g., preconditions in a function), but there are many others. The best assertions are those that protect the largest part of the system. In other words, the best assertions catch errors that would have the most impact on the rest of the system.
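
As a concrete (and entirely made-up) illustration, here is a precondition assertion guarding a component interface; a bad pointer or length is caught right at the boundary, before it can spread into the rest of the system:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical interface function; the preconditions are the "fuses". */
    uint8_t checksum(uint8_t const *data, size_t len)
    {
        assert(data != NULL);   /* a NULL pointer here is a bug in the caller  */
        assert(len > 0U);       /* an empty buffer makes no sense for this API */

        uint8_t sum = 0U;
        for (size_t i = 0U; i < len; ++i) {
            sum = (uint8_t)(sum + data[i]);
        }
        return sum;
    }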

The second important implication of the fuse analogy concerns the issue of disabling assertions in the production code. As the comments to the aforementioned article suggest, most engineers tend to disable assertions before shipping the code, especially in safety-critical products. I believe that this is exactly backwards.

I understand that the standard “assert.h” header file is designed to use assertions only in a debug build, so the assert() macro compiles to nothing when the symbol NDEBUG is defined. I strongly suggest rethinking this philosophy, because disabling assertions in the release configuration is like using nails, paper clips, or coins for fuses. Just imagine finding a nail in place of a fuse in a hospital operating room or in the dashboard of an airliner. What would you think of this sort of “repair”?
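
One common remedy, sketched here under the assumption that the application defines its own assertion handler, is a project-specific macro that is never compiled out, in either build configuration. The names MY_ASSERT and assert_failed_handler are illustrative, not a standard API:

    /* my_assert.h -- sketch of an assertion macro that stays enabled in
     * the release build; the names are illustrative, not a standard API. */
    #ifndef MY_ASSERT_H
    #define MY_ASSERT_H

    /* The application must provide this handler (one sketch appears below). */
    void assert_failed_handler(char const *file, int line);

    #define MY_ASSERT(expr_) \
        ((expr_) ? (void)0 : assert_failed_handler(__FILE__, __LINE__))

    #endif /* MY_ASSERT_H */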

Yet, by disabling assertions in our code we do exactly this.

I believe it is crucial to understand that assertions have a very important role to play, especially in the field and especially in mission-critical systems, because they add an additional safety layer to the software. Perhaps the biggest fallacy of our profession is the naïve optimism that our software will not fail. In a nutshell, we somehow believe that when we stop checking for errors, they will stop occurring. After all, we don’t see them anymore. But this is not how computer systems work. An error, no matter how small, can cause a catastrophic failure. With software, there are no “small” errors. Our software is either in complete control of the machine or it isn’t. Assertions help us know when we lose that control.

So what do I suggest we do when an assertion fires in the field? The proper course of action requires a lot of thinking and sometimes a lot of work. In safety-critical systems, software failure should be part of the fault-tree analysis. Sometimes reaching a fail-safe state requires some redundancy in the hardware. In any case, the handling of assertion failures should be extensively tested.
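
What that course of action might look like in code is, of course, application-specific. The following is only a sketch, under the assumption that the hardware allows the outputs to be driven to a safe state and the failure to be logged before a reset; all the function names are placeholders:

    /* Hypothetical production assertion handler -- all names are placeholders. */
    extern void disable_interrupts(void);                       /* stop further damage   */
    extern void outputs_to_failsafe(void);                      /* de-energize actuators */
    extern void log_to_nonvolatile(char const *file, int line); /* evidence for analysis */
    extern void system_reset(void);                             /* restart into a known state */

    void assert_failed_handler(char const *file, int line)
    {
        disable_interrupts();
        outputs_to_failsafe();            /* reach the fail-safe state first         */
        log_to_nonvolatile(file, line);   /* then leave a trace for the post-mortem  */
        system_reset();                   /* this call is assumed not to return      */
    }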

But this is really the best we can do.