embedded software boot camp

Design for the Worst Case

Wednesday, August 11th, 2010 by Michael Barr

In real-time systems, as in life, anything that can go wrong will! A nurse could be using a GUI task to change system parameters on a ventilator just as the attached patient’s lungs demand the most help from another task. Or an interrupt signal could start acting funny, generating a stream of unexpected ISR invocations. Or all of those at once. And something else.

The designers of hard real-time systems must design for such a worst case. They must ensure that sufficient CPU and memory bandwidth are available to handle the worst-case demands that could be placed on the software, all at the same time. In simple terms, we must size the processor bandwidth for the worst-case scenario.

Safety for the users of our products emerges as a side effect of buying a faster (read "higher priced") CPU. Rate Monotonic Analysis helps ensure we've specified the right processor clock rate, so the users are safe. The Rate Monotonic Algorithm is also the optimal fixed-priority scheduling algorithm, which keeps us from over-paying for clock rate: if a set of tasks cannot be scheduled using the Rate Monotonic Algorithm, it can't be scheduled using any fixed-priority algorithm.

The basics of RMA are well covered in many places, including my article Introduction to Rate Monotonic Scheduling. In summary, Rate Monotonic Analysis gives us the mathematics to prove that all deadlines will always be met when you've followed the Rate Monotonic Algorithm to assign priorities.

The Rate Monotonic Algorithm is a procedure for assigning fixed priorities to tasks and ISRs to maximize their schedulability. A particular set of tasks and ISRs is considered schedulable if all deadlines will be met even in the worst-case scenario. The algorithm is simple: "Assign the priority of each task and ISR according to its worst-case period, so that the shorter the period the higher the priority." For example, if Task 1 and Task 2 have periods of 50 ms and 100 ms, respectively, then Task 1 is given higher priority. This ensures that a long Task 2 job can't cause Task 1 to miss its more frequent deadline.

Too many of today’s real-time systems built with an RTOS are working by luck. Excess processing power may be masking design and analysis sins or the worst-case simply hasn’t happened—yet.  Bottom line: You’re playing with fire if you don’t use RMA to assign priorities to safety-critical tasks; it might be just a matter of time before your product’s users get burned.  Perhaps your failure to use RMA to prioritize tasks and prove they’ll meet deadlines explains one or more of those “glitches” your customers have been complaining about?

One Response to “Design for the Worst Case”

  1. Notan says:

    Rate Monotonic Scheduling is truly the best bet for meeting the deadlines. But there is still an open question: how long will your tasks take?
    There were times when you could simply count your assembler lines, but those times are over. With caches, out-of-order machines, and speculative branching, calculating or measuring the worst-case runtime is not a straightforward process.
