Archive for August, 2010

3 Things Every Programmer Should Know About RMA

Wednesday, August 18th, 2010 Michael Barr

This post originally appeared in the wrong blog.  I’m reposting it here.

Real-time systems design and RMA go together like peanut butter and jelly.  So why is it that wherever I go in the embedded community, engineers are developing real-time systems without applying RMA?  This is a dangerous situation, but one that is easily remedied by ensuring every programmer knows three things about RMA.

In case you are entirely unfamiliar with RMA, there’s a handy primer on the technique at http://www.netrino.com/Embedded-Systems/How-To/RMA-Rate-Monotonic-Algorithm/. I’ve tried to write this post so that you can read the primer before or after it, at your option.

#1: RMA is Not Just for Academics

You have probably heard of RMA.  Maybe you can even expand the acronym.   Maybe you also know that the theoretical underpinnings of RMA were developed largely at Carnegie Mellon University’s Software Engineering Institute and/or that the technique has been known for about three decades.

If, however, you are like the vast majority of the thousands of firmware engineers I have communicated with on the subject during my years as a writer/editor, consultant, and trainer, you probably think RMA is just for academics.  I also thought that way years ago—but here’s the straight dope:

  • All of the popular commercial real-time operating systems (e.g., VxWorks, ThreadX, and MicroC/OS) are built upon fixed-priority preemptive schedulers.[1]
  • RMA is the optimal method of assigning fixed priorities to RTOS tasks.  That is to say, if a set of tasks cannot be scheduled using RMA, it can’t be scheduled using any fixed-priority algorithm.
  • RMA provides convenient rules of thumb regarding the percentage of CPU you can safely use while still meeting all deadlines (a sketch of the underlying bound follows this list).  If you don’t use RMA to assign priorities to your tasks, there is no rule of thumb that will ensure all of their deadlines will be met.[2]
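To make that last rule of thumb concrete, here is a minimal C sketch of the Liu and Layland utilization bound that underlies it; for a set of n tasks prioritized rate-monotonically, the bound is U(n) = n * (2^(1/n) - 1), and the loop range below is purely illustrative:

    #include <math.h>
    #include <stdio.h>

    /* Liu & Layland schedulable bound for n tasks prioritized by the
     * rate monotonic algorithm: U(n) = n * (2^(1/n) - 1).  If the task
     * set's total CPU utilization is at or below this bound, every
     * deadline is guaranteed to be met. */
    static double rma_bound(unsigned n)
    {
        return (double)n * (pow(2.0, 1.0 / (double)n) - 1.0);
    }

    int main(void)
    {
        for (unsigned n = 1; n <= 8; n++)
            printf("U(%u) = %.1f%%\n", n, 100.0 * rma_bound(n));
        /* Prints 100.0% for 1 task, 82.8% for 2 tasks, and approaches
         * ln(2), about 69.3%, as n grows large. */
        return 0;
    }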

A key feature of RMA is the ability to prove a priori that a given set of tasks will always meet its deadlines—even during periods of transient overload.  Dynamic-priority operating systems cannot make this guarantee.  Nor can fixed-priority RTOSes running tasks prioritized in other ways.

Too many of today’s real-time systems built with an RTOS are working by luck. Excess processing power may be masking design and analysis sins or the worst-case just hasn’t happened—yet.

Bottom line: You’re playing with fire if you don’t use RMA to assign priorities to critical tasks; it might be just a matter of time before your product’s users get burned.[3]

#2: RMA Need Not Be Applied to Every Task

As any programmer who has already put RMA into practice will tell you, the hardest part of the analysis phase is establishing an upper bound for the worst-case execution time of each task.  The CPU utilization of each task is computed as the ratio of its worst-case execution time to its worst-case period.[4]
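In code, that ratio is a one-liner; the following minimal helper (the microsecond unit and the example numbers are mine, not from any particular system) shows the computation:

    #include <stdint.h>

    /* CPU utilization of one task: worst-case execution time divided
     * by worst-case period, both expressed in the same unit
     * (microseconds here). */
    static double task_utilization(uint32_t wcet_us, uint32_t period_us)
    {
        return (double)wcet_us / (double)period_us;
    }

    /* Example: 750 us of worst-case execution every 3 ms (3000 us)
     * yields 750 / 3000 = 0.25, i.e., 25% of the CPU. */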

There are three ways to place an upper bound on execution time: (1) by measuring the actual execution time during the tested worst-case scenario; (2) by performing a top-down analysis of the code in combination with a cycle-counter; or (3) by making an educated guess based on big-O notation.  I call these alternatives measuring, analyzing, and budgeting, respectively, and note that the decision of which to use involves tradeoffs of precision vs. level of effort. Measurement can be extremely precise, but requires the ability to instrument and test the actual working code—which must be remeasured after every code change.  Budgeting is easiest and can be performed even at the beginning of the project, but it is necessarily imprecise (in the conservative direction of requiring more CPU bandwidth than is actually required).
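For the measurement approach, the usual trick is to bracket the task body with reads of a free-running cycle counter and keep a running maximum.  The sketch below assumes a hypothetical read_cycle_counter() routine; substitute whatever hardware timer or debug cycle counter your MCU provides:

    #include <stdint.h>

    /* Hypothetical accessor for a free-running hardware cycle counter;
     * replace with your MCU's timer or debug cycle counter. */
    extern uint32_t read_cycle_counter(void);

    static uint32_t g_worst_case_cycles;  /* running maximum observed */

    void task_body(void)
    {
        uint32_t start = read_cycle_counter();

        /* ... the task's real work goes here ... */

        /* Unsigned subtraction handles counter wraparound correctly. */
        uint32_t elapsed = read_cycle_counter() - start;
        if (elapsed > g_worst_case_cycles)
            g_worst_case_cycles = elapsed;
    }

    /* Caveat: on a preemptive system this measures elapsed time,
     * including any preemption; to approximate pure execution time,
     * run the test with higher-priority tasks and interrupts quiesced. */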

But there is at least some good news about the analysis.  RMA need not be performed across the entire set of tasks in the system.  It is possible to define a smaller (often much smaller, in my experience) critical set of tasks on which RMA needs to be performed, with the remaining non-critical tasks simply assigned lower priorities.

This critical set should contain every task with a hard deadline that must not be missed.  In addition, it should contain any other task with which those tasks share a mutex or from which they require timely semaphore or message queue posts.  Every other task is considered non-critical.

RMA can be meaningfully applied to the critical set tasks only, so long as we ensure that all of the non-critical tasks have priorities below the entire critical set.  We then need only determine worst-case periods and worst-case execution times for the critical set.  Furthermore, we need only follow the rate monotonic algorithm for assignment of priorities within the critical set.
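In practice, the resulting priority map can be captured in a single enum, as in this sketch; the task names and periods are invented for illustration, and it assumes an RTOS where a lower number means a higher priority:

    /* Critical set first, in rate monotonic order (shorter worst-case
     * period = higher priority); every non-critical task sits below
     * the entire critical set.  Lower number = higher priority. */
    enum task_priority {
        /* --- critical set: RMA ordering is mandatory here --- */
        PRIO_MOTOR_CONTROL = 1,   /*  3 ms worst-case period */
        PRIO_SENSOR_READ   = 2,   /* 10 ms worst-case period */
        PRIO_COMM_PROTOCOL = 3,   /* 30 ms worst-case period */
        /* --- non-critical: any order, so long as all are lower --- */
        PRIO_LOGGING       = 10,
        PRIO_GUI_UPDATE    = 11,
    };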

Bottom line: Anything goes at lower priorities where there are no deadlines.

#3: RMA Applies to Interrupt Service Routines Too

With few exceptions, books, articles, and papers that mention RMA describe it as a technique for prioritizing the tasks on a preemptive fixed-priority operating system.  But the technique is also essential for correctly prioritizing interrupt handlers.

Indeed, even if you have designed a real-time system that consists only of interrupt service routines (plus a do-nothing background loop in main), you should use the rate monotonic algorithm to prioritize them with respect to their worst-case frequency of occurrence.  Then you can use rate monotonic analysis to prove that they will all meet their real-time deadlines even during transient overload.
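On an ARM Cortex-M part, for instance, that prioritization might look like the sketch below, which uses the standard CMSIS NVIC_SetPriority() call; the device header and IRQ names are placeholders for whatever your silicon vendor provides, and on Cortex-M a lower priority number means a more urgent interrupt:

    #include "device.h"  /* placeholder: your vendor's CMSIS device header */

    /* Assign interrupt priorities rate-monotonically: the ISR with the
     * shortest worst-case period gets the most urgent priority. */
    void prioritize_isrs(void)
    {
        NVIC_SetPriority(ADC_IRQn,   0);  /*  1 ms worst-case period */
        NVIC_SetPriority(UART_IRQn,  1);  /*  5 ms worst-case period */
        NVIC_SetPriority(TIMER_IRQn, 2);  /* 20 ms worst-case period */
    }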

Furthermore, if you have a set of critical tasks in addition to interrupt service routines, the prioritization and analysis associated with RMA need to be performed across the entire set of those entities.[5]  This can be complicated, as there is an arbitrary “priority boundary” imposed by the CPU hardware: even the lowest priority ISR is deemed more important than the highest priority task.

For example, consider the conflict in the set of ISRs and tasks in Table 1.  RMA dictates that the priority of Task A should be higher than the priority of the ISR, because Task A can occur more frequently.  But the hardware demands otherwise, by limiting our ability to move ISRs down in priority.  If we leave things as they are, we cannot simply sum the CPU utilization of this set of entities to see if they are below the schedulable bound for four entities.

Runnable Entity | Priority by RMA | Worst-Case Execution Time | Worst-Case Period | CPU Utilization
----------------|-----------------|---------------------------|-------------------|----------------
ISR             | 2               | 500 us                    | 10 ms             | 5%
Task A          | 1               | 750 us                    | 3 ms              | 25%
Task C          | 3               | 300 us                    | 30 ms             | 1%
Task B          | 4               | 8 ms                      | 40 ms             | 20%

Table 1.  A Misprioritized Interrupt Handler

So what should we do in a conflicted scenario like this?  There are two options.  Either we change the program’s structure, moving the ISR code into a polling loop that runs as a 10 ms task at priority 2, in which case total utilization is 51%.  Or we treat the ISR, for purposes of proof via rate monotonic analysis, as though it actually has a worst-case period of 3 ms.  In the latter option, the ISR keeps a top priority that RMA now endorses, but the CPU bandwidth attributed to the ISR rises from 5% to 16.7%, bringing the new total to 62.7%.  Either way, the full set is provably schedulable.[6]
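To make the arithmetic concrete, this short C program checks both options against the Liu and Layland bound for four runnable entities; all of the numbers come straight from Table 1:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Liu & Layland bound for 4 runnable entities: about 75.7%. */
        double bound = 4.0 * (pow(2.0, 1.0 / 4.0) - 1.0);

        /* Option 1: ISR code moved into a 10 ms polling task.
         * 500/10000 + 750/3000 + 300/30000 + 8000/40000 = 51% */
        double polling = 0.05 + 0.25 + 0.01 + 0.20;

        /* Option 2: ISR analyzed as if its period were 3 ms.
         * 500/3000 + 750/3000 + 300/30000 + 8000/40000 = 62.7% */
        double pessimistic = 500.0 / 3000.0 + 0.25 + 0.01 + 0.20;

        printf("bound = %.1f%%, option 1 = %.1f%%, option 2 = %.1f%%\n",
               100.0 * bound, 100.0 * polling, 100.0 * pessimistic);
        /* Both totals fall below the bound, so both are schedulable. */
        return 0;
    }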

Bottom line: Interrupt handlers must be considered part of the critical set, with RMA used to prioritize them in relation to the tasks they might steal the CPU away from.

Conclusion

Every programmer should know three key things about RMA.  First, RMA is a technique that should be used to analyze any preemptive system with deadlines; it is not just for academics after all.  Second, the amount of effort involved in RMA analysis can be reduced by ignoring tasks outside the critical set; non-critical tasks can be assigned an arbitrary pattern of lower priorities and need not be analyzed.  Finally, if interrupts can preempt critical set tasks or even just each other, RMA should be used to analyze those too.


[1] Schedulers that tweak task priorities dynamically, as desktop flavors of Windows and Linux do, may miss deadlines indiscriminately during transient overload periods.   They should thus not be used in the design of safety-critical real-time systems.

[2] For example, it is widely rumored that a system less than 50% loaded will always meet its deadlines.  Unfortunately, there is no such rule of thumb that’s correct.  By contrast, when you do use RMA there is a simple rule of thumb: the schedulable bound ranges from a high of 82.8% for 2 tasks down toward ln 2, about 69.3%, as the number of tasks grows large.

[3] Perhaps your failure to use RMA to prioritize tasks and prove they’ll meet deadlines explains one or more of those “glitches” your customers have been complaining about?

[4] Establishing the worst-case period of a task is both easier and more stable.

[5] Note this is necessary even if one or more of the interrupts doesn’t have a real-time deadline of its own.  That’s because the interrupts may occur during the transient overload and thus prevent one or more critical set tasks from meeting its real-time deadline.

[6] However, switching the code to use polling consumes those cycles on every period, whereas the other solution merely reserves them for the worst case.  That could mean failing to find CPU time for low priority non-critical tasks in the average case.

Design for the Worst Case

Wednesday, August 11th, 2010 Michael Barr

In real-time systems, as in life, anything that can go wrong will! A nurse could be using a GUI task to change system parameters on a ventilator just as the attached patient’s lungs demand the most help from another task. Or an interrupt signal could start acting funny, generating a stream of unexpected ISR invocations. Or all of those at once. And something else.

The designers of hard real-time systems must design for such a worst-case. They must ensure that sufficient CPU and memory bandwidth are present to handle the worst-case demands that could be placed on the software—simultaneously. In simple terms, we must size the processor bandwidth to the worst-case scenario.

Safety for the users of our products emerges as a side effect of buying a faster (read “higher priced”) CPU. Rate Monotonic Analysis helps ensure we’ve specified the right processor clock rate, so the users are safe. RMA is also the optimal fixed-priority scheduling algorithm, which prevents us from over-paying for clock rate. If a set of tasks cannot be scheduled using RMA, it can’t be scheduled using any fixed-priority algorithm.

The basics of RMA are well covered in many places, including my article Introduction to Rate Monotonic Scheduling.  In summary, Rate Monotonic Analysis gives us the mathematics to prove that all deadlines will always be met, provided the Rate Monotonic Algorithm has been followed to assign priorities.

The Rate Monotonic Algorithm is a procedure for assigning fixed priorities to tasks and ISRs to maximize their schedulability.  A particular set of tasks and ISRs is considered schedulable if all deadlines will be met even in the worst-case scenario.  The algorithm is simple: “Assign the priority of each task and ISR according to its worst-case period, so that the shorter the period the higher the priority.”  For example, if Task 1 and Task 2 have periods of 50 ms and 100 ms, respectively, then Task 1 is given higher priority.  This ensures that a long Task 2 job can’t cause Task 1 to miss its more frequent deadline.
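In code, the algorithm amounts to sorting by worst-case period; the sketch below (task names and periods are illustrative, and it assumes lower numbers mean higher priority) makes that explicit:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct task_info {
        const char *name;
        uint32_t    period_ms;  /* worst-case period */
        uint8_t     priority;   /* assigned below; lower = higher */
    };

    /* Order tasks by ascending worst-case period. */
    static int by_period(const void *a, const void *b)
    {
        const struct task_info *ta = a, *tb = b;
        return (ta->period_ms > tb->period_ms) - (ta->period_ms < tb->period_ms);
    }

    int main(void)
    {
        struct task_info tasks[] = {
            { "Task2", 100, 0 },
            { "Task1",  50, 0 },
        };
        size_t n = sizeof tasks / sizeof tasks[0];

        /* Rate monotonic algorithm: shorter period = higher priority. */
        qsort(tasks, n, sizeof tasks[0], by_period);
        for (size_t i = 0; i < n; i++) {
            tasks[i].priority = (uint8_t)(i + 1);
            printf("%s: %u ms period -> priority %u\n", tasks[i].name,
                   (unsigned)tasks[i].period_ms, tasks[i].priority);
        }
        return 0;
    }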

Too many of today’s real-time systems built with an RTOS are working by luck. Excess processing power may be masking design and analysis sins or the worst-case simply hasn’t happened—yet.  Bottom line: You’re playing with fire if you don’t use RMA to assign priorities to safety-critical tasks; it might be just a matter of time before your product’s users get burned.  Perhaps your failure to use RMA to prioritize tasks and prove they’ll meet deadlines explains one or more of those “glitches” your customers have been complaining about?