I still remember the “Triumph of the Nerds” PBS special, in which Steve Jobs recalled his early days at Apple and how the young Apple team picked the brains of the scientists at the Xerox Palo Alto Research Center (PARC). Steve explained how PARC researchers showed them three revolutionary things: (1) the graphical user interface (GUI), (2) computer networking, and (3) object-oriented programming. Of these three, Steve confessed to having understood only the first one at the time. This alone, however, proved enough to launch the Mac, and the rest is history.
I believe that the embedded industry still hasn’t learned from PARC even as much as Apple did some three decades ago. The question on my mind is: why aren’t most embedded programs structured the same way as virtually all GUI programs are?
If you’re baffled as to why I am comparing embedded systems to GUIs, consider that just about every embedded system, just like every GUI, is predominantly event-driven by nature. In both cases, the primary function of the system is reacting to events. In embedded systems the events might differ from those of a GUI (time ticks or arrivals of data packets rather than mouse clicks and button presses), but the essential job is the same: reacting to events that arrive in a hard-to-foresee order and with hard-to-foresee timing.
Even the earliest GUIs, such as the original Mac or the early versions of Windows, were structured according to the “Hollywood principle,” which means “Don’t call us, we’ll call you.” The “Hollywood principle” recognizes that the program is not really in control; the events are. So instead of pretending that the program is running the system, the system runs your program by calling your code to process events.
This inversion of control seems natural, I hope, and has served all GUI systems well. However, the concept hasn’t really caught on in the embedded space. The time-honored approaches are still either the “superloop” (main+ISR) or a traditional RTOS, neither of which really embodies the “Hollywood principle.”
It really takes more than “just” an API, which is what a traditional RTOS provides. What you typically need is a framework that supplies the main body of the application and calls the code that you provide. Such event-driven real-time frameworks are not new. Today, virtually every design-automation tool for embedded systems incorporates a variant of such an event-driven framework, and the frameworks buried inside these tools prove that the concept works very well in a very wide range of embedded systems.
My point is that a Real-Time Framework (RTF) should, and I believe eventually will, replace the traditional RTOS. What do you think?