Archive for the ‘state machines’ Category

State-Space Blog Continues at state-machine.com

Monday, January 4th, 2021 Miro Samek

The readers of this blog have certainly noticed that EmbeddedGurus is no longer active.

But the State-Space blog is not dead! The blog has been migrated to state-machine.com, where it will continue. Please check it out!

Modern Embedded Programming: Beyond the RTOS

Wednesday, April 27th, 2016 Miro Samek

An RTOS (Real-Time Operating System) is the most universally accepted way of designing and implementing embedded software. It is the most sought-after component of any system that outgrows the venerable “superloop”. But an RTOS also implies a certain programming paradigm, one that leads to particularly brittle designs that often work only by chance. I’m talking about sequential programming based on blocking.

Blocking occurs any time you wait explicitly in-line for something to happen. All RTOSes provide an assortment of blocking mechanisms, such as time delays, semaphores, event flags, mailboxes, message queues, and so on. Every RTOS task, structured as an endless loop, must use at least one such blocking mechanism, or else it will consume all the CPU cycles. Typically, however, tasks block not in just one place in the endless loop, but in many places scattered throughout various functions called from the task routine. For example, in one part of the loop a task can block and wait for a semaphore that indicates the end of an ADC conversion. In another part of the loop, the same task might wait for an event flag indicating a button press, and so on.
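
To make the problem concrete, here is a minimal sketch of such a task. The rtos_*() calls and device helpers are hypothetical placeholders, not any particular RTOS API:

    #include <stdint.h>

    /* hypothetical RTOS blocking calls and device helpers (placeholders,
    ** not a real vendor API) */
    extern void rtos_sem_wait(void *sem);     /* block until semaphore signaled */
    extern void rtos_flag_wait(void *flags);  /* block until event flag set */
    extern uint16_t adc_read(void);
    extern void display_update(uint16_t value);
    extern void *adc_done_sem;
    extern void *button_flags;

    void sensor_task(void *arg) {
        (void)arg;
        for (;;) {                        /* endless task loop */
            rtos_sem_wait(adc_done_sem);  /* BLOCK: wait for end of ADC conversion */
            display_update(adc_read());

            rtos_flag_wait(button_flags); /* BLOCK again: while parked here, the
                                          ** task is deaf to new ADC events */
            /* ... handle the button press ... */
        }
    }

While the task sits in the second blocking call, a fresh ADC reading can only wait in line or get lost, which is exactly the unresponsiveness discussed next.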

This excessive blocking is insidious, because it appears to work initially, but almost always degenerates into an unmanageable mess. The problem is that while a task is blocked, it is not doing any other work and is not responsive to other events. Such a task cannot be easily extended to handle new events, not just because the system is unresponsive, but mostly because the whole structure of the code past the blocking call is designed to handle only the event that it was explicitly waiting for.

You might think that the difficulty of adding new features (events and behaviors) to such designs matters only later, when the original software is maintained or reused for the next similar project. I disagree. Flexibility is vital from day one. Any application of nontrivial complexity is developed over time by gradually adding new events and behaviors. Inflexibility makes it exponentially harder to grow and elaborate an application, so the design quickly degenerates in the process known as architectural decay.


The mechanisms of architectural decay in RTOS-based applications are manifold, but perhaps the worst is the unnecessary proliferation of tasks. Designers, unable to add new events to unresponsive tasks, are forced to create new tasks, regardless of coupling and cohesion. Often the new feature uses the same data and resources as an existing feature (such features are called cohesive). But unresponsiveness forces you to add the new feature in a new task, which requires care in sharing the common data. So mutexes and other such blocking mechanisms must be applied, and the vicious cycle tightens. The designer ends up spending most of the time not on the feature at hand, but on managing subtle, intermittent, unintended side effects.

For these reasons, experienced software developers avoid blocking as much as possible. Instead, they use the Active Object design pattern. They structure their tasks in a particular way, as “message pumps”, with just one blocking call at the top of the task loop, which waits generically for all events that can flow to this particular task. After this blocking call, the code checks which event actually arrived and, based on the type of the event, calls the appropriate event handler. The pivotal point is that these event handlers are not allowed to block, but must quickly return to the “message pump”. This is, of course, the event-driven paradigm applied on top of a traditional RTOS.
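
The structure of such a task looks roughly as follows. This is only a sketch: the Event type, queue_get(), and the handler names are illustrative, not any specific framework’s API:

    typedef struct {
        int sig;    /* what happened (the signal) */
        /* event parameters would follow here */
    } Event;

    extern Event const *queue_get(void *queue);   /* the ONE blocking call */
    extern void handle_adc_done(Event const *e);  /* handlers must NOT block */
    extern void handle_button(Event const *e);

    enum { ADC_DONE_SIG, BUTTON_SIG };

    void active_object_task(void *queue) {
        for (;;) {                              /* the "message pump" */
            Event const *e = queue_get(queue);  /* block here, and ONLY here */
            switch (e->sig) {                   /* dispatch on the event type */
                case ADC_DONE_SIG: handle_adc_done(e); break;
                case BUTTON_SIG:   handle_button(e);   break;
            }
            /* each handler runs to completion and returns to the pump */
        }
    }

Adding a new event now means adding one case and one non-blocking handler; the task never goes deaf to the other events.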

While you can implement Active Objects manually on top of a conventional RTOS, an even better way is to implement the pattern as a software framework, because a framework is the best known method to capture and reuse a software architecture. In fact, you can already see how such a framework starts to emerge: the “message pump” structure is identical for all tasks, so it can become part of the framework rather than being repeated in every application.


This also illustrates the most important characteristic of a framework, called inversion of control. When you use an RTOS, you write the main body of each task and you call the code from the RTOS, such as delay(). In contrast, when you use a framework, you reuse the architecture, such as the “message pump” here, and write the code that it calls. Inversion of control is characteristic of all event-driven systems. It is the main enabler of architectural reuse and the enforcement of best practices, as opposed to re-inventing them for each project at hand.

But there is more, much more, to the Active Object framework. For example, such a framework can also provide support for state machines (or better yet, hierarchical state machines), with which to implement the internal behavior of active objects. In fact, this is exactly how you are supposed to model behavior in the UML (Unified Modeling Language).

As it turns out, active objects provide a sufficiently high level of abstraction, and the right kind of abstraction, to effectively apply modeling. This is in contrast to a traditional RTOS, which does not provide the right abstractions. You will not find threads, semaphores, or time delays in the standard UML. But you will find active objects, events, and hierarchical state machines.

An AO framework and a modeling tool beautifully complement each other. The framework benefits from a modeling tool that takes full advantage of the very expressive graphical notation of state machines, which are the most constructive part of the UML. The modeling tool benefits from the well-defined “framework extension points” designed for customizing the framework into applications, which in turn provide well-defined rules for generating code.

In summary, the RTOS and the superloop aren’t the only games in town. Actor frameworks, such as Akka, are becoming all the rage in enterprise computing, but active object frameworks are an even better fit for deeply embedded programming. After working with such frameworks for over 15 years, I believe that they represent a quantum leap of improvement over the RTOS, similar to the leap the RTOS itself represents with respect to the “superloop”.

If you’d like to learn more about active objects, I recently posted a presentation on SlideShare: Beyond the RTOS: A Better Way to Design Real-Time Embedded Software

Also, I recently ran into another good presentation about the same ideas. This time a NASA JPL veteran describes the best practices of “Managing Concurrency in Complex Embedded Systems”. I would say this is exactly the active object model. So it seems it really is true that experts independently arrive at the same conclusions…

Superloop vs event-driven framework

Thursday, May 31st, 2012 Miro Samek

On the free support forum for the QP state machine frameworks, an engineer recently asked a question, “superloop vs event dispatching”, which I quote below. I think this question is so common that my answer could be interesting to the readers of this “state-space” blog.

Question:

In the classical way of programming, I write state machines with switch statements, where each distinct case represents a separate state. The superloop ( while (1) ) executes continuously and checks whether a different state has been reached, based on the occurrences up to the point where that line is executed.

I have been practicing the reactive-class way of statecharts for a while and I am a little confused. First, there is no explicit superloop, but an event-dispatcher task instead, feeding events into the state machine of the target reactive object, where the event-dispatcher task possibly serves more than one object.

Please explain to me the difference between this new approach and the traditional switch-case approach.

Your state-machine code requires dynamic instantiation of events, a place to store those events, and deletion of events at the end. In the classical way, there is no need for such dynamic management of events, which makes me think that this new approach is a more heavyweight solution.

I want to hear something more than “you can visually design states”.

What are the things that can NOT be done in the classical way of state-machine coding with only switch-case statements, but can be done in the real-time-framework-supported state-machine handling with event management?

Answer:

Many years ago I also used to program with a good old “superloop”, which would call different state machines structured as nested switch statements. There was no clear concept of an “event”; instead, the state machines polled global variables or hardware registers directly. For example, a state machine for receiving bytes from a UART would poll the receive-ready register directly.
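
In outline, the code looked something like this (a reconstruction sketch; the register addresses and the framing details are illustrative placeholders):

    #include <stdint.h>

    /* illustrative, made-up UART registers */
    #define UART_RX_READY (*(volatile uint8_t *)0x40001000U)
    #define UART_RX_DATA  (*(volatile uint8_t *)0x40001004U)

    enum { WAIT_SOF, IN_FRAME };  /* receiver states */

    int main(void) {
        int state = WAIT_SOF;
        for (;;) {                        /* the superloop */
            if (UART_RX_READY) {          /* poll the hardware directly */
                uint8_t byte = UART_RX_DATA;
                switch (state) {          /* flat switch-statement state machine */
                case WAIT_SOF:
                    if (byte == 0x7EU) {  /* start-of-frame marker */
                        state = IN_FRAME;
                    }
                    break;
                case IN_FRAME:
                    /* ... accumulate bytes, detect end of frame ... */
                    break;
                }
            }
            /* ... poll other flags and run the other state machines ... */
        }
    }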

While this approach could be made to work for simpler systems, it was rather flaky, inefficient, and hard to scale up. I know, because I spent many nights and weekends chasing the elusive bugs. The main problems centered around the communication within the system.

One class of problems was related to the fact that a global variable or a hardware register has the capacity to store only one event (a queue of depth one), so event instances were occasionally lost when the superloop didn’t come around quickly enough. But it was even worse than that. Sometimes a state machine in the superloop was still busy processing an event when an interrupt fired and changed the global variable(s) related to this event. When the state machine resumed after the interrupt, it found itself in a corrupted state, having processed part of the old event and part of the new event. Such problems could be remedied by disabling interrupts around access to the global variables, but that tended to wreck the timing of any longer processing.

Another class of problems was communication among the state machines within the superloop. The naive approach is to simply call one state machine from another, but it works only as long as the second state machine does not attempt to call the first one back. The reason it doesn’t work is that all state machine formalisms, from the simplest switch statements to the most sophisticated UML statecharts, require run-to-completion (RTC) event processing. RTC means simply that a state machine must complete processing of one event before processing another. In the circular call-back scenario, the first state machine is still processing an event when the second state machine calls it back and asks it to process the next event.

I hope that the rather obvious corollary of this discussion so far is that event-driven systems need queuing of events. But you cannot queue global variables or hardware registers. You need event objects. You also need at least one event queue, but with just one queue you cannot easily prioritize events. So it is better to have multiple priority queues. Once you agree to priority queues, you need a mechanism that calls your state machines in priority order and also guarantees RTC event processing. RTC requires at least two things: (1) never call a state machine while it is still processing a previous event, and (2) don’t change an event as long as it is still being processed. But wait, you also need an event-driven mechanism to deliver timeouts. And finally, all this must be done in a thread-safe manner, so that you don’t have to worry about state corruption by interrupts.
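
For illustration, here is a minimal sketch of such a priority-based, run-to-completion dispatcher. The queue_pop() and sm_dispatch() functions are placeholders, not any specific framework’s API:

    typedef struct { int sig; } Event;  /* signal only; parameters may follow */

    #define N_PRIO 4  /* number of priority levels; 0 is the highest */

    extern Event const *queue_pop(int prio);            /* NULL if queue empty */
    extern void sm_dispatch(int prio, Event const *e);  /* one RTC step */

    void scheduler_run(void) {
        for (;;) {
            int p;
            for (p = 0; p < N_PRIO; ++p) {  /* scan highest priority first */
                Event const *e = queue_pop(p);
                if (e != (Event const *)0) {
                    sm_dispatch(p, e);  /* runs to completion; no state machine
                                        ** is ever re-entered mid-event */
                    break;              /* re-scan from the highest priority */
                }
            }
            /* all queues empty: idle, e.g., enter a low-power sleep mode */
        }
    }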

Now, if you think for a minute about events, I hope you realize that they need to convey both what happened and the quantitative information related to the occurrence. For example, a PACKET_RECEIVED event from an Ethernet controller conveys that an Ethernet packet has been received, plus the whole payload of that packet. Only then does a state machine have all the information it needs to process the event. The beauty of packaging the signal (what happened) together with the parameters is that such events can be delivered in a thread-safe manner and will not change as long as they are needed. But this convenience has its price. An event, possibly quite large, must exist as long as it is needed, but then must be recycled as quickly as possible to conserve memory. The QP framework addresses this by providing dynamic events.
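
In code, such an event might look like the following generic sketch, where event_new() and queue_post() stand in for a framework’s dynamic-event API (QP provides equivalents):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    enum { PACKET_RECEIVED_SIG /* , ... more signals ... */ };

    typedef struct { int sig; } Event;  /* the signal: what happened */

    typedef struct {
        Event super;            /* "inherits" Event: the signal comes first */
        uint16_t len;           /* quantitative info: payload length */
        uint8_t payload[1518];  /* max Ethernet frame size, illustrative */
    } PacketEvt;

    /* placeholders for a framework's dynamic-event API */
    extern Event *event_new(size_t size, int sig);  /* allocate from a pool */
    extern void queue_post(void *queue, Event const *e);

    void eth_rx_handler(void *rx_queue, uint8_t const *frame, uint16_t len) {
        PacketEvt *pe = (PacketEvt *)event_new(sizeof(PacketEvt),
                                               PACKET_RECEIVED_SIG);
        pe->len = len;
        memcpy(pe->payload, frame, len);   /* package the parameters */
        queue_post(rx_queue, &pe->super);  /* the framework recycles the event
                                           ** after the last consumer is done */
    }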

In summary, it has become pretty obvious to me that to make state machines truly robust, scalable, efficient, and ready for real time, you need an infrastructure around them. A primitive superloop is just not enough to make state machines truly practical. In any state-machine-based system, you need events, queues, the RTC guarantee, and time events. QP is one of the simplest and most lightweight examples of such an infrastructure. Of course, there is some learning curve involved, but to put the complexity of QP in perspective: QP is actually smaller than the smallest bare-bones RTOS, or a full-blown implementation of the single printf() function.


ESD closes shop. What’s next in store for embedded programming?

Sunday, April 29th, 2012 Miro Samek

The demise of ESD Magazine marks the end of an era. In his recent post “Trends in Embedded Software Design”, the magazine insider Michael Barr commemorates the occasion by looking back at the early days and offering a look ahead at the newly emerging trends. As we all enjoy predictions, I’d like to add a couple of my own thoughts about the current status and future developments in embedded systems programming.


It’s never been better to be an embedded software engineer!

When I joined the embedded field in the mid-1990s, I thought that I was late to the party. The really cool stuff, such as programming in C, C++, or Ada (as opposed to assembly), the RTOS, RMA, and even modeling (remember ROOM?), had already been invented and applied to programming embedded systems. Every month, the Embedded Systems Programming magazine brought invaluable, relevant articles that advanced the art and taught a whole generation. But then, right around the time of the first Internet boom, something happened that slowed down the progress. Universities and colleges stopped teaching C in favor of Java, and the number of graduates with the skills necessary to write embedded code dropped sharply, at least in the U.S. In 2005 the Embedded Systems Programming magazine was renamed Embedded Systems Design (ESD) and lost its focus on embedded software. After this transition, I noticed that the ESD articles became far less relevant to my software work.

All this is ironic, because at the same time embedded software grew to be more important than ever. The whole society is already completely dependent on embedded software and looks to embedded systems for solutions to the toughest problems of our time: environmental protection, energy production and delivery, transportation, and healthcare. With Moore’s law marching unstoppably into its fifth decade, even hardware design starts to look more like software development. There is no doubt in my mind: the golden time of embedded systems programming is right now. The skyrocketing demand for embedded software on one hand, and the diminished supply of qualified embedded developers on the other, creates a perfect storm. It simply has never been better to be an embedded software engineer! And I don’t see this changing in the foreseeable future.


Future trends

Speaking of the future, it’s obvious that we need to develop more embedded code with fewer people in less time. The only way to achieve this is to increase programmers’ productivity. And the only known way to boost productivity is to raise the level of abstraction, either by working with higher-level concepts or by building code from higher-level components (code reuse).

Trend 1: Real-time embedded frameworks

My first prediction is that embedded applications will increasingly be based on specialized real-time embedded frameworks, which will augment and eventually displace traditional RTOSes.

Software development for the desktop, the web, and mobile devices has already moved to frameworks (.NET, Qt, Ruby on Rails, Android, Akka, etc.), far beyond the raw APIs of the underlying OSes. In contrast, the dominant approach in the embedded space is still based directly on the venerable RTOS (or the “superloop” at the lowest end). The main difference is that when you use an RTOS, you write the main body of the application (such as the thread routines for all your tasks) and you call the RTOS services (e.g., a semaphore or a time delay). When you use a framework, you reuse the main body and write the code that it calls. In other words, the control resides in the framework, not in your code, so the control is inverted compared to an application based on an RTOS.

The advantage, long discovered by developers of “big” software, is that frameworks offer a much higher level of architectural reuse and give conceptual integrity (often based on design patterns) to the applications. Frameworks also encapsulate the difficult or dangerous aspects of their application domains. For example, most programmers vastly underestimate the skill needed to correctly use the various RTOS mechanisms, such as semaphores, mutexes, or even simple time delays, and therefore they underestimate the true costs of using an RTOS. A generic, event-driven, real-time framework can encapsulate such low-level mechanisms, so programmers can develop responsive, multitasking applications without even knowing what a semaphore is. The difference is like having to build a bridge each time you want to cross a river, versus driving over a bridge that experts in this domain have built for you.

Real-time embedded frameworks are not new and, in fact, have been used extensively for decades inside modeling tools capable of code generation. For example, a leading tool of this kind, IBM Rhapsody, comes with an assortment of frameworks (OXF, IDF, SXF, MXF, MicroC, as well as third-party frameworks, such as RXF from Willert).

However, I don’t think that many people realize that frameworks of this sort can be used even without the big modeling tools, and that they can be very small, about the same size as bare-bones RTOS kernels. For example, the open source QP/C and QP/C++ frameworks from Quantum Leaps (my company) require only 3-4KB of ROM and significantly less RAM than an RTOS.

I believe that in the future we will see a proliferation of various real-time embedded frameworks, just as we see a proliferation of RTOSes today.


Trend 2: Agile modeling and code generation

Another trend, closely related to frameworks, is modeling and automatic code generation.

I realize, of course, that big tools for modeling and code generation have been tried and have failed to penetrate the market beyond a few percentage points. The main reason, as I argued in my previous post “Economics 101: UML in Embedded Systems”, is the poor ROI (return on investment) of such tools. The investment, in terms of the tool’s cost, learning curve, maintenance, and the constant “fighting the tool”, is so large that the returns must also be extremely high to make any economic sense.

As a case in point, take the xtUML (executable UML) tools, which are perhaps the highest-ceremony tools of this kind on the market. In xtUML, you create platform-independent models (PIMs), which need to be translated to platform-specific models (PSMs) by highly customizable “model compilers”. To preserve the illusion of complete “platform and technology independence” (whatever that means in embedded systems, which by any definition of the term are specific to the task and technology), any code entered into the model is specified in the “Object Action Language” (OAL). OAL functionally corresponds to a crippled C, but has a different syntax. All this is calculated to keep you completely removed from the generated code, which you can influence only by tweaking the “model compiler”.

But what if you could skip the indirection level of OAL and use C or C++ directly to program the “action functions”? (Maybe you don’t care for the ability to change the generated code from C to, say, Java.) What if you could replace the “model compiler” indirection layer with a real-time framework? What if the tool could turn the code generation “upside down” and give you direct access to the code and the freedom to actually do the physical design (see my previous post “Turning code generation upside down”)?

Obviously, the resulting tool will be far less ambitious and “lower level” than xtUML. But “lower level” is not necessarily pejorative. In fact, the main reason for the popularity of C in embedded programming is that C is a relatively “low level” high-level programming language.

I predict that we will see more innovation in the area of “low level”, low-cost modeling and code-generating tools (such as the free QM modeling tool from Quantum Leaps). I hope that this innovation will eventually bring us “modeling and code generation for the masses”.

Economics 101: UML in Embedded Systems

Tuesday, April 17th, 2012 Miro Samek

With UML, just as with anything else in the embedded space, the ultimate criterion for success is the return on investment (ROI). Sure, there are many factors at play, such as the “coolness factor”, the yearning for a “silver bullet” and truly “automatic programming”, all fueled by the aggressive marketing rhetoric of tool vendors. But ultimately, to be successful, the benefits of a method must outweigh the learning curve, the cost of tools, the added maintenance costs, the hidden costs of “fighting the tool”, and so on.

As it turns out, the ROI of UML is lousy unless the models are used to generate substantial portions of the production code. Without code generation, the models inevitably fall behind and become more of a liability than an asset. In this respect I tend to agree with the “UML Modeling Maturity Index (UMMI)”, invented by Bruce Douglass. According to the UMMI, without code generation UML can reach at most 30% of its potential, and that is assuming correct use of behavioral modeling. Without behavioral modeling, the benefits are below 10%. This is just too low to outweigh all the costs.

Unfortunately, code generation capabilities have always been associated with complex UML tools with a very steep learning curve and a price tag to match. With such a big investment side of the ROI equation, it is quite difficult to reach a sufficient return. Consequently, all too often the big tools get abandoned, and if they continue to be used at all, they end up as overpriced drawing packages.

So, to follow my purely economic argument: unless we make the investment part of the ROI equation low enough, without reducing the returns too much, UML has no chance. On the other hand, if we could achieve a strongly positive ROI (something like 80% of the benefits for 10% of the cost), we would have a “game changer”.

To this end, when you look closer, the biggest “bang for the buck” in UML with respect to embedded code generation comes from two ingredients: (1) hierarchical state machines (UML statecharts) and (2) an embedded real-time framework. Of course, these two ingredients work best together and complement each other. State machines can’t operate in a vacuum and need an event-driven framework to provide execution context, thread-safe event passing, event queuing, etc. The framework benefits from state machines for structure and for code generation capabilities.

I’m not sure if many people realize the critical importance of a framework, but a good framework is in many ways even more valuable than the tool itself, because the framework is the big enabler of architectural reuse, testability, traceability, and code generation, to name just a few. The second ingredient is state machines, but again I’m not sure if everybody realizes the importance of state nesting. Without support for state hierarchy, traditional “flat” state machines suffer from the phenomenon known as “state-transition explosion”, which renders them unusable for real-life problems.
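
To see why state nesting matters, consider this minimal sketch of hierarchical event processing. The states and signals are illustrative, and a real statechart implementation would also handle entry/exit actions and nested transitions, which are omitted here:

    typedef struct { int sig; } Event;
    enum { PLAY_SIG, PAUSE_SIG, POWER_OFF_SIG };

    /* each state handler returns 1 if it handled the event and 0 otherwise,
    ** deferring unhandled events to its superstate */
    static int top_state(Event const *e) {
        if (e->sig == POWER_OFF_SIG) {
            /* ... common shutdown behavior, inherited by ALL substates ... */
            return 1;
        }
        return 0;  /* event silently ignored at the top */
    }

    static int playing_state(Event const *e) {
        if (e->sig == PAUSE_SIG) {
            /* ... transition to the paused state ... */
            return 1;
        }
        return top_state(e);  /* defer to the superstate */
    }

    static int paused_state(Event const *e) {
        if (e->sig == PLAY_SIG) {
            /* ... transition back to the playing state ... */
            return 1;
        }
        return top_state(e);  /* POWER_OFF_SIG still works while paused */
    }

In a flat state machine, the POWER_OFF_SIG transition would have to be repeated in every single state; with nesting, it is coded once in the common superstate, which is precisely what keeps real-life state machines from exploding.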

As it turns out, the two critical ingredients for code generation can be had with a much lower investment than traditionally thought. An event-driven, real-time framework can be no more complex than a traditional bare-bones RTOS (e.g., see the family of open source QP frameworks). A UML modeling tool for creating hierarchical state machines and generating production code can be free and can be designed to minimize the problem of “fighting the tool” (e.g., see QM). Sure, you don’t get all the bells and whistles of IBM Rhapsody, but you get arguably the most valuable ingredients. More importantly, you have a chance to achieve a positive ROI on your first project. As I said, this to me is game changing.

Can a lightweight framework like QP and the QM modeling tool scale to really big projects? Well, I’ve seen them used in projects tens of KLOC in size, by big, distributed teams, and I haven’t seen any signs of over-stressing the architecture or the tools.