Archive for the ‘UML’ Category

State-Space Blog Continues at state-machine.com

Monday, January 4th, 2021 Miro Samek

The readers of this blog have certainly noticed that EmbeddedGurus is no longer active.

But the State-Space blog is not dead! The blog has been migrated to state-machine.com, where it will continue. Please check it out!

ESD closes shop. What’s next in store for embedded programming?

Sunday, April 29th, 2012 Miro Samek

The demise of the ESD Magazine marks the end of an era. In his recent post “Trends in Embedded Software Design“, the magazine insider Michael Barr commemorates this occasion by looking back at the early days and offering a look ahead at the new emerging trends. As we all enjoy predictions, I’d also like to add a couple of my own thoughts about the current status and the future developments in embedded systems programming.

 

It’s never been better to be an embedded software engineer!

When I joined the embedded field in the mid-1990s, I thought that I was late to the party. The really cool stuff, such as programming in C, C++, or Ada (as opposed to assembly), RTOSes, RMA, and even modeling (remember ROOM?), had already been invented and applied to programming embedded systems. Every month, the Embedded Systems Programming magazine brought invaluable, relevant articles that advanced the art and taught a whole generation. But then, right around the time of the first Internet boom, something happened that slowed down the progress. Universities and colleges stopped teaching C in favor of Java, and the number of graduates with the skills necessary to write embedded code dropped sharply, at least in the U.S. In 2005 the Embedded Systems Programming magazine was renamed Embedded Systems Design (ESD) and lost its focus on embedded software. After this transition, I noticed that the ESD articles became far less relevant to my software work.

All this is ironic, because at the same time embedded software grew to be more important than ever. Society is already completely dependent on embedded software and looks to embedded systems for solutions to the toughest problems of our time: environmental protection, energy production and delivery, transportation, and healthcare. With Moore’s law marching unstoppably into its fifth decade, even hardware design is starting to look more like software development. There is no doubt in my mind: the golden time of embedded systems programming is right now. The skyrocketing demand for embedded software on one hand, and the diminished supply of qualified embedded developers on the other, create a perfect storm. It has simply never been better to be an embedded software engineer! And I don’t see this changing in the foreseeable future.

 

Future trends

Speaking about the future, it’s obvious that we need to develop more embedded code with fewer people in less time. The only way to achieve this is to increase programmers’ productivity. And the only known way to boost productivity is to raise the level of abstraction, either by working with higher-level concepts or by building code from higher-level components (code reuse).

Trend 1: Real-time embedded frameworks

My first prediction is that embedded applications will increasingly be based on specialized real-time embedded frameworks, which will increasingly augment and eventually displace the traditional RTOSes.

Software development for the desktop, the web, and mobile devices has already moved to frameworks (.NET, Qt, Ruby on Rails, Android, Akka, etc.), far beyond the raw APIs of the underlying OSes. In contrast, the dominant approach in the embedded space is still based directly on the venerable RTOS (or the “superloop” at the lowest end). The main difference is that when you use an RTOS you write the main body of the application (such as the thread routines for all your tasks) and you call the RTOS services (e.g., a semaphore or a time delay). When you use a framework, you reuse the main body and write the code that it calls. In other words, control resides in the framework rather than in your code, so control is inverted compared to an application based on an RTOS.
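To make this inversion of control concrete, here is a minimal sketch; all the names in it (rtos_sem_wait(), ActiveObject, DATA_READY_SIG, and so on) are hypothetical and do not belong to any particular RTOS or framework:

    struct Event { int sig; };           // minimal immutable event
    enum { DATA_READY_SIG = 1 };

    // --- RTOS style: the application owns the loop and calls OS services ---
    void rtos_sem_wait(int *sem);        // imagine a blocking RTOS primitive
    int  read_sensor();
    void process_sample(int sample);

    void sensor_task(void *) {           // main body of the task written by YOU
        static int data_ready_sem = 0;
        for (;;) {
            rtos_sem_wait(&data_ready_sem);   // explicit blocking call
            process_sample(read_sensor());
        }
    }

    // --- Framework style: the framework owns the loop and calls YOUR code ---
    class ActiveObject {
    public:
        virtual void dispatch(Event const &e) = 0;  // called BY the framework
        virtual ~ActiveObject() = default;
    };

    class SensorAO : public ActiveObject {
        void dispatch(Event const &e) override {    // you only fill in the reactions
            if (e.sig == DATA_READY_SIG) {
                process_sample(read_sensor());
            }
        }
    };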

The advantage, long discovered by developers of “big” software, is that frameworks offer a much higher level of architectural reuse and lend conceptual integrity (often based on design patterns) to the applications. Frameworks also encapsulate the difficult or dangerous aspects of their application domains. For example, most programmers vastly underestimate the skills needed to use the various RTOS mechanisms, such as semaphores, mutexes, or even simple time delays, correctly, and therefore they vastly underestimate the true costs of using an RTOS. A generic, event-driven, real-time framework can encapsulate such low-level mechanisms, so programmers can develop responsive, multitasking applications without even knowing what a semaphore is. The difference is like having to build a bridge each time you want to cross a river versus driving over a bridge that experts in this domain have built for you.
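Continuing the hypothetical sketch above, an interrupt can hand work to the application simply by posting an immutable event; the framework takes care of queuing and thread-safe delivery, so no semaphore ever appears in the application code (framework_post() is again a made-up name, not a real API):

    extern SensorAO sensor_ao;                              // the active object from the sketch above
    void framework_post(ActiveObject &ao, Event const &e);  // provided by the framework

    extern "C" void ADC_IRQHandler(void) {
        static Event const dataReady = { DATA_READY_SIG };
        framework_post(sensor_ao, dataReady);   // no semaphores, no priority-inversion traps
    }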

Real-time, embedded frameworks are not new and, in fact, have been extensively used for decades inside modeling tools capable of code generation. For example, a leading such tool, IBM Rhapsody, comes with an assortment of frameworks (OXF, IDF, SXF, MXF, MicroC, as well as third-party frameworks, such as RXF from Willert).

However, I don’t think that many people realize that frameworks of this sort can be used even without the big modeling tools and that they can be very small, about the same size as bare-bones RTOS kernels. For example, the open source QP/C or QP/C++ frameworks from Quantum Leaps (my company) require only 3-4KB of ROM and significantly less RAM than an RTOS.

I believe that in the future we will see a proliferation of various real-time embedded frameworks, just as we see a proliferation of RTOSes today.

 

Trend 2: Agile modeling and code generation

Another trend, closely related to frameworks, is modeling and automatic code generation.

I realize, of course, that big tools for modeling and code generation have been tried and have failed to penetrate the market beyond a few percentage points. The main reason, as I argued in my previous post “Economics 101: UML in Embedded Systems”, is the poor ROI (return on investment) of such tools. The investment, in terms of the tool’s cost, the learning curve, maintenance, and constant “fighting the tool”, is so large that the returns must also be extremely high to make any economic sense.

As a case in point, take the xtUML (executable UML) tools, which are perhaps the highest-ceremony tools of this kind on the market. In xtUML, you create platform-independent models (PIMs), which need to be translated into platform-specific models (PSMs) by highly customizable “model compilers”. To preserve the illusion of complete “platform and technology independence” (whatever that means for embedded systems, which by any definition of the term are specific to the task and the technology), any code entered into the model is specified in the “Object Action Language” (OAL). OAL functionally corresponds to crippled C, but has a different syntax. All this is calculated to keep you completely removed from the generated code, which you can influence only by tweaking the “model compiler”.

But what if you could skip the OAL level of indirection and use C or C++ directly to program the “action functions”? (Maybe you don’t care about the ability to change the generated code from C to, say, Java.) What if you could replace the “model compiler” indirection layer with a real-time framework? What if the tool could turn code generation “upside down” and give you direct access to the code and the freedom to actually do the physical design (see my previous post “Turning code generation upside down”)?
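Purely for illustration, an “action function” written directly in C++ could look like the sketch below; the register macro and the driver call are hypothetical, but the point is that nothing separates you from the real target code:

    #include <cstdint>

    // hypothetical memory-mapped timer register (address made up for the sketch)
    #define TIM_CR1 (*reinterpret_cast<volatile std::uint32_t *>(0x40000000u))

    void motor_set_duty(std::uint16_t duty);   // some hand-written driver

    void Motor_running_onEntry() {             // entry action of the "running" state
        TIM_CR1 |= 1u;                         // start the PWM timer directly
        motor_set_duty(500u);                  // call any hand-written code you like
    }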

Obviously, the resulting tool will be far less ambitious and “lower level” than xtUML. But “lower level” is not necessarily pejorative. In fact, the main reason for the popularity of C in embedded programming is that C is a relatively “low-level” high-level programming language.

I predict that we will see more innovation in the area of “low-level” and low-cost modeling and code-generating tools (such as the free QM modeling tool from Quantum Leaps). I hope that this innovation will eventually bring us “modeling and code generation for the masses”.

Economics 101: UML in Embedded Systems

Tuesday, April 17th, 2012 Miro Samek

With UML, just as with anything else in the embedded space, the ultimate criterion for success is the return on investment (ROI). Sure, there are many factors at play, such as the “coolness factor”, the yearning for a “silver bullet” and truly “automatic programming”, all fueled by the aggressive marketing rhetoric of tool vendors. But ultimately, to be successful, the benefits of a method must outweigh the learning curve, the cost of tools, the added maintenance costs, the hidden costs of “fighting the tool”, and so on.

As it turns out, the ROI of UML is lousy unless the models are used to generate substantial portions of the production code. Without code generation, the models inevitably fall behind and become more of a liability than an asset. In this respect I tend to agree with the “UML Modeling Maturity Index (UMMI)“, invented by Bruce Douglass. According to the UMMI, without code generation UML can reach at most 30% of its potential, and this is assuming correct use of behavioral modeling. Without it, the benefits are below 10%. This is just too low to outweigh all the costs.

Unfortunately, code generation capabilities have always been associated with complex, expensive UML tools with a very steep learning curve and a price tag to match. With such a big investment side of the ROI equation, it’s quite difficult to reach a sufficient return. Consequently, all too often the big tools get abandoned, and if they continue to be used at all, they end up as overpriced drawing packages.

So, to follow my purely economic argument, unless we make the investment part of the ROI equation low enough, without reducing the returns too much, UML has no chance. On the other hand, if we could achieve positive ROI (something like 80% of benefits for 10% of the cost), we would have a “game changer”.

To this end, when you look closer, the biggest “bang for the buck” in UML with respect to embedded code generation comes from two ingredients: (1) hierarchical state machines (UML statecharts) and (2) an embedded real-time framework. Of course, these two ingredients work best together and complement each other. State machines can’t operate in a vacuum and need an event-driven framework to provide execution context, thread-safe event passing, event queueing, etc. The framework, in turn, benefits from state machines for structure and code generation capabilities.

I’m not sure that many people realize the critical importance of a framework, but a good framework is in many ways even more valuable than the tool itself, because the framework is the big enabler of architectural reuse, testability, traceability, and code generation, to name just a few. The second component is state machines, but again I’m not sure that everybody realizes the importance of state nesting. Without support for state hierarchy, traditional “flat” state machines suffer from the phenomenon known as “state-transition explosion”, which renders them unusable for real-life problems.
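Here is a minimal sketch of why state nesting matters; the signatures below are only loosely inspired by hierarchical-state-machine frameworks and are not any specific API. The common FAULT_SIG transition is written once in the parent state, and every substate inherits it instead of repeating it:

    enum Signal { FAULT_SIG = 1, START_SIG, STOP_SIG };
    struct Evt { Signal sig; };
    struct Pump;                                            // hypothetical state machine object

    enum Ret { RET_HANDLED, RET_UNHANDLED, RET_TRAN };
    using StateHandler = Ret (*)(Pump &me, Evt const &e);   // a dispatcher would walk
                                                            // from substate to superstate
    Ret Pump_fault(Pump &me, Evt const &e);                 // target of the common transition

    // parent state: handles what is common to ALL operational substates
    Ret Pump_operational(Pump &me, Evt const &e) {
        if (e.sig == FAULT_SIG) {
            return RET_TRAN;      // transition to Pump_fault, written only once
        }
        return RET_UNHANDLED;
    }

    // substates handle only what is specific to them; FAULT_SIG "bubbles up"
    Ret Pump_idle(Pump &me, Evt const &e) {
        if (e.sig == START_SIG) {
            return RET_TRAN;      // transition to Pump_running
        }
        return RET_UNHANDLED;     // unhandled events go to Pump_operational
    }

    Ret Pump_running(Pump &me, Evt const &e) {
        if (e.sig == STOP_SIG) {
            return RET_TRAN;      // transition to Pump_idle
        }
        return RET_UNHANDLED;     // unhandled events go to Pump_operational
    }

With a flat state machine, the FAULT_SIG transition would have to be repeated in every single state, and the number of transitions would explode with the number of states.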

As it turns out, the two critical ingredients for code generation can be had with a much lower investment than traditionally thought. An event-driven, real-time framework can be no more complex than a traditional bare-bones RTOS (e.g., see the family of the open source QP frameworks). A UML modeling tool for creating hierarchical state machines and generating production code can be free and can be designed to minimize the problem of “fighting the tool” (e.g., see QM). Sure, you don’t get all the bells and whistles of IBM Rhapsody, but you get arguably the most valuable ingredients. More importantly, you have a chance to achieve a positive ROI on your first project. As I said, this to me is game changing.

Can a lightweight framework like QP and the QM modeling tool scale to really big projects? Well, I’ve seen them used for projects of tens of KLOC by big, distributed teams, and I haven’t seen any signs of over-stressing the architecture or the tool.

Turning automatic code generation upside down

Tuesday, February 14th, 2012 Miro Samek

Much ink has been spilled on the Next Big Thing in software development. One of these things has always been “automatic code generation” from high-level models (e.g., from state machines).

But even though many tools on the market today support code generation, they have gained acceptance rather slowly. Of course, many factors contribute to this, but one of the main reasons is that the generated code simply has too many shortcomings, which too often require manual “massaging” of the generated code. This, however, breaks the connection with the original model. The tool industry’s answer has been “round-trip engineering”, which is the idea of feeding changes in the code back into the model.

Unfortunately, “round-trip engineering” simply does not work well enough in practice. This should not be surprising, considering that no other code generation in software history has ever worked that way. You don’t hand-edit the binary machine code generated by an assembler. You don’t hand-edit the assembly code generated by a high-level language compiler. That would be ridiculous. So why do modeling tools assume that the generated code will be edited manually?

Well, the modeling tools have to assume this, because the generated code is hard to use “as-is” without modifications.

First, the generated code might be simply incomplete, such as skeleton code with “TODO” comments generated from class diagrams. I’m not a fan of this, because I think that in the long run such code generation is outright counterproductive.

Second, most code-generating tools impose a specific physical design (by physical design I mean the partitioning of the code into directories and files, such as header files and implementation files). For example, for generating C/C++ code (which dominates real-time embedded programming), the beaten path is to generate <class>.h and <class>.cpp files for every class. But what if I want to put the class declaration in the file scope of a .cpp file and not generate the <class>.h file at all? Actually, I often want to do this to achieve even better encapsulation. A typical tool would not allow me to do that.
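For example, here is roughly what I mean (the names are hypothetical): the entire class lives at the file scope of a single .cpp file inside an unnamed namespace, so no other module can even name it, and only a narrow free-function interface is exported:

    // motor.cpp -- no motor.h with the class is generated at all
    namespace {                      // internal linkage: invisible outside this file

    class MotorCtrl {
    public:
        void setSpeed(int rpm) { m_target = rpm; }
    private:
        int m_target = 0;
    };

    MotorCtrl theMotor;              // the single, private instance

    } // unnamed namespace

    // the only thing other modules see (declared in a small public header):
    void Motor_setSpeed(int rpm) {
        theMotor.setSpeed(rpm);
    }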

And finally, all too often the automatically generated code is hard to integrate with other code not created by the tool. For example, a class definition might rely on many included header files. While most tools recognize this and allow you to insert some custom code at the beginning of the file, they don’t allow you to insert code at an arbitrary place in the file.

But, how about a tool that actually allows you to do your own physical design? How about turning the whole code generation process upside down?

A tool like this would allow you to create and name directories and files yourself, instead of the tool imposing them on you. Obviously, this is still manual coding. But the twist here is that in this code you can “ask” the tool to synthesize parts of the code based on the model. (The “requests” are special tags that you mix into your code.) For example, you can “ask” the tool to generate a class declaration in one place, a class method definition in another, and a state machine definition in yet another place in your code.
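Purely as an illustration of the idea, such a hand-written file might look like the sketch below; the “$” tags are made up for this example and are not necessarily the exact directive syntax of QM or any other tool:

    // ship.cpp -- a file created and named by YOU, not by the tool
    // (hand-written #includes go here, wherever you want them)

    // "ask" the tool to synthesize the Ship class declaration right here:
    //$declare${AOs::Ship}

    static int local_helper(int x) {   // ordinary hand-written code in between
        return 2 * x;
    }

    // "ask" the tool to synthesize the Ship state machine definition here:
    //$define${AOs::Ship}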

This “inversion” of code generation responsibilities solves most of the problems with integrating the generated code with other code. You can simply “ask” the tool to generate as much or as little code as you see fit. The tool helps where it can add value, but otherwise you can keep it out of your way.

The idea of “inverting” code generation is so simple that I would be surprised if it were not already implemented in some tools. One example I know of is the free QM tool from my company (http://www.state-machine.com/qm). If you know of any other tool that works this way, I would be very interested to hear about it.

Linear statechart notation

Monday, November 15th, 2010 Miro Samek

The traditional, fully 2-dimensional structure of UML state diagrams is too much rope to hang yourself with. There is no standard drawing order or pattern: some designers start from the top, some from the middle, and others just “go with the flow”. Transitions can originate at any state edge and go in any direction, so they are easy to miss in any nontrivial diagram, because you don’t know exactly where to look.

In many ways, UML state diagrams resemble the old way of drawing tree structures (e.g., a family tree) with the root in the middle and branches fanning out in all directions. But nobody draws hierarchical structures that way anymore. If you look at your file-system explorer, you will see the hierarchical structure of directories and files arranged linearly, top to bottom. The “linear statechart notation” that I would like to propose here is based on the same idea.

The goal of the “linear statechart notation” is to make the diagrams more structured and legible by reducing the use of horizontal dimension.

As an example of the “linear notation”, consider the state diagram shown in the screenshot from the free QM tool. (This statechart shows the lifecycle of a space ship in a simple “Fly ‘n’ Shoot” game.) First, please take a look at the hierarchical model explorer pane on the left side of the screen. You see the Ship object, its attributes and methods, followed by the Statechart with the hierarchically arranged elements below. Now, please take a look at the state diagram in the middle. You will see a one-to-one correspondence between the diagram and the explorer view. Please note that the state diagram is essentially 1-dimensional.

Finally, please take a look at the right-hand side, which is a screenshot of the *generated code* in an Eclipse-based tool (Atollic TrueSTUDIO in this case). The generated code corresponds to the state “flying”, which is highlighted in the diagram. Interestingly, the code itself is an extension of the “linear notation” that zooms into the “flying” state. Again, just go from top to bottom in the code, inside the “flying” state in the diagram, and in the “flying” element in the model explorer. You see exactly the same elements represented in the same order, from the entry action through the events TIME_TICK, PLAYER_TRIGGER, DESTROYED_MINE, HIT_WALL, and HIT_MINE. I think this consistency and traceability is great.
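For readers without the screenshot at hand, here is a rough, simplified sketch of the shape of such a generated handler; the types, the helper functions, and the “exploding” target state are hypothetical stand-ins rather than the literal QM/QP output. Reading it from top to bottom mirrors the order of the elements in the diagram and in the model explorer:

    enum Signal {
        ENTRY_SIG = 1, TIME_TICK, PLAYER_TRIGGER, DESTROYED_MINE, HIT_WALL, HIT_MINE
    };
    struct Evt  { Signal sig; int score; };
    struct Ship { int score; };
    enum Status { HANDLED, TRAN_EXPLODING, SUPER };   // simplified return codes

    void Ship_advance(Ship &me);          // hypothetical helpers called by the actions
    void Ship_fireMissile(Ship &me);

    Status Ship_flying(Ship &me, Evt const &e) {
        switch (e.sig) {
            case ENTRY_SIG:               // 1. entry action
                me.score = 0;
                return HANDLED;
            case TIME_TICK:               // 2. advance the ship
                Ship_advance(me);
                return HANDLED;
            case PLAYER_TRIGGER:          // 3. fire a missile
                Ship_fireMissile(me);
                return HANDLED;
            case DESTROYED_MINE:          // 4. update the score
                me.score += e.score;
                return HANDLED;
            case HIT_WALL:                // 5./6. the ship explodes
            case HIT_MINE:
                return TRAN_EXPLODING;    // transition to the "exploding" state
            default:
                return SUPER;             // unhandled events go to the superstate
        }
    }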

I’d like to hear your comments about the proposed notation. I also hope that this post explains a bit how the free QM tool works and generates code.