
State-Space Blog Continues at state-machine.com

Monday, January 4th, 2021 Miro Samek

The readers of this blog have certainly noticed that EmbeddedGurus is no longer active.

But the State-Space blog is not dead! The blog has been migrated to state-machine.com, where it will continue. Please check it out!

Dual Targeting and Agile Prototyping of Embedded Software on Windows

Friday, April 12th, 2013 Miro Samek

When developing embedded code for devices with non-trivial user interfaces, it often pays off to build a prototype (virtual prototype) of the embedded system on a PC. The strategy is called “dual targeting”, because you develop the software on one machine (e.g., a Windows PC) and run it both on the deeply embedded target and on the PC. Dual targeting is the main strategy for avoiding the “target system bottleneck” in agile embedded software development, popularized in the book “Test-Driven Development for Embedded C” by James Grenning.

Avoiding Target Hardware Bottleneck with Dual Targeting

Please note that dual targeting does not mean that the embedded device has anything to do with the PC. Nor does it mean that the simulation must be cycle-exact with the embedded target CPU.

Dual targeting simply means that from day one, your embedded code (typically in C) is designed to run on at least two platforms: the final target hardware and your PC. All you really need for this is two C compilers: one for the PC and another for the embedded device.

However, the dual targeting strategy does require a specific way of designing the embedded software, such that any target hardware dependencies are handled through a well-defined interface often called the Board Support Package (BSP). This interface has at least two implementations: one for the actual target and one for the PC, for example running Windows. With such an interface in place, the bulk of the embedded code can remain completely unaware of which BSP implementation it is linked to, so it can be developed quickly on the PC, yet it can also run on the target hardware without any changes.
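To make this concrete, here is a minimal sketch of what such a BSP interface might look like. The function names (BSP_init(), BSP_ledOn(), etc.) are hypothetical and not taken from any particular project; the point is only that the application includes bsp.h and the build decides which implementation gets linked in:

    /* bsp.h -- hardware-independent interface (hypothetical names) */
    #ifndef BSP_H
    #define BSP_H

    void BSP_init(void);           /* initialize the board (or the PC simulation) */
    void BSP_ledOn(int led);       /* turn the given LED on */
    void BSP_ledOff(int led);      /* turn the given LED off */
    int  BSP_buttonPressed(void);  /* poll the state of the user button */

    #endif /* BSP_H */

    /* bsp_target.c -- linked into the build for the embedded target */
    #include "bsp.h"
    void BSP_ledOn(int led)  { /* write to the GPIO port of the target MCU */ }
    void BSP_ledOff(int led) { /* write to the GPIO port of the target MCU */ }
    /* ... */

    /* bsp_win32.c -- linked into the build for the PC */
    #include "bsp.h"
    void BSP_ledOn(int led)  { /* e.g., repaint the lit LED bitmap in the GUI */ }
    void BSP_ledOff(int led) { /* e.g., repaint the dimmed LED bitmap */ }
    /* ... */

The application code calls only BSP_ledOn() and friends, so exactly the same application sources compile and link for both builds.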

While some embedded programmers view dual targeting as a self-inflicted burden, more experienced developers generally agree that paying attention to the boundaries between software and hardware is actually beneficial, because it results in more modular, more portable, and more maintainable software with a much longer useful lifetime. The investment in dual targeting also has an immediate payback in the vastly accelerated compile-run-debug cycle, which is much faster and more productive on the powerful PC than on the much slower, resource-constrained, deeply embedded target with limited visibility into the running code.

Agile Rapid Prototyping of Embedded Software with Dual Targeting

Dual targeting can have many different objectives. For example, in the test-driven development (TDD) of embedded software, the objective is to build relatively concise unit tests and execute them on the desktop as console-type applications. The main challenge is the management of inter-module dependencies and the flexibility of the tests, but the overall architecture of the final product is of lesser concern, because the unit tests are executed in isolation using special test harnesses.

However, dual targeting can also be used for (rapid) prototyping and simulating whole embedded devices on the PC, not just for executing unit tests. In this case, the objective is to build as complete a prototype of the embedded device as possible, in the form of a GUI-type application. This approach is particularly interesting for embedded systems with non-trivial user interfaces, such as home appliances, office equipment, thermostats, medical devices, industrial controllers, etc. As it turns out, a significant percentage of the code embedded in all those devices is devoted to the user interface and can be, or even should be, developed on the desktop.

QWIN GUI Toolkit

When developing embedded code for devices with non-trivial user interfaces, one often runs into the problem of representing the embedded front panels as GUI elements on the PC. The problem is so common that I’m really surprised my internet search couldn’t uncover any simple C-only interface to the basic elements, such as LCDs, buttons, and LEDs. I’ve posted questions on StackOverflow and other such forums, but again I got recommendations for .NET, C#, Visual Basic, and many expensive proprietary tools, none of which provide an easy, direct binding to C. My objective is not really that complicated, yet it seems that every embedded developer has to re-invent this wheel over and over again.

[qwin_ani: animation of the QWIN GUI Toolkit]

So, to help embedded developers interested in prototyping embedded devices on Windows, I have created a “QWIN GUI Toolkit” and have posted it on SourceForge (as part of the Qtools collection) under the permissive MIT open source license. This toolkit relies only on the raw Win32 API in C and currently provides the following elements:

  • Graphic display for efficient, pixel-addressable displays, such as graphical LCDs, OLEDs, etc., with full 24-bit color.
  • Segment display for segmented displays, such as segment LCDs and segment LEDs, with generic, custom bitmaps for the segments.
  • Owner-drawn buttons with custom “depressed” and “released” bitmaps and capable of generating separate events when depressed and when released.
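To give a flavor of the underlying approach, here is a bare-bones raw Win32 window written in plain C. This is only a sketch of the style the toolkit builds on, not code taken from the toolkit itself, and the window class name “FrontPanel” is made up for the example:

    #include <windows.h>

    /* minimal window procedure: just let Windows close the window cleanly */
    static LRESULT CALLBACK wndProc(HWND hWnd, UINT msg,
                                    WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_DESTROY) {
            PostQuitMessage(0);          /* end the message loop */
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev,
                       LPSTR cmdLine, int cmdShow)
    {
        WNDCLASS wc = { 0 };
        HWND hWnd;
        MSG msg;

        (void)hPrev; (void)cmdLine;      /* unused legacy parameters */

        wc.lpfnWndProc   = &wndProc;
        wc.hInstance     = hInst;
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = TEXT("FrontPanel");
        RegisterClass(&wc);

        hWnd = CreateWindow(TEXT("FrontPanel"), TEXT("Embedded front panel"),
                            WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                            320, 240, NULL, NULL, hInst, NULL);
        ShowWindow(hWnd, cmdShow);

        while (GetMessage(&msg, NULL, 0, 0)) {  /* standard message pump */
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return (int)msg.wParam;
    }

Bitmapped “LCDs”, LEDs, and owner-drawn buttons are then just child windows and WM_DRAWITEM handling layered on top of exactly this kind of skeleton; such a file should build with MinGW (e.g., gcc panel.c -mwindows) or with Visual C++.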

The toolkit comes with an example and an App Note, showing how to handle input from the owner-drawn buttons, regular buttons, the keyboard, and the mouse. You can also view a 1-minute YouTube video “Fly ‘n’ Shoot game on Windows” that shows a virtual embedded board running a game.

Regarding the size and complexity of the “QWIN GUI Toolkit”, the implementation of the aforementioned GUI elements takes only about 250 lines of C. The example with all sources of input and a lot of comments amounts to some 300 lines of C. The toolkit has been tested with the free MinGW compiler, the free Visual C++ Express 2013, and the free ResEdit resource editor.

Enjoy!

RTOS, TDD and the “O” in the S-O-L-I-D rules

Monday, June 11th, 2012 Miro Samek

In Chapter 11 of the “Test-Driven Development for Embedded C” book, James Grenning discusses the S-O-L-I-D rules for effective software design. These rules have been compiled by Robert C. Martin and are intended to make a software system easier to develop, maintain, and extend over time. The acronym SOLID stands for the following five principles:

S: Single Responsibility Principle
O: Open-Closed Principle
L: Liskov Substitution Principle
I: Interface Segregation Principle
D: Dependency Inversion Principle

Out of all the SOLID design rules, the “O” rule (Open-Closed Principle) seems to me the most important for TDD, as well as for iterative and incremental development in general. If the system we design is “open for extension but closed for modification”, we can keep extending it without much re-work and re-testing of the previously developed and tested code. On the other hand, if the design requires constant re-visiting of what’s already been done and tested, we have to re-do both the code and the tests, and essentially the whole iterative, TDD-based approach collapses. Please note that I don’t even mean extensibility for future versions of the system. I mean the small, incremental extensions that we keep piling up every day to build the system in the first place.

So, here is my problem: RTOS-based designs are generally lousy when it comes to the Open-Closed Principle. The fundamental reason is that RTOS-based designs use blocking for everything, from waiting on a semaphore to timed delays. Blocked tasks are unresponsive for the duration of the blocking, and the code that follows the blocking call is designed to handle the one event on which the task was waiting. For example, if a task blocks and waits for a button press, the code that follows the blocking call handles the button. Now it is hard to add a new event to this task, such as the reception of a byte from a UART, both because of the timing (waiting on user input is too long and unpredictable) and because of the structure of the intervening code. In practice, people keep adding new tasks that can wait and block on the new events, but this often violates the “S” rule (Single Responsibility Principle). Often, the added tasks have the same responsibility as the old tasks and have a high degree of coupling with them. This coupling requires sharing resources (a nightmare in TDD) and even more blocking with mutexes, etc.
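To make the problem concrete, here is a minimal sketch of such a blocking task. The FreeRTOS-style calls, the buttonSema semaphore, and the handleButtonPress() function are used purely for illustration; no particular RTOS is assumed here:

    #include "FreeRTOS.h"
    #include "semphr.h"

    extern SemaphoreHandle_t buttonSema;  /* given by the button ISR (hypothetical) */
    extern void handleButtonPress(void);  /* application code (hypothetical) */

    /* a task structured entirely around the one event it blocks on */
    void buttonTask(void *param) {
        (void)param;
        for (;;) {
            /* block, potentially forever, until the button is pressed */
            xSemaphoreTake(buttonSema, portMAX_DELAY);

            /* everything from here on is hard-wired to the button event;
             * handling a new event source (say, a UART byte) in this task
             * would require restructuring this whole code path */
            handleButtonPress();
        }
    }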

Compare this with the event-driven approach, in which the system processes events quickly without ever blocking. Extending such systems with new events is trivial and typically does not require re-doing the existing event handlers. Therefore such designs realize the Open-Closed Principle very naturally. You can also achieve the Single Responsibility Principle much more easily, because you can group related events in one cohesive design unit. This design unit (an active object) also becomes a natural unit for TDD.
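For comparison, here is an equally minimal sketch of an event-driven handler. The event signals and the Panel_dispatch() function are hypothetical; the point is only that a new event source becomes a new case, while the existing cases (and their tests) stay untouched:

    /* run-to-completion event processing: handle the event and return */
    enum Signal { BUTTON_PRESSED_SIG, BUTTON_RELEASED_SIG, UART_RX_SIG };

    typedef struct {
        enum Signal sig;
        unsigned payload;          /* e.g., the received UART byte */
    } Event;

    void Panel_dispatch(Event const *e) {
        switch (e->sig) {
            case BUTTON_PRESSED_SIG:
                /* handle the button press; no blocking, return right away */
                break;
            case UART_RX_SIG:
                /* the new event source added as a new case -- the existing
                 * cases did not have to change */
                break;
            default:
                break;
        }
    }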

So, it seems to me that TDD should naturally favor event-driven approaches, such as active objects (actors), over traditional blocking RTOS.

I’m really curious about your thoughts about this, as it seems to me quite fundamental to the success of TDD. I’m looking forward to an interesting discussion.

ESD closes shop. What’s next in store for embedded programming?

Sunday, April 29th, 2012 Miro Samek

The demise of the ESD Magazine marks the end of an era. In his recent post “Trends in Embedded Software Design”, the magazine insider Michael Barr commemorates this occasion by looking back at the early days and offering a look ahead at the new emerging trends. As we all enjoy predictions, I’d also like to add a couple of my own thoughts about the current status and the future developments in embedded systems programming.

 

It’s never been better to be an embedded software engineer!

When I joined the embedded field in the mid 1990s, I thought that I was late to the party. The really cool stuff, such as programming in C, C++, or Ada (as opposed to assembly), RTOS, RMA, or even modeling (remember ROOM?), had already been invented and applied to programming embedded systems. Every month the Embedded Systems Programming magazine brought invaluable, relevant articles that advanced the art and taught a whole generation. But then, right around the time of the first Internet boom, something happened that slowed down the progress. Universities and colleges stopped teaching C in favor of Java, and the number of graduates with the skills necessary to write embedded code dropped sharply, at least in the U.S. In 2005 the Embedded Systems Programming magazine was renamed Embedded Systems Design (ESD) and lost its focus on embedded software. After this transition, I’ve noticed that the ESD articles became far less relevant to my software work.

All this is ironic, because at the same time embedded software has grown to be more important than ever. Society as a whole is already completely dependent on embedded software and looks to embedded systems for solutions to the toughest problems of our time: environmental protection, energy production and delivery, transportation, and healthcare. With Moore’s law marching unstoppably into its fifth decade, even hardware design is starting to look more like software development. There is no doubt in my mind: the golden time of embedded systems programming is right now. The skyrocketing demand for embedded software on one hand, and the diminished supply of qualified embedded developers on the other, creates a perfect storm. It simply has never been better to be an embedded software engineer! And I don’t see this changing in the foreseeable future.

 

Future trends

Speaking about the future, it’s obvious that we need to develop more embedded code with fewer people in less time. The only way to achieve this is to increase programmer productivity. And the only known way to boost productivity is to raise the level of abstraction, either by working with higher-level concepts or by building code from higher-level components (code reuse).

Trend 1: Real-time embedded frameworks

My first prediction is that embedded applications will increasingly be based on specialized real-time embedded frameworks, which will increasingly augment and eventually displace the traditional RTOSes.

Software development for the desktop, the web, and mobile devices has already moved to frameworks (.NET, Qt, Ruby on Rails, Android, Akka, etc.), far beyond the raw APIs of the underlying OSes. In contrast, the dominating approach in the embedded space is still based directly on the venerable RTOS (or the “superloop” at the lowest end). The main difference is that when you use an RTOS, you write the main body of the application (such as the thread routines for all your tasks) and you call the RTOS services (e.g., a semaphore or a time delay). When you use a framework, you reuse the main body and write the code that it calls. In other words, the control resides in the framework, not in your code, so the control is inverted compared to an application based on an RTOS.
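The inversion of control can be sketched in a few lines of C. All names here are hypothetical; the only point is who owns the main body of the application:

    typedef struct Event Event;   /* opaque event type (hypothetical) */
    void waitForEvent(void);      /* blocks until some event occurs (hypothetical) */
    void doWork(void);            /* the application-specific work (hypothetical) */

    /* RTOS style: YOU write the main body of the task and call the kernel */
    void myTask(void *param) {
        (void)param;
        for (;;) {
            waitForEvent();       /* e.g., block on a semaphore or a time delay */
            doWork();             /* then handle that one event */
        }
    }

    /* framework style: the framework owns the (hidden) event loop and calls
     * the handler that you plug into it -- the control is inverted */
    void MyHandler_onEvent(Event const *e) {
        (void)e;
        doWork();                 /* process the event quickly and return */
    }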

The advantage, long discovered by the developers of “big” software, is that frameworks offer a much higher level of architectural reuse and give conceptual integrity (often based on design patterns) to the applications. Frameworks also encapsulate the difficult or dangerous aspects of their application domains. For example, most programmers vastly underestimate the skills needed to use the various RTOS mechanisms, such as semaphores, mutexes, or even simple time delays, correctly, and therefore they also underestimate the true costs of using an RTOS. A generic, event-driven, real-time framework can encapsulate such low-level mechanisms, so programmers can develop responsive, multitasking applications without even knowing what a semaphore is. The difference is like having to build a bridge each time you want to cross a river versus driving over a bridge that experts in this domain have built for you.

Real-time, embedded frameworks are not new and, in fact, have been used extensively for decades inside modeling tools capable of code generation. For example, a leading tool of this kind, IBM Rhapsody, comes with an assortment of frameworks (OXF, IDF, SXF, MXF, MicroC, as well as third-party frameworks, such as RXF from Willert).

However, I don’t think that many people realize that frameworks of this sort can be used even without the big modeling tools and that they can be very small, about the same size as bare-bones RTOS kernels. For example, the open source QP/C or QP/C++ frameworks from Quantum Leaps (my company) require only 3-4KB of ROM and significantly less RAM than an RTOS.

I believe that in the future we will see proliferation of various real-time embedded frameworks, just as we see proliferation of RTOSes today.

 

Trend 2: Agile modeling and code generation

Another trend, closely related to frameworks, is modeling and automatic code generation.

I realize, of course, that big tools for modeling and code generation have been tried and have failed to penetrate the market beyond a few percentage points. The main reason, as I argued in my previous post “Economics 101: UML in Embedded Systems”, is the poor ROI (return on investment) of such tools. The investment, in terms of the tool’s cost, learning curve, maintenance, and constant “fighting the tool”, is so large that the returns must also be extremely high to make any economic sense.

As a case in point, take for example the xtUML (executable UML) tools, which are perhaps the highest-ceremony tools of this kind on the market. In xtUML, you create platform-independent models (PIMs), which need to be translated to platform-specific models (PSMs) by highly customizable “model compilers”. To preserve the illusion of complete “platform and technology independence” (whatever that means in embedded systems, which by any definition of the term are specific to the task and technology), any code entered into the model is specified in the “Object Action Language” (OAL). OAL functionally corresponds to a crippled C, but has a different syntax. All this is calculated to keep you completely removed from the generated code, which you can only influence by tweaking the “model compiler”.

But what if you could skip the indirection level of OAL and use C or C++ directly for programming the “action functions”? (Maybe you don’t care for the ability to change the generated code from C to, say, Java.) What if you could replace the “model compiler” indirection layer with a real-time framework? What if the tool could turn the code generation “upside down” and give you direct access to the code and the freedom to actually do the physical design (see my previous post “Turning code generation upside down”)?

Obviously, the resulting tool would be far less ambitious and “lower level” than xtUML. But “lower level” is not necessarily pejorative. In fact, the main reason for the popularity of C in embedded programming is that C is a relatively “low level” high-level programming language.

I predict that we will see more innovation in the area of “low level” and low-cost modeling and code generating tools (such as the free QM modeling tool from Quantum Leaps). I hope that this innovation will eventually bring us “modeling and code generation for the masses”.

On the Origin of Software by Means of Artificial Selection

Friday, August 5th, 2011 Miro Samek

If you haven’t put your hands on James Grenning’s recent book “Test-Driven Development for Embedded C” yet, I highly recommend you do. Here is why.

First, you need to realize that this book is not really about testing; it is about software development. The central idea behind TDD (Test-Driven Development) is that software, like any complex system in nature, has to evolve gradually and has to keep working throughout all of the development stages. This idea is of course not new and goes back all the way to Darwin’s “On the Origin of Species”. More recently, in his 1977 book “Systemantics: How Systems Work and Especially How They Fail”, John Gall wrote:

“A complex system that works is invariably found to have evolved from a simple system that worked…. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system”.

The key point of TDD is to subject the software to a constant “struggle for existence”, to actually see whether it is indeed still working and, in the process, to weed out any undesired mutations.

Of course, in developing software we don’t have the deep evolutionary time, so we need to accelerate the pace of software evolution. We do this by automating the testing.

For embedded development this means avoiding the target system bottleneck (James calls it DOH: Development On Hardware). The embedded TDD strategy is to develop embedded software on the desktop and only occasionally check it on the real embedded hardware. This means that the C/C++ compilers and tools for the desktop (such as Visual C++, MinGW, or Cygwin for Windows, and GCC for Linux and Mac OS X) are important for us.

The book comes with testing frameworks (Unity and CppUTest) and plenty of example code. The code works right out of the box on Linux, but I had some issues running it on Windows. In the process of learning the tools, I’ve prepared a small template for Visual C++ 2008, which is available for download from:

http://www.state-machine.com/attachments/blinky_tdd.zip

This demo assumes that you download and install the CppUTest framework (http://sourceforge.net/projects/cpputest/) and that you define the environment variable CPP_U_TEST to point to the directory where you installed CppUTest. The Visual Studio solution AllTest.sln is located in the blinky\tests directory.
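Independent of the CppUTest/Visual C++ setup, it may help to see the general shape of such a desktop-hosted test. Here is a minimal sketch using Unity, the other framework bundled with the book; the blinky.h module and its Blinky_* functions are hypothetical and serve only to illustrate the structure:

    #include "unity.h"
    #include "blinky.h"             /* hypothetical module under test */

    void setUp(void)    { Blinky_init(); }    /* runs before every test */
    void tearDown(void) { }                   /* runs after every test */

    static void test_led_starts_off(void) {
        TEST_ASSERT_FALSE(Blinky_isLedOn());
    }

    static void test_toggle_turns_led_on(void) {
        Blinky_toggle();
        TEST_ASSERT_TRUE(Blinky_isLedOn());
    }

    int main(void) {
        UNITY_BEGIN();
        RUN_TEST(test_led_starts_off);
        RUN_TEST(test_toggle_turns_led_on);
        return UNITY_END();         /* non-zero exit code if any test failed */
    }

The whole thing builds and runs as an ordinary console application on the PC, which is exactly the point of avoiding DOH.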

I’d love to hear about your experiences with TDD in embedded programming. I’m sure I will blog more about it in the future.