embedded software boot camp

RTOS, TDD and the “O” in the S-O-L-I-D rules

Monday, June 11th, 2012 by Miro Samek

In Chapter 11 of the “Test-Driven Development for Embedded C” book, James Grenning discusses the S-O-L-I-D rules for effective software design. These rules have been compiled by Robert C. Martin and are intended to make a software system easier to develop, maintain, and extend over time. The acronym SOLID stands for the following five principles:

S: Single Responsibility Principle
O: Open-Closed Principle
L: Liskov Substitution Principle
I: Interface Segregation Principle
D: Dependency Inversion Principle

Out of all the SOLID design rules, the “O” rule (Open-Closed Principle) seems to me the most important for TDD, as well as the iterative and incremental development in general. If the system we design is “open for extension but closed for modification”, we can keep extending it without much re-work and re-testing of the previously developed and tested code. On the other hand, if the design requires constant re-visiting of what’s already been done and tested, we have to re-do both the code and the tests and essentially the whole iterative, TDD-based approach collapses. Please note that I don’t even mean here extensibility for the future versions of the system. I mean small, incremental extensions that we keep piling up every day to build the system in the first place.

So, here is my problem: RTOS-based designs are generally lousy when it comes to the Open-Closed Principle. The fundamental reason is that RTOS-based designs use blocking for everything, from waiting on a semaphore to timed delays. Blocked tasks are unresponsive for the duration of the blocking, and the whole intervening code is designed to handle the one event on which the task was waiting. For example, if a task blocks and waits for a button press, the code that follows the blocking call handles the button. So now it is hard to add a new event to this task, such as reception of a byte from a UART, both because of the timing (waiting on user input is too long and unpredictable) and because of the structure of the intervening code. In practice, people keep adding new tasks that can wait and block on the new events, but this often violates the “S” rule (Single Responsibility Principle). Often, the added tasks have the same responsibility as the old tasks and a high degree of coupling with them. This coupling requires sharing resources (a nightmare in TDD) and even more blocking with mutexes, etc.
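To make this concrete, here is a minimal C sketch of such a blocking task. The "RTOS call" is hypothetical and stubbed with a plain function so the sketch compiles; in a real system the task would be suspended inside it:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical blocking RTOS call, stubbed here so the sketch compiles.
 * In a real system this would suspend the calling task until a press. */
static int g_button_presses = 1;          /* pretend one press is pending */
static bool wait_for_button(void) {
    if (g_button_presses > 0) { --g_button_presses; return true; }
    return false;                         /* would block here instead     */
}

static int g_leds = 0;
static void led_toggle(void) { g_leds ^= 1; }

/* Typical blocking task body: all code after the blocking call is
 * structured around exactly one event (the button). Handling a second
 * event source (e.g., a UART byte) inside this loop is hard, because
 * the task is unresponsive, possibly for a long time, while it waits. */
static void button_task_step(void) {
    if (wait_for_button()) {   /* task would block here, perhaps forever */
        led_toggle();          /* intervening code handles only the button */
    }
}
```

The point of the sketch is structural: the UART event has no place to go in this task, which is exactly what pushes people toward adding yet another task.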

Compare this with the event-driven approach, in which the system processes events quickly without ever blocking. Extending such systems with new events is trivial and typically does not require re-doing existing event handlers. Such designs therefore realize the Open-Closed Principle very naturally. You can also achieve the Single Responsibility Principle much more easily, because you can group related events in one cohesive design unit. This design unit (an active object) also becomes a natural unit for TDD.
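A minimal sketch of the contrast, using a hypothetical active object with a run-to-completion dispatch function (all names are made up for illustration):

```c
#include <assert.h>

/* Hypothetical signals for a small active object. Adding a new event
 * (UART_RX_SIG) adds one case; the existing handlers are untouched,
 * which is the Open-Closed Principle in miniature. */
enum Signal { BUTTON_SIG, TIMEOUT_SIG, UART_RX_SIG };

typedef struct {
    int presses;     /* button presses seen */
    int timeouts;    /* timeouts seen       */
    int bytes;       /* UART bytes received */
} Blinky;

/* Run-to-completion event handler: processes each event quickly
 * and returns without ever blocking. */
static void Blinky_dispatch(Blinky *me, enum Signal sig) {
    switch (sig) {
        case BUTTON_SIG:  ++me->presses;  break;
        case TIMEOUT_SIG: ++me->timeouts; break;
        case UART_RX_SIG: ++me->bytes;    break;  /* new event: one new case */
    }
}
```

Because the handler never blocks, a test is just a sequence of dispatch calls followed by assertions on the object's state, with no tasks, semaphores, or timing involved.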

So, it seems to me that TDD should naturally favor event-driven approaches, such as active objects (actors), over traditional blocking RTOS.

I’m really curious about your thoughts about this, as it seems to me quite fundamental to the success of TDD. I’m looking forward to an interesting discussion.


4 Responses to “RTOS, TDD and the “O” in the S-O-L-I-D rules”

  1. Sebastian says:

    I’ve written event-driven (QP) as well as RTOS-based code, but I have no experience with embedded TDD (I’ve just read Grenning’s enlightening book). Your post encourages me again to give TDD+QP a try, but I’d really like to see a practical example – e.g. your “Fly and Shoot” tutorial coming with tests and a TDD environment.

    • Miro Samek says:

      That’s a fair request and QP is slowly getting there.

      The latest QP release 4.5.00 brings integration between QP and the Qt GUI framework. The QP-Qt integration can be used for rapid prototyping (virtual prototyping), simulation, and testing of deeply embedded software on the desktop, including building realistic user interfaces consisting of buttons, knobs, LEDs, dials, and LCD displays (both segmented and graphical). Moving embedded software development from an embedded target to the desktop eliminates the target-system bottleneck and is a critical step for TDD.

      The next step is updating the QM modeling tool to the QP 4.5.x level and extending it with the ability to launch external tools, such as make. This will allow running tests right from QM on the desktop.

      The final step is integration of a unit test framework into QP. Here, a very important component is the QS (Quantum Spy) software tracing facility, because it is ideal for reporting test results. Critically, QS has been specifically designed to run on deeply embedded targets, so tests can be executed both on the desktop and on the target.

      So, here is a high-level game plan for QP. There will certainly be some examples and, as usual, an extensive Application Note about doing TDD with QP.

  2. Greg says:


    “So, it seems to me that TDD should naturally favor event-driven approaches”

    I use TDD extensively in embedded work. Indeed, I use an event-driven approach (probably because I do not know anything better), but I always design interfaces to be RTOS- and framework-independent. This gives me more flexibility in porting among environments and simplifies unit tests.
    Dropping every dependency on any framework gives you a really portable module design. Obviously we lose some performance and footprint, and we need extra integration code, but that is usually irrelevant, especially in rapid prototyping.
    E.g., please see the interface of an ISO 15765 implementation from a commercial company: http://www.simmasoftware.com/iso-15765-users-manual.pdf
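    A minimal sketch of such an RTOS-independent design, injecting the OS services through an interface struct so the same module links against the real RTOS on target and against plain stubs in unit tests (all names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical OS-abstraction interface: the module sees only these
 * function pointers, never a concrete RTOS header. */
typedef struct {
    uint32_t (*get_ticks)(void);          /* current tick count      */
    void     (*post_event)(int signal);   /* post to the framework   */
} OsIf;

static int last_posted = -1;              /* recorded by the stub below */

/* Module logic under test: pure decision code, framework-free. */
static void heartbeat_check(const OsIf *os, uint32_t deadline) {
    if (os->get_ticks() > deadline) {
        os->post_event(42);               /* e.g. HEARTBEAT_LOST_SIG */
    }
}

/* Test doubles used in place of the RTOS during unit testing. */
static uint32_t fake_ticks(void) { return 100u; }
static void fake_post(int sig)   { last_posted = sig; }
```

On target, the `OsIf` instance would instead be wired to the real tick and event-posting calls; the module itself never changes.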

    I am glad to read about support for unit testing in the QP framework. It should push the project forward.
    One more thing:
    For me, TDD is a more general-purpose tool.
    No tool fits everything; most things are a question of practice and knowledge and, more importantly, of understanding your tool. The TDD approach is nothing more than a methodology; by itself, without tools and a build environment, it is nothing. A methodology does not favor any approach, but practice, tools, economy, and current technology may favor some approach, especially event-driven. But remember that event-driven will defend itself (my opinion)! I do not like pushy advocacy.

    “This cohesion requires sharing resources (a nightmare in TDD) and even more blocking with mutexes, etc.”
    So, in my opinion, there is no nightmare in TDD:
    1. Where TDD does not fit, do not use it; understand your tool.
    2. TDD itself is a methodology, and unit testing with TDD needs isolation. If the technology and the experience in building an isolation environment are not at an acceptable level, then TDD is not a solution.

    Do you build your environment using mocks? If not, I recommend it, because (in my opinion) TDD without mocks is the real nightmare.
    I probably do not understand the full context of the “nightmare problem”; just some thoughts and advocacy of TDD :)
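    For readers new to mocking in C, here is a tiny hand-rolled mock for a UART driver (all names hypothetical): it records what the code under test sent, so the test can verify the interaction without any hardware.

```c
#include <assert.h>
#include <stdint.h>

/* Hand-rolled mock: captures every byte "sent" by the code under test. */
#define MOCK_BUF_SIZE 16
static uint8_t mock_sent[MOCK_BUF_SIZE];
static int mock_sent_count = 0;

static void uart_send_mock(uint8_t byte) {    /* stands in for uart_send() */
    if (mock_sent_count < MOCK_BUF_SIZE) {
        mock_sent[mock_sent_count++] = byte;
    }
}

/* Code under test: frames a payload byte with start/end flags and
 * pushes it through the injected send function. */
static void send_framed(void (*send)(uint8_t), uint8_t payload) {
    send(0x7Eu);      /* start flag */
    send(payload);
    send(0x7Eu);      /* end flag   */
}
```

    In production, `send_framed` would be called with the real driver function; the test passes the mock instead and asserts on the recorded bytes. Frameworks such as CppUTest or CMock generate this boilerplate for you, but the principle is the same.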

  3. Fabian says:


    I have some doubts that equating “RTOS-based designs” with “blocking, waiting, …” is right. Typically, an event-triggered RTOS provides everything needed to implement a real-time application in an event-triggered fashion. This is different in time-triggered environments, of course. However, such systems are inherently different from event-triggered systems, and dealing with non-periodic events is really problematic: in most cases you can only poll for such events. But such systems are simply not suited for interactive applications that must respond to button presses. So, IMO this is not a matter of “RTOS-based design” but of application design in general, and of choosing the right paradigm for that application.
