RTOS, TDD and the “O” in the S-O-L-I-D rules

Monday, June 11th, 2012 Miro Samek

In Chapter 11 of the “Test-Driven Development for Embedded C” book, James Grenning discusses the S-O-L-I-D rules for effective software design. These rules have been compiled by Robert C. Martin and are intended to make a software system easier to develop, maintain, and extend over time. The acronym SOLID stands for the following five principles:

S: Single Responsibility Principle
O: Open-Closed Principle
L: Liskov Substitution Principle
I: Interface Segregation Principle
D: Dependency Inversion Principle

Out of all the SOLID design rules, the “O” rule (the Open-Closed Principle) seems to me the most important for TDD, as well as for iterative and incremental development in general. If the system we design is “open for extension but closed for modification”, we can keep extending it without much re-work and re-testing of the previously developed and tested code. On the other hand, if the design requires constantly re-visiting what’s already been done and tested, we have to re-do both the code and the tests, and essentially the whole iterative, TDD-based approach collapses. Note that I don’t even mean extensibility for future versions of the system here. I mean the small, incremental extensions that we keep piling up every day to build the system in the first place.

So, here is my problem: RTOS-based designs are generally lousy when it comes to the Open-Closed Principle. The fundamental reason is that RTOS-based designs use blocking for everything, from waiting on a semaphore to timed delays. A blocked task is unresponsive for the duration of the blocking, and all the intervening code is structured to handle the one event the task was waiting for. For example, if a task blocks waiting for a button press, the code that follows the blocking call handles the button. It is now hard to add a new event to this task, such as the reception of a byte from a UART, both because of the timing (waiting for user input takes too long and is too unpredictable) and because of the structure of the intervening code. In practice, people keep adding new tasks that can block and wait on the new events, but this often violates the “S” rule (Single Responsibility Principle). Often the added tasks have the same responsibility as the old tasks and a high degree of coupling with them. This coupling requires sharing resources (a nightmare in TDD) and even more blocking with mutexes, etc.
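
To make the structural problem concrete, here is a minimal sketch of such a blocking task in C. The names and RTOS calls (semaphore_t, semaphore_wait(), button_sem) are hypothetical stand-ins, not the API of any particular RTOS:

    /* Sketch only: hypothetical RTOS primitives, not a real API */
    typedef struct semaphore semaphore_t;  /* opaque semaphore type */
    extern semaphore_t button_sem;         /* signaled by the button ISR */
    void semaphore_wait(semaphore_t *s);   /* blocks the calling task */
    void handle_button_press(void);

    void button_task(void *param) {
        (void)param;
        for (;;) {
            semaphore_wait(&button_sem); /* unresponsive while blocked */
            handle_button_press();       /* code tailored to one event */
        }
    }

Everything after the blocking call assumes a button event, so handling a UART byte inside this loop would require restructuring the task, not extending it.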

Compare this with the event-driven approach, in which the system processes events quickly without ever blocking. Extending such a system with new events is trivial and typically does not require re-doing the existing event handlers. Therefore, such designs realize the Open-Closed Principle very naturally. You can also achieve the Single Responsibility Principle much more easily, because you can group related events in one cohesive design unit. This design unit (an active object) also becomes a natural unit for TDD.
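
For contrast, here is a minimal sketch of an event-driven handler in C. The event types and the dispatch function are illustrative, not the API of any specific active-object framework:

    /* Sketch only: illustrative event-driven structure */
    enum signal { BUTTON_PRESSED, UART_RX, TIMEOUT };

    typedef struct {
        enum signal sig;     /* what happened */
        unsigned char data;  /* e.g., the received UART byte */
    } event_t;

    /* Runs to completion and never blocks; events arrive via a queue */
    void blinky_dispatch(event_t const *e) {
        switch (e->sig) {
        case BUTTON_PRESSED:
            /* handle the button */
            break;
        case UART_RX:
            /* new event: added without touching the cases above */
            break;
        case TIMEOUT:
            /* handle the time event */
            break;
        }
    }

Adding the UART_RX event is a pure extension of the switch, and a unit test can exercise the handler simply by constructing an event and calling blinky_dispatch() directly, with no RTOS running at all.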

So, it seems to me that TDD should naturally favor event-driven approaches, such as active objects (actors), over a traditional blocking RTOS.

I’m really curious about your thoughts about this, as it seems to me quite fundamental to the success of TDD. I’m looking forward to an interesting discussion.

On the Origin of Software by Means of Artificial Selection

Friday, August 5th, 2011 Miro Samek

If you haven’t gotten your hands on James Grenning’s recent book “Test-Driven Development for Embedded C” yet, I highly recommend that you do. Here is why.

First, you need to realize that this book is not really about testing: it is about software development. The central idea behind TDD (Test-Driven Development) is that software, like any complex system in nature, has to evolve gradually and has to keep working throughout all stages of development. This idea is of course not new and goes back all the way to Darwin’s “On the Origin of Species”. More recently, in his 1977 book “Systemantics: How Systems Work and Especially How They Fail”, John Gall wrote:

“A complex system that works is invariably found to have evolved from a simple system that worked…. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system”.

The key point of TDD is to subject the software to a constant “struggle for existence”, to see whether it indeed still works, and in the process to weed out any undesired mutations.

Of course, in developing software we don’t have deep evolutionary time, so we need to accelerate the pace of software evolution. We do this by automating the testing.

For embedded development this means avoiding the target-system bottleneck (James calls it DOH, for Development On Hardware). The embedded TDD strategy is to develop embedded software on the desktop and only occasionally check it on the real embedded hardware. This means that desktop C/C++ compilers and tools (such as Visual C++, MinGW, or Cygwin on Windows, and GCC on Linux and Mac OS X) are important to us.
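
For example, a unit test written with the Unity framework (bundled with the book) is plain C that builds with any desktop compiler; assuming a recent version of Unity, and with the tick() function under test as a purely hypothetical stand-in:

    #include "unity.h"

    /* hypothetical unit under test: a tick counter that wraps at 10 */
    static int tick(int t) { return (t + 1) % 10; }

    void setUp(void)    { }  /* required by Unity */
    void tearDown(void) { }

    static void test_tick_wraps_at_ten(void) {
        TEST_ASSERT_EQUAL_INT(0, tick(9));
    }

    int main(void) {
        UNITY_BEGIN();
        RUN_TEST(test_tick_wraps_at_ten);
        return UNITY_END();
    }

Compiling this with GCC or Visual C++ and re-running it on every change gives the fast feedback loop that the target hardware cannot.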

The book comes with testing frameworks (Unity and CppUTest) and plenty of example code. The code works right out of the box on Linux, but I had some issues running it on Windows. In the process of learning the tools, I’ve prepared a small template for Visual C++ 2008, which is available for download from:

http://www.state-machine.com/attachments/blinky_tdd.zip

This demo assumes that you download and install the CppUTest framework (http://sourceforge.net/projects/cpputest/) and that you define the environment variable CPP_U_TEST to point to the directory where you installed CppUTest. The Visual Studio solution AllTest.sln is located in the blinky\tests directory.

I’d love to hear about your experiences with TDD in embedded programming. I’m sure I will blog more about it in the future.