embedded software boot camp

The Crap Code Conundrum

June 29th, 2012 by Nigel Jones

Listed below are three statements. Based on my nearly thirty years in the embedded space, I can confidently state that one I have never heard stated, another I have rarely heard stated, and the third I hear a lot. Here they are in order:

  1. I write crap code.
  2. You know so-and-so. (S)he writes really good code.
  3. This code is complete crap.

If your experience comports with mine, then it leads to what I have coined the ‘crap code conundrum’. In short, crap code is everywhere – but no one admits to or realizes they are writing it! So how can this be? Well I see several possibilities:

  1. In fact a lot of so-called crap code is labeled as such because the author did things differently from the way the reader would have done it. I think it’s important to recognize this before summarily dismissing some code. Notwithstanding this, I all too often find myself saying ‘this code is complete crap’ – because it is!
  2. This is related to point 1, and essentially comes down to the fact that different people have different ideas about what constitutes good code. For example, I think wrapping code in complex macros is an invitation for disaster. Others see it as a perfectly good way of simplifying things. (I’m right :-))
  3. The code started out being pretty good and has degenerated over time because the author hasn’t been allowed the time to perform the necessary refactoring. I think this does explain a lot of the bad code I see.
  4. The folks that write crap code are completely oblivious to the fact they are doing it. Indeed it’s only the self-aware / self-critical types that would even bother to ask themselves the question ‘is my code any good?’ After all, the first step to improving one’s code is to ask oneself the question – how can I improve my code?

My gut feel is that point 4 is most likely the main cause. Now if you are so self-absorbed that you wouldn’t even dream of asking yourself the question ‘do I write crap code?’, then I seriously doubt whether you’d be reading this article. However if you have crossed this hurdle, then how can you determine if the code you are writing is any good? Well I took a stab at this a while back with this article. However some of the commenters pointed out that it’s quite easy to write code that has good metrics – yet is still complete crap. So clearly the code metrics approach is part of the story – but not the entire story.

So a couple of weeks ago I found myself in a bar in San Francisco having a beer with Michael Barr and a very smart guy, Steve Loudon. The topic of crap code came up and I posed the question ‘how can you tell code is crap?’ After all, I think that crap code is a bit like pornography – you know it when you see it. After a spirited debate, the pithiest statement we could come up with is this:

If it’s hard to maintain, it’s crap.

Clearly there are all sorts of exceptions and qualifications, but at the end of the day I think this statement pretty much says it all. Thus if you are wondering if you write crap code, just ask yourself the question – how hard is this code to maintain? If you don’t like the answer, then it’s time to make a change.

Optimizing for the CPU / compiler

June 3rd, 2012 by Nigel Jones

It is well known that standard C language features map horribly on to the architecture of many processors. While the mapping is obvious and appalling for some processors (low-end PICs and the 8051 spring to mind), it’s still not necessarily great at the 32 bit end of the spectrum, where processors without floating point units can be hit hard by C’s floating point promotion rules. While this is all obvious stuff, it’s essentially about what those CPUs are lacking. Where it gets really interesting in the embedded space is when you have a processor that has all sorts of specialized features that are great for embedded systems – but which simply do not map on to the C language view of the world. Some examples will illustrate my point.

Shifting vs. rotation

The C language does of course have support for performing shift operations. However, these are straight shifts: bits that get shifted off the end of an integer type are simply lost. Rotation (also known as a circular shift) is different in that the bits simply get rotated back around to the other end (often through the carry bit, but not always). Now while ordinary shifting is great for, well, arithmetic operations, there are plenty of occasions in which I find myself wanting to perform a rotation. Now can I write a rotation function in C – sure – but it’s a real pain in the tuches.
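To give a flavor of that pain, here’s one way to write it by hand. This is a hypothetical helper (the name, width, and style are my choice, not from any particular compiler); it builds the rotation out of two ordinary shifts:

```c
#include <stdint.h>

/* Rotate an 8-bit value left by n bit positions.
 * The "& 7u" keeps the shift count in range, since shifting by
 * an amount >= the width of the (promoted) operand is undefined. */
static inline uint8_t rotl8(uint8_t x, unsigned n)
{
    n &= 7u;
    if (n == 0u)
    {
        return x;
    }
    /* Bits shifted out of the top are ORed back in at the bottom. */
    return (uint8_t)((unsigned)(x << n) | (unsigned)(x >> (8u - n)));
}
```

Some modern compilers will recognize this idiom and emit a single rotate instruction, but that is by no means guaranteed – which is exactly the point.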

Saturated addition

If you have ever had to design and implement an integer digital filter, I am sure you found yourself yearning for an addition operator that saturates rather than overflows. [In this form of arithmetic, if the integral type would overflow as the result of an operation, then the processor simply returns the minimum or maximum representable value as appropriate.] Processors whose designers think they might be required to perform digital filtering will have this feature built directly into their instruction sets. By contrast, the C language has zero direct support for such operations, which must be coded using nasty checks and masks.
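To illustrate what those checks look like, here’s a sketch of a saturating 16-bit add in plain C (the function name is mine, and it assumes a 32-bit intermediate type is available, which is true on any platform providing int32_t):

```c
#include <stdint.h>

/* Saturating 16-bit addition: compute the sum in a wider type so it
 * cannot overflow, then clamp it to the 16-bit range. */
static int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;

    if (sum > INT16_MAX)
    {
        return INT16_MAX;
    }
    if (sum < INT16_MIN)
    {
        return INT16_MIN;
    }
    return (int16_t)sum;
}
```

Compare that to a DSP-style instruction set, where the same operation is typically one saturating add instruction with no branches at all.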

Nibble swapping

Swapping the upper and lower nibbles of a byte is a common operation in cryptography and related fields. As a result many processors include this ever so useful instruction in their instruction sets. While you can of course write C code to do it, it’s horrible looking and grossly inefficient when compared to the built in instruction.
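For reference, the plain-C version is short enough (this helper and its name are my own sketch), yet it still typically compiles to several shift/mask instructions versus the single swap-nibbles instruction many processors provide:

```c
#include <stdint.h>

/* Swap the upper and lower nibbles of a byte:
 * 0xA5 -> 0x5A. The operands promote to int, so the
 * cast back to uint8_t discards the bits shifted above bit 7. */
static inline uint8_t swap_nibbles(uint8_t b)
{
    return (uint8_t)((b << 4) | (b >> 4));
}
```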

Implications

If you look over the examples quoted I’m sure you noticed a theme:

  1. Yes I can write C code to achieve the desired functionality.
  2. The resultant C code is usually ugly and horribly inefficient when compared to the intrinsic function of the processor.

Now in many cases, C compilers simply don’t give you access to these intrinsic features, other than by resorting to the inline assembler. Unfortunately, using the inline assembler causes a lot of problems. For example:

  1. It will often force the compiler to not optimize the enclosing function.
  2. It’s really easy to screw it up.
  3. It’s banned by most coding standards.

As a result, the intrinsic features can’t be used anyway. However, there are embedded compilers out there that support intrinsic functions. For example here’s how to swap nibbles using IAR’s AVR compiler:

foo = __swap_nibbles(bar);

There are several things to note about this:

  1. Because it’s a compiler intrinsic function, there are no issues with optimization.
  2. Similarly because one works with standard variable names, there is no particular likelihood of getting this wrong.
  3. Because it looks like a function call, there isn’t normally a problem with coding standards.

This then leads to one of the essential quandaries of embedded systems. Is it better to write completely standard (and hence presumably portable) C code, or should one take full advantage of the neat features offered by your CPU and (if it is any good) your compiler?

I made my peace with this decision many years ago and fall firmly into the camp of take advantage of every neat feature offered by the CPU / compiler – even if it is non-standard. My rationale for doing so is as follows:

  1. Porting code from one CPU to another happens rarely. Thus burdening the bulk of systems with this mythical possibility seems weird to me.
  2. End users do not care. When was the last time you heard someone extol the use of standard code in the latest widget? Instead end users care about speed, power and battery life – all things that can come about by having the most efficient code possible.
  3. It seems downright rude not to use those features that the CPU designer built in to the CPU just because some purist says I should not.

Having said this, I do of course understand completely if you are in the business of selling software components (e.g. an AES library), where using intrinsic / specialized instructions could be a veritable pain. However for the rest of the industry I say use those intrinsic functions! As always, let the debate begin.

Two out of three ain’t bad

March 25th, 2012 by Nigel Jones

With all due apologies to Meatloaf for the title, I thought I’d pass along something that I’ve found useful over the years. Being a consultant, I regularly find myself in discussions with clients concerning new product development. Without fail, the following three topics are always high on the agenda:

  1. Features, reliability, bug rates etc. I tend to lump all these into the ‘good’ requirement.
  2. Timescales. When can you start, and more importantly, when can you deliver? I call this the ‘fast’ requirement.
  3. Costs. What will it cost and what can be done to minimize these costs? I call this the ‘cheap’ requirement.

What I tell my clients is this:

You can have it good, you can have it fast and you can have it cheap – but you can’t have all three.

Despite the slightly flippant nature of this meme, I’ve found that clients instinctively recognize the truth in this statement. As a result it forces them to think seriously about what their real priorities are – and to plan accordingly.

Incidentally, you may wonder what combinations end up getting asked for the most. Here’s my qualitative assessment:

Good and Fast

This is the most common. Typically this is the client that is in a time crunch to deliver something that has already been promised. In situations like this, they have to accept that development is going to be more costly.

Good and Cheap

This is also common. This is the client that is planning ahead. They want a product developed, but know it takes time – and accept this reality.

Fast and Cheap

This comes up when clients are looking for a prototype / proof of concept.

Of course you always get the person that wants all three. I just tell them that two out of three ain’t bad :-).

Why you really shouldn’t steal source code

February 11th, 2012 by Nigel Jones

As an embedded systems consultant, I spend a substantial part of my work time on your typical embedded systems projects. However I also spend a significant amount of time working as an expert witness in legal proceedings. While the expert witness work is quite varied, one of the things I have noticed in the last few months is an increase in the number of cases related to source code theft. Typically these cases involve the plaintiff claiming that the defendant has stolen their source code and is using it in a competing product. These claims are often plausible, because as we all know, it’s trivial to walk out of a company with gigabytes of information in your pocket. Even in companies with strong security measures, it’s normally the case that the engineers are smart enough to work out how to bypass the security systems, so the ‘there’s no way I could have got the code out of there’ defense isn’t usually very plausible.

Thus given how easy it is to steal source code, why shouldn’t you do it? Well let’s start with the obvious – it’s wrong. If you don’t understand this, go and have a chat with your mother – I’m sure she’ll spell it out for you. Notwithstanding the morality (and legality) of the issue, here’s another reason why you shouldn’t do it – there’s a great chance you’ll be found out. If that happens, you can find yourself in serious legal jeopardy.

So just how easy is it to show that someone has stolen your code?

Typically the first step is for the (future) plaintiff to have their suspicions aroused. If half the engineering department leaves and starts up a company with a competing product, then it’s hardly surprising that your ex-employer will be suspicious. Of course suspicions aren’t grounds for a lawsuit. The plaintiff needs at least some evidence of your malfeasance. Now sometimes this can be done purely by the functionality / look and feel of a product. However in other cases it’s necessary for the plaintiff to get at your code’s binary image. You can make this very hard (and hence expensive) to do. However for your typical microprocessor, this step is surprisingly easy. Indeed there are any number of organizations around the world that are quite adept at extracting binary images from processors. So what, you may ask? I took the code, moved stuff around, used a different compiler and compiled it for a different processor, so good luck with showing that I used your code. Well the trouble with this is that, using tools such as IDA Pro, it’s easy to count the number of functions in the code and the arguments they take. These metrics are a remarkably good signature. [BTW, there are other metrics as well, but I really don’t want to give the whole game away.] Thus if the original code base and the stolen code base have a very similar function call signature, then there’s an excellent chance that the plaintiffs have enough evidence to file a lawsuit.

It’s at this point that you are really in trouble. As part of the lawsuit, the plaintiffs are allowed to engage in discovery. In a case like this, it means quite simply that the court will require you to turn your source code over to an expert retained by the plaintiffs (i.e. someone like yours truly). At this point, I can use any number of tools that are available for comparing code bases. Some of the tools are designed expressly for litigation purposes, while others are just some of the standard tools we use as part of our everyday work. Anyway, the point is this: these tools are really good at finding all sorts of obfuscations, including things such as:

  • Renaming variables, constants, functions, etc.
  • Changing function parameter orders
  • Replacing comments
  • Adding / deleting white space
  • Splitting / merging files

In many cases, they can detect the plagiarism (theft) even if you have switched languages. In other words, if you have indeed stolen the source code, then the chances of it not being conclusively proven at this stage are pretty slim. In short, life is about to get very unpleasant.

Having said the above, I like to think that the readers of this blog are not the type that would engage in source code theft. However I suspect that some of you have been tempted to go into business competing against your current employer. If this describes you, then what should you do to ensure that you don’t get hit with a lawsuit a year or two after starting your own business?  Well clearly the best bet is not to go into a competing business. However if you must do this, then get some legal advice (please don’t rely on what is written here – I’m just an engineer!) before you start. You will probably be advised to do a ‘clean room’ design, which in a nutshell will require you to demonstrate that the code in your competing product was designed from scratch, using nothing from your former employer. Be advised that even in these cases, if you adopt the same algorithms, then you may still be in trouble.

The absolute truth about abs()

February 1st, 2012 by Nigel Jones

One of the more depressing things about the C language is how often the results of various operations are undefined. A prime example of this is the abs() function that I’m fairly sure is liberally dispersed throughout your code (it is through mine). The undefined operation of the abs() function comes about if you have the temerity to use a compiler that represents negative numbers using 2’s complement notation. In this case, the most negative representable number is always larger in magnitude than the most positive representable number. In plain English, for 16 bit integers, the range is -32768 … +32767. Thus if you pass -32768 to the abs() function, the result is undefined.

The problem of course in an embedded system is that undefined operations are just dangerous, so surely an embedded compiler will do something sensible, like return +32767 if you pass -32768 to abs()? To test this hypothesis I whipped up the following code for my favourite 8 bit compiler (IAR’s AVR compiler).

#include <stdlib.h>
#include <inttypes.h>
#include <stdio.h>

void main(void)
{
    int16_t i;
    int16_t absi;

    i = INT16_MIN + 1;        /* Set i to one more than the most negative number */
    absi = abs(i);

    printf("Argument = %" PRId16 ". Absolute value of argument = %" PRId16, i, absi);

    i--;                    /* i should now equal INT16_MIN */
    absi = abs(i);
    printf("\nArgument = %" PRId16 ". Absolute value of argument = %" PRId16, i, absi);
}

If you don’t understand the printf strings, see this article. Here’s the output of this code:

Argument = -32767. Absolute value of argument = 32767
Argument = -32768. Absolute value of argument = -32768

Clearly abs(-32768) = -32768 is not a very useful result! If I look in <stdlib.h> I find that abs() is implemented as

  int abs(int i)
  {      /* compute absolute value of int argument */
    return (i < 0 ? -i : i);
  }

Clearly no check is being made on the bounds of the parameter, and so the result we get depends upon how the negation of the argument is performed. This leads me to my first suggestion, namely: write your own abs() function. I’ll call this function sabs() for ‘safe abs()’. The first pass implementation of sabs() looks like this (note you’ll have to include <limits.h> to get INT_MIN and INT_MAX):

int sabs(int i)
{
    int res;

    if (INT_MIN == i)
    {
        res = INT_MAX;
    }
    else
    {
        res = i < 0 ? -i : i;
    }

    return res;
}

Here’s the output:

Argument = -32767. Absolute value of argument = 32767
Argument = -32768. Absolute value of argument = 32767

I think for most embedded systems this is a better result. However what happens if you use a smaller integer than the native integer size (e.g. a 16 bit integer on a 32 bit system, or an 8 bit integer on a 16 bit system)? To test this question, I modified the code thus:

#include <stdlib.h>
#include <limits.h>
#include <inttypes.h>
#include <stdio.h>

int sabs(int i);

void main(void)
{
    int8_t i;
    int8_t absi;

    i = INT8_MIN + 1;        /* Set i to one more than the most negative number */
    absi = sabs(i);

    printf("Argument = %" PRId8 ". Absolute value of argument = %" PRId8, i, absi);

    i--;                    /* i should now equal INT8_MIN */
    absi = sabs(i);
    printf("\nArgument = %" PRId8 ". Absolute value of argument = %" PRId8, i, absi);
}

int sabs(int i)
{
    int res;

    if (INT_MIN == i)
    {
        res = INT_MAX;
    }
    else
    {
        res = i < 0 ? -i : i;
    }

    return res;
}

So in this case, the native integer size is 16 bits and I’m passing an 8 bit integer. Here’s the output:

Argument = -127. Absolute value of argument = 127
Argument = -128. Absolute value of argument = -128

Clearly, I still haven’t solved the problem, as I’d really like abs(-128) to be 127 when using 8 bit integers. I suspect that I could come up with some fancy expression that handles all integer types. However I’m a great believer in simple code, and so my recommendation is that you write sabs() functions for all your integer types. Thus:

/* Safe 8 bit absolute function */
int8_t sabs8(int8_t i)
{
    int8_t res;

    if (INT8_MIN == i)
    {
        res = INT8_MAX;
    }
    else
    {
        res = i < 0 ? -i : i;
    }

    return res;
}
/* Safe 16 bit absolute function */
int16_t sabs16(int16_t i)
{
    int16_t res;

    if (INT16_MIN == i)
    {
        res = INT16_MAX;
    }
    else
    {
        res = i < 0 ? -i : i;
    }

    return res;
}
/* Safe 32 bit absolute function */
int32_t sabs32(int32_t i)
{
    int32_t res;

    if (INT32_MIN == i)
    {
        res = INT32_MAX;
    }
    else
    {
        res = i < 0 ? -i : i;
    }

    return res;
}

The above approach is all well and good, but let’s face it, I’ve added a lot of overhead for a very rare condition. So this raises the question of whether there is a better way of doing things. Well, in many cases we use the abs() function to check against some limit. For example:

    if (abs(i) > SOME_LIMIT)
    {
        printf("\nLimit exceeded");
    }

In cases like these, you can use what I call a negative absolute function, aka nabs(). nabs() works with negative absolutes and so can’t overflow. To demonstrate, here’s the code:

int nabs(int i);

void main(void)
{
    int i;
    int absi;

    i = INT_MIN + 1;        /* Set i to one more than the most negative number */
    absi = nabs(i);

    printf("Argument = %d Negative absolute value of argument = %d", i, absi);

    i--;                    /* i should now equal INT_MIN */
    absi = nabs(i);
    printf("\nArgument = %d Negative absolute value of argument = %d", i, absi);

    i = INT_MAX;
    absi = nabs(i);
    printf("\nArgument = %d Negative absolute value of argument = %d", i, absi);
}

int nabs(int i)
{
    return i > 0 ? -i : i;
}

The output looks like this:

Argument = -32767 Negative absolute value of argument = -32767
Argument = -32768 Negative absolute value of argument = -32768
Argument = 32767 Negative absolute value of argument = -32767

Armed with this function, you merely flip your tests around, such that you have:

    if (nabs(i) < SOME_NEGATIVE_LIMIT)
    {
        printf("\nLimit exceeded");
    }

I’ll leave it to you to decide whether gaining the efficiency is worth it for the rather strange looking code.

As a final note, do a search for the abs() function on the Internet. You’ll find that most references don’t mention the undefined behavior of abs() with INT_MIN as an argument. The notable exception is the always excellent GNU reference. It’s thus hardly surprising that most embedded systems use abs().