Don’t Follow These 5 Dangerous Coding Standard Rules

Tuesday, August 30th, 2011 Michael Barr

Over the summer I happened across a brief blog post by another firmware developer in which he presented ten C coding rules for better embedded C code. I had an immediate strong negative reaction to half of his rules and later came to dislike a few more, so I’m going to describe what I don’t like about each. I’ll refer to this author as BadAdvice. I hope that, if you have followed rules like the five below, my comments will persuade you to move away from them and toward a set of embedded C coding rules that keep bugs out. If you disagree, please start a constructive discussion in the comments.

Bad Rule #1: Do not divide; use right shift.

As worded, the above rule is way too broad. It’s not possible to always avoid C’s division operator. First of all, right shifting only works as a substitute for division when it is integer division and the denominator is a power of two (e.g., right shift by one bit to divide by 2, two bits to divide by 4, etc.). But I’ll give BadAdvice the benefit of the doubt and assume that he meant to say you should “use right shift as a substitute for division whenever possible”.

For his example, BadAdvice shows code to compute an average over 16 integer data samples, which are accumulated into a variable sum during the first 16 iterations of a loop. On the 17th iteration, the average is computed by right shifting sum by 4 bits (i.e., dividing by 16). Perhaps the worst thing about this example code is how tightly it is tied to a pair of #defines for the magic numbers 16 and 4. A simple and likely refactoring to average over 15 samples instead of 16 would break the entire example; you’d have to change the right shift to a proper divide. It’s also easy to imagine someone changing AVG_COUNT from 16 to 15 without realizing the shift count must change too, in which case the sum of 15 samples would still be right shifted by 4 bits.
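To make the fragility concrete, here is a minimal sketch of the averaging code as I understand it; the identifier AVG_COUNT and the function shape are my own reconstruction, not BadAdvice’s exact code:

#include <stdint.h>

#define AVG_COUNT 16

uint16_t average(uint16_t const samples[AVG_COUNT])
{
    uint32_t sum = 0;

    for (int i = 0; i < AVG_COUNT; i++)
    {
        sum += samples[i];
    }

    // Dividing states the intent directly; a hard-coded "sum >> 4" would
    // silently give the wrong answer if AVG_COUNT were later changed to 15.
    return (uint16_t)(sum / AVG_COUNT);
}

A decent compiler will still emit a shift for a constant power-of-two divisor, so nothing need be lost by writing the division explicitly.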

Better Rule: Shift bits when you mean to shift bits and divide when you mean to divide.

There are many sources of bugs in software programs. The original programmer creates some bugs; others result from misunderstandings by those who later maintain, extend, port, and/or reuse the code. Thus coding rules should place the highest value on readability and portability. The choice to deviate from a good coding rule in favor of efficiency should be made only in the specific subset of the code that truly needs it. Unless there is a very specific function or construct that needs to be hand-optimized, efficiency concerns should be left to the compiler.

Bad Rule #2: Use variable types in relation to the maximum value that variable may take.

BadAdvice gives the example of a variable named seconds, which holds integer values from 0 to 59. And he shows choosing char for the type over int. His stated goal is to reduce memory use.

In principle, I agree with the underlying practices of not always declaring variables int and choosing the type (and signedness) based on the maximum range of values. However, I think it essential that any practice like this be matched with a corresponding practice of always declaring specifically sized variables using C99’s portable fixed-width integer types.

It is impossible to understand the reasoning of the original programmer from unsigned char seconds;. Did he choose char because it is big enough, or for some other reason? (Remember too that a plain char may be naturally signed or unsigned, depending on the compiler. Perhaps the original programmer even knows his compiler’s chars are unsigned by default and omits that keyword.) The intent behind variables declared short and long is at least as difficult to decipher. A short integer may be 16 bits or 32 bits (or something else), depending on the compiler, and that width is something the original programmer may or may not have relied upon.

Better Rule: Whenever the width of an integer matters, use C99’s portable fixed-width integer types.

A variable declared uint16_t leaves no doubt about the original intent, as it is very clearly meant to be a container for an unsigned integer value no wider than 16 bits. This type selection adds new and useful information to the source code and makes programs both more readable and more portable. Now that C99 has standardized the names of fixed-width integer types, declarations involving short and long should no longer be used. Even char should only be used for actual character (i.e., ASCII) data. (Of course, there may still be int variables around where size does not matter, such as loop counters.)
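As a quick sketch of what this looks like in practice (the variable names below are my own, invented for illustration):

#include <stdint.h>

uint8_t  seconds;      // 0 to 59; the width and unsignedness are explicit.
uint16_t adc_reading;  // Clearly a container for an unsigned value no wider than 16 bits.
int32_t  temperature;  // Signed and exactly 32 bits wide wherever <stdint.h> provides it.
int      loop_count;   // Plain int remains fine where the width doesn't matter.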

Bad Rule #3: Avoid >= and use <.

As worded above, I can’t say I understand this rule or its goal sufficiently, but to illustrate it BadAdvice gives the specific example of an if-else if wherein he recommends if (speed < 100) ... else if (speed > 99) instead of if (speed < 100) ... else if (speed >= 100). Say what? First of all, why not just use a plain else in that scenario, since speed must be either below 100 or at least 100?

Even if we assume we need to test for less than 100 first and then for greater than or equal to 100 second, why would anyone in their right mind prefer to write greater than 99? That would be confusing to any reader of the code. To me it reads like a bug, and I’d have to keep going back over it looking for the logical problem in the apparently mismatched range checks. Additionally, I believe BadAdvice’s terse rationale of “Benefits: Lesser Code” is simply untrue. Any half-decent compiler should be able to optimize either comparison as needed for the underlying processor.
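For the record, here is how a plain else keeps the intent unmistakable; the bodies are placeholders of my own, since BadAdvice’s example doesn’t show them:

if (speed < 100)
{
    // ... handle speeds below 100 ...
}
else // speed is necessarily >= 100 here; no second comparison is needed.
{
    // ... handle speeds of 100 and above ...
}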

Better Rule: Use whatever comparison operator is easiest to read in a given situation.

One of the very best things any embedded programmer can do is to make their code as readable as possible to as broad an audience as possible. That way another programmer who needs to modify your code, a peer doing code review to help you find bugs, or even you years later, will find the code hard to misinterpret.

Bad Rule #4: Avoid variable initialization while defining.

BadAdvice says that following the above rule will make initialization faster. He gives the example of unsigned char MyVariable = 100; (not preferred) vs:


#define INITIAL_VALUE 100
unsigned char MyVariable;
// Before entering forever loop in main
MyVariable = INITIAL_VALUE;

Though it’s unclear from the above, let’s assume that MyVariable is a local stack variable. (It could also be global, the way his pseudocode is written.) I don’t think there should be any (portably) noticeable efficiency gain from switching to the latter, and I do think that following this rule creates an opening to forget the initialization altogether or to unintentionally place the initialization code within a conditional clause.

Better Rule: Initialize every variable as soon as you know the initial value.

I’d much rather see every variable initialized on creation, with the creation of the variable postponed as long as possible. If you’re using a C99 or C++ compiler, you can declare a variable anywhere within the body of a function.
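Here’s a brief sketch of what I mean, reusing MyVariable from BadAdvice’s example; the surrounding function is invented for illustration:

void control_task(void)
{
    // ... earlier work that doesn't need MyVariable ...

    // C99: declared at the point of first use and initialized in the same
    // statement, so there is no window in which it holds a garbage value.
    unsigned char MyVariable = 100;

    // ... code that uses MyVariable ...
}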

Bad Rule #5: Use #defines for constant numbers.

The example given for this rule is of defining three constant values, including #define ON 1 and #define OFF 0. The rationale is “Increased convenience of changing values in a single place for the whole file. Provides structure to the code.” And I agree that using named constants instead of magic numbers elsewhere in the code is a valuable practice. However, I think there is an even better way to go about this.

Better Rule: Declare constants using const or enum.

C’s const keyword can be used to declare a variable of any type as unchangeable at run-time. This is a preferable way of declaring constants because it gives each constant a type, which allows comparisons to be made properly and lets the compiler type-check the constant if it is passed as a parameter to a function call. Enumeration sets may be used instead for integer constants that come in groups, such as enum { OFF = 0, ON };.
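A minimal sketch of both forms, reusing ON and OFF from BadAdvice’s example (the speed constant is my own invention):

#include <stdint.h>

enum { OFF = 0, ON = 1 };                  // Related integer constants grouped in an enumeration.

static const uint16_t MAX_SPEED_MPH = 100; // A typed constant the compiler can check.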

Final Thoughts

There are two scary things about these and a few of the other rules on BadAdvice’s blog. The first is that they are out there on the Internet to be found with a search for embedded C coding rules. The second is that BadAdvice’s bio says he works on medical device design. I’m not sure which is worse. But I do hope the above reasoning and proposed better rules get you thinking about how to develop more reliable embedded software with fewer bugs.