embedded software boot camp

A ‘C’ Test: The 0x10 Best Questions for Would-be Embedded Programmers

December 17th, 2009 by Nigel Jones

This, our most popular article, has become the de-facto C test used in interviews for embedded systems programmers. Also available in a PDF version.

An obligatory and significant part of the recruitment process for embedded systems programmers seems to be the ‘C Test’. Over the years, I have had to both take and prepare such tests and in doing so have realized that these tests can be very informative for both the interviewer and interviewee. Furthermore, when given outside the pressure of an interview situation, they can also be quite entertaining (hence this article).

From the interviewee’s perspective, you can learn a lot about the person that has written or administered the test. Is the test designed to show off the writer’s knowledge of the minutiae of the ANSI standard rather than to test practical know-how? Does it test ludicrous knowledge, such as the ASCII values of certain characters? Are the questions heavily slanted towards your knowledge of system calls and memory allocation strategies, indicating that the writer may spend his time programming computers instead of embedded systems? If any of these are true, then I know I would seriously doubt whether I want the job in question.

From the interviewer’s perspective, a test can reveal several things about the candidate. Primarily, one can determine the level of the candidate’s knowledge of C. However, it is also very interesting to see how the person responds to questions to which they do not know the answers. Do they make intelligent choices, backed up with some good intuition, or do they just guess? Are they defensive when they are stumped, or do they exhibit a real curiosity about the problem, and see it as an opportunity to learn something? I find this information as useful as their raw performance on the test.

With these ideas in mind, I have attempted to construct a test that is heavily slanted towards the requirements of embedded systems. This is a lousy test to give to someone seeking a job writing compilers! The questions are almost all drawn from situations I have encountered over the years. Some of them are very tough; however, they should all be informative.

This test may be given to a wide range of candidates. Most entry-level applicants will do poorly on this test, while seasoned veterans should do very well. Points are not assigned to each question, as this tends to arbitrarily weight certain questions. However, if you choose to adapt this test for your own uses, feel free to assign scores.

Preprocessor

1. Using the #define statement, how would you declare a manifest constant that returns the number of seconds in a year? Disregard leap years in your answer.

#define SECONDS_PER_YEAR (60UL * 60UL * 24UL * 365UL)

I’m looking for several things here:

(a) Basic knowledge of the #define syntax (i.e. no semi-colon at the end, the need to parenthesize etc.).

(b) A good choice of name, with capitalization and underscores.

(c) An understanding that the pre-processor will evaluate constant expressions for you. Thus, it is clearer, and penalty-free, to spell out how you are calculating the number of seconds in a year, rather than actually doing the calculation yourself.

(d) A realization that the expression will overflow an integer argument on a 16 bit machine – hence the need for the L, telling the compiler to treat the expression as a Long.

(e) As a bonus, if you modified the expression with a UL (indicating unsigned long), then you are off to a great start because you are showing that you are mindful of the perils of signed and unsigned types – and remember, first impressions count!

2. Write the ‘standard’ MIN macro. That is, a macro that takes two arguments and returns the smaller of the two arguments.

#define MIN(A,B) ((A) <= (B) ? (A) : (B))

The purpose of this question is to test the following:

(a) Basic knowledge of the #define directive as used in macros. This is important, because until the inline operator becomes part of standard C, macros are the only portable way of generating inline code. Inline code is often necessary in embedded systems in order to achieve the required performance level.

(b) Knowledge of the ternary conditional operator. This exists in C because it allows the compiler to potentially produce more optimal code than an if-then-else sequence. Given that performance is normally an issue in embedded systems, knowledge and use of this construct is important.

(c) Understanding of the need to very carefully parenthesize arguments to macros.

(d) I also use this question to start a discussion on the side effects of macros, e.g. what happens when you write code such as:

least = MIN(*p++, b);
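To make the hazard concrete, here is a small sketch (the array, values, and function name are mine, purely for illustration) showing how the winning argument gets evaluated twice:

```c
#include <assert.h>

#define MIN(A,B) ((A) <= (B) ? (A) : (B))

/* MIN(*p++, b) expands to ((*p++) <= (b) ? (*p++) : (b)),
   so the smaller operand is evaluated - and p incremented - twice. */
int min_side_effect_demo(void)
{
    int values[3] = {1, 2, 10};
    int *p = values;
    int b = 5;
    int least = MIN(*p++, b);   /* least ends up as 2, not 1 */
    (void)least;
    return (int)(p - values);   /* how many times did p advance? */
}
```

The pointer advances twice, and `least` picks up the second value read, which is rarely what the author intended.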

3. What is the purpose of the preprocessor directive #error?

Either you know the answer to this, or you don’t. If you don’t, then see reference 1. This question is very useful for differentiating between normal folks and the nerds. It’s only the nerds that actually read the appendices of C textbooks that find out about such things. Of course, if you aren’t looking for a nerd, the candidate better hope she doesn’t know the answer.
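For the record, #error makes the compilation fail with the given message, which is ideal for rejecting a bad build configuration at compile time. A sketch (the macro names are mine, not from reference 1):

```c
#include <assert.h>

/* CPU_CLOCK_HZ would normally come from the makefile or a board header. */
#ifndef CPU_CLOCK_HZ
#define CPU_CLOCK_HZ 8000000UL
#endif

#if CPU_CLOCK_HZ > 20000000UL
#error "CPU_CLOCK_HZ exceeds the rated maximum for this part"
#endif

/* Only meaningful if the check above passed. */
#define TICKS_PER_MS (CPU_CLOCK_HZ / 1000UL)
```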

Infinite Loops

4. Infinite loops often arise in embedded systems. How does one code an infinite loop in C?

There are several solutions to this question. My preferred solution is:

while (1)
{
}

Another common construct is:

for (;;)
{
}

Personally, I dislike this construct because the syntax doesn’t exactly spell out what is going on. Thus, if a candidate gives this as a solution, I’ll use it as an opportunity to explore their rationale for doing so. If their answer is basically – ‘I was taught to do it this way and I have never thought about it since’ – then it tells me something (bad) about them. Conversely, if they state that it’s the K&R preferred method and the only way to get an infinite loop past Lint, then they score bonus points.

A third solution is to use a goto:

Loop:
goto Loop;

Candidates that propose this are either assembly language programmers (which is probably good), or else they are closet BASIC / FORTRAN programmers looking to get into a new field.

Data declarations

5. Using the variable a, write down definitions for the following:

(a) An integer

(b) A pointer to an integer

(c) A pointer to a pointer to an integer

(d) An array of ten integers

(e) An array of ten pointers to integers

(f) A pointer to an array of ten integers

(g) A pointer to a function that takes an integer as an argument and returns an integer

(h) An array of ten pointers to functions that take an integer argument and return an integer.

The answers are:

(a) int a; // An integer

(b) int *a; // A pointer to an integer

(c) int **a; // A pointer to a pointer to an integer

(d) int a[10]; // An array of 10 integers

(e) int *a[10]; // An array of 10 pointers to integers

(f) int (*a)[10]; // A pointer to an array of 10 integers

(g) int (*a)(int); // A pointer to a function a that takes an integer argument and returns an integer

(h) int (*a[10])(int); // An array of 10 pointers to functions that take an integer argument and return an integer

People often claim that a couple of these are the sorts of thing that one looks up in textbooks – and I agree. While writing this article, I consulted textbooks to ensure the syntax was correct. However, I expect to be asked this question (or something close to it) when in an interview situation. Consequently, I make sure I know the answers – at least for the few hours of the interview. Candidates that don’t know the answers (or at least most of them) are simply unprepared for the interview. If they can’t be prepared for the interview, what will they be prepared for?

Static

6. What are the uses of the keyword static?

This simple question is rarely answered completely. Static has three distinct uses in C:

(a) A variable declared static within the body of a function maintains its value between function invocations.

(b) A variable declared static within a module[1] (but outside the body of a function) is accessible by all functions within that module. It is not accessible by functions within any other module. That is, it is a localized global.

(c) Functions declared static within a module may only be called by other functions within that module. That is, the scope of the function is localized to the module within which it is declared.

Most candidates get the first part correct. A reasonable number get the second part correct, while a pitiful number understand answer (c). This is a serious weakness in a candidate, since they obviously do not understand the importance and benefits of localizing the scope of both data and code.
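All three uses can be shown in one short module; this sketch (names mine) is the sort of thing I’d hope a candidate could produce on a whiteboard:

```c
#include <assert.h>

static int call_count;           /* use (b): file-scope, private to this module */

static int scale(int x)          /* use (c): function private to this module */
{
    return x * 2;
}

int next_even(void)
{
    static int n;                /* use (a): n keeps its value between calls */
    n++;
    call_count++;
    return scale(n);
}
```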

Const

7. What does the keyword const mean?

As soon as the interviewee says ‘const means constant’, I know I’m dealing with an amateur. Dan Saks has exhaustively covered const in the last year, such that every reader of ESP should be extremely familiar with what const can and cannot do for you. If you haven’t been reading that column, suffice it to say that const means “read-only”. Although this answer doesn’t really do the subject justice, I’d accept it as a correct answer. (If you want the detailed answer, then read Saks’ columns – carefully!).

If the candidate gets the answer correct, then I’ll ask him these supplemental questions:

What do the following incomplete[2] declarations mean?

const int a;
int const a;
const int *a;
int * const a;
int const * a const;

The first two mean the same thing, namely a is a const (read-only) integer. The third means a is a pointer to a const integer (i.e., the integer isn’t modifiable, but the pointer is). The fourth declares a to be a const pointer to an integer (i.e., the integer pointed to by a is modifiable, but the pointer is not). The final declaration declares a to be a const pointer to a const integer (i.e., neither the integer pointed to by a, nor the pointer itself may be modified).

If the candidate correctly answers these questions, I’ll be impressed.
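A quick sketch of what each form does and does not permit (variable names mine; the illegal assignments are left as comments since they would stop the compile):

```c
#include <assert.h>

int const_pointer_demo(void)
{
    int x = 1;
    int y = 2;

    const int *p = &x;   /* pointee is read-only through p */
    p = &y;              /* legal: the pointer itself may change */
    /* *p = 7;              illegal: would not compile */

    int * const q = &x;  /* the pointer is read-only */
    *q = 5;              /* legal: the pointee may change */
    /* q = &y;              illegal: would not compile */

    return *p + x;       /* y (2) plus the updated x (5) */
}
```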

Incidentally, one might wonder why I put so much emphasis on const, since it is very easy to write a correctly functioning program without ever using it. There are several reasons:

(a) The use of const conveys some very useful information to someone reading your code. In effect, declaring a parameter const tells the user about its intended usage. If you spend a lot of time cleaning up the mess left by other people, then you’ll quickly learn to appreciate this extra piece of information. (Of course, programmers that use const rarely leave a mess for others to clean up…)

(b) const has the potential for generating tighter code by giving the optimizer some additional information.

(c) Code that uses const liberally is inherently protected by the compiler against inadvertent coding constructs that result in parameters being changed that should not be. In short, they tend to have fewer bugs.

Volatile

8. What does the keyword volatile mean? Give three different examples of its use.

A volatile variable is one that can change unexpectedly. Consequently, the compiler can make no assumptions about the value of the variable. In particular, the optimizer must be careful to reload the variable every time it is used instead of holding a copy in a register. Examples of volatile variables are:

(a) Hardware registers in peripherals (e.g., status registers)

(b) Non-stack variables referenced within an interrupt service routine.

(c) Variables shared by multiple tasks in a multi-threaded application.

If a candidate does not know the answer to this question, they aren’t hired. I consider this the most fundamental question that distinguishes between a ‘C programmer’ and an ‘embedded systems programmer’. Embedded folks deal with hardware, interrupts, RTOSes, and the like. All of these require volatile variables. Failure to understand the concept of volatile will lead to disaster.
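The classic pattern behind examples (b) and (c) is a flag set in an ISR and polled in the background loop. In this sketch the ISR is simulated by an ordinary function so that the idea can be shown without compiler extensions:

```c
#include <assert.h>
#include <stdbool.h>

static volatile bool data_ready;   /* shared between ISR and main loop */

/* On a real target this would be an interrupt service routine. */
void uart_rx_isr(void)
{
    data_ready = true;
}

bool poll_for_data(void)
{
    /* volatile forces the compiler to re-read data_ready here
       instead of caching it in a register. */
    return data_ready;
}
```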

On the (dubious) assumption that the interviewee gets this question correct, I like to probe a little deeper, to see if they really understand the full significance of volatile. In particular, I’ll ask them the following:

(a) Can a parameter be both const and volatile? Explain your answer.

(b) Can a pointer be volatile? Explain your answer.

(c) What is wrong with the following function?:

int square(volatile int *ptr)
{
    return *ptr * *ptr;
}

The answers are as follows:

(a) Yes. An example is a read-only status register. It is volatile because it can change unexpectedly. It is const because the program should not attempt to modify it.

(b) Yes. Although this is not very common. An example is when an interrupt service routine modifies a pointer to a buffer.

(c) This one is wicked. The intent of the code is to return the square of the value pointed to by *ptr. However, since *ptr points to a volatile parameter, the compiler will generate code that looks something like this:

int square(volatile int *ptr)
{
    int a, b;
    a = *ptr;
    b = *ptr;
    return a * b;
}

Since it is possible for the value of *ptr to change unexpectedly, it is possible for a and b to be different. Consequently, this code could return a number that is not a square! The correct way to code this is:

long square(volatile int *ptr)
{
    int a;
    a = *ptr;
    return a * a;
}

Bit Manipulation

9. Embedded systems always require the user to manipulate bits in registers or variables. Given an integer variable a, write two code fragments. The first should set bit 3 of a. The second should clear bit 3 of a. In both cases, the remaining bits should be unmodified.

These are the three basic responses to this question:

(a) No idea. The interviewee cannot have done any embedded systems work.

(b) Use bit fields. Bit fields are right up there with trigraphs as the most brain-dead portion of C. Bit fields are inherently non-portable across compilers, and as such guarantee that your code is not reusable. I recently had the misfortune to look at a driver written by Infineon for one of their more complex communications chips. It used bit fields, and was completely useless because my compiler implemented the bit fields the other way around. The moral – never let a non-embedded person anywhere near a real piece of hardware![3]

(c) Use #defines and bit masks. This is a highly portable method, and is the one that should be used. My optimal solution to this problem would be:

#define BIT3 (0x1 << 3)

static int a;

void set_bit3(void)
{
    a |= BIT3;
}

void clear_bit3(void)
{
    a &= ~BIT3;
}

Some people prefer to define a mask, together with manifest constants for the set & clear values. This is also acceptable. The important elements that I’m looking for are the use of manifest constants, together with the |= and &= ~ constructs.
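If you go the mask route, a parameterized helper keeps the intent obvious. The BIT macro below is a widely used convention rather than part of the original answer:

```c
#include <assert.h>

#define BIT(n)  (0x1UL << (n))

unsigned long set_bit(unsigned long value, unsigned n)
{
    return value | BIT(n);     /* set bit n, leave the rest alone */
}

unsigned long clear_bit(unsigned long value, unsigned n)
{
    return value & ~BIT(n);    /* clear bit n, leave the rest alone */
}
```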

Accessing fixed memory locations

10. Embedded systems are often characterized by requiring the programmer to access a specific memory location. On a certain project it is required to set an integer variable at the absolute address 0x67a9 to the value 0xaa55. The compiler is a pure ANSI compiler. Write code to accomplish this task.

This problem tests whether you know that it is legal to typecast an integer to a pointer in order to access an absolute location. The exact syntax varies depending upon one’s style. However, I would typically be looking for something like this:

int *ptr;
ptr = (int *)0x67a9;
*ptr = 0xaa55;

A more obfuscated approach is:

*(int * const)(0x67a9) = 0xaa55;

Even if your taste runs more to the second solution, I suggest the first solution when you are in an interview situation.
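In real firmware the pointer would almost always be volatile-qualified as well, since a fixed address usually names a hardware register. A common idiom is a dereferencing macro; to keep this sketch runnable on a desktop, it points the ‘register’ at an ordinary variable instead of a literal address:

```c
#include <assert.h>
#include <stdint.h>

static uint16_t fake_hw;   /* stand-in for the real device register */

/* On hardware this would be something like:
   #define DEVICE_REG (*(volatile uint16_t *)0x67a9) */
#define DEVICE_REG (*(volatile uint16_t *)&fake_hw)

void init_device(void)
{
    DEVICE_REG = 0xaa55;
}

uint16_t read_device(void)
{
    return DEVICE_REG;
}
```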

Interrupts

11. Interrupts are an important part of embedded systems. Consequently, many compiler vendors offer an extension to standard C to support interrupts. Typically, this new keyword is __interrupt. The following code uses __interrupt to define an interrupt service routine. Comment on the code.

__interrupt double compute_area(double radius)
{
    double area = PI * radius * radius;
    printf("\nArea = %f", area);
    return area;
}

This function has so much wrong with it, it’s almost tough to know where to start.

(a) Interrupt service routines cannot return a value. If you don’t understand this, then you aren’t hired.

(b) ISRs cannot be passed parameters. See item (a) for your employment prospects if you missed this.

(c) On many processors / compilers, floating point operations are not necessarily re-entrant. In some cases one needs to stack additional registers, in other cases, one simply cannot do floating point in an ISR. Furthermore, given that a general rule of thumb is that ISRs should be short and sweet, one wonders about the wisdom of doing floating point math here.

(d) In a similar vein to point (c), printf() often has problems with reentrancy and performance. If you missed points (c) & (d) then I wouldn’t be too hard on you. Needless to say, if you got these two points, then your employment prospects are looking better and better.

Code Examples

12. What does the following code output and why?

void foo(void)
{
    unsigned int a = 6;
    int b = -20;
    (a + b > 6) ? puts("> 6") : puts("<= 6");
}

This question tests whether you understand the integer promotion rules in C – an area that I find is very poorly understood by many developers. Anyway, the answer is that this outputs “> 6”. The reason for this is that expressions involving signed and unsigned types have all operands promoted to unsigned types. Thus –20 becomes a very large positive integer and the expression evaluates to greater than 6. This is a very important point in embedded systems where unsigned data types should be used frequently (see reference 2). If you get this one wrong, then you are perilously close to not being hired.

13. Comment on the following code fragment?

unsigned int zero = 0;
unsigned int compzero = 0xFFFF;  /* 1's complement of zero */

On machines where an int is not 16 bits, this will be incorrect. It should be coded:

unsigned int compzero = ~0;

This question really gets to whether the candidate understands the importance of word length on a computer. In my experience, good embedded programmers are critically aware of the underlying hardware and its limitations, whereas computer programmers tend to dismiss the hardware as a necessary annoyance.
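The point is easy to verify: ~0 sets every bit regardless of word length, so the sketch below holds on 16-, 32- and 64-bit ints alike:

```c
#include <assert.h>
#include <limits.h>

unsigned int complement_of_zero(void)
{
    unsigned int compzero = ~0;   /* all bits set, whatever the int width */
    return compzero;
}
```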

By this stage, candidates are either completely demoralized – or they are on a roll and having a good time. If it is obvious that the candidate isn’t very good, then the test is terminated at this point. However, if the candidate is doing well, then I throw in these supplemental questions. These questions are hard, and I expect that only the very best candidates will do well on them. In posing these questions, I’m looking more at the way the candidate tackles the problems, rather than the answers. Anyway, have fun…

Dynamic memory allocation.

14. Although not as common as in non-embedded computers, embedded systems still do dynamically allocate memory from the heap. What are the problems with dynamic memory allocation in embedded systems?

Here, I expect the user to mention memory fragmentation, problems with garbage collection, variable execution time, etc. This topic has been covered extensively in ESP, mainly by Plauger. His explanations are far more insightful than anything I could offer here, so go and read those back issues! Having lulled the candidate into a sense of false security, I then offer up this tidbit:

What does the following code fragment output and why?

char *ptr;

if ((ptr = (char *)malloc(0)) == NULL)
{
    puts("Got a null pointer");
}
else
{
    puts("Got a valid pointer");
}

This is a fun question. I stumbled across this only recently, when a colleague of mine inadvertently passed a value of 0 to malloc, and got back a valid pointer! After doing some digging, I discovered that the result of malloc(0) is implementation defined, so that the correct answer is ‘it depends’. I use this to start a discussion on what the interviewee thinks is the correct thing for malloc to do. Getting the right answer here is nowhere near as important as the way you approach the problem and the rationale for your decision.

Typedef

15. Typedef is frequently used in C to declare synonyms for pre-existing data types. It is also possible to use the preprocessor to do something similar. For instance, consider the following code fragment:

 

#define dPS struct s *
typedef struct s * tPS;

 

The intent in both cases is to define dPS and tPS to be pointers to structure s. Which method (if any) is preferred and why?

This is a very subtle question, and anyone that gets it right (for the right reason) is to be congratulated or condemned (“get a life” springs to mind). The answer is the typedef is preferred. Consider the declarations:

dPS p1, p2;
tPS p3, p4;

The first expands to

struct s * p1, p2;

which defines p1 to be a pointer to the structure and p2 to be an actual structure, which is probably not what you wanted. The second example correctly defines p3 & p4 to be pointers.
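The difference shows up immediately with sizeof. In this sketch, struct s is given a minimal body (mine) so the fragment is self-contained:

```c
#include <assert.h>
#include <stddef.h>

struct s { int member; };

#define dPS struct s *
typedef struct s * tPS;

static dPS p1, p2;   /* p1 is a pointer; p2 is a struct s */
static tPS p3, p4;   /* p3 and p4 are both pointers */

size_t size_of_p2(void) { return sizeof(p2); }
size_t size_of_p4(void) { return sizeof(p4); }
```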

Obfuscated syntax

16. C allows some appalling constructs. Is this construct legal, and if so what does this code do?

int a = 5, b = 7, c;
c = a+++b;

This question is intended to be a lighthearted end to the quiz, as, believe it or not, this is perfectly legal syntax. The question is how does the compiler treat it? Those poor compiler writers actually debated this issue, and came up with the “maximum munch” rule, which stipulates that the compiler should bite off as big a (legal) chunk as it can. Hence, this code is treated as:

c = a++ + b;

Thus, after this code is executed, a = 6, b = 7 & c = 12;

If you knew the answer, or guessed correctly – then well done. If you didn’t know the answer then I would not consider this to be a problem. I find the biggest benefit of this question is that it is very good for stimulating questions on coding styles, the value of code reviews and the benefits of using lint.

Well folks, there you have it. That was my version of the C test. I hope you had as much fun doing it as I had writing it. If you think the test is a good test, then by all means use it in your recruitment. Who knows, I may get lucky in a year or two and end up being on the receiving end of my own work.

References:

  1. In Praise of the #error directive. ESP September 1999.
  2. Efficient C Code for Eight-Bit MCUs. ESP November 1998.

[1] Translation unit for the pedagogues out there.

[2] I’ve had complaints that these code fragments are incorrect syntax. This is correct. However, writing down syntactically correct code pretty much gives the game away.

[3] I’ve recently softened my stance on bit fields. At least one compiler vendor (IAR) now offers a compiler switch for specifying the bit field ordering. Furthermore the compiler generates optimal code with bit field defined registers – and as such I now do use bit fields in IAR applications.

Effective C Tip #8 – Structure Comparison

December 4th, 2009 by Nigel Jones

This is the eighth in a series of tips on writing effective C. Today’s topic concerns how best to compare two structures (naturally of the same type) for equality. The conventional wisdom is that the only safe way to do this is by explicit member by member comparison, and that one should avoid doing a straight bit (actually byte) comparison using memcmp() because of the problem of padding bytes.

To see why this argument is advanced, one must understand that a compiler is free to place pad bytes between members of a structure so as to produce more favorable alignment of the data in memory. Furthermore, the compiler is not obligated to initialize these pad bytes to any particular value. This code fragment illustrates the problem:

typedef struct
{
    uint8_t x;
    uint8_t pad1;  /* Compiler added padding */
    uint8_t y;
    uint8_t pad2;  /* Compiler added padding */
} COORD;

void foo(void)
{
    COORD p1, p2;

    p1.x = p2.x = 3;
    p1.y = p2.y = 4;
    /* Note pad bytes are not initialized */
    if (memcmp(&p1, &p2, sizeof(p1)) != 0)
    {
        /* We may get here */
    }
    ...
}

Thus, it’s clear that to avoid these kinds of problems, we must do a member by member comparison. However, before you rush off and start writing these member by member comparison functions, you need to be aware of a gigantic weakness with this approach. To see what I mean, consider the comparison function for my COORD structure. A reasonable implementation might look like this:

bool are_equal(COORD *p1, COORD *p2)
{
    return ((p1->x == p2->x) && (p1->y == p2->y));
}

void foo(void)
{
    COORD p1, p2;

    p1.x = p2.x = 3;
    p1.y = p2.y = 4;
    if (!are_equal(&p1, &p2))
    {
        /* We should never get here */
    }
    ...
}

Now consider what happens if I add a third member z to the COORD structure. My structure definition and function foo() become:

typedef struct
{
    uint8_t x;
    uint8_t pad1;  /* Compiler added padding */
    uint8_t y;
    uint8_t pad2;  /* Compiler added padding */
    uint8_t z;
    uint8_t pad3;  /* Compiler added padding */
} COORD;

void foo(void)
{
    COORD p1, p2;

    p1.x = p2.x = 3;
    p1.y = p2.y = 4;
    p1.z = 6;
    p2.z = 5;
    if (!are_equal(&p1, &p2))
    {
        /* We will not get here */
    }
    ...
}

The problem is that I now have to remember to also update the comparison function. Now clearly in a simple case like this, it isn’t a big deal. However, in the real world where you might have a 500 line file, with the comparison function buried miles away from the structure declaration, it is way too easy to forget to update the comparison function. The compiler is of no help. Furthermore, it’s my experience that all too often these sorts of problems can exist for a long time before they are caught. Thus the bottom line is that member by member comparison has its own set of problems.

So what do I suggest? Well, I think the following is a reasonable approach:

  1. If there is no way that your structure can change (presumably because of outside constraints such as hardware), then use a member by member comparison.
  2. If you are working on a system where structure members are aligned on byte boundaries (which is true to the best of my knowledge for all 8 bit processors, and also most 16 bit processors), then use memcmp(). However, you need to think about doing this very carefully if there is the possibility of the code being ported to a platform where alignment is not on an 8 bit boundary.
  3. If you are working on a system that aligns on a non 8 bit boundary, then you must either use member by member comparison, or take steps to ensure that all the bytes of a structure are initialized using memset() before you start assigning values to the actual members. If you do this, then you can probably use memcmp() with a reasonable amount of confidence.
  4. If speed is a priority, then clearly memcmp() is the way to go. Just make sure you aren’t going to fall into a pothole as you blaze down the road.
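Option 3 above can be sketched like this, reusing the COORD idea from earlier (the helper names are mine):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef struct
{
    uint8_t x;
    uint8_t y;
} COORD;

void make_coord(COORD *p, uint8_t x, uint8_t y)
{
    memset(p, 0, sizeof(*p));   /* force any pad bytes to a known value */
    p->x = x;
    p->y = y;
}

int coords_equal(const COORD *a, const COORD *b)
{
    return memcmp(a, b, sizeof(*a)) == 0;
}

int memset_then_memcmp_demo(void)
{
    COORD p1, p2;
    make_coord(&p1, 3, 4);
    make_coord(&p2, 3, 4);
    return coords_equal(&p1, &p2);   /* now safe: pads are identical */
}
```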

Before I leave this topic, I should mention a few esoteric things for you to consider.

If you use the memcmp() approach you are checking for bit equality rather than value equality. Now most of the time they are the same. Sometimes however, they are not. To illustrate this, consider a structure that contains a parameter that is a boolean. If in one structure the parameter has a value of 1, and in the other structure it has a value of 2, then clearly they differ at the bit level, but are essentially the same at a value level. What should you do in this case? Well clearly it’s implementation dependent. It does however illustrate the perils of structure comparison.
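The boolean case can be sketched as follows (the RECORD type and helpers are mine, purely to illustrate the distinction):

```c
#include <assert.h>
#include <string.h>

typedef struct
{
    int flag;   /* treated as a boolean: zero or non-zero */
} RECORD;

int bit_equal(const RECORD *a, const RECORD *b)
{
    return memcmp(a, b, sizeof(*a)) == 0;
}

int value_equal(const RECORD *a, const RECORD *b)
{
    return (!a->flag) == (!b->flag);   /* compare truth values, not bits */
}

int equality_demo(void)
{
    RECORD r1 = { 1 };
    RECORD r2 = { 2 };
    /* bitwise different, yet logically the same */
    return !bit_equal(&r1, &r2) && value_equal(&r1, &r2);
}
```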

Finally I should mention issues associated with structures that contain pointers. CS guys like to distinguish between deep and shallow structure comparison. I rarely write code where a deep comparison is required, and so for me it’s mostly a non-issue.


Keeping your EEPROM data valid through firmware updates

November 26th, 2009 by Nigel Jones

Back when embedded systems used EPROM (no, that is not a typo, for my younger readers) rather than Flash, the likelihood of the code being updated in the field was close to nil. Today however, it is common for embedded systems to contain mechanisms to allow the code to be updated easily. Like most people, I embraced this feature enthusiastically. However, after I’d implemented a few systems that were field upgradable, I discovered that the ability to update in the field had an unexpected impact on my EEPROM data. To see what I mean, read on…

Most of the embedded systems I work on contain EEPROM. One of the prime uses for this EEPROM is for storing configuration / calibration information for the system. As a result, I often store data in EEPROM as a series of data structures at fixed locations, with gaps in between them. Thus, my EEPROM map might look something like this:

#define CAL_DATA_LOCATION     0x0010
#define CONFIG_DATA_LOCATION  0x0200
...
#define SYSTEM_PARAMS_LOCATION  0x1000
typedef struct
{
 uint32_t param1;
 uint16_t param2;
 ...
 uint8_t  spare[10];
} CALIBRATION_DATA;
__eeprom CALIBRATION_DATA Cal_Data @ CAL_DATA_LOCATION;
__eeprom CONFIGURATION_DATA Config_Data @ CONFIG_DATA_LOCATION;
...
__eeprom SYSTEM_DATA System_Data @ SYSTEM_PARAMS_LOCATION;

As you can see, I was smart enough to allow room for growth within the structure via the spare[] array. (I have intentionally omitted support related to corruption detection to avoid complicating the issue at hand). As a result I thought I was all set if at some time a SW update caused me to have to use more parameters in a given EEPROM structure. Well I went along in this blissful state of ignorance for a few years until the real world intruded in a rather ugly way. Here’s what happened. The firmware upgrade didn’t require me to add any new parameters to the EEPROM, per se, but it did require that the data type of some of the parameters be changed. For example, my CALIBRATION_DATA structure example might have to change to this:

typedef struct
{
 float     param1;
 uint16_t  param2;
 ...
 uint8_t   spare[10];
} CALIBRATION_DATA;

Thus param1 has changed from a uint32_t type to a float. When the new code powered up, it had to read param1 as a uint32_t, convert it to a float, and write it back to the EEPROM. This was quite straightforward. The problem came the next time the system powered up. I realized that without some sort of logic in place, I would re-read param1, treat it as a uint32_t (even though it is now a float), ‘convert’ it to a float and write it back to EEPROM. Clearly I needed some method of signaling that I had already performed the requisite upgrade. As I pondered this problem, I realized that it was even more complicated. Let us denote the two versions of CALIBRATION_DATA as version 1 and version 2 respectively. Furthermore, let’s assume that in version 3 of the code, param1 gets changed to a double (thus shifting all the other parameters down and consuming some of the spare allocation). That is, it looks like this:

typedef struct
{
 double     param1;
 uint16_t   param2;
 ...
 uint8_t    spare[6];
} CALIBRATION_DATA;

In this case, we must not only be able to handle the upgrade from version 2 to version 3 – but also directly from version 1 to version 3. (You could of course require that users perform all upgrades in order. While I recognize that sometimes this is unavoidable, I suspect that most times it’s because the developer has backed themselves into the sort of corner I describe here).

Anyway, with this insight in hand, I realized that I needed a generic system for both tagging an EEPROM structure with the version of software that created it, together with a means of providing arbitrary updates. This is how I do it.

Step 1.
Make the first location of each EEPROM structure a version field. This version field contains the firmware version that created the structure. By making it the first location in the EEPROM data structure, you ensure that you can always read it regardless of what else happens to the structure. Thus my CALIBRATION_DATA structure now looks something like this:

typedef struct
{
 uint16_t version;
 uint32_t param1;
 uint16_t param2;
 ...
 uint8_t  spare[10];
} CALIBRATION_DATA;

Step 2.
Add code to handle the upgrades. This code must be called before any parameters are used from EEPROM. The code looks something like this:

void eeprom_Update(void)
{
    if (Cal_Data.version != SW_VERSION)
    {
        switch (Cal_Data.version)
        {
            case 0x100:
                /* Do necessary steps to upgrade from version 1 */
                break;
            case 0x200:
                /* Do necessary steps to upgrade from version 2 */
                break;
            default:
                break;
        }
        Cal_Data.version = SW_VERSION;  /* Update the EEPROM version number */
    }
}

Incidentally, I find that this is often one of those cases where falling through case statements is really useful. Of course, doing this is usually banned, and so one ends up with much clumsier code than would otherwise be required.
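As a sketch of what the fall-through form looks like, here is a self-contained version. The version constants (0x100 and 0x200 for versions 1 and 2, 0x300 for the current build) and the stubbed-out upgrade functions are purely for illustration – the real upgrade steps would rewrite the EEPROM image as described above:

```c
#include <assert.h>
#include <stdint.h>

#define SW_VERSION 0x300u   /* the current (version 3) firmware */

typedef struct
{
    uint16_t version;
    /* calibration parameters elided */
} CALIBRATION_DATA;

static CALIBRATION_DATA Cal_Data;

/* Stand-ins for the real upgrade steps; here they just count invocations. */
static int upgrades_run;
static void upgrade_v1_to_v2(void) { upgrades_run++; }
static void upgrade_v2_to_v3(void) { upgrades_run++; }

void eeprom_Update(void)
{
    if (Cal_Data.version != SW_VERSION)
    {
        switch (Cal_Data.version)
        {
            case 0x100:
                upgrade_v1_to_v2();
                /* Fall through - a version 1 image needs BOTH upgrades */
            case 0x200:
                upgrade_v2_to_v3();
                break;
            default:
                break;
        }
        Cal_Data.version = SW_VERSION;  /* Record that we are now current */
    }
}
```

The fall-through is what makes the chain work: a version 1 image enters at the top of the ladder and runs every upgrade in sequence, while a version 2 image enters partway down and runs only the ones it still needs.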

An Apology
Regular readers will no doubt have noticed that this is my first post in a while. A deadly combination of vacation and urgent projects with tight deadlines conspired to prevent me from blogging at my usual pace.


Eye, Aye I!

November 6th, 2009 by Nigel Jones

Today’s post should probably be called ‘Thoughts on non-descriptive variable names’, but once in a while I have to let my creative side out!

Anyway, the motivation for today’s post is actually Michael Barr’s latest blog posting concerning analysis of the source code for a breathalyzer. Since I do expert witness work, as well as develop products, I was keen to see what the experts in this case had to say. One snippet from the expert for the plaintiffs caught my eye. In appendix B of their report, Draeger made the following statement concerning general code issues:

Non descriptive variable names – i, j, dummy and temp

This touched upon something where I seem to be at odds with the conventional wisdom. I’ll illustrate what I mean. Consider initializing an array to zero (I’ll ignore that we could use a library function for this). I would code it like this:

uint8_t buffer[BUFSIZE];
uint8_t i;
for (i = 0; i < BUFSIZE; i++)
{
 buffer[i] = 0;
}

This code would be rejected by many coding standards (and apparently would offend Draeger), as the loop variable ‘i’ is not descriptive. To be ‘correct’, I should instead code it like this:

uint8_t buffer[BUFSIZE];
uint8_t buffer_index;
for (buffer_index = 0; buffer_index < BUFSIZE; buffer_index++)
{
 buffer[buffer_index] = 0;
}

So for me, the question is: does the second approach buy me anything – or indeed cost me anything? Clearly, this is a matter of opinion. However, I’d make the following observations:

  1. I think my code is clear, concise and easily understood by even the most unskilled programmer.
  2. Is the variable name ‘buffer_index’ clearer? Yes – but only to a native English speaker. It’s my experience that there are a lot of non-native English speakers in the industry.
  3. Personally, I find the use of similar words in close proximity (buffer[buffer_index]) a bit harder to read, and very easy to mis-read if there are other variables around prefixed with buffer.

I’d also make the observation that many coding standards require variable names to be at least 3 characters long, and as a result I’ve seen code that looks like this:

uint8_t buffer[BUFSIZE];
uint8_t iii;
for (iii = 0; iii < BUFSIZE; iii++)
{
 buffer[iii] = 0;
}

Clearly in this case, the person is addressing the letter of the standard (if you’ll pardon the pun), but not the spirit. Where the standard requires the variable names to be meaningful, I’ve also seen this done:

uint8_t buffer[BUFSIZE];
uint8_t idx;
for (idx = 0; idx < BUFSIZE; idx++)
{
 buffer[idx] = 0;
}

This code meets the letter of the standard, and arguably the spirit. Is it really any more understandable than my original code? I don’t think so – but I’ll be interested to get your comments.


Lowering power consumption tip #3 – Using Relays

November 2nd, 2009 by Nigel Jones

This is the third in a series of tips on lowering power consumption in embedded systems. Today’s topic concerns relays. It may be just the markets that I operate in, but relays seem to crop up in a very large percentage of the designs that I work on. If this is true for you, then today’s tip should be very helpful in reducing the power consumption of your system.

I’ll start by observing that relays consume a lot of power – at least in comparison to silicon based components, and thus anything that can be done to minimize their power consumption typically has a large impact on the overall consumption of the system. That being said, usually the thing that will reduce a relay’s power consumption the most is to simply use a latching relay. (A latching relay is designed to maintain its state once power is removed from its coil. Thus it only consumes power when switching – much like a CMOS gate). However, latching relays cannot be used in circumstances where it is important that the relays revert to a known state in the event of a loss of power. Most embedded systems that I work on require the relays to have this property. Thus in these cases, what can be done to minimize the relay’s power consumption?

If you look at the data sheet for a relay, you will see a plethora of parameters. However, the one of most interest is the operating current. (Relays are current-operated devices. That is, it is the current flowing through the relay coil that generates a magnetic field, which in turn produces the magnetomotive force that moves the relay armature.) This is the current required to actuate (pull in) the relay, and not much can be done about it. However, once a relay is actuated, the current required to hold it in this state is typically anywhere between a third and two thirds less than the pull-in current. This is called the holding current – and may or may not appear on the data sheet. Despite the fact that the holding current is so much less than the pull-in current, almost every design I see (including many of mine, I might add) eschews the power savings that are up for grabs and instead simply puts the pull-in current through the relay the whole time the relay is activated.

So why is this? Well, the answer is that it turns out it isn’t trivial to switch from the pull-in current to the holding current. To see what I mean – read on!

The typical hardware to drive a relay consists of a microcontroller port pin connected to the gate of an N-channel FET (BJTs are also used, but if you are interested in reducing power, a FET is the way to go). The FET in turn is connected to the relay coil. Thus to turn the relay on, one need only configure the microcontroller port pin as an output and drive it high – a trivial exercise.

To use the holding current approach, you need to do the following.

  1. Connect the FET to a microcontroller port pin that can generate a PWM waveform. The hardware is otherwise unchanged.
  2. To turn the relay on, drive the port pin high as before.
  3. Delay for the pull-in time of the relay. The pull-in time is typically on the order of 10 – 100 ms.
  4. Switch the port pin over to a PWM output. The PWM duty cycle of course dictates the effective current through the relay, and this is how you set the holding current. The other important parameter is the PWM frequency: the PWM period should be at most one tenth of the pull-in time. For example, a relay with a pull-in time of 10 ms would require a PWM period of no more than 1 ms, giving a PWM frequency of at least 1 kHz. You can of course use higher frequencies – but then you are burning unnecessary power charging and discharging the gate of the FET.
  5. To turn the relay off, you must disable the PWM output and then drive the port pin low.

Looking at this, it really doesn’t seem too hard. However compared to simply setting and clearing a port pin, it’s certainly a lot of work. Given that management doesn’t normally award points for reducing the power consumption of an embedded system, but does reward getting the system delivered on time, it’s hardly surprising that most systems don’t use this technique. Perhaps this post will start a tiny movement towards rectifying this situation.
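The sequence above can be sketched in C. The port, PWM and delay routines here are stand-ins for whatever your particular MCU’s drivers provide (they just record what happened, so the sequence can be inspected), and the 40% holding duty cycle is purely an assumed figure – the real value comes from the relay’s data sheet:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RELAY_PULL_IN_MS    10u   /* pull-in time, from the relay data sheet */
#define RELAY_HOLD_DUTY_PCT 40u   /* assumed holding duty cycle - data sheet value in practice */

/* PWM period should be at most one tenth of the pull-in time,
   so a 10 ms pull-in needs a period of no more than 1 ms (>= 1 kHz). */
static uint32_t relay_pwm_period_ms(uint32_t pull_in_ms)
{
    return pull_in_ms / 10u;
}

/* Stand-ins for the real port/PWM/timer drivers, which are MCU specific.
   Each one appends a token to a log so the drive sequence can be checked. */
static char log_buf[64];
static void record(const char *s)  { strcat(log_buf, s); }
static void pin_high(void)         { record("high;"); }
static void pin_low(void)          { record("low;"); }
static void delay_ms(uint32_t ms)  { (void)ms; record("delay;"); }
static void pwm_start(uint32_t period_ms, uint32_t duty_pct)
{
    (void)period_ms;
    (void)duty_pct;
    record("pwm-on;");
}
static void pwm_stop(void)         { record("pwm-off;"); }

void relay_on(void)
{
    pin_high();                                  /* full pull-in current */
    delay_ms(RELAY_PULL_IN_MS);                  /* wait for the armature to pull in */
    pwm_start(relay_pwm_period_ms(RELAY_PULL_IN_MS),
              RELAY_HOLD_DUTY_PCT);              /* drop back to the holding current */
}

void relay_off(void)
{
    pwm_stop();                                  /* disable the PWM output first... */
    pin_low();                                   /* ...then drive the port pin low */
}
```

The ordering in relay_off() matters: disable the PWM before driving the pin low, so the pin is not left floating or toggling while the relay drops out.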
