Archive for the ‘General C issues’ Category

The N_ELEMENTS macro

Friday, March 18th, 2011 Nigel Jones

Many years ago I came across a simple macro that has proven to be quite useful. Its usual definition looks something like this:

#define N_ELEMENTS(X)           (sizeof(X)/sizeof(*(X)))

Its nominal use is to determine the number of elements in an incomplete array declaration. For example:

void foo(void)
{
 uint8_t bar[] = {0, 1, 2, 3, 4};
 uint8_t    i;

 /* Transmit each byte in bar[] */
 for (i = 0; i < N_ELEMENTS(bar); ++i)
 {
  txc(bar[i]);
 }
}

Clearly this is quite useful. However, once you have this macro in your arsenal you will eventually run into a conundrum. To illustrate what I mean consider the following code:

#define BUF_SIZE    (5)
void foo(void)
{
 uint8_t bar[BUF_SIZE] = {0, 1, 2, 3, 4};
 uint8_t    i;

 /* Transmit each byte in bar[] */
 for (i = 0; i < BUF_SIZE; ++i)
 {
  txc(bar[i]);
 }
}

This uses the classic approach of defining a manifest constant (BUF_SIZE) and then using it both to size the array and as the loop limit. The conundrum is this: is one better off using N_ELEMENTS in this case as well? In other words, is the following better code?

#define BUF_SIZE    (5)
void foo(void)
{
 uint8_t bar[BUF_SIZE] = {0, 1, 2, 3, 4};
 uint8_t    i;

 /* Transmit each byte in bar[] */
 for (i = 0; i < N_ELEMENTS(bar); ++i)
 {
  txc(bar[i]);
 }
}

This code is guaranteed to operate on every element of the array bar[] regardless of what is done to the array declaration. For example:

#define BUF_SIZE    (5)
void foo(void)
{
 uint8_t bar[BUF_SIZE + 1] = {0, 1, 2, 3, 4, 5};
 uint8_t    i;

 /* Transmit each byte in bar[] */
 for (i = 0; i < N_ELEMENTS(bar); ++i)
 {
  txc(bar[i]);
 }
}

In this case I have changed the array declaration. The code that uses N_ELEMENTS still works, while the code that used BUF_SIZE would have failed. So from this perspective the N_ELEMENTS code is more robust. However, I don’t think the N_ELEMENTS-based code is as easy to read. As a result I have oscillated back and forth over the years as to which approach is better. My current view is that the N_ELEMENTS approach is indeed the better way. I’d be interested in your opinion.

Efficient C Tip #13 – use the modulus (%) operator with caution

Tuesday, February 8th, 2011 Nigel Jones

This is the thirteenth in a series of tips on writing efficient C for embedded systems. As the title suggests, if you are interested in writing efficient C, you need to be cautious about using the modulus operator. Why is this? Well, a little thought shows that C = A % B is equivalent to C = A - B * (A / B). In other words, the modulus operator is functionally equivalent to three operations. As a result it’s hardly surprising that code that uses the modulus operator can take a long time to execute. Now in some cases you absolutely have to use the modulus operator. However, in many cases it’s possible to restructure the code so that the modulus operator is not needed. To demonstrate what I mean, some background information is in order as to how this blog posting came about.

Converting seconds to days, hours, minutes and seconds

In Embedded Systems Design there is an increasing need for some form of real time clock. When this is done, the designer typically implements the time as a 32 bit variable containing the number of seconds since a particular date. When this is done, it’s not usually long before one has to convert the ‘time’ into days, hours, minutes and seconds. Well I found myself in just such a situation recently. As a result, I thought a quick internet search was in order to find the ‘best’ way of converting ‘time’ to days, hours, minutes and seconds. The code I found wasn’t great and as usual was highly PC centric. I thus sat down to write my own code.

Attempt #1 – Using the modulus operator

My first attempt used the ‘obvious’ algorithm and employed the modulus operator. The relevant code fragment appears below.

void compute_time(uint32_t time)
{
 uint32_t    days, hours, minutes, seconds;

 seconds = time % 60UL;
 time /= 60UL;
 minutes = time % 60UL;
 time /= 60UL;
 hours = time % 24UL;
 time /= 24UL;
 days = time;  
}

This approach has a nice looking symmetry to it. However, it contained three divisions and three modulus operations. I was thus rather concerned about its performance, and so I measured its speed for three different architectures – AVR (8 bit), MSP430 (16 bit), and ARM Cortex (32 bit). In all three cases I used an IAR compiler with full speed optimization. The cycle counts quoted are for 10 invocations of the test code and include the test-harness overhead:

AVR:  29,825 cycles

MSP430: 27,019 cycles

ARM Cortex: 390 cycles

No, that isn’t a misprint. The ARM was nearly two orders of magnitude more cycle-efficient than the MSP430 and AVR. Thus my claim that the modulus operator can be very inefficient is true for some architectures – but not all. If you are using the modulus operator on an ARM processor then it’s probably not worth worrying about. However, if you are working on smaller processors then clearly something needs to be done – and so I investigated some alternatives.

Attempt #2 – Replace the modulus operator

As mentioned in the introduction, C = A % B is equivalent to C = A - B * (A / B). If we compare this to the code in attempt 1, then it should be apparent that the intermediate value (A / B) computed as part of the modulus operation is in fact needed in the next line of code. This suggests a simple optimization to the algorithm.

void compute_time(uint32_t time)
{
 uint32_t    days, hours, minutes, seconds;

 days = time / (24UL * 3600UL);    
 time -= days * 24UL * 3600UL;
 /* time now contains the number of seconds in the last day */
 hours = time / 3600UL;
 time -= (hours * 3600UL);
 /* time now contains the number of seconds in the last hour */
 minutes = time / 60U;
 seconds = time - minutes * 60U;
}

In this case I have replaced three modulus operations with three subtractions and three multiplications. Although I have replaced a single operator (%) with two operations (- *), I still expected an increase in speed, because the modulus operator is effectively three operators in one (- * /). In effect I have eliminated three divisions, and so I expected a significant improvement in speed. The results however were a little surprising:

AVR:  18,720 cycles

MSP430: 14,805 cycles

ARM Cortex: 384 cycles

Thus while this technique yielded roughly a factor-of-two improvement for the AVR and MSP430 processors, it had essentially no impact on the ARM code. Presumably this is because the ARM has native hardware support for division. Notwithstanding the ARM results, it’s clear that, at least in this example, it’s possible to significantly speed up an algorithm by eliminating the modulus operator.

I could of course just stop at this point. However, examination of attempt 2 shows that further optimizations are possible, by observing that if time is a 32-bit count of seconds, then days can be at most a 16-bit variable. Furthermore, hours, minutes and seconds are inherently limited to an 8-bit range. I thus recoded attempt 2 to use smaller data types.

Attempt #3 – Data type size reduction

My naive implementation of the code looked like this:

void compute_time(uint32_t time)
{
 uint16_t    days;
 uint8_t     hours, minutes, seconds;
 uint16_t    stime;

 days = (uint16_t)(time / (24UL * 3600UL));    
 time -= (uint32_t)days * 24UL * 3600UL;
 /* time now contains the number of seconds in the last day */
 hours = (uint8_t)(time / 3600UL);
 stime = time - ((uint32_t)hours * 3600UL);
 /*stime now contains the number of seconds in the last hour */
 minutes = stime / 60U;
 seconds = stime - minutes * 60U;
}

All I have done is change the data types and add casts where appropriate. The results were interesting:

AVR:  14,400 cycles

MSP430: 11,457 cycles

ARM Cortex: 434 cycles

Thus while this resulted in a significant improvement for the AVR & MSP430, it resulted in a significant worsening for the ARM. Clearly the ARM doesn’t like working with non-32-bit variables. This suggested an improvement that would make the code a lot more portable – namely to use the C99 fast types. Doing this gives the following code:

Attempt #4 – Using the C99 fast data types

void compute_time(uint32_t time)
{
 uint_fast16_t    days;
 uint_fast8_t    hours, minutes, seconds;
 uint_fast16_t    stime;

 days = (uint_fast16_t)(time / (24UL * 3600UL));    
 time -= (uint32_t)days * 24UL * 3600UL;
 /* time now contains the number of seconds in the last day */
 hours = (uint_fast8_t)(time / 3600UL);
 stime = time - ((uint32_t)hours * 3600UL);
 /*stime now contains the number of seconds in the last hour */
 minutes = stime / 60U;
 seconds = stime - minutes * 60U;
}

All I have done is change the data types to the C99 fast types. The results were encouraging:

AVR:  14,400 cycles

MSP430: 11,595 cycles

ARM Cortex: 384 cycles

Although the MSP430 time increased very slightly, the AVR and ARM stayed at their fastest speeds. Thus attempt #4 is both fast and portable.

Conclusion

Not only did replacing the modulus operator with alternative operations result in faster code, it also opened up the possibility for further optimizations. As a result with the AVR & MSP430 I was able to more than halve the execution time.

Converting Integers for Display

A similar problem (with a similar solution) occurs when one wants to display integers on a display. For example, if you are using a custom LCD panel with, say, a 3-digit numeric field, then the problem arises as to how to determine the value of each digit. The obvious way, using the modulus operator, is as follows:

void display_value(uint16_t value)
{
 uint8_t    msd, nsd, lsd;

 if (value > 999)
 {
  value = 999;
 }

 lsd = value % 10;
 value /= 10;
 nsd = value % 10;
 value /= 10;
 msd = value;

 /* Now display the digits */
}

However, using the technique espoused above, we can rewrite this much more efficiently as:

void display_value(uint16_t value)
{
 uint8_t    msd, nsd, lsd;

 if (value > 999U)
 {
  value = 999U;
 }

 msd = value / 100U;
 value -= msd * 100U;

 nsd = value / 10U;
 value -= nsd * 10U;

 lsd = value;

 /* Now display the digits */
}

If you benchmark this you should find it considerably faster than the modulus based approach.

Formatted output when using C99 data types

Tuesday, February 1st, 2011 Nigel Jones

Regular readers of this blog will know that I am a proponent of using the C99 data types. They will also know that I’m no fan of formatted output. Notwithstanding this, I do use formatted output (particularly vsprintf) on larger systems. Well, if you use the C99 data types and you use formatted output, you will quickly run into a problem – namely, what modifier do you give printf() to print, say, a uint16_t variable? Now if you are working on an 8- or 16-bit architecture, then you’d probably be OK guessing that %u would work quite nicely. However, if you are working on a 32-bit architecture, what would you use for, say, a uint_fast8_t variable? Well, it so happens that the C99 folks were aware of this problem and came up with just about the ugliest solution imaginable.

inttypes.h

In order to solve this problem, you first of all need to #include the file inttypes.h. This header file in turn includes stdint.h, so that you have access to the C99 data types. If you examine this file, you will find that it consists of a large number of definitions. An example definition might look like this:

#define PRId16 __INT16_SIZE_PREFIX__ "d"

If you are like me, when I first saw this I was a little puzzled. How exactly was this supposed to help? Well I’ll give you an example of its usage, and then explain how it works.

#include <inttypes.h>
#include <stdio.h>

void print_int16(int16_t value)
{
 printf("Value = %" PRId16, value);
}

So what’s going on here? Well let’s assume for now that __INT16_SIZE_PREFIX__ is in turn defined to be “h”.  Our code is converted by the preprocessor into the following:

#include <inttypes.h>
#include <stdio.h>

void print_int16(int16_t value)
{
 printf("Value = %" "h" "d", value);
}

At compile time, the successive strings “Value = %” “h” “d” are concatenated into the single string “Value = %hd”, so that we end up with:

#include <inttypes.h>
#include <stdio.h>

void print_int16(int16_t value)
{
 printf("Value = %hd", value);
}

This is legal syntax for printf(). More importantly, the correct format string for this implementation is now being passed to printf() for an int16_t data type.

Thus the definitions in inttypes.h allow one to write portable formatted IO while still using the C99 data types.

Naming Convention

Examination of inttypes.h shows that a consistent naming convention has been used. For output, the constant names are constructed thus:

<PRI><printf specifier><C99 modifier><number of bits> where

<PRI> is the literal characters PRI.

<printf specifier> is the list of integer specifiers we all know so well {d, i, o, u, x, X}

<C99 modifier> is one of {<empty>, LEAST, FAST, MAX, PTR}

<number of bits> is one of {8, 16, 32, 64, <empty>}. <empty> only applies to the MAX and PTR C99 modifiers.

Examples:

To print a uint_fast8_t in lower case hexadecimal you would use PRIxFAST8.

To print a int_least64_t in octal you would use PRIoLEAST64.

Formatted Input

For formatted input, simply replace PRI with SCN.

Observations

While I applaud the C99 committee for providing this functionality, it can result in some dreadful looking format statements. For example here’s a string from a project I’m working on:

wr_vstr(1, 0, MAX_STR_LEN, "%-+*" PRId32 "%-+4" PRId32 "\xdf", tap_str_len, tap, angle);

Clearly a lot of this has to do with the inherently complex formatted IO syntax. The addition of the C99 formatters just makes it even worse.

Personally I’d have liked the C99 committee to have bitten the bullet and introduced a formatted IO function that had the following characteristics:

  1. Explicit support for the C99 data types.
  2. No support for octal. Does anyone ever use the octal formatter?
  3. Support for printing binary – this I do need to do from time to time.
  4. A standard defined series of reduced functionality formatted IO subsets. This way I’ll know that if I restrict myself to a particular set of format types I can use the smallest version of the formatted IO function.

PC Lint

Regular readers will also know that I’m a major proponent of using PC-Lint from Gimpel. I was surprised to discover that while Lint is smart enough to handle string concatenation with printf() etc, it doesn’t do it with user written functions that are designed to accept format strings. For example, the function wr_vstr() referenced above looks like this:

#include <stdarg.h>
#include <stdint.h>
#include <stdio.h>

static void wr_vstr(uint_fast8_t row, uint_fast8_t col, uint_fast8_t width, char const * format, ...)
{
 va_list  args;
 char  buf[MAX_STR_LEN];

 va_start(args, format);
 (void)vsnprintf(buf, MAX_STR_LEN, format, args);     /* buf contains the formatted string */

 wr_str(row, col, buf, width);    /* Call the generic string writer */

 va_end(args);                    /* Clean up. Do NOT omit */
}

I described this technique here. Anyway, if you use the inttypes.h constants like I did above, then you will find that PC-Lint complains loudly.

Final Thoughts

inttypes.h is very useful for writing portable formatted IO with the C99 data types. It’s ugly – but it beats the alternative. I recommend you add it to your bag of tricks.

Configuring hardware – part 3

Wednesday, January 26th, 2011 Nigel Jones

This is the final part in a series on configuring the hardware peripherals in a microcontroller. In the first part I talked about how to set / clear bits in a configuration register, and in the second part I talked about putting together the basic framework for the driver. When I finished part 2, we had got as far as configuring all the bits in the open function. It’s at this point that things get interesting. In my experience the majority of driver problems fall into three areas:

  1. Failing to place the peripheral into the correct mode.
  2. Getting the clocking wrong.
  3. Mishandling interrupts.

I think most people tend to focus on the first item. Personally I have learned that it’s usually better to tackle the above problems in the reverse order.

Mishandling interrupts

Almost all peripheral drivers need interrupt handlers, and these are often the source of many problems.  If you have followed my advice, then at this stage you should have a skeleton interrupt handler for every possible interrupt vector that the peripheral uses.  You should also have an open and close function. A smart thing to do at this stage is to download your code to your debug environment. I then place a break-point on every interrupt handler and then I call  the open function. If the open function merely configures the peripheral, yet does not enable it, then presumably no interrupts should occur. If they do, then you need to find out why and fix the problem.

At this point I now add just enough code to each interrupt handler such that it will clear the source of the interrupt and generate the requisite interrupt acknowledge. Sometimes this is done for you in hardware. In other cases you have to write a surprising amount of code to get the job done. I strongly recommend that you take your time over this stage as getting an interrupt acknowledge wrong can cause you endless problems.

The next stage is to write the enable function, download the code and open and enable the peripheral. This time you need to check that you do get the expected interrupts (e.g. a timer overflow interrupt) and that you acknowledge them correctly. Just as importantly you also need to check that you don’t get an unexpected interrupt (e.g. a timer match interrupt). On the assumption that all is well, then you can be reasonably confident that  there are no egregious errors in your setup of interrupts. At this point you will probably have to further flesh out the interrupt handlers in order to give the driver some limited functionality. Although I’m sure you’ll be tempted to get on with the problem at hand, I recommend that you don’t do this, but rather write code to help tackle the next problem – namely that of clocking verification.

Clocking

Most peripherals use a clock source internal to the microprocessor. Modern processors have multiple clock domains, PLL-based frequency multipliers, and of course multi-level prescalers. As a result it can be a real nightmare trying to get the correct frequency to a peripheral. Even worse, it is remarkably easy to get an approximately correct frequency to a peripheral. This issue can be a real problem with asynchronous communications links, where a 1% error in frequency may be OK with one host and fail with another. As a result I now make it a rule to always try to verify that I am indeed clocking a peripheral with the correct frequency. To do this, there is no substitute for breaking out the oscilloscope or logic analyzer and measuring something. For timers one can normally output the signal on a port pin (even if this is just for verification purposes). For communications links one can simply set up the port to constantly transmit a fixed pattern. For devices such as A2D converters I usually have to resort to toggling a port pin at the start and end of conversion. Regardless of the peripheral, it’s nearly always worth taking the time to write some code to help you verify that the peripheral is indeed being clocked at the correct frequency.

When you are doing this, there are a couple of things to watch out for:

  1. If your processor has an EMI reduction mode, then consider turning it off while performing clocking measurements. The reason for this is that ‘EMI reduction’ is actually achieved by dithering (quasi randomly varying) the clock frequency. Clearly a randomly varying clock isn’t conducive to accurate frequency measurements.
  2. Make sure that your system is indeed being clocked by the correct source. I mention this because some debuggers can provide the clock to the target.

Finally, if you find that you have an occasional problem with a peripheral, then checking that the clocking is precise is always a good place to start.

Mode

At this stage you have done the following:

  1. Considered every bit in every register in your open function.
  2. Verified that you have interrupts set up correctly.
  3. Written the enable function and at least part of the interrupt handler(s).
  4. Verified that you have the correct frequency clocks going to the peripheral.

You should now complete writing the driver. This is where you write the bulk of the application-specific code. Notwithstanding this, I can offer one piece of advice. Probably the single biggest mistake that I have made over the years is to assume that because the driver ‘works’ it must be correct. I will give you a simple example to demonstrate what I mean.

It’s well known that the popular SPI port found on many devices can operate in one of four modes (often imaginatively called Mode 0, Mode 1, Mode 2 & Mode 3). These modes differ based on the phase relationship of the clock and data lines and whether the data are valid on the rising or falling edge of the clock. Thus it’s necessary to study the data sheet of the SPI peripheral to find out its required mode. Let’s assume that after studying the data sheet you conclude that Mode 2 operation is called for – and you implement the code and it works. If you then walk away from the code, then I humbly suggest you are asking for trouble. The reason is that it’s possible that a peripheral will ‘work’ in Mode 2 even though it should be operated in Mode 3 – it ‘works’ only because you are right on the edge of violating the various required setup and hold times. A different temperature or a different chip lot and your code will fall over. It’s for this reason that I strongly recommend that you break out the logic analyzer and carefully compare the signals to what is specified in the data sheet. There is nothing quite like comparing waveforms to what is in the data sheet to give you a warm fuzzy feeling that the driver really is doing its job correctly.

Final Thoughts

Driver writing is hard. Engineers that can take on this task and write clean, fast and correct drivers in a timely manner are immensely valuable to organizations. Thus even if you cringe at the thought of having to write a device driver, you might want to put the effort into learning how to do it – your career will thank you!

Configuring hardware – part 2

Wednesday, December 15th, 2010 Nigel Jones

This is the second in a series on configuring the hardware peripherals in a microcontroller. In the first part I talked about how to set / clear bits in a configuration register. Now while setting bits is an essential part of the problem, it is by no means the most difficult task. Instead, the real problem is this. You need to configure the peripheral, but on examining the data sheet you discover that the peripheral has twenty registers, can operate in a huge number of modes and has multiple interrupt sources. To compound the difficulty, you may not fully understand the task the peripheral performs – and the data sheet appears to have been written by someone who has clearly never written a device driver in their life. If this sounds a lot like what you have experienced, then read on!

When I first started working in embedded systems, I used to dread having to write a device driver. I knew I was in for days, if not weeks of anguish trying to make the stupid thing work. Today I can usually get a peripheral to do what I want with almost no heartache – and in a fraction of the time it used to take me. I do this by following a standard approach that helps minimize various problems that seem to crop up all the time in device drivers. These problems are as follows:

  1. Setting the wrong bits in a register
  2. Failing to configure a register at all.
  3. Setting the correct configuration bits – but in the wrong temporal order.
  4. Interrupts incorrectly handled.

To help minimize these types of problems, this is what I do.

Step 0 – Document *what* the driver is supposed to do

This is a crucial step. If you can’t write in plain English (or French etc) what the driver is supposed to do then you stand no chance of making it work correctly.  This is a remarkably difficult thing to do. If you find that you can’t succinctly and unambiguously describe the driver’s functionality then attempting to write code is futile. I typically put this explanation in the module header block where future readers of the code can see it. An explanation may look something like this.

This is a serial port driver. It is intended to be used on an RS232 line at 38400 baud, 8 data bits, no parity, one stop bit. The driver supports CTS / RTS handshaking. It does not support Xon / Xoff handshaking.

Characters to be transmitted are buffered and sent out under interrupt. If the transmit buffer fills up then incoming characters are dropped.

Characters are received under interrupt and placed in a buffer. When the receive buffer is almost full, the CTS line is asserted. Once the receive buffer has dropped below the low threshold, CTS is negated. If the host ignores the CTS line and continues to transmit then characters received after the receive buffer is full are discarded.

As it stands, this description is incomplete; for example it doesn’t say what happens if a receiver overrun is detected. However you should get the idea.

Incidentally I can’t stress the importance of this step enough. This was the single biggest breakthrough I made in improving my driver writing. This is also the step that I see missing from almost all driver code.

Step 1 – Create standard function outlines

Nearly all drivers need the following functions:

  1. Open function. This function does the bulk of the peripheral configuration, but typically does not activate (enable) the peripheral.
  2. Close function. This is the opposite of the open function in that it returns a peripheral to its initial (usually reset) condition. Even if your application would never expect to close a peripheral it is often useful to write this function as it can deepen your understanding of the peripheral’s functionality.
  3. Start function. This function typically activates the peripheral. For peripherals such as timers, the start function is aptly and accurately named. For more complex peripherals, the start function may be more of an enable function. For example a CAN controller’s start function may start the CAN controller listening for packets.
  4. Stop function. This is the opposite of the start function. Its job is to stop the peripheral from running, while leaving it configured.
  5. Update function(s). These function(s) are highly application specific. For example an ADC peripheral may not need an update function. A PWM channel’s update function would be used to update the PWM depth. A UART’s update function would be the transmit function. In some cases you may need multiple update functions.
  6. Interrupt handler(s). Most peripherals need at least one interrupt handler. Even if you aren’t planning on using an interrupt source, I strongly recommend you put together a function outline for it. The reason will become clear!

At this stage, your driver looks something like this:

/*
 Detailed description of what the driver does goes here
*/

void driver_Open(void)
{
}

void driver_Close(void)
{
}

void driver_Start(void)
{
}

void driver_Stop(void)
{
}

void driver_Update(void)
{
}

__interrupt void driver_Interrupt1(void)
{
}

__interrupt void driver_Interrupt2(void)
{
}

Step 2 – Set up power, clocks, port pins

In most modern processors, a peripheral does not exist in isolation. Many times peripherals need to be powered up, clocks need to be routed to the peripheral, and port pins need to be configured. This step is separate from the configuration of the peripheral itself. Furthermore, documentation on these requirements is often located in non-obvious places – and thus this step is often overlooked. This is an area where I must give a thumbs-up to NXP. At the start of each of their peripheral chapters is a short, clear write-up documenting the ancillary registers that need to be configured for the peripheral to be used. An example is shown below:

Basic Configuration Steps for the SSP

Personally, I usually place the configuration of these registers in a central location which is thus outside the driver. However there is also a case for placing the configuration of these registers in the driver open function. I will address why I do it this way in a separate blog post.

Step 3 – Add all the peripheral registers to the open function

This step is crucial. In my experience a large number of driver problems come about because a register hasn’t been configured. The surest way to kill this potential problem is to open up the data sheet at the register list for the peripheral and simply add all the registers to the open function. For example, here is the register list for the SSP controller on an NXP ARM processor:

Ten registers are listed.  Even though one register is listed as read only, I still add it to the driver_Open function as I may need to read it in order to clear status flags. Thus my open function now becomes this:

void driver_Open(void)
{
 SSP0CR0 = 0;
 SSP0CR1 = 0;
 SSP0DR = 0;
 SSP0SR;            /* Status register - read and discard */
 SSP0CPSR = 0;
 SSP0IMSC = 0;
 SSP0RIS = 0;
 SSP0MIS = 0;
 SSP0ICR = 0;
 SSP0DMACR = 0;
}

At this stage all I have done is ensure that my code is at least aware of the requisite registers.

Step 4 – Arrange the registers in the correct order

For many peripherals, it is important that registers be configured in a specific order. In some cases a register must be partially configured, then other registers must be configured, and then the initial register must be completely configured. There is no way around this, other than to read the data sheet to determine if this ordering exists. I should note that the order that registers appear in the data sheet is rarely the order in which they should be configured. In my example, I will assume that the registers are correctly ordered.

Step 5 – Write the close function

While manufacturers often put a lot of effort into telling you how to configure a peripheral, it’s rare to see information on how to shut a peripheral down. In the absence of this information, I have found that a good starting point is to simply take the register list from the open function and reverse it. Thus the first-pass close function looks like this:

void driver_Close(void)
{
 SSP0DMACR = 0;
 SSP0ICR = 0;
 SSP0MIS = 0;
 SSP0RIS = 0;
 SSP0IMSC = 0;
 SSP0CPSR = 0;
 SSP0DR = 0;
 SSP0CR1 = 0;    
 SSP0CR0 = 0;
}

Step 6 – Configure the bits in the open function

This is the step where you have to set and clear the bits in the registers. If you use the technique that I espoused in part 1 of this series, then your open function will now explicitly consider every bit in every register.  An example of a partially completed open function is shown below:

void driver_Open(void)
{
 SSP0CR0 = ((4U - 1U) << 0) |  /* DSS = 4 bit transfer (min value allowed) */
            (0U << 4) |        /* FRF = 0 ==> SPI format */
            (1U << 6) |        /* CPOL = 1 ==> Clock idles high */
            (1U << 7) |        /* CPHA = 1 ==> Output data valid on rising edge */
            (5U << 8);         /* SCR = 5 to give a division by 6 */

 SSP0CR1 =  (0U << 0) |        /* LBM = 0 ==> no loopback mode */
            (1U << 1) |        /* SSE = 1 ==> SSP0 is enabled */
            (0U << 2) |        /* MS = 0 ==> Master mode */
            (0U << 3);         /* SOD = 0 (don't care as we are in master mode) */

 SSP0DR = 0;
 SSP0SR;            /* Status register - read and discard */
 SSP0CPSR = 0;
 SSP0IMSC = 0;
 SSP0RIS = 0;
 SSP0MIS = 0;
 SSP0ICR = 0;
 SSP0DMACR = 0;
}

Clearly this is the toughest part of the exercise. However at least if you have followed these steps, then you are guaranteed not to have made an error of omission.

This blog posting has got long enough. In the next part of this series, I will address common misconfiguration issues, interrupts etc.