Archive for the ‘Hardware’ Category

EEPROM Wear Leveling

Wednesday, July 5th, 2017 Nigel Jones

A problem that occurs from time to time is the need for an embedded system to keep track of the number of doodads it has processed since it was put into service. Clearly you can’t keep this value in RAM, as it will be lost when power is removed. Thus the obvious solution is to store the count in EEPROM. However, common EEPROM specifications are 10^5 guaranteed write cycles, with 10^6 typical write cycles. If you expect to process, say, 20 million doodads during the life of your product, you clearly can’t do something like this:

__eeprom uint32_t doodad_cntr;
...
doodad_cntr++;

The above assumes that the __eeprom memory space qualifier is sufficient for the compiler to generate the requisite code sequences to access EEPROM.

The first part of the solution is to define an array in EEPROM for holding the doodad counter. Thus:

__eeprom uint32_t doodad_cntr[DOODAD_ARRAY_SIZE];

Since the number of doodads processed increases monotonically, on power up one simply searches the array for the last used entry – i.e. the entry in the array with the biggest value – and records its offset in a static variable. Thus:

static uint8_t doodad_array_offset;
doodad_array_offset = find_last_used_entry();
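For illustration, here is a minimal sketch of what find_last_used_entry() might look like under the scheme described so far (the function body is my assumption – only its name appears above – and handling of a completely blank array is omitted):

static uint8_t find_last_used_entry(void) {
    uint8_t last = 0U;
    uint32_t biggest = 0U;

    /* The counts increase monotonically, so the most recently written
       entry is simply the one holding the biggest value */
    for (uint8_t i = 0U; i < DOODAD_ARRAY_SIZE; ++i) {
        if (doodad_cntr[i] >= biggest) {
            biggest = doodad_cntr[i];
            last = i;
        }
    }
    return last;
}

Note that once the 1’s complement refinement described below is adopted, the comparison inverts: the last used entry becomes the one with the smallest raw (complemented) value.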

The next time a doodad is processed, the write occurs at the next location beyond doodad_array_offset in doodad_cntr[]. This simple technique immediately gives one a guaranteed factor of DOODAD_ARRAY_SIZE more writes than using a single variable. Whilst this is nice, there are a number of other things that one can do to improve the situation. These are all based on the observation that the erased state of EEPROM is a ‘1’ and not a zero. Thus a ‘blank’ EEPROM actually consists of 0xFF … 0xFF, without a zero in sight. To take advantage of this, rather than writing the actual doodad count value to the array, instead write the 1’s complement of the value. This means that rather than writing, for example, the value 0x00000001 to EEPROM, you’ll instead write 0xFFFFFFFE. In this case the actual number of bits that have to change state from the erased condition is just 1 rather than 31, resulting in considerably less stress on the cells and potentially increasing the life of the EEPROM. Note that this technique is equivalent to initializing the EEPROM to 0xFFFFFFFF and then decrementing.

Writing the 1’s complement also opens up another potential improvement. EEPROM is often byte or word addressable. Furthermore, programming the EEPROM is usually a 2-step process consisting of erasing the memory location (i.e. erasing it to 0xFF) and then programming the erased location with the desired value. Using 1’s complement values, it should be apparent that for most of the time many of the bytes in the EEPROM will be at 0xFF. If we combine this with the fact that most of the time only one of the four bytes in a uint32_t changes value when the uint32_t is incremented, then we can dramatically reduce the actual number of erases / writes performed on the EEPROM. The code to do this looks a bit like this:

typedef union {
    uint8_t cnt_bytes[4U];
    uint32_t counter;
} eeprom_union_t;
__eeprom eeprom_union_t doodad_cntr[DOODAD_ARRAY_SIZE];

void increment_doodad(void) {
    eeprom_union_t current_value;

    current_value.counter = doodad_cntr[doodad_array_offset].counter;
    if (++doodad_array_offset >= DOODAD_ARRAY_SIZE) {
        doodad_array_offset = 0U;
    }
    --current_value.counter;  //1's complement increment
    for (uint8_t i = 0U; i < 4U; ++i) {
        //Are the bytes different? If not, then do nothing
        if (doodad_cntr[doodad_array_offset].cnt_bytes[i] != current_value.cnt_bytes[i]) {
            //Need to erase the byte. erase() is assumed to take the address of the EEPROM byte
            erase(&doodad_cntr[doodad_array_offset].cnt_bytes[i]);
            //See if we need to actually program now
            if (current_value.cnt_bytes[i] != 0xFFU) {
                doodad_cntr[doodad_array_offset].cnt_bytes[i] = current_value.cnt_bytes[i];
            }
        }
    }
}

Clearly this is quite a bit more work. However, given that EEPROM erase / write times are in the 2 – 10 ms arena, the time taken to execute the above code is insignificant in comparison. Given that it will also save millions of EEPROM writes over the life of the product, the extra complexity is well worth it.
Finally, if your product needs to keep track of an enormous number of doodads, then you’ll likely have no choice other than to keep track of when EEPROM cells go bad. This is typically done by assigning another area of EEPROM that has DOODAD_ARRAY_SIZE bits – e.g. __eeprom uint8_t bad_cell[DOODAD_ARRAY_SIZE / 8U]. These bits are erased to 1. Once you detect that a write has failed at a certain cell in doodad_cntr[], you change the corresponding bit in bad_cell[] from ‘1’ to a ‘0’ and the cell is considered bad for all time. Obviously you then have to interrogate the bad_cell[] array to determine whether the code should use a specific cell.
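As a sketch, the bookkeeping might look something like this (the helper names are my assumptions, and the division assumes 8 bits per byte – round the array size up if DOODAD_ARRAY_SIZE is not a multiple of 8):

#include <stdint.h>
#include <stdbool.h>

__eeprom uint8_t bad_cell[DOODAD_ARRAY_SIZE / 8U];  /* One bit per cell, erased to all 1's */

static void mark_cell_bad(uint8_t cell) {
    bad_cell[cell / 8U] &= (uint8_t)~(1U << (cell % 8U));  /* 1 -> 0: bad for all time */
}

static bool cell_is_bad(uint8_t cell) {
    return (bad_cell[cell / 8U] & (1U << (cell % 8U))) == 0U;
}

The increment code then simply skips over any offset for which cell_is_bad() returns true.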

Novel uses for RTC registers

Sunday, December 4th, 2011 Nigel Jones

For obvious reasons, I usually write about things that are widely applicable. Today I’m going to deviate from this slightly and talk about the real time clock registers / RAM that are available on some (many?) ARM processors, as well as, I suspect, a number of other architectures. An excerpt from the NXP data sheet is shown in the figure below.

NXP LPC17xx RTC registers

Of most interest is the column labeled ‘Reset Value’. You will notice that the values highlighted in red are ‘NC’. ‘NC’ means that the registers are unaffected by a reset condition. Furthermore, these particular registers may also be powered from an alternate power source such that they are also unaffected by loss of power. So why is this useful? Well I have found a couple of uses for them beyond the obvious and intended applications of maintaining date and time (for the RTC registers) and for providing non-volatile R/W storage for the General Purpose registers.

Communicating with the Bootstrap loader

Most embedded applications today contain a bootstrap loader (BSL). Although there are several ways of entering the BSL from the main application, the most common that I see is to force a watchdog reset, resulting in the CPU rebooting and starting up in the BSL. This technique is pretty good and I use it all the time. However I usually find it necessary for the main application to communicate some information to the BSL. For example, at a minimum the BSL needs to know that it has been intentionally entered for the purposes of performing a firmware update (as opposed to being entered as a result of a genuine watchdog failure). Under some circumstances I also need to pass other information to the BSL such as the port that initiated the update. In the past I have tended to pass this information via EEPROM. However with these registers available to me, I now use them for this task.
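As a hedged sketch (the LPC_RTC->GPREG0 / GPREG1 names are from NXP’s LPC17xx CMSIS header – substitute whatever battery-backed registers your part offers – and the magic value is arbitrary), the hand-off might look like this:

#include <stdint.h>
#include <stdbool.h>
#include "LPC17xx.h"                       /* Provides LPC_RTC->GPREGn on this family */

#define BSL_REQUEST_MAGIC   0xB00710ADUL   /* Value unlikely to appear by accident */

/* Main application: request a firmware update and let the watchdog reset us */
void request_firmware_update(uint32_t update_port) {
    LPC_RTC->GPREG0 = BSL_REQUEST_MAGIC;   /* Battery backed - survives the reset */
    LPC_RTC->GPREG1 = update_port;         /* Tell the BSL which port started the update */
    for (;;) {
        ;                                  /* Stop kicking the watchdog; a reset follows */
    }
}

/* Bootstrap loader: was this reset an intentional entry? */
bool bsl_update_requested(void) {
    bool requested = (LPC_RTC->GPREG0 == BSL_REQUEST_MAGIC);
    LPC_RTC->GPREG0 = 0UL;                 /* Consume the flag so a later genuine
                                              watchdog reset isn't misread */
    return requested;
}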

Debugging

If you are plagued by your system taking unexpected resets, then it’s a relatively trivial matter to write some debug code that writes context information to these registers. For example, most RTOSes provide mechanisms for calling user functions prior to performing a task switch. Within this function it’s trivial to write the task ID to one of these registers. Then it’s just a matter of putting a breakpoint on the entry into main() to discover which task was running when the reset occurred. Once you have it narrowed down to a task, you can then instrument functions in a similar manner.
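For example, with FreeRTOS the task-switch trace hook can be pressed into service like this (a sketch only: GPREG4 is again an LPC17xx assumption, and LPC_RTC must be visible wherever tasks.c picks up FreeRTOSConfig.h):

/* In FreeRTOSConfig.h: record the TCB of the task being switched in */
#define traceTASK_SWITCHED_IN()   (LPC_RTC->GPREG4 = (uint32_t)pxCurrentTCB)

After an unexpected reset, a breakpoint at the top of main() lets you read GPREG4 and map the stored pointer back to the offending task.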

I suspect I may find other uses for these registers in the future. Suffice to say I’d really like it if this feature became widespread across all processor families.

An embedded systems hardware test – a collaborative effort

Friday, February 25th, 2011 Nigel Jones

Regular readers will probably be aware that back in 2000 I wrote an article for Embedded Systems Programming magazine entitled A ‘C’ Test: The 0x10 Best Questions for Would-be Embedded Programmers. In the intervening years I have often thought that it would be entertaining / useful to come up with a similar test – except this time I would be testing someone’s hardware knowledge. As a result, over the years I have collected a number of fun questions, which I intend to use in the forthcoming article. However, it occurred to me that I have a lot of very smart readers and that collectively we could put together a far better test than I could on my own. Thus I’m looking for your hardware questions! Before you flood me with your suggestions, here are the ground rules:

  1. Embedded systems design, not hardware design
    The test is intended to test the hardware knowledge of persons writing embedded code. It is NOT a test for persons that will be designing hardware. Thus questions about the minutiae of hardware filter design are not what I’m looking for.
  2. Traps
    The best questions will be examples from your past where someone got into trouble because they didn’t understand something about the hardware that you thought they should have.
  3. Why
    As well as posing the question (and giving the answer!), please explain why you think it’s important that someone should know what you are asking.
  4. Oscilloscope and logic analyzer
    I expect that the questions will cover circuits, processor architectures and tools. While I’m interested in all three, I’m particularly interested in elegant questions that will allow the questioner to determine if the candidate knows how to use an oscilloscope or logic analyzer.
  5. Original
    Please don’t send me any copyrighted or plagiarized material. Links are of course fine. (I mention this because not only is it legally and morally wrong – but I’m also tired of people ripping off my work and claiming it as their own).
  6. Attribution
    If I choose to use your suggestion, then tell me how you’d like it attributed. Anything from full name + email address through to ‘anonymous’ is fine.
  7. Early bird…
    If I get multiple similar suggestions, then the first one received gets the credit.
  8. Fame
    By sending me something you are agreeing to let me publish it. Other than attribution (and the accompanying fame 🙂 ), no other compensation will be given.

Anyway, if you’d like to participate then contact me.

Thanks! I expect that I will publish the article in a few weeks.

Configuring hardware – part 3

Wednesday, January 26th, 2011 Nigel Jones

This is the final part in a series on configuring the hardware peripherals in a microcontroller. In the first part I talked about how to set / clear bits in a configuration register, and in the second part I talked about putting together the basic framework for the driver. When I finished part 2, we had got as far as configuring all the bits in the open function. It’s at this point that things get interesting. In my experience the majority of driver problems fall into three areas:

  1. Failing to place the peripheral into the correct mode.
  2. Getting the clocking wrong.
  3. Mishandling interrupts.

I think most people tend to focus on the first item. Personally I have learned that it’s usually better to tackle the above problems in the reverse order.

Mishandling interrupts

Almost all peripheral drivers need interrupt handlers, and these are often the source of many problems. If you have followed my advice, then at this stage you should have a skeleton interrupt handler for every possible interrupt vector that the peripheral uses. You should also have an open and close function. A smart thing to do at this stage is to download your code to your debug environment, place a breakpoint on every interrupt handler and then call the open function. If the open function merely configures the peripheral, yet does not enable it, then presumably no interrupts should occur. If they do, then you need to find out why and fix the problem.

At this point I add just enough code to each interrupt handler such that it will clear the source of the interrupt and generate the requisite interrupt acknowledge. Sometimes this is done for you in hardware. In other cases you have to write a surprising amount of code to get the job done. I strongly recommend that you take your time over this stage, as getting an interrupt acknowledge wrong can cause you endless problems.
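As an illustration, the ‘just enough’ version of a handler for a hypothetical timer overflow interrupt might be no more than the sketch below (the register names and addresses are invented placeholders – substitute your device header’s real definitions):

/* Hypothetical register definitions - use your device header's real names */
#define TIMER_STATUS    (*(volatile uint32_t *)0x40004000u)
#define TIMER_ACK       (*(volatile uint32_t *)0x40004004u)
#define TIMER_OVF_FLAG  (1UL << 0)

__interrupt void timer_Interrupt(void)
{
    uint32_t status = TIMER_STATUS;     /* Identify (and on some parts clear) the source */

    if (status & TIMER_OVF_FLAG)
    {
        TIMER_ACK = TIMER_OVF_FLAG;     /* Write-1-to-clear style acknowledge */
    }
    /* No real work yet - the sole purpose is to prove the acknowledge sequence */
}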

The next stage is to write the enable function, download the code, and open and enable the peripheral. This time you need to check that you do get the expected interrupts (e.g. a timer overflow interrupt) and that you acknowledge them correctly. Just as importantly, you also need to check that you don’t get an unexpected interrupt (e.g. a timer match interrupt). On the assumption that all is well, you can be reasonably confident that there are no egregious errors in your setup of interrupts. At this point you will probably have to further flesh out the interrupt handlers in order to give the driver some limited functionality. Although I’m sure you’ll be tempted to get on with the problem at hand, I recommend that you don’t do this, but rather write code to help tackle the next problem – namely that of clocking verification.

Clocking

Most peripherals use a clock source internal to the microprocessor. Modern processors have multiple clock domains, PLL based frequency multipliers, and of course multi-level prescalers. As a result it can be a real nightmare trying to get the correct frequency to a peripheral. Even worse, it is remarkably easy to get an approximately correct frequency to a peripheral. This can be a real problem with asynchronous communications links, where a 1% error in frequency may be OK with one host and fail with another. As a result I now make it a rule to always try and verify that I am indeed clocking a peripheral with the correct frequency. To do this, there is no substitute for breaking out the oscilloscope or logic analyzer and measuring something. For timers one can normally output the signal on a port pin (even if this is just for verification purposes). For communications links one can simply set up the port to constantly transmit a fixed pattern. For devices such as A2D converters I usually have to resort to toggling a port pin at the start and end of conversion. Regardless of the peripheral, it’s nearly always worth taking the time to write some code to help you verify that the peripheral is indeed being clocked at the correct frequency.
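For a UART, the ‘fixed pattern’ test can be as simple as the sketch below (uart_SendByte() is an assumed blocking transmit function from the driver under test). Transmitting 0x55 puts alternating ones and zeros on the line, so at 38400 baud each bit should measure roughly 26 µs on the scope:

/* Temporary test code - transmit 0x55 (01010101b) forever so that the bit
   time, and hence the actual baud rate, can be measured directly */
void uart_ClockTest(void)
{
    for (;;)
    {
        uart_SendByte(0x55U);
    }
}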

When you are doing this, there are a couple of things to watch out for:

  1. If your processor has an EMI reduction mode, then consider turning it off while performing clocking measurements. The reason for this is that ‘EMI reduction’ is actually achieved by dithering (quasi randomly varying) the clock frequency. Clearly a randomly varying clock isn’t conducive to accurate frequency measurements.
  2. Make sure that your system is indeed being clocked by the correct source. I mention this because some debuggers can provide the clock to the target.

Finally, if you find that you have an occasional problem with a peripheral, then checking that the clocking is precise is always a good place to start.

Mode

At this stage you have done the following:

  1. Considered every bit in every register in your open function.
  2. Verified that you have interrupts set up correctly.
  3. Written the enable function and at least part of the interrupt handler(s).
  4. Verified that you have the correct frequency clocks going to the peripheral.

You should now complete writing the driver. This is where you write the bulk of the code, and it is of course highly application specific. Notwithstanding this, I can offer one piece of advice. Probably the single biggest mistake that I have made over the years is to assume that because the driver ‘works’ it must be correct. I will give you a simple example to demonstrate what I mean.

It’s well known that the popular SPI port found on many devices can operate in one of four modes (often imaginatively called Mode 0, Mode 1, Mode 2 & Mode 3). These modes differ in the phase relationship of the clock and data lines and in whether the data are valid on the rising or falling edge of the clock. Thus it’s necessary to study the data sheet of the SPI peripheral to find out its required mode. Let’s assume that after studying the data sheet you conclude that Mode 2 operation is called for – and you implement the code and it works. If you then walk away from the code, I humbly suggest you are asking for it. The reason is that it’s possible for a peripheral to ‘work’ in Mode 2 even though it should be operated in Mode 3 – it ‘works’ only because you are right on the edge of violating the various required setup and hold times. A different temperature or a different chip lot and your code will fall over. It’s for this reason that I strongly recommend that you break out the logic analyzer and carefully compare the signals to what is specified in the data sheet. There is nothing quite like comparing waveforms to the data sheet to give you a warm fuzzy feeling that the driver really is doing its job correctly.
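For reference, the conventional mapping between the mode numbers and the CPOL / CPHA bits is shown below; having it to hand makes the comparison between the logic analyzer trace and the data sheet much quicker:

/* Conventional SPI mode numbering: mode = (CPOL << 1) | CPHA                             */
#define SPI_MODE_0  0U   /* CPOL = 0, CPHA = 0: clock idles low,  sample on rising edge  */
#define SPI_MODE_1  1U   /* CPOL = 0, CPHA = 1: clock idles low,  sample on falling edge */
#define SPI_MODE_2  2U   /* CPOL = 1, CPHA = 0: clock idles high, sample on falling edge */
#define SPI_MODE_3  3U   /* CPOL = 1, CPHA = 1: clock idles high, sample on rising edge  */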

Final Thoughts

Driver writing is hard. Engineers that can take on this task and write clean, fast and correct drivers in a timely manner are immensely valuable to organizations. Thus even if you cringe at the thought of having to write a device driver, you might want to put the effort into learning how to do it – your career will thank you!

Configuring hardware – part 2

Wednesday, December 15th, 2010 Nigel Jones

This is the second in a series on configuring the hardware peripherals in a microcontroller. In the first part I talked about how to set / clear bits in a configuration register. Now while setting bits is an essential part of the problem, it is by no means the most difficult task. Instead the real problem is this: you need to configure the peripheral, but on examining the data sheet you discover that the peripheral has twenty registers, can operate in a huge number of modes and has multiple interrupt sources. To compound the difficulty, you may not fully understand the task the peripheral performs – and the data sheet appears to have been written by someone who has clearly never written a device driver in their life. If this sounds a lot like what you have experienced, then read on!

When I first started working in embedded systems, I used to dread having to write a device driver. I knew I was in for days, if not weeks of anguish trying to make the stupid thing work. Today I can usually get a peripheral to do what I want with almost no heartache – and in a fraction of the time it used to take me. I do this by following a standard approach that helps minimize various problems that seem to crop up all the time in device drivers. These problems are as follows:

  1. Setting the wrong bits in a register.
  2. Failing to configure a register at all.
  3. Setting the correct configuration bits – but in the wrong temporal order.
  4. Handling interrupts incorrectly.

To help minimize these types of problems, this is what I do.

Step 0 – Document *what* the driver is supposed to do

This is a crucial step. If you can’t write in plain English (or French etc) what the driver is supposed to do then you stand no chance of making it work correctly.  This is a remarkably difficult thing to do. If you find that you can’t succinctly and unambiguously describe the driver’s functionality then attempting to write code is futile. I typically put this explanation in the module header block where future readers of the code can see it. An explanation may look something like this.

This is a serial port driver. It is intended to be used on an RS232 line at 38400 baud, 8 data bits, no parity, one stop bit. The driver supports CTS / RTS handshaking. It does not support Xon / Xoff handshaking.

Characters to be transmitted are buffered and sent out under interrupt. If the transmit buffer fills up then incoming characters are dropped.

Characters are received under interrupt and placed in a buffer. When the receive buffer is almost full, the CTS line is asserted. Once the receive buffer has dropped below the low threshold, CTS is negated. If the host ignores the CTS line and continues to transmit then characters received after the receive buffer is full are discarded.

As it stands, this description is incomplete; for example it doesn’t say what happens if a receiver overrun is detected. However you should get the idea.

Incidentally I can’t stress the importance of this step enough. This was the single biggest breakthrough I made in improving my driver writing. This is also the step that I see missing from almost all driver code.

Step 1 – Create standard function outlines

Nearly all drivers need the following functions:

  1. Open function. This function does the bulk of the peripheral configuration, but typically does not activate (enable) the peripheral.
  2. Close function. This is the opposite of the open function in that it returns a peripheral to its initial (usually reset) condition. Even if your application would never expect to close a peripheral, it is often useful to write this function as it can deepen your understanding of the peripheral’s functionality.
  3. Start function. This function typically activates the peripheral. For peripherals such as timers, the start function is aptly and accurately named. For more complex peripherals, the start function may be more of an enable function. For example a CAN controller’s start function may start the CAN controller listening for packets.
  4. Stop function. This is the opposite of the start function. Its job is to stop the peripheral from running, while leaving it configured.
  5. Update function(s). These function(s) are highly application specific. For example an ADC peripheral may not need an update function. A PWM channel’s update function would be used to update the PWM depth. A UART’s update function would be the transmit function. In some cases you may need multiple update functions.
  6. Interrupt handler(s). Most peripherals need at least one interrupt handler. Even if you aren’t planning on using an interrupt source, I strongly recommend you put together a function outline for it. The reason will become clear!

At this stage, your driver looks something like this:

/*
 Detailed description of what the driver does goes here
*/

void driver_Open(void)
{
}

void driver_Close(void)
{
}

void driver_Start(void)
{
}

void driver_Stop(void)
{
}

void driver_Update(void)
{
}

__interrupt void driver_Interrupt1(void)
{
}

__interrupt void driver_Interrupt2(void)
{
}

Step 2 – Set up power, clocks, port pins

In most modern processors, a peripheral does not exist in isolation. Many times peripherals need to be powered up, clocks need to be routed to the peripheral, and port pins need to be configured. This step is separate from the configuration of the peripheral itself. Furthermore, documentation on these requirements is often located in non-obvious places – and thus this step is often overlooked. This is an area where I must give a thumbs-up to NXP. At the start of each of their peripheral chapters is a short, clear write-up documenting the ancillary registers that need to be configured for the peripheral to be used. An example is shown below:

Basic Configuration Steps for the SSP

Personally, I usually place the configuration of these registers in a central location outside the driver. However there is also a case for placing the configuration of these registers in the driver open function. I will address why I do it this way in a separate blog post.
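As a sketch of the shape of this step only (every register and bit name below is a placeholder standing in for whatever the ‘basic configuration’ table above calls out for your particular part), the ancillary set-up for an SSP might look like this:

void ssp_AncillarySetup(void)
{
    /* Placeholder names - substitute the registers and bit masks from your
       part's user manual */
    POWER_CONTROL_REG |= SSP_POWER_ON_BIT;      /* 1. Power: switch the SSP block on       */
    PERIPH_CLOCK_REG  |= SSP_PCLK_SELECT_BITS;  /* 2. Clock: route / divide the SSP clock  */
    PIN_FUNCTION_REG  |= SSP_PIN_FUNC_BITS;     /* 3. Pins: hand SCK, MISO, MOSI and SSEL
                                                      over from GPIO to the SSP peripheral */
}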

Step 3 – Add all the peripheral registers to the open function

This step is crucial. In my experience a large number of driver problems come about because a register hasn’t been configured. The surest way to kill this potential problem is to open up the data sheet at the register list for the peripheral and simply add all the registers to the open function. For example, here is the register list for the SSP controller on an NXP ARM processor:

Ten registers are listed.  Even though one register is listed as read only, I still add it to the driver_Open function as I may need to read it in order to clear status flags. Thus my open function now becomes this:

void driver_Open(void)
{
 SSP0CR0 = 0;
 SSP0CR1 = 0;
 SSP0DR = 0;
 SSP0SR;            /* Status register - read and discard */
 SSP0CPSR = 0;
 SSP0IMSC = 0;
 SSP0RIS = 0;
 SSP0MIS = 0;
 SSP0ICR = 0;
 SSP0DMACR = 0;
}

At this stage all I have done is ensure that my code is at least aware of the requisite registers.

Step 4 – Arrange the registers in the correct order

For many peripherals, it is important that registers be configured in a specific order. In some cases a register must be partially configured, then other registers must be configured, and then the initial register must be completely configured. There is no way around this, other than to read the data sheet to determine if this ordering exists. I should note that the order that registers appear in the data sheet is rarely the order in which they should be configured. In my example, I will assume that the registers are correctly ordered.

Step 5 – Write the close function

While manufacturers often put a lot of effort into telling you how to configure a peripheral, it’s rare to see information on how to shut a peripheral down. In the absence of this information, I have found that a good starting point is to simply take the register list from the open function and reverse it. Thus the first pass close function looks like this:

void driver_Close(void)
{
 SSP0DMACR = 0;
 SSP0ICR = 0;
 SSP0MIS = 0;
 SSP0RIS = 0;
 SSP0IMSC = 0;
 SSP0CPSR = 0;
 SSP0DR = 0;
 SSP0CR1 = 0;    
 SSP0CR0 = 0;
}

Step 6 – Configure the bits in the open function

This is the step where you have to set and clear the bits in the registers. If you use the technique that I espoused in part 1 of this series, then your open function will now explicitly consider every bit in every register.  An example of a partially completed open function is shown below:

void driver_Open(void)
{
 SSP0CR0 = ((4 - 1) << 0) |    /* DSS = 4 bit transfer (min value allowed) */
            (0U << 4) |        /* SPI format */
            (1U << 6) |        /* CPOL = 1 => Clock idles high */
            (1U << 7) |        /* CPHA = 1 => Output data valid on rising edge */
            (5U << 8);         /* SCR = 5 to give a division by 6 */

 SSP0CR1 =  (0U << 0) |        /* LBM = 0 ==> no loopback mode */
            (1U << 1) |        /* SSE = 1 ==> SSP0 is enabled */
            (0U << 2) |        /* MS = 0 ==> Master mode */
            (0U << 3);         /* SOD = 0 (don't care as we are in master mode) */

 SSP0DR = 0;
 SSP0SR;            /* Status register - read and discard */
 SSP0CPSR = 0;
 SSP0IMSC = 0;
 SSP0RIS = 0;
 SSP0MIS = 0;
 SSP0ICR = 0;
 SSP0DMACR = 0;
}

Clearly this is the toughest part of the exercise. However at least if you have followed these steps, then you are guaranteed not to have made an error of omission.

This blog posting has got long enough. In the next part of this series, I will address common misconfiguration issues, interrupts etc.