Archive for the ‘Hardware’ Category

Configuring hardware – part 1.

Saturday, November 13th, 2010 Nigel Jones

One of the more challenging tasks in embedded systems programming is configuring the hardware peripherals in a microcontroller. This task is challenging because:

  1. Some peripherals are stunningly complex. If you have ever configured the ATM controller on a PowerQUICC processor then you know what I mean!
  2. The documentation is often poor. See for example just about any LCD controller’s data sheet.
  3. The person configuring the hardware (i.e. me in my case) has an incomplete understanding of how the peripheral works.
  4. One often has to write the code before the hardware is available for testing.
  5. Manufacturer-supplied example code is stunningly bad.

I think I could extend this list a little further – but you get the idea. Anyway, I have struggled with this problem for many years. Now while it is impossible to devise a methodology that guarantees correct results, I have settled on a system that really does seem to make this task easier. In the first part of this series I will address the most elemental task – how to set the requisite bits in a register.

By way of example, consider the ADC12CTL0 register – a control register for the ADC12 module found in the MSP430 series of microcontrollers. The task at hand is how to write the code to set the desired bits. Now in some ways this is trivial. However, if you are serious about your work, then your concern isn’t just setting the correct bits – but doing so in such a manner that it is crystal clear to someone else (normally a future version of yourself) what you have done – and why. With this as a premise, let’s look at some of the ways you can tackle this problem.

Magic Number

Probably the most common methodology I see is the magic number approach. For example:

ADC12CTL0 = 0x362C;

This method is an abomination. It’s error prone and very difficult to maintain. Having said that, there is one case in which this approach is useful – and that’s when one wants to shut down a peripheral, in which case I may use the construct:

ADC12CTL0 = 0;   /* Return register to its reset condition */

Other than that, I really can’t see any justification for this approach.

Bit Fields

Even worse than the magic number approach is to attempt to impose a bit field structure onto the register. While at first glance this may be appealing – don’t do it! Now while I think bitfields have their place, I certainly don’t recommend them for mapping onto hardware registers. The reason, in a nutshell, is that the C standard gives the compiler vendor carte blanche in how bitfields are implemented: the order of allocation within a storage unit, whether a bit-field may straddle a storage unit boundary, and so on, are all implementation-defined. For a passionate exposition on this topic, see this comment on the aforementioned post. Anyway, this approach is so bad I refuse to give an example of it!

Defined fields – method 1

This method is quite good. The idea is that one defines the various fields. The definitions below are taken from an IAR supplied header file:

#define ADC12SC             (0x001)   /* ADC12 Start Conversion */
#define ENC                 (0x002)   /* ADC12 Enable Conversion */
#define ADC12TOVIE          (0x004)   /* ADC12 Timer Overflow interrupt enable */
#define ADC12OVIE           (0x008)   /* ADC12 Overflow interrupt enable */
#define ADC12ON             (0x010)   /* ADC12 On/enable */
#define REFON               (0x020)   /* ADC12 Reference on */
#define REF2_5V             (0x040)   /* ADC12 Ref 0:1.5V / 1:2.5V */
#define MSC                 (0x080)   /* ADC12 Multiple Sample Conversion */
#define SHT00               (0x0100)  /* ADC12 Sample Hold 0 Select 0 */
#define SHT01               (0x0200)  /* ADC12 Sample Hold 0 Select 1 */
#define SHT02               (0x0400)  /* ADC12 Sample Hold 0 Select 2 */
#define SHT03               (0x0800)  /* ADC12 Sample Hold 0 Select 3 */
#define SHT10               (0x1000)  /* ADC12 Sample Hold 1 Select 0 */
#define SHT11               (0x2000)  /* ADC12 Sample Hold 1 Select 1 */
#define SHT12               (0x4000)  /* ADC12 Sample Hold 1 Select 2 */
#define SHT13               (0x8000)  /* ADC12 Sample Hold 1 Select 3 */

With these definitions, one can now write code that looks something like this:

ADC12CTL0 = ADC12TOVIE + ADC12ON + REFON + MSC;

However, there is a fundamental problem with this approach. To see what I mean, examine the comment associated with the define REF2_5V. You will notice that in this case, setting the bit to zero selects a 1.5V reference. Thus in my example code, I have implicitly set the reference voltage to 1.5V. If one examines the code at a later date, then it’s unclear if I intended to select a 1.5V reference – or whether I just forgot to select any reference – and ended up with the 1.5V by default. One possible way around this is to add the following definition:

#define REF1_5V             (0x000)   /* ADC12 Ref = 1.5V */

One can then write:

ADC12CTL0 = ADC12TOVIE + ADC12ON + REF1_5V + REFON + MSC;

Clearly this is an improvement. However, there is nothing stopping you from writing:

ADC12CTL0 = ADC12TOVIE + ADC12ON + REF1_5V + REFON + MSC + REF2_5V;

Don’t laugh – I have seen this done. There is also another problem with the way the fields have been defined, and that concerns fields which are more than 1 bit wide. For example, the field SHT0x defines the number of clock cycles for which the sample and hold is active. It’s a 4-bit field, and thus has 16 possible combinations. If I need 13 clocks of sample and hold, then I have to write code that looks like this:

ADC12CTL0 = ADC12TOVIE + ADC12ON + REF1_5V + REFON + MSC + SHT00 + SHT02 + SHT03;

It’s not exactly clear from the above that I want a sample-and-hold time of 13 clocks. Now clearly one can overcome this problem by having additional defines – and that’s precisely what IAR does. For example:

#define SHT0_0               (0*0x100u)
#define SHT0_1               (1*0x100u)
#define SHT0_2               (2*0x100u)
...
#define SHT0_15             (15*0x100u)

Now you can write:

ADC12CTL0 = ADC12TOVIE + ADC12ON + REF1_5V + REFON + MSC + SHT0_13;

However, if you use this approach you will inevitably end up confusing SHT00 and SHT0_0 – with disastrous and very frustrating results.
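
To see how easy the mix-up is, recall that SHT00 is a single-bit mask (0x0100), whereas SHT0_0 is a field value of zero. Here is a minimal sketch of the kind of error that results; the stand-in register declaration is only there so the fragment compiles outside an MSP430 project, and the defines are the IAR values shown above.

#include <stdint.h>

static volatile uint16_t ADC12CTL0;      /* Stand-in for the real memory-mapped register */

#define ADC12ON   (0x010)                /* ADC12 On/enable */
#define REFON     (0x020)                /* ADC12 Reference on */
#define SHT00     (0x0100)               /* Bit mask for bit 8 of the SHT0x field */
#define SHT0_0    (0*0x100u)             /* SHT0x field value of 0 */

void adc_config(void)
{
    /* Intended: the shortest sample-and-hold setting, i.e. SHT0x field = 0 */
    ADC12CTL0 = ADC12ON + REFON + SHT0_0;    /* Correct: SHT0x = 0 */

    /* The easy mistake: SHT00 looks almost identical, but it is the bit 8 mask */
    ADC12CTL0 = ADC12ON + REFON + SHT00;     /* Wrong: SHT0x = 1 */
}

Both writes compile cleanly, which is precisely the problem.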

Defined fields – method 2

In this method, one defines the bit position of the fields. Thus our definitions now look like this:

#define ADC12SC             (0)   /* ADC12 Start Conversion */
#define ENC                 (1)   /* ADC12 Enable Conversion */
#define ADC12TOVIE          (2)   /* ADC12 Timer Overflow interrupt enable */
#define ADC12OVIE           (3)   /* ADC12 Overflow interrupt enable */
#define ADC12ON             (4)   /* ADC12 On/enable */
#define REFON               (5)   /* ADC12 Reference on */
#define REF2_5V             (6)   /* ADC12 Ref */
#define MSC                 (7)   /* ADC12 Multiple Sample Conversion */
#define SHT0                (8)   /* ADC12 Sample Hold 0 */
#define SHT1                (12)  /* ADC12 Sample Hold 1 */

Our example configuration now looks like this:

ADC12CTL0 = (1 << ADC12TOVIE) + (1 << ADC12ON) + (1 << REFON) + (0 << REF2_5V) + (1 << MSC) + (13 << SHT0);

Note that zero is given to the REF2_5V argument and 13 to the SHT0 argument. This was my preferred approach for a long time. However it had certain practical weaknesses:

  1. It relies upon the manifest constants being correct / me using the correct manifest constant. You only need to spend a few hours tracking down a bug that ends up being an incorrect #define to know how frustrating this can be.
  2. It still doesn’t really address the issue of fields that aren’t set. That is, was it my intention to leave them at zero, or was it an oversight?
  3. There is often a mismatch between what the compiler vendor calls a field and what appears in the data sheet. For example, the data sheet shows that the SHT0 field is called SHT0x. However the compiler vendor may choose to simply call this SHT0, or SHT0X etc. Thus I end up fighting compilation errors because of trivial naming mismatches.
  4. When debugging, I often end up looking at a window that tells me that ADC12CTL0 bit 6 is set – and I’m stuck trying to determine what that means. (I recognize that some debuggers will symbolically label the bits – however it isn’t universal).

Eschewing definitions

We now come to my preferred methodology. What I wanted was a method that has the following properties:

  1. It requires me to explicitly set / clear every bit.
  2. It is not susceptible to errors in definition / use of #defines.
  3. It allows easy interaction with a debugger.

This is what I ended up with:

ADC12CTL0 =
 (0u << 0) |        /* Don't start conversion yet */
 (0u << 1) |        /* Don't enable conversion yet */
 (1u << 2) |        /* Enable conversion-time-overflow interrupt */
 (0u << 3) |        /* Disable ADC12MEMx overflow-interrupt */
 (1u << 4) |        /* Turn ADC on */
 (1u << 5) |        /* Turn reference on */
 (0u << 6) |        /* Reference = 1.5V */
 (1u << 7) |        /* Automatic sample and conversion */
 (13u <<  8) |      /* Sample and hold of 13 clocks for channels 0-7 */
 (0u << 12);        /* Sample and hold of don't care clocks for channels 8-15 */

There are multiple things to note here:

  1. I have done away with the various #defines. At the end of the day, the hardware requires that bit 5 be set to turn the reference on. The best way to ensure that bit 5 is set is to explicitly set it. Now this thinking tends to fly in the face of conventional wisdom. However, having adopted this approach I have found it to be less error prone – and a lot easier to debug / maintain.
  2. Every bit position is explicitly set or cleared. This forces me to consider every bit in turn and decide what its appropriate value should be.
  3. The layout is important. By looking down the columns, I can check that I haven’t missed any fields. Just as important, many debuggers present the bit fields of a register as a column just like this. Thus it’s trivial to map what you see in the debugger to what you have written.
  4. The value being shifted has a ‘u’ appended to it. This keeps the MISRA folks happy – and it’s a good habit to get into.
  5. The comments are an integral part of this approach.

There are still a few problems with this approach. This is what I have discovered so far:

  1. It can be tedious with a 32-bit register.
  2. Lint will complain about shifting zero (as it considers it pointless). It will also complain about shifting anything zero places (for the same reason). You therefore have to suppress these complaints; the following macros do the trick:
#define LINT_SUPPRESS(n)  /*lint --e{n} */
LINT_SUPPRESS(835)        /**< Inform Lint not to worry about zero being given as an argument to << */
LINT_SUPPRESS(845)        /**< Inform Lint not to worry about the right side of the | operator being zero */

In the next part of this article I will describe how one can extend this technique to make configuring peripherals a lot less painful.

DigiView Logic Analyzer

Wednesday, October 6th, 2010 Nigel Jones

Today is one of those rare days on which I recommend a product. I only do this when I find a product that has genuinely made my life easier, and which by extension I think will also make your life easier. The product in question is a DigiView logic analyzer. Now the fact that logic analyzers are useful tools should not be news to you. Indeed if you have been in this business long enough you will no doubt remember the bad old days of debugging code by decoding execution traces on a logic analyzer. That being said, I almost stopped using logic analyzers because they were big, expensive, difficult to set up and highly oriented towards bus-based systems. Given that I had my own consulting company with limited cash, limited space and a propensity to work on non-bus-based systems (i.e. single-chip microcontrollers), it’s hardly surprising that a logic analyzer wasn’t part of my toolbox.

This state of affairs persisted for a number of years until I obtained, via a convoluted route, a DigiView DV1-100. This is a USB-powered, hand-sized box, with 18 channels at 100 MHz. Its successor (the DV3100) sells for $499. The device sat on my shelf for a while until I decided to give it a spin one day. Since then I have found it to be an indispensable tool. Interestingly I find I use it the most when implementing the myriad of synchronous protocols that seem to exist on peripheral ICs today. While it is of course very useful for getting the interfaces working, I also find it extremely useful in fine-tuning the interfaces. Via the logic analyzer I can really examine set-up and hold times, clock frequencies, transmission latencies and so on. Doing so has allowed me to dramatically improve the performance of these interfaces in many cases. Indeed, I have had such success in this area that I now routinely hook the analyzer up, even when the interface works the first time. If nothing else it gives me a nice warm fuzzy feeling that the interface is working the way it was designed – and not by luck.

Another area where I find it very useful is when I need to reverse engineer a product. I do this a lot as part of my expert witness work – and it is really quite remarkable how much you can learn from looking at a logic analyzer trace.

Anyway, the bottom line is this. $499 gets you an 18 channel 100 MHz personal logic analyzer that can handle most of the circuitry most of us see on a daily basis. If you value your time at all, then the DigiView will pay for itself the first time you use it. Go hassle your boss to get one.

Hardware vs. firmware naming conventions

Sunday, March 28th, 2010 Nigel Jones

Today’s post is motivated in part by Gary Stringham. Gary is the newest member of EmbeddedGurus and he consults and blogs on what he calls the bridge between hardware and firmware. Since I work on both hardware and firmware, I’m looking forward to what Gary has to say in the coming months. Anyway, I’d recently read his posting on Early hardware / firmware collaboration when I found myself looking at a fairly complex schematic. The microprocessor had a lot of IO pins, most of which were being used. When I looked at the code to gain insight on how some of the IO was being used I found that the hardware engineer and firmware engineer had adopted completely different naming conventions. For example, what appeared on the schematic as “Relay 6” appeared in the code as “ALARM_RELAY_2”. As a result the only way I could reconcile the schematic and the code was to look at a signal’s port pin assignment on the schematic and then search the code to see what name was associated with that port pin. After I’d done this a few times, I realized I needed a more systematic approach and ended up going through all the port pin assignments in the code and using them to hand mark up the schematic. Clearly this was not only a colossal time waster, it also had the potential for introducing stupid bugs.

So how had this come about? Well if you have ever designed hardware, you will know that naming nets is essentially optional. In other words one can create a perfectly correct schematic without naming any of the nets. Instead all you have to do is ensure that their connectivity is correct. (This is loosely analogous in firmware to referring to variables via their absolute addresses instead of assigning a name to the variable and using it. However, the consequences for the hardware design are nowhere near as dire.) Furthermore, if the engineer does decide to name a net, then in most schematic packages I’ve seen, one is free to use virtually any combination of characters. For example “~LED A” would be a perfectly valid net name – but is most definitely not a valid C variable name. If one throws in the usual issue of numbering things from zero or one (should the first of four LEDs be named LED0 or LED1?), together with hardware engineers’ frequent (and understandable) desire to indicate whether a signal is active low or active high by using some form of naming convention, then one has the recipe for a real mess.

So what’s to be done? Well here are my suggestions:

  1. The hardware team should have a rigorously enforced net-naming convention (in much the same way that most companies have a coding standards manual).
  2. All nets that are used by firmware must be named on the schematic.
  3. The net names must be valid C identifiers.
  4. The firmware must use the identical name to that appearing on the schematic.

Clearly this can be facilitated by having very early meetings between the hardware and firmware teams, such that when the first version of the schematic is released, there is complete agreement on the net names. If you read Gary’s blog post you’ll see that this is his point too – albeit in a slightly different field.
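
To make the suggestions concrete, here is a hypothetical sketch. The net names, the port assignments and the convention of replacing a ‘~’ (active low) marker with an ‘n’ prefix are all invented for illustration; the point is simply that the identifiers in the firmware match the names printed on the schematic, character for character.

#include <stdint.h>

static volatile uint8_t P2OUT;          /* Stand-in for a memory-mapped output port register */

/* These names are assumed to match the schematic net names exactly. */
#define ALARM_RELAY_2   (1u << 3)       /* P2.3 - drives alarm relay 2 */
#define nLED_A          (1u << 5)       /* P2.5 - LED A, active low */

void alarm_on(void)
{
    P2OUT |= ALARM_RELAY_2;             /* Energize the relay */
    P2OUT &= (uint8_t)~nLED_A;          /* LED A is active low, so clear the bit to light it */
}

With this in place, anyone holding the schematic can grep the source for the net name and land directly on the code that drives it.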

Voltage gradients in embedded systems

Sunday, January 24th, 2010 Nigel Jones

Today’s post was prompted by an excellent comment from Phil Ouellette in a recent newsletter from Jack Ganssle. In a nutshell, Phil was advocating strobing switches with an alternating voltage waveform, rather than a direct voltage, in order to minimize corrosion and premature switch failure. This happens to be an area in which I have some experience and so I thought I’d extend the concept a little bit and also give you some food for thought.

The basic idea behind Phil Ouellette’s comments is that if one has a bias voltage between two pins (such as between switch contacts), and if this bias is always in one direction (i.e. DC), then the bias can act so as to drive an electrochemical reaction. The exact results of this electrochemical reaction vary (corrosion, dendrite growth etc.), but the net result is normally the same – namely an unwanted short circuit between two pins.

These problems arise particularly on products that have to operate in humid environments and / or products that have to spend a very long time under power over their expected operational life.

So what can be done about this? Well the most important thing to understand is that it is voltage gradient (volts per meter) that is the driving force in this problem and that furthermore, voltage gradient is a vector quantity and thus its direction is important. With this understanding, it should be clear that to minimize these types of problems, one has to minimize the integral of the voltage gradient over time. To do this one has three basic choices:

  1. Minimize the voltage
  2. Maximize the separation
  3. Modulate the voltage

Let’s take a look at each of these in turn:

Minimize the voltage
Clearly technology is progressing in the right direction for us here, as 5V systems are rapidly becoming extinct and 1.8V systems are becoming more commonplace. Thus, all other things being equal, the voltage gradient between any two pins on a 5V system will be 2.8 times greater than on a 1.8V system. Thus if you are designing a system where corrosion is a concern you will do yourself a big favor by opting for as low a voltage system as you can. However, see the caveat below.

Note also that given that what counts is minimizing the voltage over time, it follows that you can normally improve the system performance by powering down systems that are not needed at any given time. This also of course saves you power and thus is a highly recommended step.

Maximize the separation
This is by far and away the toughest problem. Twenty years ago, ICs ran on 5V and had 0.1 inch lead spacing, giving a maximum voltage gradient between pins of 5 / 0.00254 = 1969 V/m. Today, a typical 1.8V IC has a lead spacing of 0.5 mm giving a maximum voltage gradient of 1.8 / 0.0005 = 3600 V/m. Thus the voltage gradient between pins on a typical IC has gone up – despite the decrease in the operating voltage. Thus if selecting a low voltage part means that you must use a fine lead pitch part, then you are almost certainly shooting yourself in the foot!

Another area where you can increase separation without too much pain is in the selection of passive components. For example, a 1206 resistor has a much lower voltage gradient than an 0402 resistor, and an axially leaded capacitor is usually preferable to a radially leaded device. As a result, when I’m designing systems that have potential corrosion issues I really prefer to use larger components. Of course this can put you into conflict with marketing and production.

Modulate the voltage
The method suggested by Phil Ouellette is reasonably straightforward for something like a switch. However, if one generalizes the problem to all the components on the board, then it becomes a much more complex problem. For example, consider an address bus to an external memory. The most significant address lines will presumably change state at a much lower frequency than the low-order address lines. Indeed it is not uncommon for a system that has booted up and is running normally to be in a situation where the top two or three address lines never change state. Now if they are all at the same state (for example high), then no voltage gradient exists between them and so there is no problem. However if the lines are, say, High – Low – High, then the voltage gradient is as bad as it gets – and you have a potential problem. There are of course various solutions to this particular problem. The easiest solution is to ensure that the most significant address lines are routed to non-contiguous pins on the memory chip (sometimes known as address scrambling) so that high-frequency address lines are adjacent to low-frequency lines. A much more difficult problem is to link the application so that all address lines are guaranteed to toggle at a reasonable frequency…

Another interesting example comes when one performs microcontroller port pin assignment. Normally one has little choice about certain pin assignments – but for the remainder one has free rein. The next time you find yourself in this position you may want to try performing the assignment so as to minimize the voltage gradients. I think you will find it to be a very challenging task.

Anyway, I hope this has given you some food for thought. Please let us know via the comments section if you have faced any of these types of problems and how you solved them.


Hardware costs versus development costs

Thursday, December 24th, 2009 Nigel Jones

Earlier this year I posted about how to solve the problem of PIC stack overflow. As part of that article I asked why anybody uses a PIC anyway when there are superior architectures such as the AVR available. Well, various people have linked to the posting and so I get a regular stream of visitors to it, some of whom weigh in on the topic. The other day, one visitor offered as a reason for using the PIC the fact that they are cheap. Is this the case, I asked myself? So I did some rough comparisons of the AVR & 8-bit PIC processors – and sure enough PIC processors are cheaper to buy. For example, comparing the ATmega168P vs the PIC16F1936 (two roughly equivalent processors – the AVR has more memory and a lot more throughput, the PIC has more peripherals) I found that Digikey was selling the AVR for $2.60 (in quantities of 980) and the PIC for $1.66 (in quantities of 1600). A considerable difference – or is it?

Most of the products I work on have sales volumes of 1000 pieces a year, with a product life of 5 years. Thus if I chose the AVR for such a product, then my client would be paying approximately $1000 a year more for, say, 5 years. Applying a very modest 5% discount rate, this translates to a Net Present Value of $4,329.
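
For the curious, here is a quick sketch of that arithmetic. It assumes the round figure of $1,000 per year over 5 years at a 5% discount rate, as above; nothing else about it is specific to this example.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double extra_cost_per_year = 1000.0;   /* Approximate extra parts cost per year for the AVR */
    const double discount_rate       = 0.05;     /* 5% */
    const int    years               = 5;
    double       npv                 = 0.0;
    int          year;

    /* Discount each year's extra cost back to today and sum the results. */
    for (year = 1; year <= years; year++)
    {
        npv += extra_cost_per_year / pow(1.0 + discount_rate, year);
    }

    printf("NPV = $%.2f\n", npv);                /* Prints NPV = $4329.48 */
    return 0;
}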

This is when it gets interesting. Does the AVR’s architecture allow a programmer to be more productive? Well, clearly this is a somewhat subjective matter. However my sense is that the AVR does indeed allow greater programmer productivity. The reasons are as follows:

  1. The AVR’s architecture lends itself by design to being coded in a HLL. The PIC does not. As a result, programming the PIC in a HLL is always a challenge and one is far more likely to have to resort to assembly language – with the attendant drop in productivity.
  2. The AVR’s inherent higher processing speed means that one has to be less concerned with fine tuning code in order to get the job done. Fine tuning code can be very time consuming.
  3. The AVR’s greater code density means that one is less likely to be concerned with making the application fit in the available memory (something that can be extremely time-consuming when things get tight).
  4. The AVR’s superior interrupt structure means that interrupt handling is far easier. Again interrupts are something that can consume inordinate amounts of time when things aren’t working.

Now if one is a skilled PIC programmer and a novice on the AVR, then your experience will probably offset what I have postulated are inherent inefficiencies in the PIC architecture. However what about someone such as myself who is equally comfortable in both arenas? In this case the question becomes – how many more days will it take to code up the PIC versus the AVR and what do those days cost?

Of course if you are a salaried employee, then your salary is ‘hidden’. However, when you are a consultant, the cost of extra days is clearly visible to the client. In this case, if using an AVR reduces the number of development days by even a few, the cost difference between the two processors rapidly dwindles to nothing. Thus the bottom line is – I’m not sure that PICs are cheaper. However I suspect that in many organizations what counts is the BOM cost of a product – and perhaps this finally explains why PICs are more popular than AVRs.

As always, I’m interested in your thoughts.