embedded software boot camp

The perils of overloading

February 4th, 2008 by Nigel Jones

This post is coming to you from Sweden – a very fine country that I heartily recommend visiting if you get the chance. (If you’re wondering why I’m in Sweden – I’m here on business, as one of my clients is located in Gothenburg.) Anyway, the fact that I’m in Sweden is relevant to this post, as to get here I had to put myself at the mercy of United Airlines. Now, the fact that the flight over here was less than perfect wouldn’t be news to any of you who travel regularly. However, the reason that the flight was a disaster is relevant, as I’ll now try to explain…

Upon arrival at the United check-in desk at Dulles airport, I was greeted by an array of self-check-in kiosks, with a total of one real live human being to take care of baggage check-in. Thinking myself to be computer savvy, I negotiated the check-in kiosk with ease, only to be told that:

  1. I had to see the human in order to check my bags in, and
  2. The system was unable to assign me a seat and that seat assignment would be done at the gate.

The first instruction was par for the course, while the second instruction I found to be very strange. Anyway, I shrugged my shoulders and went over to the sole person working the desk. There was one gentleman in front of me. This gentleman, not unreasonably, asked if he could use some of his frequent flier miles to upgrade to business class. “No problem,” said the United employee, who proceeded to rattle the keys. After 5 minutes, he announced that although the system was showing that seats were available in business class, the computer refused to allow him to assign a seat. This was the second clue that things were heading south in a hurry. It then took the clerk another 10 minutes to wait-list the gentleman (giving a total processing time of 15 minutes). Although it’s possible the clerk was incompetent, I got the impression that he really knew what he was doing, and was just being stymied by the system.

Anyway, I checked my bag in and proceeded to the gate. When I got to the gate, I found another 100+ passengers who also had no seat assignments. When I eventually got called to the counter, I found a harried woman with a sea of boarding passes printed out in front of her. She was manually searching through them trying to find my name. Eventually she found it and handed it over. My nature being what it is, I politely inquired as to the reason for this astonishingly strange system of assigning seats and issuing boarding passes. Apparently this was the opportunity that the clerk had been waiting for to vent her frustration, as she gladly explained to me that the powers that be had overbooked the flight.

And so, my gentle reader, we come to the point of this post. It was apparent that the United system was unable to handle an overbooked flight correctly, and rather than degrade gracefully, had all but collapsed. At which point I started making some snarky comments to myself about database programmers: surely all database programmers worked in that field because they couldn’t handle the rigors of the embedded / real-time world, and any half-decent embedded systems person would never make such an elementary mistake.

It was then that I had my epiphany. We make the same mistake in the embedded world all the time. When was the last time you used RMA (rate monotonic analysis) to guarantee that all your tasks would meet their scheduling deadlines? How many failures of embedded systems are caused by overloading (or over-scheduling) and the failure to correctly assign task priorities? How many times do weird things happen in your code that you just shrug off as “one of those things”? In short, I found myself cutting a break to the poor sod who wrote United’s code. I was still ticked off, though!


A new way to tell if something is an embedded system

January 27th, 2008 by Nigel Jones

Periodically someone tries to come up with a definition of an embedded system. For example, there is an excellent and oft-cited definition here. What got me thinking about this topic is the latest gadget I love to hate – my Verizon Treo phone running Windows Mobile. A few years ago, there would have been no doubt that a cell phone was an embedded system. Today, the Treo, the iPhone, etc. are all running versions of traditional computer operating systems, and are much more computer-like than they are embedded systems. So the question is: what are they – an embedded system or a computer?

Well, today I offer a new, simple test to tell if these devices are fish or fowl (foul is perhaps more appropriate), to wit:

“Is the device a pain in the neck to use?” If the answer is “yes”, then it’s a computer. My Treo is a computer. Enough said!


Electronic Component Footprints

January 18th, 2008 by Nigel Jones

As well as writing code and designing hardware, I also do PCB layout. I started doing this after I discovered it was often faster for me to lay out a board myself than to try to convey all my requirements to a board layout person. If you’ve ever done PCB layout, you’ll know that getting information about a device’s footprint is a real pain. What you may not know is that this is a major source of errors on printed circuit boards, resulting in costly board re-spins and project delays. These errors come about for several reasons.

  1. Getting the information. Many manufacturers include packaging information directly in the part’s data sheet. Other manufacturers (TI being a principal offender) instead just cite a packaging part number and say something trite like “See our website for the latest information”. One is then forced into searching a gigantic website to discover that packaging style WP8 is what the rest of the world calls SO8. I don’t mind them decoupling the packaging information from the part data sheet. I just wish they’d get with the program and discover something called hyperlinking (it’s only been around since the 1960s).
  2. Footprints are usually dimensioned as if they were mechanical parts. By this I mean that the drawing is rendered the way most mechanical drawings are. Unfortunately, the layout package I use (and, I suspect, most of the others) treats a footprint as an electrical component. This results in all the pads being on an X-Y grid, with pin 1 usually at (0,0). In practice, this means one has to perform a series of elementary trigonometric calculations to work out exactly where to place the pads. As you may imagine, this is a major source of error in footprint creation. The frustrating thing for me is that it would be trivial for the mechanical person providing the footprint information to have their CAD system generate it in a directly usable form.
  3. Many suppliers of mechanical components now offer solid models of their parts on their websites. Typically the models are offered in a number of formats (Pro/ENGINEER, SolidWorks, etc.). Thus, if I’m using, say, a valve from this supplier, I don’t have to create the model. I just download it and incorporate it into my working drawing. Why then do suppliers of electronic components not do the same thing for part footprints? I suspect the answer is that no one ever selected a part for a design because it made the layout person’s job easier.
  4. Lastly, you may be unaware that the footprint for a surface-mount part differs depending on whether it is to be reflow-soldered or wave-soldered. Some companies (mainly in Europe) supply both footprints. Too many, however, simply supply the reflow footprint and leave it up to the lowly layout person to work out what the footprint should be for wave soldering.

So what’s the point of this screed? Well, our industry is all about getting products to market as soon as possible at the lowest possible cost. Component manufacturers could help their customers (which in turn would help them) achieve this goal by simply providing information that removed the footprint bottleneck.


Omniscient Code Generation

January 13th, 2008 by Nigel Jones

Hi Tech Software has recently been making a lot of noise about its “Omniscient Code Generation”. In a nutshell, the technology appears to defer code generation until the entire program has been compiled, and then look at everything before generating the final object code. The end result is a dramatically more compact (and presumably faster running) program image. I haven’t had a chance to play with the compiler yet (in part because it’s still in beta testing). If they have done what they claim, then Hi Tech should be commended. On my list of things to check out about the technology will be:

  • Is the technology smart enough to track function calls via function pointers? If it is, then this is truly a neat piece of technology. If instead this is one of the limitations of the product, then its usefulness to me has just plummeted.
  • Does the technology also track function calls from within interrupts? My experience is that interrupt handling is still the poor relation of compiler technology. If Hi Tech does this, then I’ll be impressed.
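To make the function-pointer concern concrete, consider the classic dispatch-table idiom – a contrived example of my own, not anything from the Hi Tech documentation. At the indirect call site, no single translation unit can tell which handler runs, so an “omniscient” back end must track every function whose address is ever taken, or else give up and assume the worst:

```c
typedef int (*handler_t)(int);

int double_it(int x) { return 2 * x; }
int negate_it(int x) { return -x; }

/* Both handlers have their address taken here, so a whole-program
 * analysis must treat both as possible targets of the indirect call
 * below when it builds the call graph and allocates registers.      */
handler_t table[] = { double_it, negate_it };

int dispatch(int which, int x)
{
    return table[which](x);   /* target is unknowable from this call site alone */
}
```

If the compiler handles this correctly (and the analogous problem for code reachable only from interrupts), the “omniscient” label is earned; if it simply forbids or pessimizes such constructs, that’s a limitation worth knowing about up front.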

Also of interest to me is how other compiler manufacturers will respond. Keil has performed global register coloring on its 8051 compiler for years. I suspect that the Hi Tech approach is a step beyond this, so there’s a chance that Keil will finally be knocked from its #1 position in 8051 code generation. IAR offers a multi-unit compilation option with some of its compilers. However, this option isn’t integrated into its Embedded Workbench, so it’s practically useless. With Hi Tech offering compilers for ARM, PIC and MSP430, I can see this really creating a burst of competition in the industry. Excellent!


An unfortunate consequence of a 32-bit world

August 29th, 2007 by Nigel Jones

Back in the bad old days when I was a lad, one learned about microprocessors by programming 8-bit devices in assembly language. In fact, I can still remember my first lab assignment – namely, to multiply two 8-bit unsigned quantities together to get a 16-bit result (without the use of a hardware multiplier, of course). One of the indelible lessons that comes from doing an exercise such as this is that it can take many instructions to perform even the most innocuous of high-level language statements.

I mention this because today I was looking at some code written by a young engineer who was recommended to me. In examining his code, I noticed the following construct:

int ivar;    /* shared between the main loop and the ISR; note: not declared volatile */

void some_function(void)
{
    ...
    ++ivar;
    ...
}

interrupt void isr_handler(void)
{
    ...
    --ivar;
    ...
}

Notwithstanding the fact that ivar should have been declared volatile, the most egregious mistake here was the assumption that the statement ++ivar is an atomic operation. Now, if one is used to working on 32-bit machines, the idea of incrementing an integer being anything other than an atomic operation is of course ludicrous. However, in the 8- or 16-bit world where many of us labor in the embedded space, the idea of incrementing an integer being an atomic operation is equally ridiculous: a 16-bit increment on an 8-bit CPU is a multi-instruction read-modify-write sequence, and the interrupt can strike between any two of those instructions. The trouble with bugs like this is that they are difficult to spot, and will only rear their heads after months or even years of operation.
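For completeness, here is the standard repair on a small CPU, sketched as plain C. Declare the shared variable volatile, and wrap the main-loop increment in a critical section. ENTER_CRITICAL()/EXIT_CRITICAL() are placeholder macros of my own – on a real target they would map to the compiler’s interrupt-disable/restore intrinsics – and the non-standard interrupt qualifier is omitted so the sketch compiles on a desktop:

```c
/* Placeholder critical-section macros: on a real target these would map to
 * the compiler's interrupt-disable/restore intrinsics (cli/sei, DINT/EINT,
 * etc.); here they are no-ops so the sketch compiles on a desktop machine. */
#define ENTER_CRITICAL()  ((void)0)
#define EXIT_CRITICAL()   ((void)0)

volatile int ivar;          /* shared between main-loop code and the ISR */

void some_function(void)
{
    ENTER_CRITICAL();       /* make the multi-instruction ++ appear atomic */
    ++ivar;
    EXIT_CRITICAL();
}

/* On most small parts the ISR itself needs no guard, since an interrupt
 * of the same priority cannot preempt it. */
void isr_handler(void)
{
    --ivar;
}
```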

So, is this a case of an incompetent individual? Although nominally yes, I suspect that the real problem is that he was raised on a diet of big CPUs. Perhaps the universities could do these engineers a favor, and throw away the ARM-based evaluation boards and replace them with an 8051-based system.
