embedded software boot camp

The Limits of Knowledge

November 7th, 2003 by Michael Barr

The practice of engineering has often been likened to a form of art. It is, I think, the art of making scientific tradeoffs. As scientists with a practical, rather than academic or theoretical, focus, we are often challenged to build things on the basis of information at or very near the boundaries of what is known to man.

In virtually all endeavors of engineers, there are unknowns, subtleties, and complexities over which we exercise limited control. The cost, in engineering time and resources, of fully comprehending everything about a system is in some cases unbounded; such a thorough analysis is generally cost-prohibitive at the very least. If the product works, we often can't afford to do much more than ship it and move on to the next project.

Just as tradeoffs are made in the area of features or implementation techniques, so too must tradeoffs be made in the area of knowledge. It is rarely possible to build a saleable product (that will also earn our employer profits) while at the same time completely understanding all of the possible implications of our numerous design and implementation decisions.

Simply put: components fail. And when individual components fail, they can take even carefully designed systems down with them. Such system failures sometimes take the lives of their operators or other people. Catastrophes like these are unfortunate—and bound to increase as people rely increasingly on technological solutions to everyday problems.

The designers of each system must decide how much time and money to spend investigating the dark corners. Those designing pacemakers and airplanes, for example, are responsible for shining the light of knowledge brightly into all corners of their designs; whereas the designers of stereos and televisions can leave a great deal more to chance.

There are, of course, areas of engineering that suffer from the need for thorough analysis but are not profit driven. Manned missions to space, such as those conducted by NASA, are of this nature. Tremendous efforts are made by the engineers working at NASA to understand all of the complexities and potential failure points of the Space Shuttles. Unfortunately, there is likely an unbounded amount of work to be done; these systems have millions of individual components and operate in unforgiving and poorly understood environments. And there’s only limited time to show results.

As the losses of the Challenger and Columbia have demonstrated, sometimes it is a part of a design that is thought to be reasonably well understood that is actually the most dangerous. In both cases, very similar past failures had been observed, documented, and discussed by engineers—yet the true problem and the danger it posed was not fully comprehended until after each catastrophe struck.

I don’t blame the engineers at NASA for the loss of either shuttle; in both cases they knew there was a problem but had too many other, seemingly more important, concerns. I’m willing to let NASA administrators and their overseers decide if managerial mistakes were made and, if so, how to correct them. But all engineers everywhere should learn from NASA’s mission failures: What is the true source of the problem in your system? What danger does it pose? How can you overcome organizational challenges to see the proper solution through?

Verticality

September 25th, 2003 by Michael Barr

More than 6 billion processors of all types (4-bit to 64-bit plus DSPs) were sold in 2002. That astonishing number was actually off about 25% from the record volume of over 8 billion recorded two years earlier. Only about 100 million of those chips (under 2%) became the brains of new PCs, Macs, and UNIX workstations; embedded systems designers account for all the rest.

With about one new processor taking hold per person per year, it would be fair to say embedded systems are everywhere—or soon will be. The technology is only three decades old at this point—imagine the ultimate potential! Applications span the realm of imagination. Think of the number and diversity of insects and you’re on the right track.

There are so many different applications, and thus so many different types of embedded processors, in fact, that it’s becoming increasingly valuable in some circles to break that huge market up into more easily digested chunks. Applications can be grouped roughly into about seven key categories: communications, computer peripherals, industrial controls, military/aerospace, consumer, medical, and automotive, plus the obligatory catchall other/miscellaneous. These are termed vertical markets.

We should give some thought to what it is that ties all of the many vertical markets together. There is significant overlap between the work of designers of embedded systems for applications as seemingly diverse as consumer gadgets and military/aerospace systems. I suspect it is in this overlap that you find your identity as an embedded system designer.

Smart Sensors

July 19th, 2003 by Michael Barr

A lot of media attention is generally focused on the latest gee-whiz processor advances and forecasts of their potential uses and market sizes. But the trend of older technologies coming down in price and thus creating new markets for themselves is sometimes even more exciting. As prices fall, new uses for old components emerge. One example of this trend is the increasing use of so-called “smart sensors,” which have on-board processing.

Being analog components, most sensors are prone to nonlinearities; they also exhibit offset and gain errors. At their outputs, sensors typically have limited dynamic range and high impedance, which make them susceptible to electrical noise as well. As a result, ordinary (dumb?) sensors typically require dedicated external circuitry to perform signal conditioning, error compensation, and filtering. If data is generated in bursts, buffering may be required too.

Unlike their dumb brethren, smart sensors integrate the sensor along with the required buffering and conditioning circuitry in a single enclosure. Circuitry on-board the smart sensor usually consists of data converters, a processor and firmware, and some form of nonvolatile memory. Being processor-based devices, such sensors can be custom-programmed to satisfy specific system requirements and later reprogrammed as needed.

A smart sensor can be easily added to a piece of embedded hardware, say as a single chip or a daughtercard, via a digital interface. Or, as is increasingly common in the field of remote data sensing, a wireless-equipped smart sensor can perform local processing of the raw data and then ship the processed data up to a base station at regular intervals.

The benefits are tremendous. Vendor-supplied firmware on-board a smart sensor can automate the removal of nonlinearities and offset and gain errors from raw sensor readings, thus eliminating the need for custom post-processing at the main processor. The calibration data on a smart sensor can also be stored locally, in nonvolatile memory, so that the sensor module as a whole can be moved and reused without recalibration.
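To make the idea concrete, here is a minimal sketch of the kind of compensation such vendor firmware might perform. The structure layout, field names, and calibration values are all hypothetical; real smart sensors store calibration records in whatever format their vendor defines.

```c
#include <stdint.h>

/* Hypothetical calibration record, as might be stored in the sensor's
 * nonvolatile memory. The fields and fixed-point scheme are illustrative:
 * gain is expressed as the rational number gain_num/gain_den to avoid
 * floating point on a small MCU. */
typedef struct {
    int32_t offset;    /* counts to subtract from the raw ADC reading */
    int32_t gain_num;  /* gain correction, numerator                  */
    int32_t gain_den;  /* gain correction, denominator                */
} cal_record_t;

/* Apply offset and gain correction to one raw ADC sample, returning
 * a compensated reading in corrected counts. */
int32_t sensor_compensate(int32_t raw, const cal_record_t *cal)
{
    return ((raw - cal->offset) * cal->gain_num) / cal->gain_den;
}
```

Because the calibration record travels with the sensor module in its own nonvolatile memory, the host never needs to know these constants at all.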

On-board data processing and local storage also enable new capabilities at the sensor's location, such as the ability to take action without intervention by the host processor. For example, a smart sensor could issue a quick early warning when measured parameters are approaching critical limits, or are changing at an abnormal rate. A sensor could even send a maintenance alert to the main system controller calling for replacement.
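The limit and rate-of-change checks described above might look something like the following sketch. The threshold values and status names are invented for illustration; an actual sensor's firmware would load its limits from configuration stored in nonvolatile memory.

```c
#include <stdint.h>

/* Illustrative, made-up thresholds. */
#define LIMIT_HIGH  900   /* absolute critical limit, in sensor counts    */
#define MAX_DELTA    50   /* largest normal change between two samples    */

typedef enum {
    SENSOR_OK,          /* reading is within normal bounds           */
    SENSOR_WARN_LIMIT,  /* reading has reached the critical limit    */
    SENSOR_WARN_RATE    /* reading is changing at an abnormal rate   */
} sensor_status_t;

/* Classify the latest sample against the previous one. The smart
 * sensor can raise an early warning to the host on anything other
 * than SENSOR_OK, without the host polling raw data. */
sensor_status_t sensor_check(int32_t sample, int32_t prev)
{
    if (sample >= LIMIT_HIGH)
        return SENSOR_WARN_LIMIT;

    int32_t delta = sample - prev;
    if (delta < 0)
        delta = -delta;
    if (delta > MAX_DELTA)
        return SENSOR_WARN_RATE;

    return SENSOR_OK;
}
```

A check this small runs comfortably on the sensor's own processor at every sample, which is exactly what makes host-free early warnings practical.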

Even components as seemingly mundane as these can be made significantly more interesting with a little firmware. As the price of computing power drops, useful new applications for that power do indeed emerge.

Firmer-ware?

June 29th, 2003 by Michael Barr

Too much of the terminology embedded systems engineers use in their everyday oral communications and written documentation is only vaguely defined—at best. For example, terms like mutex, binary semaphore, and semaphore are often interchanged by software developers. In a related context, task, thread, and process are also tossed around as if they all represented the very same construct.

Hardware designers too are terminologically challenged. For example, does the new board need a 1 kb, 1 kB, or 1 KB FIFO? Will it have flash or Flash, or is that flash really an EEPROM? There are also terms like emulator, which are overloaded with multiple meanings and must be expanded to be fully understood.

About a year ago, Jack Ganssle and I teamed up to finally do something about all this linguistic nonsense. We have since combined our years of experience as embedded hardware and software developers and precisely defined more than 2,800 terms used in the field of embedded systems design, development, and testing. The results, called the Embedded Systems Dictionary, should be available from CMP Books about the time you read this.

But this is not meant to be a sales pitch. I bring all this up only as background. You see, at a joint “book launch” attended by Jack and me at the CMP Books booth at the Embedded Systems Conference San Francisco, we had a very interesting conversation with a hardware designer who considered the results of his Verilog coding to be firmware. Neither Jack nor I had encountered such a usage of the term before.

In fact, Jack and I have both often seen the use of the term firmware somewhat restricted to either specifically DSP code or embedded software written entirely in assembly. When compiling the dictionary we agreed, however, that the definition ought properly to include code written in any programming language:

firmware n. Executable software that is stored within ROM. (Usage Note: This term is interchangeable with “embedded software” and sometimes used even when the executable is not stored in ROM.)

But this fellow brought up a very interesting point. At what point does hardware written in Verilog (or VHDL) and compiled into an executable binary format become indistinguishable from software written in C (or any other language) and compiled into an executable binary format? Is the hardware executable "firmer"-ware than the software executable? Or are they both just different flavors of firmware?

What will happen when, ultimately, a special-purpose C compiler can generate hardware? Or when a UML tool can automatically (and optimally) partition the description of a system’s requirements into hardware and software and output a pair of binaries for the processor and the programmable logic in a single-chip platform FPGA?

At that point, will the hardware designers and software developers even be able to distinguish themselves from each other? As the line between hardware and software continues to blur, perhaps it is only the hours we keep and the forms of caffeine we favor that will belie our EE or CS backgrounds. That’s when things will get really interesting.

Distributed Development

May 20th, 2003 by Michael Barr

Though the trend toward overseas development has been brewing for more than a decade, I’ve just lately been noticing a number of IT-sector layoff announcements in the U.S. featuring near-simultaneous announcements of increases in overseas outsourcing by the same companies. It’s not entirely clear if there’s an active migration of engineering jobs from the U.S. to overseas, but there’s certainly a decent case to be made that something like that is happening.

According to the Bureau of Labor Statistics, over 120,000 electrical engineers and computer scientists were unemployed at the end of 2002. That represents almost a three-fold increase in just the past two years, and a near-record unemployment level. Yet even as skilled engineers remained in good supply, companies such as Microsoft, Sun, and HP recently announced major expansions of their overseas development operations.

To be honest, I am not sure what to make of this. I favor free markets and believe in the equality of all people in all nations. I traveled to India in 2001 and was impressed by the entrepreneurial spirit the new engineering jobs have generated there. I’m also pleased that engineers there and in many other parts of the world have increasing job prospects and standards of living.

You may be thinking that outsourcing is obviously a negative trend and that “the American engineer” will suffer. If you’re unemployed right now and are personally affected, hang in there. You’ll almost certainly disagree with what I have to say next, but I’ll say it anyway.

The very technologies we’ve been developing and improving for the past few decades are key enablers of distributed development. As the world becomes more interconnected, it becomes increasingly reasonable to bring together a group of geographically-diverse individuals with the collective skill set needed to get the job done. If some of these minds are on the other side of the world, so be it. If they’ve got the same skills as someone here but will work for a lot less, we’ll lose that job.

But in the long run we'll win too. Increasing standards of living for workers in other parts of the world do more than just take jobs from better-developed countries. Those workers spend the money they make in a variety of ways, and that expands markets. Things also get cheaper here as a result of their labors. The ensuing economic growth creates more opportunities and jobs here too. Unfortunately, the process doesn't happen as quickly or seamlessly as anyone would like, and some individuals do get caught in the crossfire.

Fortunately, U.S. engineers continue to be among the best in the world. Those who continue to improve their skills will always be in high demand. They’ll also be well poised when the global economy eventually does turn up again, which I’m confident it will.