
Trends in Embedded Software Design

Wednesday, April 18th, 2012 Michael Barr

In many ways, the story of my career as an embedded software developer is intertwined with the history of the magazine Embedded Systems Design. When it was launched in 1988, under the original title Embedded Systems Programming (ESP), I was finishing high school. Like the vast majority of people at that time, I had never heard the term “embedded system” or thought much about the computers hidden away inside other kinds of products. Six years later I was a degreed electrical engineer who, like many EEs by that time in the mid-90’s, had a job designing embedded software rather than hardware. Shortly thereafter I discovered the magazine on a colleague’s desk, and became a subscriber and devotee.

The Early Days

In the early 1990s, as now, the specialized knowledge needed to write reliable embedded software was mostly not taught in universities. The only class I’d had in programming was in FORTRAN; I’d taught myself to program in assembly and C through a pair of hands-on labs that were, in hindsight, my only formal education in writing embedded software. It was on the job and from the pages of the magazine, then, that I first learned the practical skills of writing device drivers, porting and using operating systems, meeting real-time deadlines, implementing finite state machines, the pros and cons of languages other than C and assembly, remote debugging and JTAG, and so much more.

In that era, my work as a firmware developer involved daily interactions with Intel hex files, device programmers, tubes of EPROMs with mangled pins, UV erasers, mere kilobytes of memory, 8- and 16-bit processors, in-circuit emulators, and ROM monitors. Databooks were actual books; collectively, they took up whole bookshelves. I wrote and compiled my firmware programs on an HP-UX workstation on my desk, but then had to go downstairs to a lab to burn the chips, insert them into the prototype board, and test and debug via an attached ICE. I remember that on one especially daunting project eight miles separated my compiler and device programmer from the only instance of the target hardware; a single red LED and a dusty oscilloscope were the extent of my debugging toolbox.

Like you I had the Internet at my desk in the mid-90s, but it did not yet provide much useful or relevant information for my work other than via certain FTP sites (does anyone else remember FTPing into sunsite.unc.edu? or Gopher?). The rest was mostly blinking headlines and dancing hamsters; and Amazon was merely the world’s biggest river. There was not yet an Embedded.com or EETimes.com. To learn about software and hardware best practices, I pursued an MSEE and took CS classes at night, and traveled to the Embedded Systems Conferences.

At the time, I wasn’t aware of any books about embedded programming. And every book that I had found on C started with “Hello, World”, only went up in abstraction from there, and ended without ever once addressing peripheral control, interrupt service routines, interfacing to assembly language routines, and operating systems (real-time or other). For reasons I couldn’t explain years later when Jack Ganssle asked me, I had the gumption to think I could write that missing book for embedded C programmers, got a contract from O’Reilly, and did–ending, rather than starting, mine with “Hello, World” (via an RS-232 port).

In 1998, a series of at least three twists of fate spanning four years found me taking a seat next to an empty chair at the speaker’s lunch at an Embedded Systems Conference. The chair’s occupant turned out to be Lindsey Vereen, who was then well into his term as the second editor-in-chief of the magazine. In addition to the book, I’d written an article or two for ESP by that time, and Lindsey had been impressed with my ability to explain technical nuances. When he told me that he was looking for someone to serve as a technical editor, I didn’t realize that the conversation was the first step toward my taking on that role.

Future Trends

Becoming and then staying involved with the magazine, first as technical editor and later as editor-in-chief and contributing editor, has been a highlight of my professional life. I had been a huge fan of ESP and of its many great columnists and other contributors in its first decade. And now, looking back, I believe my work helped make it an even more valuable forum for the exchange of key design ideas, best practices, and industry learning in its second decade. And, though I understand the move away from print towards online publishing and advertising, I am nonetheless saddened to see the magazine come to an end.

Reflecting back on these days long past reminds me that a lot truly has changed about embedded software design. Assembly language is used far less frequently today; C and C++ much more. EPROMs with their device programmers and UV erasers have been supplanted by flash memory and bootloaders. Bus widths and memory sizes have increased dramatically. Expensive in-circuit emulators and ROM monitors have morphed into inexpensive JTAG debug ports. ROM-DOS has been replaced with whatever Microsoft is branding embedded Windows this year. And open-source Linux has done so well that it has limited the growth of the RTOS industry as a whole–and become a piece of technology we all want to master if only for our resumes.

So what does the future hold? What will the everyday experiences of embedded programmers be like in 2020, 2030, or 2040? I see three big trends that will affect us all over those timeframes, each of which has already begun to unfold.

Trend 1: Volumes Finally Shift to 32-bit CPUs

My first prediction is that inexpensive, low-power, highly-integrated microcontrollers–as best exemplified by today’s ARM Cortex-M family–will bring 32-bit CPUs into even the highest volume application domains. The volumes of 8- and 16-bit CPUs will finally decline as these parts become truly obsolete.

Though you may be programming for a 32-bit processor already, it’s still true that 8- and 16-bit processors drive CPU chip sales volumes. I’m referring, of course, to microcontrollers such as those based on the 8051, PIC, and other instruction set architectures dating back 30-40 years. These older architectures remain popular today only because certain low-margin, high-volume applications of embedded processing demand squeezing every penny out of BOM cost.

The limitations of 8- and 16-bit architectures impact embedded programmers in a number of ways. First, there are the awkward memory limitations resulting from limited address bus widths–and the memory banks, segmenting techniques, and other workarounds for going beyond those limitations. Second, these CPUs are much better at decision making than mathematics–they lack the ability to manipulate large integers efficiently and have no floating-point capability. Finally, these older processors frequently lack modern development tools, are unable to run larger Internet-enabled operating systems, such as Linux, and don’t feature the security and reliability protections afforded by an MMU.
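
To make that first point concrete, here is a minimal sketch contrasting banked data access on a hypothetical 8-bit part with the flat addressing of a 32-bit part. The bank-select register address and window geometry are invented for illustration; every vendor’s banking scheme differs in the details.

```c
#include <stdint.h>

/* Hypothetical 8-bit part: only a 16 KB window of external data memory is
 * visible at a time, selected by a bank-select register.  The register
 * address and window geometry are invented for illustration, not taken
 * from any real datasheet. */
#define BANK_SELECT_REG (*(volatile uint8_t *)0xFF00u)
#define BANK_WINDOW     ((volatile uint8_t *)0x4000u)
#define BANK_SIZE       0x4000u

/* Read one byte at a "large" logical address by paging in its bank first. */
static uint8_t banked_read(uint32_t logical_addr)
{
    BANK_SELECT_REG = (uint8_t)(logical_addr / BANK_SIZE); /* select bank */
    return BANK_WINDOW[logical_addr % BANK_SIZE];          /* then access */
}

/* On a 32-bit MCU with a flat address space, the same read needs no paging. */
static uint8_t flat_read(uint32_t addr)
{
    return *(volatile uint8_t *)addr;
}
```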

There will, of course, always be applications that are extremely cost-conscious, so my prediction is not that 8- and 16-bit parts will disappear completely, but that 32-bit microcontrollers, with their improved instruction set architectures and smaller transistor geometries, will win on overall price (including BOM cost as well as power consumption). That will put more than enough computing power into designers’ hands, make our work easier, and help programmers accomplish more in less time.

Trend 2: Complexity Forces Programmers Beyond C

My second prediction is that the days of the C programming language’s dominance in embedded systems are numbered.

Don’t get me wrong, C is a language I know and love. But, as you may know firsthand, C is simply not up to the task of building systems requiring over a million lines of code. Yet the complexity demanded of embedded software keeps driving our systems toward and beyond that million-line mark. At this level of complexity, something has to give.

Additionally, our industry is facing a crisis: the average age of an embedded developer is rapidly increasing and C is generally not taught in universities anymore. Thus, even as the demand for embedded intelligence in every industry continues to increase, the population of skilled and experienced C programmers is on the decline. Something has to give on this front too.

But what alternative language can be used to build real-time software, manipulate hardware directly, and be quickly ported to numerous instruction set architectures? It’s not going to be C++ or Ada or Java, for sure–those have already been tried and found lacking. A brand-new programming language is probably not the answer either, given how many CPU families it would have to support and how many other languages have already been tried.

Thus I predict that tools that are able to reliably generate those millions of lines of C code automatically for us, based on system specifications, will ultimately take over. As an example of a current tool of this sort that could be part of the trend, I direct your attention to Miro Samek’s dandy open source Quantum Platform (QP) framework for event-driven programs and his (optional) free Quantum Modeler (QM) graphical modeling tool. You may not like the idea of auto-generated code today, but I guarantee that once you push a button to generate consistent and correct code from an already expressive statechart diagram, you will see the benefits of the overall structure and be ready to move up a level in programming efficiency.
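
To give a flavor of what that generated code looks like, here is a hand-written sketch of the kind of event-driven, state-machine-structured C that such tools produce. This is not actual QP/QM output; the states, events, and toy blinking-LED behavior are invented purely to show the shape of the structure.

```c
#include <stdint.h>
#include <stdio.h>

/* Events and states for a toy blinking-LED statechart.  A hand-written
 * illustration of the event-driven structure such tools generate; it is
 * not actual QP/QM output. */
typedef enum { EVT_TIMEOUT, EVT_BUTTON } Event;
typedef enum { STATE_OFF, STATE_ON } State;

typedef struct {
    State state;
} Blinky;

static void Blinky_dispatch(Blinky *me, Event e)
{
    switch (me->state) {
    case STATE_OFF:
        if (e == EVT_TIMEOUT) {                     /* transition OFF -> ON */
            printf("LED on\n");
            me->state = STATE_ON;
        }
        break;
    case STATE_ON:
        if (e == EVT_TIMEOUT || e == EVT_BUTTON) {  /* transition ON -> OFF */
            printf("LED off\n");
            me->state = STATE_OFF;
        }
        break;
    }
}

int main(void)
{
    Blinky blinky = { STATE_OFF };
    Blinky_dispatch(&blinky, EVT_TIMEOUT);  /* LED on  */
    Blinky_dispatch(&blinky, EVT_TIMEOUT);  /* LED off */
    return 0;
}
```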

I view C as a reasonable common output language for such tools (given that C can manipulate hardware registers directly and that every processor ever invented has a C compiler). Note that I do expect there to be continued demand for those of us with the skills and interest to fine-tune the performance of the generated code or write device drivers that integrate it more closely with the hardware.

Trend 3: Connectivity Drives Importance of Security

We’re increasingly connecting embedded systems–to each other and to the Internet. You’ve heard the hype (e.g., “Internet of things” and “ubiquitous computing”) and you’ve probably already also put TCP/IP into one or more of your designs. But connectivity has a lot of implications that we are only starting to come to terms with. The most obvious of these is security.

A connected device cannot hide for long behind “security through obscurity,” and so we must design security into our connected devices from the start. In my travels around our industry I’ve observed that the majority of embedded designers are largely unfamiliar with security. Sure, some of you have read about encryption algorithms and know the names of a few. But mostly the embedded community is shooting in the dark as security designers, inside organizations that offer little help in this area. And security is only as strong as the weakest link in the chain.

This situation must change. Just as flash memory has supplanted UV-erasable EPROM, so too will over-the-net patches and upgrades take center stage as a firmware delivery mechanism in coming years and decades. We must architect our systems first to be secure and then to accept trusted downloads, so that our products can keep up in the inevitable arms race against hackers and attackers.
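
To give a flavor of what “accepting trusted downloads” means in code, here is a sketch of an image-acceptance check. The header layout, the provisioned public key, and the verify_signature() routine are hypothetical placeholders; a real design would use a vetted crypto library and would cover the header fields with the signature as well.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of accepting only trusted firmware downloads.  The image header
 * layout, DEVICE_PUBLIC_KEY, and verify_signature() are hypothetical
 * placeholders, not a real vendor's image format or crypto API. */

#define SIGNATURE_LEN 64u

typedef struct {
    uint32_t magic;                      /* identifies a firmware image    */
    uint32_t version;                    /* used to block rollback         */
    uint32_t payload_len;                /* bytes of code/data that follow */
    uint8_t  signature[SIGNATURE_LEN];   /* signature over the payload     */
} ImageHeader;

extern const uint8_t DEVICE_PUBLIC_KEY[32];   /* provisioned at manufacture */
extern uint32_t current_firmware_version(void);

/* Hypothetical binding to a real crypto library's signature check. */
extern bool verify_signature(const uint8_t *public_key,
                             const uint8_t *message, size_t message_len,
                             const uint8_t *signature);

bool firmware_image_is_trusted(const ImageHeader *hdr, const uint8_t *payload)
{
    if (hdr->magic != 0x46574D47u) {                  /* not our image type */
        return false;
    }
    if (hdr->version <= current_firmware_version()) { /* refuse rollback    */
        return false;
    }
    /* Install only images signed by the manufacturer's private key. */
    return verify_signature(DEVICE_PUBLIC_KEY,
                            payload, hdr->payload_len,
                            hdr->signature);
}
```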

And That’s a Wrap

Whatever the future holds, I am certain that embedded software development will remain an engaging and challenging career. And you’ll still find me writing about the field at https://embeddedgurus.com/barr-code and http://twitter.com/embeddedbarr.

Out of the Bottle

Sunday, September 16th, 2001 Michael Barr

The genie is really out of the bottle this time. As first reported in the newspaper Komsomolskaya Pravda and later by ABCNews.com, Russian engineer and entrepreneur Dmitri Zhurin recently invented a talking bottle cap. This is a bottle cap that looks like any other, but houses a tiny battery-powered embedded system that speaks to those gathered around the bottle for a drink.

Why would anyone want a talking bottle cap? For several reasons, according to the inventor. First, because Russians like to drink, but don’t like to drink alone. Initially, the voice in the cap offers only generally helpful instructions like “pour.” As additional drinks are taken from the bottle, however, the cap’s performance gets livelier, ultimately providing a friendly group of incorporeal drinking companions. The second reason for a talking bottle cap is that it can help out with the toasting duties. It seems Russians mostly drink in communal rounds at parties, with a toast preceding each round. Coming up with a large number of toasts in one evening can be a real challenge for the host or hostess.

Clearly, these problems are so compelling that the market had to respond with a solution—hence Mr. Zhurin’s Vodka Genie. Now, whether or not consumers will actually buy the Vodka Genie is an interesting question. What’s more interesting to me, though, is that here’s yet another place where no one expected computing power to turn up—and yet it has.

It would be a major understatement to say that in 1943, when Thomas Watson commissioned the oft-cited study that concluded there was a total world market for five computers and that IBM would make all of them, no one could have imagined computers inside bottle caps. The first commercial microprocessor wouldn’t even be developed for three more decades!

Yet look at where we are less than three decades after the invention of that first microprocessor. We are literally surrounded by computing engines—the vast majority of them unrecognized as such. And this is only the beginning of the next era in computing—the embedded era.

So much attention is focused on developments at the high-end—cheaper 32-bit processors, entire systems-on-a-chip, increasing memory budgets, connectivity to the world, millions of lines of code, the need for better development languages and debug tools, and increasing use of off-the-shelf software components—that it’s easy to forget that there are ongoing developments at the low-end of the spectrum too.

Processors of the 16-, 8-, and 4-bit varieties get cheaper every year too. New family members add more on-chip memory and peripherals for no additional cost. Some even include specialized capabilities, like speech synthesis, for niche applications like talking dolls and bottle caps. All for just pennies a chip, at the current 4-bit price points.

The point I’m trying to make is that 4- and 8-bit micros will never be replaced. In truth, the number of new opportunities for simple 4-bit micros is expanding at a much faster rate than the number of new uses for 32-bitters. And it only takes a single engineer and a few months to design and build a disposable product like the Vodka Genie, which could very well sell millions before this reaches your mailbox.

NOTE: this article was originally published on 6/2/01.