
Trends in Embedded Software Design

Wednesday, April 18th, 2012 by Michael Barr

In many ways, the story of my career as an embedded software developer is intertwined with the history of the magazine Embedded Systems Design. When it was launched in 1988, under the original title Embedded Systems Programming (ESP), I was finishing high school. Like the vast majority of people at that time, I had never heard the term “embedded system” or thought much about the computers hidden away inside other kinds of products. Six years later I was a degreed electrical engineer who, like many EEs by then in the mid-’90s, had a job designing embedded software rather than hardware. Shortly thereafter I discovered the magazine on a colleague’s desk, and became a subscriber and devotee.

The Early Days

In the early 1990s, as now, the specialized knowledge needed to write reliable embedded software was mostly not taught in universities. The only class I’d had in programming was in FORTRAN; I’d taught myself to program in assembly and C through a pair of hands-on labs that were, in hindsight, my only formal education in writing embedded software. It was on the job and from the pages of the magazine, then, that I first learned the practical skills of writing device drivers, porting and using operating systems, meeting real-time deadlines, implementing finite state machines, the pros and cons of languages other than C and assembly, remote debugging and JTAG, and so much more.

In that era, my work as a firmware developer involved daily interactions with Intel hex files, device programmers, tubes of EPROMs with mangled pins, UV erasers, mere kilobytes of memory, 8- and 16-bit processors, in-circuit emulators, and ROM monitors. Databooks were actual books; collectively, they took up whole bookshelves. I wrote and compiled my firmware programs on an HP-UX workstation on my desk, but then had to go downstairs to a lab to burn the chips, insert them into the prototype board, and test and debug via an attached ICE. I remember that on one especially daunting project eight miles separated my compiler and device programmer from the only instance of the target hardware; a single red LED and a dusty oscilloscope were the extent of my debugging toolbox.

Like you, I had the Internet at my desk in the mid-’90s, but it did not yet provide much information useful or relevant to my work, other than via certain FTP sites (does anyone else remember FTPing into sunsite.unc.edu? or Gopher?). The rest was mostly blinking headlines and dancing hamsters; and Amazon was merely the world’s biggest river. There was not yet an Embedded.com or EETimes.com. To learn about software and hardware best practices, I pursued an MSEE and CS classes at night, and traveled to the Embedded Systems Conferences.

At the time, I wasn’t aware of any books about embedded programming. And every book that I had found on C started with “Hello, World”, only went up in abstraction from there, and ended without ever once addressing peripheral control, interrupt service routines, interfacing to assembly language routines, and operating systems (real-time or other). For reasons I couldn’t explain years later when Jack Ganssle asked me, I had the gumption to think I could write that missing book for embedded C programmers, got a contract from O’Reilly, and did–ending, rather than starting, mine with “Hello, World” (via an RS-232 port).

In 1998, a series of at least three twists of fate spanning four years found me taking a seat next to an empty chair at the speaker’s lunch at an Embedded Systems Conference. The chair’s occupant turned out to be Lindsey Vereen, who was then well into his term as the second editor-in-chief of the magazine. In addition to the book, I’d written an article or two for ESP by that time, and Lindsey had been impressed with my ability to explain technical nuances. When he told me that he was looking for someone to serve as a technical editor, I didn’t realize the conversation was the first step toward my eventually filling that role.

Future Trends

Becoming and then staying involved with the magazine, first as technical editor and later as editor-in-chief and contributing editor, has been a highlight of my professional life. I had been a huge fan of ESP and of its many great columnists and other contributors in its first decade. And now, looking back, I believe my work helped make it an even more valuable forum for the exchange of key design ideas, best practices, and industry learning in its second decade. And, though I understand the move away from print towards online publishing and advertising, I am nonetheless saddened to see the magazine come to an end.

Reflecting back on these days long past reminds me that a lot truly has changed about embedded software design. Assembly language is used far less frequently today; C and C++ much more. EPROMs with their device programmers and UV erasers have been supplanted by flash memory and bootloaders. Bus widths and memory sizes have increased dramatically. Expensive in-circuit emulators and ROM monitors have morphed into inexpensive JTAG debug ports. ROM-DOS has been replaced with whatever Microsoft is branding embedded Windows this year. And open-source Linux has done so well that it has limited the growth of the RTOS industry as a whole–and become a piece of technology we all want to master if only for our resumes.

So what does the future hold? What will the everyday experiences of embedded programmers be like in 2020, 2030, or 2040? I see three big trends that will affect us all over those timeframes, each of which has already begun to unfold.

Trend 1: Volumes Finally Shift to 32-bit CPUs

My first prediction is that inexpensive, low-power, highly-integrated microcontrollers–as best exemplified by today’s ARM Cortex-M family–will bring 32-bit CPUs into even the highest volume application domains. The volumes of 8- and 16-bit CPUs will finally decline as these parts become truly obsolete.

Though you may already be programming for a 32-bit processor, 8- and 16-bit processors still drive CPU chip sales volumes. I’m referring, of course, to microcontrollers such as those based on the 8051, PIC, and other instruction set architectures dating back 30 to 40 years. These older architectures remain popular today only because certain low-margin, high-volume applications of embedded processing demand squeezing every penny out of BOM cost.

The limitations of 8- and 16-bit architectures impact embedded programmers in a number of ways. First, there are the awkward memory limitations resulting from limited address bus widths, and the memory banks, segmenting techniques, and other workarounds for going beyond them. Second, these CPUs are much better at decision making than mathematics: they cannot manipulate large integers efficiently and have no floating-point capability. Finally, these older processors frequently lack modern development tools, are unable to run larger Internet-enabled operating systems, such as Linux, and don’t offer the security and reliability protections afforded by an MMU.
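For a concrete taste of the first limitation, here is a minimal sketch of the banking gymnastics an 8-bit part can require just to read a byte beyond its native address space. The bank-select latch address below is hypothetical, not taken from any real device; on a 32-bit processor, the same access is a single pointer dereference.

```c
#include <stdint.h>

/* Hypothetical bank-select latch; the address and bank layout are
 * made up for illustration, not taken from any real 8-bit part. */
#define BANK_SELECT (*(volatile uint8_t *)0x00FFu)

/* Read one byte of "far" memory beyond the 64 KB native address space. */
uint8_t far_read(uint8_t bank, volatile const uint8_t *p)
{
    uint8_t saved = BANK_SELECT;  /* remember the currently mapped bank  */
    BANK_SELECT = bank;           /* map the target bank into the window */
    uint8_t value = *p;           /* ordinary dereference in the window  */
    BANK_SELECT = saved;          /* restore the previous mapping        */
    return value;
}
```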

There will, of course, always be applications that are extremely cost-conscious, so my prediction is not that 8- and 16-bit parts will disappear completely, but that 32-bit microcontrollers, with their improved instruction set architectures and smaller transistor geometries, will increasingly win on overall price (BOM cost as well as power consumption). That will put sufficient computing power into every designer’s hands, make our work easier, and help programmers accomplish more in less time.

Trend 2: Complexity Forces Programmers Beyond C

My second prediction is that the days of the C programming language’s dominance in embedded systems are numbered.

Don’t get me wrong: C is a language I know and love. But, as you may know firsthand, C is simply not up to the task of building systems requiring over a million lines of code, and the demanded complexity of embedded software keeps driving our systems past that threshold. At this level of complexity, something has to give.

Additionally, our industry is facing a crisis: the average age of an embedded developer is rapidly increasing and C is generally not taught in universities anymore. Thus, even as the demand for embedded intelligence in every industry continues to increase, the population of skilled and experienced C programmers is on the decline. Something has to give on this front too.

But what alternative language can be used to build real-time software, manipulate hardware directly, and be quickly ported to numerous instruction set architectures? It’s not going to be C++ or Ada or Java, for sure–as those have already been tried and found lacking. A new programming language is probably not the answer either, across so many CPU families and with so many other languages already tried.

Thus I predict that tools that are able to reliably generate those millions of lines of C code automatically for us, based on system specifications, will ultimately take over. As an example of a current tool of this sort that could be part of the trend, I direct your attention to Miro Samek’s dandy open source Quantum Platform (QP) framework for event-driven programs and his (optional) free Quantum Modeler (QM) graphical modeling tool. You may not like the idea of auto-generated code today, but I guarantee that once you push a button to generate consistent and correct code from an already expressive statechart diagram, you will see the benefits of the overall structure and be ready to move up a level in programming efficiency.
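To make that idea concrete, below is a minimal hand-rolled sketch of the kind of statechart “housekeeping” code such a tool emits from a diagram. To be clear, this is not actual QP or QM output or API; the blinky state machine, its signals, and the led_on()/led_off() action functions are hypothetical stand-ins.

```c
/* Hypothetical signals and states for a blinking-LED statechart. */
typedef enum { EVT_TIMEOUT, EVT_BUTTON } Signal;
typedef struct { Signal sig; } Event;
typedef enum { STATE_OFF, STATE_ON } State;
typedef struct { State state; } Blinky;

/* The designer's low-level "action code", assumed defined elsewhere. */
extern void led_on(void);
extern void led_off(void);

/* The dispatch "housekeeping" a tool would generate from the diagram. */
void blinky_dispatch(Blinky *me, const Event *e)
{
    switch (me->state) {
    case STATE_OFF:
        if (e->sig == EVT_TIMEOUT) {
            led_on();               /* entry action of the "on" state */
            me->state = STATE_ON;   /* transition drawn on the chart  */
        }
        break;
    case STATE_ON:
        if (e->sig == EVT_TIMEOUT || e->sig == EVT_BUTTON) {
            led_off();              /* entry action of the "off" state */
            me->state = STATE_OFF;
        }
        break;
    }
}
```

The dispatch structure is what the tool produces mechanically and consistently from the diagram; the programmer supplies only the action functions.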

I view C as a reasonable common output language for such tools (given that C can manipulate hardware registers directly and that every processor ever invented has a C compiler). Note that I do expect there to be continued demand for those of us with the skills and interest to fine tune the performance of the generated code or write device drivers to integrate it more closely to the hardware.
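That register-level reach is the crux. A minimal sketch of the idiom, with a made-up UART register map standing in for a real datasheet:

```c
#include <stdint.h>

/* Made-up register addresses and bit positions for illustration;
 * the real values come from the specific MCU's datasheet. */
#define UART0_STATUS (*(volatile uint32_t *)0x4000C018u)
#define UART0_DATA   (*(volatile uint32_t *)0x4000C000u)
#define TX_READY     (1u << 5)

void uart_putc(char c)
{
    while (!(UART0_STATUS & TX_READY)) {
        /* busy-wait until the transmitter can accept another byte */
    }
    UART0_DATA = (uint32_t)(uint8_t)c;
}
```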

Trend 3: Connectivity Drives Importance of Security

We’re increasingly connecting embedded systems–to each other and to the Internet. You’ve heard the hype (e.g., “Internet of things” and “ubiquitous computing”) and you’ve probably already also put TCP/IP into one or more of your designs. But connectivity has a lot of implications that we are only starting to come to terms with. The most obvious of these is security.

A connected device cannot hide for long behind “security through obscurity” and so we must design security into our connected devices from the start. In my travels around our industry I’ve observed that the majority of embedded designers are largely unfamiliar with security. Sure, some of you have read about encryption algorithms and know the names of a few. But mostly the embedded community is shooting in the dark as security designers, within organizations that aren’t much help. And security is only as strong as the weakest link in the chain.

This situation must change. Just as flash memory has supplanted UV-erasable EPROM, so too will over-the-net patches and upgrades take center stage as a download mechanism in coming years and decades. We must architect our systems first to be secure and then to accept trusted downloads, so that our products can keep up in the inevitable arms race against hackers and attackers.
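To sketch what accepting trusted downloads can look like in code, here is a minimal outline of a signature-checked update path. The crypto_verify() and flash_program() functions, the key storage, and the address are hypothetical placeholders for a vetted cryptographic library and your part’s flash driver.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical primitives: a real design would call a vetted crypto
 * library and keep the public key in immutable (e.g., ROM) storage. */
extern bool crypto_verify(const uint8_t *public_key,
                          const uint8_t *image, size_t len,
                          const uint8_t *signature);
extern const uint8_t device_public_key[];
extern void flash_program(uint32_t addr, const uint8_t *data, size_t len);

#define APP_BASE_ADDR 0x00010000u  /* hypothetical application region */

/* Accept a downloaded image only after its signature checks out. */
bool apply_update(const uint8_t *image, size_t len, const uint8_t *sig)
{
    if (!crypto_verify(device_public_key, image, len, sig)) {
        return false;  /* reject corrupted or untrusted images */
    }
    flash_program(APP_BASE_ADDR, image, len);
    return true;
}
```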

And That’s a Wrap

Whatever the future holds, I am certain that embedded software development will remain an engaging and challenging career. And you’ll still find me writing about the field at https://embeddedgurus.com/barr-code and http://twitter.com/embeddedbarr.


11 Responses to “Trends in Embedded Software Design”

  1. Rakshith Amarnath says:

    Hi Michael,
    This is a good article and just as I expected it to be. You’ve hit the nail on the head with the 3 trends.
    I hear a lot about model-driven development these days, but I am a little skeptical about the quality of the code it generates as of now. You have talked about 32-bit controllers, but I haven’t seen you mention multi-core. That is a hardware aspect, I agree, but my surmise is that generating efficient, parallel embedded software which utilizes the multi-core hardware will be a key challenge too. Feel free to add more details here.

  2. Bob Stout says:

    Hi Michael,

    Good editorial. I have been working with embedded systems since my TRS-80 days and wonder where things will go. I still think that for many small projects and circuit “glue”, 8-bitters will be practical…for a while. But many things just keep getting more complex, so I think you are right on regarding ARM processors, etc. It has been an interesting journey and I hope to ride it for a little while longer, God willing! We will see.

    I have enjoyed your articles as well as Jack Ganssle’s over the years. Keep it up and God speed!

    Bob

  3. Lee Riemenschneider says:

    Mentor Graphics’ BridgePoint is another excellent tool for generating C code from models, and it models at a higher level of abstraction than what I’ve seen Miro Samek propose. It is also very mature, as Shlaer and Mellor started the underlying technology over 20 years ago. Other tools in this vein are Abstract Solutions’ iUML and Pathfinder Solutions’ PathMATE. Under this technology, the generation of code is referred to as “model compilation”. Using a model compiler is analogous to using a C compiler: the UML models are to the C code as C is to assembly. Optimization of the generated code is done through compiler switches, not by modifying the model. The compilation is also wholly separate from the modeling, so changing languages (C++, Java, SystemC, …) only involves invoking the correct model compiler, again without changes to the model. I’ve been using the technology (formerly Shlaer-Mellor, now called Executable UML) for over 10 years in real-time/embedded work.
    A good, recent book on the technology is Model-Driven Development by H.S. Lahman. If nothing else, his history of software development section is worth reading. (He started with plug board programming.)

    • Miro Samek says:

      BridgePoint falls into the category of big, complex, high-ceremony tools, which also includes Rhapsody, Rational Rose RT, Statemate, StateFlow, Artisan Studio, Enterprise Architect, visualSTATE, Visual Paradigm, MagicDraw, and many others.

      Of all these choices, tools based on xUML (eXecutable UML), of which BridgePoint is the leading example, require perhaps the highest degree of ceremony. xUML tools are designed to create an illusion that your model is completely “platform and technology independent” (whatever that means in embedded systems, which by any definition of the term are specific to the task and technology). So, for example, to avoid committing to a specific programming language (such as C), xUML tools use an Abstract Action Language (AAL) to code the actions executed by state machines as well as any other “Domain Functions”. Interestingly, various xUML tools use different AALs, and BridgePoint, specifically, uses the Object Action Language (OAL), which is somewhat like C (only more crippled), but with a different syntax.

      But what if I don’t care for the ability to change my embedded code from C to, say, Java? What if I don’t wish to learn and tweak hundreds of parameters of the “model compiler” every time I want to have a little better control over the generated code? What if I care more about avoiding endless repetitions in my state machines (a.k.a. “state/transition explosion”) by using state hierarchy, which BridgePoint for some reason does not support?

      The point is, as I argue in my EmbeddedGurus blog post “Economics 101: UML in Embedded Systems”, that all too often a big, complex tool incurs high costs while still missing some important objectives of the project. With such high ongoing overhead, it is quite easy for a tool to fail to deliver a positive return on investment (ROI). When this happens, the tool and the whole modeling approach get abandoned. And this is just too bad, because it is like throwing out the baby with the bath water.

      So, while I don’t contest that big, high-ceremony modeling tools can be applicable in certain situations, I do argue that the embedded industry can also benefit from novel, simpler, more agile approaches. As I describe in my EmbeddedGurus post “Turning automatic code generation upside down”, the free QM modeling tool provides an example of a truly novel, low-risk, and low-overhead approach that fundamentally simplifies modeling and code generation for real-time embedded systems.

  4. Rich Williams says:

    Very thought-provoking trends!
    It’s interesting to think about how your trends may affect each other.
    Trend 2 could reduce trend 1. If the code is generated from models and statecharts, it matters less whether it runs on an 8-, 16-, or 32-bit processor, as long as it meets the requirements for the product. The best solution may not even be a regular processor at all, but a gate array or a processor with programmable logic.
    Trend 2 may increase the problems of trend 3. If there are security holes in the code generated by the modeling tools, it will be widely deployed across multiple platforms and very attractive to hackers.

    • Miro Samek says:

      To better assess Michael’s predictions, it’s perhaps beneficial to better understand what kind of code is generated fully automatically by the tools Michael mentions in his Trend #2. Modeling tools generally produce the so-called “housekeeping code”: the state machine structure, including states, various kinds of transitions, guard conditions, etc. However, the tools typically don’t autonomously come up with the lower-level “action code” executed by state machines, such as entry/exit actions or actions on transitions. This type of code is simply added to the model by the designer, and the tool makes sure that it is executed in the right places within the overall structure.

      So, for example, Trend #1 (32-bit processors) is not as coupled with Trend #2 (code generation) as it might first appear. This is because the benefits of 32-bit processors over 8-bitters lie mostly in the low-level “action code”.

      For example, accessing any peripheral with more than 8-bit-wide data, such as a 10-bit ADC or a 16-bit timer, on an 8-bit architecture requires multiple instructions. This can lead to very subtle and difficult-to-reproduce bugs, for example overflow from the low byte to the high byte in a 16-bit timer. (Jack Ganssle’s ESC class “Real Real-Time Systems” is entirely devoted to this type of issue.) Many 8-bitters provide special latch registers and other hardware tricks to avoid such problems. But the programmer must *know* about them, or else they won’t work (e.g., the latch works correctly only if the low byte is read before the high byte, or vice versa). Add to this segmented memory architecture, a crippled stack, and other gotchas of 8-bitters, and you can see how 32-bitters can make our life easier. All those problems are simply non-existent in the 32-bit world.
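      Here is a minimal sketch of the classic software workaround when no hardware latch is available; the 8-bit timer register addresses are hypothetical:

      ```c
      #include <stdint.h>

      /* Hypothetical 8-bit registers of a free-running 16-bit timer. */
      #define TMR_LO (*(volatile uint8_t *)0x0090u)
      #define TMR_HI (*(volatile uint8_t *)0x0091u)

      /* With no hardware latch, sample until two reads of the high byte
       * agree, proving the low byte did not roll over in between. */
      uint16_t timer_read(void)
      {
          uint8_t hi, lo;
          do {
              hi = TMR_HI;
              lo = TMR_LO;
          } while (hi != TMR_HI);
          return (uint16_t)(((uint16_t)hi << 8) | lo);
      }
      ```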

      Regarding Trend #3 (connectivity and security), I think that Trend #2 (code generation) helps, rather than hinders, security. Most security vulnerabilities arise through inconsistencies (such as buffer overruns). Automatically generated code tends to dramatically improve consistency. And if any vulnerabilities are discovered in the code generation, it’s much easier to fix them at the source (the code generator) and *consistently* re-generate the code.
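      As a small illustration, a generator can route every copy through a single bounds-checked helper like the hypothetical sketch below, rather than relying on each hand-written copy loop to get its limits right:

      ```c
      #include <stddef.h>
      #include <string.h>

      /* A generator can emit this same bounds-checked pattern at every
       * copy site, removing the one-off loops where overruns hide. */
      size_t safe_copy(char *dst, size_t dst_size,
                       const char *src, size_t src_len)
      {
          size_t n = (src_len < dst_size) ? src_len : dst_size;
          memcpy(dst, src, n);
          return n;  /* caller can detect truncation when n < src_len */
      }
      ```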

  5. Seb says:

    Hello,
    I really enjoyed reading this article. I am just 29 years old, and I suppose everyone my age without a long history of working in embedded systems design would agree it is really hard to understand the roots.

    You learn a whole lot of stuff about history in school, but no one ever tells you where electronics history comes from. So it is really a pleasure to get your insight into how one started to learn 20-30 years ago. For example, I did not learn assembly or the C language in university; I learned them from working experience. But since I can program in C now, I will never really learn assembly, since my systems are running successfully in C.

    And even I can see those future trends: I have experience with Freescale and TI’s MSP430, and both are introducing more and more graphical programming helpers.
    For the newcomers this will help big time, I guess. I won’t use it, since I now know what I am using; but in the same way you older guys could say you still program time-critical parts in plain assembly code, which I wouldn’t be able to do since I started with C programming.

    And so time keeps on going with its trends, and we will have to see if it works out well.
    Thanks for the article.

  6. Snehal Oza says:

    A great read! I too think the most profound change will be #2: that of using tools to model the logic. It’s true that today’s tools may not be powerful enough to realize all logic written in C. However, if one sees the evolution of the EDA tools ASIC/SoC designers use for multimillion-gate RTL design, with great complexities involving different clock domains, power domains, and physical constraints at high speeds and shrinking transistor geometries, it seems inevitable that embedded software development too will become tool/model driven. One of the compelling reasons for silicon designers to use such tools is the cost of a re-spin. The law of economics will eventually catch up with embedded software too. The increasing complexity of embedded programming, owing to both feature-packed systems and powerful system resources such as multi-core, coupled with the exponential growth of embedded devices, will eventually make field upgrades increasingly expensive. That will drive the evolution of xUML tools. I would also imagine the concepts of DFT and DFM will soon touch embedded software development as the NRE costs attributed to embedded software become larger and larger.

    Thanks again for a thought provoking article.

  7. Jeffrey says:

    When you talk about using automated tools to generate code, it might have been useful to note that there are already some definable characteristics that distinguish the tools currently available. The names for the categories of these tools are still fluid. At one end you’ve got what are frequently referred to as “model-based” tools, which take their input from a GUI-based diagram (for something like a control system) and convert that diagram into modules that can be linked to a small run-time to make an application that meets even the most stringent requirements of absolute determinism imposed by a standard like RTCA DO-178C; the classic example of this type of tool is SCADE. At the other extreme you’ve got “code generators” like Rhapsody, which accept as input a language like UML but are designed to be much more friendly to environments that are allowed to be both object-oriented and potentially multi-threaded. The other major difference between these extremes is that a code-generator tool has to be designed to emit human-readable source code, since this type of tool is frequently counted on to provide mostly a template for considerable hand-customizing of the output.

    I’ve been on a number of projects using the former type of tool and been pleasantly surprised at its effectiveness. The closer you go to the other extreme, however, the worse things seem to fall apart, especially when the requirement for multi-threaded code comes into consideration. Leaving aside the state of the tools for a moment: you’ve probably got more programmers trained in C++ now than in any other “modern” language, but C++ might well be the WORST such language for multi-threading. For one thing, the current specification doesn’t even provide for ANY form of automatic memory management, let alone one designed to be effective in a multi-threaded environment (and add-ons like the Boehm collector, for example, only work if your system already supports virtual memory, which is not always available in an embedded environment). Also, there don’t seem to be any effective tools for C++ to test source code to see that an application is “thread-safe” (like Chord for Java). And I’m pretty certain that a lot of folks would identify the extensive availability of pointers in C++ (thought of by its proponents as an invaluable feature) as not only a relic but an actual albatross, inasmuch as it either prevents or severely degrades the performance of any mechanism that could be devised to manage memory automatically.

    Certainly “object-by-object” memory management is not going to stand for long if we want to generate our million-line applications, so the development environment needs to evolve – and quickly! In the same way perhaps in the future we’ll have tools that will allow us to simply generate a single-thread model of our application in a language like UML and submit it to a “thread assigner” that will find the maximum likely number of independent threads and convert our input automatically into the multi-thread version. (And if that sounds a bit too much like science fiction: suppose you asked your boss tomorrow, “how many independent threads do you need this new program divided into?” and he responded with “this code’s gonna be around for at least three decades, we expect the number of available cores in the target to double each year, how many cores can you keep busy at one time?”; what would you say that would totally satisfy him?) No, we need tools like these RIGHT NOW – and better languages too! (I’m also thinking largely in terms of languages following the imperative programming paradigm, since experiments such as Ericsson’s with Erlang as a multiprocessing language, impressive as they are, nonetheless only apply in comparison with other functional languages, which is really a completely different kettle of fish.)

  8. Mark Texx says:

    Trend 2: Complexity Forces Programmers Beyond C – “It’s not going to be C++ or Ada or Java, for sure–as those have already been tried and found lacking. A new programming language is probably not the answer either, across so many CPU families and with so many other languages already tried.”
    In my world, C++ has made big inroads, with all its warts (I am not very happy about that). It is a pretty sad commentary that a newer evolution of a programming language for the embedded folks has not been developed (or embraced). I have been watching this “trend” for over 15 years, including all the UML stuff. Some of the contenders, like D, are just too complex, like C++, and do not address the special needs of embedded. In the same time period many new languages for the “IT” folks have become mainstream (Java and C#, for example).

  9. Vladimir says:

    Hi Michael,
    It has been almost 4 years since the time of this writing.
    Very interesting predictions, and it is very interesting to read them today, 4 years on.

    “…My second prediction is that the days of the C programming language’s dominance in embedded systems are numbered..”

    What do you think now about the Rust language?

    It obviously has a high complexity level, but it looks like the team around it has set a pretty strong goal of making cross-compilation easy to support for many architectures.
    And it looks like young companies selling ARM-based single-board computers provide support for the language (maybe they are trying to lure a new generation of programmers..).

    Is it a prospective contender to C in the near future or not?
    It would actually be very interesting to read your thoughts about it.
