embedded software boot camp

Is C Passing?

June 2nd, 2002 by Michael Barr

One of the very first comments I received in response to my editorial for the May 2002 edition of Embedded Systems Programming magazine (subsequently revised and posted to this blog as “Firmware Ethics”) was the following:

You’re obviously not a very good programmer and are using an archaic language. Nearly everything you said was biased toward mediocrity. Those of us Ada professionals wish that you would speak for yourself.

Though quite rudely put, the author does suggest an interesting possibility. Could the choice of programming language alone significantly improve the quality and safety of our finished products?

Of the eight suggested professional “ethics,” three (check return codes, enforce coding standards, and run lint regularly) might possibly be removed from the list if we all used a more strongly enforced language. Such a language would need to at least support exceptions, have strict syntax rules, and be strongly typed.

The C programming language fails on the first two counts. C++ adds support for exceptions but does not require programmers to use them. Among the “well-known” languages only Ada and Java meet all three requirements. Both also offer language-level support for multithreading, which enhances program portability. Yet few embedded programmers use either language.
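The difference is easy to see in a few lines of C. The snippet below (illustrative code, not from the editorial) compiles without complaint even though it ignores a possible allocation failure and silently truncates a value; a stronger language would reject both, and catching them in C is exactly what return-code checks and lint runs are for:

```c
#include <stdlib.h>
#include <string.h>

/* In C, nothing forces a caller to check for failure: malloc's return
 * value can be ignored and the program compiles cleanly anyway. */
char *duplicate(const char *s)
{
    char *copy = malloc(strlen(s) + 1);  /* may return NULL...      */
    strcpy(copy, s);                     /* ...and crash right here */
    return copy;
}

/* Weak typing: an int silently narrows to a char with no diagnostic
 * required.  On typical 8-bit-char targets, 300 becomes 44 without a
 * word from the compiler.  A lint pass, or a strongly typed language,
 * would flag both of these functions. */
char narrow(int value)
{
    char c = value;   /* implicit, lossy conversion */
    return c;
}
```

A compiler for a strongly enforced language treats both of these as errors; in C, only discipline (or a lint tool) catches them.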

A recent Embedded Systems Programming study suggests that almost three quarters of subscribers use C regularly. C++ is used by about half, with a slight erosion of C’s relative numbers in recent years. Assembly language remains almost as popular as C++, though the trend with that language is clearly toward decreasing use. Despite its strengths, the use of Ada—currently below 5%—is also on the decline. And, though its use is higher and on the increase, Java has a long way to go before it achieves acceptance within the embedded community.

Why not consider a switch? Not all of us develop systems with safety aspects; but perhaps those who do ought to take the issue of language selection seriously at the outset of new projects. How many bugs need to be preventable to make such a transition cost effective?

Many recognize C’s weaknesses and some, like the U.K.’s Motor Industry Software Reliability Association, have even laid down complex ground rules for its use in safety-critical systems. So why stick with C at all? Why should we allow past practice to dictate future language choices? Will it take a future catastrophe to get us to make the change we should today?

Don’t get me wrong. I love C. It was my first programming language and the one I use most competently. In an ideal world, though, the language decision should not be made based on our personal biases and experience. This is a decision that should be based solely on professional standards. But how can we compare languages analytically and measure the results of a transition from one language to another? It would be nice if there were easy answers.

Unfortunately, even if every one of us did switch to some “safer” language, miscommunication and logical errors would continue to be part and parcel of our discipline. To produce quality maintainable code, it would still be necessary to comment our work well, use version control, perform code inspections and regular testing, and measure real-time performance. Though compilers might be able to protect us from shooting ourselves in the foot, they’ll never stop us from being entirely too human.

Reality Bites

April 24th, 2002 by Michael Barr

The analog world is a messier place than most engineers like to admit. When we work exclusively on the digital, either in hardware or software, it’s possible to forget that the analog never quite behaves the same way twice. I was reminded of this the hard way when trying to model the behavior of a mechanical brake for a piece of precision physical therapy equipment.

A brake is nothing more than a couple of slabs of metal connected to a shaft: one fixed slab presses on the other, limiting its movement (and thereby the shaft’s). The amount of force applied during braking can be controlled fairly precisely in such a system via an electrical signal (e.g., PWM). However, the resulting torque on the shaft is very difficult to predict accurately. In the real world, friction and other physical effects count for a lot: individual brakes have large frictional differences; they also heat up as they are used, which changes their surface characteristics; and the surfaces wear with use as well.

Trying to build a software model for all this such that we could always control the amount of torque required to “slip” any brake to within the required +/- 0.5 lb-in turned out to be an elusive goal. Even if you have all the variables at your disposal, it’s unlikely the analog world will cooperate to produce a consistent result time after time. (Even when we controlled all the variables in our system, we still found variations in the slip torque up to ten times greater than our required precision.) The development of a closed loop control algorithm for the electrical signal driving the brake, based on the torque applied by the user and the shaft’s velocity at each instant, was in the end a far more rational use of engineering time.
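The closed-loop idea can be sketched in a few lines of C. The original controller's structure, gains, and units aren't described in the essay, so everything below is an invented illustration: instead of predicting torque from a model, measure it each control tick and nudge the PWM duty cycle toward the target with a simple proportional-integral step.

```c
/* A minimal PI controller sketch (names and gains illustrative).
 * Each tick, the caller reads the torque sensor, calls pi_update(),
 * and writes the returned duty cycle to the PWM hardware.  Friction,
 * heating, and wear need not be modeled; the loop corrects for them. */

typedef struct {
    double kp, ki;      /* proportional and integral gains  */
    double integral;    /* accumulated error over time      */
} pi_controller;

/* One control step: returns a PWM duty cycle clamped to [0.0, 1.0]. */
double pi_update(pi_controller *c, double target_torque,
                 double measured_torque, double dt)
{
    double error = target_torque - measured_torque;
    c->integral += error * dt;

    double duty = c->kp * error + c->ki * c->integral;

    /* Clamp to the valid PWM range. */
    if (duty < 0.0) duty = 0.0;
    if (duty > 1.0) duty = 1.0;
    return duty;
}
```

The appeal of this design is exactly the point of the essay: the controller needs no model of the brake's messy physics, only a sensor reading and a target.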

The digital is merely a model for the analog; and models are never perfect.

Keeping these crucial lessons in mind, we learn to solve such difficult problems by designing the software to adapt to the real world as it changes. Then all the designers need know at the outset is the range of permissible values and what decision to make at each.

Artificial intelligence takes this approach to its extreme. Far from enabling robots to laugh and cry, artificial intelligence is simply a system for making dynamic decisions. The system is provided a database of known facts; it learns other facts as it operates. By the application of a priori rules, decisions can be made based on the current environment.
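Stripped to its essentials, such a rule-based decision system is tiny. In the sketch below (the facts, rules, and names are invented for illustration), known facts are bits, and a priori rules are applied against whatever is currently known:

```c
#include <stddef.h>
#include <string.h>

/* Known facts, one bit each; the system can set more bits as it
 * learns about its environment at run time. */
enum { FACT_BRAKE_HOT    = 1 << 0,
       FACT_TORQUE_HIGH  = 1 << 1,
       FACT_SHAFT_MOVING = 1 << 2 };

typedef struct {
    unsigned condition;   /* facts that must all be present */
    const char *action;   /* decision to make when they are */
} rule;

/* The a priori rule database, checked in priority order. */
static const rule rules[] = {
    { FACT_BRAKE_HOT | FACT_TORQUE_HIGH, "reduce duty cycle" },
    { FACT_SHAFT_MOVING,                 "keep control loop active" },
};

/* Apply the rules to the current set of known facts. */
const char *decide(unsigned facts)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if ((facts & rules[i].condition) == rules[i].condition)
            return rules[i].action;
    return "no action";
}
```

No laughing or crying involved: the "intelligence" is a table lookup driven by whatever facts the system has accumulated so far.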

We should also recognize that when we simulate a complex system, imperfections in the model are accepted as a tradeoff. Some systems, like airplanes or helicopters, are simply too expensive or dangerous to be used during the early stages of software development. The purpose of the model, then, is to get designers through those early stages, until the software is “safe” to test on the real hardware. Such a model, therefore, need not always be 100% correct.

We can deal with differences between the digital and the analog world best by staying aware of the differences. If we close our eyes to these differences, our systems may fail to adapt correctly to the real world around them.

Making Java Real

March 30th, 2002 by Michael Barr

Embedded processors span a wide range, from tiny 4-bit microcontrollers all the way up to 64-bit current sinks. Both chips and cores find themselves employed in systems with varying degrees of real-time requirements. In the middle of these spectrums, and most typical of today’s embedded designs, are the many soft real-time systems built around 32-bit processors. These are precisely the systems that should benefit from the recent completion of the Real-Time Specification for Java (RTSJ).

Having spent too much time in my career debugging silly typos because of C’s weak syntax rules; porting pieces of code from one RTOS’s threading API to another; keeping track of compiler differences related to the size of primitive data types and the order of bitfields; and trying to learn all the ins and outs of C++’s overly complex brand of OOP, I was taken with Java right from my first encounter with the language, in 1996. When I attended a discussion about making Java suitable for real-time use about a year later, at the National Institute of Standards and Technology (NIST), I learned I wasn’t the only embedded programmer who’d taken a liking to this new language.

Of course, the reason we were all drawn to that discussion was because Java was decidedly not ready for real-time. As the Java language had moved out of its R&D origins at Sun and become popular with Web and enterprise application developers, its standard libraries had grown appreciably and its basic definition had become far too loose. For example, the built-in priority-based threading API, which should have proved useful as a basis for real-time software, had by then been dumbed down to the point that even its few scheduling rules weren’t enforced by most JVM (Java Virtual Machine) implementations.

A result of that initial NIST meeting, and several working group sessions that followed, was the consensus document “Requirements for Real-Time Extensions for the Java Platform”. This document captured the opinions of experts at more than fifty companies and organizations (as well as the Department of Defense, which was seeking alternatives to Ada) on the subject of real-time software. It ended by stating a baker’s dozen goals, plus derived requirements, for any eventual real-time Java specification.

Last November, the real-time Java spec we hoped for then became a reality. So many folks wanted to see a real-time version of Java, in fact, that the RTSJ was the very first effort (JSR-001) to come out of the Java Community Process. (Even the language’s creator, James Gosling, signed up to help out.) Led by IBM, the work of the so-called Real-Time Java Experts Group resulted in a specification that describes how a real-time Java Virtual Machine must behave, a reference implementation, and a compatibility test kit.

Among other things, the RTSJ addresses scheduling, synchronization, and garbage collection. For good measure, it adds to the Java libraries a standardized way to peek and poke physical memory (and, thereby, memory-mapped peripheral status and control registers), as well as high-resolution timers. By setting only a minimum standard in many areas, it also leaves the door open for advanced/alternative real-time schedulers to be offered by competing implementers.
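For C programmers, “peek and poke physical memory” is familiar territory: it is the job done by volatile pointers to memory-mapped registers, which the RTSJ standardizes for Java without resorting to native code. For comparison, here is the C idiom; the register name, address, and bit layout below are invented for illustration.

```c
#include <stdint.h>

#define TX_READY_BIT (1u << 5)   /* hypothetical status bit */

/* Read a memory-mapped status register through a volatile pointer so
 * the compiler never caches or reorders the access.  A typical call
 * site would pass a fixed hardware address, e.g.:
 *     uart_tx_ready((volatile uint32_t *)0x4000C018)
 * (address invented for this example). */
int uart_tx_ready(volatile uint32_t *status)
{
    return (*status & TX_READY_BIT) != 0;
}
```

The RTSJ’s raw-memory facility gives Java programs an equivalent, portable way to express this kind of register access.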

I want to take this opportunity to thank all of those who have rolled up their sleeves and brought us closer to real-time Java. Your efforts have been worthwhile and the results look promising. Keep up the good work.

Unfortunately, we’re not done yet. The effort to bring Java to a wide range of embedded systems must include developing small (50-100KB) JVM/RTOS combinations that are compatible with RTSJ. Something along the lines of a real-time version of Sun’s KVM ought to do the trick. I’m confident we’re getting close to that goal.

Whither Embedded

February 8th, 2002 by Michael Barr

Embedded computers appeared a full decade before personal computers. Yet a generation later many define embedded as “anything but” its popular successor. What’s going to happen to the embedded category in the future? Can an increasingly disparate mix of systems continue to be identified as similar in their dissimilarity?

Personal computers—whether of the “IBM-compatible,” Apple, or UNIX workstation variety—are easy enough to describe. The English word “computer” is enough to put that very image in the mind of at least a billion people; a quarter as many own one.

Embedded systems are completely different. Few other than engineers have heard the term; most people don’t even notice if their microwave or cell phone has a computer at its heart. Even the designers of such systems can’t always agree on what exactly the term means.

Personal computers are, upon consideration, just a family of similar products (a particular vertical market)—no more different from one another than cell phones. Yet the world’s billion plus cell phones are frequently lumped into the same broad catch-all category as anti-lock brakes and cruise missiles: embedded systems. Why should that be?

Why should PCs be different? Don’t PC designers have as much in common with cell phone designers as the latter do with anti-lock brake designers? Where do we draw the line: is a PDA a general-purpose computer or an embedded system? What about a cell phone (or oscilloscope) with a Java Virtual Machine and application manager?

By 1999, the percentage of new 32-bit processors destined for PC motherboards had dropped to a rounded 0%. That alone suggests the segregation is no longer appropriate. But if we accept that, doesn’t it mean that embedded, as a category, doesn’t actually exist? That there are instead a large number of vertical market segments: automotive electronics, office equipment, telecom/datacom, consumer electronics, avionics, industrial automation, personal computers, etc.; each representing a different type of computer.

Such vertical markets clearly already exist. There are trade associations, journals, conferences, and other signs of “community” in each. So what holds the embedded development community together? Why do we choose to identify ourselves as designers of embedded systems, rather than designers of avionics or cell phones? And will we continue to do so into the future?

I think it’s the skills that matter most in this. We identify ourselves as embedded designers up to the point that we feel our skills are transferable between these various markets. The company Cisco doesn’t think of itself as an “embedded system design company”—though many of the engineers who work there may consider that they do just that. The skills we learn from other embedded programmers through community resources online, in print, and in person help us keep a broad perspective. Though we only write software for microwave ovens today, we might write software for avionics systems tomorrow; the same basic skills apply.

So there is an important similarity that binds us together after all. I don’t see that changing anytime soon. If anything, I expect there will be many more people joining our community in the coming decades. And though the customers who buy our products may not consider them embedded computers, those of us who design them always will.

Open Sores

January 5th, 2002 by Michael Barr

In the past two years, increasing numbers of embedded programmers have been getting to know Linux and other open source software packages intimately. What has primarily attracted this interest is the non-existent pricing structure. But some of the initial enthusiasm—particularly for Linux—seems to be fading.

Is the use of open source software as building blocks for embedded systems just a fad?

I’ve just found a couple of interesting insights about Linux buried within a recent survey of embedded developers by Evans Data Corporation. The survey asked a number of questions focused on Linux, and the results are cross-tabulated in interesting ways. One table, titled “Perceptions of Linux’ Biggest Technical Difficulties by Degree of Community Interaction,” presents data gleaned from a question asked of those considering and already using Linux to various degrees, sorted by their experience level. Developers who hadn’t actually done anything with Linux yet (about 84% of those surveyed) perceived its biggest technical hurdles to be “availability of device drivers” and “lack of board support packages.” However, developers with hands-on Linux experience including kernel modifications (about 6%) were most concerned about the “size” of the package.

You’d think that the size of the Linux code (which is measured in megabytes), its worst-case interrupt latency and other performance characteristics, and RAM requirements (also megabytes) would be the overriding concerns for embedded programmers. And yet the big issues that I hear everyone complain about are legalities surrounding open source licensing terms and fragmentation of the widely distributed code base. In reality, these latter are not big problems for embedded programmers—as those who’ve actually investigated Linux already know. It’s the memory and performance issues that really get in our way.

As the reality begins to overtake the hype, a consultant/author friend had this to say about the evolving market for his Linux services:

Two years ago I was pumped up on embedded Linux. You said it would pass; I thought you were crazy. Well… I just stopped work on my book. I only found two Linux clients and I ran out of money. Back to VxWorks to pay the bills—and get me out of debt for the time and effort I put into Linux.

Though there are certainly companies out there embedding Linux, the market isn’t growing as rapidly as most analysts predicted it would.