Posts Tagged ‘embedded’

Real Men [Still] Program in C

Wednesday, March 29th, 2017 Michael Barr

It’s hard for me to believe, but it’s been nearly 8 years since I wrote the popular “Real Men Program in C” blog post (turned article). That post was prompted by a conversation with a couple of younger programmers who told me: “C is too hard for programmers of our generation to bother mastering.”

I ended then:

If you accept [] that C shall remain important for the foreseeable future and that embedded software is of ever-increasing importance, then you’ll begin to see trouble brewing. Although they are smart and talented computer scientists, [younger engineers] don’t know how to competently program in C. And they don’t care to learn.

But someone must write the world’s ever-increasing quantity of embedded software. New languages could help, but will never be retrofitted onto all the decades-old CPU architectures we’ll continue to use for decades to come. As turnover is inevitable, our field needs to attract a younger generation of C programmers.

What is the solution? What will happen if these trends continue to diverge?

Now that a substantial period of years has elapsed, I’d like to revisit two key phrases from that quote: Is C still important? and Is there a younger generation of C programmers? There’s no obvious sign of any popular “new language” nor of any diminution of embedded systems.

Is C Still Important?

The original post used survey data from 1997-2009 to establish that C was (through that entire era) the dominant programming language for embedded systems. The “primary” programming languages used in the final year were C (62%), C++ (24%), and Assembly (5%).

As the figure below shows (data from Barr Group’s 2017 Embedded Systems Safety & Security Survey), C has now consolidated its dominance as the lingua franca of embedded programmers: now at 71%. Use of C++ remains at about the same level (22%) while use of assembly as the primary language has basically disappeared.

Primary Programming Language

Conclusion: Obviously, C is still important in embedded systems.

Is There a Younger Generation of C Programmers?

The next figure shows the years of paid, professional experience of embedded system designers (data from the same source). Unfortunately, I don’t have data from that older time period about the average ages of embedded programmers. But what looks potentially telling is that the average years of experience of American designers (two decades) is much higher than the averages in Europe (14 years) and Asia (11 years). I dug into the data on the U.S. engineers a bit and found that their experience curve was essentially flat, with no larger cohort of younger engineers like the one visible in the worldwide data.

Years of Experience

Conclusion: The jury is still out. It’s possible there is already a missing younger generation in the U.S., but there also seems to be some youth coming up into our field in Asia at least.

It should be really interesting to see how this all plays out in the next 8 years. I’m putting a tickler in my to-do list to blog about this topic again then!

Footnote: Same as last time, I’m not excluding women. There are plenty of great embedded systems designers who are women–and they mostly program in C too, I presume.

Government-Sponsored Hacking of Embedded Systems

Wednesday, March 11th, 2015 Michael Barr

Everywhere you look these days, it is readily apparent that embedded systems of all types are under attack by hackers.

In just one example from the last few weeks, researchers at Kaspersky Lab (a Moscow-headquartered maker of anti-virus and other software security products) published a report documenting a specific pernicious and malicious attack against “virtually all hard drive firmware”. The Kaspersky researchers deemed this particular data security attack the “most advanced hacking operation ever uncovered” and confirmed that at least hundreds of computers, in dozens of countries, have already been infected.

Here are the technical facts:

  • Disk drives contain a storage medium (historically one or more magnetic spinning platters; but increasingly solid state memory chips) upon which the user stores data that is at least partly private information;
  • Disk drives are themselves embedded systems powered by firmware (mostly written in C and assembly, sans formal operating system);
  • Disk drive firmware (stored in non-volatile memory distinct from the primary storage medium) can be reflashed to upgrade it;
  • The malware at issue comprises replacement firmware images for all of the major disk drive brands (e.g., Seagate, Western Digital) that can perform malicious functions such as keeping copies of the user’s private data in a secret partition for later retrieval;
  • Because the malicious code resides in the firmware, existing anti-virus software cannot detect it (even when it scans the so-called Master Boot Record); and
  • Even a user who erases and reformats his drive will not remove the malware.

The Kaspersky researchers have linked this hack to a number of other sophisticated hacks over the past 14 years, including the Stuxnet worm attack on embedded systems within the Iranian nuclear fuel processing infrastructure. Credited to the so-called “Equation Group,” these attacks are believed to be the work of a single organization: the NSA. One reason: a similar disk drive firmware hack code-named IRATEMONK is described in an internal NSA document made public by Edward Snowden.

I bring this hack to your attention because it is indicative of a broader class of attacks that embedded systems designers have not previously had to worry about. In a nutshell:

Hackers gonna hack. Government-sponsored hackers with unlimited black budgets gonna hack the shit out of everything.

This is a sea change. Threat modeling for embedded systems most often identifies a range of potential attacker groups, such as: hobbyist hackers (who only hack for fun, and don’t have many resources), academic researchers (who hack for the headlines, but don’t care if the hacks are practical), and company competitors (who may have lots of resources, but also need to operate under various legal systems).

For example, through my work history I happen to be an expert on satellite TV hacking technology. In that field, a hierarchy of hackers emerged in which organized crime syndicates had the best resources for reverse engineering and achieved practical hacks based on academic research; the crime syndicates initially kept new hacks tightly controlled for use in for-profit schemes; and most hacks eventually trickled down to the hobbyist level.

For those embedded systems designers making disk drives and other consumer devices, security has not historically been a consideration at all. Of course, well-resourced competitors sometimes reverse engineered even consumer products (to copy the intellectual property inside), but patent and copyright laws offered other avenues for reducing and addressing that threat.

But we no longer live in a world where we can ignore the security threat posed by the state-sponsored hackers, who have effectively unlimited resources and a new set of motivations. Consider what any interested agent of the government could learn about your private business via a hack of any microphone-(and/or camera-)equipped device in your office (or bedroom).

Some embedded systems with microphones are just begging to be easily hacked. For example, the designers of new smart TVs with voice control capability are already sending all of the sounds in the room (unencrypted) over the Internet. Or consider the phone on your office desk. Hacks of at least some VOIP phones are known to exist and allow for remotely listening to everything you say.

Of course, the state-sponsored hacking threat is not only about microphones and cameras. Consider a printer firmware hack that remotely prints or archives a copy of everything you ever printed. Or a motion/sleep tracker or smart utility meter that lets burglars detect when you are home or away. Broadband routers are a particularly vulnerable point of most small office/home office intranets, and one that is strategically well located for sniffing on and interfering with devices deeper in the network.

How could your product be used to creatively spy on or attack its users?

Do we have an ethical duty or even obligation, as professionals, to protect the users of our products from state-sponsored hacking? Or should we simply ignore such threats, figuring this is just a fight between our government and “bad guys”? “I’m not a bad guy myself,” you might (like to) think. Should the current level of repressiveness of the country the user is in while using our product matter?

I personally think there’s a lot more at stake if we collectively ignore this threat, and refer you to the following to understand why:

Imagine what Edward Snowden could have accomplished if he had a different agenda. Always remember, too, that the hacks the NSA has already developed are now–even if they weren’t before–known to repressive governments. Furthermore, they are potentially in the hands of jilted lovers and blackmailers everywhere. What if someone hacks into an embedded system used by a powerful U.S. Senator or Governor; or by a candidate for President (one you support, or one who wants to rein in the electronic security state); or by a member of your family?

P.S. THIS JUST IN: The CIA recently hired a major defense contractor to develop a variant of an open-source compiler that would secretly insert backdoors into all of the programs it compiled. Is it the compiler you use?

First Impressions of Google Glass 2.0

Tuesday, April 22nd, 2014 Michael Barr

Last week I took advantage of Google’s special 1-day-only buying opportunity to purchase an “Explorer” edition of Google Glass 2.0. My package arrived over the weekend and I finally found a few hours this morning for the unboxing and first use.

Let me begin by saying that the current price is quite high and that the buying process itself is cumbersome. To buy Google Glass you must shell out $1,500 (plus taxes and any accessories) and you can only pay this entrance fee via a Google Wallet account. I didn’t have a Google Wallet account set up until last week, and various problems associated with setting up Wallet and linking it to my credit card had prevented me from using an earlier Explorer email invite. Google absolutely needs to make Glass cheaper and easier to purchase if they are to have any hope of making this a mainstream product.

Upon opening the box and donning Glass, I was initially at a loss for how to actually use the thing. There were instructions for turning it on in the box, but I had to find and watch YouTube videos on my own (like this one) to grok the touchpad controls “menu”/UI paradigm. I also quickly came to learn that Glass is only useable when you have at least all of the following: (a) a Google+ account; (b) an Android or iOS smartphone; (c) the My Glass app installed on said smartphone; and (d) a Bluetooth-tethered or WiFi connection to the Internet. (Well, and also the USB charging cable and a power supply, given the very short battery life I’ve experienced so far.)

At the present time there are very few apps available. Here’s a master list of what is currently just 44 “Glassware” apps. And neither the built-in capabilities nor those apps strike me as the kind of must-have feature that’s likely to drive widespread adoption of Glass as a mainstream computing platform with a vibrant application developer community.

I’ll finish out the negatives by saying that the current form factor makes you look like an uber-geek (when you are not too busy being physically attacked for some assumed offense) and that the touchpad area on the right side of your head gets surprisingly hot during normal use.

Now for the few positives. First, the location of the heads-up display just above your line of sight feels right for an always-available computer. As someone who walks for miles every day for exercise, I would so love to replace my handheld smartphone form factor with a heads-up display like this. So it’s too bad that browsing the web and reading email aren’t viable on Glass’ meager 640×360 display. I think there are probably dozens of hands-on jobs whose workers would be made more productive by a screen (and the right application) in this form factor. I also think the heads-up wearable form factor feels like a great place for quick reference information, such as maps/navigation, pop-up weather alerts, etc., while the wearer is otherwise busy walking, biking, or even driving.

A second positive is that the voice recognition is surprisingly good. Dictation, for example, seems to work far better on Glass so far than it ever has on my iPhone 5. You can’t always talk to Glass (hint: generally only when the words “ok glass” are on screen or there is a microphone icon), but when you do talk Glass seems to listen quite well. Good dictation is key, of course, because there is no obvious way to edit the things you’ve drafted if they are misspelled or improperly formatted; you either hit send or start over. And the only application launcher is your voice, via the “ok glass” home screen/clock.

Giving Glass instructions such as “okay glass, listen to ” is an extremely intuitive user interface. And so far that music feature combined with a $10/month “All Access” Google Play music account seems like the only thing I might like to use everyday. I also like the idea of dictating SMS and email messages or taking and sharing photos while doing other things with my hands, though the SMS feature doesn’t work when Glass is paired with an iPhone and the only multi-person sharing option seems to be via Google+. So far the SMS, email, and outgoing call features are not impressing me enough to see me using them regularly or even to convince me to entrust Google with access to my full iPhone contacts database. And searching through a lot of contacts appears to be a real chore too, unless they match with voice recognition on the first try.

In terms of applications that seem interesting, Evernote seems a reasonable near-term substitute for the lack of a To Do list interface to Toodledo or RememberTheMilk. And I sure do like the idea of receiving pop-up extreme weather alerts based on my location. Some of the simplistic sample games (such as balancing blocks on your head) are fun and I could see this form factor perhaps changing multi-player gaming in a few interesting ways. But that’s about it for the interesting apps so far.

To summarize my thinking, Google Glass so far makes me think about the Apple Newton. Everyone knew that Apple was on to something with the Newton MessagePad, way back circa 1993. But the Newton was also too far ahead of its time in terms of cost and size relative to practical usefulness. Eventually Apple came back and did the “communicator” platform right more than a decade later with the iPhone, which it has continued to improve even more dramatically in the half decade since. I suspect hindsight will treat Google Glass the same way: as an agreed precursor of the heads-up wearables to come, but a near-term flop. If it does fail, let it be known that Forbes says Google dug its own grave by putting it out there in too many hands too soon.

A Look Back at the Audi 5000 and Unintended Acceleration

Friday, March 14th, 2014 Michael Barr

I was in high school in the late 1980’s when NHTSA (pronounced “nit-suh”), Transport Canada, and others studied complaints of unintended acceleration in Audi 5000 vehicles. Looking back on the Audi issues, and in light of my own recent role as an expert investigating complaints of unintended acceleration in Toyota vehicles, there appears to be a fundamental contradiction between the way that Audi’s problems are remembered now and what NHTSA had to say officially at the time.

Here’s an example from a pretty typical remembrance of what happened, from a 2007 article written “in defense of Audi”:

In 1989, after three years of study[], the National Highway Traffic Safety Administration (NHTSA) issued their report on Audi’s “sudden unintended acceleration problem.” NHTSA’s findings fully exonerated Audi… The report concluded that the Audi’s pedal placement was different enough from American cars’ normal set-up (closer to each other) to cause some drivers to mistakenly press the gas instead of the brake.

And here’s what NHTSA’s official Audi 5000 report actually concluded:

Some versions of Audi idle-stabilization system were prone to defects which resulted in excessive idle speeds and brief unanticipated accelerations of up to 0.3g. These accelerations could not be the sole cause of [long-duration unintended acceleration incidents], but might have triggered some [of the long-duration incidents] by startling the driver.

Contrary to the modern article, NHTSA’s original report most certainly did not “fully exonerate” Audi. Similarly, though there were differences in pedal configuration compared to other cars, NHTSA seems to have concluded that the first thing that happened was a sudden unexpected surge of engine power that startled drivers and that the pedal misapplication sometimes followed that.

This sequence of, first, a throttle malfunction and, then, pedal confusion was summarized in a 2012 review study by NHTSA:

Once an unintended acceleration had begun, in the Audi 5000, due to a failure in the idle-stabilizer system (producing an initial acceleration of 0.3g), pedal misapplication resulting from panic, confusion, or unfamiliarity with the Audi 5000 contributed to the severity of the incident.

The 1989 NHTSA report elaborates on the design of the throttle, which included an “idle-stabilization system” and documents that multiple “intermittent malfunctions of the electronic control unit were observed and recorded”. In a nutshell, the Audi 5000 had a main mechanical throttle control, wherein the gas pedal pushed and pulled on the throttle valve with a cable, as well as an electronic throttle control idle adjustment.

It is unclear whether the “electronic control unit” mentioned by NHTSA was purely electronic or if it also had embedded software. (ECU, in modern lingo, includes firmware.) It is also unclear what percentage of the Audi 5000 unintended acceleration complaints were short-duration events vs. long-duration events. If there was software in the ECU and short-duration events were more common, well that would lead to some interesting questions. Did NHTSA and the public learn all of the right lessons from the Audi 5000 troubles?

Lethal Software Defects: Patriot Missile Failure

Thursday, March 13th, 2014 Michael Barr

During the Gulf War, twenty-eight U.S. soldiers were killed and almost one hundred others were wounded when a nearby Patriot missile defense system failed to properly track a Scud missile launched from Iraq. The cause of the failure was later found to be a programming error in the computer embedded in the Patriot’s weapons control system.

On February 25, 1991, Iraq successfully launched a Scud missile that hit a U.S. Army barracks near Dhahran, Saudi Arabia. The 28 deaths from that one Scud constituted the single deadliest incident of the war for American soldiers. Interestingly, the “Dhahran Scud”, which killed more people than all 70 or so of the earlier Scud launches combined, was apparently the last Scud fired in the Gulf War.

Unfortunately, the “Dhahran Scud” succeeded where the other Scuds failed because of a defect in the software embedded in the Patriot missile defense system. This same bug was latent in all of the Patriots deployed in the region. However, the presence of the bug was masked by the fact that a particular Patriot weapons control computer had to be continuously running for several days before the bug could cause the hazard of a failure to track a Scud.

There is a nice concise write-up of the problem, with a prefatory background on how the Patriot system is designed to work, in the official post-failure analysis report by the U.S. General Accounting Office (GAO IMTEC-92-26) entitled “Patriot Missile Defense: Software Problem Led to System Failure at Dhahran, Saudi Arabia”.

The hindsight explanation is that:

a software problem “led to an inaccurate tracking calculation that became worse the longer the system operated” and states that “at the time of the incident, the [Patriot] had been operating continuously for over 100 hours” by which time “the inaccuracy was serious enough to cause the system to look in the wrong place [in the radar data] for the incoming Scud.”

Detailed Analysis

The GAO report does not go into the technical details of the specific programming error. However, I believe we can infer the following based on the information and data that is provided about the incident and about the defect.

A first important observation is that the CPU was a 24-bit integer-only CPU “based on a 1970s design”. Befitting the time, the code was written in assembly language.

A second important observation is that real numbers (i.e., those with fractions) were apparently manipulated as a whole number in binary in one 24-bit register plus a binary fraction in a second 24-bit register. In this fixed-point numerical system, the real number 3.25 would be represented as binary 000000000000000000000011:010000000000000000000000, in which the : is my marker for the separator between the whole and fractional portions of the real number. The first half of that binary represents the whole number 3 (i.e., bits are set for 2 and 1, the sum of which is 3). The second portion represents the fraction 0.25 (i.e., 0/2 + 1/4 + 0/8 + …).
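As an illustration only (these helper functions are my own sketch, not the Patriot’s actual code), the two-register scheme can be expressed in Python:

```python
FRAC_BITS = 24  # one 24-bit register holds the binary fraction

def encode(x):
    """Split a non-negative real number into (whole, fraction) register values."""
    whole = int(x)                                # whole-number register
    frac = int((x - whole) * (1 << FRAC_BITS))    # fraction chopped to 24 bits
    return whole, frac

def decode(whole, frac):
    """Recombine the two register values into a real number."""
    return whole + frac / (1 << FRAC_BITS)

print(encode(3.25))         # (3, 4194304); 4194304 is the 010000... pattern above
print(decode(3, 4194304))   # 3.25
```

For a value like 3.25 the round trip is exact, because 0.25 terminates in binary; the trouble comes from fractions that do not.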

A third important observation is that system [up]time was “kept continuously by the system’s internal clock in tenths of seconds [] expressed as an integer.” This is important because the fraction 1/10 cannot be represented exactly in a 24-bit binary fraction: its binary expansion (0.000110011001100…) repeats forever and never terminates.

I understand that the missile-interception algorithm that did not work that day is approximately as follows:

  1. Consider each object that might be a Scud missile in the 3-D radar sweep data.
  2. For each, calculate an expected next location at the known speed of a Scud (+/- an acceptable window).
  3. Check the radar sweep data again at a future time to see if the object is in the location a Scud would be.
  4. If it is a Scud, engage and fire missiles.
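The steps above might be sketched as follows; the function, the speed constant, and the acceptance window are illustrative assumptions on my part, not the actual Patriot algorithm:

```python
SCUD_SPEED = 1676.0   # m/s, approximate Scud velocity (assumed for illustration)
GATE = 100.0          # acceptance window in meters (illustrative value)

def is_scud(pos1, t1, pos2, t2):
    """Confirm a candidate track: is the second radar return where an
    object moving at Scud speed would have traveled since the first?"""
    expected = pos1 + SCUD_SPEED * (t2 - t1)
    return abs(pos2 - expected) <= GATE

# With accurate timestamps, a real Scud falls inside the gate...
print(is_scud(0.0, 0.0, 838.0, 0.5))            # True
# ...but a 0.34 s timestamp error shifts the expected position ~575 m,
# far outside the gate, so the real Scud is rejected as a non-target.
print(is_scud(0.0, 0.0, 838.0, 0.5 + 0.3433))   # False
```

That second case is exactly the failure mode the GAO describes: the system looking “in the wrong place” in the radar data for the incoming Scud.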

Furthermore, the GAO reports that the problem was an accumulating linear error of .003433 seconds per 1 hour of uptime that affected every deployed Patriot equally. This was not a clock-specific or system-specific issue.
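That GAO figure is consistent with 1/10 having been chopped (truncated, not rounded) to 23 binary fraction bits. The 23-bit count is my assumption, but it reproduces the reported drift exactly, as this check shows:

```python
from fractions import Fraction

FRAC_BITS = 23          # assumed number of fraction bits (see lead-in)
SCALE = 1 << FRAC_BITS

# 1/10 truncated to FRAC_BITS binary fraction bits, computed exactly
chopped = Fraction(int(Fraction(1, 10) * SCALE), SCALE)
err_per_tick = Fraction(1, 10) - chopped   # error per 0.1 s clock tick

ticks_per_hour = 10 * 3600
print(float(err_per_tick))                         # ~9.54e-08 s lost per tick
print(float(err_per_tick * ticks_per_hour))        # ~0.003433 s of drift per hour
print(float(err_per_tick * ticks_per_hour * 100))  # ~0.3433 s after 100 hours
```

A third of a second of timing error at Scud closing speeds of roughly 1.7 km/s corresponds to more than half a kilometer of position error.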

Given all of the above, I reason that the problem was that one part of the Scud-interception calculations utilized time in its decimal representation while another used the fixed-point binary representation. When uptime was still low, the two representations agreed closely enough that targets were found where they were expected and the latent software bug stayed hidden.

Of course, all of the above detail is specific to the Patriot hardware and software design that was in use at the time of the Gulf War. As the Patriot system has since been modernized by Raytheon, many details like these will have likely changed.

According to the GAO report:

Army officials [] believed the Israeli experience was atypical [and that] other Patriot users were not running their systems for 8 or more hours at a time. However, after analyzing the Israeli data and confirming some loss in targeting accuracy, the officials made a software change which compensated for the inaccurate time calculation. This change allowed for extended run times and was included in the modified software version that was released [9 days before the Dhahran Scud incident]. However, Army officials did not use the Israeli data to determine how long the Patriot could operate before the inaccurate time calculation would render the system ineffective.

Four days before the deadly Scud attack, the “Patriot Project Office [in Huntsville, Alabama] sent a message to Patriot users stating that very long run times could cause [targeting problems].” That was about the time of the last reboot of the Patriot missile that failed.

Note that if time samples were all in the decimal timebase or all in the binary timebase then the two compared radar samples would always be close in time and the error would not accumulate with uptime. And that is the likely fix that was implemented.
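A short sketch (again assuming 1/10 chopped to 23 fraction bits) shows why: the absolute clock error grows without bound, but it cancels when both timestamps come from the same conversion, even an inaccurate one:

```python
FRAC_BITS = 23
CHOPPED_TENTH = int(0.1 * (1 << FRAC_BITS)) / (1 << FRAC_BITS)  # ~0.0999999046

def time_chopped(ticks):
    """Uptime in seconds via the flawed fixed-point conversion."""
    return ticks * CHOPPED_TENTH

def time_exact(ticks):
    """Uptime in seconds via an accurate conversion."""
    return ticks * 0.1

ticks = 100 * 3600 * 10   # integer tick count after 100 hours of uptime
dt = 5                    # two radar sweeps taken 0.5 s apart

# Mixed timebases: ~0.34 s of accumulated error leaks into the interval.
mixed = time_exact(ticks + dt) - time_chopped(ticks)
print(mixed)        # ~0.843 s instead of 0.5 s

# Consistent timebase: the accumulated error cancels in the subtraction.
consistent = time_chopped(ticks + dt) - time_chopped(ticks)
print(consistent)   # ~0.500 s
```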

Firmware Updates

Here are a few tangentially interesting tidbits from the GAO report:

  • “During the [Gulf War] the Patriot’s software was modified six times.”
  • “Patriots had to be shut down for at least 1 to 2 hours to install each software modification.”
  • “Rebooting[] takes about 60 to 90 seconds” and sets the “time back to zero.”
  • The “[updated] software, which compensated for the inaccurate time calculation, arrived in Dhahran” the day after the deadly attack.

Public Statements

In hindsight, there are some noteworthy quotes from the 1991 news articles initially reporting on this incident. For example,

Brig. Gen. Neal, United States Command (2 days after):

The Scud apparently fragmented above the atmosphere, then tumbled downward. Its warhead blasted an eight-foot-wide crater into the center of the building, which is three miles from a major United States air base … Our investigation looks like this missile broke apart in flight. On this particular missile it wasn’t in the parameters of where it could be attacked.

U.S. Army Col. Garnett, Patriot Program Director (4 months after):

The incident was an anomaly that never showed up in thousands of hours of testing and involved an unforeseen combination of dozens of variables — including the Scud’s speed, altitude and trajectory.

Importantly, the GAO report states that, a few weeks before the Dhahran Scud, Israeli soldiers reported to the U.S. Army that their Patriot had a noticeable “loss in accuracy after … 8 consecutive hours.” Thus, apparently, all of this “thousands of hours” of testing involved frequent reboots. (I can envision the test documentation now: “Step 1: Power up the Patriot. Step 2: Check that everything is perfect. Step 3: Fire the dummy target.”) The GAO reported that “an endurance test has [since] been conducted to ensure that extended run times do not cause other system difficulties.”

Note too that the quote about “thousands of hours of testing” was also misleading in that the Patriot software was, also according to the GAO report, hurriedly modified in the months leading up to the Gulf War to track Scud missiles going about 2.5 times faster than the aircraft and cruise missiles it was originally designed to intercept. Improvements to the Scud-specific tracking/engagement algorithms were apparently even being made during the Gulf War.

These specific theories and statements about what went wrong, or about why the problem must have been outside the Patriot itself, were fully discredited once the source code was examined. When computer systems may have misbehaved in a lethal manner, it is important to remember that newspaper quotes from those on the side of the designers are not scientific evidence. Indeed, the humans who offer those quotes often have conscious and/or subconscious motives and blind spots that incline them toward false overconfidence in the computer systems. A thorough source code review takes time but is the scientific way to go about finding the root cause.

As a New York Times editorial dated 4 months after the incident explained:

The Pentagon initially explained that Patriot batteries had withheld their fire in the belief that Dhahran’s deadly Scud had broken up in midflight. Only now does the truth about the tragedy begin to emerge: A computer software glitch shut down the Patriot’s radar system, blinding Dhahran’s anti-missile batteries. It’s not clear why, even after Army investigators had reached this conclusion, the Pentagon perpetuated its fiction

At least in this case, it was only a few months before the U.S. Army admitted the truth about what happened, both to itself and to the public. That is to the U.S. Army’s credit. Other actors in other lethal software defect cases have been far more stubborn about admitting what has later become clear about their systems.