Posts Tagged ‘ethics’

Government-Sponsored Hacking of Embedded Systems

Wednesday, March 11th, 2015 Michael Barr

Everywhere you look these days, it is readily apparent that embedded systems of all types are under attack by hackers.

In just one example from the last few weeks, researchers at Kaspersky Lab (a Moscow-headquartered maker of anti-virus and other software security products) published a report documenting a specific pernicious and malicious attack against “virtually all hard drive firmware”. The Kaspersky researchers deemed this particular data security attack the “most advanced hacking operation ever uncovered” and confirmed that at least hundreds of computers, in dozens of countries, have already been infected.

Here are the technical facts:

  • Disk drives contain a storage medium (historically one or more magnetic spinning platters; but increasingly solid state memory chips) upon which the user stores data that is at least partly private information;
  • Disk drives are themselves embedded systems powered by firmware (mostly written in C and assembly, sans formal operating system);
  • Disk drive firmware (stored in non-volatile memory distinct from the primary storage medium) can be reflashed to upgrade it;
  • The malware at issue comprises replacement firmware images for all of the major disk drive brands (e.g., Seagate, Western Digital) that can perform malicious functions such as keeping copies of the user’s private data in a secret partition for later retrieval;
  • Because the malicious code resides in the firmware, existing anti-virus software cannot detect it (even when such software scans the so-called Master Boot Record); and
  • Even a user who erases and reformats his drive will not remove the malware.

The Kaspersky researchers have linked this hack to a number of other sophisticated hacks over the past 14 years, including the Stuxnet worm attack on embedded systems within the Iranian nuclear fuel processing infrastructure. Credited to the so-called “Equation Group,” these attacks are believed to be the work of a single group: the NSA. One reason: a similar disk drive firmware hack code-named IRATEMONK is described in an internal NSA document made public by Edward Snowden.

I bring this hack to your attention because it is indicative of a broader class of attacks that embedded systems designers have not previously had to worry about. In a nutshell:

Hackers gonna hack. Government-sponsored hackers with unlimited black budgets gonna hack the shit out of everything.

This is a sea change. Threat modeling for embedded systems most often identifies a range of potential attacker groups, such as: hobbyist hackers (who only hack for fun, and don’t have many resources), academic researchers (who hack for the headlines, but don’t care if the hacks are practical), and company competitors (who may have lots of resources, but also need to operate under various legal systems).

For example, through my work history I happen to be an expert on satellite TV hacking technology. In that field, a hierarchy of hackers emerged in which organized crime syndicates had the best resources for reverse engineering and achieved practical hacks based on academic research; the crime syndicates initially kept new hacks tightly controlled in for-profit schemes; and most hacks eventually trickled down to the hobbyist level.

For those embedded systems designers making disk drives and other consumer devices, security has not historically been a consideration at all. Of course, well-resourced competitors sometimes reverse engineered even consumer products (to copy the intellectual property inside), but patent and copyright laws offered other avenues for reducing and addressing that threat.

But we no longer live in a world where we can ignore the security threat posed by state-sponsored hackers, who have effectively unlimited resources and a new set of motivations. Consider what any interested agent of the government could learn about your private business via a hack of any microphone-(and/or camera-)equipped device in your office (or bedroom).

Some embedded systems with microphones are practically begging to be hacked. For example, the designers of new smart TVs with voice control capability are already sending all of the sounds in the room (unencrypted) over the Internet. Or consider the phone on your office desk: hacks of at least some VoIP phones are known to exist and allow an attacker to listen remotely to everything you say.

Of course, the state-sponsored hacking threat is not only about microphones and cameras. Consider a printer firmware hack that remotely prints or archives a copy of everything you ever printed. Or a motion/sleep tracker or smart utility meter that lets burglars detect when you are home or away. Broadband routers are a particularly vulnerable point of most small office/home office intranets, and one that is strategically well located for snooping on and interfering with devices deeper in the network.

How could your product be used to creatively spy on or attack its users?

Do we have an ethical duty or even obligation, as professionals, to protect the users of our products from state-sponsored hacking? Or should we simply ignore such threats, figuring this is just a fight between our government and “bad guys”? “I’m not a bad guy myself,” you might (like to) think. Should the current level of repressiveness of the country the user is in while using our product matter?

I personally think there’s a lot more at stake if we collectively ignore this threat, and refer you to the following to understand why:

Imagine what Edward Snowden could have accomplished if he had a different agenda. Always remember, too, that the hacks the NSA has already developed are now–even if they weren’t before–known to repressive governments. Furthermore, they are potentially in the hands of jilted lovers and blackmailers everywhere. What if someone hacks into an embedded system used by a powerful U.S. Senator or Governor; by a candidate for President (one you support, or one who wants to rein in the electronic security state); or by a member of your family?

P.S. THIS JUST IN: The CIA recently hired a major defense contractor to develop a variant of an open-source compiler that would secretly insert backdoors into all of the programs it compiled. Is it the compiler you use?

Apple’s #gotofail SSL Security Bug was Easily Preventable

Monday, March 3rd, 2014 Michael Barr

If programmers at Apple had simply followed a couple of the rules in the Embedded C Coding Standard, they could have prevented the very serious gotofail SSL bug from entering the iOS and OS X operating systems. Here’s a look at the programming mistakes involved and the easy-to-follow coding standard rules that could easily have prevented the bug.

In case you haven’t been following the computer security news, Apple last week posted security updates for users of devices running iOS 6, iOS 7, and OS X 10.9 (Mavericks). This was prompted by a critical bug in Apple’s implementation of the SSL/TLS protocol, which has apparently been lurking for over a year.

In a nutshell, the bug is that a bunch of important C source code lines containing digital signature certificate checks were never being run because an extraneous goto fail; statement in a portion of the code was always forcing a jump. This is a bug that put millions of people around the world at risk for man-in-the-middle attacks on their apparently-secure encrypted connections. Moreover, Apple should be embarrassed that this particular bug also represents a clear failure of software process at Apple.

There is debate about whether this may have been a clever insider-enabled security attack against all of Apple’s users, e.g., by a certain government agency. However, whether it was an innocent mistake or an attack designed to look like an innocent mistake, Apple could have and should have prevented this error by writing the relevant portion of code in a simple manner that would have always been more reliable as well as more secure. And thus, in my opinion, Apple was clearly negligent.

Here are the lines of code at issue (from Apple’s open source code server); note the duplicated goto fail; statement:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, ...)
{
    OSStatus  err;
    ...

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...

fail:
    ...
    return err;
}
The code above violates at least two rules from Barr Group’s Embedded C Coding Standard book. Importantly, had Apple followed the first of these rules in particular, this dangerous bug would almost certainly have been prevented from ever getting into even a single device.

Rule 1.3.a

Braces shall always surround the blocks of code (a.k.a., compound statements), following if, else, switch, while, do, and for statements; single statements and empty statements following these keywords shall also always be surrounded by braces.

Had Apple not violated this always-braces rule in the SSL/TLS code above, each if test would have been followed by exactly one brace-enclosed block; a stray second goto would then have been either a harmless statement inside that block or a very odd-looking, hard-to-miss block of its own. Either way, this bug was preventable by following this rule and performing code review.
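As an illustrative sketch (not Apple’s actual patched code), here is the same excerpt rewritten to comply with Rule 1.3.a; with braces mandatory, a duplicated goto either remains harmlessly conditional inside the block or demands a conspicuous block of its own:

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    {
        goto fail;
    }
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    {
        goto fail;
        goto fail;    /* a duplicated goto here stays inside the braces and remains
                         conditional (merely unreachable), rather than silently
                         bypassing the checks below */
    }
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    {
        goto fail;
    }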

Rule 1.7.c

The goto keyword shall not be used.

Had Apple not violated this never-goto rule in the SSL/TLS code above, there would not have been a double goto fail; line to create the unreachable-code situation. And if removing goto had forced each error check to be handled with more than a single statement, that alone would have pushed the programmers toward using curly braces.
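For illustration only (a sketch, not Apple’s code), a goto-free structure for the same hash steps might chain each operation behind a successful previous one, so that no single stray line can silently skip the checks that follow:

    OSStatus err = SSLHashSHA1.update(&hashCtx, &serverRandom);

    if (err == 0)
    {
        err = SSLHashSHA1.update(&hashCtx, &signedParams);
    }
    if (err == 0)
    {
        err = SSLHashSHA1.final(&hashCtx, &hashOut);
    }
    /* ... the signature check itself would likewise be guarded by (err == 0) ... */

    return err;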

On a final note, Apple should be asking its engineers and engineering managers about the failures of process (at several layers) that must have occurred for this bug to have reached end users’ devices. Specifically:

  • Where was the peer code review that should have spotted this, or how did the reviewers fail to spot this?
  • Why wasn’t a coding standard rule adopted to make such bugs easier to spot during peer code reviews?
  • Why wasn’t a static analysis tool, such as Klocwork, used, or how did it fail to detect the unreachable code that followed? Or was it users of such a tool, at Apple, who failed to act?
  • Where was the regression test case for a bad SSL certificate signature, or how did that test fail?

Dangerous bugs, like this one from Apple, often result from a combination of accumulated errors in the face of flawed software development processes. Too few programmers recognize that many bugs can be kept entirely out of a system simply by adopting (and rigorously enforcing) a coding standard that is designed to keep bugs out.

Security Risks of Embedded Systems

Wednesday, January 15th, 2014 Michael Barr

In the words of security guru and blogger Bruce Schneier, “The Internet of Things is Wildly Insecure — and Often Unpatchable”. As Bruce describes the current state of affairs in a recent Wired magazine article:

We’re at a crisis point now with regard to the security of embedded systems, where computing is embedded into the hardware itself — as with the Internet of Things. These embedded computers are riddled with vulnerabilities, and there’s no good way to patch them.

It’s not unlike what happened in the mid-1990s, when the insecurity of personal computers was reaching crisis levels. Software and operating systems were riddled with security vulnerabilities, and there was no good way to patch them. Companies were trying to keep vulnerabilities secret, and not releasing security updates quickly. And when updates were released, it was hard — if not impossible — to get users to install them. This has changed over the past twenty years, due to a combination of full disclosure — publishing vulnerabilities to force companies to issue patches quicker — and automatic updates: automating the process of installing updates on users’ computers. The results aren’t perfect, but they’re much better than ever before.

But this time the problem is much worse, because the world is different: All of these devices are connected to the Internet. The computers in our routers and modems are much more powerful than the PCs of the mid-1990s, and the Internet of Things will put computers into all sorts of consumer devices. The industries producing these devices are even less capable of fixing the problem than the PC and software industries were.

If we don’t solve this soon, we’re in for a security disaster as hackers figure out that it’s easier to hack routers than computers. At a recent Def Con, a researcher looked at thirty home routers and broke into half of them — including some of the most popular and common brands.

I agree with Bruce and am glad to see a mainstream security guru talking about embedded systems. I recommend you read the whole article here.

An Update on Toyota and Unintended Acceleration

Saturday, October 26th, 2013 Michael Barr

In early 2011, I wrote a couple of blog posts (here and here) as well as a later article (here) describing my initial thoughts on skimming NASA’s official report on its analysis of Toyota’s electronic throttle control system. Half a year later, I was contacted and retained by attorneys for numerous parties involved in suing Toyota for personal injuries and economic losses stemming from incidents of unintended acceleration. As a result, I got to look at Toyota’s engine source code directly and judge for myself.

Since January 2012, I have led a team of seven experienced engineers, including three others from Barr Group, in reviewing Toyota’s electronic throttle and some other source code, as well as related documents, in a secure room near my home in Maryland. This work proceeded in two rounds, with a first round of expert reports and depositions in July 2012 that led to a billion-dollar economic loss settlement as well as an undisclosed settlement of the first personal injury case set for trial in U.S. Federal Court. The second round began with my over-750-page formal written expert report in April 2013 and culminated this week in an Oklahoma jury’s decision that the multiple defects in Toyota’s engine software directly caused a September 2007 single-vehicle crash that injured the driver and killed her passenger.

It is significant that this was the first and only jury so far to hear any opinions about Toyota’s software defects. Earlier cases either predated our source code access, applied a non-software theory, or were settled by Toyota for an undisclosed sum.

In our analysis of Toyota’s source code, we built upon the prior analysis by NASA. First, we looked more closely at more lines of the source code for more vehicles for more man-months. And we also did a lot of things that NASA didn’t have time to do, including reviewing Toyota’s operating system’s internals, reviewing the source code for Toyota’s “monitor CPU”, performing an independent worst-case stack depth analysis, running portions of the main CPU software including the RTOS in a processor simulator, and demonstrating–in 2005 and 2008 Toyota Camry vehicles–a link between loss of throttle control and the numerous defects we found in the software.

In a nutshell, the team led by Barr Group found what the NASA team sought but couldn’t find: “a systematic software malfunction in the Main CPU that opens the throttle without operator action and continues to properly control fuel injection and ignition” that is not reliably detected by any fail-safe. To be clear, NASA never concluded software wasn’t at least one of the causes of Toyota’s high complaint rate for unintended acceleration; they just said they weren’t able to find the specific software defect(s) that caused unintended acceleration. We did.

Now it’s your turn to judge for yourself. Though I don’t think you can find my expert report outside the Court system, here are links to the trial transcript of my expert testimony to the Oklahoma jury and a (redacted) copy of the slides I shared with the jury in Bookout v. Toyota.

Note that the jury in Oklahoma found that Toyota owed each victim $1.5 million in compensatory damages and also found that Toyota acted with “reckless disregard”. The latter legal standard meant the jury was headed toward deliberations on additional punitive damages when Toyota called the plaintiffs to settle (for yet another undisclosed amount). It has been reported that an additional 400+ personal injury cases are still working their way through various courts.

Related Stories


On December 13, 2013, Toyota settled the case that was set for the next trial, in West Virginia in January 2014, and announced an “intensive” settlement process to try to resolve approximately 300 of the remaining personal injury cases, which are consolidated in U.S. and California courts.

Toyota continues to publicly deny there is a problem and seems to have no plans to address the unsafe design and inadequate fail-safes in its drive-by-wire vehicles–the electronics and software design of which is similar in most of the Toyota and Lexus (and possibly Scion) vehicles manufactured over roughly the last ten model years. Meanwhile, incidents of unintended acceleration continue to be reported in these vehicles (see also the NHTSA complaint database), and these new incidents, when injuries are severe, continue to result in new personal injury lawsuits against Toyota.

In March 2014, the U.S. Department of Justice announced a $1.2 billion settlement in a criminal case against Toyota. As part of that settlement, Toyota admitted that it had lied to NHTSA, Congress, and the public about unintended acceleration, and that it had put its brand before public safety. Yet Toyota still has made no safety recalls for the defective engine software.

On April 1, 2014, I gave a keynote speech at the EE Live conference, which touched on the Toyota litigation in the context of lethal embedded software failures of the past and the coming era of self-driving vehicles. The slides from that presentation are available for download at

On September 18, 2014, Professor Phil Koopman, of Carnegie Mellon University, presented a talk about his public findings in these Toyota cases entitled “A Case Study of Toyota Unintended Acceleration and Software Safety”.

On October 30, 2014, Italian computer scientist Roberto Bagnara presented a talk entitled “On the Toyota UA Case and the Redefinition of Product Liability for Embedded Software” at the 12th Workshop on Automotive Software & Systems, in Milan.

Dead Code, the Law, and Unintended Consequences

Wednesday, February 6th, 2013 Michael Barr

Dead code is source code that is not executed in the final system. It comes in two forms. First, there is dead code that is commented out or removed via #ifdefs; that dead code has no corresponding form in the binary. Other dead code is present in the binary but cannot be, or simply never is, invoked. Either way, dead code is a vestigial or unnecessary part of the product.
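For example (a hypothetical sketch; the macro and function names are invented for illustration), both forms can sit side by side in ordinary C source:

#define FEATURE_X   0                 /* feature disabled for this product line */

#if FEATURE_X                         /* Form 1: preprocessed away entirely, so this */
void feature_x_task(void)             /* function leaves no trace in the binary.     */
{
    /* ... */
}
#endif

void legacy_diagnostics(void)         /* Form 2: compiled and linked into the binary, */
{                                     /* but no remaining code ever calls it.         */
    /* ... */
}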

One of the places I have seen a lot of dead code is in my work as an expert witness. And I’ve observed that the presence of dead code can have unintended legal consequences. In at least one case I was involved in, it is likely that strings in certain dead code in the binary were a major cause of a lawsuit being brought against a maker of embedded systems products. I have also observed several scenarios in which dead code (at least in part) heightened the probability of a loss in court.

One way that dead code can increase the probability of a loss in court is if the dead code implements part (or all) of a patented algorithm. When a patent infringement suit is brought against your company, one or more versions of your source code–when potentially relevant–must be produced to the other side’s legal team. The patent owner’s expert(s) will pore over this source code for many hours, seeking to identify portions of the code that implement each part of the algorithm. If one of those parts is implemented in dead code that becomes part of the binary, the product may still infringe an asserted claim of the patent–even if that code is never invoked. (I’m not a lawyer and not sure whether dead code does legally infringe, but consider it at least possible that neither side’s expert will notice it is dead or that the judge or jury won’t be convinced by a dead-code defense.)

Another potential consequence of dead code is that an expert assessing the quality of your source code (e.g., in a product liability suit involving injury or death) may cite, as one basis for an opinion of poor quality, that the source code she examined is overly complex and riddled with commented-out code and/or preprocessor directives. As you know, source code that is hard to read is harder than it needs to be to maintain. And, I think most experts would agree, code that is hard to read and maintain is more likely to contain bugs. In such a scenario, your engineering team may come off as sloppy or incompetent to the jury, which is not exactly the first impression you want to make when your product is alleged to have caused injury or death. Note that overly complex code also increases the cost of litigation–as both sides’ experts will need to spend more time reviewing the source code to understand it fully.

In a source code copyright (or copyleft) suit, the mere presence of another party’s source code may be sufficient to prove infringement–even if it isn’t actually built into the binary! Consider the risk that your code contains files or functions of open source software that, by their mere existence in your source code, attach an open source license to all of your proprietary code.

Bottom line advice: if source code is dead, remove it. If you think you might need to refer to that code again later, well, that is what version control systems are for–make a searchable comment about what you’ve removed in that check-in. Do this as soon as you are certain the code won’t be in a release version of your firmware.