
Apple’s #gotofail SSL Security Bug was Easily Preventable

March 3rd, 2014 by Michael Barr

If programmers at Apple had simply followed a couple of the rules in the Embedded C Coding Standard, they could have prevented the very serious gotofail SSL bug from entering the iOS and OS X operating systems. Here’s a look at the programming mistakes involved and the easy-to-follow coding standard rules that could easily have prevented the bug.

In case you haven’t been following the computer security news, Apple last week posted security updates for users of devices running iOS 6, iOS 7, and OS X 10.9 (Mavericks). This was prompted by a critical bug in Apple’s implementation of the SSL/TLS protocol, which has apparently been lurking for over a year.

In a nutshell, the bug is that a bunch of important C source code lines containing digital signature certificate checks were never being run because an extraneous goto fail; statement in a portion of the code was always forcing a jump. This is a bug that put millions of people around the world at risk for man-in-the-middle attacks on their apparently-secure encrypted connections. Moreover, Apple should be embarrassed that this particular bug also represents a clear failure of software process at Apple.

There is debate about whether this may have been a clever insider-enabled security attack against all of Apple’s users, e.g., by a certain government agency. However, whether it was an innocent mistake or an attack designed to look like an innocent mistake, Apple could have and should have prevented this error by writing the relevant portion of code in a simple manner that would have always been more reliable as well as more secure. And thus, in my opinion, Apple was clearly negligent.

Here are the lines of code at issue (from Apple’s open source code server), with the extraneous goto marked by a comment:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, ...)
{
    OSStatus  err;
    ...

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;    /* extraneous second goto: executes unconditionally */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

The code above violates at least two rules from Barr Group’s Embedded C Coding Standard book. Importantly, had Apple followed just the first of these rules, this dangerous bug would almost certainly have been prevented from ever getting into even a single device.

Rule 1.3.a

Braces shall always surround the blocks of code (a.k.a., compound statements), following if, else, switch, while, do, and for statements; single statements and empty statements following these keywords shall also always be surrounded by braces.

Had Apple not violated this always-braces rule in the SSL/TLS code above, there would have been either just one set of curly braces after each if test or a very odd-looking, hard-to-miss chunk of code with two sets of curly braces after the if with two gotos. Either way, this bug was preventable by following this rule and performing code review.
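To illustrate, here is how the trouble spot would look with the always-braces rule applied; this is a sketch based only on the excerpt above, not Apple’s actual fix:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    {
        goto fail;
        goto fail;    /* duplicate line is now contained by the braces */
    }
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    {
        goto fail;
    }

Written this way, the duplicated goto fail; is still dead code, but it is harmless: control enters the block only when err is non-zero, so the final signature check below still runs whenever the hash update succeeds. And the odd-looking block containing two gotos would be hard to miss in a code review.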

Rule 1.7.c

The goto keyword shall not be used.

Had Apple not violated this never-goto rule in the SSL/TLS code above, there would not have been a second goto fail; line to create the unreachable code situation. And if eliminating goto had required replacing each one-line check with more than one line of code, that alone would have forced the programmers to use curly braces.
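For example, the same sequence of checks can be written without goto as a chain of status-guarded blocks, which also satisfies the always-braces rule. Again, this is only a sketch based on the excerpt above:

    err = SSLHashSHA1.update(&hashCtx, &serverRandom);
    if (err == 0)
    {
        err = SSLHashSHA1.update(&hashCtx, &signedParams);
    }
    if (err == 0)
    {
        err = SSLHashSHA1.final(&hashCtx, &hashOut);
    }

    /* Clean up unconditionally, as the code after the fail label did. */
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;

In this structure, no single duplicated line can cause every subsequent check to be silently skipped.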

On a final note, Apple should be asking its engineers and engineering managers about the failures of process (at several layers) that must have occurred for this bug to make it into end users’ devices. Specifically:

  • Where was the peer code review that should have spotted this, or how did the reviewers fail to spot this?
  • Why wasn’t a coding standard rule adopted to make such bugs easier to spot during peer code reviews?
  • Why wasn’t a static analysis tool, such as Klocwork, used, or how did it fail to detect the unreachable code that followed? Or was it users of such a tool, at Apple, who failed to act?
  • Where was the regression test case for a bad SSL certificate signature, or how did that test fail?

Dangerous bugs, like this one from Apple, often result from a combination of accumulated errors in the face of flawed software development processes. Too few programmers recognize that many bugs can be kept entirely out of a system simply by adopting (and rigorously enforcing) a coding standard that is designed to keep bugs out.

Security Risks of Embedded Systems

January 15th, 2014 by Michael Barr

In the words of security guru and blogger Bruce Schneier, “The Internet of Things is Wildly Insecure — and Often Unpatchable.” As Bruce describes the current state of affairs in a recent Wired magazine article:

We’re at a crisis point now with regard to the security of embedded systems, where computing is embedded into the hardware itself — as with the Internet of Things. These embedded computers are riddled with vulnerabilities, and there’s no good way to patch them.

It’s not unlike what happened in the mid-1990s, when the insecurity of personal computers was reaching crisis levels. Software and operating systems were riddled with security vulnerabilities, and there was no good way to patch them. Companies were trying to keep vulnerabilities secret, and not releasing security updates quickly. And when updates were released, it was hard — if not impossible — to get users to install them. This has changed over the past twenty years, due to a combination of full disclosure — publishing vulnerabilities to force companies to issue patches quicker — and automatic updates: automating the process of installing updates on users’ computers. The results aren’t perfect, but they’re much better than ever before.

But this time the problem is much worse, because the world is different: All of these devices are connected to the Internet. The computers in our routers and modems are much more powerful than the PCs of the mid-1990s, and the Internet of Things will put computers into all sorts of consumer devices. The industries producing these devices are even less capable of fixing the problem than the PC and software industries were.

If we don’t solve this soon, we’re in for a security disaster as hackers figure out that it’s easier to hack routers than computers. At a recent Def Con, a researcher looked at thirty home routers and broke into half of them — including some of the most popular and common brands.

I agree with Bruce and am glad to see a mainstream security guru talking about embedded systems. I recommend you read the whole article here.

An Update on Toyota and Unintended Acceleration

October 26th, 2013 by Michael Barr

In early 2011, I wrote a couple of blog posts (here and here) as well as a later article (here) describing my initial thoughts on skimming NASA’s official report on its analysis of Toyota’s electronic throttle control system. Half a year later, I was contacted and retained by attorneys for numerous parties involved in suing Toyota for personal injuries and economic losses stemming from incidents of unintended acceleration. As a result, I got to look at Toyota’s engine source code directly and judge for myself.

Since January 2012, I’ve led a team of seven experienced engineers, including three others from Barr Group, in reviewing Toyota’s electronic throttle and some other source code, as well as related documents, in a secure room near my home in Maryland. This work proceeded in two rounds, with a first round of expert reports and depositions issued in July 2012 that led to a billion-dollar economic-loss settlement as well as an undisclosed settlement of the first personal injury case set for trial in U.S. Federal Court. The second round began with a formal written expert report of over 750 pages by me in April 2013 and culminated this week in an Oklahoma jury’s decision that the multiple defects in Toyota’s engine software directly caused a September 2007 single-vehicle crash that injured the driver and killed her passenger.

It is significant that this was the first and only jury so far to hear any opinions about Toyota’s software defects. Earlier cases either predated our source code access, applied a non-software theory, or were settled by Toyota for an undisclosed sum.

In our analysis of Toyota’s source code, we built upon the prior analysis by NASA. First, we looked more closely at more lines of the source code for more vehicles over more man-months. We also did a lot of things that NASA didn’t have time to do, including reviewing Toyota’s operating system’s internals, reviewing the source code for Toyota’s “monitor CPU”, performing an independent worst-case stack depth analysis, running portions of the main CPU software (including the RTOS) in a processor simulator, and demonstrating, in 2005 and 2008 Toyota Camry vehicles, a link between loss of throttle control and the numerous defects we found in the software.

In a nutshell, the team led by Barr Group found what the NASA team sought but couldn’t find: “a systematic software malfunction in the Main CPU that opens the throttle without operator action and continues to properly control fuel injection and ignition” that is not reliably detected by any fail-safe. To be clear, NASA never concluded software wasn’t at least one of the causes of Toyota’s high complaint rate for unintended acceleration; they just said they weren’t able to find the specific software defect(s) that caused unintended acceleration. We did.

Now it’s your turn to judge for yourself. Though I don’t think you can find my expert report outside the Court system, here are links to the trial transcript of my expert testimony to the Oklahoma jury and a (redacted) copy of the slides I shared with the jury in Bookout, et al. v. Toyota.

Note that the jury in Oklahoma found that Toyota owed each victim $1.5 million in compensatory damages and also found that Toyota acted with “reckless disregard”. The latter legal standard meant the jury was headed toward deliberations on additional punitive damages when Toyota called the plaintiffs to settle (for yet another undisclosed amount). It has been reported that an additional 400+ personal injury cases are still working their way through various courts.

Updates

On December 13, 2013, Toyota settled the case that was set for the next trial, in West Virginia in January 2014, and announced an “intensive” settlement process to try to resolve approximately 300 of the remaining personal injury cases, which are consolidated in U.S. and California courts.

Toyota continues to publicly deny there is a problem and seems to have no plans to address the unsafe design and inadequate fail-safes in its drive-by-wire vehicles, the electronics and software design of which is similar across most of the Toyota and Lexus (and possibly Scion) vehicles manufactured over roughly the last ten model years. Meanwhile, incidents of unintended acceleration continue to be reported in these vehicles (see also the NHTSA complaint database) and these new incidents, when injuries are severe, continue to result in new personal injury lawsuits against Toyota.

In March 2014, the U.S. Department of Justice announced a $1.2 billion settlement in a criminal case against Toyota. As part of that settlement, Toyota admitted to having lied to NHTSA, Congress, and the public about unintended acceleration, and also to putting its brand before public safety. Yet Toyota still has made no safety recalls for the defective engine software.

On April 1, 2014, I gave a keynote speech at the EE Live conference, which touched on the Toyota litigation in the context of lethal embedded software failures of the past and the coming era of self-driving vehicles. The slides from that presentation are available for download at http://www.barrgroup.com/killer-apps/.

On September 18, 2014, Professor Phil Koopman, of Carnegie Mellon University, presented a talk about his public findings in these Toyota cases entitled “A Case Study of Toyota Unintended Acceleration and Software Safety”.

On October 30, 2014, Italian computer scientist Roberto Bagnara presented a talk entitled “On the Toyota UA Case and the Redefinition of Product Liability for Embedded Software” at the 12th Workshop on Automotive Software & Systems, in Milan.

Intellectual Property Protections for Embedded Software: A Primer

June 11th, 2013 by Michael Barr

My experiences as a testifying expert witness in numerous lawsuits involving software and source code have taught me a thing or two about the various intellectual property protections that are available to the creators of software. These are areas of the law that you, as an embedded software engineer, should probably know at least a little about. Hence, this primer.

Broadly speaking, software is protectable under three areas of intellectual property law: patent law, copyright law, and trade secret law. Each of these areas of the law protects your software in a different way and you may choose to rely on none, some, or all three such protections. (The name of your product may also be protectable by trademark law, though that has nothing specifically to do with software.)

Embedded Software and Patent Law

Patent law can be used to protect one or more innovative IDEAS that your product uses to get the job done. If you successfully patent a mathematical algorithm specific to your product domain (e.g., an algorithm for detecting or handling a specific arrhythmia used in your pacemaker) then you own a (time-limited) monopoly on that idea. If you believe another company is using the same algorithm in their product then you have the right to bring an infringement suit (e.g., in the ITC or U.S. District Court).

In the process of such a suit, the competitor’s schematics, source code, and design documents will generally be made available to independent expert witnesses (i.e., not to you directly). The expert(s) will then spend time reviewing the competitor’s source code to determine if one or more of the claims of the asserted patent(s) is infringed. It is a useful analogy to think of the claims of a patent as a description of the boundaries of real property and of infringement of the patent as trespassing.

Patents protect ideas regardless of how they are expressed. For example, you may have heard about (purely) “software patents” being new and somewhat controversial. However, the patents that protect most embedded systems typically cover a combination of at least electronics and software. Patent protection is typically broad enough to cover purely hardware, purely software, as well as hardware-software. Thus the protection can span a range of hardware vs. software decompositions and provides protection within software even when the programming languages and/or function and variable names differ.

To apply for a patent on your work you must file certain paperwork with and pay registration fees to the U.S. Patent and Trademark Office. This process generally begins with a prior art search conducted by an attorney and takes at least several years to complete. You should expect the total cost (not including your own time), per patent, to be measured in the tens of thousands of dollars.

Embedded Software and Copyright Law

Copyright law can be used to protect one or more creative EXPRESSIONS that the authors of the source code employed to get the job done. Unlike patent law, copyright law cannot be used to protect ideas or algorithms. Rather, copyright can only protect the specific, creative way you choose to implement those ideas. Indeed, if there is only one way or a handful of ways to implement a particular algorithm, or only one way to do so efficiently or in your chosen language, you may not be able to protect that aspect of your software with copyright.

The attorneys in a source code copyright infringement lawsuit wind up arguing over two primary issues. First, they argue which individual parts of the source code (e.g., function prototypes in an API) are protectable because they are sufficiently creative. The judge generally decides this issue, based on expert analysis. Second, they argue how the selection and arrangement of these individually protectable “islands” together shows a pattern of “substantial similarity”. The jury decides that.

Source code copyright infringement is easiest to prove when the two programs have source code that looks similar in some important way. That is, when the programming languages are the same and the function and variable names are similar. However, it is rare that the programs are identical in every detail. Thus, due to the possibility of the accused software developers independently creating something similar by coincidence rather than malfeasance, the legal standard for proving copyright infringement is much higher when it cannot be shown that the defendants had “access” to some version of the source code.

Unlike patents, copyrights do not need to be awarded. You, or your employer, own a copyright in your work merely by creating it. (Whether you write “Copyright (c) 2013 by MyCompany, Inc.” at the top of every source code file or not.) However, there are some advantages to registering your copyright (by submitting a sample) in a work of software with the U.S. Copyright Office before any alleged infringement occurs. Even if you outsource it to an attorney, the cost of registering a copyright should only be about a thousand dollars at most.

As source code frequently changes and new versions will inevitably be released, you should be reassured that a single copyright extends to “derivative works”, which generally includes later versions of the software. You don’t have to keep registering every minor release with the Copyright Office. And, very importantly, the binary executable version of your software (e.g., the contents of Flash or a library of object code) is extended copyright protection as a derivative work of the source code. Thus someone who copies your binary can also be found to have infringed your copyright.

Interestingly, both patent law and copyright law are called for in the U.S. Constitution. However, of course, the extension of these areas of law to software is a modern development.

Embedded Software and Trade Secret Law

Unlike patent and copyright law, which each at best protects only a portion (“islands”) of your source code, trade secret law can be used to protect the entirety of the SECRETS within the source code. Secrets need not be innovative ideas nor creative expressions. The key requirement for this area of law to apply is that you take reasonable steps to keep the source code “secret”. So, for example, though open source software may be protectable by patent law and copyright law it cannot be protected by trade secret law due to the lack of secrecy.

You may think that there is a fundamental conflict between registering the copyright in your software, which requires submitting a copy to the government, and keeping your source code secret. However, the U.S. Copyright Office only requires that a small portion of the source code of your program be filed to successfully identify the copyrighted software and its owner; the vast majority of the source code need not be submitted.

Preserving this secrecy is one of the reasons for the inconveniences software developers often encounter at the companies that employ them (e.g., not being able to take source code home), as well as for certain terms of their employment agreements. Protecting software like the secret formula for Coca-Cola or Krabby Patties helps an owner prove that the source code is a trade secret and thus opens the door to this additional legal basis for bringing a lawsuit against a competitor. Trade secret cases I have been involved with as an expert have involved allegations that one or more insiders left a company and subsequently misappropriated its software secrets to compete via a startup or an existing competitor.

Final Thoughts

In my work as an expert, I always look to the attorneys for more precise definitions of legal terms. Importantly, there are many terms and concepts I have purposefully avoided using here to keep this at an introductory level of detail. You should, of course, always consult with an attorney about your specific situation. You should never simply rely on what you read on the Internet. Hopefully, there is enough information in this primer to help you at least understand the types of protections potentially available to you and to find a lawyer who specializes in the right field.

Dead Code, the Law, and Unintended Consequences

February 6th, 2013 by Michael Barr

Dead code is source code that is not executed in the final system. It comes in two forms. First, there is dead code that is commented out or removed via #ifdefs. That dead code has no corresponding form in the binary. Other dead code is present in the binary but cannot be, or is never, invoked. Either way, dead code is a vestigial or unnecessary part of the product.
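For illustration, here is what the two forms look like in C (the function names here are hypothetical, invented for the example):

    /* Form 1: compiled out; leaves no trace in the binary. */
    #if 0
        legacy_checksum(buf, len);
    #endif

    /* Form 2: compiled into the binary, but unreachable. */
    void check_license(void)
    {
        return;
        log_failure("invalid license");    /* never executes */
    }

Form 2 also includes whole functions that are compiled and linked into the binary but never called from anywhere in the final system.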

One of the places I have seen a lot of dead code is in my work as an expert witness. And I’ve observed that the presence of dead code can have unintended legal consequences. In at least one case I was involved in, it is likely that strings in certain dead code in the binary were a major cause of a lawsuit being brought against a maker of embedded system products. I have also observed several scenarios in which dead code (at least in part) heightened the probability of a loss in court.

One way that dead code can increase the probability of a loss in court is if the dead code implements part (or all) of a patented algorithm. When a patent infringement suit is brought against your company, one or more versions of your source code–when potentially relevant–must be produced to the other side’s legal team. The patent owner’s expert(s) will pore over this source code for many hours, seeking to identify portions of the code that implement each part of the algorithm. If one of those parts is implemented in dead code that becomes part of the binary, the product may still infringe an asserted claim of the patent–even if that code is never invoked. (I’m not a lawyer and am not sure whether dead code does legally infringe, but consider it at least possible that neither side’s expert will notice the code is dead or that the judge or jury won’t be convinced by a dead code defense.)

Another potential consequence of dead code is that an expert assessing the quality of your source code (e.g., in a product liability suit involving injury or death) may use as one basis of her opinion of poor quality that the source code she examined is overly complex and riddled with commented-out code and/or preprocessing directives. As you know, source code that is hard to read is harder to maintain than it needs to be. And, I think most experts would agree, code that is hard to read and maintain is more likely to contain bugs. In such a scenario, your engineering team may come off as sloppy or incompetent to the jury, which is not exactly the first impression you want to make when your product is alleged to have caused injury or death. Note that overly complex code also increases the cost of litigation, as both sides’ experts will need to spend more time reviewing the source code to understand it fully.

In a source code copyright (or copyleft) suit, the mere presence of another party’s source code may be sufficient to prove infringement–even if it isn’t actually built into the binary! Consider the risk that your code contains files or functions of open source software that, by their mere existence in your source code, attach an open source license to all of your proprietary code.

Bottom line advice: If source code is dead, remove it. If you think you might need to refer to that code again later, well, that is what version control systems are for; make a searchable comment about what you’ve removed in the check-in message. Do this as soon as you are certain the code won’t be in a release version of your firmware.