Posts Tagged ‘architecture’

A Look Back at the Audi 5000 and Unintended Acceleration

Friday, March 14th, 2014 Michael Barr

I was in high school in the late 1980s when NHTSA (pronounced “nit-suh”), Transport Canada, and others studied complaints of unintended acceleration in Audi 5000 vehicles. Looking back on the Audi issues, and in light of my own recent role as an expert investigating complaints of unintended acceleration in Toyota vehicles, there appears to be a fundamental contradiction between the way Audi’s problems are remembered now and what NHTSA had to say officially at the time.

Here’s an example from a pretty typical remembrance of what happened, from a 2007 article written “in defense of Audi”:

In 1989, after three years of study, the National Highway Traffic Safety Administration (NHTSA) issued their report on Audi’s “sudden unintended acceleration problem.” NHTSA’s findings fully exonerated Audi… The report concluded that the Audi’s pedal placement was different enough from American cars’ normal set-up (closer to each other) to cause some drivers to mistakenly press the gas instead of the brake.

And here’s what NHTSA’s official Audi 5000 report actually concluded:

Some versions of Audi idle-stabilization system were prone to defects which resulted in excessive idle speeds and brief unanticipated accelerations of up to 0.3g. These accelerations could not be the sole cause of [long-duration unintended acceleration incidents], but might have triggered some [of the long-duration incidents] by startling the driver.

Contrary to the modern article, NHTSA’s original report most certainly did not “fully exonerate” Audi. Similarly, though there were differences in pedal configuration compared to other cars, NHTSA seems to have concluded that the first thing that happened was a sudden unexpected surge of engine power that startled drivers, and that pedal misapplication sometimes followed.

This sequence of, first, a throttle malfunction and, then, pedal confusion was summarized in a 2012 review study by NHTSA:

Once an unintended acceleration had begun, in the Audi 5000, due to a failure in the idle-stabilizer system (producing an initial acceleration of 0.3g), pedal misapplication resulting from panic, confusion, or unfamiliarity with the Audi 5000 contributed to the severity of the incident.

The 1989 NHTSA report elaborates on the design of the throttle, which included an “idle-stabilization system,” and documents that multiple “intermittent malfunctions of the electronic control unit were observed and recorded”. In a nutshell, the Audi 5000 had a main mechanical throttle control, wherein the gas pedal pushed and pulled on the throttle valve via a cable, as well as an electronic throttle control for idle adjustment.

It is unclear whether the “electronic control unit” mentioned by NHTSA was purely electronic or if it also contained embedded software. (ECU, in modern lingo, includes firmware.) It is also unclear what percentage of the Audi 5000 unintended acceleration complaints were short-duration events vs. long-duration events. If there was software in the ECU and short-duration events were more common, well, that would lead to some interesting questions. Did NHTSA and the public learn all of the right lessons from the Audi 5000 troubles?

An Update on Toyota and Unintended Acceleration

Saturday, October 26th, 2013 Michael Barr

In early 2011, I wrote a couple of blog posts (here and here) as well as a later article (here) describing my initial thoughts on skimming NASA’s official report on its analysis of Toyota’s electronic throttle control system. Half a year later, I was contacted and retained by attorneys for numerous parties involved in suing Toyota for personal injuries and economic losses stemming from incidents of unintended acceleration. As a result, I got to look at Toyota’s engine source code directly and judge for myself.

Since January 2012, I’ve led a team of seven experienced engineers, including three others from Barr Group, in reviewing Toyota’s electronic throttle and some other source code as well as related documents, in a secure room near my home in Maryland. This work proceeded in two rounds, with a first round of expert reports and depositions issued in July 2012 that led to a billion-dollar economic loss settlement as well as an undisclosed settlement of the first personal injury case set for trial in U.S. Federal Court. The second round began with my formal written expert report of over 750 pages in April 2013 and culminated this week in an Oklahoma jury’s decision that the multiple defects in Toyota’s engine software directly caused a September 2007 single-vehicle crash that injured the driver and killed her passenger.

It is significant that this was the first and only jury so far to hear any opinions about Toyota’s software defects. Earlier cases either predated our source code access, applied a non-software theory, or were settled by Toyota for an undisclosed sum.

In our analysis of Toyota’s source code, we built upon the prior analysis by NASA. First, we looked more closely at more lines of the source code for more vehicles for more man-months. And we also did a lot of things that NASA didn’t have time to do, including reviewing Toyota’s operating system’s internals, reviewing the source code for Toyota’s “monitor CPU”, performing an independent worst-case stack depth analysis, running portions of the main CPU software including the RTOS in a processor simulator, and demonstrating–in 2005 and 2008 Toyota Camry vehicles–a link between loss of throttle control and the numerous defects we found in the software.

In a nutshell, the team led by Barr Group found what the NASA team sought but couldn’t find: “a systematic software malfunction in the Main CPU that opens the throttle without operator action and continues to properly control fuel injection and ignition” that is not reliably detected by any fail-safe. To be clear, NASA never concluded software wasn’t at least one of the causes of Toyota’s high complaint rate for unintended acceleration; they just said they weren’t able to find the specific software defect(s) that caused unintended acceleration. We did.

Now it’s your turn to judge for yourself. Though I don’t think you can find my expert report outside the Court system, here are links to the trial transcript of my expert testimony to the Oklahoma jury and a (redacted) copy of the slides I shared with the jury in Bookout, et al. v. Toyota.

Note that the jury in Oklahoma found that Toyota owed each victim $1.5 million in compensatory damages and also found that Toyota acted with “reckless disregard”. The latter legal standard meant the jury was headed toward deliberations on additional punitive damages when Toyota called the plaintiffs to settle (for yet another undisclosed amount). It has been reported that an additional 400+ personal injury cases are still working their way through various courts.

Updates

On December 13, 2013, Toyota settled the case that was set for the next trial, in West Virginia in January 2014, and announced an “intensive” settlement process to try to resolve approximately 300 of the remaining personal injury cases, which are consolidated in U.S. and California courts.

Toyota continues to publicly deny there is a problem and seems to have no plans to address the unsafe design and inadequate fail-safes in its drive-by-wire vehicles–the electronics and software design of which is similar in most of the Toyota and Lexus (and possibly Scion) vehicles manufactured over at least the last ten model years. Meanwhile, incidents of unintended acceleration continue to be reported in these vehicles (see also the NHTSA complaint database), and these new incidents, when injuries are severe, continue to result in new personal injury lawsuits against Toyota.

In March 2014, the U.S. Department of Justice announced a $1.2 billion settlement in a criminal case against Toyota. As part of that settlement, Toyota admitted to having lied to NHTSA, Congress, and the public about unintended acceleration, and to putting its brand before public safety. Yet Toyota still has made no safety recalls for the defective engine software.

On April 1, 2014, I gave a keynote speech at the EE Live conference, which touched on the Toyota litigation in the context of lethal embedded software failures of the past and the coming era of self-driving vehicles. The slides from that presentation are available for download at http://www.barrgroup.com/killer-apps/.

On September 18, 2014, Professor Phil Koopman, of Carnegie Mellon University, presented a talk about his public findings in these Toyota cases entitled “A Case Study of Toyota Unintended Acceleration and Software Safety”.

On October 30, 2014, Italian computer scientist Roberto Bagnara presented a talk entitled “On the Toyota UA Case and the Redefinition of Product Liability for Embedded Software” at the 12th Workshop on Automotive Software & Systems, in Milan.

Introducing Barr Group

Wednesday, December 26th, 2012 Michael Barr

In the ten months since forming Barr Group, I have received many questions about the new company. As we enter the new year, I thought it a good time to use this blog post to answer the most frequently asked questions, such as:

  • What does Barr Group do?
  • Who are Barr Group’s clients?
  • How is Barr Group different from my former company?
  • Who is our CEO and what skills does he bring?
  • What is my role in Barr Group?

If I had to describe Barr Group (http://www.barrgroup.com) in a single sentence, I would say that “Barr Group helps companies that design embedded systems make their products more reliable and more secure.” We do sell a few small items–such as the Embedded C Coding Standard book and Embedded Software Training in a Box kit–but our company is not really about our own products. Rather, we achieve our mission of improving embedded systems reliability and security by delivering business-to-business services of primarily three types: (1) consulting, (2) training, and (3) engineering.

Barr Group serves clients from small startups to well-known Fortune 100 companies that make embedded systems used in a wide range of industries. Representative clients include: Adtran, Medtronic, Renesas, TI, and Xerox. Barr Group’s staff has expertise and experience in the design of medical devices, industrial controls, consumer electronics, telecommunications, transportation equipment, smart grid technologies, and many other types of embedded systems.

Barr Group’s consulting services are sold to engineering managers, engineering directors, or vice presidents of engineering. Typical consulting engagements are short-duration/high-value projects aimed at answering strategically important questions related to embedded systems architecture and embedded software development processes. For example, in the area of architecture for reliability and security we offer services specifically in the following areas: system design review, software design review, system (re)architecture, software (re)architecture, source code review, cost reduction, reverse engineering, and security analysis. Of course, we often address more targeted issues as well, including embedded software development process improvements. Because we are unaffiliated with any processor, RTOS, or tool vendor, all of our advice is independent of any external influence; we aim only to find the best path forward for our clients, favoring alternatives that require only 20% of the effort to achieve 80% of the available benefits.

Barr Group’s training courses are designed to raise the quality of engineers and engineering teams, and many of them include hands-on programming exercises. We teach these courses both privately and publicly. Private training is held at the client’s office, and every engineer in attendance works for the client. By contrast, any individual or small group of engineers can purchase a ticket to our public training courses. Our Spring 2013 training calendar includes four week-long hands-on courses: Embedded Software Boot Camp (Maryland), Embedded Security Boot Camp (Silicon Valley), Embedded Android Boot Camp (Maryland), and Agile and Test-Driven Embedded Development (Florida).

Barr Group’s engineering design services include outsourced development of: electronics (including FPGA and PCB design); device drivers for operating systems such as MicroC/OS, VxWorks, Windows, Linux, Android, and others; embedded software; mechanical enclosures; and everything in between. In one representative project that was recently completed, a cross-functional team of talented Barr Group engineers worked together to perform all of the mechanical, electrical, software, reliability, and security engineering for a long-lived high voltage electrical switching system for deployment in a modern “smart grid” electrical distribution network.

Compared with my earlier company, which was founded in 1999, the principal difference in all of the above is Barr Group’s additional focus on embedded systems security alongside reliability. As at Netrino, some members of our engineering staff also work as expert witnesses in complex technical litigation–with a range of cases involving allegations of product liability, patent infringement, and source code copyright infringement.

Finally, under the new leadership of seasoned technology executive (and fellow electrical engineer) Andrew Girson, Barr Group has added a suite of Engineer-Centric Market Research™ services, which help IC makers, RTOS vendors, and other companies serving the embedded systems design community improve their products and marketing by better understanding the mind of the engineer. These services have been specifically enabled by the combination of Mr. Girson’s skills and expertise in strategic technical marketing with Barr Group’s extensive contacts in the embedded systems industry, including the over 20,000 Firmware Update newsletter subscribers.

My role in Barr Group is chief technology officer. The switch from my role as president of the old company to CTO of the new company has freed up considerably more of my time to work on engineering and expert witness projects. The extra time allows me to focus on sharing my technical expertise with as many clients as possible while also developing the other engineers who work on individual projects.

All in all, it has been great fun (if a lot of work) launching the new company this year. I look forward to another successful year for Barr Group in 2013. Please don’t hesitate to contact me or call us at (866) 653-6233 if we can be of assistance to your company. And happy new year!

Building Reliable and Secure Embedded Systems

Tuesday, March 13th, 2012 Michael Barr

In this era of 140 characters or less, it has been well and concisely stated that, “RELIABILITY concerns ACCIDENTAL errors causing failures, whereas SECURITY concerns INTENTIONAL errors causing failures.” In this column I expand on this statement, especially as regards the design of embedded systems and their place in our network-connected and safety-conscious modern world.

As the designers of embedded systems, the first thing we must accomplish on any project is to make the hardware and software work. That is to say we need to make the system behave as it was designed to. The first iteration of this is often flaky; certain uses or perturbations of the system by testers can easily dislodge the system into a non-working state. In common parlance, “expect bugs.”

Given time, tightening cycles of debug and test can get us past the bugs and through to a shippable product. But is a debugged system good enough? Neither reliability nor security can be tested into a product. Each must be designed in from the start. So let’s take a closer look at these two important design aspects for modern embedded systems and then I’ll bring them back together at the end.

Reliable Embedded Systems

A product can be stable yet lack reliability. Consider, for example, an anti-lock braking computer installed in a car. The software in the anti-lock brakes may be bug-free, but how does it function if a critical input sensor fails?

Reliable systems are robust in the face of adverse run-time environments. Reliable systems are able to work around errors as they occur in the field, so that the number and impact of failures are minimized. One key strategy for building reliable systems is to eliminate single points of failure. For example, redundancy could be added around that critical input sensor–perhaps by adding a second sensor in parallel with the first.

Another aspect of reliability that is under the complete control of designers (at least when they consider it from the start) is the set of “fail-safe” mechanisms. Perhaps a suitable but lower-cost alternative to a redundant sensor is detection of the failed sensor with a fallback to mechanical braking.
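To make those two ideas a bit more concrete, here is a minimal sketch in C of how a redundant sensor pair and a fail-safe fallback might fit together. The sensor interface, the plausibility limits, and enter_failsafe_mode() are hypothetical placeholders, not code from any real braking system.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical raw-sensor interface and limits; real values would be
 * application-specific. */
#define SENSOR_MIN        50u     /* lowest plausible raw reading          */
#define SENSOR_MAX        4000u   /* highest plausible raw reading         */
#define MAX_DISAGREEMENT  40u     /* allowed delta between the two sensors */

extern uint16_t read_primary_sensor(void);
extern uint16_t read_secondary_sensor(void);
extern void     enter_failsafe_mode(void);  /* e.g., revert to mechanical braking */

static bool in_range(uint16_t raw)
{
    return (raw >= SENSOR_MIN) && (raw <= SENSOR_MAX);
}

/* Return a validated reading, or trigger the fail-safe if the redundant
 * readings are implausible or disagree. */
uint16_t get_validated_sensor(void)
{
    uint16_t a = read_primary_sensor();
    uint16_t b = read_secondary_sensor();

    if (!in_range(a) || !in_range(b) ||
        (uint16_t)abs((int)a - (int)b) > MAX_DISAGREEMENT)
    {
        enter_failsafe_mode();    /* degraded but safe operation */
        return 0u;
    }

    return (uint16_t)((a + b) / 2u);  /* both plausible: use the average */
}
```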

Failure Mode and Effects Analysis (FMEA) is one of the most effective and important design processes used by engineers serious about designing reliability into their systems. Following this process, each possible failure point is traced from the root failure outward to its effects. In an FMEA, numerical weights can be applied to the likelihood of each failure as well as to the severity of its consequences. An FMEA can thus help guide you to a cost-effective but higher-reliability design by highlighting the most valuable places to insert the redundancy, fail-safes, or other elements that reinforce the system’s overall reliability.
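As a toy illustration of that numerical weighting, the sketch below scores a few invented failure modes using the widely used Risk Priority Number, the product of severity, occurrence, and detection ratings. The failure modes and scores are made up purely for illustration.

```c
#include <stdio.h>

/* One row of a hypothetical FMEA worksheet.  Severity, occurrence, and
 * detection are each scored 1..10; their product is the Risk Priority
 * Number (RPN) commonly used to rank failure modes. */
struct fmea_row {
    const char *failure_mode;
    int severity;     /* how bad the effect is                        */
    int occurrence;   /* how likely the failure is                    */
    int detection;    /* how hard it is to detect (10 = undetectable) */
};

static int rpn(const struct fmea_row *row)
{
    return row->severity * row->occurrence * row->detection;
}

int main(void)
{
    /* Illustrative entries only; real scores come from the FMEA team. */
    const struct fmea_row rows[] = {
        { "Wheel-speed sensor open circuit",  9, 4, 3 },
        { "Valve driver transistor shorted", 10, 2, 7 },
        { "Watchdog timer misconfigured",     8, 3, 8 },
    };

    for (size_t i = 0; i < sizeof rows / sizeof rows[0]; i++) {
        printf("%-35s RPN = %3d\n", rows[i].failure_mode, rpn(&rows[i]));
    }
    return 0;
}
```

The rows with the highest RPN are the first candidates for redundancy, fail-safes, or other design changes.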

In certain industries, reliability is a key driver of product safety. That is why you see FMEA and other design-for-reliability processes being applied by the designers of safety-critical automotive, medical, avionics, nuclear, and industrial systems. The same techniques can, of course, be used to make any type of embedded system more reliable.

Regardless of your industry, it is typically difficult or impossible to retrofit reliability via patches. A patch cannot add hardware like that redundant sensor, so your options may be reduced to a fail-safe that is helpful but less reliable overall. Reliability cannot be patched or tested or debugged into your system. Rather, reliability must be designed in from the start.

Secure Embedded Systems

A product can also be stable yet lack security. For example, an office printer is the kind of product most of us purchase and use without giving a minute of thought to security. The software in the printer may be bug-free, but is it able to prevent a would-be eavesdropper from capturing a remote electronic copy of everything you print, including your sensitive financial documents?

Secure systems are robust in the face of persistent attack. Secure systems are able to keep hackers out by design. One key strategy for building secure systems is to validate all inputs, especially those arriving over an open network connection. For example, security could be added to a printer by guarding against buffer overflows and by encrypting and digitally signing firmware updates.
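As a small example of that input-validation strategy, the C sketch below checks an untrusted length field from a network packet before copying it into a fixed-size buffer. The packet layout and names are hypothetical; the point is simply that nothing the network claims is trusted until it has been bounds-checked.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_JOB_NAME 64u   /* fixed bound chosen by the application */

/* Copy an untrusted, length-prefixed field from a network packet into a
 * fixed-size buffer.  Returns false (and copies nothing) if the claimed
 * length is not plausible for the packet or for the destination. */
bool copy_job_name(const uint8_t *packet, size_t packet_len,
                   char dest[MAX_JOB_NAME])
{
    if ((packet == NULL) || (packet_len < 1u)) {
        return false;
    }

    size_t claimed_len = packet[0];           /* untrusted length byte  */

    if ((claimed_len >= MAX_JOB_NAME) ||      /* would overflow dest    */
        (claimed_len > packet_len - 1u)) {    /* longer than the packet */
        return false;
    }

    memcpy(dest, &packet[1], claimed_len);
    dest[claimed_len] = '\0';
    return true;
}
```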

One of the unfortunate facts of designing secure embedded systems is that the hackers who want to get in only need to find and exploit a single weakness. Adding layers of security is good, but if any one of those layers remains fundamentally weak, a sufficiently motivated attacker will eventually find and breach that defense. But that’s not an excuse for not trying.

For years, the largest printer maker in the world apparently gave little thought to the security of the firmware in its home/office printers, even as it was putting tens of millions of tempting targets out into the world. Now the security of those printers has been breached by security researchers with a reasonable awareness of embedded systems design. Said one of the lead researchers, “We can actually modify the firmware of the printer as part of a legitimate document. It renders correctly, and at the end of the job there’s a firmware update. … In a super-secure environment where there’s a firewall and no access — the government, Wall Street — you could send a résumé to print out.”

Security is a brave new world for many embedded systems designers. For decades we have assumed that the microcontrollers, flash memory, real-time operating systems, and other less mainstream technologies we use would themselves protect our products from attack. Or that we could gain enough “security by obscurity” by keeping our communications protocols and firmware upgrade processes secret. But we no longer live in that world. You must adapt.

Consider the implications of an insecure design of an automotive safety system that is connected to another Internet-connected computer in the car via CAN; or the insecure design of an implanted medical device; or the insecure design of your product.

Too often, the ability to upgrade a product’s firmware in the field is the very vector that’s used to attack. This can happen even when the inclusion of remote firmware updates is motivated primarily by security. For example, as I’ve learned in my work as an expert witness in numerous cases involving reverse engineering of the techniques and technology of satellite television piracy, much of that piracy has been empowered by the same software patching mechanism that allowed the broadcasters to perform security upgrades and electronic countermeasures. Ironically, had the security smart cards in those set-top boxes had only masked ROM images, the overall system security might have been higher. This was certainly not what the designers of the system had in mind. But security is also an arms race.

Like reliability, security must be designed in from the start. Security can’t be patched or tested or debugged in. You simply can’t add security as effectively once the product ships. For example, an attacker who wished to exploit a current weakness in your office printer or smart card might download his hack software into your device and write-protect his sectors of the flash today so that his code could remain resident even as you applied security patches.

Reliable and Secure Embedded Systems

It is important to note at this point that reliable systems are inherently more secure. And that, vice versa, secure systems are inherently more reliable. So, although design for reliability and design for security will often individually yield different results, there is also an overlap between them.

An investment in reliability, for example, generally pays off in security. Why? Well, because a more reliable system is more robust in its handling of all errors, whether they are accidental or intentional. An anti-lock braking system with a fall back to mechanical braking for increased reliability is also more secure against an attack against that critical hardware input sensor. Similarly, those printers wouldn’t be at risk of fuser-induced fire in the case of a security breach if they were never at risk of fire in the case of any misbehavior of the software.

Consider, importantly, that one of the first things a hacker intent on breaching the security of your embedded device might do is to perform a (mental, at least) fault tree analysis of your system. This attacker would then target her time, talents, and other resources at one or more single points of failure she considers most likely to fail in a useful way.

Because a fault tree analysis starts from the general goal and works inward deductively toward the identification of one or more choke points that might produce the desired erroneous outcome, attention paid to increasing reliability such as via FMEA usually reduces choke points and makes the attacker’s job considerably more difficult. Where security can break down even in a reliable system is where the possibility of an attacker’s intentionally induced failure is ignored in the FMEA weighting and thus possible layers of protection are omitted.

Similarly, an investment in security may pay off in greater reliability–even without a directed focus on reliability. For example, if you secure your firmware upgrade process to accept only encrypted and digitally signed binary images, you’ll be adding a layer of protection against an inadvertently corrupted binary causing an accidental error and product failure. Anything you do to improve the security of communications (e.g., checksums, prevention of buffer overflows, etc.) can have a similar effect on reliability.
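Sketched below is one way such a check might look at the firmware-update entry point. The header layout and the crypto and flash routines are hypothetical stand-ins for whatever your platform provides; the point is that nothing reaches flash until both the digest and the signature check out, which rejects tampered images and accidentally corrupted ones alike.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hooks into the device's crypto and flash drivers; the real
 * routines depend on the platform and the chosen algorithms. */
extern bool sha256_matches(const uint8_t *data, size_t len,
                           const uint8_t expected_digest[32]);
extern bool signature_valid(const uint8_t digest[32],
                            const uint8_t *signature, size_t sig_len,
                            const uint8_t *vendor_public_key);
extern void flash_write_image(const uint8_t *image, size_t len);

/* Header assumed to be prepended to each update image by the build server. */
struct fw_header {
    uint8_t  digest[32];      /* SHA-256 of the image body        */
    uint8_t  signature[256];  /* vendor signature over the digest */
    uint32_t image_len;       /* length of the body that follows  */
};

/* Apply an update only if its digest and signature both check out. */
bool apply_firmware_update(const struct fw_header *hdr,
                           const uint8_t *image,
                           const uint8_t *vendor_public_key)
{
    if ((hdr == NULL) || (image == NULL)) {
        return false;
    }
    if (!sha256_matches(image, hdr->image_len, hdr->digest)) {
        return false;   /* image body does not match its digest */
    }
    if (!signature_valid(hdr->digest, hdr->signature,
                         sizeof hdr->signature, vendor_public_key)) {
        return false;   /* digest was not signed by the vendor  */
    }
    flash_write_image(image, hdr->image_len);
    return true;
}
```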

The Only Way Forward

Each year it becomes increasingly important for all of us in the embedded systems design community to learn to design reliable and secure products. If you don’t, it might be your product making the wrong kind of headlines and your source code and design documents being pored over by lawyers. It is no longer acceptable to stick your head in the sand on these issues.

Embedded Software Training in a Box

Friday, May 6th, 2011 Michael Barr

I am beaming with pride. I think we have finally achieved the holy grail of firmware training: Embedded Software Training in a Box. Priced at just $599, the kit includes Everything-You-Need-to-Know-to-Develop-Quality-Reliable-Firmware-in-C, including software for real-time safety-critical systems such as medical devices.

In many ways, this product is the culmination of about the last fifteen years of my career. The knowledge and skills imparted in the kit are drawn from my varied experiences as:

This kit also–at long last–answers the question I’ve been receiving from around the world since I first started writing articles and books about embedded programming: “Where/How can I learn to be a great embedded programmer?” I believe the answer is now as easy as: “Embedded Software Boot Camp in a Box!”