Posts Tagged ‘safety’

Don’t Follow These 5 Dangerous Coding Standard Rules

Tuesday, August 30th, 2011 Michael Barr

Over the summer I happened across a brief blog post by another firmware developer in which he presented ten C coding rules for better embedded C code. I had an immediate strong negative reaction to half of his rules and later came to dislike a few more, so I’m going to describe what I don’t like about each. I’ll refer to this author as BadAdvice. I hope that if you have followed rules like the five below my comments will persuade you to move away from those toward a set of embedded C coding rules that keep bugs out. If you disagree, please start a constructive discussion in the comments.

Bad Rule #1: Do not divide; use right shift.

As worded, the above rule is way too broad. It’s not possible to always avoid C’s division operator. First of all, right shifting only works as a substitute for division when it is integer division and the denominator is a power of two (e.g., right shift by one bit to divide by 2, two bits to divide by 4, etc.). But I’ll give BadAdvice the benefit of the doubt and assume that he meant to say you should “use right shift as a substitute for division whenever possible”.

For his example, BadAdvice shows code to compute an average over 16 integer data samples, which are accumulated into a variable sum during the first 16 iterations of a loop. On the 17th iteration, the average is computed by right shifting sum by 4 bits (i.e., dividing by 16). Perhaps the worst thing about this example code is how tightly it is tied to a pair of #defines for the magic numbers 16 and 4. A simple and likely refactoring to average over 15 samples instead of 16 would break the entire example–you’d have to change from the right shift to a proper divide. It’s also easy to imagine someone changing AVG_COUNT from 16 to 15 without realizing that the shift count must change too; in that case the sum of 15 samples would still be right shifted by 4 bits, silently producing the wrong average.
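To make the fragility concrete, here is a sketch of the two approaches. This is my own code, not BadAdvice’s; AVG_COUNT and the function names are invented for illustration:

```c
#include <stdint.h>

#define AVG_COUNT 16   /* hypothetical sample count, as in the example */

/* Fragile: correct only while AVG_COUNT is a power of two, and the
 * shift amount (4) must be kept in sync with AVG_COUNT by hand. */
uint32_t average_by_shift(const uint32_t samples[AVG_COUNT])
{
    uint32_t sum = 0;
    for (int i = 0; i < AVG_COUNT; i++) {
        sum += samples[i];
    }
    return sum >> 4;   /* silently wrong if AVG_COUNT becomes 15 */
}

/* Robust: the divide tracks AVG_COUNT automatically, and a decent
 * compiler still emits a shift when AVG_COUNT is a power of two. */
uint32_t average_by_divide(const uint32_t samples[AVG_COUNT])
{
    uint32_t sum = 0;
    for (int i = 0; i < AVG_COUNT; i++) {
        sum += samples[i];
    }
    return sum / AVG_COUNT;
}
```

Change AVG_COUNT to 15 and the divide version keeps working, while the shift version quietly returns the wrong answer.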

Better Rule: Shift bits when you mean to shift bits and divide when you mean to divide.

There are many sources of bugs in software programs. The original programmer creates some bugs. Other bugs result from misunderstandings by those who later maintain, extend, port, and/or reuse the code. Thus coding rules should emphasize readability and portability most highly. The choice to deviate from a good coding rule in favor of efficiency should be taken only within a subset of the code. Unless there is a very specific function or construct that needs to be hand optimized, efficiency concerns should be left to the compiler.

Bad Rule #2: Use variable types in relation to the maximum value that variable may take.

BadAdvice gives the example of a variable named seconds, which holds integer values from 0 to 59. And he shows choosing char for the type over int. His stated goal is to reduce memory use.

In principle, I agree with the underlying practices of not always declaring variables int and choosing the type (and signedness) based on the maximum range of values. However, I think it essential that any practice like this be matched with a corresponding practice of always declaring specifically sized variables using C99’s portable fixed-width integer types.

It is impossible to understand the reasoning of the original programmer from unsigned char seconds;. Did he choose char because it is big enough or for some other reason? (Remember too that a plain char may be naturally signed or unsigned, depending on the compiler. Perhaps the original programmer even knows his compiler’s chars are unsigned by default and omits that keyword.) The intent behind variables declared short and long is at least as difficult to decipher. A short integer may be 16 bits or 32 bits (or something else), depending on the compiler–a width the original programmer may (or may not) have relied upon.

Better Rule: Whenever the width of an integer matters, use C99’s portable fixed-width integer types.

A variable declared uint16_t leaves no doubt about the original intent: it is very clearly meant to be a container for an unsigned integer value exactly 16 bits wide. This type selection adds new and useful information to the source code and makes programs both more readable and more portable. Now that C99 has standardized the names of fixed-width integer types, declarations involving short and long should no longer be used. Even char should only be used for actual character (i.e., ASCII) data. (Of course, there may still be int variables around where size does not matter, such as loop counters.)
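A minimal illustration of the difference in expressiveness (all variable names are mine, not from BadAdvice’s example):

```c
#include <stdint.h>

/* Ambiguous: is char big enough by design, or by accident?
 * And is it signed or unsigned on this compiler? */
unsigned char seconds_old;   /* 0-59, per the original example */

/* Unambiguous: an unsigned 8-bit container, chosen deliberately. */
uint8_t seconds;             /* 0-59 */

/* When the width matters, say so in the type. */
uint16_t adc_reading;        /* 10-bit ADC result, 0-1023 */
int32_t  position_um;        /* signed position in micrometers */
```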

Bad Rule #3: Avoid >= and use <.

As worded above, I can’t say I fully understand this rule or its goal, but to illustrate it BadAdvice gives the specific example of an if-else-if construct in which he recommends if (speed < 100) ... else if (speed > 99) instead of if (speed < 100) ... else if (speed >= 100). Say what? First of all, why not just use a plain else in that specific scenario, since speed must be either below 100 or at least 100?

Even if we assume we need to test for less than 100 first and for greater than or equal to 100 second, why would anyone in their right mind prefer greater than 99? That would be confusing to any reader of the code. To me it reads like a bug, and I keep having to go back over it looking for a logical problem in the apparently mismatched range checks. Additionally, I believe BadAdvice’s terse rationale of “Benefits: Lesser Code” is simply untrue. Any half-decent compiler will optimize either comparison as needed for the underlying processor.
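A short sketch of the two logically equivalent but unequally readable comparisons (the function names and the speed check are hypothetical, not from BadAdvice’s post):

```c
#include <stdbool.h>
#include <stdint.h>

/* Reads naturally: "at or above 100". */
bool is_overspeed(uint16_t speed)
{
    return (speed >= 100);
}

/* BadAdvice's style: equivalent for integers, but the odd-looking
 * constant (99) reads like a bug to a reviewer. */
bool is_overspeed_obscure(uint16_t speed)
{
    return (speed > 99);
}
```

Both compile to the same machine code on any reasonable compiler; only the reader pays a price for the second form.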

Better Rule: Use whatever comparison operator is easiest to read in a given situation.

One of the very best things any embedded programmer can do is to make their code as readable as possible to as broad an audience as possible. That way another programmer who needs to modify your code, a peer doing code review to help you find bugs, or even you years later, will find the code hard to misinterpret.

Bad Rule #4: Avoid variable initialization while defining.

BadAdvice says that following the above rule will make initialization faster. He gives the example of unsigned char MyVariable = 100; (not preferred) vs:


#define INITIAL_VALUE 100
unsigned char MyVariable;
// Before entering forever loop in main
MyVariable = INITIAL_VALUE;

Though it’s unclear from the above, let’s assume that MyVariable is a local stack variable. (It could also be global, the way his pseudo code is written.) I don’t think there should be a (portably) noticeable efficiency gain from switching to the latter. And I do think that following this rule creates an opening to forget to do the initialization or to unintentionally place the initialization code within a conditional clause.

Better Rule: Initialize every variable as soon as you know the initial value.

I’d much rather see every variable initialized on creation with perhaps the creation of the variable postponed as long as possible. If you’re using a C99 or C++ compiler, you can declare a variable anywhere within the body of a function.
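A small C99 sketch of this style; read_sensor and all the names here are hypothetical stand-ins, not code from BadAdvice’s post:

```c
#include <stdint.h>

/* Hypothetical stand-in for a hardware register read. */
static uint16_t read_sensor(void)
{
    return 100;
}

uint16_t sample_average(void)
{
    uint32_t sum = 0;                   /* initialized at creation */

    for (uint8_t i = 0; i < 4; i++) {   /* C99: declared where used */
        sum += read_sensor();
    }

    /* C99 also lets you postpone the declaration until the
     * initial value is known, so it is never uninitialized. */
    uint16_t const average = (uint16_t)(sum / 4);
    return average;
}
```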

Bad Rule #5: Use #defines for constant numbers.

The example given for this rule is of defining three constant values, including #define ON 1 and #define OFF 0. The rationale is “Increased convenience of changing values in a single place for the whole file. Provides structure to the code.” And I agree that using named constants instead of magic numbers elsewhere in the code is a valuable practice. However, I think there is an even better way to go about this.

Better Rule: Declare constants using const or enum.

C’s const keyword can be used to declare a variable of any type as unable to be changed at run-time. This is the preferable way to declare constants, because it gives each constant a type, which allows comparisons to be made properly and lets the compiler type-check constants passed as parameters to function calls. Enumeration sets may be used instead for integer constants that come in groups, such as enum { OFF = 0, ON };.
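For example (MAX_RPM is an invented constant; only ON and OFF come from the rule being discussed):

```c
#include <stdint.h>

/* Typed, type-checked, and visible to the debugger. */
static uint32_t const MAX_RPM = 6500u;

/* Related integer constants grouped in an enumeration. */
enum { OFF = 0, ON = 1 };

/* A #define, by contrast, is untyped text substitution:
 * #define MAX_RPM 6500
 * and a debugger has no symbol for it. */
```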

Final Thoughts

There are two scary things about these and a few of the other rules on BadAdvice’s blog. The first is that they are out there on the Internet to be found with a search for embedded C coding rules. The second is that BadAdvice’s bio says he works on medical device design. I’m not sure which is worse. But I do hope the above reasoning and proposed better rules get you thinking about how to develop more reliable embedded software with fewer bugs.

Is “(uint16_t) -1” Portable C Code?

Thursday, June 2nd, 2011 Michael Barr

Twice recently, I’ve run across third-party middleware that included a statement of the form:

uint16_t variable = (uint16_t) -1;

which I take as the author’s clever way of coding:

0xFFFF

I’m not naturally inclined to like the obfuscation, but wondered if “(uint16_t) -1” is even portable C code? And, supposing it is portable, is there some advantage I don’t know about that suggests using that form over the hex literal? In the process of researching these issues, I learned a helpful fact or two worth sharing.

Q: Is the result of “(uint16_t) -1” guaranteed (by the ISO C standard) to be 0xFFFF?

A: Yes. Although it looks like it should depend on how the processor represents negative numbers, it does not. The C standard defines conversion to an unsigned integer type in terms of values, not bit patterns: the value is reduced modulo 2^N (C99 Section 6.3.1.3), so converting -1 to uint16_t yields 0xFFFF on every conforming implementation. (For signed integers, most processors do use the common 2’s complement representation underneath–even though that’s not required in any way by the language standard–but the conversion result doesn’t depend on it.)

Q: Is there any advantage to writing 0xFFFF that way?

A: According to the C99 Standard, all conforming implementations support uint_least16_t, but some may not support uint16_t. If the platform doesn’t support uint16_t, then “(uint16_t) -1” won’t compile, but 0xFFFF will compile as a value of some larger unsigned integer type (i.e., a bug waiting to happen).

Of course, platforms that don’t have a fixed-width 16-bit unsigned capability are rare, though it may be that some DSPs fall into that category. The same issue applies to uint32_t and 0xFFFFFFFF, of course. However, I suspect platforms that don’t have a fixed-width 32-bit unsigned capability are even rarer.

Q: What is the best way to represent the maximum unsigned integer value of a given size?

A: The very best way to represent the maximum values for unsigned (and signed) fixed-width types is to use the constants named in C99’s stdint.h header file. These are of the form UINTn_MAX (and INTn_MAX) where n is the number of bits (e.g., UINT16_MAX). That is guaranteed to either work or not compile, with no middle ground for bugs.
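A short example of the recommended style (the variable names are mine):

```c
#include <stdint.h>

/* Each of these names the maximum value portably. On any (rare)
 * platform lacking the exact-width type, the build fails outright
 * instead of silently using a wider type. */
uint16_t u16_max = UINT16_MAX;   /* 0xFFFF */
uint32_t u32_max = UINT32_MAX;   /* 0xFFFFFFFF */
int16_t  s16_max = INT16_MAX;    /* 0x7FFF */
```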

Hat Tip: Many thanks to C and C++ standards guru Dan Saks for help with these answers.

How to Enforce Coding Standards (Automatically)

Wednesday, May 25th, 2011 Michael Barr

Coding standards can be an important tool in the fight to keep bugs out of embedded software. Unfortunately, too many well-intentioned (especially, corporate) coding standards are ineffective and gather more dust than followers. The hard truth is that enforcement of coding standards too often depends on programmers already under deadline pressure to be disciplined while they code and/or to make time to perform peer code reviews. And when peer reviews are done in this scenario, they too easily devolve into compliance discussions that miss the forest for the trees.

To ensure your selected coding standard is followed and thus effective, your team should find as many automated ways to enforce as many of its rules as possible. And you should make such automated rule checking part of the everyday software build process. Ideally, you would also restrict version control check-ins to just code that has passed all the automated checks. No code that breaks any of these automatable rules should be allowed in peer code reviews. That way, code reviewers can focus their limited hours on (1) what the code is supposed to do, (2) whether it does so correctly, and (3) whether the code and comments together are easily understood by all involved.

One of the ways Barr Group has found to increase compliance with its Embedded C Coding Standard is by configuring static analysis tools we already use to automatically enforce individual rules.

Perhaps the best option is to use a static analysis tool that includes built-in support for your chosen coding standard and/or the ability to be customized to proprietary rules. LDRA’s static analysis engine, a screenshot from which is shown in the image below, comes preconfigured to check about 80% of the “Barr Group” coding standard rules, as well as a number of other widely used coding standards.

But there are other, less expensive, options available as well. Two tools that we have used are RSM and PC-Lint from M Squared Technologies and Gimpel Software, respectively. Neither can easily and independently enforce all of our Embedded C Coding Standard rules. But used together and properly configured, these two tools offer a low-cost automated coding standards enforcement mechanism that covers a large percentage of the rules.

Configuring RSM

The following RSM outputs can be examined and configuration options set (in version 7.75) to assist in the automated enforcement of the identified rules from the Embedded C Coding Standard. Pricing for the RSM tool, which runs on Windows, Linux, and other versions of Unix (including Mac OS X), is online at http://msquaredtechnologies.com/m2rsm/order/.

Rule # Brief Description Tool Configuration Notes
1.2.a Line length limited to 80 characters. Quality Notice 1
1.3.a Braces surround all blocks of code. Quality Notice 22
1.7.c Keyword goto not used. Quality Notice 9
1.7.d Keyword continue not used. Quality Notice 43
1.7.e Keyword break not used outside switch. Quality Notice 44
2.2 Location and content of comments. Quality Notices 17, 20, 51; Options -Es -Ec -EC
3.1 Use of white space. Quality Notices 16, 19
3.5.a No tab characters. Quality Notice 30; Option -Dt
3.6.a UNIX-style (single-character) linefeeds. Option -Du
4.1.a-d Module naming conventions. Option -Rn
4.2.a Precisely one header file per source file. Option -Rn
6.1.a-e Precisely one header file per source file. Quality Notice 2; Option -l
6.2 Cyclomatic complexity of functions. Quality Notices 10, 18, 27; Option -c
6.3.b.i-iii Preprocessor macro safety mechanisms. Option -m
8.2.d If-else if statements always have else. Quality Notice 22
8.3.b Switch statements always have default. Quality Notices 13, 14, 56
8.5.a No unconditional jumps. Quality Notices 9, 43, 44

Configuring PC-Lint

In a future blog post, I will similarly identify PC-Lint configurations that can be used (in version 9.0) to assist in the automated enforcement of specific rules from the Embedded C Coding Standard. Pricing for the PC-Lint tool, which runs on Windows, Linux, and other versions of Unix (including Mac OS X), is online at http://gimpel.com/html/order.htm.

What NHTSA/NASA Didn’t Consider re: Toyota’s Firmware

Wednesday, March 2nd, 2011 Michael Barr

In a blog post yesterday (Unintended Acceleration and Other Embedded Software Bugs), I wrote extensively on the report from NASA’s technical team regarding their analysis of the embedded software in Toyota’s ETCS-i system. My overall point was that it is hard to judge the quality of their analysis (and thereby the overall conclusion that the software isn’t to blame for unintended accelerations) given the large number of redactions.

I need to put the report down and do some other work at this point, but I have a few other thoughts and observations worth writing down.

Insufficient Explanations

First, some of the explanations offered by Toyota, and apparently accepted by NASA, strike me as insufficient. For example, at pages 129-132 of Appendix A to the NASA Report there is a discussion of recursion in the Toyota firmware. “The question then is how to verify that the indirect recursion in the ETCS-i does in fact terminate (i.e., has no infinite recursion) and does not cause a stack overflow.”

“For the case of stack overflow, [redacted phrase], and therefore a stack overflow condition cannot be detected precisely. It is likely, however, that overflow would cause some form of memory corruption, which would in turn cause some bad behavior that would then cause a watchdog timer reset. Toyota relies on this assumption to claim that stack overflow does not occur because no reset occurred during testing.” (emphasis added)

I have written about what really happens during stack overflow before (Firmware-Specific Bug #4: Stack Overflow) and this explains why a reset may not result and also why it is so hard to trace a stack overflow back to that root cause. (From page 20, in NASA’s words: “The system stack is limited to just 4096 bytes, it is therefore important to secure that no execution can exceed the stack limit. This type of check is normally simple to perform in the absence of recursive procedures, which is standard in safety critical embedded software.”)
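One common defense (which I am sketching here for illustration; nothing in the reports says Toyota used it) is to paint the stack with a known pattern at boot and periodically check whether the deepest words have been overwritten, so that an approaching overflow can be detected and logged before corruption snowballs into a reset:

```c
#include <stdbool.h>
#include <stdint.h>

#define STACK_WORDS   1024u          /* 4096 bytes, as in the report  */
#define STACK_CANARY  0xDEADBEEFu    /* arbitrary recognizable value  */

static uint32_t stack_area[STACK_WORDS];  /* stand-in for the real stack */

/* Fill the entire stack region with the canary pattern at boot. */
void stack_paint(void)
{
    for (uint32_t i = 0; i < STACK_WORDS; i++) {
        stack_area[i] = STACK_CANARY;
    }
}

/* Return true if the deepest n_guard_words are still untouched,
 * i.e., the stack has not grown into them. A supervisory task can
 * call this and record a fault before any watchdog reset occurs. */
bool stack_check(uint32_t n_guard_words)
{
    for (uint32_t i = 0; i < n_guard_words; i++) {
        if (stack_area[i] != STACK_CANARY) {
            return false;
        }
    }
    return true;
}
```

Checks like this catch an overflow deterministically, rather than relying on the hope that corruption will eventually trip the watchdog.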

Similarly, “Toyota designed the software with a high margin of safety with respect to deadlines and timeliness. … [but] documented no formal verification that all tasks actually meet this deadline requirement.” and “All verification of timely behavior is accomplished with CPU load measurements and other measurement-based techniques.” It’s not clear to me if the NASA team is saying it buys those Toyota explanations or merely wanted to write them down. However, I do not see a sufficient explanation in this wording from page 132:

“The [worst case execution time] analysis and recursion analysis involve two distinctly different problems, but they have one thing in common: Both of their failure modes would result in a CPU reset. … These potential malfunctions, and many others such as concurrency deadlocks and CPU starvation, would eventually manifest as a spontaneous system reset.” (emphasis added)

Might not a deadlock, starvation, priority inversion, or infinite recursion be capable of producing a bit of “bad behavior” (perhaps even unintended acceleration) before that “eventual” reset? Or might not a stack overflow corrupt just one or a few important variables, with that corruption producing bad behavior rather than, or before, a reset? These kinds of possibilities, even at very low probabilities, are important to consider in light of NASA’s calculation that the U.S.-owned Camry 2002-2007 fleet alone is running this software a cumulative one billion hours per year.

Paths Not Taken

My second observation is based upon reflection on the steps NASA might have taken in its review of Toyota’s ETCS-i firmware, but apparently did not. Specifically, there is no mention anywhere (unless it was entirely redacted) of:

  • rate monotonic analysis, which is a technique that Toyota could have used to validate the critical set of tasks with deadlines and higher priority ISRs (and that NASA could have applied in its review),
  • cyclomatic complexity, which NASA might have used as an additional winnowing tool to focus its limited time on particularly complex and hard to test routines,
  • hazard analysis and mitigation, as those terms are defined by FDA guidelines regarding software contained in medical devices, nor
  • any discussion or review of Toyota’s specific software testing regimen and bug tracking system.

Importantly, there is also a complete absence of discussion of how Toyota’s ETCS-i firmware versions evolved over time. Which makes and models (and model years) had which versions of that firmware? (Presumably there were also hardware changes worthy of note.) Were updates or patches ever made to cars once they were sold, say while at the dealer during official recalls or other types of service?

Unintended Acceleration and Other Embedded Software Bugs

Tuesday, March 1st, 2011 Michael Barr

Last month, NHTSA and the NASA Engineering and Safety Center (NESC) published reports of their joint investigation into the causes of unintended acceleration in Toyota vehicles. NASA’s multi-disciplinary NESC technical team was asked, by Congress, to assist NHTSA by performing a review of Toyota’s electronic throttle control and the associated embedded software. In a carefully worded concluding statement, NASA stated that it “found no electronic flaws in Toyota vehicles capable of producing the large throttle openings required to create dangerous high-speed unintended acceleration incidents.” (The official reports and a number of supporting files are available for download at http://www.nhtsa.gov/UA.)

The first thing you will notice if you join me in trying to judge the technical issues for yourself are the redactions: pages and pages of them. In parts and entirely for unexplained reasons, this report on automotive electronics reads like the public version of a CIA Training Manual. I’ve observed that approximately 193 of the 1,061 pages released so far feature some level of redaction (via black boxes, which obscure from a single number, word, or phrase to a full table, page, or section). The redactions are at their worst in NASA’s Appendix A, which describes NASA’s review of Toyota’s embedded software in detail. More than half of all the pages with redactions (including the vast majority of fully redacted tables, pages, and sections) are in that Appendix.

Despite the redactions, we can still learn some interesting facts about Toyota’s embedded software and NASA’s technical review of the same. The bulk of the below outlines what I’ve been able to make sense of in about two days of reading. Throughout, my focus is on embedded software inside the electronic throttle control, so I’m leaving out considerations of other potential causes, including EMI (which NASA also investigated). First a little background on the investigation.

Background

Although the inquiry was undertaken to examine unintended acceleration reports across all Toyota, Scion, and Lexus models, NASA focused its technical inquiry almost entirely on Toyota Camry models equipped with the Electronic Throttle Control System, Intelligent (ETCS-i). The Camry has long been among the top cars bought in the U.S., so this choice probably made finding relevant complaint data and affected vehicles easier for NHTSA. (BTW, NASA says the voluntary complaint database shows both that unintended accelerations were reported before the introduction of electronic throttle control and that press coverage and Congressional hearings can increase the volume of complaints.)

According to a press release by the company made upon publication of the NHTSA and NASA reports, Toyota’s ETCS-i has been installed in “more than 40 million cars and trucks sold around the world, including more than 16 million in the United States.” Undoubtedly, ETCS-i has also “made possible significant safety advances such as vehicle stability control and traction control.” But as with any other embedded system there have been refinements made through the years to both the electronics and the embedded software.

Though Toyota apparently made available, under agreed terms and via its attorneys, schematics, design documents, and source code “for multiple Camry years and versions” (Appendix A, p. 9) as well as many of the Japanese engineers involved in its design and evolution, NASA only closely examined one version. In NASA’s words, “The area of emphasis will be the 2005 Toyota Camry because this vehicle has a consistently high rate of reported ‘UA events’ over all Toyota models and all years, when normalized to the number of each model and year, according to NHTSA data.” (p. 7) Except as otherwise stated, everything else in this column concerns the electronics and firmware found in that year, make, and model.

Event Data Recorders

Event Data Recorder (EDR) is the generic term for the automotive equivalent of an aircraft black box flight data recorder. EDRs were first installed in cars in the early 1990s and have increased in use as well as sophistication in the years since. Generally speaking, the event data recorder is an embedded system residing within the airbag control module located in the front center of the engine compartment. The event data recorder is connected to other parts of the car’s electronics via the CAN bus and is always monitoring vehicle speed, the position of the brake and accelerator pedals, and other key parameters.

In the event of an impossibly high (for the vehicle operating normally) acceleration or deceleration sensor reading, Toyota’s latest event data recorders save the prior five 1Hz samples of these parameters in a non-volatile memory area. Once saved, an event record can be read over the car’s On-Board Diagnostics (OBD) port (or, in the event of a more severe accident, directly from the airbag control module) via a special cable and PC software. If the airbag actually deploys, the event record will be permanently locked. The last 2 or 3 (depending on version) lesser “bump” records are also stored, but may be overwritten in a FIFO manner.

This investigation of Toyota’s unintended acceleration marked the first time that anyone from NHTSA had ever read data from a Toyota event data recorder. (Toyota representatives apparently testified in Congress that there had previously just been one copy of the necessary PC software in the U.S.) As part of this study, NHTSA validated and used tools provided by Toyota to extract historical data from 52 vehicles involved in incidents of unintended acceleration, with acknowledged bias toward geographically reachable recent events. After reviewing driver and other witness statements and examining said black box data, NHTSA concluded that 39 of these 52 events were explainable as “pedal misapplications.” That’s a very nice way of saying that whenever the driver reported “stepping on the brake” he or she had pressed the accelerator pedal by mistake. Figure 5 of a supplemental report describing these facts portrays an increasing likelihood of such incidents with driver age vs. the bell curve of Camry ownership by age.

Note that no record is apparently ever made, in the event data recorder or elsewhere, of any events or state changes within the ETCS-i firmware. So-called “Diagnostic Trouble Codes” concerning sensor and other hardware failures are recorded in non-volatile memory and the presence of one or more such codes enables the “Check Engine” light on the dashboard. But no logging is done of significant software faults, including but not limited to watchdog-initiated resets.
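For contrast, here is a hedged sketch of the kind of software-fault logging the report says is absent: recording each watchdog-initiated reset in nonvolatile memory so field units carry evidence of firmware faults. All names and the NVM interface are illustrative assumptions, not anything from Toyota’s design:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t reset_count;    /* total watchdog resets observed    */
    uint32_t last_task_id;   /* task running when the watchdog fired */
} fault_log_t;

static fault_log_t fault_log;  /* stand-in for a region of EEPROM/flash */

/* Called early in boot, after reading the CPU's reset-cause register. */
void fault_log_record_reset(bool was_watchdog_reset, uint32_t task_id)
{
    if (was_watchdog_reset) {
        fault_log.reset_count++;
        fault_log.last_task_id = task_id;
        /* In a real system: write fault_log back to NVM here, so the
         * record survives power cycles and can be read over OBD. */
    }
}
```

Even a record this small would have let investigators check whether watchdog resets ever actually occurred in the field.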

Engine Control Module

ETCS-i is the collection of components and features that changed in the basic engine design when Toyota switched from mechanical to electronic throttle control. (Electronic throttle control is also known as “throttle-by-wire”.) Toyota has used two different types of pedal sensors in the ETCS-i system, always in a redundant fashion. The earlier, pre-2007 design, using potentiometers, was susceptible to current leakage via growth of tin whiskers. Though this type of failure was not known to cause sudden high-speed behaviors, it did seem to be associated with a higher number of warranty claims. The newer pedal sensor design uses Hall effect sensors.

Importantly, the brakes are not a part of the ETCS-i system. In the 2005 Camry, Toyota’s brake pedal was mechanically controlled. (It may still be.) It appears this is one of the reasons the NASA team felt comfortable with their conclusion that driver reports of wide open throttle behavior that could not be stopped with the brakes were not caused by software failures (alone). “The NESC team did not find an electrical path from the ETCS-i that could disable braking.” (NASA Report, p. 15) It is clear, though, that power assisted brakes lose the enabling vacuum pressure when the throttle is wide open and the driver subsequently pumps the brakes; thus any system failure that opened the throttle could indirectly make bringing the vehicle to a stop considerably harder.

The Engine Control Module at the heart of the ETCS-i consists of a Main-CPU and a Sub-CPU located within a pair of ASICs. The Sub-CPU contains a set of A/D converters that translates raw sensor inputs, such as voltages VPA and VPA1 from the accelerator pedal, into digital position values and sends them to the Main-CPU via a serial interface. In addition, the Sub-CPU monitors the outputs of the Main-CPU and is able to reset (in the manner of a watchdog timer) the Main-CPU.

The Main-CPU is reported to be a V850E1 microcontroller, which is “a 32-bit RISC CPU core for ASIC” designed by Renesas (nee NEC). The V850E1 processor has a 64MB program address space, which is part of an overall 4GB linear address space. The Main-CPU also keeps tabs on the Sub-CPU and can reset it if anything is found wrong.

NASA reports that the embedded software in the Main-CPU is written (mostly) in ANSI C and compiled using a Green Hills C compiler (Appendix A, p. 14). Furthermore, an OSEK-compliant real-time operating system with fixed-priority preemptive scheduling is used to manage a redacted (but apparently larger than ten, based on the size of the redaction) number of real-time tasks. The actual firmware development (design, coding, and unit testing) was outsourced to Denso (p. 19). Toyota apparently performed integration testing and ran several commercial and in-house static analysis tools, including QAC (p. 20). The code was written in English, with Japanese comments and design documents, and follows a proprietary Toyota naming convention/coding standard that predates but half overlaps with the 1998 version of MISRA-C.

Are There Bugs in Toyota’s Firmware?

In the NASA Report’s executive summary it is made clear that “because proof that the ETCS-i caused the reported UAs was not found does not mean it could not occur.” (NASA Report, p. 17) The report also states that NASA’s analysis was time-limited and top-down, remarking “The Toyota Electronic Throttle Control (ETC) was far more complex than expected involving hundreds of thousands of lines of software code” and that this affected the quality of a planned peer review.

It’s stated that “Reported [Unintended Accelerations (UAs)] are rare events. Typically, the reporting of UAs is about 1/100,000 vehicles/year.” But there are millions of cars on the road, and so NHTSA has collected some “831 UA reports for Camry” alone. “Over one-half of the reported events described large (greater than 25 degrees) high-throttle opening UAs of unknown cause” (NASA Report, p. 14), the causes of which are never fully explained in these reports.

The NASA team apparently identified some lesser firmware bugs itself, saying “[our] logic model verifications identified a number of potential issues. All of these issues involved unrealistic timing delays in the multiprocessing, asynchronous software control flow.” (Appendix A, p. 11) NASA also spent time simulating possible race conditions due to worrisome “recursively nested interrupt masking” (pp. 44-46); note, though, that simulation success is not a sufficient proof of the absence of races. As well, the NASA team seems to recommend “reducing the amount of global data” (p. 38) and eliminating “dead code” (p. 40).

Additionally, the redacted text in other parts of Appendix A seems to be obscuring that:

  • The standard gcc compiler (version 4) generated a redacted number of warnings (probably larger than 100) about the code, in 11 different warning categories. (p. 25)
  • Coverity (version 4.2) generated a redacted number of warnings (probably larger than 154) about the code, in 10 different warning categories. (p. 27)
  • CodeSonar (version 3.6p1) generated a redacted number of warnings (probably larger than 136) about the code, in 10 different warning categories.
  • Uno (version 2.12) generated a redacted number of warnings (probably larger than 72) about the code, in 9 different warning categories.
  • The code contained at least 347 deviations from a subset of 14 of the MISRA-C rules.
  • The code contained at least 243 violations of a subset of 9 of the 10 “Power of 10–Rules for Developing Safety Critical Code,” which was published in IEEE Computer in 2006 by NASA team member Gerard Holzmann.

It looks to me like Figure 6.2.3-1 of the NASA Report (p. 30) shows that UA complaints filed with NHTSA increased in the year of introduction of electronic throttle control for the vast majority of Toyota, Scion, and Lexus models–and that complaint counts have remained higher but generally declined over time since those transitions years. Such a complaint data pattern is perhaps consistent with firmware bugs. (Note to NHTSA: It would be helpful to see this same chart normalized by number of vehicles sold by model year and with the rows sorted by the year of ETC introduction. It would also be nice to see a chart of ETCS-i firmware versions and updates, which vehicles they apply to, and the dates on which each was put into new production vehicles or distributed through dealers.)

Final Thoughts

I am not privy to all of the facts considered by the NHTSA or NASA review teams and thus cannot say if I agree or disagree with their overall conclusion that embedded software bugs are not to blame for reports of unintended acceleration in Toyota vehicles. How about you? If you’ve spotted something I missed in the reports from NHTSA or NASA, please send me an e-mail or leave a comment below. Let’s keep the conversation going.