Posts Tagged ‘barrgroup’

C’s goto Keyword: Should we Use It or Lose It?

Wednesday, June 6th, 2018 Michael Barr

In the 2008 and 2013 editions of my bug-killing Embedded C Coding Standard, I included this rule:

Rule 1.7.c. The goto keyword shall not be used.

Despite this coding standard being followed by about 1 in 8 professional embedded systems designers, none of us at Barr Group have heard any complaints that this rule is too strict. (Instead we hear lots of praise for our unique focus on reducing intra-team stylistic arguments by favoring above all else C coding rules that prevent bugs.)

However, there exist other coding standards with a more relaxed take on goto. I’ve thus been revisiting this (and related) rules for an updated 2018 edition of the BARR-C standard.

Specifically, I note that the current (2012) version of the MISRA-C Guidelines for the Use of the C Programming Language in Critical Systems merely advises against the use of goto:

Rule 15.1 (Advisory): The goto statement should not be used

though at least the use of goto is required to be appropriately narrowed:

Rule 15.2 (Required): The goto statement shall jump to a label declared later in the same function

Rule 15.3 (Required): Any label referenced by a goto statement shall be declared in the same block, or in any block enclosing the goto statement

Generally speaking, the rules of the MISRA-C:2012 standard are either the same as or stricter than those in my BARR-C standard. Beyond the overlaps, MISRA-C has more and stricter coding rules, while BARR-C adds stylistic advice. It is precisely because MISRA-C’s rules are stricter and only BARR-C includes stylistic rules that these two most popular embedded programming standards are frequently and easily combined.

Which all leads to the key question:

Should the 2018 update to BARR-C relax the prior ban on all use of goto?

According to the authors of the C programming language, “Formally, [the goto keyword is] never necessary” as it is “almost always easy to write code without it”. They go on to recommend that goto “be used rarely, if at all.”

Having, in recent days, reviewed the arguments for and against goto across more than a dozen C programming books as well as other C/C++ coding standards and computer science papers, I believe there is just one best bug-killing argument for each side of this rule. And I’ll have to decide between them this week to keep the new edition on track for a June release.

Allow goto: For Better Exception Handling

There seems to be universal agreement that goto should never be used to branch UP to an earlier point in a function. Likewise, branching INTO a deeper level of nesting is a universal no-no.

However, branching DOWN in the function and OUT to an outer level of nesting are often described as powerful uses of the goto keyword. For example, a single goto statement can be used to escape from two or more levels of nesting, which cannot be accomplished with a single break or continue. (To accomplish the same behavior without goto, one could, e.g., set a flag variable in the inner block and check it in each outer block.)
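To make the flag-variable alternative concrete, here is a minimal sketch (all names invented for illustration) that escapes two levels of nesting without goto by checking the flag in each enclosing loop condition:

```c
#include <stdbool.h>

/* Hypothetical example: find the first negative cell in a 2x3 matrix.
 * A flag variable set in the inner block and checked by each enclosing
 * loop stands in for a single goto out of both loops. */
int find_first_negative(int m[2][3], int *p_row, int *p_col)
{
    bool found = false;

    for (int row = 0; (row < 2) && !found; row++)
    {
        for (int col = 0; (col < 3) && !found; col++)
        {
            if (m[row][col] < 0)
            {
                *p_row = row;
                *p_col = col;
                found = true;   /* checked by both loop conditions */
            }
        }
    }

    return found ? 0 : -1;
}
```

The cost of this style is visible here: every enclosing loop condition must carry the extra test, which is the clutter a single goto would avoid.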

Given this, the processing of exceptional conditions detected inside nested blocks is a potentially valuable application of goto that could be implemented in a manner compliant with the three MISRA-C:2012 rules quoted above. Such a jump could even proceed from two or more points in a function to a common block of error recovery code.
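As a sketch of that pattern (with invented names, not drawn from any standard's examples), two failure points in a function jump DOWN and OUT to one shared block of recovery code, in a shape compliant with MISRA-C:2012 Rules 15.2 and 15.3: the label appears later in the same function and in a block enclosing both gotos.

```c
#include <stdlib.h>

/* Hypothetical sketch: exceptional conditions detected at two different
 * points jump forward to a single block of error-recovery code. */
int process_buffers(size_t n)
{
    int err = 0;
    unsigned char *p_in = calloc(n, 1);
    unsigned char *p_out = NULL;

    if (NULL == p_in)
    {
        err = -1;
        goto fail;              /* first jump to the recovery block */
    }

    p_out = calloc(n, 1);
    if (NULL == p_out)
    {
        err = -2;
        goto fail;              /* second jump to the same block */
    }

    for (size_t i = 0; i < n; i++)
    {
        p_out[i] = (unsigned char)(p_in[i] ^ 0xFFu);
    }

fail:
    /* Common recovery code; free() tolerates NULL pointers. */
    free(p_out);
    free(p_in);
    return err;
}
```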

Because good exception handling is a property of higher reliability software and is therefore a potential bug killer, I believe I must consider relaxing Rule 1.7.c in BARR-C:2018 to permit this specific use of goto. A second advantage of relaxing the rule would be increasing the ease of combining rules from MISRA-C with those from BARR-C (and vice versa), which is a primary driver for the 2018 update.

Ban goto: Because It’s Even Riskier in C++

As you are likely aware, there are lots of authors who opine that use of goto “decreases readability” and/or “increases potential for bugs”. And none other than Dijkstra (50 years ago now!) declared that the “quality of programmers is [inversely proportional to their number] of goto statements”.

But, even if all true, none of these assertions is a winning argument for banning the use of goto altogether rather than permitting the “exception handling exception” suggested above.

The best bug-killing argument I have heard for keeping the current Rule 1.7.c is that the constructors and destructors of C++ create a new hazard for goto statements. That is, if a goto jumps over a line of code on which an object would have been constructed or destructed, then that construction or destruction never occurs. This could, for example, lead to a very subtle bug that is difficult to detect, such as a memory leak.

Given that C++ is a programming language intentionally backward compatible with legacy C code and that elsewhere in BARR-C there is, e.g., a rule that variables (which could be objects) should be declared as close as possible to their scope of use, relaxing the existing ban on all use of goto could foreseeably create new hazards for the followers of the coding standard who later migrate to C++. Indeed, we already have many programmers following our C coding rules while crafting C++ programs.

So what do you think? What relevant experiences can you bring to bear on this issue in the comments area below? Should I relax the ban on goto or maintain it? Are there any better (bug-killing) arguments for or against goto?

Apple’s #gotofail SSL Security Bug was Easily Preventable

Monday, March 3rd, 2014 Michael Barr

If programmers at Apple had simply followed a couple of the rules in the Embedded C Coding Standard, they could have prevented the very serious #gotofail SSL bug from entering the iOS and OS X operating systems. Here’s a look at the programming mistakes involved and the easy-to-follow coding standard rules that could have easily prevented the bug.

In case you haven’t been following the computer security news, Apple last week posted security updates for users of devices running iOS 6, iOS 7, and OS X 10.9 (Mavericks). This was prompted by a critical bug in Apple’s implementation of the SSL/TLS protocol, which has apparently been lurking for over a year.

In a nutshell, the bug is that a bunch of important C source code lines containing digital signature certificate checks were never being run because an extraneous goto fail; statement in a portion of the code was always forcing a jump. This is a bug that put millions of people around the world at risk for man-in-the-middle attacks on their apparently-secure encrypted connections. Moreover, Apple should be embarrassed that this particular bug also represents a clear failure of software process at Apple.

There is debate about whether this may have been a clever insider-enabled security attack against all of Apple’s users, e.g., by a certain government agency. However, whether it was an innocent mistake or an attack designed to look like an innocent mistake, Apple could have and should have prevented this error by writing the relevant portion of code in a simple manner that would have always been more reliable as well as more secure. And thus, in my opinion, Apple was clearly negligent.

Here are the lines of code at issue (from Apple’s open source code server); the second of the two back-to-back goto fail; statements is the extraneous one:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, ...)
{
    OSStatus  err;
    ...

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;  /* the extraneous, duplicate line */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

The code above violates at least two rules from Barr Group’s Embedded C Coding Standard book. Had Apple followed just the first of these rules, this dangerous bug should almost certainly have been prevented from ever getting into even a single device.

Rule 1.3.a

Braces shall always surround the blocks of code (a.k.a., compound statements), following if, else, switch, while, do, and for statements; single statements and empty statements following these keywords shall also always be surrounded by braces.

Had Apple not violated this always-braces rule in the SSL/TLS code above, there would have been either just one set of curly braces after each if test or a very odd-looking, hard-to-miss chunk of code with two sets of curly braces after the if with two gotos. Either way, this bug was preventable by following this rule and performing code review.
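To illustrate the point (using an invented stand-in function rather than Apple’s real hashing API), the always-braces rule renders even a duplicated goto harmless, because the duplicate remains inside the conditional block:

```c
/* Stand-in for a hash-update step; always succeeds in this sketch. */
static int hash_update(void) { return 0; }

int verify_sketch(void)
{
    int err;

    if ((err = hash_update()) != 0)
    {
        goto fail;
    }
    if ((err = hash_update()) != 0)
    {
        goto fail;
        goto fail;   /* duplicated, but harmless: still conditional */
    }
    /* ... the signature checks that Apple's bug skipped would run here ... */

fail:
    return err;
}
```

With braces, the second goto is merely dead code inside the if block, which a reviewer or static analyzer can flag, rather than an unconditional jump that silently disables the checks below.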

Rule 1.7.c

The goto keyword shall not be used.

Had Apple not violated this never-goto rule in the SSL/TLS code above, there would not have been a duplicated goto fail; line to create the unreachable code situation. And if replacing each goto had required more than one line of code, Rule 1.3.a would have forced programmers to use curly braces as well.
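One possible goto-free structure, sketched here with invented stand-in functions (not Apple’s real code), chains the status checks so that each step runs only if every prior step succeeded and the cleanup that followed the fail: label always runs exactly once:

```c
/* Stand-ins for the hashing steps and buffer cleanup (invented names;
 * each step always succeeds in this sketch). */
static int step_update_random(void) { return 0; }
static int step_update_params(void) { return 0; }
static int step_final(void)         { return 0; }
static int cleanup_calls = 0;
static void free_buffers(void)      { cleanup_calls++; }

/* Goto-free rewrite: later steps are guarded by the running status. */
int verify_without_goto(void)
{
    int err = step_update_random();

    if (err == 0)
    {
        err = step_update_params();
    }
    if (err == 0)
    {
        err = step_final();
    }

    free_buffers();   /* unconditional cleanup, exactly once */
    return err;
}
```

A duplicated line in this structure would either be a harmless repeated assignment or an obvious syntax error, not a silent bypass of the remaining checks.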

On a final note, Apple should be asking its engineers and engineering managers about the failures of process (at several layers) that must have occurred for this bug to have gone into end users’ devices. Specifically:

  • Where was the peer code review that should have spotted this, or how did the reviewers fail to spot this?
  • Why wasn’t a coding standard rule adopted to make such bugs easier to spot during peer code reviews?
  • Why wasn’t a static analysis tool, such as Klocwork, used, or how did it fail to detect the unreachable code that followed? Or was it users of such a tool, at Apple, who failed to act?
  • Where was the regression test case for a bad SSL certificate signature, or how did that test fail?

Dangerous bugs, like this one from Apple, often result from a combination of accumulated errors in the face of flawed software development processes. Too few programmers recognize that many bugs can be kept entirely out of a system simply by adopting (and rigorously enforcing) a coding standard that is designed to keep bugs out.

Introducing Barr Group

Wednesday, December 26th, 2012 Michael Barr

In the ten months since forming Barr Group, I have received many questions about the new company. As we enter the new year, I thought it a good time to use this blog post to answer the most frequently asked questions, such as:

  • What does Barr Group do?
  • Who are Barr Group’s clients?
  • How is Barr Group different from my former company?
  • Who is our CEO and what skills does he bring?
  • What is my role in Barr Group?

If I had to describe Barr Group (http://www.barrgroup.com) in a single sentence, I would say that “Barr Group helps companies that design embedded systems make their products more reliable and more secure.” We do sell a few small items–such as the Embedded C Coding Standard book and Embedded Software Training in a Box kit–but our company is not really about our own products. Rather, we achieve our mission of improving embedded systems reliability and security by delivering business-to-business services of primarily three types: (1) consulting, (2) training, and (3) engineering.

Barr Group serves clients from small startups to well-known Fortune 100 companies that make embedded systems used in a wide range of industries. Representative clients include: Adtran, Medtronic, Renesas, TI, and Xerox. Barr Group’s staff has expertise and experience in the design of medical devices, industrial controls, consumer electronics, telecommunications, transportation equipment, smart grid technologies, and many other types of embedded systems.

Barr Group’s consulting services are sold to engineering managers, engineering directors, or vice presidents of engineering. Typical consulting engagements are short-duration/high-value projects aimed at answering strategically important questions related to embedded systems architecture and embedded software development processes. For example, in the area of architecture for reliability and security we offer services specifically in the following areas: system design review, software design review, system (re)architecture, software (re)architecture, source code review, cost reduction, reverse engineering, and security analysis. Of course, we often address more targeted issues as well, including embedded software development process improvements. Because we are unaffiliated with any processor, RTOS, or tool vendor, all of our advice is independent of any external influence; we aim only to find the best path forward for our clients, favoring alternatives that require only 20% of the effort to achieve 80% of the available benefits.

Barr Group’s training courses are designed to raise the quality of engineers and engineering teams and many of them include hands-on programming exercises. We teach these courses both privately and publicly. Private training is held at the client’s office and every engineer in attendance works for the client. By contrast, any individual or small group of engineers can purchase a ticket to our public training courses. Our Spring 2013 training calendar includes four week-long hands-on courses: Embedded Software Boot Camp (Maryland), Embedded Security Boot Camp (Silicon Valley), Embedded Android Boot Camp (Maryland), and Agile and Test-Driven Embedded Development (Florida).

Barr Group’s engineering design services include outsourced development of: electronics (including FPGA and PCB design); device drivers for operating systems such as MicroC/OS, VxWorks, Windows, Linux, Android, and others; embedded software; mechanical enclosures; and everything in between. In one representative project that was recently completed, a cross-functional team of talented Barr Group engineers worked together to perform all of the mechanical, electrical, software, reliability, and security engineering for a long-lived high voltage electrical switching system for deployment in a modern “smart grid” electrical distribution network.

In relation to my earlier company, which was founded in 1999, the principal difference in all of the above is Barr Group’s additional focus on embedded systems security, compared with reliability alone. Like Netrino, some members of our engineering staff also work as expert witnesses in complex technical litigation–with a range of cases involving allegations of product liability, patent infringement, and source code copyright infringement.

Finally, under the new leadership of seasoned technology executive (and fellow electrical engineer) Andrew Girson, Barr Group has added a suite of Engineer-Centric Market Research™ services, which help IC makers, RTOS vendors, and other companies serving the embedded systems design community improve their products and marketing by better understanding the mind of the engineer. These services have been specifically enabled by the combination of Mr. Girson’s skills and expertise in strategic technical marketing with Barr Group’s extensive contacts in the embedded systems industry, including the over 20,000 Firmware Update newsletter subscribers.

My role in Barr Group is chief technology officer. The switch from my role as president of the old company to CTO of the new company has freed up considerably more of my time to work on engineering and expert witness projects. The extra time allows me to focus on sharing my technical expertise with as many clients as possible while also developing the other engineers who work on individual projects.

All in all, it has been great fun (if a lot of work) launching the new company this year. I look forward to another successful year for Barr Group in 2013. Please don’t hesitate to contact me or call us at (866) 653-6233 if we can be of assistance to your company. And happy new year!