
Real Men [Still] Program in C

Wednesday, March 29th, 2017 Michael Barr

It’s hard for me to believe, but it’s been nearly 8 years since I wrote the popular “Real Men Program in C” blog post (turned article). That post was prompted by a conversation with a couple of younger programmers who told me: “C is too hard for programmers of our generation to bother mastering.”

I ended then:

If you accept […] that C shall remain important for the foreseeable future and that embedded software is of ever-increasing importance, then you’ll begin to see trouble brewing. Although they are smart and talented computer scientists, [younger engineers] don’t know how to competently program in C. And they don’t care to learn.

But someone must write the world’s ever-increasing quantity of embedded software. New languages could help, but will never be retrofitted onto all the decades-old CPU architectures we’ll continue to use for decades to come. As turnover is inevitable, our field needs to attract a younger generation of C programmers.

What is the solution? What will happen if these trends continue to diverge?

Now that a substantial number of years has elapsed, I’d like to revisit two key phrases from that quote: Is C still important? And is there a younger generation of C programmers? There’s no obvious sign of any popular “new language” nor of any diminution of embedded systems.

Is C Still Important?

The original post used survey data from 1997-2009 to establish that C was (through that entire era) the dominant programming language for embedded systems. The “primary” programming languages used in the final year were C (62%), C++ (24%), and Assembly (5%).

As the figure below shows (data from Barr Group’s 2017 Embedded Systems Safety & Security Survey), C has now consolidated its dominance as the lingua franca of embedded programmers, at 71%. Use of C++ remains at about the same level (22%), while use of assembly as the primary language has basically disappeared.

[Figure: Primary Programming Language]

Conclusion: Obviously, C is still important in embedded systems.

Is There a Younger Generation of C Programmers?

The next figure shows the years of paid, professional experience of embedded systems designers (data from the same source). Unfortunately, I don’t have data from that earlier era about the average ages of embedded programmers. What looks potentially telling, though, is that the average experience of American designers (two decades) is much higher than the averages in Europe (14 years) and Asia (11 years). I dug into the data on the U.S. engineers a bit and found that their experience curve was essentially flat, with no bulge of younger engineers like the one visible in the worldwide data.

[Figure: Years of Experience]

Conclusion: The jury is still out. It’s possible there is already a missing younger generation in the U.S., but there also seems to be some youth coming up into our field in Asia at least.

It should be really interesting to see how this all plays out in the next 8 years. I’m putting a tickler in my to-do list to blog about this topic again then!

Footnote: Same as last time, I’m not excluding women. There are plenty of great embedded systems designers who are women–and they mostly program in C too, I presume.

Government-Sponsored Hacking of Embedded Systems

Wednesday, March 11th, 2015 Michael Barr

Everywhere you look these days, it is readily apparent that embedded systems of all types are under attack by hackers.

In just one example from the last few weeks, researchers at Kaspersky Lab (a Moscow-headquartered maker of anti-virus and other software security products) published a report documenting a specific pernicious and malicious attack against “virtually all hard drive firmware”. The Kaspersky researchers deemed this particular data security attack the “most advanced hacking operation ever uncovered” and confirmed that at least hundreds of computers, in dozens of countries, have already been infected.

Here are the technical facts:

  • Disk drives contain a storage medium (historically one or more magnetic spinning platters; but increasingly solid state memory chips) upon which the user stores data that is at least partly private information;
  • Disk drives are themselves embedded systems powered by firmware (mostly written in C and assembly, sans formal operating system);
  • Disk drive firmware (stored in non-volatile memory distinct from the primary storage medium) can be reflashed to upgrade it;
  • The malware at issue comprises replacement firmware images for all of the major disk drive brands (e.g., Seagate, Western Digital) that can perform malicious functions such as keeping copies of the user’s private data in a secret partition for later retrieval;
  • Because the malicious code resides in the firmware, existing anti-virus software cannot detect it (even when it scans the so-called Master Boot Record); and
  • Even a user who erases and reformats his drive will not remove the malware.

The Kaspersky researchers have linked this hack to a number of other sophisticated hacks over the past 14 years, including the Stuxnet worm attack on embedded systems within the Iranian nuclear fuel processing infrastructure. Credited to the so-called “Equation Group,” these attacks are believed to be the work of a single group: the NSA. One reason: a similar disk drive firmware hack, code-named IRATEMONK, is described in an internal NSA document made public by Edward Snowden.

I bring this hack to your attention because it is indicative of a broader class of attacks that embedded systems designers have not previously had to worry about. In a nutshell:

Hackers gonna hack. Government-sponsored hackers with unlimited black budgets gonna hack the shit out of everything.

This is a sea change. Threat modeling for embedded systems most often identifies a range of potential attacker groups, such as: hobbyist hackers (who only hack for fun, and don’t have many resources), academic researchers (who hack for the headlines, but don’t care if the hacks are practical), and company competitors (who may have lots of resources, but also need to operate under various legal systems).

For example, through my work history I happen to be an expert on satellite TV hacking technology. In that field, a hierarchy of hackers emerged: organized crime syndicates had the best resources for reverse engineering and achieved practical hacks based on academic research; the syndicates initially kept new hacks tightly controlled in for-profit schemes; and most hacks eventually trickled down to the hobbyist level.

For those embedded systems designers making disk drives and other consumer devices, security has not historically been a consideration at all. Of course, well-resourced competitors sometimes reverse engineered even consumer products (to copy the intellectual property inside), but patent and copyright laws offered other avenues for reducing and addressing that threat.

But we no longer live in a world where we can ignore the security threat posed by the state-sponsored hackers, who have effectively unlimited resources and a new set of motivations. Consider what any interested agent of the government could learn about your private business via a hack of any microphone-(and/or camera-)equipped device in your office (or bedroom).

Some embedded systems with microphones are just begging to be easily hacked. For example, the designers of new smart TVs with voice control capability are already sending all of the sounds in the room (unencrypted) over the Internet. Or consider the phone on your office desk. Hacks of at least some VOIP phones are known to exist and allow for remotely listening to everything you say.

Of course, the state-sponsored hacking threat is not only about microphones and cameras. Consider a printer firmware hack that remotely prints or archives a copy of everything you ever printed. Or a motion/sleep tracker or smart utility meter that lets burglars detect when you are home or away. Broadband routers are a particularly vulnerable point of most small office/home office intranets, and one that is strategically well located for sniffing on and interfering with devices deeper in the network.

How could your product be used to creatively spy on or attack its users?

Do we have an ethical duty or even obligation, as professionals, to protect the users of our products from state-sponsored hacking? Or should we simply ignore such threats, figuring this is just a fight between our government and “bad guys”? “I’m not a bad guy myself,” you might (like to) think. Should the current level of repressiveness of the country the user is in while using our product matter?

I personally think there’s a lot more at stake if we collectively ignore this threat, and refer you to the following to understand why:

Imagine what Edward Snowden could have accomplished if he had a different agenda. Always remember, too, that the hacks the NSA has already developed are now–even if they weren’t before–known to repressive governments. Furthermore, they are potentially in the hands of jilted lovers and blackmailers everywhere. What if someone hacks into an embedded system used by a powerful U.S. Senator or Governor; or by a candidate for President (one you support, or one who wants to rein in the electronic security state); or by a member of your family?

P.S. THIS JUST IN: The CIA recently hired a major defense contractor to develop a variant of an open-source compiler that would secretly insert backdoors into all of the programs it compiled. Is it the compiler you use?

First Impressions of Google Glass 2.0

Tuesday, April 22nd, 2014 Michael Barr

Last week I took advantage of Google’s special 1-day-only buying opportunity to purchase an “Explorer” edition of Google Glass 2.0. My package arrived over the weekend and I finally found a few hours this morning for the unboxing and first use.

Let me begin by saying that the current price is quite high and that the buying process itself is cumbersome. To buy Google Glass you must shell out $1,500 (plus taxes and any accessories) and you can only pay this entrance fee via a Google Wallet account. I didn’t have a Google Wallet account set up until last week, and various problems associated with setting up Wallet and linking it to my credit card had prevented me from using an earlier Explorer email invite. Google absolutely needs to make Glass cheaper and easier to purchase if it is to have any hope of making this a mainstream product.

Upon opening the box and donning Glass, I was initially at a loss for how to actually use the thing. There were instructions for turning it on in the box, but I had to find and watch YouTube videos on my own (like this one) to grok the touchpad controls and “menu”/UI paradigm. I also quickly came to learn that Glass is only usable when you have at least all of the following: (a) a Google+ account; (b) an Android or iOS smartphone; (c) the My Glass app installed on said smartphone; and (d) a Bluetooth-tethered or WiFi connection to the Internet. (Well, that plus the USB charging cable and a power supply, given the very short battery life I’ve experienced so far.)

At the present time there are very few apps available. Here’s a master list of what are currently just 44 “Glassware” apps. And none of the built-in capabilities or available apps strikes me as the kind of must-have feature that’s likely to drive widespread adoption of Glass as a mainstream computing platform with a vibrant application developer community.

I’ll finish out the negatives by saying that the current form factor makes you look like an uber-geek (when you are not too busy being physically attacked for some assumed offense) and that the touchpad area on the right side of your head gets surprisingly hot during normal use.

Now for the few positives. First, the location of the heads-up display just above your line of sight feels right for an always-available computer. As someone who walks for miles every day for exercise, I would so love to replace my handheld smartphone form factor with a heads-up display like this. So it’s too bad that browsing the web and reading email aren’t viable on Glass’ meager 640×360 display. I think there are probably dozens of hands-on jobs whose workers would be made more productive by a screen (and the right application) in this form factor. The heads-up wearable form factor also feels like a great place for quick reference information, such as maps/navigation and pop-up weather alerts, while the wearer is otherwise busy walking, biking, or even driving.

A second positive is that the voice recognition really is surprisingly good. Dictation, for example, seems to work far better on Glass so far than it ever has on my iPhone 5. You can’t always talk to Glass (hint: generally only when the words “ok glass” are on screen or there is a microphone icon), but when you do talk, Glass seems to listen quite well. Good dictation is key, of course, because there is no obvious way to edit the things you’ve drafted if they are misspelled or improperly formatted; you either hit send or start over. And your voice is the only application launcher, via the “ok glass” home screen/clock.

Giving Glass instructions such as “okay glass, listen to [artist]” is an extremely intuitive user interface. And so far that music feature, combined with a $10/month “All Access” Google Play music account, seems like the only thing I might like to use every day. I also like the idea of dictating SMS and email messages or taking and sharing photos while doing other things with my hands, though the SMS feature doesn’t work when Glass is paired with an iPhone and the only multi-person sharing option seems to be via Google+. So far the SMS, email, and outgoing call features are not impressing me enough to see me using them regularly or even to convince me to entrust Google with access to my full iPhone contacts database. And searching through a lot of contacts appears to be a real chore too, unless a contact matches via voice recognition on the first try.

In terms of applications that seem interesting, Evernote seems a reasonable near-term substitute for the lack of a To Do list interface to Toodledo or RememberTheMilk. And I sure do like the idea of receiving pop-up extreme weather alerts based on my location. Some of the simplistic sample games (such as balancing blocks on your head) are fun and I could see this form factor perhaps changing multi-player gaming in a few interesting ways. But that’s about it for the interesting apps so far.

To summarize my thinking, Google Glass so far makes me think of the Apple Newton. Everyone knew that Apple was on to something with the Newton MessagePad, way back circa 1993. But the Newton was also too far ahead of its time in terms of cost and size relative to practical usefulness. Eventually Apple came back and did the “communicator” platform right more than a decade later with the iPhone, which it has continued to improve even more dramatically in the half decade since. I suspect hindsight will treat Google Glass the same way: an agreed-upon precursor of the heads-up wearables to come, but a near-term flop. If it does fail, let it be known that Forbes says Google dug its own grave by putting it out there in too many hands too soon.

Apple’s #gotofail SSL Security Bug was Easily Preventable

Monday, March 3rd, 2014 Michael Barr

If programmers at Apple had simply followed a couple of the rules in the Embedded C Coding Standard, they could have prevented the very serious #gotofail SSL bug from entering the iOS and OS X operating systems. Here’s a look at the programming mistakes involved and the easy-to-follow coding standard rules that could easily have prevented the bug.

In case you haven’t been following the computer security news, Apple last week posted security updates for users of devices running iOS 6, iOS 7, and OS X 10.9 (Mavericks). This was prompted by a critical bug in Apple’s implementation of the SSL/TLS protocol, which has apparently been lurking for over a year.

In a nutshell, the bug is that a bunch of important C source code lines containing digital signature certificate checks were never being run because an extraneous goto fail; statement in a portion of the code was always forcing a jump. This is a bug that put millions of people around the world at risk for man-in-the-middle attacks on their apparently-secure encrypted connections. Moreover, Apple should be embarrassed that this particular bug also represents a clear failure of software process at Apple.

There is debate about whether this may have been a clever insider-enabled security attack against all of Apple’s users, e.g., by a certain government agency. However, whether it was an innocent mistake or an attack designed to look like an innocent mistake, Apple could have and should have prevented this error by writing the relevant portion of code in a simple manner that would have always been more reliable as well as more secure. And thus, in my opinion, Apple was clearly negligent.

Here are the lines of code at issue (from Apple’s open source code server); the extraneous goto is the second of the two consecutive goto fail; statements:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, ...)
{
    OSStatus  err;
    ...

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

The code above violates at least two rules from Barr Group’s Embedded C Coding Standard book. Had Apple followed even the first of these rules, this dangerous bug almost certainly would never have made it into even a single device.

Rule 1.3.a

Braces shall always surround the blocks of code (a.k.a., compound statements) following if, else, switch, while, do, and for statements; single statements and empty statements following these keywords shall also always be surrounded by braces.
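For instance (my illustration, not an example taken from the book), the rule converts a bare single statement into a braced block:

/* Noncompliant: single statement without braces. */
if (err != 0)
    goto fail;

/* Compliant: braces surround even a single statement. */
if (err != 0)
{
    goto fail;
}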

Had Apple not violated this always-braces rule in the SSL/TLS code above, there would have been either just one set of curly braces after each if test or a very odd-looking, hard-to-miss chunk of code with two sets of braces (and two gotos) after the if. Either way, this bug was preventable by following this rule and performing code review.
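To see why (a sketch of my own, reusing the identifiers from Apple’s fragment above), consider the only two places the duplicated goto could have landed under the always-braces rule:

/* Scenario 1: both gotos end up inside one set of braces.
 * The duplicate is dead code, but the signature checks below
 * the if still run, so the vulnerability never arises. */
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
{
    goto fail;
    goto fail;
}

/* Scenario 2: the duplicate lands outside the braces, where it
 * reads as an unconditional jump that reviewers can hardly miss. */
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
{
    goto fail;
}
goto fail;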

Rule 1.7.c

The goto keyword shall not be used.

Had Apple not violated this never-goto rule in the SSL/TLS code above, there would not have been a double goto fail; line to create the unreachable-code situation. And if eliminating goto had forced each error check to expand to more than one line of code, that alone would have pushed the programmers toward curly braces.
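For illustration only (my sketch, not Apple’s actual fix), the same sequence of hash checks can be written without any goto by chaining on the error status; note that the rewrite naturally demands braces as well:

/* Each step runs only if every prior step succeeded. */
if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0)
{
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0)
    {
        err = SSLHashSHA1.final(&hashCtx, &hashOut);
    }
}

/* Cleanup happens unconditionally, just as at the fail: label. */
SSLFreeBuffer(&signedHashes);
SSLFreeBuffer(&hashCtx);
return err;

Written this way, there is simply no unconditional jump available to duplicate.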

On a final note, Apple should be asking its engineers and engineering managers about the failures of process (at several layers) that must have occurred for this bug to have gone into end users’ devices. Specifically:

  • Where was the peer code review that should have spotted this, or how did the reviewers fail to spot this?
  • Why wasn’t a coding standard rule adopted to make such bugs easier to spot during peer code reviews?
  • Why wasn’t a static analysis tool, such as Klocwork, used? Or, if one was, how did it fail to detect the unreachable code, or how did its users at Apple fail to act on its findings?
  • Where was the regression test case for a bad SSL certificate signature, or how did that test fail?

Dangerous bugs, like this one from Apple, often result from a combination of accumulated errors in the face of flawed software development processes. Too few programmers recognize that many bugs can be kept entirely out of a system simply by adopting (and rigorously enforcing) a coding standard that is designed to keep bugs out.

Security Risks of Embedded Systems

Wednesday, January 15th, 2014 Michael Barr

In the words of security guru and blogger Bruce Schneier, “The Internet of Things is Wildly Insecure — and Often Unpatchable.” As Bruce describes the current state of affairs in a recent Wired magazine article:

We’re at a crisis point now with regard to the security of embedded systems, where computing is embedded into the hardware itself — as with the Internet of Things. These embedded computers are riddled with vulnerabilities, and there’s no good way to patch them.

It’s not unlike what happened in the mid-1990s, when the insecurity of personal computers was reaching crisis levels. Software and operating systems were riddled with security vulnerabilities, and there was no good way to patch them. Companies were trying to keep vulnerabilities secret, and not releasing security updates quickly. And when updates were released, it was hard — if not impossible — to get users to install them. This has changed over the past twenty years, due to a combination of full disclosure — publishing vulnerabilities to force companies to issue patches quicker — and automatic updates: automating the process of installing updates on users’ computers. The results aren’t perfect, but they’re much better than ever before.

But this time the problem is much worse, because the world is different: All of these devices are connected to the Internet. The computers in our routers and modems are much more powerful than the PCs of the mid-1990s, and the Internet of Things will put computers into all sorts of consumer devices. The industries producing these devices are even less capable of fixing the problem than the PC and software industries were.

If we don’t solve this soon, we’re in for a security disaster as hackers figure out that it’s easier to hack routers than computers. At a recent Def Con, a researcher looked at thirty home routers and broke into half of them — including some of the most popular and common brands.

I agree with Bruce and am glad to see a mainstream security guru talking about embedded systems. I recommend you read the whole article here.