Posts Tagged ‘trends’

New BlueBorne Security Flaw Affects Embedded Systems Running Linux

Monday, October 16th, 2017 Michael Barr

A major security flaw in the Bluetooth communications protocol was recently discovered and has since been confirmed as exploitable in the real world. It is important that designers of embedded systems be aware of this security issue, which potentially affects their products.

The so-called “BlueBorne” attack can be performed over the air against an estimated 8.2 billion Bluetooth-enabled computers, including those running operating system variants such as Microsoft Windows, Apple’s OS X and iOS, Google’s Android, and many Linux distros.

Many of the vulnerable computers are embedded systems or Internet-of-Things devices.

Specifically, any system running Linux kernel version 3.3-rc1 or later may be vulnerable to a remote code execution (RCE) attack via BlueBorne. This includes rebranded Linux derivatives such as Samsung’s Tizen operating system.

BlueBorne is potentially a very serious issue for embedded systems designers. For example, a medical device or other mission-critical product built on Linux within the last 5 or 6 years could be vulnerable to attack. Such an attack could include remote code execution or system takeover by an ill-willed party. In one hypothetical (for the moment, phew!) scenario, a BlueBorne-powered worm could deploy a ransomware attack that shuts down your products until a ransom is paid by you or your customers.

Designers of systems that may be affected should consult the BlueBorne technical white paper published by Armis, the security firm that disclosed the flaw, for full details.

Importantly, until a patch can be applied to your product to eliminate this vulnerability, the only way to ensure system security is to DISABLE Bluetooth entirely. That’s because BlueBorne is able to attack systems even when they are not discoverable or paired.
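For Linux-based designs that cannot be patched right away, one stop-gap is to soft-block the Bluetooth radio from an init script or a small utility until a fixed kernel and BlueZ stack can be deployed. Below is a minimal C sketch of that idea using the kernel’s standard /dev/rfkill interface; it assumes the kernel was built with rfkill support and that the program runs with sufficient privileges, and it illustrates the mitigation only rather than replacing a proper patch.

    /* Soft-block every Bluetooth radio via the kernel's rfkill interface.
     * Stop-gap mitigation only; the real fix is a patched kernel/BlueZ.
     * Typically must be run as root. */
    #include <fcntl.h>
    #include <linux/rfkill.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        struct rfkill_event ev;
        int fd = open("/dev/rfkill", O_WRONLY);
        if (fd < 0) {
            perror("open /dev/rfkill");
            return 1;
        }

        memset(&ev, 0, sizeof(ev));
        ev.op   = RFKILL_OP_CHANGE_ALL;   /* apply to all matching radios */
        ev.type = RFKILL_TYPE_BLUETOOTH;  /* ...of type Bluetooth */
        ev.soft = 1;                      /* 1 = soft-blocked (radio off) */

        if (write(fd, &ev, sizeof(ev)) != (ssize_t)sizeof(ev)) {
            perror("write rfkill event");
            close(fd);
            return 1;
        }

        close(fd);
        puts("Bluetooth radios soft-blocked");
        return 0;
    }

On systems that ship the usual userspace tools, running “rfkill block bluetooth” (or disabling the Bluetooth service outright) achieves the same effect; the point is simply that the radio stays off until a real fix ships.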

Should your team need help securing affected products, Barr Group has security experts who can help.

Cyberspats on the Internet of Things

Thursday, April 6th, 2017 Michael Barr

When you hear the words “weaponization” and “internet” in close proximity you naturally assume the subject is the use of hacks and attacks by terrorists and nation-state actors.

But then comes today’s news about an IoT garage door startup that remotely disabled a customer’s opener in response to a negative review. In a nutshell, a man bought the startup’s Internet-connected opener, installed it in his home, was disappointed with the quality, and wrote negative reviews on the company’s website and Amazon. In response, the company disabled his unit.

In the context of the explosion of Internet connections in embedded systems, this prompts several thoughts.

First and foremost: What does it mean to buy or own a product that relies for some functionality on a cloud-based server that you might not always be able to access? Is it your garage door opener or the manufacturer’s? And how much is that determined by fine print in a contract you’ll need a lawyer to follow?

Additionally: What if, in this specific situation, the company hadn’t made any public statements at all and had just remotely made the customer’s garage door opener less functional? There’d then have been no fodder for a news story. The company would’ve gotten its “revenge” on the customer. And the customer might never have known anything except that the product wasn’t to his liking. Investigating might have cost him time and money he did not have.

It’s almost certainly the case that this company would have seen better business outcomes if it had quietly disabled the unit in question. And there are so many other insidious ways to go about it, including: bricking the unit, refusing it future firmware updates, or even subtly downgrading its functionality.

Which brings us back to the weaponization of the Internet. Consumers have no choice but to trust the makers of their products, who have complete knowledge of the hardware and software design (and maybe also the digital signatures needed to make secure firmware updates). And these companies typically have all kinds of identifying data about individual customers: name, geographic location, phone number and email address, product usage history, credit card numbers, etc. So what happens when the makers of those products are unhappy with one or more customers: from those posting bad product reviews all the way up to politicians and celebrities they may dislike?

Perhaps private companies are already attacking specific customers in subtle ways… How would we know?

Real Men [Still] Program in C

Wednesday, March 29th, 2017 Michael Barr

It’s hard for me to believe, but it’s been nearly 8 years since I wrote the popular “Real Men Program in C” blog post (turned article). That post was prompted by a conversation with a couple of younger programmers who told me: “C is too hard for programmers of our generation to bother mastering.”

I ended then:

If you accept [] that C shall remain important for the foreseeable future and that embedded software is of ever-increasing importance, then you’ll begin to see trouble brewing. Although they are smart and talented computer scientists, [younger engineers] don’t know how to competently program in C. And they don’t care to learn.

But someone must write the world’s ever-increasing quantity of embedded software. New languages could help, but will never be retrofitted onto all the decades-old CPU architectures we’ll continue to use for decades to come. As turnover is inevitable, our field needs to attract a younger generation of C programmers.

What is the solution? What will happen if these trends continue to diverge?

Now that a substantial number of years has elapsed, I’d like to revisit two key questions raised by that quote: Is C still important? And is there a younger generation of C programmers? There’s no obvious sign of any popular “new language,” nor of any diminution of embedded systems.

Is C Still Important?

The original post used survey data from 1997-2009 to establish that C was (through that entire era) the dominant programming language for embedded systems. The “primary” programming languages used in the final year were C (62%), C++ (24%), and Assembly (5%).

As the figure below shows (data from Barr Group’s 2017 Embedded Systems Safety & Security Survey), C has now consolidated its dominance as the lingua franca of embedded programmers: now at 71%. Use of C++ remains at about the same level (22%), while use of assembly as the primary language has basically disappeared.

[Figure: Primary Programming Language]

Conclusion: Obviously, C is still important in embedded systems.

Is There a Younger Generation of C Programmers?

The next figure shows the years of paid, professional experience of embedded systems designers (data from the same source). Unfortunately, I don’t have data from that earlier era about the average ages of embedded programmers. But what looks potentially telling is that the average experience of American designers (two decades) is much higher than the averages in Europe (14 years) and Asia (11 years). I dug into the data on the U.S. engineers a bit and found that their experience curve was essentially flat, with no bulge of younger engineers like the one in the worldwide data.

[Figure: Years of Experience]

Conclusion: The jury is still out. It’s possible there is already a missing younger generation in the U.S., but there also seems to be some youth coming up into our field in Asia at least.

It should be really interesting to see how this all plays out in the next 8 years. I’m putting a tickler in my to-do list to blog about this topic again then!

Footnote: Same as last time, I’m not excluding women. There are plenty of great embedded systems designers who are women–and they mostly program in C too, I presume.

Government-Sponsored Hacking of Embedded Systems

Wednesday, March 11th, 2015 Michael Barr

Everywhere you look these days, it is readily apparent that embedded systems of all types are under attack by hackers.

In just one example from the last few weeks, researchers at Kaspersky Lab (a Moscow-headquartered maker of anti-virus and other software security products) published a report documenting a specific pernicious and malicious attack against “virtually all hard drive firmware”. The Kaspersky researchers deemed this particular data security attack the “most advanced hacking operation ever uncovered” and confirmed that at least hundreds of computers, in dozens of countries, have already been infected.

Here are the technical facts:

  • Disk drives contain a storage medium (historically one or more magnetic spinning platters; but increasingly solid state memory chips) upon which the user stores data that is at least partly private information;
  • Disk drives are themselves embedded systems powered by firmware (mostly written in C and assembly, sans formal operating system);
  • Disk drive firmware (stored in non-volatile memory distinct from the primary storage medium) can be reflashed to upgrade it;
  • The malware at issue comprises replacement firmware images for all of the major disk drive brands (e.g., Seagate, Western Digital) that can perform malicious functions such as keeping copies of the user’s private data in a secret partition for later retrieval;
  • Because the malicious code resides in the firmware, existing anti-virus software cannot detect it (even when they scan the so-called Master Boot Record); and
  • Even a user who erases and reformats his drive will not remove the malware.

The Kaspersky researchers have linked this hack to a number of other sophisticated hacks over the past 14 years, including the Stuxnet worm attack on embedded systems within the Iranian nuclear fuel processing infrastructure. Credited to the so-called “Equation Group,” these attacks are believed to be the work of a single organization: the NSA. One reason: a similar disk drive firmware hack, code-named IRATEMONK, is described in an internal NSA document made public by Edward Snowden.

I bring this hack to your attention because it is indicative of a broader class of attacks that embedded systems designers have not previously had to worry about. In a nutshell:

Hackers gonna hack. Government-sponsored hackers with unlimited black budgets gonna hack the shit out of everything.

This is a sea change. Threat modeling for embedded systems most often identifies a range of potential attacker groups, such as: hobbyist hackers (who only hack for fun, and don’t have many resources), academic researchers (who hack for the headlines, but don’t care if the hacks are practical), and company competitors (who may have lots of resources, but also need to operate under various legal systems).

For example, through my work history I happen to be an expert on satellite TV hacking technology. In that field, a hierarchy of hackers emerged: organized crime syndicates had the best resources for reverse engineering and achieved practical hacks based on academic research; the syndicates initially kept new hacks tightly controlled within for-profit schemes; and most hacks eventually trickled down to the hobbyist level.

For those embedded systems designers making disk drives and other consumer devices, security has not historically been a consideration at all. Of course, well-resourced competitors sometimes reverse engineered even consumer products (to copy the intellectual property inside), but patent and copyright laws offered other avenues for addressing that threat.

But we no longer live in a world where we can ignore the security threat posed by the state-sponsored hackers, who have effectively unlimited resources and a new set of motivations. Consider what any interested agent of the government could learn about your private business via a hack of any microphone-(and/or camera-)equipped device in your office (or bedroom).

Some embedded systems with microphones are just begging to be easily hacked. For example, the designers of new smart TVs with voice control capability are already sending all of the sounds in the room (unencrypted) over the Internet. Or consider the phone on your office desk. Hacks of at least some VOIP phones are known to exist and allow for remotely listening to everything you say.

Of course, the state-sponsored hacking threat is not only about microphones and cameras. Consider a printer firmware hack that remotely prints or archives a copy of everything you ever printed. Or a motion/sleep tracker or smart utility meter that lets burglars detect when you are home or away. Broadband routers are a particularly vulnerable point of most small office/home office intranets, and one that is strategically well located for sniffing on and interfering with devices deeper in the network.

How could your product be used to creatively spy on or attack its users?

Do we have an ethical duty or even obligation, as professionals, to protect the users of our products from state-sponsored hacking? Or should we simply ignore such threats, figuring this is just a fight between our government and “bad guys”? “I’m not a bad guy myself,” you might (like to) think. Should the current level of repressiveness of the country the user is in while using our product matter?

I personally think there’s a lot more at stake if we collectively ignore this threat, and refer you to the following to understand why:

Imagine what Edward Snowden could have accomplished if he had a different agenda. Always remember, too, that the hacks the NSA has already developed are now–even if they weren’t before–known to repressive governments. Furthermore, they are potentially in the hands of jilted lovers and blackmailers everywhere. What if someone hacks into an embedded system used by a powerful U.S. Senator or Governor; or by a candidate for President (one you support, or one who wants to rein in the electronic security state); or by a member of your family?

P.S. THIS JUST IN: The CIA recently hired a major defense contractor to develop a variant of an open-source compiler that would secretly insert backdoors into all of the programs it compiled. Is it the compiler you use?

First Impressions of Google Glass 2.0

Tuesday, April 22nd, 2014 Michael Barr

Last week I took advantage of Google’s special 1-day-only buying opportunity to purchase an “Explorer” edition of Google Glass 2.0. My package arrived over the weekend and I finally found a few hours this morning for the unboxing and first use.

Let me begin by saying that the current price is quite high and that the buying process itself is cumbersome. To buy Google Glass you must shell out $1,500 (plus taxes and any accessories) and you can only pay this entrance fee via a Google Wallet account. I didn’t have a Google Wallet account set up until last week, and various problems associated with setting up Wallet and linking it to my credit card had prevented me from using an earlier Explorer email invite. Google absolutely needs to make Glass cheaper and easier to purchase if they are to have any hope of making this a mainstream product.

Upon opening the box and donning Glass, I was initially at a loss for how to actually use the thing. There were instructions in the box for turning it on, but I had to find and watch YouTube videos on my own (like this one) to grok the touchpad-driven menu/UI paradigm. I also quickly came to learn that Glass is only usable when you have at least all of the following: (a) a Google+ account; (b) an Android or iOS smartphone; (c) the My Glass app installed on said smartphone; and (d) a Bluetooth-tethered or WiFi connection to the Internet. (Well, and also the USB charging cable and a power supply, given the very short battery life I’ve experienced so far.)

At the present time there are very few apps available. Here’s a master list of what is currently just 44 “Glassware” apps. And neither the built-in capabilities nor those apps strike me as offering the kind of must-have feature that’s likely to drive widespread adoption of Glass as a mainstream computing platform with a vibrant application developer community.

I’ll finish out the negatives by saying that the current form factor makes you look like an uber-geek (when you are not too busy being physically attacked for some assumed offense) and that the touchpad area on the right side of your head gets surprisingly hot during normal use.

Now for the few positives. First, the location of the heads-up display just above your line of sight feels right for an always-available computer. As someone who walks for miles every day for exercise, I would so love to replace my handheld smartphone form factor with a heads-up display like this. So it’s too bad that browsing the web and reading email aren’t viable on Glass’ meager 640×360 display. I think there are probably dozens of hands-on jobs in which a screen (and the right application) in this form factor would make workers more productive. I also think the heads-up wearable form factor feels like a great place for quick reference information, such as maps/navigation, pop-up weather alerts, etc., while the wearer is otherwise busy walking, biking, or even driving.

A second positive is that the voice recognition is really surprisingly good. Dictation, for example, seems to work far better on Glass so far than it ever has on my iPhone 5. You can’t always talk to Glass (hint: generally only when the words “ok glass” are on screen or there is a microphone icon), but when you do talk, Glass seems to listen quite well. Good dictation is key, of course, because there is no obvious way to edit the things you’ve drafted if they are misspelled or improperly formatted; you either hit send or start over. And the only application launcher is your voice, via the “ok glass” home screen/clock.

Giving Glass instructions such as “okay glass, listen to [artist]” is an extremely intuitive user interface. And so far that music feature, combined with a $10/month “All Access” Google Play music account, seems like the only thing I might like to use every day. I also like the idea of dictating SMS and email messages or taking and sharing photos while doing other things with my hands, though the SMS feature doesn’t work when Glass is paired with an iPhone and the only multi-person sharing option seems to be via Google+. So far the SMS, email, and outgoing call features are not impressing me enough to see me using them regularly or even to convince me to entrust Google with access to my full iPhone contacts database. And searching through a lot of contacts appears to be a real chore too, unless one matches via voice recognition on the first try.

In terms of applications that seem interesting, Evernote seems a reasonable near-term substitute for the missing to-do list interface to Toodledo or RememberTheMilk. And I sure do like the idea of receiving pop-up extreme weather alerts based on my location. Some of the simplistic sample games (such as balancing blocks on your head) are fun, and I could see this form factor perhaps changing multi-player gaming in a few interesting ways. But that’s about it for the interesting apps so far.

To summarize my thinking, Google Glass so far makes me think of the Apple Newton. Everyone knew that Apple was on to something with the Newton MessagePad, way back circa 1993. But the Newton was also too far ahead of its time in terms of cost and size relative to practical usefulness. Eventually Apple came back and did the “communicator” platform right more than a decade later with the iPhone, which it has continued to improve even more dramatically in the half decade since. I suspect hindsight will treat Google Glass similarly: an agreed-upon precursor of the heads-up wearables to come, but a near-term flop. If it does fail, let it be known that Forbes says Google dug its own grave by putting it out there in too many hands too soon.