Embedded Software Boot Camp

Is it a Bug or an Error?

January 31st, 2018 by Michael Barr

You’ve probably heard the story of how Adm. Grace Hopper taped a moth that had been dislodged from a relay in the Harvard Mark II computer into an engineering logbook and labeled it the “First actual case of bug being found.”

[Photo: Hopper’s moth, taped into the Mark II logbook]

Designers of electronics, including Thomas Edison, had been using the term “bug” for decades. But it was mostly after this amusing 1947 event that words like “bug” and “debugging” took off in the emerging software realm.

So why is it that when a bridge collapses we call it a failure of the design, rather than attributing it to a mere “bug”, as if some external force or act of God had caused the failure? Why do software engineers alone get this linguistic pass when failures are caused by their mistakes, just as they are for other types of engineers?

Software failures are commonplace, everyday events. Yet such failures are not typically the result of a moth or any other “actual bug”. Each is instead caused by human error: some mistake has been made either in the requirements or in the implementation, and these human mistakes then have real-world consequences, sometimes compromising the safety and security of product users.

Should we, as a community of professionals, stop using the word “bug” and replace it with some more honest term such as “error” or “mistake”? Might this raise the seriousness with which we approach our work, and thereby the safety of our products’ users? What do you think? Comment below.

New BlueBorne Security Flaw Affects Embedded Systems Running Linux

October 16th, 2017 by Michael Barr

A major security flaw in the Bluetooth communications protocol was recently discovered and has since been confirmed as exploitable in the real world. Designers of embedded systems should be aware of this security issue, which potentially affects their products.

So-called “BlueBorne” is an attack that can be performed over the air against an estimated 8.2 billion Bluetooth-enabled computers, including those that run operating system variants such as Microsoft Windows, Apple’s OS X and iOS, Google’s Android, and many Linux distros.

Many of the vulnerable computers are embedded systems or Internet-of-Things devices.

Specifically, any system running Linux kernel version 3.3-rc1 or later may be vulnerable to a remote code execution (RCE) attack following compromise by BlueBorne. This includes rebranded Linux derivatives such as Samsung’s Tizen operating system.
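
If you need to triage devices you already ship, a rough first pass is to check the running kernel version. Below is a minimal sketch in C (the function name is mine, the version-string parsing is approximate, and a vendor kernel may carry backported patches, so treat a match as a hint rather than a verdict):

#include <stdio.h>
#include <sys/utsname.h>

/* Returns 1 if the running kernel is version 3.3 or later (the range
 * flagged for BlueBorne), 0 if earlier, and -1 if undetermined. */
int kernel_may_be_affected (void)
{
    struct utsname u;
    int major = 0;
    int minor = 0;

    if ((uname(&u) != 0)
        || (sscanf(u.release, "%d.%d", &major, &minor) != 2))
    {
        return -1;
    }

    return ((major > 3) || ((major == 3) && (minor >= 3))) ? 1 : 0;
}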

BlueBorne is potentially a very serious issue for embedded systems designers. For example, a medical device or other mission-critical product built on Linux within the last 5 or 6 years could be vulnerable to attack. Such an attack could include remote code execution or system takeover by an ill-willed party. In one hypothetical (for the moment, phew!) scenario, a BlueBorne-powered worm could deploy a ransomware attack that shuts down your products until a ransom is paid by you or your customers.

Designers of systems that may be affected should read the BlueBorne white paper for full technical details.

Importantly, until a patch can be applied to your product to eliminate this vulnerability, the only way to ensure system security is to DISABLE Bluetooth entirely. That’s because BlueBorne is able to attack systems even when they are not discoverable or paired.
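
On an embedded Linux device, one way to do that in software is through the kernel’s /dev/rfkill interface, which can soft-block every Bluetooth radio at once. Here is a minimal sketch (the function name is mine; a shipped product would also need to prevent the radio from being re-enabled):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <linux/rfkill.h>

/* Soft-block all Bluetooth radios via /dev/rfkill.
 * Returns 0 on success and -1 on failure. */
int disable_bluetooth (void)
{
    struct rfkill_event ev;
    int result = -1;
    int fd = open("/dev/rfkill", O_RDWR);

    if (fd >= 0)
    {
        memset(&ev, 0, sizeof(ev));
        ev.op   = RFKILL_OP_CHANGE_ALL;   /* Apply to all radios...    */
        ev.type = RFKILL_TYPE_BLUETOOTH;  /* ...of the Bluetooth type. */
        ev.soft = 1;                      /* 1 = soft-block (disable). */

        if (write(fd, &ev, sizeof(ev)) == sizeof(ev))
        {
            result = 0;
        }
        close(fd);
    }

    return result;
}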

Should your team need help securing affected products, Barr Group has security experts who can help.

C’s strcpy_s(): C11’s More Secure Version of strcpy()

August 31st, 2017 by Michael Barr

Buffer overflows are a well-known port of entry for hackers and attackers of computerized systems. One of the easiest ways to create a buffer overflow weakness in a C program has long been to rely on the strcpy() function of the C standard library to copy strings.

There’s a decent explanation of the problem at http://www.thegeekstuff.com/2013/06/buffer-overflow/. But the nutshell version is this: you have a buffer of size X somewhere in memory, which your code overwrites with new nul-terminated strings via strcpy(). If an attacker can somehow feed your function a string longer than X bytes, then data beyond the bounds of the original array will be overwritten too, thereby rewriting code or data that serves some other purpose.
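
To make that concrete, here is a minimal sketch of the mistake (the function and the 16-byte buffer are invented for illustration):

#include <string.h>

void save_name (const char * user_input)
{
    char name[16];

    /* If user_input holds more than 15 characters plus the nul
     * terminator, strcpy() writes past the end of name[], clobbering
     * whatever happens to sit next to it in memory. */
    strcpy(name, user_input);

    /* ... */
}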

You should know that the C11 update to the C programming language provides an optional “safe” replacement for this function, named strcpy_s(), as part of its Annex K bounds-checking interfaces. The parameter lists and return types differ:

char *strcpy(char *strDestination, const char *strSource);

versus:

errno_t strcpy_s(char *strDestination, size_t numberOfElements, const char *strSource);

The new “numberOfElements” parameter is used by strcpy_s() to check that strSource is not bigger than the destination buffer. And, when there is a problem, a nonzero error code is returned.
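
Here is a minimal sketch of its use, assuming a toolchain that actually implements the optional Annex K interfaces (many do not); per the standard, you must define __STDC_WANT_LIB_EXT1__ before including string.h and can test __STDC_LIB_EXT1__ to see whether the functions are available:

#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>
#include <stdlib.h>

#ifdef __STDC_LIB_EXT1__
int copy_name (char * dest, rsize_t dest_size, const char * src)
{
    /* A runtime-constraint violation may abort the program by default;
     * the "ignore" handler makes strcpy_s() report via its return
     * value instead. */
    set_constraint_handler_s(ignore_handler_s);

    /* strcpy_s() refuses the copy when src (nul terminator included)
     * does not fit in dest_size bytes and returns a nonzero errno_t. */
    errno_t err = strcpy_s(dest, dest_size, src);

    return (0 == err) ? 0 : -1;
}
#endif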

The Microsoft Developer Network website is one source of additional detail on this and the other “safe” functions.

Did a Cyberattack Cause Recent Crashes of U.S. Naval Destroyers?

August 23rd, 2017 by Michael Barr

Crashes involving naval vessels are rare events. Yet somehow two of the U.S. Navy’s guided-missile destroyers have crashed into other ships in as many months: the USS Fitzgerald in June and the USS John S. McCain just this week.

Might these deadly crashes share a common root cause? Both ships are part of the Seventh Fleet, which is headquartered in Yokosuka, Japan.

The word is that the second accident was caused by a “steering failure”.

As the public learned back in 1998, when another naval vessel had to be towed back to port after a software crash, this bit of critical American infrastructure was then dependent on navigational software that runs on Windows NT.

Are U.S. Navy ships still powered by a version of Microsoft Windows? And vulnerable to viruses? Could a single individual have smuggled a computer virus aboard both of these destroyers?

I’m no conspiracy theorist; I merely suggest that the possibility of a cyberattack at least be considered by those investigating whether these crashes have a common root cause. It strikes me as likely that at least Russia, North Korea, and China would employ hackers to look for ways to weaken American naval power.

Cyberspats on the Internet of Things

April 6th, 2017 by Michael Barr

When you hear the words “weaponization” and “internet” in close proximity you naturally assume the subject is the use of hacks and attacks by terrorists and nation-state actors.

But then comes today’s news about an IoT garage door startup that remotely disabled a customer’s opener in response to a negative review. In a nutshell, a man bought the startup’s Internet-connected opener, installed it in his home, was disappointed with the quality, and wrote negative reviews on the company’s website and Amazon. In response, the company disabled his unit.

In the context of the explosion of Internet connections in embedded systems, this news prompts several thoughts.

First and foremost: What does it mean to buy or own a product that relies for some functionality on a cloud-based server that you might not always be able to access? Is it your garage door opener or the manufacturer’s? And how much is that determined by fine print in a contract you’ll need a lawyer to follow?

Additionally: What if, in this specific situation, the company hadn’t made any public statements at all and had just quietly made the customer’s garage door opener less functional? There’d have been no fodder for a news story. The company would’ve gotten its “revenge” on the customer. And the customer might never have known anything except that the product wasn’t to his liking; investigating might have cost him time and money he did not have.

It’s almost certainly the case that this company would have seen better business outcomes had it quietly disabled the unit in question. And there are so many other, more insidious ways to go about it, including bricking the unit, refusing it future firmware updates, or even subtly downgrading its functionality.

Which brings us back to the weaponization of the Internet. Consumers have no choice but to trust the makers of their products, who have complete knowledge of the hardware and software design (and maybe also the digital signatures needed to make secure firmware updates). And these companies typically have all kinds of identifying data about individual customers: name, geographic location, phone and email address, product usage history, credit card numbers, etc. So what happens when the makers of those products are unhappy with one or more customers: from those posting bad product reviews all the way up to politicians and celebrities they may dislike?

Perhaps private companies are already attacking specific customers in subtle ways… How would we know?