Posts Tagged ‘firmware’

Survey Says: The Commercial RTOS Business is Doomed

Thursday, February 22nd, 2018 Michael Barr

Nearly two decades ago I was the moderator of an interesting Embedded Systems Conference panel discussion titled “The Great RTOS Debate: Buy or Roll Your Own?” At that time, near the turn of the century, the market for commercial real-time operating systems (RTOSes) was growing rapidly year over year.

The big trend then was away from custom-written “proprietary” kernels toward commercial RTOSes, typically licensed with a per-unit royalty. From 1997 until their merger in 2000, Wind River and Integrated Systems together dominated this part of the market. According to surveys taken at the time, either VxWorks or pSOS was the operating system of choice for about 1 in 4 new embedded systems designs.

Around the time embedded Linux entered the market in earnest and that merger took place, operating system use was divided roughly as follows: 39% no OS, 31% commercial RTOS, 18% proprietary OS, and 12% Linux.

The selection of operating systems by embedded systems designers has changed considerably since then. According to a preliminary analysis of data collected in Barr Group’s 2018 Embedded Systems Safety & Security Survey, quite a few new systems still run “no operating system” on their primary processor. However, this share is down from 39% to just 22%. Use of proprietary operating systems has also fallen by about half over the intervening 18 years, from 18% to just 8%.

The most popular category of actual operating system is now Linux, at 22%. This is a change from prior years, when the most popular category was “RTOS”: an aggregate of those paying for a commercial RTOS and those using an operating system provided by their chip vendor, which now ranks behind Linux at 19%. Following ever more closely on the heels of those “commercial” RTOSes are the open source operating systems (e.g., FreeRTOS) that lack licensing fees.

We can get some sense of the architectural range of embedded systems by comparing the rankings of the five most popular operating system choices on the primary processor as the total number of processors goes up. As shown in the table below, the percentage of designers writing their own “proprietary” operating system is about the same (7-9%) regardless of processor count. But Linux clearly becomes a much more popular choice (climbing from 11% to 32%) as the number of processors increases, while “open source” and “no operating system” drop in popularity.

[Table: Primary operating system choice by total processor count]

But it’s also worth looking at the timeline trend over these years, as I have assembled in the following graph from survey data compiled from 2005-2014 by Embedded.com and in the last three years by Barr Group.

[Figure: Operating system use trend, 2005-2018]

What I make of all of this is that those who depend for their livelihood on operating system licensing fees from designers of embedded systems should start looking for other sources of income.

What other trends do you see?

C: The Immortal Programming Language

Thursday, February 22nd, 2018 Michael Barr

Barr Group’s 2018 Embedded Systems Safety & Security Survey is now closed and I am in the midst of analyzing the data. This year a portion of my analysis is focused on multi-year trends. One trend that really stands out to me is that the C programming language refuses to give up the ghost.

A longitudinal study of survey data spanning almost a decade and a half shows that C remains the primary programming language of embedded software. Remarkably, in that time C has actually gained market share, rising from about 50% to about 70%, at the expense of assembly, C++, and Java.

The graph below shows the relevant data from 2005 through 2018. The first decade of this data is drawn from annual surveys published by the publishers of Embedded.com with the most recent data coming from Barr Group’s annual survey. Each of these surveys of embedded systems designers phrased the relevant question similarly, either “My current embedded project is programmed mostly in [pick one]” or “What is the primary programming language for your current project? [pick one]”.

[Figure: Primary programming language use trend, 2005-2018]

It makes total sense that the use of assembly language as a primary programming language is falling. The last time I wrote an embedded program mostly in assembly was about twenty years ago. Of course, there will always be some low-level code that needs to be written in the native language of the machine, if only to bring up the higher-level language environment and for drivers and kernel code. But with inexpensive (and mostly 32-bit ARM-based) microcontrollers increasingly at the heart of our systems, there’s no sense wasting time writing application code in assembly.
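As a concrete illustration of that split, here is a minimal sketch for a hypothetical ARM Cortex-M target (the function names are mine, not drawn from any survey): the application is written entirely in C, and assembly shrinks to a one-line intrinsic for the rare operation that has no C equivalent.

    /* Minimal sketch for a hypothetical Cortex-M target: application
     * code is plain C; assembly is confined to the few operations with
     * no C equivalent, such as globally enabling interrupts. */
    static inline void enable_interrupts(void)
    {
        __asm volatile ("cpsie i");  /* one of the few places asm remains */
    }

    int main(void)
    {
        enable_interrupts();
        for (;;)
        {
            /* application logic, written entirely in C */
        }
    }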

We can attribute about 7 percentage points of the growth in use of C to the reduction in use of assembly during these years; that shift alone took C from about 50% to nearly 60%.

But what’s also been happening in this time is that C++ has failed to capitalize on its earlier gains. The peak year for C++ use was apparently 2006, when it had a 33% share. Use of C++ as the primary language has since fallen, adding roughly another 10 percentage points to the use of C.

I didn’t include Java in the graph, but its use has been less common than assembly in every survey year, with a high point of 3% and a share of around 1% in each of the last three years. And no other language has emerged to claim greater than a 1% share.

What I make of all of this is that C remains the most cost-effective way to write embedded software. In hindsight, object-oriented languages have been tried but failed to establish their value to most programmers. C++ is a player but looks unlikely to ever eclipse its namesake.

What do you see in the data?

Cyberspats on the Internet of Things

Thursday, April 6th, 2017 Michael Barr

When you hear the words “weaponization” and “internet” in close proximity you naturally assume the subject is the use of hacks and attacks by terrorists and nation-state actors.

But then comes today’s news about an IoT garage door startup that remotely disabled a customer’s opener in response to a negative review. In a nutshell, a man bought the startup’s Internet-connected opener, installed it in his home, was disappointed with the quality, and wrote negative reviews on the company’s website and Amazon. In response, the company disabled his unit.

In the context of the explosion of Internet connections in embedded systems, this prompts several thoughts.

First and foremost: What does it mean to buy or own a product that relies for some functionality on a cloud-based server that you might not always be able to access? Is it your garage door opener or the manufacturer’s? And how much is that determined by fine print in a contract you’ll need a lawyer to follow?

Additionally: What if, in this specific situation, the company hadn’t made any public statements at all and had just remotely made the customer’s garage door opener less functional? There’d then have been no fodder for a news story. The company would’ve gotten its “revenge” on the customer. And the customer might never have known anything except that the product wasn’t to his liking. Investigating might have cost him time and money he did not have.

It’s almost certainly the case that this company would have seen better business outcomes if it had quietly disabled the unit in question. And there are so many other insidious ways to go about it, including bricking the unit, denying it future firmware updates, or even subtly downgrading its functionality.
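To make that concern concrete, here is a purely hypothetical sketch (every name in it is invented) of how little firmware it takes to gate a product’s core feature on a vendor’s server:

    /* Hypothetical sketch only; all names are invented. A device that
     * phones home for a per-unit entitlement flag can be silently
     * degraded by whoever controls the server. */
    #include <stdbool.h>

    extern bool cloud_get_flag(const char *device_id, const char *flag);
    extern void motor_open_door(void);
    extern void log_event(const char *msg);

    void handle_open_request(const char *device_id)
    {
        /* The server, not the owner, decides whether the door opens. */
        if (!cloud_get_flag(device_id, "unit_enabled"))
        {
            log_event("open request ignored");  /* the user sees nothing */
            return;
        }
        motor_open_door();
    }

From the outside, a unit disabled this way is indistinguishable from one that is merely flaky.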

Which brings us back to the weaponization of the Internet. Consumers have no choice but to trust the makers of their products, who have complete knowledge of the hardware and software design (and maybe also the digital signatures needed to make secure firmware updates). And these companies typically have all kinds of identifying data about individual customers: name, geographic location, phone and email address, product usage history, credit card numbers, etc. So what happens when the makers of those products are unhappy with one or more customers: from those posting bad product reviews all the way up to politicians and celebrities they may dislike?

Perhaps private companies are already attacking specific customers in subtle ways… How would we know?

Government-Sponsored Hacking of Embedded Systems

Wednesday, March 11th, 2015 Michael Barr

Everywhere you look these days, it is readily apparent that embedded systems of all types are under attack by hackers.

In just one example from the last few weeks, researchers at Kaspersky Lab (a Moscow-headquartered maker of anti-virus and other software security products) published a report documenting a specific pernicious and malicious attack against “virtually all hard drive firmware”. The Kaspersky researchers deemed this particular data security attack the “most advanced hacking operation ever uncovered” and confirmed that at least hundreds of computers, in dozens of countries, have already been infected.

Here are the technical facts:

  • Disk drives contain a storage medium (historically one or more magnetic spinning platters; but increasingly solid state memory chips) upon which the user stores data that is at least partly private information;
  • Disk drives are themselves embedded systems powered by firmware (mostly written in C and assembly, sans formal operating system);
  • Disk drive firmware (stored in non-volatile memory distinct from the primary storage medium) can be reflashed to upgrade it;
  • The malware at issue comprises replacement firmware images for all of the major disk drive brands (e.g., Seagate, Western Digital) that can perform malicious functions such as keeping copies of the user’s private data in a secret partition for later retrieval;
  • Because the malicious code resides in the firmware, existing anti-virus software cannot detect it, even when scanning the so-called Master Boot Record (see the sketch after this list); and
  • Even a user who erases and reformats his drive will not remove the malware.
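To see why host-side scans are blind to this sort of malware, consider the following conceptual sketch (the names, addresses, and helper functions are all invented). The drive’s firmware mediates every read the host issues, so the host’s view of the medium is whatever the firmware chooses to return:

    /* Conceptual sketch only; names, addresses, and helpers invented.
     * Because the firmware sits between the host and the medium, it can
     * simply deny that a hidden region exists. */
    #include <stdint.h>

    #define HIDDEN_FIRST_LBA  0x3A000000UL  /* hypothetical reserved range */
    #define HIDDEN_LAST_LBA   0x3A0FFFFFUL

    extern void medium_read(uint32_t lba, uint8_t *buf, uint32_t count);
    extern void fill_zeros(uint8_t *buf, uint32_t count);

    /* Invoked by the drive's command dispatcher for every host read.
     * (For brevity, reads straddling the hidden region are ignored.) */
    void handle_host_read(uint32_t lba, uint8_t *buf, uint32_t count)
    {
        if ((lba >= HIDDEN_FIRST_LBA) && (lba <= HIDDEN_LAST_LBA))
        {
            /* The secret partition reads back as empty sectors, so no
             * host-side scan (the Master Boot Record included) can
             * observe it. */
            fill_zeros(buf, count);
            return;
        }
        medium_read(lba, buf, count);  /* normal path */
    }

Reformatting the drive only rewrites sectors through this same interface; it never touches the firmware itself, which is why the malware survives.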

The Kaspersky researchers have linked this hack to a number of other sophisticated hacks over the past 14 years, including the Stuxnet worm attack on embedded systems within the Iranian nuclear fuel processing infrastructure. Credited to the so-called “Equation Group,” these attacks are believed to be the work of a single organization: the NSA. One reason: a similar disk drive firmware hack, code-named IRATEMONK, is described in an internal NSA document made public by Edward Snowden.

I bring this hack to your attention because it is indicative of a broader class of attacks that embedded systems designers have not previously had to worry about. In a nutshell:

Hackers gonna hack. Government-sponsored hackers with unlimited black budgets gonna hack the shit out of everything.

This is a sea change. Threat modeling for embedded systems most often identifies a range of potential attacker groups, such as: hobbyist hackers (who only hack for fun, and don’t have many resources), academic researchers (who hack for the headlines, but don’t care if the hacks are practical), and company competitors (who may have lots of resources, but also need to operate under various legal systems).

For example, through my work history I happen to be an expert on satellite TV hacking technology. In that field, a hierarchy of hackers emerged: organized crime syndicates had the best resources for reverse engineering and achieved practical hacks based on academic research; the syndicates initially kept tight control of new hacks in for-profit schemes; and most hacks eventually trickled down to the hobbyist level.

For those embedded systems designers making disk drives and other consumer devices, security has not historically been a consideration at all. Of course, well-resourced competitors sometimes reverse engineered even consumer products (to copy the intellectual property inside), but patent and copyright laws offered other avenues for reducing and addressing that threat.

But we no longer live in a world where we can ignore the security threat posed by the state-sponsored hackers, who have effectively unlimited resources and a new set of motivations. Consider what any interested agent of the government could learn about your private business via a hack of any microphone-(and/or camera-)equipped device in your office (or bedroom).

Some embedded systems with microphones are practically begging to be hacked. For example, the designers of new smart TVs with voice control capability are already sending all of the sounds in the room (unencrypted) over the Internet. Or consider the phone on your office desk: hacks of at least some VOIP phones are known to exist and allow for remotely listening to everything you say.

Of course, the state-sponsored hacking threat is not only about microphones and cameras. Consider a printer firmware hack that remotely prints or archives a copy of everything you ever print. Or a motion/sleep tracker or smart utility meter that lets burglars detect when you are home or away. Broadband routers are a particularly vulnerable point of most small office/home office intranets, and one that is strategically well located for snooping on and interfering with devices deeper in the network.

How could your product be used to creatively spy on or attack its users?

Do we have an ethical duty, or even an obligation, as professionals, to protect the users of our products from state-sponsored hacking? Or should we simply ignore such threats, figuring this is just a fight between our government and “bad guys”? “I’m not a bad guy myself,” you might (like to) think. And should it matter how repressive the government of the country in which our product is used happens to be?

I personally think there’s a lot more at stake if we collectively ignore this threat, and refer you to the following to understand why:

Imagine what Edward Snowden could have accomplished if he had a different agenda. Always remember, too, that the hacks the NSA has already developed are now (even if they weren’t before) known to repressive governments. Furthermore, they are potentially in the hands of jilted lovers and blackmailers everywhere. What if someone hacks into an embedded system used by a powerful U.S. Senator or Governor; or by a candidate for President (one you support, or one who wants to rein in the electronic security state); or by a member of your family?

P.S. THIS JUST IN: The CIA recently hired a major defense contractor to develop a variant of an open-source compiler that would secretly insert backdoors into all of the programs it compiled. Is it the compiler you use?

A Look Back at the Audi 5000 and Unintended Acceleration

Friday, March 14th, 2014 Michael Barr

I was in high school in the late 1980s when NHTSA (pronounced “nit-suh”), Transport Canada, and others studied complaints of unintended acceleration in Audi 5000 vehicles. Looking back on the Audi issues, and in light of my own recent role as an expert investigating complaints of unintended acceleration in Toyota vehicles, there appears to be a fundamental contradiction between the way Audi’s problems are remembered now and what NHTSA had to say officially at the time.

Here’s an example from a pretty typical remembrance of what happened, from a 2007 article written “in defense of Audi”:

In 1989, after three years of study[], the National Highway Traffic Safety Administration (NHTSA) issued their report on Audi’s “sudden unintended acceleration problem.” NHTSA’s findings fully exonerated Audi… The report concluded that the Audi’s pedal placement was different enough from American cars’ normal set-up (closer to each other) to cause some drivers to mistakenly press the gas instead of the brake.

And here’s what NHTSA’s official Audi 5000 report actually concluded:

Some versions of the Audi idle-stabilization system were prone to defects which resulted in excessive idle speeds and brief unanticipated accelerations of up to 0.3g. These accelerations could not be the sole cause of [long-duration unintended acceleration incidents], but might have triggered some [of the long-duration incidents] by startling the driver.

Contrary to the modern article, NHTSA’s original report most certainly did not “fully exonerate” Audi. Similarly, though there were differences in pedal configuration compared to other cars, NHTSA seems to have concluded that the first thing that happened was a sudden, unexpected surge of engine power that startled drivers, and that pedal misapplication sometimes followed.

This sequence of, first, a throttle malfunction and, then, pedal confusion was summarized in a 2012 review study by NHTSA:

Once an unintended acceleration had begun, in the Audi 5000, due to a failure in the idle-stabilizer system (producing an initial acceleration of 0.3g), pedal misapplication resulting from panic, confusion, or unfamiliarity with the Audi 5000 contributed to the severity of the incident.

The 1989 NHTSA report elaborates on the design of the throttle, which included an “idle-stabilization system,” and documents that multiple “intermittent malfunctions of the electronic control unit were observed and recorded”. In a nutshell, the Audi 5000 had a main mechanical throttle control, wherein the gas pedal pushed and pulled on the throttle valve via a cable, as well as an electronic throttle control for idle adjustment.

It is unclear whether the “electronic control unit” mentioned by NHTSA was purely electronic or whether it also contained embedded software. (ECU, in modern lingo, includes firmware.) It is also unclear what percentage of the Audi 5000 unintended acceleration complaints were short-duration events vs. long-duration events. If there was software in the ECU and short-duration events were more common, well, that would lead to some interesting questions. Did NHTSA and the public learn all of the right lessons from the Audi 5000 troubles?