embedded software boot camp

Let’s Go Wireless

November 18th, 2002 by Michael Barr

Being an electrical engineer with a home office who’s moved five times in a decade, I’ve gotten quite adept at setting up and maintaining a small network of computers, telephones, printers, and other office equipment on my own. In fact, I’d say I rather enjoy doing these things. I typically set aside one day at the beginning of each move—before bringing in any boxes or furniture—to configure the phone jacks for separate home and work lines, fax, and Internet access, and to run CAT5 cables and install jacks in each room for Ethernet.

I used to have to buy a few new cables, connectors, or tools each time I moved, but in recent years I’ve generally found that everything I need I already own. And as my resources and experience in this area have grown, I’ve even developed a filing system of sorts for the necessary equipment and cables. In my system there are precisely four categories of cables: power, audio/video, telecom/datacom, and computer. Related equipment such as power strips, antennae, network hubs, and spare hard drives is kept with the cables of its type.

I realized only during my most recent move that a significant change is in the air—literally. It turns out that the time required to connect all this stuff together is going down, not up. And that’s not simply because of my increasing experience. The fact is that wireless connections are becoming a mainstream reality—particularly in the communications area. And this is making my pile of telecom/datacom cables and cabling equipment increasingly irrelevant.

In our new home we have 900 MHz and 2.4 GHz wireless telephones as well as a wireless 802.11 LAN. Our early adoption of DSL a few years back eliminated the incoming phone line we had previously dedicated to low-speed dial-up (at about the same overall monthly cost). The wireless phones and 802.11 LAN make the physical location of the remaining two incoming phone lines irrelevant. So long as I go wireless, the DSL modem, router/firewall, and hub need not even be located near the computers in the office.

In fact, the whole concept of a home office is changing. That term used to mean that I was a slave to the phone and computer on my desk—just like most office workers. As I write this, however, I’m out on our new patio seated comfortably with my laptop, cordless phone by my side. I can access files on the local server, the Internet, and even over on the publisher’s side of a VPN. All of this took just minutes to set up, instead of the usual hours.

But now that I’ve seen the future, I’m disappointed that there are still so many wires left in other areas of my home and office. Ultimately, I’d like to see my collection of coaxial and RCA cables made obsolete along with those for serial, parallel, and USB devices. I won’t be happy until my Palm and server can sync simply by being in the same room, I can debug embedded software remotely from the patio as well, and my TV can show movies from a DVD player (or hard drive) anywhere.

As a consumer, I don’t care how any of this is accomplished. And though I want it to be secure, I don’t care how that is accomplished either. Whether it’s ultimately 802.11, Bluetooth, the proposed 802.15 hybrid of the two, or some other technology doesn’t much matter from the consumer perspective. As Nike says, we need to just do it.

Get Rich Slow

October 23rd, 2002 by Michael Barr

By a show of hands, how many of you jumped ship from a stable engineering company to a startup in the late 1990s? I bet if you didn’t, you at least thought about it or had a few offers. I never jumped ship myself, but I did straddle two boats at once, trying to make a quick extra million on a tight budget of night and weekend hours.

I guess I always knew the air would come out of the bubble at some point. And I sure as heck knew it wouldn’t be good to be on top if that happened. So my partners and I kept our day jobs and focused on long-term issues in our business planning. How would we profit from our ideas, in ways other than a quick sale of the company or an IPO? We figured we had identified a product and a market for it, so we needed only to develop the code and keep expenses lower than revenues while we tried to increase sales.

Still, we crossed our fingers and hoped as much as the next guy that we would time our business just right and make a bundle somehow. We certainly weren’t going to turn down a multi-million-dollar purchase offer. In fact, we felt confident enough in that valuation to turn down a bona fide small private funding source, along with free help from an experienced CEO, because both would have valued the company far less initially.

In the end, unfortunately, the bursting of the dot com bubble took not only the really bad ideas but also many good ones (including ours?) down with it. By the time we had filed our patent application, developed our prototype, and written our business plan, all of the funding sources for search engine enhancements had dried up. I suspect several of the venture capital firms and search engine companies we talked to would’ve jumped at the chance to be involved with our idea just a few months earlier. But the game was up. And two years later—long after we wrote off our personal investments in the company—the venture capital we needed to quit our jobs and work toward profitability still isn’t available. So I have a new plan.

My new plan is to get rich slow. To play the part of the tortoise rather than the hare. Engineering is a good stable profession, and one that generally pays well—especially if you have a specialty as in demand as real-time embedded systems design. It’s really not a bad life, if you can get it.

So rather than try to outwit or outplay, I’ll just try to outlast—and I’ll save every penny I can along the way. Besides, it’s quite a lot easier to stick to your core values and make the world a better place little by little when you’re not busy making an end run. To see how extremely disjoint the two paths can become, witness any of the recent corporate scandals.

Honesty, integrity, and responsibility should be the core values of all practicing engineers. And we should practice them outside of work as well. As fun as both are, there’s more to life than engineering and money. So I’m stopping to smell the roses more now, too. It sure is nice to have my nights and weekends back! And that’s worth a lot more than a million dollars to me!

Bad Code

September 20th, 2002 by Michael Barr

Enough with the bad code already! While I’ve spent the past few months discussing which language to use for embedded programming and how best to ensure a quality result, millions of lines of “bad code” have been newly written.

You’ve seen the kind of code I mean: modules and procedures carelessly divided (broken up as if to meet some arbitrary length limit, for example, rather than by purpose); variables randomly named, mostly global, and in large part no longer used; compiler warnings flagging a myriad of suspicious pointer and type conversions, all left unheeded; comments—what few there are—mostly outdated and in conflict with the nearby code; and other comments full of code that once did or meant something to somebody, but now doesn’t (or does it?).

Bad programmers can write bad code in any language. It’s time they and their code were dragged into the light. I’ve encountered bad assembly code, bad C code, and bad C++ code. I’m sure those who program regularly in Ada, Java, and every other language have uncovered bad code in those languages as well.

To achieve the best long-term results, it is often necessary to have the courage to discard such code and rewrite it. If an organization can accept that the existing code was never worth the money spent to develop it in the first place, it can move on and look forward to a brighter future. Ultimately, the total costs (including the rewrite) will probably be much lower.

I’ve replaced bad assembly code with new C code that was smaller, more efficient, and easier to maintain. It was also developed more quickly and cheaply than the bad code and had far fewer bugs at integration. I’ve similarly replaced bug-ridden C code with new C++ code that required half the code and data memory—and was just as efficient.

I’m not trying to suggest that C is better than assembly, that C++ is better than C, or even that the original authors in these examples chose the wrong languages to begin with. (I’ve also rewritten bad code in the same language as the original.) I’m just trying to make the point that assembly doesn’t always result in the most compact code; there’s skill involved in achieving that result. And C++ code can be just as compact and efficient as C code—if you know what you’re doing.

That, of course, is the important part: The programmer must know what he is doing. Too often that isn’t the case. However they manage to get themselves hired, bad programmers seem to exist in every organization. The decisions they make and the code they write create more problems, hassles, and bugs than any interviewer can imagine. The costs are unbearable, particularly in real-time/embedded devices.

Well? Don’t just stand there. Do something. If it’s your own code that needs the fixing: read a book like Code Complete and start learning how to write well-structured, easy-to-read code; obey the Ten Commandments for C Programmers; and get a copy of lint. And start using a version control system; it should help you feel comfortable deleting no-longer-needed code rather than commenting it out. If the fault lies elsewhere: tell someone who can do something about it before any more serious damage is done.

Quantum Programming

August 17th, 2002 by Michael Barr

I don’t buy many books about embedded systems. Most books of relevance find their way to me as review copies. I try to read, or at least skim, all those that look promising. Unfortunately, I’ve found that relatively few are worthy of permanent space on my bookshelf or recommendation to others. Which is why I want to tell you about one recent standout.

Perhaps you recall the article “State-Oriented Programming,” which appeared precisely two years ago in Embedded Systems Programming magazine? If not, suffice it to say it described a very simple manual coding technique for turning hierarchical state machines into compact C or C++ programs. Though I was certainly impressed by the authors and their approach, I didn’t then recognize the brilliance of the underlying ideas.

The primary author’s recent publication of a book-length treatment of the topic has helped me see the light. The book, by Miro Samek, is titled Practical Statecharts in C/C++ (CMP Books). However, the title of the book is a major understatement. What stands out in my mind is not the practical nature of the solution it provides to a common recurring problem, but the downright revolutionary ideas behind that solution.

The author’s so-called “Quantum Programming” may ultimately change the way embedded software is designed. Never before has there been a viable alternative to the traditional choice between a main()+ISR architecture and an RTOS. Preemptive multitasking works well only in specific scenarios, while main()+ISR brings its own set of problems as you try to scale it up. A third way is needed.

This book presents a solution, based on state machines, that is compact (5KB is typically all that’s required), realized in C and C++ (with no fees or royalties), of theoretical value in any language (even assembly), and capable of supporting multiple state machines running in parallel if necessary. It’s also the first good way I’ve seen to deal with the inheritance of state behavior in hierarchical state machines. Samek calls his implementation the “Quantum Framework.”

Before quantum programming, there were basically three approaches to state machine implementation: switch statements, tables of function pointers, and object-oriented programming constructs. The handling of substates in hierarchical state machines was complex in all three approaches. Hierarchical state machines are common, with part of each state’s behavior being determined by its parent state and the rest by the substate itself. This is difficult to implement in the traditional approaches because it either requires duplication of code or additional function/method calls. At best, the results tend toward spaghetti code.

In a nutshell, the quantum programming technique is a design pattern for direct efficient implementation of flat or hierarchical state machines. It uses the popular and proven UML statecharts as its graphical specification language, and leaves the choice of implementation programming language up to the developer. Hierarchical states are implemented via an externally-driven “event processor,” the use of which ensures that substates need not duplicate the functionality of their parents (and grandparents).

I believe that Miro Samek’s innovative techniques will quickly become popular, and I have already put them to good use. If you read only one book about embedded systems this year, make it Practical Statecharts in C/C++.

Clash of Titans

July 6th, 2002 by Michael Barr

In the March 2002 edition of Embedded Systems Programming magazine, Jim Turley predicted “The Death of Hardware Engineering.” For evidence, he pointed to the fact that hardware design, especially chip design, has become almost exclusively the domain of programmers using Verilog and VHDL. An emerging trend toward the use of more popular software development languages—such as C, C++, or Java—to perform “system design” (with the compiler automatically deciding what is best done in hardware and software) could indeed put an end to much of hardware design as we know it.

However, I’m not convinced the trend is as one-sided as Jim says. At the same time that hardware designers are moving toward writing more code, an increasing number of software designers are moving toward more graphical forms of design. Automatic state machine generators and similar tools have started us down this path. A technology called Executable and Translatable UML offers capabilities that could ultimately make this a broader industry trend.

Of course, the increasing overlap between hardware and software does lead to some present confusion, and much uncertainty about the future. Hardware itself is becoming increasingly “soft.” Custom hardware on a board has already given way to custom hardware on a chip. And we’re now seeing a shift toward integrated chips with a fixed processor surrounded by a flexible array of programmable logic. That’s the perfect target platform for a “system design language” to dominate.

Traditional hardware and software tool vendors are eyeing each other nervously across this narrowing digital divide. If you only design a system, instead of a system consisting of separate hardware and software designs, where will you go for your tools? To Wind River or Mentor Graphics? Though not traditionally direct competitors, companies like these are concerned they may increasingly vie with one another in the near future.

Both companies, each a leader in its respective domain, are beginning to position themselves accordingly. Wind River has entered a partnership with Xilinx. Apparently, their goal is a version of the Tornado development tool suite that adds hardware design and synthesis capabilities for the Xilinx programmable logic surrounding an embedded processor running VxWorks. They’ve already delivered a set of tools for working with the current generation of Xilinx FPGAs.

Meanwhile, in its own market, Mentor Graphics is laying a path for its current codesign/coverification customers to follow toward greater involvement in software development. Hence, in my opinion, its recent acquisition of Accelerated Technology. The threat to Wind River is implicit in that acquisition. Though Mentor already had an RTOS product of its own (VRTX), it no longer had the software development perspective or in-house experience it will need to compete in the not-so-distant future. The former Accelerated, led by its former president Neil Henderson, is now Mentor’s Embedded Systems Division.

Working separately, from their own markets, but both following this convergence trend, these companies (and others) will inevitably collide as the hardware and software do. That’s when things will get really interesting in the tools market. But I doubt it will truly be the end of hardware engineering.