embedded software boot camp

RIP VOIP

December 4th, 2006 by Nigel Jones

As someone who has worked in telecomms, I was excited by the arrival of VOIP. However, after two years of variable quality, extended outages and just plain weird behaviour, I’ve had it. It’s clear to me that VOIP just isn’t ready for prime time, and so I have decided to pull the plug. The latest frustration: an inability to receive incoming calls for the last four days, with no resolution in sight. The technical support department informs me that it’s a ‘router programming error’. Whether they really mean a router configuration error or a bug in the router firmware is unclear. Regardless, it’s presumably a tough enough problem that it hasn’t been fixed in four days.

The really bad news here is my experience when I tried to get Verizon to provide me with a POTS line. One of my prime reasons for jumping on VOIP as soon as I could was my feeling that Verizon was a dreadful company – one with questionable ethics and really awful customer service. Today, despite calling the number on the Verizon website for ‘add a new line’, I had to endure a voice-prompted menu system and three different people before I could do the most mundane thing Verizon has to offer – order telephone service. For this privilege, Verizon is charging me a $44 start-up fee (to plug a few numbers into a computer) and a cost double that offered by my VOIP provider. Apparently Verizon’s business has not suffered enough – yet.

So what’s the relevance of this tale of woe to embedded systems? Not much really, other than to note that when the latest and greatest doesn’t live up to its billing, one ends up with very annoyed customers. So next time marketing wants to over-hype what you can deliver, rein them in hard and fast. Your customers will thank you.


Help! My third party source code doesn’t comply with my coding standards

November 24th, 2006 by Nigel Jones

Two big trends in the embedded world are on a collision course – and the resolution isn’t going to be easy. The two trends are the requirements that all code meet internal coding standards and the use of third party code.

Organizations have gradually been getting religion about having and enforcing coding standards. As well as spelling out what the source code should look like and making rules for what is kosher, many internal standards now also require code to be ‘Lint free’, and possibly to conform to various external standards, such as those laid down by MISRA.

Simultaneously, organizations have been striving to improve productivity. One way of doing this is to turn to code re-use. Code re-use is normally discussed in the context of code that you’ve already developed being re-used in subsequent projects. However, a far more powerful paradigm is to use code that others have developed. Need a CRC algorithm, or a way of computing an MD5 hash – head to the Internet to find your source code. Need to develop a complex state handler – hello visualSTATE. Need to develop a GUI – take your pick from a plethora of component suppliers. Now if you were developing for a PC, most of this code would be supplied in binary format. However, with the wide variety of embedded targets and compilers, the chances are you’ll get source code that you’ll need to compile.
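As a tiny illustration of the sort of algorithmic snippet one goes hunting for on the Internet, here is a minimal bitwise CRC-8 sketch. The polynomial and parameters (0x07, zero initial value, no reflection) are my choice for illustration; a real project must match whatever CRC variant its protocol or file format specifies.

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-8: polynomial 0x07, initial value 0x00, no reflection,
 * no final XOR. Parameters chosen purely for illustration. */
uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00u;
    for (size_t i = 0u; i < len; i++) {
        crc ^= data[i];
        for (uint8_t bit = 0u; bit < 8u; bit++) {
            if (crc & 0x80u) {
                crc = (uint8_t)((uint8_t)(crc << 1) ^ 0x07u);
            } else {
                crc = (uint8_t)(crc << 1);
            }
        }
    }
    return crc;
}
```

Of course, whether a snippet like this passes Lint cleanly, or uses your shop’s naming conventions, is exactly the problem this post is about.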

Now, the chances of the source code matching your coding standards are essentially nil. So what is to be done? My experiences to date have been pragmatic – but not pretty.

1. For small pieces of code, I simply rewrite them to bring them up to standard.
2. For third party libraries, such as a graphics library, it is usually impractical, if not illegal, to modify the source code, and so one is forced to accept the code as is.
3. For machine-generated code, even if it’s small, rewriting the code is pointless, since the chances are you’ll be regenerating it later and overwriting your work. Thus, once again, one is forced to accept the code as is.

So what is to be done? At present, my coding standards procedure allows one to issue a variance where code doesn’t comply (in much the same way that MISRA allows variances to be issued). Although this is OK, let’s recognize it for what it is – a cop-out. What we really need is for the suppliers of source code to recognize and adhere to various ‘standards’. For example:

1. Use the C99 data types, folks. I’m tired of seeing UINT8 definitions everywhere when the ISO C99 standard stipulates that uint8_t (from <stdint.h>) is an exactly 8-bit unsigned type.
2. Make your code Lint free. If you’re selling source code, it’s in your interest to make it as clean as possible. PC-Lint from Gimpel is the gold standard, so make sure you can pass it with a clean bill of health (and I don’t mean by suppressing every complaint it has).
3. Make your code MISRA compliant. MISRA can be a pain – but its intentions are good. If nothing else, making your code MISRA compliant will increase the size of your target market. This issue has been recognized by IAR, whom I’d like to congratulate for making the code generated by the upcoming new release of visualSTATE MISRA compliant.
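To make the first point concrete, here is one common migration approach, sketched as my own illustration rather than anything prescribed above: map the legacy vendor-style names onto the C99 fixed-width types, so third-party code that insists on UINT8 still compiles while new code uses <stdint.h> directly.

```c
#include <stdint.h>

/* Legacy vendor-style aliases mapped onto the C99 fixed-width types.
 * New code should use uint8_t and friends directly; these typedefs
 * exist only to ease migration of third-party code. */
typedef uint8_t  UINT8;
typedef uint16_t UINT16;
typedef uint32_t UINT32;

typedef int8_t   INT8;
typedef int16_t  INT16;
typedef int32_t  INT32;
```

Unlike a bare `typedef unsigned char UINT8;`, this shim inherits the guarantee that each type has exactly the advertised width on any conforming C99 compiler, or fails to compile if the target can’t provide it.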

What if you are just an honest Joe, putting code out there for all to use and enjoy? Well, why not adhere to the same rules? It’ll make your code more useful – and after all, isn’t that the point of publishing it in the first place?


Unexpected uses and the consequences thereof

November 3rd, 2006 by Nigel Jones

I’ll frame today’s post as one of those lateral thinking questions – one you may want to try to solve before reading on.

An engineer walks into a meeting, unpacks his laptop and an Ethernet hub, powers both up and then connects an Ethernet cable between the laptop and the hub. No other connections are made to the hub. Explain.

Well I suppose two obvious answers are that the engineer is nuts (likely), or that the engineer doesn’t understand the basics of Ethernet technology (less likely). Of course, in this case, the engineer is me, and while I can’t really attest to my mental state, I do know a thing or two about Ethernet. So what is causing this strange behaviour?

Well, like many engineers, I use some very expensive software. The vendors of this software, in an effort to protect their product from unpaid copying, lock the software to the computer’s NIC. (For the uninitiated, every Ethernet interface IC on the planet has a unique MAC address; thus any computer with a NIC has a built-in unique identifier.) Now the vendor of my laptop (Toshiba), in a sensible effort to conserve power, powers down the NIC when it detects no valid signal on the Ethernet port. When the NIC is powered down, it can’t respond to requests for its MAC address, and so the copy protection scheme complains and I can’t run my expensive software.
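The failure mode can be sketched as follows. This is a hypothetical illustration of my own, not the vendor’s actual scheme: `read_mac` stands in for whatever OS-specific query the real protection uses, and the stub readers simulate an awake and a powered-down NIC respectively.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical node-locked licence check. The reader callback stands in
 * for the OS-specific MAC query; it is assumed to fail when the NIC has
 * been powered down -- exactly the failure mode described above. */
typedef bool (*mac_reader_t)(uint8_t mac[6]);

bool license_ok(mac_reader_t read_mac, const uint8_t licensed_mac[6])
{
    uint8_t mac[6];
    if (!read_mac(mac)) {
        return false;   /* NIC asleep: MAC query fails, so the check fails */
    }
    for (int i = 0; i < 6; i++) {
        if (mac[i] != licensed_mac[i]) {
            return false;   /* running on a different machine */
        }
    }
    return true;
}

/* Stub readers for illustration only. */
static const uint8_t kMac[6] = { 0x00, 0x1B, 0x2C, 0x3D, 0x4E, 0x5F };

bool nic_awake(uint8_t mac[6])
{
    for (int i = 0; i < 6; i++) { mac[i] = kMac[i]; }
    return true;
}

bool nic_asleep(uint8_t mac[6])
{
    (void)mac;
    return false;   /* powered-down NIC cannot report its address */
}
```

Plugging a live cable (or a powered hub) into the port keeps the NIC awake, which is why the ritual with the Ethernet hub works.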

Who is to blame here? I can’t really fault the software vendor for wanting to protect their investment, and I can’t blame Toshiba for wanting to minimize the power consumption of their product. I suppose it would be nice if Toshiba provided a utility to prevent the auto power-down – but that’s probably inconsistent with them trying to make the system easy to use for the average consumer. I think the answer is that the fault lies with us in the engineering community. We value great tools, but apparently enough of us (and our employers) are dishonest enough that we’ll copy them if we get the chance. Apparently part of the price we pay for this is looking like idiots when we walk into meetings…


Knowledge versus Understanding

October 11th, 2006 by Nigel Jones

Every month or two, a ‘Technical Recruiter’ from one of the larger placement companies calls me up to see if I’m available for work. Most of the time I’m not, and so the conversation terminates quite quickly. However, once in a while I am available, and so the inevitable request for an updated resume is made. After I send an updated resume, the ‘Technical Recruiter’ calls back to discuss what I have to offer.

Well, for the first time in several years, I recently went through this rigmarole. The conversation with the recruiter was illuminating, yet rather depressing. To paraphrase, the conversation went like this:

Recruiter: “What RTOS experience do you have?”

Me: “VxWorks, MicroC/OS-II, embedded Linux, various bespoke systems.”

Recruiter: “No others?”

Me: “Isn’t it more important that someone understands the benefits and limitations of an RTOS rather than knowing the particular API of a specific RTOS?”

Recruiter: After a long pause. “Our clients like someone that can hit the ground running.”

I see two possibilities here.
1. The ‘technical recruiter’ has no technical knowledge and is nothing more than a matcher of acronyms and buzzwords.
2. His clients really are saying to him, “We need someone with experience of XYZ RTOS.”

If it’s the latter, then it appears that knowledge is a more highly prized commodity than understanding. Personally, given the choice between someone who knows an RTOS API and someone who really understands priority inversion, can discuss the pros and cons of RMA as a scheduling algorithm, and can explain the implications of making an RTOS call from within an ISR, I’d take the latter any day. Of course, one might claim that an experienced user of XYZ RTOS should be aware of these sorts of issues. However, in my experience, large swathes of the folks out there using an RTOS really don’t have a clue about what it’s doing for them – and what it’s costing them.

Thus my point is this. Next time you are looking for help, think about what you’d like the person to understand – as well as what they should know. I suspect you’ll end up with better help.


Fuse Blown

October 3rd, 2006 by Nigel Jones

As a professional developer of embedded systems, I use a lot of sophisticated tools and best practices to create the best embedded applications I can. Today, however, I ran into an issue which makes a mockery of much of what I (and presumably you) do.

So what is this issue, you ask? Well, in case you are unaware, two very popular microcontroller families (PIC and AVR, and probably others) require one to configure multiple parameters via fuse bits. These parameters typically cover critical hardware settings, such as oscillator type and frequency, brown-out settings, code protection bits and so on. These fuse settings are NOT programmable from within the application and hence are typically outside the programmer’s direct control. Thus a solution based upon devices in these families consists of both the programming image (i.e. the binary representation of your code) and the fuse bit settings.

Now, this wouldn’t be too bad if there were some way to combine both sets of information into one master programming file. In fact, both Microchip and Atmel allow one to do this within their IDEs. However, what happens when one needs to have the microcontrollers programmed on a high-speed production gang programmer? Well, I found out today – and it isn’t pretty.

The procedure is to supply an Intel Hex record file for the application, and to provide the programming house with an email detailing the required fuse settings! So, after using all the sophisticated tools at my disposal to craft a working embedded system, I ultimately have to rely upon the manual transcription of configuration bits into a programmer to ensure that the end product is actually programmed the way I need it to be.
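To see why the Hex file alone isn’t enough, it helps to look at what it actually contains. The sketch below (my own illustration) builds one Intel HEX data record: a byte count, a 16-bit address, a record type, the data bytes, and a two’s-complement checksum of everything before it. There is a record for every chunk of code and data – but no standard record for fuse or configuration bits, which is precisely the gap that forces the email.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Emit one Intel HEX data record (type 00) into 'out'.
 * The trailing checksum is the two's complement of the sum of all
 * preceding record bytes: count, address (both bytes), type, and data.
 * Returns the number of characters written. */
int hex_record(char *out, size_t outsz, uint16_t addr,
               const uint8_t *data, uint8_t count)
{
    uint8_t sum = count;
    sum += (uint8_t)(addr >> 8);
    sum += (uint8_t)(addr & 0xFFu);
    /* record type 00 contributes nothing to the sum */

    int n = snprintf(out, outsz, ":%02X%04X00",
                     (unsigned)count, (unsigned)addr);
    for (uint8_t i = 0u; i < count; i++) {
        sum += data[i];
        n += snprintf(out + n, outsz - (size_t)n, "%02X", (unsigned)data[i]);
    }
    n += snprintf(out + n, outsz - (size_t)n, "%02X",
                  (unsigned)((0x100u - sum) & 0xFFu));
    return n;
}
```

An industry-standard successor would need little more than an agreed extra record type carrying named configuration fields – yet no such manufacturer-independent convention exists today.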

This is patently absurd! We need an industry-standard programming file format that allows both the program image and the configuration bits to be defined, independent of the manufacturer, so that we can be confident that devices are programmed the way we want them to be. (Incidentally, checking a first-article device is of only limited benefit, since in many cases we want to set a lock bit that prevents anyone (including oneself) from reading anything about the device.)

Does anyone out there have any ideas on how we can get this problem solved?
