Archive for the ‘Compilers / Tools’ Category

visualSTATE

Tuesday, May 20th, 2008 Nigel Jones

I have been writing this blog now for about 18 months, and in reviewing my posts I’ve noticed that they are often critical of technologies, manufacturers, and products. Well, today is a first for me, because I’d like to offer my first product endorsement. The endorsement goes to visualSTATE from IAR. I’ve been using this product for about the same length of time I’ve had this blog and have concluded that it represents the biggest step forward in productivity for me since I made the move from assembly language to C. (Yes folks, the move from C to C++ was a virtual non-event for me, as I found almost no improvement in my productivity, mainly I suspect because I have written for years in object-oriented C.)

Anyway, back to the topic of visualSTATE. If you aren’t familiar with it, then you should be. It allows you to design complex, hierarchical state machines with ease and to push a button and obtain code that just seems to work. I have now completed three projects using this tool and am well on the way to finishing a fourth. In all cases, the boost to my productivity has been astonishing. I find that I spend most of my time on the functional design and almost no time on debugging the high level application.

visualSTATE’s main strengths seem to be in the following areas:

1. Products that are highly modal – i.e. a product can be in one of N operating modes depending upon circumstances.
2. User interfaces. I’ve had great success with products that contain bespoke LCD and membrane keypads.
3. Products that contain complex sequencing requirements, particularly when coupled with a plethora of failure modes that have to be handled.

I’ve found the learning curve on visualSTATE to be quite long – but definitely worth it. Although you can certainly be up and running in a day or so, I found that it took me a lot longer to work out how best to partition a problem between visualSTATE and traditional code. However, with experience I’m now finding that I rarely get it wrong anymore.

I’ve also found some very nice and unexpected benefits from visualSTATE. To wit:

1. Code reuse. visualSTATE does of course require some code support. However, I’ve found that a lot of this code can be reused. As a result, I can now bring up a new board with a visualSTATE processing engine running on it in a matter of hours. Try doing that with your average RTOS.
2. Although we all know that lots of small functions are “better” than a few big functions, human nature being what it is, we tend to just expand an existing function rather than decomposing it into its constituent parts. Well, when using visualSTATE I find that it almost forces one into writing lots of small (less than five lines) functions. I suspect that these small functions are part of the reason that my visualSTATE projects just seem to work with almost no debugging time.
3. Documentation. As well as the documentation benefits associated with small functions (i.e. the comments actually match the code!), visualSTATE comes with a terrific documentation tool. Many of my clients quite rightly demand excellent documentation on the designs I do for them. The documentation engine in visualSTATE makes this a breeze!
4. Communication. My clients often ask questions such as “what does the code do if …”. In a traditional project this usually means poring over complex code trying to ascertain the answer. With visualSTATE projects I find that most of the time I simply look at the state charts. Since the state charts are effectively the code (they are tied together), I can give an answer quickly and authoritatively – which makes my clients happy and helps assure me of future business.
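visualSTATE generates the event-processing engine itself, so the sketch below is not its output. It is merely my illustration, with invented names, of the table-driven style and the tiny action functions that this kind of tool pushes you toward:

```c
#include <assert.h>

/* A minimal modal state machine: small action functions selected by a
   state/event table. All names here are illustrative, not visualSTATE's. */
typedef enum { ST_IDLE, ST_RUN, ST_FAULT, N_STATES } state_t;
typedef enum { EV_START, EV_STOP, EV_ERROR, N_EVENTS } event_t;

static state_t state = ST_IDLE;

/* Each action does exactly one thing, so the comment can match the code. */
static state_t start_motor(void) { /* hardware call goes here */ return ST_RUN;   }
static state_t stop_motor(void)  { /* hardware call goes here */ return ST_IDLE;  }
static state_t latch_fault(void) { /* log and alarm go here   */ return ST_FAULT; }
static state_t ignore(void)      { return state; }

typedef state_t (*action_t)(void);

static const action_t table[N_STATES][N_EVENTS] = {
    /*            EV_START     EV_STOP     EV_ERROR    */
    /* IDLE  */ { start_motor, ignore,     latch_fault },
    /* RUN   */ { ignore,      stop_motor, latch_fault },
    /* FAULT */ { ignore,      ignore,     ignore      },
};

/* The entire dispatch engine is one line. */
void dispatch(event_t ev) { state = table[state][ev](); }
```

Once the machine is in this shape, answering a client’s “what happens if…” question is a matter of reading one row of the table.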

All in all, kudos to IAR for such a great tool.

Home

Omniscient Code Generation

Sunday, January 13th, 2008 Nigel Jones

Hi Tech Software has recently been making a lot of noise about its “Omniscient Code Generation”. In a nutshell, the technology appears to defer code generation until the entire program has been compiled, and then look at everything before generating the final object code. The end result is a dramatically more compact (and presumably faster running) program image. I haven’t had a chance to play with the compiler yet (in part because it’s still in beta testing). If they have done what they claim, then Hi Tech should be commended. On my list of things to check out about the technology will be:

  • Is the technology smart enough to track function calls via function pointers? If it is, then this is truly a neat piece of technology. If instead, it’s one of the limitations of the product, then its usefulness to me has just plummeted.
  • Does the technology also track function calls from within interrupts? My experience is that interrupt handling is still the poor relation of compiler technology. If Hi Tech does this, then I’ll be impressed.
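The function-pointer concern is easy to illustrate. In the fragment below (my own contrived example, not Hi Tech code), a compiler that builds its call graph only from direct calls would conclude that helper() is never called, so any whole-program decision about its locals or parameters would be unsafe unless indirect calls are tracked too:

```c
#include <assert.h>

/* helper() is never called directly anywhere in the program... */
static int helper(int x)
{
    return x + 1;
}

/* ...it is only ever reached through a function pointer, which a naive
   static call-graph builder cannot see. */
static int apply(int (*fn)(int), int x)
{
    return fn(x);
}
```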

Also of interest to me is how other compiler manufacturers will respond. Keil has performed global register coloring on its 8051 compiler for years. I suspect that the Hi Tech approach is a step beyond this, so there’s a chance that Keil will finally be knocked from its #1 position in 8051 code generation. IAR offers a multi-unit compilation option with some of its compilers. However, this option isn’t integrated into its Embedded Workbench, so it’s practically useless. With Hi Tech offering compilers for ARM, PIC & MSP430, I can see this really creating a burst of competition in the industry. Excellent!


Understanding Stack Overflow

Monday, June 4th, 2007 Nigel Jones

I suspect that many, if not all, bloggers are somewhat narcissistic. In my case it shows through in that I use one of the free services that keeps track of how many visitors I get and what brought them to this blog. Well, it turns out that many of the visitors to this blog get here not because of the brilliance of my writing, but because they did a Google search on “stack overflow”, often qualified by PIC, MSP430, etc. Many of these visitors, I suspect, leave empty-handed. Thus, in an attempt to make these visits less pointless, let me give you my take on what causes a stack overflow in an embedded system.

First of all, go read the Wikipedia description of stack overflow. There’s nothing wrong with the description – it’s just incomplete from an embedded systems perspective.

If you are having problems with 8 bit PICs, then you should read this. For other architectures, read on…

On the assumption that you are getting a stack overflow and that you aren’t performing recursion or attempting to allocate a large amount of storage on the stack, what can be going wrong? Here’s a check list.

  1. What’s your stack size set to? If you don’t understand the question then you need an introductory course to embedded systems programming. If you do understand the question – but don’t know the answer – then this is the most likely source of your problem. How can this be you ask? Well, most embedded systems compilers are designed to work with a particular family of processors. The low end of the family may have a tiny amount of memory (e.g. 128 bytes). As such setting the default stack size to 16 bytes may be a sensible thing to do. Thus, your first step is to ensure that the stack size is set to something reasonable for your system. Click here for advice on how to do this.
  2. Which stack is overflowing? Many processors and compilers support multiple stacks. A typical dichotomy is a call stack (upon which the return addresses of functions are stored) and a data or parameter stack (upon which automatic variables are stored). If you are using an RTOS, then typically there will be a shared call stack while each thread will have its own data stack. Thus, is it the shared call stack that is overflowing, or is it the parameter stack associated with a particular task? Once you’ve determined which stack is overflowing, finding out exactly what gets placed on that stack will help lead you to the solution to your problem. If you can see no obvious high-level language construct that is causing the problem, then the single most likely cause of your misery is an interrupt service routine…
  3. An interrupt service routine can use up an extraordinary amount of space on the stack. For a discussion of how this arises and its impact on performance, see this article. This problem is compounded if your system allows interrupts to be nested (that is, it allows an ISR to itself be interrupted).
  4. Certain library functions (printf() and its brethren are prime offenders) can use an enormous amount of stack space.
  5. If you are writing partially in assembly language, are you failing to pop every register that you pushed? This often occurs if you have more than one exit point from a function or ISR.
  6. If you are writing entirely in assembly language, did you set up the stack pointer correctly and do you know which way the stack grows?
  7. Have you made the mistake of programming a microcontroller that you don’t understand? For example, low end PIC processors have a tiny call stack which is easily overflowed. If you are programming a PIC and don’t know about this limitation, then quite frankly, I’m not surprised you are having problems.
  8. If none of the above solve your problem, then I’m afraid you most likely have a stack overwrite problem. That is, a pointer is being dereferenced that results in the stack being overwritten. This can often arise when you allocate an array on the stack and then access an element beyond the end of the array. Lint will find a lot of these problems for you. If you don’t know what Lint is, see this article. If you do know what Lint is and aren’t using it, then you deserve to be faced with these sorts of problems.
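To illustrate item 8, here is a contrived example of the classic pattern: a stack array that would be overwritten if the caller’s count were trusted blindly. The clamp is the fix; without it, the memcpy would spill past buf onto whatever the compiler placed next to it on the stack, often a saved return address:

```c
#include <stddef.h>
#include <string.h>

#define N_SAMPLES 4

/* Copies up to N_SAMPLES values into a stack buffer and sums them.
   The count is clamped: an unchecked count larger than N_SAMPLES
   would overwrite the stack beyond buf. */
int sum_samples(const int *src, size_t count)
{
    int buf[N_SAMPLES];
    int sum = 0;

    if (count > N_SAMPLES)   /* without this clamp, the memcpy below */
        count = N_SAMPLES;   /* would corrupt the stack              */

    memcpy(buf, src, count * sizeof buf[0]);
    for (size_t i = 0; i < count; i++)
        sum += buf[i];
    return sum;
}
```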

I have also written a related article on setting your stack size that you may find useful.
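As an aside, one empirical technique for choosing a stack size is “stack painting”: fill the stack region with a known pattern at startup, exercise the system hard, and then see how much of the pattern survives. The sketch below assumes a dedicated task stack that grows downward; the names and sizes are my own invention:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define STACK_SIZE 256u
#define PAINT      0xAAu

static uint8_t task_stack[STACK_SIZE];

/* Fill the entire stack region with the paint pattern before use. */
void stack_paint(void)
{
    memset(task_stack, PAINT, sizeof task_stack);
}

/* Count untouched paint bytes to estimate headroom. Assumes the stack
   grows downward, i.e. from high indices toward index 0, so unused
   paint accumulates at the low end of the array. */
size_t stack_headroom(void)
{
    size_t n = 0;
    while (n < STACK_SIZE && task_stack[n] == PAINT)
        n++;
    return n;
}
```

If the headroom ever approaches zero after a long soak test, the stack is too small for comfort.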


Tool Upgrades

Saturday, March 31st, 2007 Nigel Jones

As a consultant who does hardware, firmware & software work for my clients, I use a large array of software tools – half a dozen compilers, schematic capture and PCB layout tools, analysis tools, as well as the usual gaggle of productivity tools that non-engineers also use. Throw in the tools for running a business and my PC is a regular treasure trove of applications.

With all these tools, the number of upgrades / updates is starting to get out of hand. Every week, it seems I’m updating a major application. The most common scenario seems to be:

  1. I haven’t used a tool in a month or so.
  2. I invoke it – and it tells me that an update is available. Often the update is flagged as ‘mandatory’ or at least ‘recommended’.
  3. I accept the update.
  4. The download proceeds. Some of them are simply enormous (ever downloaded the Xilinx WebPack IDE?).
  5. The patch is then applied. The time this takes is often considerable.
  6. Finally – the dreaded ‘You must restart your computer’ directive. I’ve a dozen applications open, web pages marked, manuals at strategic places – and now I have to close them all down.

Having gone through all this rigmarole, I can finally start using the tool. Of course by now, I just want to ‘get on with it’, and so the release notes often get cursory attention. Inevitably, if I do read the release notes then I find the upgrade is completely useless to me (e.g. support for a new device that I’m not using). If I don’t read the release notes then of course there’s this really neat feature that’s been added that really makes life easier – and I don’t find out about it until weeks later.

Well – enough complaining. Do I have any suggestions? I think so. I’d like tool vendors to realize that their tool isn’t the only one in the box – and that many of us use it on a less than daily basis. With this perspective, I’d like the tool vendors to do the following:

  1. Download upgrades in the background. A lot of applications already do this – they all should.
  2. Inform me there is an update available when I close the tool rather than open it. That way I can allow the update to occur while I’m off doing productive work elsewhere.
  3. Do everything you can to avoid requiring the user to re-boot their computer.
  4. Limit updates to one or two a year. I know product managers want folks on support contracts to feel they are getting their money’s worth – but this only works if my life revolves around that tool – and it doesn’t!


Unexpected uses and the consequences thereof

Friday, November 3rd, 2006 Nigel Jones

I’ll pose today’s blog in the form of one of those lateral thinking questions – which you may want to try to solve before moving on to the rest of the post.

An engineer walks into a meeting, unpacks his laptop and an Ethernet hub, powers both up and then connects an Ethernet cable between the laptop and the hub. No other connections are made to the hub. Explain.

Well I suppose two obvious answers are that the engineer is nuts (likely), or that the engineer doesn’t understand the basics of Ethernet technology (less likely). Of course, in this case, the engineer is me, and while I can’t really attest to my mental state, I do know a thing or two about Ethernet. So what is causing this strange behaviour?

Well, like many engineers, I use some very expensive software. The vendors of this software, in an effort to protect their product from unpaid copying, lock the software to the computer’s NIC. (For the uninitiated, every Ethernet interface IC on the planet has a unique MAC address. Thus any computer with a NIC has a built-in unique identifier.) Now the vendor of my laptop (Toshiba), in a sensible effort to conserve power, powers down the NIC when it detects no valid signal on the Ethernet port. When the NIC is powered down, it can’t respond to requests for its MAC address, and so the copy protection scheme complains and I can’t run my expensive software.

Who is to blame here? I can’t really fault the software vendor for wanting to protect their investment, and I can’t blame Toshiba for wanting to minimize the power consumption of their product. I suppose it would be nice if Toshiba provided a utility to prevent the auto power down – but that’s probably inconsistent with their trying to make the system easy to use for the average consumer. I think the answer is that the fault lies with us in the engineering community. We value great tools, but apparently enough of us (and our employers) are dishonest enough that we’ll copy them if we get the chance. Apparently part of the price we pay for this is looking like idiots when we walk into meetings…
