For regular readers of this blog, I apologize for turning once again to the topic of my nom de guerre. If you really don’t want to read about stack overflow again, then just skip to the second section of this posting, where I address the far more interesting question of why anyone uses an 8-bit PIC in the first place.
Anyway, the motivation for this post is that the most common search term that drives folks to this blog is ‘PIC stack overflow’. While I’ve expounded on the topic of stacks in general here and here, I’ve never explicitly addressed the problem with 8-bit PICs. So to make my PIC visitors happy, I thought I’d give them all they need to know to solve the problem of stack overflow on their 8-bit PIC processors.
The key thing to understand about the 8-bit PIC architecture is that the stack size is fixed in hardware. It varies from a depth of 2 for the really low-end devices to 31 for the high-end 8-bit devices. The most popular parts (such as the 16F877) have a stack size of 8. Every (r)call consumes a level, as does the interrupt handler. To add insult to injury, if you use the In-Circuit Debugger (ICD) rather than a full-blown ICE, then support for the ICD also consumes a level. So if you are using a 16-series part (for example) with an ICD and interrupts, then you have at most 6 levels available to you. What does this mean? Well, if you are programming in assembly language (which, when you get down to it, was always the intention of the PIC designers), it means that you can nest function calls no more than six deep. If you are programming in C then, depending on your compiler, you may not even be able to nest functions this deep, particularly if you are using size optimization.
So on the assumption that you are overflowing the call stack, what can you do? Here’s a checklist:
- Switch from the ICD to an ICE. It’s only a few thousand dollars difference…
- If you don’t really need interrupt support, then eliminate it.
- If you need interrupt support then don’t make any function calls from within the ISR (as this subtracts from your available levels).
- Inline low-level functions.
- Use speed optimization (which effectively inlines functions).
- Examine your call tree and determine where the greatest call depth occurs. At this point either restructure the code to reduce the call depth, or disable interrupts during the deepest point.
- Structure your code such that calls can be replaced with jumps. You do this by only making calls at the very end of a function, so that the compiler can simply jump to the new function. (Yes, this is a really ugly technique.)
- Buy a much better compiler.
If you are still stuck after trying all of these, then you really are in a pickle. You could seek paid expert help (e.g. from me or some of the other folks who blog here at embeddedgurus), or you could change CPU architectures. Which leads me to:
So why are you using a PIC anyway?
The popularity of 8-bit PICs baffles me. The architecture is awful – the limited call stack is just the first dreadful thing. Throw in the need for paging and banking, together with the single interrupt vector, and you have a nightmare of a programming model. It would be one thing if this were the norm for 8-bit devices – but it isn’t. The AVR architecture blows the PIC away, while the HC05 / HC08 are also streets ahead of the PIC. Given the choice, I think I’d even take an 8051 over the PIC. I don’t see any cost advantages, packaging advantages (Atmel has just released a SOT23-6 AVR which is essentially instruction set compatible with their largest devices) or peripheral set advantages. In short, I don’t get it! Incidentally, this isn’t an indictment of Microchip – they are a great company and I really like a lot of their other products, their web site, tech support and so on (perhaps this is why the PIC is so widely used?). So to the (ir)regular readers of this blog – if you are using 8-bit PICs, perhaps you could use the comment section to explain why. Let the debate begin!