embedded software boot camp

PIC stack overflow

Saturday, April 25th, 2009 by Nigel Jones

To regular readers of this blog, I apologize for turning once again to the topic of my nom de guerre. If you really don’t want to read about stack overflow again, then just skip to the second section of this posting, where I address the far more interesting topic of why anyone uses an 8-bit PIC in the first place.

Anyway, the motivation for this post is that the most common search term that drives folks to this blog is ‘PIC stack overflow’. While I’ve expounded on the topic of stacks in general here and here, I’ve never explicitly addressed the problem with 8-bit PICs. So to make my PIC visitors happy, I thought I’d give them all they need to know to solve the problem of stack overflow on their 8-bit PIC processors.

The key thing to understand about the 8-bit PIC architecture is that the stack size is fixed. It varies from a depth of 2 for the really low-end devices to 31 for the high-end 8-bit devices. The most popular parts (such as the 16F877) have a stack size of 8. Every (r)call consumes a level, as does the interrupt handler. To add insult to injury, if you use the In Circuit Debugger (ICD) rather than a full-blown ICE, then support for the ICD also consumes a level. So if you are using a 16 series part (for example) with an ICD and interrupts, then you have at most 6 levels available to you. What does this mean? Well, if you are programming in assembly language (which, when you get down to it, was always the intention of the PIC designers) it means that you can nest function calls no more than six deep. If you are programming in C then, depending on your compiler, you may not even be able to nest functions this deep, particularly if you are using size optimization.

So on the assumption that you are overflowing the call stack, what can you do? Here’s a checklist:

  • Switch from the ICD to an ICE. It’s only a few thousand dollars difference…
  • If you don’t really need interrupt support, then eliminate it.
  • If you need interrupt support then don’t make any function calls from within the ISR (as this subtracts from your available levels).
  • Inline low-level functions.
  • Use speed optimization (which effectively inlines functions).
  • Examine your call tree and determine where the greatest call depth occurs. At this point either restructure the code to reduce the call depth, or disable interrupts during the deepest point.
  • Structure your code such that calls can be replaced with jumps. You do this by only making calls at the very end of the function, so that the compiler can simply jump to the new function. (Yes this is a really ugly technique).
  • Buy a much better compiler.

If you are still stuck after trying all these, then you really are in a pickle. You could seek paid expert help (e.g. from me or some of the other folks that blog here at embeddedgurus) or you could change CPU architectures. Which leads me to:

So why are you using a PIC anyway?

The popularity of 8-bit PICs baffles me. The architecture is awful – the limited call stack is just the first dreadful thing. Throw in the need for paging and banking together with the single interrupt vector and you have a nightmare of a programming model. It would be one thing if this were the norm for 8-bit devices – but it isn’t. The AVR architecture blows the PIC away, while the HC05 / HC08 are also streets ahead of the PIC. Given the choice I think I’d even take an 8051 over the PIC. I don’t see any cost advantages, packaging advantages (Atmel has just released a SOT23-6 AVR which is essentially instruction set compatible with their largest devices) or peripheral set advantages. In short, I don’t get it! Incidentally, this isn’t an indictment of Microchip – they are a great company and I really like a lot of their other products, their web site, tech support and so on (perhaps this is why the PIC is so widely used?). So to the (ir)regular readers of this blog – if you are using 8-bit PICs, perhaps you could use the comment section to explain why. Let the debate begin!


38 Responses to “PIC stack overflow”

  1. Gauthier says:

    Could it be that Microchip is better at marketing? Don’t they give free samples to universities? How about Atmel?

  2. Anonymous says:

    They are good at marketing, for sure. An inferior product at a higher price, and one of the least compiler-friendly micros out there. Free samples for hobbyists and educational purposes worldwide (I got all my PICs this way, back when I developed my EE degree projects).

  3. Nigel Jones says:

    If it is a marketing issue, then what does it say about engineers? Aren’t we supposed to be objective and logical in our design decisions? If we aren’t, then the only explanation is that we go with what we are comfortable with, rather than what is right. Perhaps this explains a lot!

    • Anonymous says:

      One thing Microchip is good at is the way they support their customers. I have worked with MCUs made by other companies, and usually you get support from a 3rd-party distributor rather than from the MCU company itself. We as engineers don’t have the luxury of time in our product development, and having good support is just as important as technical specifications.

      Microchip’s datasheets are also way better than others’ (although I find NEC/Renesas datasheets easy to read as well).

  4. Gauthier says:

    Objective and logical in our decisions, yes… given the time. If management pushes for fast development, there may be no other way than going with what you already know works (although not best). For example, what you used in college, which seems a good place for marketing to focus on. That’s why I love to read blogs such as yours, and embedded.com for example. It gives me insights into other alternatives.

  5. Nigel Jones says:

    I think you are correct about management induced time pressures. It’s another symptom of ‘if you aren’t coding or debugging then you aren’t working’. Time spent on the front end of a project always pays dividends on the back end – the trick is how to convince management.

  6. GregK says:

    Have you noticed that almost every problem can be treated as a management problem? In fact I think it is one. Often, to management, software development is ‘transparent’: you get something to do with the hardware and you do it. If you are lucky, someone asks how much ROM you expect to need; I have never been asked what RAM size I need… so call stack depth is a complete abstraction to them.

  7. Miro Samek says:

    To add insult to injury, the 8-bit PIC microcontrollers also have the worst code density in the industry (see my blog post “Insects of the computer world” at http://www.embeddedgurus.net/state-space/2009/03/insects-of-computer-world.html). The 8-bit PIC is possibly the worst imaginable embedded processor in the history of mankind. PIC is not just braindead; it is outright dangerous. It is also the most popular chip in the world… This doesn’t add up in my mind. What’s wrong with our industry? Is intellectual inertia the most dominant force for us?

  8. Kyle says:

    I think Microchip enjoys the benefit of being familiar to most engineers and of being the “incumbent part” on many redesigns. And while they may lag technologically today, they achieved a number of firsts in the ’90s which put them in this position. They were the first to break the $3 barrier, and then the $1 barrier. They were the first in 8-pin packages. They were the first to give away an IDE – not just command-line tools. (The IDE price isn’t really an issue once you get into a serious development project, but a free one helps when you just want to take a look at something you are unsure of using.) Atmel may have beaten them at their own game technologically, but their marketing is still weak. I recently wanted to play around with an SPI interface part and thought it would be interesting to drive it with a USB interface and a PC application. I requested samples from Microchip and Atmel – I had found sample USB interface code for each processor, but I wanted the option to use either after studying the examples. The Microchip part arrived in two days. The Atmel part took six weeks to get to me. If that had been a “real” project, the decision would have been made long before the sample arrived.

  9. Nigel Jones says:

    Good point Kyle. If anyone from Atmel is reading this blog then I think it should give you cause to pause. So far no one has defended the PIC on technical grounds – instead it’s down to marketing. I’m struck by the parallels with the PC / Mac debate.

  10. Stephen says:

    From my perspective, PICs are readily available and cheap, but also have a very good range of peripherals; indeed, these are what keep me using PICs, as you can do some pretty powerful stuff. Most programmers are isolated from the problems of the architecture as they use higher-level languages. The PIC also has a few free C compilers available, which makes it ideal for hobbyists and EEs on a budget. Also, for prototyping they are great, as all the more basic ones are available in DIL packages and so are very easy to breadboard. I believe Microchip is also addressing some of the architecture issues with an enhanced baseline range. I guess the use of PICs also depends on what you are trying to do. For complex programs, granted, they aren’t that good, but for implementing simple functions they are pretty easy to use.

    • Jon says:

      This. Great peripherals. 8-bit PICs are a perfectly justifiable choice of MCU for smaller projects, where the peripheral mix can make all the difference.
      Regards
      Jon

  11. J. Peterson says:

    I wanted a cheap, low pin count part with (here's the catch) a built-in UART. The last requirement pretty quickly left me with the PIC16F688. I've found the BKND CC5X compiler does a reasonable job of letting you program the 16F688 in some semblance of C while generating efficient code. Well worth the $220. By the way, don't ignore the 16-bit PIC devices (PIC24F, dsPIC30 etc). Unlike the smaller chips, the 16-bit PIC line has a very clean architecture that works very well with C. They also have some great features (e.g., the ability to map internal peripherals to arbitrary pins on the fly) that other chips don't have.

  12. Anonymous says:

    I entered with the PIC18, mainly attracted by the low barrier to entry and the abundance of peripherals like UARTs, timers/counters, interrupt pins and even integrated Ethernet. I later migrated to the dsPIC33 because cost is not important for us (we ship 10 units a year, and some of the optocouplers on the board are more expensive than the most expensive Microchip CPU), and our electronics man was able to solder even the 100-pin devices by hand. 100 pins, 2 UARTs, CAN, I2C, 2x SPI, a quadrature decoder, 6 timers, 6 interrupt pins, block-pin interrupts; the list goes on and on, for EUR 5. Programmed in (gc)C. The only thing I hate is the missing Ethernet support in the 16-bit line.

  13. Anonymous says:

    I can tell you why PICs are most common: Microchip has the BEST documentation by far. The "getting started" series from Microchip is just fantastic. I myself learned how to program a PIC/PC interface in assembly with the help of all the PDFs that Microchip has online, without ANY knowledge of high- or low-level programming. Atmel's documentation is much harder to understand for an amateur. /Sebastian

  14. Anonymous says:

    1) I don't agree with the economic tradeoffs selected by Atmel. PICs are ruthlessly cheap, fast, and high-feature compared to AVRs because Microchip didn't set out with the (quixotic and misdirected) goal of running GCC on a $1.90 part. 2) I find the Atmel marketing materials and documentation to be heavy-handed and evangelistic. 3) The Atmel demo boards are not as extensive or interesting as the PIC equivalents. 4) PICs have been around longer; it's like Bing vs. Google. Part of me won't use Bing simply because Google got there first and I don't see why the gents behind Bing can't find something more creative to do with their time. So a couple of geniuses in Scandinavia made a little RISC chip for their senior project… who cares? The message from the Atmel proponents seems to be "quit using that bloody primitive PIC and come into the 21st century, you knuckle-dragger" and I just don't respond to that. I'm paraphrasing and reading between the lines a bit, to be sure, but this is my impression. This carries into the documentation as well: the Atmel materials tend to drone on theoretically where the PIC docs are ruthlessly hands-on. Beyond that, I think it's a question of economics. Who really wants to write high-level code for a $1.00 part?

  15. Anonymous says:

    I read above that "no one has defended the PIC on technical grounds – it's all marketing," which supposedly calls to mind the PC-vs.-Mac dichotomy. I don't agree with that. I think a better analogy is UNIX versus Multics. The PIC philosophy is very similar to UNIX's: get a given piece of functionality into as many hands as possible, as quickly and cheaply as possible. Edge cases (e.g. applications that might nest function calls beyond the capabilities of an 8-entry stack) are not allowed to impact the large majority of people with more rational needs. Like UNIX, the PIC architecture doesn't make heroic or costly attempts to deal with user error. If you exceed the boundaries of the fixed stack, the resultant behavior is potentially destructive, but it is also easy to understand and presumably easy to implement… two very important points in its favor. In the case of UNIX, this "less-is-more" approach was justified because the team had tried to do something better (Multics), and it turned into an unusable behemoth. In the case of the PIC, the justification is that this narrow, undistracted focus has given us a cheap, fast, and comprehensible device. And I can't overemphasize this last point… I find it refreshing how Microchip's marketing materials enthuse that there are "only 35 instructions to learn." When I read things like that, I think "Someone else gets it. Sometimes less really is more." Practically, one big dividend of the PIC's simplistic assembly language is that it's quite easy to determine execution time. Much of the time, it's possible to determine execution time by measuring the code with a ruler! Sure, higher-level features are nice, but by no means do they help with this task (which is crucial in real-time systems). In my opinion, the architects of the AVR were too distracted by notions gleaned outside the embedded milieu. Sure, PIC assembly is crap if you're trying to port GCC to it.
    But there's nothing particularly wrong with PIC assembly for an assembly-language programmer. Having worked with several assembly languages, I can assure you that there are scarier monsters than the PIC. For one thing, its assembly language is small. This reduces the opportunity to write code that's slower than it needs to be. Atmel makes a big stink over the word "orthogonal," but I think PIC assembly is very regular and logical. All of the byte-oriented instructions have a version that places the result in a register and a version that places it in the accumulator. There are no exceptions, nor are extra modes grafted onto select instructions. This is a great deal more "orthogonality" than one finds in, for example, x86 assembly. I should add that simplicity doesn't just translate to cheap and comprehensible. It also translates into low power consumption and high speed. So, in summary, I think what many people overlook is the issue of "tradeoffs." And I do not think the issues of business can be separated from the abstract concept of "design" quite so cleanly as many of the other posts propose.

  16. krishnaprasad says:

    Stack overflow – the program never reaches its return instruction, or calls are nested more than 8 deep.

  17. Nico says:

    I’m still wondering why nobody has mentioned code portability (protection of investment!). Since the early ’90s most product development cost has gone into software. Due to the archaic architecture of the 8-bit PICs you can’t port C code, because of the bank switching and the lack of pointer support.

    I’m involved in a couple of projects that involve legacy PIC designs and new ARM-based designs. I need to write the same code twice. That’s just a waste of effort and money. I have been using TI’s MSP430 and Renesas’ H8 as well, and I can port C code to ARM without any problems. Even old 8051 code for large memory models isn’t a big problem.

  18. Hernan says:

    The world isn’t made only of software. Microchip PIC pins can drive more current, in some countries it is easier to buy a PIC than an AVR, and having only 35 instructions (although it is an illusion of simplicity) works. Good blog!

  19. Doug says:

    I started my firmware life in assembly on the PIC16C family and I loved it. Granted it had some downsides, but those were mostly a factor when using compiled languages. In assembly, the segmented architecture and invisible stack pointer weren’t nearly so troublesome. The upsides, as I saw them: 1) Price — we were getting 20MHz processors (granted that’s only 5MHz instructions) with 28 GPIOs for $0.92 each at one point, 2) rich peripheral set, every one of which worked well with minimal errata and was properly documented, 3) excellent support, including access to a true factory FAE and the best documentation in the business, and 4) very low barriers to entry, including cheap debug and development hardware and software.

    I think we also need to put the whole family in their historical context. Remember this family was born when the Motorola 68HC04 family was king, and it seemed like — and was — a big step up from those. (One of the reasons those GPIOs had enough drive to directly power LEDs is the parts were fab’ed on a 0.7 micron process! — larger features = more current capability.) They’re probably hanging around now largely on the strength of installed base, engineer familiarity and sheer inertia. Comparing them to a more modern family really just isn’t fair.

  20. David Cary says:

    I’ve been told that the Microchip 16C84 (PIC16x84), introduced in 1993, was the first CPU with on-chip electrically erasable program memory.

    (Anyone happen to know the first microcontroller from any other company with on-chip electrically erasable program memory — EEPROM program memory or flash program memory?)

    Since the firmware was on-chip in electrically erasable memory, that made it vastly superior for rapid prototyping and debugging firmware compared to every other CPU of the time.
    All the other CPUs around that time period either
    (a) had a quartz “erase window” for erasing EPROM, such as some earlier versions of the PIC and the Motorola 68HC11. The chips were expensive, and it was annoying waiting for long erase times in the UV eraser every time we wanted to make some small, quick change.
    (b) were identically the same chip packaged without the quartz erase window, making them one-time programmable (OTP).
    Alas, if you made even the tiniest error in the firmware you had to throw away the whole chip and start over.
    These chips were relatively inexpensive, so if you could somehow fix all the bugs in only one or two iterations it’s still cheaper than the quartz window chips.
    Alas, I don’t know anyone that can fix all the bugs in only one or two iterations.
    Many times a piece of firmware would *still* have bugs even after dozens of iterations, so you have dozens of chips in the trash, and you slowly realize that a couple of re-usable quartz window chips would have been better than dozens of OTP chips.
    (c) had mask-ROM internal program memory, with huge lead times and NRE costs.
    (d) had no internal program memory, requiring external program memory, so lots of pins on the package were eaten up by the memory interface. Therefore, getting the same number of useful general-purpose I/O pins as the 16C84 required a package with more pins, so the packaged chip was larger and more expensive; or else external bus decoder chips, so the board as a whole was larger and more expensive. It was also a bit of a hassle hooking up the address bus and data bus when you just wanted a quick prototype, and much more difficult to pass FCC testing with an external memory bus than when all the memory is internal.

    Since this was such a huge advantage of the 16C84 over all other CPUs, lots of people switched to it.
    When they needed to pick a chip for some new project, they *started* with the PIC because rapid development was much faster and cheaper, even if they thought in the back of their mind that once the feature set had “stabilized”, they would switch to some other processor — perhaps mask-ROM.
    When it came time to teach some new guy how to program microcontrollers, those people would use the chip that was cheap to reprogram over and over — because we know the new guy is going to make the same sorts of common program errors that all new guys make.
    But once development was “finished”, they were too busy with the next project to port it,
    so they stuck with what they knew was already working.
    Or they made an easy switch to some other Microchip PIC processor that used basically the same instruction set and chip burner.

    Switching to some other company’s microcontrollers was much more expensive.
    That required buying a new chip burner, and either
    (a) buying a new C compiler that targeted that chip, which at the time was typically over $1000, which allowed you to start programming right away in the familiar C programming language. Or
    (b) learning yet another assembly language, which took a long time to become fluent.

    Today there are lots of competing chips — AVR, ARM, and Freescale — with on-board flash program memory.
    While they have many small advantages over the Microchip PIC series, I don’t see such an overwhelming advantage that makes it worthwhile to spend thousands of dollars in up-front switching costs.

    Because lots of people were using Microchip PICs, we had a reinforcing economy-of-scale feedback loop that caused prices of PICs to fall and prices of other chips to, well, not exactly rise, but fall less rapidly.
    Causing more people to switch to PIC because of its lower cost, which led to even lower cost PICs.

    However, since the cost of C compilers such as SDCC and gcc has bottomed out at “free”, and the cost of chip burners has plummeted, the switching cost is far lower than it used to be. So even small advantages of other chips are enough to make it worthwhile to switch.

    I suspect most if not all of Microchip’s current popularity come from a combination of this high switching cost which has only recently disappeared, and the related reasons Doug pointed out — strength of installed base, engineer familiarity and sheer inertia.

  21. Horsedorf says:

    Something that one of the financiers who sat on NetApp’s board of directors said to the senior executive staff comes to mind: “When are you engineers going to get that it doesn’t have to be perfect, it just has to be good enough?” And that is where the PIC sits. It’s not perfect, but it *IS* “good enough”.

  22. Henrique says:

    It’s simple: every time you want a microcontroller to do some dumb & quick job, you ask your colleagues which MCU you should choose. The answers are:
    – “Use an ARM MCU! I’ve seen some pretty cheap ones around, or maybe you should buy a kit.”
    – “Use AVR! I’ve never used one but some people say they are very easy to deal with, just don’t know which to buy.”
    – “Use 8051! But do it in assembly. 8051 compilers are lame”

    And then you hear:
    – “Use PIC. I’ve dealt with them.. in fact i have a programmer in my house”
    – “Hey! but I have a programmer in my office.. give me just a minute..”
    – “Well, I have half your code written with test cases and everything!”
    – “You know what? Let me just do that with you, and by the end of the day we’ll have it ready!”

    and so it goes…

  23. cyrile says:

    Hi,

    I use PIC because when I was a student, I got an old programmer. Then I got many free samples of their chips; that was a criterion for me, because parts are a little bit expensive for a student without money.

    I see your point, and I agree. The architecture is awful, the two interrupt vectors are not sufficient for effective programming, and sometimes it’s a nightmare… a small stack, really small.
    Some bugs in the compiler too.

    But the computing power is sufficient for me, and when I need more power, I have my own DSP board.

    Maybe I’ll change to ARM one day…

    • cyrile says:

      I almost forgot: Microchip keeps many PICs active; the ones we used many years ago are still available, or a very close equivalent is.
      It’s somehow interesting for us, as we are not in a race for the latest product!

      • Nigel Jones says:

        I think that’s a very important point. I do a fair amount of work for clients simply based on parts obsolescence. The fact that Microchip keeps parts around forever certainly helps maintain brand loyalty.

  24. As a teacher I give a PIC assembler class. One thing I tell my students is that once they have mastered the 14-bit-core PICs, anything else will be a piece of cake.

    More seriously, IMO the main reason PICs are so popular (besides being popular, which is a big factor) is that Microchip has a very good reputation for delivering the chips, and continuing to deliver them. If you make a product now and still want to be selling it later (without the hassle of re-design, re-testing, re-certification, etc.) I think Microchip is the only choice. If OTOH you want the best bang for the buck right now (and don’t care much what happens in 5 years, or don’t mind switching to a different chip) Microchip is definitely the wrong choice.

    • Nigel Jones says:

      The first time I looked at PIC14 assembler (having programmed 6800’s, Z80s, AVRs etc for years) I thought it was someone’s idea of a joke! Thus I agree that if you can handle PIC14, you can probably do anything. While I agree that Microchip has made a serious commitment to keeping parts available for years (I suspect in part because they buy fully depreciated fabs), I don’t think they are alone in this regard. Notwithstanding that, future parts availability is a very serious design issue; indeed every year I do a reasonable amount of work for clients redesigning products where parts are no longer available.

      Finally thank you for such a thoughtful comment. One of the things I really like about those who comment on this blog is that the comments are well reasoned and devoid of religious fervour. It’s quite refreshing compared to some of the things that get posted on other forums.

  25. Excellent guide for embedded engineers…! Thanks for such valuable knowledge.

  26. David Bronke says:

    I’m a hobbyist PIC developer who got his start taking a microcontrollers class in college. The class started us out with PICs, and though I’ve looked at other MCUs (mostly Atmel and ARM-based ones) over the years, I haven’t yet been able to switch.

    Entry cost is a significant concern for me. With PIC, you can get a decent programmer (the PICkit 2) for $35 direct from Microchip, and there are several cheaper programmers available, including several do-it-yourself designs that can be assembled for under $10 if you have a serial port. When looking into Atmel, I initially couldn’t find anything under $100 that would get me a working programmer and software, though I’ve since found a good, cheap in-circuit debugger for their 8-bit products that only costs $34, so there do seem to be options there as well. However, I have never found a way to figure out how much an AVR MCU will cost without specifically asking Atmel for a quote (unlike Microchip, Atmel’s site doesn’t list prices), and their sample process is much more involved and restricted than Microchip’s.

    Another thing that PIC does very well on is providing high-end devices in DIP packages. I recently looked at Atmel when trying to put together a parts list for a custom keyboard controller I’m building, and was disappointed to find that Atmel doesn’t actually sell any MCUs with USB support in a DIP package; anything they have that supports USB is surface-mount, which makes it impossible to use with a standard prototyping board without first attaching it to some sort of adapter. Microchip, on the other hand, sells almost 150 different models of PICs which include USB support and come in a DIP package. There are some third-party solutions available to adapt AVRs with USB support to a DIP format, such as the Teensy and Teensy++ provided by PJRC, but they’re much more expensive… the Teensy++ (which is the only model I found with enough pins to work for this project) is $24 each; the PIC I ended up choosing instead (the PIC18F4550) only costs $4.47 each.

    Atmel has interested me for a long time, but given that this is only a hobby for me, I haven’t seen enough reason to spend the time and money required to switch. If I were to get the opportunity to use microcontrollers in a project at work, I would choose PIC without a second thought, since I’m more familiar with it and already own all of the tools required.

  27. Michiel Bruijn says:

    After spending too much time debugging a stack overflow problem, I’d like to mention another scenario causing stack overflow, not mentioned at the top of this article.
    In my case I reserved, as described in several articles, general-purpose registers for saving W, STATUS and PCLATH when entering interrupt service routines (ISRs). If you don’t reserve registers at the same (short) address in the other banks for this purpose, then you can have stack overflow bugs, as I learned the hard way.
    Hopefully this will help someone else with this sort of bug.

  28. John Moore says:

    I suspect one reason that PIC is popular is that so many hobbyists have been using it forever. Hobbyists often are or end up being engineers. Microchip was very good at getting that market, starting a long time ago. Too many MCU manufacturers don’t understand the influence of the hobby market.

    As just one example, Apple used the 6502, not the 6800 or the far superior TI 9995 because they could get the chip at a hobbyist price. Having dealt with the various manufacturers as the engineer for a manufacturer, I could see that TI was simply not interested in anyone who wouldn’t immediately generate huge volumes. I suspect Microchip doesn’t have that attitude.

    I write this as someone who has been a programmer and engineer for 45 years, and would never, ever use an 8 bit PIC if I had a choice! Unfortunately, I have a customer who insisted on the PIC so I’m stuck with the *horrible* development environment and assembly architecture (the latter forces me to use C, which forces me into the crappy debuggers).

  29. lahiru says:

    Lecturers in our uni seem to be promoting the PIC. But they say, “Programming is awful on the PIC, but you will be in a better position to program any processor after being able to program a (difficult) PIC.” They even publish PIC tutorials in local languages in popular science newspapers.
    In my country, PICs can be bought in most electronics shops, but AVRs and other processors are very hard to find. PICs are so popular that some people use the word “PIC” generically for any microcontroller.
    These are the possible reasons for the popularity of PICs in the scientific community in my country (Sri Lanka).

  30. Jason says:

    I started out with PICs about five years ago, pretty much because of the abundance of free code on the internet and because they looked to be the easiest to learn. I think since then I’ve only run into the stack overflow problem three or four times (mainly because I wasn’t watching what I was doing). I’ve since written my own call macros that simply store the return address on my own software stack and manipulate the PC registers instead of using call/return instructions. Granted, it takes a few more instruction cycles, but it’s worked out fine so far.

    I never went to school for computers… when I graduated high school the 386 was still pretty much what was in PCs and the first Pentium was just making its debut. I had no clue what a microcontroller even was until a brother’s friend mentioned them, and my interest grew to the point that I finally looked up Digi-Key and ordered a couple of different ones along with a programmer and set out to learn something new. I guess if a pretty much uneducated person with a decent working ball of gray matter can figure them out and make them do things with minimal cussing… they must be doing something right.

    I have a couple of PIC24 samples on the way, as well as PIC18 samples with the external memory bus, with some pretty ambitious plans for them. I haven’t looked into the other microcontroller brands yet and probably never will, as long as I can keep finding new projects and ideas that can be done with the PICs I already have. I only use assembly, which can be a bit cumbersome I’ll admit, but with a slew of thoughtful and worthwhile macros under my belt the code really isn’t that hard to read anymore. I tried sdcc and a couple of Microchip’s free C compilers and found assembly just worked better for me.

  31. I started using the PIC back in 2001 because the company I worked for used it; the main advantage was the built-in ADC. 14 years later I am still using PIC, but mostly the 32-bit ones, which work perfectly with no issues. BUT you are right that the 8-bit family is horrible; THE STACK OVERFLOW is the worst nightmare. I agree with you that we should use the ATmega or new variations of the 8051 rather than the 8-bit PICs. The biggest issue I have with Microchip is that their IDE and tools are very expensive.
