
Is GCC a ‘good’ compiler?

Tuesday, February 2nd, 2010 by Nigel Jones

It seems that barely a month goes by when I’m not asked my opinion on compilers. Sometimes I’m simply asked what compilers I use, while other times I’m asked my opinion on specific compilers – with GCC being by far the most asked about compiler. I’ve resisted writing about this topic because quite frankly it’s the sort of topic that people get very passionate about – and by passionate I mean frothing at the mouth passionate. It seems that some folks simply can’t accept the fact that someone doesn’t agree with them that XYZ is simply the best compiler ever. Notwithstanding this, the volume of inquiries has reached the point where I really feel the need to break my silence.

First of all, let’s make some general observations.

  1. Despite the fact that I’ve been doing this for nearly 30 years, and despite the fact that as a consultant I probably use a wider variety of compilers than someone who works for a single employer, the simple fact is that I’ve only had cause to use a limited number of compilers in anger. Thus, are Rowley compilers any good? Well, their website is decent, the documentation is OK and the IDE very nice. However, I’ve never built a real project with their tools, and so I really don’t know whether Rowley compilers are any good.
  2. Many vendors provide compilers for many targets. As such it’s a good bet that if their 8051 compiler is very good, then their ARM compiler is also likely to be excellent. However, it isn’t a given. Thus while I wholeheartedly endorse the Keil 8051 compiler, I have no opinion on their ARM compiler.
  3. Compilers vary in price from ‘free’ to ‘cheap’ to ‘expensive’ to ‘they have got to be joking’. I’ve put all of these costs in quotes, because as you’ll see below, one’s perspective on what constitutes ‘free’ or ‘expensive’ is not easily defined.

So, enough with the preamble. Let’s start with the ‘free’ and ‘cheap’ compilers, including GCC. Well, for me the bottom line (literally) is that I can’t afford to use these compilers. The reason is quite simple. I’m a high priced consultant. I can charge high hourly rates in part because I have exceptionally high productivity. Part of the way I achieve my high productivity is by not wasting my time (and hence my client’s money) on stupid issues unrelated to the problem at hand. Given that a compiler / linker is such a frequently used tool, and given that I’m also the sort of engineer who pushes his tools hard, it’s absolutely essential to me that when I run into a compiler issue I can pick up the phone and get an intelligent response ASAP. One simply can’t do that with ‘free’ or ‘cheap’ compilers, and thus too often one is reduced to browsing the Internet to find the solution to a problem. When this happens, my ‘free’ compiler rapidly starts to cost an arm and a leg.

What always amazes me about this topic is that so few employers / engineers seem to understand this. It seems that too many folks will eschew paying $2000 for a compiler – and then happily let their engineers bang their heads against a problem for a week – at a rate of at least $1000 a day.

Thus for me, the answer to the question ‘Is GCC a good compiler?’ is ‘no, it isn’t’. Of course if you are a student, or indeed anyone who is cash poor and time rich, then by all means use GCC. I’m sure you’ll be very pleased with the results and that you’ll find it to be a good compiler for you.

What then of the ‘expensive’ and ‘they have got to be joking’ categories? Rather interestingly, although based on limited experience, I’ve found that the very expensive compiler vendors ($10K+) also have lousy support. Instead it’s the ‘expensive’ vendors that actually seem to offer the best combination of functionality, code quality, support and price – and it’s this category that I tend to use the most.

Finally, regarding which compiler vendor I use: I happen to be a fan of IAR compilers. I’ve always found their code quality to be at least ‘good’. Their linker is probably the easiest and most powerful linker I’ve ever used. Their support is very good (thanks Steve :-)). Their IDE is also easy to use and has a very consistent look and feel across a wide range of processors, which is important to me as I tend to switch between architectures a lot.


38 Responses to “Is GCC a ‘good’ compiler?”

  1. Peter says:

    Nigel, sounds like you fall into the 'cheap' consultant category. 'Expensive' consultants support their own development tool chains. Out-sourcing your support to a third party only works when that third party is responsive to your needs.

  2. Nigel Jones says:

    Ouch – and I thought it was only my wife that considered me cheap! I certainly know of consultants that support their own tool chains – and even make a virtue of it. Indeed, Bill Gatliff for example has a consulting business largely based on promoting the use of open-source tools. I'd be interested to know if the 'expensive' consultants you are referring to bill their clients for the time they spend supporting their own development tool chains.

  3. Kyle Bostian says:

    Peter, our company has two embedded products that implement complex state machines developed by Nigel. One was developed with about $8K worth of IAR tools (compiler + Visual State). When I need to make a change to that product, I fire up the tools, make my revisions, test them, and ship them. The other was developed with a popular free assembler. I've read the code, and I think I understand it. That being said, I won't touch that code with a ten foot pole – it works, and the risk of introducing a bug is too high. This was one of the motivations for development of the former product. My anecdote is probably more a testament to the power of statecharting tools than it is about the rest of the toolchain. Nevertheless, when the right tools are used, the tools are well worth it. (Nigel is as well.)

    • Anonymus says:

      Please let’s not mix things up. The fact that you have a badly written, poorly structured, hard-to-understand program has nothing to do with the compiler.

      I’ve been in the embedded world for 20 years and I’ve seen all kinds. There are, and sadly there will always be, people who write code without thinking about modularity, reuse, maintenance and scalability, while others simply do the work right. The compiler does not care how good the code you write is, so if you have hard-to-maintain code, blame the coder, not the tool. It will be just as hard to maintain with an expensive compiler.

      • redengin says:

        A good toolchain absolutely can curtail a poorly written program. You pay more for applying additional rules to the standard at the toolchain level. While the toolchain can do this work, code reviews will pay off much better, as you’ll teach others and learn how others understand your code. That said, if you’re in embedded work, you must still evaluate the output, as all toolchains have had errors and inefficiencies. GCC is a great compiler, but depending on your target and your use case, GCC may not be the best starting point.

  4. Kyle Bostian says:

    Reading my comment after posting, I wanted to clarify that it isn't intended as a criticism in any way. What I'm trying to say is that the more expensive tools gave us a product we can maintain ourselves.

  5. Anonymous says:

    Peter, I think you need to give us some examples of consultancies supporting their own tool chains – at least if we are talking about C/C++ compilers for CPUs at the deeply embedded end of things.

    Sure, there are a number of highflying consultancies/service companies with their own proprietary compiler/language tools, but then we are talking about x86 or other server/workstation type CPUs as the target architecture. There are a number of companies maintaining their own ports of GCC for a few selected CPUs, but in the end they are just as dependent on the open source community as the ordinary developer if they want to stay reasonably in sync with mainline development.

    It's *very* expensive to develop and maintain, not to mention support, compiler tools for multiple CPU architectures, which is why there are so few companies out there doing this. (It's expensive even for a single architecture if you look at ARM, for example.) There is simply no business case, even for the most pricey consultancies, in developing and maintaining their own AVR, MSP430 or M16C compiler, because that would imply steering all projects towards that architecture. (Again, x86 etc. are a completely different thing.)

    So, I'm really very interested in hearing about counter-examples to what I state above!

  6. M. Eric Carr says:

    Interesting perspective. As a relatively inexperienced embedded developer, I'm still very much in the bang-for-the-buck category (free being the usual goal), and have yet to come across too many gnarly compiler-related issues — although I have no doubt they are out there. I can see where using a well-supported compiler could be cost-effective, though. Thanks for the article.

  7. The Walrus says:

    I've also been around this for a long, long time. The first compiler trouble you have costs a packet. A day spent messing about trying to get the linker to work, or to declare an interrupt handler in this week's version of gcc… and you've just blown the cost of buying a full-on compiler – even ignoring the support issues. (I've found compiler bugs in some of the expensive compilers as well – but I got them fixed pretty damn quick by the vendors!)

    Paying money for a reputable compiler is simply, in most cases, good business practice. Few engineers work out the cost of their own time. They should. It changes the approach you take to compilers, and many other things as well.

  8. Ferdi Pienaar says:

    Nigel, as a regular reader there's an aspect of compilers I've been hoping to read something about on your blog — ROMability of data.

    I think it's probably more of an issue for C++ than C where, if a const structure is initialized statically, the compiler will place it in ROM (although even C compilers differ in how good they are at doing this — I recall a compiler for the AVR that required the programmer to use special keywords). For C++, the rules are a little more arcane (aren't they always?), and whether an instance of a class will be placed in ROM depends on other factors (does the class have user-defined constructors? Does it have virtual methods?). It depends on the compiler's ability to detect that the memory contents could be initialized at compile time.

    As an exercise, I'm implementing in C++ a module I developed in C for a previous employer. I certainly feel the C++ code is more readable and maintainable. However, if the constant data is not placed in ROM, the module would not be usable on a small embedded device, and the advantages of using C++ would become irrelevant. The ISO's Technical Report on C++ Performance has something to say about this but, unsurprisingly, it seems to boil down to "it depends on the compiler's ability to do static analysis".
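
    For illustration, here is a hedged C++ sketch of that distinction (the names are invented, not taken from the module described above):

    // C++ sketch of the ROMability rules discussed above.
    #include <cstdint>

    struct CalEntry {              // aggregate: no user-defined ctors, no virtuals
        std::uint16_t id;
        std::uint8_t  gain;
    };

    // Statically initialized const aggregate: everything is known at compile
    // time, so an embedded compiler can place the table directly in ROM/flash.
    const CalEntry calTable[2] = { { 1, 10 }, { 2, 20 } };

    class Filter {
    public:
        Filter(int k) : k_(k) {}   // user-defined constructor
    private:
        int k_;
    };

    // Still const, but it requires dynamic initialization (the constructor
    // runs from startup code), so a typical compiler must place it in RAM.
    const Filter lowpass(3);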

  9. GroovyD says:

    I have used both free and paid compilers, and can say that encountering a bug in the tool chain is among the rarest of problems – and it occurs, if not equally often, then even more often on paid tool chains than on free ones, as fewer people use the paid tool chains.

    I do agree that in the extremely rare event (it has only happened two or perhaps three times for me in over 20 years of embedded programming) that you happen across a tool chain bug, some good real support would make a world of difference; but the honest truth is that once you have identified the problem enough to realize it is in the tool and not your code, it is often far easier to just work around it with some creative coding than to wait for someone’s support.
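
    For illustration, a generic sketch of that kind of creative workaround (a common pattern, not a fix for any specific tool chain):

    // If an optimizer mis-compiles a loop, routing the value through a
    // volatile local forces the compiler to perform every load and store,
    // which usually blocks the faulty transformation at a small speed cost.
    int count_events(int cycles)
    {
        volatile int d = 0;        // volatile: each access must really happen
        for (int c = 0; c < cycles; ++c) {
            d = d + 1;
        }
        return d;
    }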

    The true productivity stopper I have found is not which tool is being used, but simply not knowing enough about the tool you are using (i.e. which linker option or pragma does what). Pick your tools out of a hat if you like; whichever they might be, really learn to use them and you will undoubtedly be most productive with those tools. Another productivity stopper is trying some fancy abstraction around coding something that should really be done in a much simpler way.

    • Nigel Jones says:

      Great observation! I agree wholeheartedly that not knowing the tool inside out is a productivity stopper – which explains in part why I read compiler manuals so much! I think as a consultant, my situation is a little different in that the nature of the job requires me to switch tools at a higher frequency than a normal gainfully employed engineer. Notwithstanding this I’ve come across plenty of engineers that have never cracked open the reference manual for their tool chain – and thus wonder why they can’t get things done.

  10. Ashleigh says:

    Nigel – your comment about reading the manuals is spot on. The first thing I do with any new compiler or tool chain is RTFM.

    So many of my colleagues don’t (and never have, over the years) and it makes me almost cry with frustration. The GUI era has made things even worse; people think they don’t NEED to read because the GUI saves them the effort.

    And then… there are those who insist on always building a product using the GUI but forget or don’t know how to check all the GUI config files into version control – and then when they change you can’t see what the changes were. These days I build EVERYTHING using batch or make files because it’s 100% repeatable and not prone to accidentally clicking some change of option – and then discovering the mistake 3 months after shipping. (And no, the “debug / release” options in GUIs don’t cut the mustard either!)

    Having used GCC for years on Sun workstations and PCs – it has its place. But for deeply embedded micros I’ve been pretty underwhelmed, most recently by the poor optimisers for at least some micros. If code size matters, then spending a day or two doing compiler benchmarking really pays off.

    Example: If I need to ship a product on a bigger micro, and that bigger micro costs $0.20 more, and I can ship (say) 50,000 products / year then that bigger micro costs $10,000 (per year). If I spend a couple of days of my time it costs (say) $1000, and if I then buy a professional compiler at $5000, I’m still $4000 ahead at the end of my first year. Or the economic payback period is about 8 months. Now ANY businessman will take you up on an investment opportunity like that.

  11. David Brown says:

    I work with a lot of different microcontrollers and processors. gcc is generally my first choice, even if the budget for a particular project or customer would cover expensive commercial tools. gcc saves me time – it is much faster and easier to download and install gcc for most microcontrollers than to order commercial tools, wait for delivery, then fight with their licensing and node locking systems.

    After installation, each commercial toolset has its own idea of a half-baked IDE to get used to, and its own set of documentation to read (yes, I read through the manuals for my tools). With gcc, it is the same compiler, the same setup, and the same documentation – all I need to consider is the target-specific details and possibly small changes from version to version. I can use the same editor and build system that I always use.

    When considering support, I’ve found that support for commercial compilers varies enormously. I know of a big-name toolset vendor that apparently has more technical support staff dedicated to licensing, dongle and installation issues than it has for actual compiler support. And my experience with some vendors is that I know more about the target, C programming and standards, and often their own compiler, than the support staff do. With typical gcc ports, you have a mailing list for support – you send an email, and the people that reply are either interested and technically competent users, or they are the compiler developers themselves. Unless you are a huge customer, you simply can’t get that sort of direct developer contact from most commercial vendors.

    Additionally, I’ve generally found gcc to be a better compiler than most commercial tools I’ve used. It often generates better code, it has more powerful and useful extensions, far better inline assembly support than almost any other compiler, and much better error checking and warnings than other toolsets.

    Having said all that, I don’t always choose gcc, and I don’t always recommend it to other people. Different users and different projects have different needs. And there are commercial vendors that really do give excellent value for money, and excellent support. There are also large differences in the usability of gcc for different targets, and from different sources. Downloading gcc from the official FSF sources and building it yourself is certainly a specialist job. But purchasing and downloading a binary package from http://www.codesourcery.com, along with a support contract, is no different from making any other commercial toolchain purchase – except that you get much better value for money, and you get support from the people making the compiler.

    It is with good reason that manufacturers such as TI, Atmel, Freescale, Intel, etc., contribute to and encourage gcc development. It is with good reason that TI sells Cortex-M3 evaluation boards with gcc (from Code Sourcery or Code Red) on an equal footing with Keil and IAR. It is with good reason that Atmel made a gcc port for their AVR32 while designing the processor.

    There are times when gcc is the best choice for a particular job, and times when it is not. But any consultant who does not consider gcc when evaluating toolchains for a target is simply not doing a good job.

  12. Ryan says:

    “it’s absolutely essential to me that when I run into a compiler issue I can pick up the phone and get an intelligent response ASAP”

    A.K.A. you’re a clueless tool who is basically charging clients for compiler support by proxy. You’re not productive; you’re merely using various other people’s services to uphold the illusion that you are. Incredible arrogance.

    • Ashleigh says:

      That’s a pretty rude answer. I’ve been in Nigel’s position where a compiler generates bad code – or just crashes during compilation. Such show stoppers need to be fixed, or my manager wants to know what the hell is going on.

      Expecting such a response from, or developing work-arounds for, free software is hardly viable in a commercial circumstance.

      Did you actually read the whole article? And do you have any idea how much engineering development really costs? (Hint: it’s not what you get paid per hour!)

  13. Steven Swann says:

    Hello,

    In my humble opinion, Gcc is free, but by no means cheap.

    One point I think is worth a mention: Gcc is not limited to a small development team, but has the biggest team in existence. Bugs occur, but you can be sure that if you’re working on a stable release, and not on cutting-edge hardware, then there will be some form of fix available…

    As a Linux developer it pains me to have to pay any money for any software, I find this to be a real productivity block!

    GNU forever!

    Steven

  14. Jeff says:

    Hi Nigel,

    I have noted from this post and others that you are a user of IAR for ARM. I also figure that for you IAR is likely a good investment for ARM because in your consultancy you support a wide variety of ARM products from different vendors.

    My question for you is the following: Are there specific features in IAR for ARM that you found indispensable? Was there a product you created where you absolutely needed the extra speed/memory provided by IAR versus other compilers?

    To give you some context, here is an example.

    I am a user of IAR for the MSP430. I have kept my eye on the TI Code Composer compiler for the MSP430 for some time. I recently had some downtime, so I fired up the latest versions of both tools on a battery charger project (in other words, a very low cost, mass produced consumer product) I had done some years back on a value line processor (1.25 KB flash including info flash, 128 bytes RAM) that I had real trouble getting to fit in the part.

    In my analysis, I found that when compiling on IAR with full optimization for size and using multi-file compilation, I had 74 bytes to spare. With Code Composer, the code did not fit: it failed to allocate 222 bytes (17% of total flash) at optimization level ‘4 Whole program optimizations’ and speed vs. size tradeoff set to ‘0 size’. From what I have seen, IAR is still at the top of the heap as far as scrunching is concerned, especially since they added multi-file compilation a few years back. While the 1/4 price tag of Code Composer compared to IAR is attractive to management, at the moment Code Composer just isn’t good enough at scrunching code to justify using it for really low end parts.

    Additionally, I agree that IAR’s linker is the most configurable I have ever seen, and has the most descriptive map output to boot. My favorite feature though is the built in support for CRC generation. I just wish they would bring the stack usage analysis they have in ARM to the MSP430…

    Thanks

    Jeff

    • Nigel Jones says:

      Hi Jeff. What it really comes down to for me is the cost of my time. I’m paid by the hour and can command high hourly rates in part because I’m very productive. Part of the productivity comes from not wasting time on cheap tools. To take the example you quoted. If I’d been using Code Composer then I’d have been stuck and would have been forced into an expensive rewrite in an effort to get the code to fit. Since I’m paid by the hour, my clients see this extra cost all too clearly. I’m sure it’s no great revelation that it doesn’t take too many hours of my time to blow away the additional cost of the IAR compiler, and thus on average IAR is the lowest cost approach for me. Throw in the fact that they produce good code quality, their documentation is excellent, you can speak to a human being when you get stuck and it’s a similar interface across a range of processors (which is essential in the consulting business) and all in all it’s a good match for me. Finally I’d be remiss in not mentioning the nice bright yellow boxes that the compiler comes in :-).

      • Jeff Gros says:

        Hi Nigel,

        Just curious, which ARM tool are you using for program/debug? Since you are using IAR, I guess ULink is out of the picture.

        Are you using the IAR I-JET? Segger? One of the other myriad tools out there? There are so many tools out there it makes my head spin. And to make it worse, some have different pinouts, so I had better decide what tool to use before making a board.

        Which do you recommend for your day to day use over a wide variety of ARM parts?

        Thanks

        Jeff

        • Nigel Jones says:

          Alas I haven’t done any serious work on an ARM processor in the last two years, so I’m not up on what are the latest tools. Anyone got a recommendation for Jeff?

        • Anders Guldahl says:

          +1 for Segger in a professional setting – very good tools on both the software and hardware side, and good support IMO. Other than that, keep in mind that many MCU devkits come with a built-in programming solution – some even have a full-featured debugger solution – so you don’t really need a separate debug adapter. I’ve seen some devkits with a licensed Segger debugger on board as well.

          • Jeff Gros says:

            Thanks for the reply.

            My team and I are currently playing with eval kits with a built-in debug section on board. However, we will eventually be making custom boards and will want the programmer.

            I did take a look at the Segger website. They have a nifty comparison chart (http://www.segger.com/jlink-flash-download.html). According to the website, the J-Link Base unit was used in the comparison. Pretty impressive marketing.

            Here is a youtube video with a nifty speed comparison between i-jet and j-link (https://www.youtube.com/watch?v=DdtkMd9YTkA). The video annotations say it was the base model.

            J-Link Plus and higher has the unlimited flash breakpoints utility which gets you past the hardware breakpoint limitation. My understanding is that the flash is changed to implement this. Is this any different than software breakpoints or is it essentially the same feature?

            I-Jet also has the power debugging feature. The J-Link Ultra and better (apparently this means not the base unit) also appear to support it. I found this video (https://www.youtube.com/watch?v=vrMpD3ttgCE) which looked pretty neat. They use an EnergyMicro board with J-Link built in. The video is from 2010, so perhaps IAR didn’t have their I-Jet yet for the demo.

            There is a webinar available on the IAR website (http://www.iar.com/IAR/Webinars/Using-I-scope-to-visualize-MCU-power-consumption/) which shows power debugging with the I-Jet and I-Scope peripheral. Very cool stuff. I have no idea if the J-Link works with the I-Scope, but it would be cool if it did.

            The downside to all this power debugging stuff is that IAR bundles it into their licensing. *Sigh*…

    • Ashleigh says:

      I went through a benchmarking exercise a few years back.

      I looked at IAR, GCC, Code Composer, Rowley, and a couple of others I can’t remember.

      I compiled the same code in each (an interrupt driven UART driver with buffered I/O) – total about 1200 lines of source.

      In all cases I set the optimisation level to max scrunch and then looked at the generated code for them all. IAR came out streets ahead – something like 20% smaller than the next best (which as I recall was Code Composer) and about 40% smaller than GCC. Rowley had the odd attribute that I could not even find the generated assembler code, only its size, and for me that’s a show stopper.

      I was stunned that GCC did so badly, so investigated further with all possible optimisation options but could get no improvement.

      Things may have moved on since then – but everything I do is trying to jam lots of stuff into the smallest micro in order to save $ on production costs. Going up a notch on a micro costs real money.

  15. Jeff Gros says:

    Hi Nigel,

    Sorry to bother you with an IAR EWARM question, but my local support isn’t responding, I’m facing a deadline, and I figured since you’ve used the tool so much, you could probably whip out an answer lickety-split.

    Do you know how to create a block of ROM data at a specified location that is uninitialized (left erased)? My goal is to create a region of non-volatile records that I can write user data to.

    It appears that in IAR EWARM you cannot create const memory at a specified location that isn’t initialized. When I mark the buffer as __no_init, the linker places the buffer in RAM.

    The closest I’ve gotten is to create a block of memory using the linker. However, when the code is loaded, it is being programmed with zero.

    Any ideas?

    Thanks

    Jeff

    • Nigel Jones says:

      I’ve just got off a long international flight so I’m not feeling too swift. I’m not sure how to do this but you could finesse the issue by explicitly initializing the area to 0xFF – i.e. unprogrammed. It’s a brute force approach but if you’re on a deadline…

      • Jeff Gros says:

        Thanks for the reply.

        I’m currently just using the block memory solution and erasing the region if it is all zeroed out (should only happen on first program execution).

        I did think about the solution you suggested, and had I not found my current solution, that’s what I would have done. I could chain together a bunch of macros to help generate the 0xFF’s (see the sketch below), but even still it’s a maintenance issue if the buffer size ever changes.
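
        For illustration, a hedged sketch of such a macro chain (the ‘__root’ keyword and ‘@’ placement operator are IAR extensions; the section name is illustrative and must match the linker file):

        // Doubling macros expand to comma-separated 0xFF initializers.
        #define FF_8   0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF
        #define FF_64  FF_8,FF_8,FF_8,FF_8,FF_8,FF_8,FF_8,FF_8
        #define FF_512 FF_64,FF_64,FF_64,FF_64,FF_64,FF_64,FF_64,FF_64
        #define FF_4K  FF_512,FF_512,FF_512,FF_512,FF_512,FF_512,FF_512,FF_512

        // __root keeps the otherwise unreferenced buffer; '@' pins it to a
        // section placed by the linker file. If the 0x1000 size ever changes,
        // the macro chain must change with it (the maintenance issue above).
        __root static const unsigned char nv_records[0x1000] @ ".nv_records" = { FF_4K };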

        I haven’t much experience with other vendors’ ARM tools, but I must say that so far I am unsatisfied with the IAR compiler/linker.

        All of the cool intrinsics are missing (__even_in_range, __delay_cycles, __ro_placement, etc), and the linker is a pain to use compared to the MSP430.

        Using the linker file, I cannot seem to create segments (only regions and blocks), and I cannot control the checksum or filling options (I need to use project options or the command line).

        IAR has neat add-on features such as Stack Usage Analysis, Power Graphing, Static Analysis, etc. (much of which I haven’t used), but the main tool just makes me pull my hair out.

        This came as a complete surprise. I have many happy years of experience using IAR EW430. For the two tools to be so different is a mystery.

        Any one else know how to get IAR EWARM to do the uninitialized ROM correctly?

        • Jeff Gros says:

          Now that I am past my deadline I had some time to investigate the issue. I thought I would post my solution.

          I’ve figured out how to get an “uninitialized” buffer in ROM. I had to do it using the assembler. Reading through the assembler manual, the EWARM assembler looks pretty much as full-featured as the one for the MSP430. The assembler doesn’t have a fill directive; however, using the repeat directive it is pretty trivial to create a macro to do so. In fact, the manual gives an example of how to make a fill macro (a more complicated version with the possibility of differing numbers of inputs).

          This is similar to the quick and dirty example given in C, with the exception that there is no easy way to perform a “repeat” using the preprocessor that I am aware of.

          Below is what I did in assembly:

          FILL MACRO size_B, fillValue       ; emit size_B bytes of fillValue
          REPT size_B                        ; repeat the following line size_B times
          dc8 fillValue                      ; declare one constant byte
          ENDR
          ENDM

          SECTION .nv_records:CONST:ROOT(10) ; ROOT: keep even if unreferenced
          FILL 0x1000, 0xff                  ; 4 KB of erased (0xFF) flash

          Below is an example of my addition to the linker file (places nv region after code checksum):

          place at end of ROM_region { section .checksum, section .nv_records };

          • Nigel Jones says:

            Very good Jeff. Thanks for the tip. I guess there really is a reason that they still include an assembler with the compiler package.

          • Jeff Gros says:

            I recently needed to rework my checksum configuration in IAR. I was previously generating the checksum through the fill menu in the project options. Now I needed two different checksums over two different regions, and this cannot be done except through the command line.

            As I went through this process, I had to add a fill option to the linker command line. I just made the fill cover all memory instead of just the checksum region (necessary for creating the checksum using the project options method). In retrospect, this was a much easier solution than the assembly solution listed above.

            Linker Command Line
            --fill 0xff;0x08000000-0x0800FFFF

            Linker Command File
            define region NV_RECORDS_region = mem:[from NV_RECORDS_region_start to NV_RECORDS_region_end];
            define block NV_RECORDS with fixed order, alignment = 64, size = NV_RECORDS_size { };
            place in NV_RECORDS_region { block NV_RECORDS };

            I’m still upset that not all options offered in the linker command line are available in the linker command file, and vice-versa, but at least it works.

            I hate doing anything through the command line or project options if I can avoid it. Project options are more fragile than using a linker command file. I cannot open my old projects from a year ago using the current version of the IDE, let alone projects I developed using IAR ten years ago.

  16. Anonymous says:

    Hello guys!

    The HEW compiler has linker commands as follows:
    rom=LOADER_ENTRY=RLOADER_ENTRY,P=RP,RResetPRG=ResetPRG,SPIBSC=RSPIBSC,CACHE=RCACHE,DUMMY_MODULE_END=RDUMMY_MODULE_END,DUMMY_VECTTBL=RDUMMY_MODULE_END
    -start=LOADER_ENTRY,P,ResetPRG,SPIBSC,CACHE/018000000,DUMMY_MODULE_END,DUMMY_VECTTBL/018001B00,RLOADER_ENTRY,RP,RResetPRG,RSPIBSC,B,RCACHE/0FFF80000,RDUMMY_MODULE_END,RDUMMY_VECTTBL/0FFF81B00

    What is the IAR equivalent?

  17. John says:

    I have used IAR and GCC for a wide variety of architectures. In general IAR is a really good compiler, but the front end is terrible. IAR always outperforms GCC in code efficiency, but none of my benchmarks have yielded results that would make a difference.
    I have IAR, but I still choose to use gcc – to be honest, I think that’s mainly because I know GCC like the back of my hand.

    • > IAR always outperforms GCC in code efficiency

      I cannot confirm this.

      I can confirm that when I ran the Coremark benchmark, IAR was slightly faster; but when running a real-world application (a friend used some encryption/decryption routines from an actual application), GCC was significantly faster (>2x) than IAR, with similar optimisations.

      So, if the IAR ARM compiler is superfast only for code that resembles the Coremark benchmark, thank you, but no thank you.

      Volkswagen might not be alone out there.

  18. Lis says:

    I have found a bug in the IAR compiler (EWARM).

    The code:

    int test_bug(int cycles_count)
    {
        int c = 0;
        int d = 0;
        do {
            d++;
        } while (++c < cycles_count);
        return d;
    }

    int ret = test_bug(4);

    What do you think the value of the variable `ret` is? If you say 4, you are right – but IAR returns 5 (in optimized modes: `Balanced` and `Size`)!
    I filed a bug report with IAR support, but the IAR team said ‘fuck off, you don’t have a commercial license’.

  19. Colton says:

    I would just like to add my 2 cents. I am a professional embedded software developer of 5 years. While I don’t have tons of experience, I do have experience in evaluating compilers for use at my company. The two main competitors we have looked at are IAR and GCC. IAR wins hands down. There are 2 reasons:

    1. A lot of the benchmarks I see for GCC vs IAR code density / efficiency are shown using high optimization. It’s true that GCC ‘may’ outperform IAR here in some cases… but not by much. How about code density using no optimization? IAR will run circles around GCC – there is not even a contest. Our company uses embedded controllers, mainly Cortex-M series. Try running GCC vs IAR with no optimization on a Cortex-M0+ with a 32K or 64K ROM part. GCC produces massive amounts of code compared to the equivalent generated by IAR.

    Why no optimization, you ask? There is a little thing called ‘accurate debugging’. When you as a firmware engineer need to actually ‘develop’ code, you need to be able to debug every single line of it, especially if you are doing low level stuff like twiddling bits in registers. Once it’s time to ship, you can turn up optimization and choose either the 32K or 64K ROM part, for example. Some might say just buy the biggest ROM part for development; however, sometimes the part only comes in small sizes, especially if it’s a new part or ‘evaluation’ part not yet commercially on the market.

    2. The GCC linker completely sucks. I am sorry, but it’s unusable for our use case. The GCC linker has no support for non-contiguous memory regions when placing objects. This is a massive shortcoming that always has us coming back to IAR; IAR has no problem with this. I have heard the whole host of responses, such as “Well, you really want to be placing all of the objects in memory manually yourself so you can control it.” No… no you don’t, especially if you are on a development team with 5 – 10 people. Manual placement works for a team of ONE. We have libraries flying all over the place in our projects. Imagine developer X checking out the latest version of the project, adding a single variable over a 32K RAM boundary, and then having to manually go in and twiddle objects in the linker so they fit. No thanks.

    Since we are hitting a price point, we really do not have RAM to spare. We always design to the exact RAM footprint because we have to. A ton of the newer Cortex-M processors, especially the high-end ones (STM32L4, STM32F7), have segmented memory because some areas of memory have better uses than others. Don’t get me started on the NXP chips with their horrendous RAM memory maps.

    I, as the main low-level MCU developer, have to provide a library to a different team and give them a template linker file for their entire project development. The linker file is highly dependent on the intricacies of the memory map of the given MCU. Having that team fiddle with the file and trying to synchronize it between 10 team members would be a complete disaster. With IAR you tell it the memory map and it just works: it spreads the objects across the memory with the rules you give it. Set once and forget.

    Now… the major sticking point here is the GNU linker. We are not super salty about the code optimization issue, because there are ways to get around that. However, for professional development, the GNU linker is a joke IMO. It has become all the more a joke since it has kept the same format with the same deficiencies for the past, what, 20 or 30 years?

  20. Daniel Glasser says:

    I’m a late-comer to this thread, but not to embedded programming, embedded tools, and compiler toolchains in general. My earliest embedded work back in the early 1980s was on the PDP-11 (well, T-11, F-11 (LSI-11/23)), M68000, M6809 and Intel 8751/8051, but I have since done SPARC, MIPS, PowerPC, and most recently ARM. I have very strong opinions about compilers, having spent a few years working on them along with associated tools (ever write an overlay linker from scratch?).

    These days, a compiler that doesn’t support at least C99 language features (I’m not talking about the runtime library) is an impediment to getting the job done. I also depend on extensions, when present, being somewhat common, but that’s less of an issue. IAR and GCC are both decent in this area, though GCC extensions are supported by more of the proprietary vendors than most others’. Of the two, IAR appears to be able to produce the more compact code for the Cortex-M microprocessors, though most compact does not always seem to be fastest. One of the biggest problems I have with GCC is not code quality but determinism; often the same code runs with different timing depending on the data, because the code (in my experience the compiler support library is most often the actual culprit) takes shortcuts for special cases.

    The other parts of the toolchain also contribute to the suitability for a given application. The quality and features of the librarian, assembler, and linker, for example, can make a big difference in the usability and quality of the output. Contrary to what someone else said, the GCC toolchain using the GNU binutils linker is quite able to produce ELF files with non-contiguous memory, though to make this work you must create a linker script for the application and target and use appropriate compiler extensions (#pragma and/or __attribute__() qualifiers) to get the code and data into the sections you need them in; see the sketch below. If you want to see a powerful but difficult to use linker, try TKB (the RSX-11 taskbuilder) sometime, but I digress.
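
    As a minimal sketch of that mechanism (hedged: the section, region and variable names here are invented, and the matching output section must be defined in the application’s linker script):

    // GNU C/C++: place one object in a second, non-contiguous RAM bank.
    // Assumes the linker script contains something like:
    //   .sram2 (NOLOAD) : { *(.sram2*) } > SRAM2
    #include <cstdint>

    __attribute__((section(".sram2")))
    static std::uint8_t dma_pool[4096];     // lands in the second bank

    static std::uint32_t counter;           // ordinary zero-initialized data
                                            // stays in the default RAM region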

    The toolchain is not the whole thing, and really not all that I care about when selecting a development environment to recommend to my employer.

    The debugging tools are a very large part of what I base my decision on: does it provide common SoC peripheral register support? Does it provide a straightforward way to define application-specific hardware registers for FPGAs and custom ASICs as well as COTS components? Can it support debugging in multi-threaded applications that use a home-grown executive? (That is, can you tell it how to identify each thread and set breakpoints within a specific thread instance on shared code?) Can it attach to a running target without forcing a restart? Does it support the built-in debugging features of the target hardware well? Can I save selected watchpoints and breakpoints in one session, load them in a later session, and have them actually match the source line rather than just the memory address? Is it easy to learn and use? Are breakpoints/watchpoints scriptable? Does it support hardware trace if the target has it? I have other things I look for as well. More than any compiler bug, a debugger that doesn’t do what you need it to do can add cost and time to a project.

    I spend a lot of time in the editor. I like an editor that is configurable so that it helps with indenting, but isn’t too heavy-handed in areas such as code completion. I’m a long-time emacs user; can I configure the editor so I can have my preferred keybindings? I have trouble with the editor in Eclipse/CDT because it’s somewhat, but not fully, configurable to the way my hind-brain thinks an editor should behave.

    Other things I look for in a modern IDE are the ability to export to Makefiles, import from Makefile based projects, import from other IDE’s projects, link to external read-only directories of code common with other projects rather than copying the files, source control integration (svn, git, etc.), and so on.

    So, if the GCC compiler is good enough for the application, I’m fine with a development environment that uses it, but I’ve yet to find a free development environment that provides the productivity tools I need to be as efficient as I need to be. IAR EWARM is good, I’ve had some good experiences with Keil MDK for ARM, and have put up with a bunch of others. The cost of the tools is not a major factor, we only need 4-5 seats in my group so long as they’re floating licenses.

    For home use, however, it’s a very different story. I use GCC and GNU bintools, Eclipse-based IDEs, GDB, etc. I can’t afford $1.8K and up for a best-of-breed development toolkit. I have IAR EWARM and Keil MDK, both with free size-limited licenses, but 32KB is insufficient for even a moderate STM32F746 application. I don’t get the features I need at work and would like at home, because I can’t afford them.

    The whole point of this is that you are not just getting a compiler toolchain when you pay for tools, you’re getting a lot more than that, and sometimes it’s that lot more that is what’s worth the premium. GCC, as a compiler, is good enough for both casual tinkerers and some hard core hackers; it’s the time you spend assembling and fighting the rest of the development ecosystem that makes “free” too expensive.
