
Minimizing memory use in embedded systems Tip #2 – Be completely consistent in your coding style

Friday, September 4th, 2009 Nigel Jones

This is the second in a series of postings on how to minimize the memory consumption of an embedded system.

As the title suggests, you’ll often get a nice reduction in code size if you are completely consistent in your high-level language (HLL) coding style. To show how this works, it’s necessary to take a trip into assembly language.

When you write in assembly language you soon find that you perform the same series of instructions over and over again. For example, to add two numbers together, you might have pseudo assembly language code that looks something like this:

LD X, operand1 ; X points to operand 1
LD Y, operand2 ; Y points to operand 2
LD R0,X        ; Get operand 1
LD R1,Y        ; Get operand 2
ADD            ; Add R1 to R0; result left in R0
ST R0          ; Store the result

After you have done this a few times, it becomes clear that the only thing that changes from use to use is the address of the operands. As a result, assembly language programmers would typically define a macro. The exact syntax varies from assembler to assembler, but it might look something like this:

MACRO ADD_BYTES(P1, P2)
LD X, P1  ; X points to parameter 1
LD Y, P2  ; Y points to parameter 2
LD R0,X   ; Get operand 1
LD R1,Y   ; Get operand 2
ADD       ; Add R1 to R0; result left in R0
ST R0     ; Store the result
ENDM

Thereafter, whenever it is necessary to add two bytes together, one would simply invoke the macro with the names of the operands of interest. However, after you have invoked the macro a few dozen times, it probably dawns on you that you are chewing up memory unnecessarily and that you can save a lot by changing the macro to this:

MACRO ADD_BYTES(P1, P2)
LD X, P1  ; X points to parameter 1
LD Y, P2  ; Y points to parameter 2
CALL LDR0R1XY
ENDM

It is of course necessary to now define a subroutine ‘LDR0R1XY’ that looks like this:

LDR0R1XY:
LD R0,X  ; Get operand 1
LD R1,Y  ; Get operand 2
ADD      ; Add R1 to R0; result left in R0
ST R0    ; Store the result
RET

Clearly this approach starts to save a few bytes per invocation, such that once one has used ADD_BYTES several times one achieves a net saving in memory usage. If one uses ADD_BYTES dozens of times then the savings can be substantial.
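
To put some rough numbers on this (assuming, purely for illustration, that every instruction, including the CALL, occupies two bytes): the original macro costs 12 bytes per invocation, while the modified macro costs 6 bytes per invocation plus a one-time 10 bytes for the LDR0R1XY subroutine. Break-even therefore comes at the second invocation (24 bytes versus 22), and by the fiftieth invocation the inline version has consumed 600 bytes against just 310 for the subroutine version.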

So how does this help if you are programming in an HLL? Well, decent compilers will do exactly the same optimization when told to perform full size optimization. However, in this case, the optimizer looks at all the code sequences generated by the compiler and identifies those code sequences that can be placed in a subroutine. A really good compiler will do this recursively, in the sense that it will replace a code sequence with a subroutine call, and that subroutine will in turn call another subroutine and so on. The results can be a dramatic reduction in code size – albeit at the cost of a potentially big increase in demand on the call stack.

Now clearly in order to take maximal advantage of this compiler optimization, it’s essential that the compiler see the same code sequences over and over again. You can maximize the likelihood of this occurring by being completely consistent in your coding style. Some examples:

  • When making function calls, keep the parameter order consistent. For example, if you call a lot of functions with two parameters such as a uint8_t and a uint16_t, then ensure that all of those functions declare the parameters in the same order (a short C sketch follows this list).
  • If most of your variables are 16-bit, with just a handful being 8-bit, then you may find you get a code size reduction if you convert all your variables to 16 bits.
  • Don’t flip randomly between case statements and if-else-if chains.
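
As a minimal C sketch of the first bullet (the function names below are invented for illustration), note that every two-parameter function takes its uint8_t argument first and its uint16_t argument second, so every call site sets up its arguments the same way and the optimizer has repeated sequences it can factor into a subroutine:

#include <stdint.h>

/* Consistent parameter order throughout: uint8_t first, uint16_t second. */
static void set_dac(uint8_t channel, uint16_t value)  { (void)channel; (void)value; /* ... */ }
static void set_pwm(uint8_t channel, uint16_t value)  { (void)channel; (void)value; /* ... */ }
static void log_event(uint8_t code, uint16_t value)   { (void)code;    (void)value; /* ... */ }

void update_outputs(void)
{
    set_dac(0U, 512U);      /* same pattern of argument-passing code here ... */
    set_pwm(1U, 1024U);     /* ... here ...                                   */
    log_event(2U, 42U);     /* ... and here.                                  */
}

/* By contrast, declaring just one function the other way around, e.g.
 *     void set_pwm(uint16_t value, uint8_t channel);
 * forces a different register/stack shuffle at its call sites and denies
 * the optimizer a sequence it can share. */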

Quite apart from the fact that being completely consistent can save you a lot of code space, I also think that code that is extremely consistent in its style has other merits as well, not the least of which is readability.

As a final note, does anyone know the formal name for this type of optimization?


Minimizing memory use in embedded systems Tip #1 – Eliminate unnecessary strings

Wednesday, August 26th, 2009 Nigel Jones

I already have a series of tips on efficient C, another on effective C and a third on lowering the power consumption of embedded systems. Today I’m introducing a fourth series of tips related to minimizing memory usage in embedded systems.

Now back when I was a lad, the single biggest issue in an embedded system was nearly always a lack of memory, and as a result one had to quickly learn how to husband this resource with great care. Fast forward 20 years and this notion probably seems quite quaint to those of you programming ARM systems with 16 Mbytes of Flash and 64 Mbytes of RAM. So what’s the motivation for this post then? Well, despite the presence of gigantic memory systems in many embedded systems, it’s still surprisingly common to find oneself in a situation where memory is being gobbled up at an alarming rate. Anyone who has programmed an 8051 or an 8-bit PIC recently will know exactly what I’m talking about. So for those of you who find yourselves in this situation, I hope that you’ll find this series informative.

Enough preamble – on to business. The first tip is quite simple – eliminate unnecessary strings. Even if your reaction is ‘well that’s useless – I don’t have any strings in my code’, I still suggest you read on.

In order to eliminate unnecessary strings, the first step is to determine the list of strings in your code. You can of course pore over your source code. However, a far better approach is to scan the binary image looking for strings. Somewhat amazingly, I actually use a utility called ‘strings.exe’ that is supplied by Microsoft. It’s available here. I like this program because you can search for ASCII and/or Unicode strings, while also controlling the minimum number of matching characters. (Please note that this utility is intended to scan a pure binary file. Intel Hex, S-records, etc. don’t cut it.)

If you do this, then you may of course find no strings – and I apologize for wasting your time. However, even if your program is supposed to be string free, you may well find things such as:

  • Copyright notices
  • Strings associated with assert statements
  • Other compiler artifacts, such as path names

The latter two tend to arise if any code references the __FILE__ macro or its brethren. Of course, working out how to eliminate these strings can be challenging – and in the case of copyright notices may violate the terms of a license agreement – so don’t get too aggressive.

If your code does contain intentional strings, then you have several opportunities to reduce their footprint. The obvious method of making the strings more terse is of course an excellent thing to do. Less obvious is that you may find that you have multiple strings that are very similar – particularly if multiple people are working on a project. For example, I’ve recently seen code that contained a dozen variations on the string “Malloc failed”, such as:

  • Malloc failed
  • Malloc Failed
  • Malloc error
  • Etc

Now, the robust way to handle this is of course to ban inline strings and instead place them all in a string file, so that someone needing to use a string can simply reuse one that already exists. If this strikes you as too much work, then you may be interested to know that there are some linkers out there that will recognize duplicate strings and collapse them down to a single entry. However, to get this benefit, the strings need to be absolutely identical. Searching the binary image as I have described is a great way of identifying strings that will benefit from this manual optimization.
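
To make the string file idea concrete, here is a minimal C sketch; the file names, identifiers and the report_error() function are invented for illustration. Each message is defined exactly once, and every module that needs it refers to that single definition:

/* strings.h – hypothetical central string file */
#ifndef STRINGS_H
#define STRINGS_H

extern const char STR_MALLOC_FAILED[];
extern const char STR_BAD_CHECKSUM[];

#endif

/* strings.c – the one and only definition of each message */
#include "strings.h"

const char STR_MALLOC_FAILED[] = "Malloc failed";
const char STR_BAD_CHECKSUM[]  = "Bad checksum";

/* error.c – callers reuse the shared string rather than spelling out
   near-duplicate literals of their own */
#include "strings.h"

extern void report_error(const char *msg);   /* hypothetical reporter */

void handle_alloc_failure(void)
{
    report_error(STR_MALLOC_FAILED);
}

Because every caller now references literally the same array, the text is stored once whether or not your linker is smart enough to merge duplicate literals.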