Archive for January, 2010

Free store is not free lunch

Friday, January 29th, 2010

In my previous post “A Heap of Problems” I compiled a list of problems that the free store (heap) can cause in real-time embedded (RTE) systems. It was quite a litany, and I didn’t even touch on the more subtle problems (for example, the C++ exception handling mechanism can cause memory leaks when a thrown exception bypasses memory de-allocation).

But even though the free store is definitely not a free lunch, getting by without the heap is certainly easier said than done. In C, you will have to rethink implementations that use lists, trees, and other dynamic data structures. You’ll also have to severely limit your choice of third-party libraries and legacy code to reuse (especially code designed for the desktop). In C++, the implications are even more serious, because the object-oriented nature of C++ applications results in much more intensive dynamic-memory use than in applications using procedural techniques. For example, most standard C++ libraries (e.g., STL, Boost, etc.) require the heap. Without it, C++ simply does not feel like the same language.
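
To make this concrete, here is a minimal sketch (in C) of the kind of rethinking involved: a singly-linked list whose nodes come from a statically allocated pool instead of malloc(). All of the names (Node, node_pool, node_alloc, node_free) and the pool size are purely illustrative.

    /* Hypothetical fixed-size node pool for a singly-linked list:
     * all storage is reserved at compile time, no malloc()/free(). */
    #include <stddef.h>

    #define POOL_SIZE 32U                 /* illustrative capacity */

    typedef struct Node {
        int          value;
        struct Node *next;
    } Node;

    static Node  node_pool[POOL_SIZE];    /* fixed storage for all nodes */
    static Node *free_list;               /* head of the list of unused nodes */

    void pool_init(void) {                /* chain all nodes into the free list */
        for (size_t i = 0U; i < POOL_SIZE - 1U; ++i) {
            node_pool[i].next = &node_pool[i + 1U];
        }
        node_pool[POOL_SIZE - 1U].next = NULL;
        free_list = &node_pool[0];
    }

    Node *node_alloc(void) {              /* O(1), cannot fragment */
        Node *n = free_list;
        if (n != NULL) {
            free_list = n->next;
        }
        return n;                         /* NULL when the pool is exhausted */
    }

    void node_free(Node *n) {             /* O(1) return to the pool */
        n->next   = free_list;
        free_list = n;
    }

Allocation and deallocation are O(1) and cannot fragment anything, but the capacity is fixed at compile time; that is exactly the trade-off the rest of this post is about.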

Here are a few common-sense guidelines for dealing with the heap:

1. For smaller systems, such as microcontrollers with only on-chip RAM, you probably don’t want to open the heap can of worms at all. The problems and waste that go with the heap simply aren’t worth the trouble.

For systems with sufficient RAM, such as processors with megabytes of external DRAM, trading some of this cheap RAM for convenience in programming might be a reasonable deal. In the following discussion I assume that the system is big enough to run under a preemptive RTOS.

2. The simplest option is to limit the use of the heap to just one task. In this case, the heap is not shared concurrently and needs no mutual-exclusion protection. To limit the non-determinism of the heap, I would recommend assigning a low priority to the task that uses the heap; this priority should be lower than that of any real-time task.
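
As one possible illustration of this guideline, the following sketch creates a single low-priority “housekeeping” thread that is the only part of the system allowed to call malloc() and free(). It assumes an RTOS or OS that provides POSIX threads with SCHED_FIFO priorities; the function and thread names are hypothetical.

    /* Hypothetical "heap owner" thread: created at the lowest SCHED_FIFO
     * priority so it never preempts the real-time tasks. Assumes a
     * POSIX-threads environment. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdlib.h>

    static void *housekeeping_task(void *arg) {
        (void)arg;
        for (;;) {
            /* ... the only code in the system that calls malloc()/free() ... */
        }
        return NULL;
    }

    int start_housekeeping(void) {
        pthread_t          tid;
        pthread_attr_t     attr;
        struct sched_param param;
        int                err;

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        param.sched_priority = sched_get_priority_min(SCHED_FIFO); /* below all RT tasks */
        pthread_attr_setschedparam(&attr, &param);

        err = pthread_create(&tid, &attr, housekeeping_task, NULL);
        pthread_attr_destroy(&attr);
        return err;
    }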

3. At the expense of introducing mutual-exclusion protection (e.g., a mutex) around *all* heap operations, you can allow more than one task to use the heap. However, I would still strongly recommend against using the heap in any tasks with real-time deadlines. All tasks that use the heap should run at a lower priority than any of the real-time tasks.
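
For example, the heap operations can be funneled through thin wrapper functions that take a mutex around every call (some C runtimes and RTOS-provided allocators already do this internally). The sketch below uses a POSIX mutex as a stand-in for whatever locking primitive your RTOS provides; the wrapper names are made up.

    /* Hypothetical mutex-protected heap wrappers. */
    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

    void *safe_malloc(size_t size) {
        void *p;
        pthread_mutex_lock(&heap_lock);   /* callers may block here */
        p = malloc(size);
        pthread_mutex_unlock(&heap_lock);
        return p;
    }

    void safe_free(void *p) {
        pthread_mutex_lock(&heap_lock);
        free(p);
        pthread_mutex_unlock(&heap_lock);
    }

Any task calling these wrappers can block on the mutex, which is one more reason to keep them out of real-time tasks and, per the next guideline, out of ISRs.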

4. In any case, the heap should never be used inside interrupt service routines (ISRs).

In summary, using the heap in real-time embedded (RTE) systems requires extra thought and discipline, and you should always make sure that the heap is correctly integrated with your runtime environment.

A Heap of Problems

Sunday, January 24th, 2010

Some design problems never seem to go away. You would think that anybody who has been in the embedded software development business for a while must have learned to be wary of malloc() and free() (or their C++ counterparts new and delete). Then you find that many developers actually don’t know why embedded real-time systems are so particularly intolerant of heap problems.

For example, recently an Embedded.com reader attacked my comment on the article “Back to the Basics – Practical Embedded Coding Tips: Part 1 Reentrancy, atomic variables and recursion”, in which I advised against using the heap. Here is the reader’s argument:

I have no idea why did you bring up the pledge not to use the heap, on modern 32-bit MCUs (ARMs etc) there is no reason – and no justification – to avoid using the heap. The only reason not to use the heap is to avoid memory fragmentation, but good heap implementation and careful memory allocation planning will overcome that.

As I could not disagree more with the statements above, I decided that it’s perhaps time to re-post my “heap of problems” list, which goes as follows:

  • Dynamically allocating and freeing memory can fragment the heap over time to the point that the program crashes because of an inability to allocate more RAM. The total remaining heap storage might be more than adequate, but no single piece satisfies a specific malloc() request (see the toy example after this list).
  • Heap-based memory management is wasteful. All heap management algorithms must maintain some form of header information for each block allocated. At the very least, this information includes the size of the block. For example, if the header causes a four-byte overhead, then a four-byte allocation requires at least eight bytes, so only 50 percent of the allocated memory is usable by the application. Because of this overhead and the aforementioned fragmentation, determining the minimum size of the heap is difficult. Even if you were to know the worst-case mix of objects simultaneously allocated on the heap (which you typically don’t), the required heap storage is much more than a simple sum of the object sizes. As a result, the only practical way to make the heap more reliable is to massively oversize it.
  • Both malloc() and free() can be (and often are) nondeterministic, meaning that they potentially can take a long (hard to quantify) time to execute, which conflicts squarely with real-time constraints. Although many RTOSs have heap management algorithms with bounded, or even deterministic performance, they don’t necessarily handle multiple small allocations efficiently.
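
To illustrate the fragmentation point above, here is a toy C program with arbitrary sizes. On a desktop OS the heap can usually grow, so the final allocation may still succeed; on an embedded target with a fixed-size heap it fails even though enough total free memory remains.

    /* Toy fragmentation demo: plenty of total free space, but no single
     * free block is big enough for the last request. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        enum { N = 8 };
        void *p[N];

        for (int i = 0; i < N; ++i) {     /* fill the heap with 1 KB blocks */
            p[i] = malloc(1024);
        }
        for (int i = 0; i < N; i += 2) {  /* free every other block: ~4 KB free, */
            free(p[i]);                   /* but only in non-adjacent 1 KB holes  */
        }
        if (malloc(3 * 1024) == NULL) {   /* a 3 KB request may now fail even     */
            printf("allocation failed despite enough total free space\n");
        }                                 /* though 4 KB is nominally available   */
        return 0;
    }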

Unfortunately, the list of heap problems doesn’t stop there. A new class of problems appears when you use the heap in a multithreaded environment. The heap becomes a shared resource and consequently causes all the headaches associated with resource sharing, so the list goes on:

  • Both malloc() and free() can be (and often are) non-reentrant; that is, they cannot be safely called simultaneously from multiple threads of execution.
  • The reentrancy problem can be remedied by protecting malloc(), free(), realloc(), and so on internally with a mutex, which lets only one thread at a time access the shared heap. However, this scheme could cause excessive blocking of threads (especially if memory management is nondeterministic) and can significantly reduce parallelism. Mutexes can also be subject to priority inversion. Naturally, the heap management functions protected by a mutex are not available to interrupt service routines (ISRs) because ISRs cannot block.

Finally, all the problems listed previously come on top of the usual pitfalls associated with dynamic memory allocation. For completeness, I’ll mention them here as well.

  • If you destroy all pointers to an object and fail to free it or you simply leave objects lying about well past their useful lifetimes, you create a memory leak. If you leak enough memory, your storage allocation eventually fails.
  • Conversely, if you free a heap object but the rest of the program still believes that pointers to the object remain valid, you have created dangling pointers. If you dereference such a dangling pointer to access the recycled object (which by that time might be already allocated to somebody else), your application can crash (see the short example after this list).
  • Most of the heap-related problems are notoriously difficult to test. For example, a brief bout of testing often fails to uncover a storage leak that kills a program after a few hours, or weeks, of operation. Similarly, exceeding a real-time deadline because of nondeterminism can show up only when the heap reaches a certain fragmentation pattern. These types of problems are extremely difficult to reproduce.
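
To illustrate the dangling-pointer point, here is a deliberately broken C snippet. Its behavior is undefined: depending on the allocator and on timing, the stale pointer may appear to work in a quick test and fail only after hours in the field, which is exactly what makes this class of bugs so hard to catch.

    /* Deliberate bug: 'alias' dangles after free() and may silently refer
     * to whatever the heap hands out next. */
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf   = malloc(16);
        char *alias = buf;                /* second pointer to the same block */

        strcpy(buf, "hello");
        free(buf);                        /* block returned to the heap */

        char *other = malloc(16);         /* may reuse the freed block */
        strcpy(other, "world");

        /* Undefined behavior: reading through 'alias' may show "world",
         * garbage, or crash, and a brief test run may not catch it at all. */
        return alias[0];
    }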