Compiler optimization for embedded system software

Even with the advent of the first general-purpose computers in the
1940s, a need arose for machines designed to perform a few dedicated
tasks in real time. This need gave birth to the world's first embedded
systems. In 1961, Charles Stark Draper developed the Apollo Guidance
Computer at the MIT Instrumentation Lab, which is generally
recognized as the first modern embedded computer system [6]. Today's
definition of an embedded system would include the use of a
microprocessor, which became commercially feasible around 1971 [4],
allowing small computers to aid in making a phone call, performing
surgery, or playing a game.
As embedded processor architectures have become more complicated,
programmers have come to depend on the compiler's knowledge of the
processor's instruction set, pipelines, and complex memory system.
It is a common misconception that faster, more complex processors
diminish the need for better compilers; on the contrary, compiler
technology must advance to take advantage of new processor features.
Using a good compiler well can not only make code smaller and faster
but also bring financial gains during development in three basic ways:
1. Memory Usage
By decreasing code size, the compiler reduces the amount of memory
the system ultimately needs. Although memory has become significantly
cheaper over time, it remains one of the most expensive components of
an embedded system.
2. Processor Selection
Increasing the performance of the software enables engineers to
use slower, more cost-efficient processors.
3. Time to Market
As compilers become more powerful, time-consuming hand optimization
becomes less important. It has also become essential to write
readable, modularly structured, and maintainable code for the sake of
portability and reuse.