Current design methodologies, whether for software or hardware, are in constant pursuit of making applications faster. In the office PC environment there is more than enough power for typical business applications, so they can be written with tools that are not optimized for execution speed (for example Java, .NET, scripting languages, web applications, etc.). In a modern PC environment, accelerating an application typically means techniques such as multithreading, parallel processing, or reconfiguration.
Embedded systems are a different story. In many SOC (system-on-chip) applications, signal processing consumes the majority of the execution time. Using general-purpose mathematical libraries for processing data is expensive in terms of both CPU time and memory usage. This article on acceleration of program execution shows how to speed up program execution beyond what you get with standard libraries. It is written from an electrical engineer's point of view and focuses on implementation in a given processor and the tradeoffs among implementation methodologies.
One popular technique for speeding up the mathematics is to use lookup tables. Lookup tables trade memory for execution time: precomputed results are stored once and then retrieved with a simple array access instead of being recomputed. There is always a tradeoff between execution time and design time, and between accuracy and memory usage. The practical case studies in this article show that the lookup-table method accelerates program execution considerably. In embedded applications, accelerating program execution usually means that you can use a cheaper microprocessor and/or consume less power than would be possible with an unoptimized program.