Debugging and profiling in Linux

Here is a simple way to do post-mortem debugging of a program that crashes. First, enable the saving of core files by running the following command in the shell before starting the program you want to test:

ulimit -c unlimited

When the program crashes, Linux creates a core file named core in the program's working directory. (On many modern distributions the name and location are controlled by /proc/sys/kernel/core_pattern, so the file may end up elsewhere.)

You can examine the core file with gdb:

gdb -c core programbinary

In gdb, run the command bt (backtrace).

The amount of detail you get depends on how the program was compiled: whether it contains debug information (compiled with -g) or has been stripped.

When you get the program running properly, the next task is to optimize it. First you need to know in which parts of the code the CPU spends most of its time. Profiling gives you this information. The GNU profiling tools are useful to know: GNU gprof is quite an easy tool to use and worth checking out.

First we need to make sure that profiling is enabled when the code is compiled. This is done by adding the -pg option to the compilation command:

gcc -Wall -pg myprogram.c -o myprogram

When the program execution ends (either naturally or after being terminated cleanly), it produces a gmon.out file in the program's directory. Now run:

gprof myprogram

The output gives you plenty of detail about how the program ran on the system and which functions take most of the execution time. Check the gprof documentation for information on how to read the output.

5 Comments

  1. Tomi Engdahl says:

    Debugging a core file also works like this:

    ulimit -c unlimited

    * crash the program *

    gdb program core
    bt

    Reply
  2. Tomi Engdahl says:

    gcc: Some Assembly Required
    http://goo.gl/UdIBhB

    The Profiler

    Sometimes it is obvious what’s taking time in your programs. When it isn’t, you can actually turn on profiling. If you are running GCC under Linux, for example, you can use the -pg option to have GCC add profiling instrumentation to your code automatically.

    You can display the execution statistics using gprof.

    Assembly

    If you start with a C or C++ program, one thing you can do is ask the compiler to output assembly language for you. With GCC, use a file name like test.s with the -o option and then use -S to force assembly language output. The output isn’t great, but it is readable. You can also use the -ahl option to get assembly code mixed with source code in comments, which is useful.

    If you find a function or section of code you want to rewrite, you can still use GCC and just stick the assembly language inline. Exactly how that works depends on what platform you use.
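    A quick sketch of the options mentioned above (the file name test.c is invented for illustration):

    ```shell
    cat > test.c <<'EOF'
    /* tiny function to inspect in assembly */
    int add(int a, int b) { return a + b; }
    EOF

    # -S stops after compilation and emits assembly instead of an object file
    gcc -S test.c -o test.s

    # Pass the assembler's -ahl listing option through -Wa to get
    # assembly mixed with the source lines (needs -g for the source part)
    gcc -c -g -Wa,-ahl=test.lst test.c -o test.o
    ```

    test.s now contains readable assembly for add(), and test.lst interleaves it with the C source.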

    Reply
  3. Tomi Engdahl says:

    Statistical profiling aids code understanding

    http://www.edn.com/electronics-blogs/embedded-basics/4442393/Statistical-profiling-aids-code-understanding

    The second the Run button is pressed on the IDE, the microcontroller begins executing millions of instructions a second. But what functions are executing and how often? How much time is spent idling the processor vs code execution? Nobody knows! Statistical profiling can help answer these basic questions.

    For years, engineers could only guess at how their code was actually executing or, when forced to, instrumented their code with complex and time-consuming setups to answer basic and fundamental system questions. Happily, engineers today can use statistical profiling to find the answers. Statistical profiling is a method for estimating which functions are executing on the microcontroller and what their load is on the processor.

    Modern 32-bit architectures, such as the ARM Cortex-M, contain a mechanism known as the Serial Wire Viewer (SWV) for sending such information over the Serial Wire Debugger. The debug hardware has the ability to periodically sample the Program Counter (PC) register and transmit its value over the debug probe to the host development environment. The host can then take the PC value and correlate it with the line of code, and therefore the function, that is being executed.
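    The sampling idea itself can be demonstrated on a desktop Linux box. The sketch below is a toy, not a real PC-sampling profiler: a SIGPROF timer fires periodically, and instead of the handler reading the program counter (which is architecture-specific), each function tags a global variable so the handler can record which one was running. All names are invented for illustration:

    ```shell
    cat > sampler.c <<'EOF'
    /* Toy sketch of statistical profiling: a timer interrupt fires
     * periodically and records which function is currently running.
     * Real profilers sample the program counter; here each function
     * tags itself in a global, which keeps the sketch portable. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>

    enum { F_IDLE, F_HOT, F_COLD, F_COUNT };
    static volatile sig_atomic_t current = F_IDLE;
    static volatile long samples[F_COUNT];

    static void on_tick(int sig) { (void)sig; samples[current]++; }

    static void hot(void) {
        current = F_HOT;
        for (volatile long i = 0; i < 300000000L; i++) ;  /* ~30x cold's work */
        current = F_IDLE;
    }

    static void cold(void) {
        current = F_COLD;
        for (volatile long i = 0; i < 10000000L; i++) ;
        current = F_IDLE;
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = on_tick;
        sigaction(SIGPROF, &sa, NULL);

        /* Sample roughly 100 times per second of consumed CPU time */
        struct itimerval it = { {0, 10000}, {0, 10000} };
        setitimer(ITIMER_PROF, &it, NULL);

        hot();
        cold();

        printf("hot=%ld\ncold=%ld\n",
               (long)samples[F_HOT], (long)samples[F_COLD]);
        return 0;
    }
    EOF
    gcc -Wall -O0 sampler.c -o sampler
    ./sampler
    ```

    The sample counts should land overwhelmingly on hot(), mirroring how a PC-sampling profiler estimates per-function CPU load from where the samples fall.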

    Reply
  4. Tomi Engdahl says:

    Linux in 2016 catches up to Solaris from 2004
    Veteran dev says timed sampling’s arrival in Berkeley Packet Filter makes Linux 4.9 a match for Solaris’ DTrace
    http://www.theregister.co.uk/2016/11/01/linux_in_2016_catches_up_to_solaris_from_2004/

    In 2004 former Reg hack Ashlee Vance brought us news of DTrace, a handy addition to Solaris 10 that “gives administrators thousands upon thousands of ways to check on a system’s performance and then tweak ….production boxes with minimal system impact”. Vance was excited about the code because “it can help fix problems from the kernel level on up to the user level.”

    Vance’s story quoted a chap called Brendan Gregg who enthused about the tool after using it and finding “… DTrace has given me a graph of a hundred points that leaves nothing to the imagination. It did more than just help my program, it helped me understand memory allocation so that I can become a better programmer.”

    As Gregg explains on his blog, Linux has had plenty of tracing tools for a long time, but they were miscellaneous kernel capabilities rather than dedicated tools and didn’t match DTrace’s full list of functions. But over time developers have worked on further tracing tools and Facebook developer Alexei Starovoitov recently offered up some enhancements to the Linux kernel that Gregg feels mean it now matches DTrace.

    DTrace for Linux 2016
    http://www.brendangregg.com/blog/2016-10-27/dtrace-for-linux-2016.html

    Reply
