Diffstat (limited to 'gprof/gprof.texi')
-rw-r--r--   gprof/gprof.texi   2028
1 file changed, 0 insertions, 2028 deletions
diff --git a/gprof/gprof.texi b/gprof/gprof.texi
deleted file mode 100644
index b4606b42f4b..00000000000
--- a/gprof/gprof.texi
+++ /dev/null
@@ -1,2028 +0,0 @@
-\input texinfo @c -*-texinfo-*-
-@setfilename gprof.info
-@settitle GNU gprof
-@setchapternewpage odd
-
-@ifinfo
-@c This is a dir.info fragment to support semi-automated addition of
-@c manuals to an info tree. zoo@cygnus.com is developing this facility.
-@format
-START-INFO-DIR-ENTRY
-* gprof: (gprof). Profiling your program's execution
-END-INFO-DIR-ENTRY
-@end format
-@end ifinfo
-
-@ifinfo
-This file documents the gprof profiler of the GNU system.
-
-Copyright (C) 1988, 1992, 1997, 1998, 1999 Free Software Foundation, Inc.
-
-Permission is granted to make and distribute verbatim copies of
-this manual provided the copyright notice and this permission notice
-are preserved on all copies.
-
-@ignore
-Permission is granted to process this file through TeX and print the
-results, provided the printed document carries copying permission
-notice identical to this one except for the removal of this paragraph
-(this paragraph not being relevant to the printed manual).
-
-@end ignore
-Permission is granted to copy and distribute modified versions of this
-manual under the conditions for verbatim copying, provided that the entire
-resulting derived work is distributed under the terms of a permission
-notice identical to this one.
-
-Permission is granted to copy and distribute translations of this manual
-into another language, under the above conditions for modified versions.
-@end ifinfo
-
-@finalout
-@smallbook
-
-@titlepage
-@title GNU gprof
-@subtitle The @sc{gnu} Profiler
-@author Jay Fenlason and Richard Stallman
-
-@page
-
-This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
-can use it to determine which parts of a program are taking most of the
-execution time. We assume that you know how to write, compile, and
-execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
-
-This manual was edited January 1993 by Jeffrey Osier
-and updated September 1997 by Brent Baccala.
-
-@vskip 0pt plus 1filll
-Copyright @copyright{} 1988, 1992, 1997, 1998 Free Software Foundation, Inc.
-
-Permission is granted to make and distribute verbatim copies of
-this manual provided the copyright notice and this permission notice
-are preserved on all copies.
-
-@ignore
-Permission is granted to process this file through TeX and print the
-results, provided the printed document carries copying permission
-notice identical to this one except for the removal of this paragraph
-(this paragraph not being relevant to the printed manual).
-
-@end ignore
-Permission is granted to copy and distribute modified versions of this
-manual under the conditions for verbatim copying, provided that the entire
-resulting derived work is distributed under the terms of a permission
-notice identical to this one.
-
-Permission is granted to copy and distribute translations of this manual
-into another language, under the same conditions as for modified versions.
-
-@end titlepage
-
-@ifinfo
-@node Top
-@top Profiling a Program: Where Does It Spend Its Time?
-
-This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
-can use it to determine which parts of a program are taking most of the
-execution time. We assume that you know how to write, compile, and
-execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
-
-This manual was updated August 1997 by Brent Baccala.
-
-@menu
-* Introduction:: What profiling means, and why it is useful.
-
-* Compiling:: How to compile your program for profiling.
-* Executing:: Executing your program to generate profile data
-* Invoking:: How to run @code{gprof}, and its options
-
-* Output:: Interpreting @code{gprof}'s output
-
-* Inaccuracy:: Potential problems you should be aware of
-* How do I?:: Answers to common questions
-* Incompatibilities:: Incompatibilities between @sc{gnu} @code{gprof} and Unix @code{gprof}.
-* Details:: Details of how profiling is done
-@end menu
-@end ifinfo
-
-@node Introduction
-@chapter Introduction to Profiling
-
-Profiling allows you to learn where your program spent its time and which
-functions called which other functions while it was executing. This
-information can show you which pieces of your program are slower than you
-expected, and might be candidates for rewriting to make your program
-execute faster. It can also tell you which functions are being called more
-or less often than you expected.  This may help you spot bugs that might
-otherwise have gone unnoticed.
-
-Since the profiler uses information collected during the actual execution
-of your program, it can be used on programs that are too large or too
-complex to analyze by reading the source. However, how your program is run
-will affect the information that shows up in the profile data. If you
-don't use some feature of your program while it is being profiled, no
-profile information will be generated for that feature.
-
-Profiling has several steps:
-
-@itemize @bullet
-@item
-You must compile and link your program with profiling enabled.
-@xref{Compiling}.
-
-@item
-You must execute your program to generate a profile data file.
-@xref{Executing}.
-
-@item
-You must run @code{gprof} to analyze the profile data.
-@xref{Invoking}.
-@end itemize
-
-The next three chapters explain these steps in greater detail.
-
-Several forms of output are available from the analysis.
-
-The @dfn{flat profile} shows how much time your program spent in each function,
-and how many times that function was called. If you simply want to know
-which functions burn most of the cycles, it is stated concisely here.
-@xref{Flat Profile}.
-
-The @dfn{call graph} shows, for each function, which functions called it, which
-other functions it called, and how many times. There is also an estimate
-of how much time was spent in the subroutines of each function. This can
-suggest places where you might try to eliminate function calls that use a
-lot of time. @xref{Call Graph}.
-
-The @dfn{annotated source} listing is a copy of the program's
-source code, labeled with the number of times each line of the
-program was executed. @xref{Annotated Source}.
-
-To better understand how profiling works, you may wish to read
-a description of its implementation.
-@xref{Implementation}.
-
-@node Compiling
-@chapter Compiling a Program for Profiling
-
-The first step in generating profile information for your program is
-to compile and link it with profiling enabled.
-
-To compile a source file for profiling, specify the @samp{-pg} option when
-you run the compiler. (This is in addition to the options you normally
-use.)
-
-To link the program for profiling, if you use a compiler such as @code{cc}
-to do the linking, simply specify @samp{-pg} in addition to your usual
-options. The same option, @samp{-pg}, alters either compilation or linking
-to do what is necessary for profiling. Here are examples:
-
-@example
-cc -g -c myprog.c utils.c -pg
-cc -o myprog myprog.o utils.o -pg
-@end example
-
-The @samp{-pg} option also works with a command that both compiles and links:
-
-@example
-cc -o myprog myprog.c utils.c -g -pg
-@end example
-
-If you run the linker @code{ld} directly instead of through a compiler
-such as @code{cc}, you may have to specify a profiling startup file
-@file{gcrt0.o} as the first input file instead of the usual startup
-file @file{crt0.o}. In addition, you would probably want to
-specify the profiling C library, @file{libc_p.a}, by writing
-@samp{-lc_p} instead of the usual @samp{-lc}. This is not absolutely
-necessary, but doing this gives you number-of-calls information for
-standard library functions such as @code{read} and @code{open}. For
-example:
-
-@example
-ld -o myprog /lib/gcrt0.o myprog.o utils.o -lc_p
-@end example
-
-If you compile only some of the modules of the program with @samp{-pg}, you
-can still profile the program, but you won't get complete information about
-the modules that were compiled without @samp{-pg}. The only information
-you get for the functions in those modules is the total time spent in them;
-there is no record of how many times they were called, or from where. This
-will not affect the flat profile (except that the @code{calls} field for
-the functions will be blank), but will greatly reduce the usefulness of the
-call graph.
-
-If you wish to perform line-by-line profiling,
-you will also need to specify the @samp{-g} option,
-instructing the compiler to insert debugging symbols into the program
-that match program addresses to source code lines.
-@xref{Line-by-line}.
-
-In addition to the @samp{-pg} and @samp{-g} options,
-you may also wish to specify the @samp{-a} option when compiling.
-This will instrument
-the program to perform basic-block counting. As the program runs,
-it will count how many times it executed each branch of each @samp{if}
-statement, each iteration of each @samp{do} loop, etc. This will
-enable @code{gprof} to construct an annotated source code
-listing showing how many times each line of code was executed.
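-
-Putting these pieces together, a full build for profiling with
-line-by-line and basic-block support might look like this (a sketch,
-reusing the hypothetical @file{myprog.c} and @file{utils.c} from the
-examples above):
-
-@example
-cc -g -pg -a -c myprog.c utils.c
-cc -g -pg -o myprog myprog.o utils.o
-@end example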
-
-@node Executing
-@chapter Executing the Program
-
-Once the program is compiled for profiling, you must run it in order to
-generate the information that @code{gprof} needs. Simply run the program
-as usual, using the normal arguments, file names, etc. The program should
-run normally, producing the same output as usual. It will, however, run
-somewhat slower than normal because of the time spent collecting and
-writing the profile data.
-
-The way you run the program---the arguments and input that you give
-it---may have a dramatic effect on what the profile information shows. The
-profile data will describe the parts of the program that were activated for
-the particular input you use. For example, if the first command you give
-to your program is to quit, the profile data will show the time used in
-initialization and in cleanup, but not much else.
-
-Your program will write the profile data into a file called @file{gmon.out}
-just before exiting. If there is already a file called @file{gmon.out},
-its contents are overwritten. There is currently no way to tell the
-program to write the profile data under a different name, but you can rename
-the file afterward if you are concerned that it may be overwritten.
-
-In order to write the @file{gmon.out} file properly, your program must exit
-normally: by returning from @code{main} or by calling @code{exit}. Calling
-the low-level function @code{_exit} does not write the profile data, and
-neither does abnormal termination due to an unhandled signal.
-
-The @file{gmon.out} file is written in the program's @emph{current working
-directory} at the time it exits. This means that if your program calls
-@code{chdir}, the @file{gmon.out} file will be left in the last directory
-your program @code{chdir}'d to. If you don't have permission to write in
-this directory, the file is not written, and you will get an error message.
-
-Older versions of the @sc{gnu} profiling library may also write a file
-called @file{bb.out}.  This file, if present, contains a human-readable
-listing of the basic-block execution counts. Unfortunately, the
-appearance of a human-readable @file{bb.out} means the basic-block
-counts didn't get written into @file{gmon.out}.
-The Perl script @code{bbconv.pl}, included with the @code{gprof}
-source distribution, will convert a @file{bb.out} file into
-a format readable by @code{gprof}.
-
-@node Invoking
-@chapter @code{gprof} Command Summary
-
-After you have a profile data file @file{gmon.out}, you can run @code{gprof}
-to interpret the information in it. The @code{gprof} program prints a
-flat profile and a call graph on standard output. Typically you would
-redirect the output of @code{gprof} into a file with @samp{>}.
-
-You run @code{gprof} like this:
-
-@smallexample
-gprof @var{options} [@var{executable-file} [@var{profile-data-files}@dots{}]] [> @var{outfile}]
-@end smallexample
-
-@noindent
-Here, square brackets indicate optional arguments.
-
-If you omit the executable file name, the file @file{a.out} is used. If
-you give no profile data file name, the file @file{gmon.out} is used. If
-any file is not in the proper format, or if the profile data file does not
-appear to belong to the executable file, an error message is printed.
-
-You can give more than one profile data file by entering all their names
-after the executable file name; then the statistics in all the data files
-are summed together.
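-
-For example, if two runs left behind profile data saved under the
-hypothetical names @file{gmon.run1} and @file{gmon.run2}, both could
-be analyzed together like this:
-
-@example
-gprof myprog gmon.run1 gmon.run2 > combined-profile
-@end example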
-
-The order of these options does not matter.
-
-@menu
-* Output Options:: Controlling @code{gprof}'s output style
-* Analysis Options:: Controlling how @code{gprof} analyses its data
-* Miscellaneous Options::
-* Deprecated Options:: Options you no longer need to use, but which
- have been retained for compatibility
-* Symspecs:: Specifying functions to include or exclude
-@end menu
-
-@node Output Options,Analysis Options,,Invoking
-@section Output Options
-
-These options specify which of several output formats
-@code{gprof} should produce.
-
-Many of these options take an optional @dfn{symspec} to specify
-functions to be included or excluded. These options can be
-specified multiple times, with different symspecs, to include
-or exclude sets of symbols. @xref{Symspecs}.
-
-Specifying any of these options overrides the default (@samp{-p -q}),
-which prints a flat profile and call graph analysis
-for all functions.
-
-@table @code
-
-@item -A[@var{symspec}]
-@itemx --annotated-source[=@var{symspec}]
-The @samp{-A} option causes @code{gprof} to print annotated source code.
-If @var{symspec} is specified, print output only for matching symbols.
-@xref{Annotated Source}.
-
-@item -b
-@itemx --brief
-If the @samp{-b} option is given, @code{gprof} doesn't print the
-verbose blurbs that try to explain the meaning of all of the fields in
-the tables. This is useful if you intend to print out the output, or
-are tired of seeing the blurbs.
-
-@item -C[@var{symspec}]
-@itemx --exec-counts[=@var{symspec}]
-The @samp{-C} option causes @code{gprof} to
-print a tally of functions and the number of times each was called.
-If @var{symspec} is specified, print tally only for matching symbols.
-
-If the profile data file contains basic-block count records, specifying
-the @samp{-l} option, along with @samp{-C}, will cause basic-block
-execution counts to be tallied and displayed.
-
-@item -i
-@itemx --file-info
-The @samp{-i} option causes @code{gprof} to display summary information
-about the profile data file(s) and then exit. The number of histogram,
-call graph, and basic-block count records is displayed.
-
-@item -I @var{dirs}
-@itemx --directory-path=@var{dirs}
-The @samp{-I} option specifies a list of search directories in
-which to find source files.  The environment variable @var{GPROF_PATH}
-can also be used to convey this information.
-This option is used mostly for annotated source output.
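-
-For example, annotated source output for a program whose sources live
-in the hypothetical directories @file{./src} and @file{./lib} might be
-produced like this:
-
-@example
-gprof -A -I ./src -I ./lib myprog > annotated-source
-@end example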
-
-@item -J[@var{symspec}]
-@itemx --no-annotated-source[=@var{symspec}]
-The @samp{-J} option causes @code{gprof} not to
-print annotated source code.
-If @var{symspec} is specified, @code{gprof} prints annotated source,
-but excludes matching symbols.
-
-@item -L
-@itemx --print-path
-Normally, source filenames are printed with the path
-component suppressed. The @samp{-L} option causes @code{gprof}
-to print the full pathname of
-source filenames, which is determined
-from symbolic debugging information in the image file
-and is relative to the directory in which the compiler
-was invoked.
-
-@item -p[@var{symspec}]
-@itemx --flat-profile[=@var{symspec}]
-The @samp{-p} option causes @code{gprof} to print a flat profile.
-If @var{symspec} is specified, print flat profile only for matching symbols.
-@xref{Flat Profile}.
-
-@item -P[@var{symspec}]
-@itemx --no-flat-profile[=@var{symspec}]
-The @samp{-P} option causes @code{gprof} to suppress printing a flat profile.
-If @var{symspec} is specified, @code{gprof} prints a flat profile,
-but excludes matching symbols.
-
-@item -q[@var{symspec}]
-@itemx --graph[=@var{symspec}]
-The @samp{-q} option causes @code{gprof} to print the call graph analysis.
-If @var{symspec} is specified, print call graph only for matching symbols
-and their children.
-@xref{Call Graph}.
-
-@item -Q[@var{symspec}]
-@itemx --no-graph[=@var{symspec}]
-The @samp{-Q} option causes @code{gprof} to suppress printing the
-call graph.
-If @var{symspec} is specified, @code{gprof} prints a call graph,
-but excludes matching symbols.
-
-@item -y
-@itemx --separate-files
-This option affects annotated source output only.
-Normally, @code{gprof} prints annotated source files
-to standard output.  If this option is specified,
-annotated source for a file named @file{path/filename}
-is generated in the file @file{filename-ann}.
-
-@item -Z[@var{symspec}]
-@itemx --no-exec-counts[=@var{symspec}]
-The @samp{-Z} option causes @code{gprof} not to
-print a tally of functions and the number of times each was called.
-If @var{symspec} is specified, print tally, but exclude matching symbols.
-
-@item --function-ordering
-The @samp{--function-ordering} option causes @code{gprof} to print a
-suggested function ordering for the program based on profiling data.
-This option suggests an ordering which may improve paging, TLB, and
-cache behavior for the program on systems which support arbitrary
-ordering of functions in an executable.
-
-The exact details of how to force the linker to place functions
-in a particular order are system dependent and outside the scope of this
-manual.
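-
-A typical invocation simply captures the suggested ordering in a file
-for later use with the system linker (the output file name here is
-hypothetical):
-
-@example
-gprof --function-ordering myprog gmon.out > function.order
-@end example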
-
-@item --file-ordering @var{map_file}
-The @samp{--file-ordering} option causes @code{gprof} to print a
-suggested .o link line ordering for the program based on profiling data.
-This option suggests an ordering which may improve paging, TLB, and
-cache behavior for the program on systems which do not support arbitrary
-ordering of functions in an executable.
-
-Use of the @samp{-a} argument is highly recommended with this option.
-
-The @var{map_file} argument is a pathname to a file which provides
-function name to object file mappings. The format of the file is similar to
-the output of the program @code{nm}.
-
-@smallexample
-@group
-c-parse.o:00000000 T yyparse
-c-parse.o:00000004 C yyerrflag
-c-lang.o:00000000 T maybe_objc_method_name
-c-lang.o:00000000 T print_lang_statistics
-c-lang.o:00000000 T recognize_objc_keyword
-c-decl.o:00000000 T print_lang_identifier
-c-decl.o:00000000 T print_lang_type
-@dots{}
-
-@end group
-@end smallexample
-
-@sc{gnu} @code{nm}, invoked with the @samp{--extern-only},
-@samp{--defined-only}, @samp{-v}, and @samp{--print-file-name} options,
-can be used to create @var{map_file}.
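-
-For example, assuming the program was built from object files matching
-@file{*.o} in the current directory, the map file could be created and
-used like this (file names hypothetical):
-
-@example
-nm --extern-only --defined-only -v --print-file-name *.o > map_file
-gprof --file-ordering map_file myprog gmon.out > file.order
-@end example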
-
-@item -T
-@itemx --traditional
-The @samp{-T} option causes @code{gprof} to print its output in
-``traditional'' BSD style.
-
-@item -w @var{width}
-@itemx --width=@var{width}
-Sets width of output lines to @var{width}.
-Currently only used when printing the function index at the bottom
-of the call graph.
-
-@item -x
-@itemx --all-lines
-This option affects annotated source output only.
-By default, only the lines at the beginning of a basic-block
-are annotated. If this option is specified, every line in
-a basic-block is annotated by repeating the annotation for the
-first line. This behavior is similar to @code{tcov}'s @samp{-a}.
-
-@item --demangle
-@itemx --no-demangle
-These options control whether C++ symbol names should be demangled when
-printing output. The default is to demangle symbols. The
-@code{--no-demangle} option may be used to turn off demangling.
-
-@end table
-
-@node Analysis Options,Miscellaneous Options,Output Options,Invoking
-@section Analysis Options
-
-@table @code
-
-@item -a
-@itemx --no-static
-The @samp{-a} option causes @code{gprof} to suppress the printing of
-statically declared (private) functions. (These are functions whose
-names are not listed as global, and which are not visible outside the
-file/function/block where they were defined.) Time spent in these
-functions, calls to/from them, etc., will all be attributed to the
-function that was loaded directly before each of them in the executable file.
-@c This is compatible with Unix @code{gprof}, but a bad idea.
-This option affects both the flat profile and the call graph.
-
-@item -c
-@itemx --static-call-graph
-The @samp{-c} option causes the call graph of the program to be
-augmented by a heuristic which examines the text space of the object
-file and identifies function calls in the binary machine code.
-Since normal call graph records are only generated when functions are
-entered, this option identifies children that could have been called,
-but never were. Calls to functions that were not compiled with
-profiling enabled are also identified, but only if symbol table
-entries are present for them.
-Calls to dynamic library routines are typically @emph{not} found
-by this option.
-Parents or children identified via this heuristic
-are indicated in the call graph with call counts of @samp{0}.
-
-@item -D
-@itemx --ignore-non-functions
-The @samp{-D} option causes @code{gprof} to ignore symbols which
-are not known to be functions. This option will give more accurate
-profile data on systems where it is supported (Solaris and HPUX for
-example).
-
-@item -k @var{from}/@var{to}
-The @samp{-k} option allows you to delete from the call graph any arcs from
-symbols matching symspec @var{from} to those matching symspec @var{to}.
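-
-For example, to remove from the call graph the arcs from a hypothetical
-function @code{boring} to a hypothetical function @code{helper}:
-
-@example
-gprof -k boring/helper myprog gmon.out > report
-@end example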
-
-@item -l
-@itemx --line
-The @samp{-l} option enables line-by-line profiling, which causes
-histogram hits to be charged to individual source code lines,
-instead of functions.
-If the program was compiled with basic-block counting enabled,
-this option will also identify how many times each line of
-code was executed.
-While line-by-line profiling can help isolate where in a large function
-a program is spending its time, it also significantly increases
-the running time of @code{gprof}, and magnifies statistical
-inaccuracies.
-@xref{Sampling Error}.
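-
-For example, a line-by-line report could be produced like this
-(assuming @file{myprog} was compiled with @samp{-g -pg}):
-
-@example
-gprof -l myprog gmon.out > line-by-line-report
-@end example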
-
-@item -m @var{num}
-@itemx --min-count=@var{num}
-This option affects execution count output only.
-Symbols that are executed less than @var{num} times are suppressed.
-
-@item -n[@var{symspec}]
-@itemx --time[=@var{symspec}]
-The @samp{-n} option causes @code{gprof}, in its call graph analysis,
-to only propagate times for symbols matching @var{symspec}.
-
-@item -N[@var{symspec}]
-@itemx --no-time[=@var{symspec}]
-The @samp{-N} option causes @code{gprof}, in its call graph analysis,
-not to propagate times for symbols matching @var{symspec}.
-
-@item -z
-@itemx --display-unused-functions
-If you give the @samp{-z} option, @code{gprof} will mention all
-functions in the flat profile, even those that were never called, and
-that had no time spent in them. This is useful in conjunction with the
-@samp{-c} option for discovering which routines were never called.
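-
-For example, the following hypothetical invocation lists every
-function, including those never called, with statically identified
-call arcs:
-
-@example
-gprof -c -z myprog gmon.out > full-report
-@end example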
-
-@end table
-
-@node Miscellaneous Options,Deprecated Options,Analysis Options,Invoking
-@section Miscellaneous Options
-
-@table @code
-
-@item -d[@var{num}]
-@itemx --debug[=@var{num}]
-The @samp{-d @var{num}} option specifies debugging options.
-If @var{num} is not specified, all debugging options are enabled.
-@xref{Debugging}.
-
-@item -O@var{name}
-@itemx --file-format=@var{name}
-Selects the format of the profile data files. Recognized formats are
-@samp{auto} (the default), @samp{bsd}, @samp{4.4bsd}, @samp{magic}, and
-@samp{prof} (not yet supported).
-
-@item -s
-@itemx --sum
-The @samp{-s} option causes @code{gprof} to summarize the information
-in the profile data files it reads in, and to write out a profile data
-file called @file{gmon.sum} containing all of that information.
-The file @file{gmon.sum} may be one of the specified input files; the
-effect of this is to merge the data in the other input files into
-@file{gmon.sum}.
-
-You can then run @code{gprof} again without @samp{-s} to analyze the
-cumulative data in the file @file{gmon.sum}.
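-
-For example, with hypothetical file names:
-
-@example
-gprof -s myprog gmon.out gmon.sum
-gprof myprog gmon.sum > cumulative-report
-@end example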
-
-@item -v
-@itemx --version
-The @samp{-v} flag causes @code{gprof} to print the current version
-number, and then exit.
-
-@end table
-
-@node Deprecated Options,Symspecs,Miscellaneous Options,Invoking
-@section Deprecated Options
-
-These options have been replaced with newer versions that use symspecs.
-
-@table @code
-
-@item -e @var{function_name}
-The @samp{-e @var{function}} option tells @code{gprof} to not print
-information about the function @var{function_name} (and its
-children@dots{}) in the call graph. The function will still be listed
-as a child of any functions that call it, but its index number will be
-shown as @samp{[not printed]}. More than one @samp{-e} option may be
-given; only one @var{function_name} may be indicated with each @samp{-e}
-option.
-
-@item -E @var{function_name}
-The @samp{-E @var{function}} option works like the @samp{-e} option, but
-time spent in the function (and in children that were not called from
-anywhere else) will not be used to compute the percentages-of-time for
-the call graph. More than one @samp{-E} option may be given; only one
-@var{function_name} may be indicated with each @samp{-E} option.
-
-@item -f @var{function_name}
-The @samp{-f @var{function}} option causes @code{gprof} to limit the
-call graph to the function @var{function_name} and its children (and
-their children@dots{}). More than one @samp{-f} option may be given;
-only one @var{function_name} may be indicated with each @samp{-f}
-option.
-
-@item -F @var{function_name}
-The @samp{-F @var{function}} option works like the @code{-f} option, but
-only time spent in the function and its children (and their
-children@dots{}) will be used to determine total-time and
-percentages-of-time for the call graph. More than one @samp{-F} option
-may be given; only one @var{function_name} may be indicated with each
-@samp{-F} option. The @samp{-F} option overrides the @samp{-E} option.
-
-@end table
-
-Note that only one function can be specified with each @code{-e},
-@code{-E}, @code{-f} or @code{-F} option. To specify more than one
-function, use multiple options. For example, this command:
-
-@example
-gprof -e boring -f foo -f bar myprogram > gprof.output
-@end example
-
-@noindent
-lists in the call graph all functions that were reached from either
-@code{foo} or @code{bar} and were not reachable from @code{boring}.
-
-@node Symspecs,,Deprecated Options,Invoking
-@section Symspecs
-
-Many of the output options allow functions to be included or excluded
-using @dfn{symspecs} (symbol specifications), which observe the
-following syntax:
-
-@example
- filename_containing_a_dot
-| funcname_not_containing_a_dot
-| linenumber
-| ( [ any_filename ] `:' ( any_funcname | linenumber ) )
-@end example
-
-Here are some sample symspecs:
-
-@table @samp
-@item main.c
-Selects everything in file @file{main.c}---the
-dot in the string tells @code{gprof} to interpret
-the string as a filename, rather than as
-a function name. To select a file whose
-name does not contain a dot, a trailing colon
-should be specified. For example, @samp{odd:} is
-interpreted as the file named @file{odd}.
-
-@item main
-Selects all functions named @samp{main}.
-
-Note that there may be multiple instances of the same function name
-because some of the definitions may be local (i.e., static). Unless a
-function name is unique in a program, you must use the colon notation
-explained below to specify a function from a specific source file.
-
-Sometimes, function names contain dots. In such cases, it is necessary
-to add a leading colon to the name. For example, @samp{:.mul} selects
-function @samp{.mul}.
-
-In some object file formats, symbols have a leading underscore.
-@code{gprof} will normally not print these underscores.  However, you must use the
-underscore when you name a symbol in a symspec. You can use the
-@code{nm} program to see whether symbols have underscores for the object
-file format you are using.
-
-@item main.c:main
-Selects function @samp{main} in file @file{main.c}.
-
-@item main.c:134
-Selects line 134 in file @file{main.c}.
-@end table
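-
-A symspec is attached directly to the output option it qualifies.  For
-example, the following hypothetical invocation prints a flat profile
-restricted to @file{main.c} and a call graph restricted to the
-function @code{main} in that file:
-
-@example
-gprof --flat-profile=main.c --graph=main.c:main myprog > report
-@end example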
-
-@node Output
-@chapter Interpreting @code{gprof}'s Output
-
-@code{gprof} can produce several different output styles, the
-most important of which are described below. The simplest output
-styles (file information, execution count, and function and file ordering)
-are not described here, but are documented with the respective options
-that trigger them.
-@xref{Output Options}.
-
-@menu
-* Flat Profile:: The flat profile shows how much time was spent
- executing directly in each function.
-* Call Graph:: The call graph shows which functions called which
- others, and how much time each function used
- when its subroutine calls are included.
-* Line-by-line:: @code{gprof} can analyze individual source code lines
-* Annotated Source:: The annotated source listing displays source code
- labeled with execution counts
-@end menu
-
-
-@node Flat Profile,Call Graph,,Output
-@section The Flat Profile
-@cindex flat profile
-
-The @dfn{flat profile} shows the total amount of time your program
-spent executing each function. Unless the @samp{-z} option is given,
-functions with no apparent time spent in them, and no apparent calls
-to them, are not mentioned. Note that if a function was not compiled
-for profiling, and didn't run long enough to show up on the program
-counter histogram, it will be indistinguishable from a function that
-was never called.
-
-This is part of a flat profile for a small program:
-
-@smallexample
-@group
-Flat profile:
-
-Each sample counts as 0.01 seconds.
- % cumulative self self total
- time seconds seconds calls ms/call ms/call name
- 33.34 0.02 0.02 7208 0.00 0.00 open
- 16.67 0.03 0.01 244 0.04 0.12 offtime
- 16.67 0.04 0.01 8 1.25 1.25 memccpy
- 16.67 0.05 0.01 7 1.43 1.43 write
- 16.67 0.06 0.01 mcount
- 0.00 0.06 0.00 236 0.00 0.00 tzset
- 0.00 0.06 0.00 192 0.00 0.00 tolower
- 0.00 0.06 0.00 47 0.00 0.00 strlen
- 0.00 0.06 0.00 45 0.00 0.00 strchr
- 0.00 0.06 0.00 1 0.00 50.00 main
- 0.00 0.06 0.00 1 0.00 0.00 memcpy
- 0.00 0.06 0.00 1 0.00 10.11 print
- 0.00 0.06 0.00 1 0.00 0.00 profil
- 0.00 0.06 0.00 1 0.00 50.00 report
-@dots{}
-@end group
-@end smallexample
-
-@noindent
-The functions are sorted first by decreasing run-time spent in them,
-then by decreasing number of calls, then alphabetically by name.  The
-functions @samp{mcount} and @samp{profil} are part of the profiling
-apparatus and appear in every flat profile; their time gives a measure of
-the amount of overhead due to profiling.
-
-Just before the column headers, a statement appears indicating
-how much time each sample counted as.
-This @dfn{sampling period} estimates the margin of error in each of the time
-figures. A time figure that is not much larger than this is not
-reliable. In this example, each sample counted as 0.01 seconds,
-suggesting a 100 Hz sampling rate.
-The program's total execution time was 0.06
-seconds, as indicated by the @samp{cumulative seconds} field. Since
-each sample counted for 0.01 seconds, this means only six samples
-were taken during the run.  Two of the samples occurred while the
-program was in the @samp{open} function, as indicated by the
-@samp{self seconds} field.  The other four samples
-occurred once each in @samp{offtime}, @samp{memccpy}, @samp{write},
-and @samp{mcount}.
-Since only six samples were taken, none of these values can
-be regarded as particularly reliable.
-In another run,
-the @samp{self seconds} field for
-@samp{mcount} might well be @samp{0.00} or @samp{0.02}.
-@xref{Sampling Error}, for a complete discussion.
-
-The remaining functions in the listing (those whose
-@samp{self seconds} field is @samp{0.00}) didn't appear
-in the histogram samples at all. However, the call graph
-indicated that they were called, so therefore they are listed,
-sorted in decreasing order by the @samp{calls} field.
-Clearly some time was spent executing these functions,
-but the paucity of histogram samples prevents any
-determination of how much time each took.
-
-Here is what the fields in each line mean:
-
-@table @code
-@item % time
-This is the percentage of the total execution time your program spent
-in this function. These should all add up to 100%.
-
-@item cumulative seconds
-This is the cumulative total number of seconds the computer spent
-executing this function, plus the time spent in all the functions
-above this one in this table.
-
-@item self seconds
-This is the number of seconds accounted for by this function alone.
-The flat profile listing is sorted first by this number.
-
-@item calls
-This is the total number of times the function was called. If the
-function was never called, or the number of times it was called cannot
-be determined (probably because the function was not compiled with
-profiling enabled), the @dfn{calls} field is blank.
-
-@item self ms/call
-This represents the average number of milliseconds spent in this
-function per call, if this function is profiled. Otherwise, this field
-is blank for this function.
-
-@item total ms/call
-This represents the average number of milliseconds spent in this
-function and its descendants per call, if this function is profiled.
-Otherwise, this field is blank for this function.
-This is the only field in the flat profile that uses call graph analysis.
-
-@item name
-This is the name of the function.  The flat profile is sorted
-alphabetically by this field, after sorting by the @dfn{self seconds}
-and @dfn{calls} fields.
-@end table
-
-@node Call Graph,Line-by-line,Flat Profile,Output
-@section The Call Graph
-@cindex call graph
-
-The @dfn{call graph} shows how much time was spent in each function
-and its children. From this information, you can find functions that,
-while they themselves may not have used much time, called other
-functions that did use unusual amounts of time.
-
-Here is a sample call graph from a small program.  This call graph came
-from the same @code{gprof} run as the flat profile example in the
-previous section.
-
-@smallexample
-@group
-granularity: each sample hit covers 2 byte(s) for 20.00% of 0.05 seconds
-
-index % time self children called name
- <spontaneous>
-[1] 100.0 0.00 0.05 start [1]
- 0.00 0.05 1/1 main [2]
- 0.00 0.00 1/2 on_exit [28]
- 0.00 0.00 1/1 exit [59]
------------------------------------------------
- 0.00 0.05 1/1 start [1]
-[2] 100.0 0.00 0.05 1 main [2]
- 0.00 0.05 1/1 report [3]
------------------------------------------------
- 0.00 0.05 1/1 main [2]
-[3] 100.0 0.00 0.05 1 report [3]
- 0.00 0.03 8/8 timelocal [6]
- 0.00 0.01 1/1 print [9]
- 0.00 0.01 9/9 fgets [12]
- 0.00 0.00 12/34 strncmp <cycle 1> [40]
- 0.00 0.00 8/8 lookup [20]
- 0.00 0.00 1/1 fopen [21]
- 0.00 0.00 8/8 chewtime [24]
- 0.00 0.00 8/16 skipspace [44]
------------------------------------------------
-[4] 59.8 0.01 0.02 8+472 <cycle 2 as a whole> [4]
- 0.01 0.02 244+260 offtime <cycle 2> [7]
- 0.00 0.00 236+1 tzset <cycle 2> [26]
------------------------------------------------
-@end group
-@end smallexample
-
-The lines full of dashes divide this table into @dfn{entries}, one for each
-function. Each entry has one or more lines.
-
-In each entry, the primary line is the one that starts with an index number
-in square brackets. The end of this line says which function the entry is
-for. The preceding lines in the entry describe the callers of this
-function and the following lines describe its subroutines (also called
-@dfn{children} when we speak of the call graph).
-
-The entries are sorted by time spent in the function and its subroutines.
-
-The internal profiling function @code{mcount} (@pxref{Flat Profile})
-is never mentioned in the call graph.
-
-@menu
-* Primary:: Details of the primary line's contents.
-* Callers:: Details of caller-lines' contents.
-* Subroutines:: Details of subroutine-lines' contents.
-* Cycles:: When there are cycles of recursion,
- such as @code{a} calls @code{b} calls @code{a}@dots{}
-@end menu
-
-@node Primary
-@subsection The Primary Line
-
-The @dfn{primary line} in a call graph entry is the line that
-describes the function which the entry is about and gives the overall
-statistics for this function.
-
-For reference, we repeat the primary line from the entry for function
-@code{report} in our main example, together with the heading line that
-shows the names of the fields:
-
-@smallexample
-@group
-index % time self children called name
-@dots{}
-[3] 100.0 0.00 0.05 1 report [3]
-@end group
-@end smallexample
-
-Here is what the fields in the primary line mean:
-
-@table @code
-@item index
-Entries are numbered with consecutive integers. Each function
-therefore has an index number, which appears at the beginning of its
-primary line.
-
-Each cross-reference to a function, as a caller or subroutine of
-another, gives its index number as well as its name. The index number
-guides you if you wish to look for the entry for that function.
-
-@item % time
-This is the percentage of the total time that was spent in this
-function, including time spent in subroutines called from this
-function.
-
-The time spent in this function is counted again for the callers of
-this function. Therefore, adding up these percentages is meaningless.
-
-@item self
-This is the total amount of time spent in this function. This
-should be identical to the number printed in the @code{seconds} field
-for this function in the flat profile.
-
-@item children
-This is the total amount of time spent in the subroutine calls made by
-this function. This should be equal to the sum of all the @code{self}
-and @code{children} entries of the children listed directly below this
-function.
-
-@item called
-This is the number of times the function was called.
-
-If the function called itself recursively, there are two numbers,
-separated by a @samp{+}. The first number counts non-recursive calls,
-and the second counts recursive calls.
-
-In the example above, the function @code{report} was called once from
-@code{main}.
-
-@item name
-This is the name of the current function. The index number is
-repeated after it.
-
-If the function is part of a cycle of recursion, the cycle number is
-printed between the function's name and the index number
-(@pxref{Cycles}). For example, if function @code{gnurr} is part of
-cycle number one, and has index number twelve, its primary line would
-end like this:
-
-@example
-gnurr <cycle 1> [12]
-@end example
-@end table
-
-@node Callers, Subroutines, Primary, Call Graph
-@subsection Lines for a Function's Callers
-
-A function's entry has a line for each function it was called by.
-These lines' fields correspond to the fields of the primary line, but
-their meanings are different because of the difference in context.
-
-For reference, we repeat two lines from the entry for the function
-@code{report}, the primary line and one caller-line preceding it, together
-with the heading line that shows the names of the fields:
-
-@smallexample
-index % time self children called name
-@dots{}
- 0.00 0.05 1/1 main [2]
-[3] 100.0 0.00 0.05 1 report [3]
-@end smallexample
-
-Here are the meanings of the fields in the caller-line for @code{report}
-called from @code{main}:
-
-@table @code
-@item self
-An estimate of the amount of time spent in @code{report} itself when it was
-called from @code{main}.
-
-@item children
-An estimate of the amount of time spent in subroutines of @code{report}
-when @code{report} was called from @code{main}.
-
-The sum of the @code{self} and @code{children} fields is an estimate
-of the amount of time spent within calls to @code{report} from @code{main}.
-
-@item called
-Two numbers: the number of times @code{report} was called from @code{main},
-followed by the total number of nonrecursive calls to @code{report} from
-all its callers.
-
-@item name and index number
-The name of the caller of @code{report} to which this line applies,
-followed by the caller's index number.
-
-Not all functions have entries in the call graph; some
-options to @code{gprof} request the omission of certain functions.
-When a caller has no entry of its own, it still has caller-lines
-in the entries of the functions it calls.
-
-If the caller is part of a recursion cycle, the cycle number is
-printed between the name and the index number.
-@end table
-
-If the identity of the callers of a function cannot be determined, a
-dummy caller-line is printed which has @samp{<spontaneous>} as the
-``caller's name'' and all other fields blank. This can happen for
-signal handlers.
-@c What if some calls have determinable callers' names but not all?
-@c FIXME - still relevant?
-
-@node Subroutines, Cycles, Callers, Call Graph
-@subsection Lines for a Function's Subroutines
-
-A function's entry has a line for each of its subroutines---in other
-words, a line for each other function that it called. These lines'
-fields correspond to the fields of the primary line, but their meanings
-are different because of the difference in context.
-
-For reference, we repeat two lines from the entry for the function
-@code{main}, the primary line and a line for a subroutine, together
-with the heading line that shows the names of the fields:
-
-@smallexample
-index % time self children called name
-@dots{}
-[2] 100.0 0.00 0.05 1 main [2]
- 0.00 0.05 1/1 report [3]
-@end smallexample
-
-Here are the meanings of the fields in the subroutine-line for @code{main}
-calling @code{report}:
-
-@table @code
-@item self
-An estimate of the amount of time spent directly within @code{report}
-when @code{report} was called from @code{main}.
-
-@item children
-An estimate of the amount of time spent in subroutines of @code{report}
-when @code{report} was called from @code{main}.
-
-The sum of the @code{self} and @code{children} fields is an estimate
-of the total time spent in calls to @code{report} from @code{main}.
-
-@item called
-Two numbers, the number of calls to @code{report} from @code{main}
-followed by the total number of nonrecursive calls to @code{report}.
-This ratio is used to determine how much of @code{report}'s @code{self}
-and @code{children} time gets credited to @code{main}.
-@xref{Assumptions}.
-
-@item name
-The name of the subroutine of @code{main} to which this line applies,
-followed by the subroutine's index number.
-
-If the caller is part of a recursion cycle, the cycle number is
-printed between the name and the index number.
-@end table
-
-@node Cycles,, Subroutines, Call Graph
-@subsection How Mutually Recursive Functions Are Described
-@cindex cycle
-@cindex recursion cycle
-
-The graph may be complicated by the presence of @dfn{cycles of
-recursion} in the call graph. A cycle exists if a function calls
-another function that (directly or indirectly) calls (or appears to
-call) the original function. For example: if @code{a} calls @code{b},
-and @code{b} calls @code{a}, then @code{a} and @code{b} form a cycle.
-
-Whenever there are call paths both ways between a pair of functions, they
-belong to the same cycle. If @code{a} and @code{b} call each other and
-@code{b} and @code{c} call each other, all three make one cycle. Note that
-even if @code{b} calls @code{a} only when @code{b} was not itself called
-from @code{a}, @code{gprof} cannot determine this, so @code{a} and
-@code{b} are still considered a cycle.
-
-The cycles are numbered with consecutive integers. When a function
-belongs to a cycle, each time the function name appears in the call graph
-it is followed by @samp{<cycle @var{number}>}.
-
-The reason cycles matter is that they make the time values in the call
-graph paradoxical. The ``time spent in children'' of @code{a} should
-include the time spent in its subroutine @code{b} and in @code{b}'s
-subroutines---but one of @code{b}'s subroutines is @code{a}! How much of
-@code{a}'s time should be included in the children of @code{a}, when
-@code{a} is indirectly recursive?
-
-The way @code{gprof} resolves this paradox is by creating a single entry
-for the cycle as a whole. The primary line of this entry describes the
-total time spent directly in the functions of the cycle. The
-``subroutines'' of the cycle are the individual functions of the cycle, and
-all other functions that were called directly by them. The ``callers'' of
-the cycle are the functions, outside the cycle, that called functions in
-the cycle.
-
-Here is an example portion of a call graph which shows a cycle containing
-functions @code{a} and @code{b}. The cycle was entered by a call to
-@code{a} from @code{main}; both @code{a} and @code{b} called @code{c}.
-
-@smallexample
-index % time self children called name
-----------------------------------------
- 1.77 0 1/1 main [2]
-[3] 91.71 1.77 0 1+5 <cycle 1 as a whole> [3]
- 1.02 0 3 b <cycle 1> [4]
- 0.75 0 2 a <cycle 1> [5]
-----------------------------------------
- 3 a <cycle 1> [5]
-[4] 52.85 1.02 0 0 b <cycle 1> [4]
- 2 a <cycle 1> [5]
- 0 0 3/6 c [6]
-----------------------------------------
- 1.77 0 1/1 main [2]
- 2 b <cycle 1> [4]
-[5] 38.86 0.75 0 1 a <cycle 1> [5]
- 3 b <cycle 1> [4]
- 0 0 3/6 c [6]
-----------------------------------------
-@end smallexample
-
-@noindent
-(The entire call graph for this program contains in addition an entry for
-@code{main}, which calls @code{a}, and an entry for @code{c}, with callers
-@code{a} and @code{b}.)
-
-@smallexample
-index % time self children called name
- <spontaneous>
-[1] 100.00 0 1.93 0 start [1]
- 0.16 1.77 1/1 main [2]
-----------------------------------------
- 0.16 1.77 1/1 start [1]
-[2] 100.00 0.16 1.77 1 main [2]
- 1.77 0 1/1 a <cycle 1> [5]
-----------------------------------------
- 1.77 0 1/1 main [2]
-[3] 91.71 1.77 0 1+5 <cycle 1 as a whole> [3]
- 1.02 0 3 b <cycle 1> [4]
- 0.75 0 2 a <cycle 1> [5]
- 0 0 6/6 c [6]
-----------------------------------------
- 3 a <cycle 1> [5]
-[4] 52.85 1.02 0 0 b <cycle 1> [4]
- 2 a <cycle 1> [5]
- 0 0 3/6 c [6]
-----------------------------------------
- 1.77 0 1/1 main [2]
- 2 b <cycle 1> [4]
-[5] 38.86 0.75 0 1 a <cycle 1> [5]
- 3 b <cycle 1> [4]
- 0 0 3/6 c [6]
-----------------------------------------
- 0 0 3/6 b <cycle 1> [4]
- 0 0 3/6 a <cycle 1> [5]
-[6] 0.00 0 0 6 c [6]
-----------------------------------------
-@end smallexample
-
-The @code{self} field of the cycle's primary line is the total time
-spent in all the functions of the cycle. It equals the sum of the
-@code{self} fields for the individual functions in the cycle, found
-in the cycle's entry, in the subroutine lines for these functions.
-
-The @code{children} fields of the cycle's primary line and subroutine lines
-count only subroutines outside the cycle. Even though @code{a} calls
-@code{b}, the time spent in those calls to @code{b} is not counted in
-@code{a}'s @code{children} time. Thus, we do not encounter the problem of
-what to do when the time in those calls to @code{b} includes indirect
-recursive calls back to @code{a}.
-
-The @code{children} field of a caller-line in the cycle's entry estimates
-the amount of time spent @emph{in the whole cycle}, and its other
-subroutines, on the times when that caller called a function in the cycle.
-
-The @code{calls} field in the primary line for the cycle has two numbers:
-first, the number of times functions in the cycle were called by functions
-outside the cycle; second, the number of times they were called by
-functions in the cycle (including times when a function in the cycle calls
-itself). This is a generalization of the usual split into nonrecursive and
-recursive calls.
-
-The @code{calls} field of a subroutine-line for a cycle member in the
-cycle's entry says how many times that function was called from functions in
-the cycle. The total of all these is the second number in the primary line's
-@code{calls} field.
-
-In the individual entry for a function in a cycle, the other functions in
-the same cycle can appear as subroutines and as callers. These lines show
-how many times each function in the cycle called or was called from each other
-function in the cycle. The @code{self} and @code{children} fields in these
-lines are blank because of the difficulty of defining meanings for them
-when recursion is going on.
-
-@node Line-by-line,Annotated Source,Call Graph,Output
-@section Line-by-line Profiling
-
-@code{gprof}'s @samp{-l} option causes the program to perform
-@dfn{line-by-line} profiling. In this mode, histogram
-samples are assigned not to functions, but to individual
-lines of source code. The program usually must be compiled
-with a @samp{-g} option, in addition to @samp{-pg}, in order
-to generate debugging symbols for tracking source code lines.
-
-The flat profile is the most useful output table
-in line-by-line mode.
-The call graph isn't as useful as normal, since
-the current version of @code{gprof} does not propagate
-call graph arcs from source code lines to the enclosing function.
-The call graph does, however, show each line of code
-that called each function, along with a count.
-
-Here is a section of @code{gprof}'s output, without line-by-line profiling.
-Note that @code{ct_init} accounted for four histogram hits, and
-13327 calls to @code{init_block}.
-
-@smallexample
-Flat profile:
-
-Each sample counts as 0.01 seconds.
- % cumulative self self total
- time seconds seconds calls us/call us/call name
- 30.77 0.13 0.04 6335 6.31 6.31 ct_init
-
-
- Call graph (explanation follows)
-
-
-granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds
-
-index % time self children called name
-
- 0.00 0.00 1/13496 name_too_long
- 0.00 0.00 40/13496 deflate
- 0.00 0.00 128/13496 deflate_fast
- 0.00 0.00 13327/13496 ct_init
-[7] 0.0 0.00 0.00 13496 init_block
-
-@end smallexample
-
-Now let's look at some of @code{gprof}'s output from the same program run,
-this time with line-by-line profiling enabled. Note that @code{ct_init}'s
-four histogram hits are broken down into four lines of source code: one hit
-occurred on each of lines 349, 351, 382, and 385.  In the call graph,
-note how
-@code{ct_init}'s 13327 calls to @code{init_block} are broken down
-into one call from line 396, 3071 calls from line 384, 3730 calls
-from line 385, and 6525 calls from line 387.
-
-@smallexample
-Flat profile:
-
-Each sample counts as 0.01 seconds.
- % cumulative self
- time seconds seconds calls name
- 7.69 0.10 0.01 ct_init (trees.c:349)
- 7.69 0.11 0.01 ct_init (trees.c:351)
- 7.69 0.12 0.01 ct_init (trees.c:382)
- 7.69 0.13 0.01 ct_init (trees.c:385)
-
-
- Call graph (explanation follows)
-
-
-granularity: each sample hit covers 4 byte(s) for 7.69% of 0.13 seconds
-
- % time self children called name
-
- 0.00 0.00 1/13496 name_too_long (gzip.c:1440)
- 0.00 0.00 1/13496 deflate (deflate.c:763)
- 0.00 0.00 1/13496 ct_init (trees.c:396)
- 0.00 0.00 2/13496 deflate (deflate.c:727)
- 0.00 0.00 4/13496 deflate (deflate.c:686)
- 0.00 0.00 5/13496 deflate (deflate.c:675)
- 0.00 0.00 12/13496 deflate (deflate.c:679)
- 0.00 0.00 16/13496 deflate (deflate.c:730)
- 0.00 0.00 128/13496 deflate_fast (deflate.c:654)
- 0.00 0.00 3071/13496 ct_init (trees.c:384)
- 0.00 0.00 3730/13496 ct_init (trees.c:385)
- 0.00 0.00 6525/13496 ct_init (trees.c:387)
-[6] 0.0 0.00 0.00 13496 init_block (trees.c:408)
-
-@end smallexample
-
-
-@node Annotated Source,,Line-by-line,Output
-@section The Annotated Source Listing
-
-@code{gprof}'s @samp{-A} option triggers an annotated source listing,
-which lists the program's source code, each function labeled with the
-number of times it was called. You may also need to specify the
-@samp{-I} option, if @code{gprof} can't find the source code files.
-
-Compiling with @samp{gcc @dots{} -g -pg -a} augments your program
-with basic-block counting code, in addition to function counting code.
-This enables @code{gprof} to determine how many times each line
-of code was executed.
-For example, consider the following function, taken from gzip,
-with line numbers added:
-
-@smallexample
- 1 ulg updcrc(s, n)
- 2 uch *s;
- 3 unsigned n;
- 4 @{
- 5 register ulg c;
- 6
- 7 static ulg crc = (ulg)0xffffffffL;
- 8
- 9 if (s == NULL) @{
-10 c = 0xffffffffL;
-11 @} else @{
-12 c = crc;
-13 if (n) do @{
-14 c = crc_32_tab[...];
-15 @} while (--n);
-16 @}
-17 crc = c;
-18 return c ^ 0xffffffffL;
-19 @}
-
-@end smallexample
-
-@code{updcrc} has at least five basic-blocks.
-One is the function itself. The
-@code{if} statement on line 9 generates two more basic-blocks, one
-for each branch of the @code{if}. A fourth basic-block results from
-the @code{if} on line 13, and the contents of the @code{do} loop form
-the fifth basic-block. The compiler may also generate additional
-basic-blocks to handle various special cases.
-
-A program augmented for basic-block counting can be analyzed with
-@code{gprof -l -A}. I also suggest use of the @samp{-x} option,
-which ensures that each line of code is labeled at least once.
-Here is @code{updcrc}'s
-annotated source listing for a sample @code{gzip} run:
-
-@smallexample
- ulg updcrc(s, n)
- uch *s;
- unsigned n;
- 2 ->@{
- register ulg c;
-
- static ulg crc = (ulg)0xffffffffL;
-
- 2 -> if (s == NULL) @{
- 1 -> c = 0xffffffffL;
- 1 -> @} else @{
- 1 -> c = crc;
- 1 -> if (n) do @{
- 26312 -> c = crc_32_tab[...];
-26312,1,26311 -> @} while (--n);
- @}
- 2 -> crc = c;
- 2 -> return c ^ 0xffffffffL;
- 2 ->@}
-@end smallexample
-
-In this example, the function was called twice, passing once through
-each branch of the @code{if} statement. The body of the @code{do}
-loop was executed a total of 26312 times. Note how the @code{while}
-statement is annotated. It began execution 26312 times, once for
-each iteration through the loop. One of those times (the last time)
-it exited, while it branched back to the beginning of the loop 26311 times.
-
-@node Inaccuracy
-@chapter Inaccuracy of @code{gprof} Output
-
-@menu
-* Sampling Error:: Statistical margins of error
-* Assumptions:: Estimating children times
-@end menu
-
-@node Sampling Error,Assumptions,,Inaccuracy
-@section Statistical Sampling Error
-
-The run-time figures that @code{gprof} gives you are based on a sampling
-process, so they are subject to statistical inaccuracy. If a function runs
-only a small amount of time, so that on the average the sampling process
-ought to catch that function in the act only once, there is a pretty good
-chance it will actually find that function zero times, or twice.
-
-By contrast, the number-of-calls and basic-block figures
-are derived by counting, not
-sampling. They are completely accurate and will not vary from run to run
-if your program is deterministic.
-
-The @dfn{sampling period} that is printed at the beginning of the flat
-profile says how often samples are taken. The rule of thumb is that a
-run-time figure is accurate if it is considerably bigger than the sampling
-period.
-
-The actual amount of error can be predicted.
-For @var{n} samples, the @emph{expected} error
-is the square-root of @var{n}. For example,
-if the sampling period is 0.01 seconds and @code{foo}'s run-time is 1 second,
-@var{n} is 100 samples (1 second/0.01 seconds), sqrt(@var{n}) is 10 samples, so
-the expected error in @code{foo}'s run-time is 0.1 seconds (10*0.01 seconds),
-or ten percent of the observed value.
-Again, if the sampling period is 0.01 seconds and @code{bar}'s run-time is
-100 seconds, @var{n} is 10000 samples, sqrt(@var{n}) is 100 samples, so
-the expected error in @code{bar}'s run-time is 1 second,
-or one percent of the observed value.
-It is likely to
-vary this much @emph{on the average} from one profiling run to the next.
-(@emph{Sometimes} it will vary more.)
-
-This does not mean that a small run-time figure is devoid of information.
-If the program's @emph{total} run-time is large, a small run-time for one
-function does tell you that that function used an insignificant fraction of
-the whole program's time. Usually this means it is not worth optimizing.
-
-One way to get more accuracy is to give your program more (but similar)
-input data so it will take longer. Another way is to combine the data from
-several runs, using the @samp{-s} option of @code{gprof}. Here is how:
-
-@enumerate
-@item
-Run your program once.
-
-@item
-Issue the command @samp{mv gmon.out gmon.sum}.
-
-@item
-Run your program again, the same as before.
-
-@item
-Merge the new data in @file{gmon.out} into @file{gmon.sum} with this command:
-
-@example
-gprof -s @var{executable-file} gmon.out gmon.sum
-@end example
-
-@item
-Repeat the last two steps as often as you wish.
-
-@item
-Analyze the cumulative data using this command:
-
-@example
-gprof @var{executable-file} gmon.sum > @var{output-file}
-@end example
-@end enumerate
-
-@node Assumptions,,Sampling Error,Inaccuracy
-@section Estimating @code{children} Times
-
-Some of the figures in the call graph are estimates---for example, the
-@code{children} time values and all the time figures in caller and
-subroutine lines.
-
-There is no direct information about these measurements in the profile
-data itself. Instead, @code{gprof} estimates them by making an assumption
-about your program that might or might not be true.
-
-The assumption made is that the average time spent in each call to any
-function @code{foo} is not correlated with who called @code{foo}. If
-@code{foo} used 5 seconds in all, and 2/5 of the calls to @code{foo} came
-from @code{a}, then @code{foo} contributes 2 seconds to @code{a}'s
-@code{children} time, by assumption.
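-
-In code terms, the estimate is nothing more than a proportion; the
-following is a hypothetical illustration, not @code{gprof}'s actual
-computation:
-
-@smallexample
-#include <stdio.h>
-
-int
-main (void)
-@{
-  double foo_total = 5.0;      /* seconds spent in foo, in all */
-  double calls_from_a = 2.0;   /* calls to foo made by a */
-  double total_calls = 5.0;    /* all calls to foo */
-
-  /* Time charged to a's children line, by assumption.  */
-  printf ("%g seconds\n", foo_total * calls_from_a / total_calls);
-  return 0;
-@}
-@end smallexample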
-
-This assumption is usually true enough, but for some programs it is far
-from true. Suppose that @code{foo} returns very quickly when its argument
-is zero; suppose that @code{a} always passes zero as an argument, while
-other callers of @code{foo} pass other arguments. In this program, all the
-time spent in @code{foo} is in the calls from callers other than @code{a}.
-But @code{gprof} has no way of knowing this; it will blindly and
-incorrectly charge 2 seconds of time in @code{foo} to the children of
-@code{a}.
-
-@c FIXME - has this been fixed?
-We hope some day to put more complete data into @file{gmon.out}, so that
-this assumption is no longer needed, if we can figure out how. For the
-nonce, the estimated figures are usually more useful than misleading.
-
-@node How do I?
-@chapter Answers to Common Questions
-
-@table @asis
-@item How do I find which lines in my program were executed the most times?
-
-Compile your program with basic-block counting enabled, run it, then
-use the following pipeline:
-
-@example
-gprof -l -C @var{objfile} | sort -k 3 -n -r
-@end example
-
-This listing will show you the lines in your code executed most often,
-but not necessarily those that consumed the most time.
-
-@item How do I find which lines in my program called a particular function?
-
-Use @code{gprof -l} and look up the function in the call graph.
-The callers will be broken down by function and line number.
-
-@item How do I analyze a program that runs for less than a second?
-
-Try using a shell script like this one:
-
-@example
-for i in `seq 1 100`; do
- fastprog
- mv gmon.out gmon.out.$i
-done
-
-gprof -s fastprog gmon.out.*
-
-gprof fastprog gmon.sum
-@end example
-
-If your program is completely deterministic, all the call counts
-will be simple multiples of 100 (i.e. a function called once in
-each run will appear with a call count of 100).
-
-@end table
-
-@node Incompatibilities
-@chapter Incompatibilities with Unix @code{gprof}
-
-@sc{gnu} @code{gprof} and Berkeley Unix @code{gprof} use the same data
-file @file{gmon.out}, and provide essentially the same information. But
-there are a few differences.
-
-@itemize @bullet
-@item
-@sc{gnu} @code{gprof} uses a new, generalized file format with support
-for basic-block execution counts and non-realtime histograms. A magic
-cookie and version number allow @code{gprof} to easily identify
-new-style files. Old BSD-style files can still be read.
-@xref{File Format}.
-
-@item
-For a recursive function, Unix @code{gprof} lists the function as a
-parent and as a child, with a @code{calls} field that lists the number
-of recursive calls. @sc{gnu} @code{gprof} omits these lines and puts
-the number of recursive calls in the primary line.
-
-@item
-When a function is suppressed from the call graph with @samp{-e}, @sc{gnu}
-@code{gprof} still lists it as a subroutine of functions that call it.
-
-@item
-@sc{gnu} @code{gprof} accepts the @samp{-k} option with its argument
-in the form @samp{from/to}, instead of @samp{from to}.
-
-@item
-In the annotated source listing,
-if there are multiple basic blocks on the same line,
-@sc{gnu} @code{gprof} prints all of their counts, separated by commas.
-
-@ignore - it does this now
-@item
-The function names printed in @sc{gnu} @code{gprof} output do not include
-the leading underscores that are added internally to the front of all
-C identifiers on many operating systems.
-@end ignore
-
-@item
-The blurbs, field widths, and output formats are different. @sc{gnu}
-@code{gprof} prints blurbs after the tables, so that you can see the
-tables without skipping the blurbs.
-@end itemize
-
-@node Details
-@chapter Details of Profiling
-
-@menu
-* Implementation::	How a program collects profiling information
-* File Format:: Format of @samp{gmon.out} files
-* Internals:: @code{gprof}'s internal operation
-* Debugging:: Using @code{gprof}'s @samp{-d} option
-@end menu
-
-@node Implementation,File Format,,Details
-@section Implementation of Profiling
-
-Profiling works by changing how every function in your program is compiled
-so that when it is called, it will stash away some information about where
-it was called from. From this, the profiler can figure out what function
-called it, and can count how many times it was called. This change is made
-by the compiler when your program is compiled with the @samp{-pg} option,
-which causes every function to call @code{mcount}
-(or @code{_mcount}, or @code{__mcount}, depending on the OS and compiler)
-as one of its first operations.
-
-The @code{mcount} routine, included in the profiling library,
-is responsible for recording in an in-memory call graph table
-both its caller (the child in the call graph) and its caller's caller
-(the parent). This is typically done by examining the stack frame to
-find both the address of the child, and the return address in the
-original parent.
-Since this is a very machine-dependent operation, @code{mcount}
-itself is typically a short assembly-language stub routine
-that extracts the required
-information, and then calls @code{__mcount_internal}
-(a normal C function) with two arguments - @code{frompc} and @code{selfpc}.
-@code{__mcount_internal} is responsible for maintaining
-the in-memory call graph, which records @code{frompc}, @code{selfpc},
-and the number of times each of these call arcs was traversed.
-
-GCC Version 2 provides a magical function (@code{__builtin_return_address}),
-which allows a generic @code{mcount} function to extract the
-required information from the stack frame. However, on some
-architectures, most notably the SPARC, using this builtin can be
-very computationally expensive, and an assembly language version
-of @code{mcount} is used for performance reasons.
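-
-A rough sketch of such a generic @code{mcount} might look like the
-following, assuming @code{__builtin_return_address} is usable on the
-target; the real routines are system-specific:
-
-@smallexample
-/* Sketch of a generic mcount; not the actual library source.  */
-extern void __mcount_internal (unsigned long frompc, unsigned long selfpc);
-
-void
-mcount (void)
-@{
-  /* Level 0 is the call site inside the instrumented function
-     (selfpc); level 1 is the call site in its caller (frompc).  */
-  unsigned long selfpc = (unsigned long) __builtin_return_address (0);
-  unsigned long frompc = (unsigned long) __builtin_return_address (1);
-
-  __mcount_internal (frompc, selfpc);
-@}
-@end smallexample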
-
-Number-of-calls information for library routines is collected by using a
-special version of the C library. The programs in it are the same as in
-the usual C library, but they were compiled with @samp{-pg}. If you
-link your program with @samp{gcc @dots{} -pg}, it automatically uses the
-profiling version of the library.
-
-Profiling also involves watching your program as it runs, and keeping a
-histogram of where the program counter happens to be every now and then.
-Typically the program counter is looked at around 100 times per second of
-run time, but the exact frequency may vary from system to system.
-
-This is done in one of two ways. Most UNIX-like operating systems
-provide a @code{profil()} system call, which registers a memory
-array with the kernel, along with a scale
-factor that determines how the program's address space maps
-into the array.
-Typical scaling values cause every 2 to 8 bytes of address space
-to map into a single array slot.
-On every tick of the system clock
-(assuming the profiled program is running), the value of the
-program counter is examined and the corresponding slot in
-the memory array is incremented. Since this is done in the kernel,
-which had to interrupt the process anyway to handle the clock
-interrupt, very little additional system overhead is required.
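-
-A startup routine might register its histogram buffer roughly as
-follows; the buffer size, the scale value, and the use of
-@code{_start} are illustrative assumptions, not what any particular
-library does:
-
-@smallexample
-#include <unistd.h>
-
-extern char _start;                     /* start of the text segment */
-static unsigned short hist_buf[65536];  /* hypothetical histogram buffer */
-
-static void
-start_histogram (void)
-@{
-  /* Map the program's text onto hist_buf; see profil(3) for the
-     exact meaning of the fixed-point scale argument.  */
-  profil (hist_buf, sizeof hist_buf, (size_t) &_start, 0x4000);
-@}
-@end smallexample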
-
-However, some operating systems, most notably Linux 2.0 (and earlier),
-do not provide a @code{profil()} system call. On such a system,
-arrangements are made for the kernel to periodically deliver
-a signal to the process (typically via @code{setitimer()}),
-which then performs the same operation of examining the
-program counter and incrementing a slot in the memory array.
-Since this method requires a signal to be delivered to
-user space every time a sample is taken, it incurs considerably
-more overhead than kernel-based profiling. Also, due to the
-added delay required to deliver the signal, this method is
-less accurate as well.
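-
-A rough sketch of this arrangement is shown below; the helper that
-recovers the interrupted program counter is purely hypothetical,
-since that step is inherently platform-specific:
-
-@smallexample
-#include <signal.h>
-#include <string.h>
-#include <sys/time.h>
-
-#define NBINS 4096                          /* hypothetical histogram size */
-static unsigned short hist[NBINS];
-static unsigned long text_low, text_high;   /* set elsewhere at startup */
-
-/* Hypothetical helper; a real one would dig the pc out of the
-   machine-specific ucontext.  */
-extern unsigned long get_interrupted_pc (void *context);
-
-static void
-profile_tick (int sig, siginfo_t *info, void *context)
-@{
-  unsigned long pc = get_interrupted_pc (context);
-
-  if (pc >= text_low && pc < text_high)
-    hist[(pc - text_low) * NBINS / (text_high - text_low)]++;
-@}
-
-static void
-start_sampling (void)
-@{
-  struct sigaction sa;
-  struct itimerval it;
-
-  memset (&sa, 0, sizeof sa);
-  sa.sa_sigaction = profile_tick;
-  sa.sa_flags = SA_SIGINFO | SA_RESTART;
-  sigaction (SIGPROF, &sa, NULL);
-
-  it.it_interval.tv_sec = 0;
-  it.it_interval.tv_usec = 10000;           /* roughly 100 samples/second */
-  it.it_value = it.it_interval;
-  setitimer (ITIMER_PROF, &it, NULL);
-@}
-@end smallexample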
-
-A special startup routine allocates memory for the histogram and
-either calls @code{profil()} or sets up
-a clock signal handler.
-This routine (@code{monstartup}) can be invoked in several ways.
-On Linux systems, a special profiling startup file @code{gcrt0.o},
-which invokes @code{monstartup} before @code{main},
-is used instead of the default @code{crt0.o}.
-Use of this special startup file is one of the effects
-of using @samp{gcc @dots{} -pg} to link.
-On SPARC systems, no special startup files are used.
-Rather, the @code{mcount} routine, when it is invoked for
-the first time (typically when @code{main} is called),
-calls @code{monstartup}.
-
-If the compiler's @samp{-a} option was used, basic-block counting
-is also enabled. Each object file is then compiled with a static array
-of counts, initially zero.
-In the executable code, every time a new basic-block begins
-(i.e. when an @code{if} statement appears), an extra instruction
-is inserted to increment the corresponding count in the array.
-At compile time, a paired array was constructed that recorded
-the starting address of each basic-block. Taken together,
-the two arrays record the starting address of every basic-block,
-along with the number of times it was executed.
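-
-Conceptually, the instrumentation amounts to something like the
-following sketch; the names are hypothetical, and the real arrays and
-increments are generated by the compiler:
-
-@smallexample
-/* Per-object-file data maintained for basic-block counting.  */
-#define NUM_BLOCKS 3                    /* hypothetical */
-
-static unsigned long bb_count[NUM_BLOCKS];   /* run time: starts at zero */
-static void *bb_address[NUM_BLOCKS];         /* compile time: block starts */
-
-/* At the start of basic-block i, the generated code executes the
-   equivalent of:
-
-       bb_count[i]++;
-*/
-@end smallexample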
-
-The profiling library also includes a function (@code{mcleanup}) which is
-typically registered using @code{atexit()} to be called as the
-program exits, and is responsible for writing the file @file{gmon.out}.
-Profiling is turned off, various headers are output, and the histogram
-is written, followed by the call-graph arcs and the basic-block counts.
-
-The output from @code{gprof} gives no indication of parts of your program that
-are limited by I/O or swapping bandwidth. This is because samples of the
-program counter are taken at fixed intervals of the program's run time.
-Therefore, the
-time measurements in @code{gprof} output say nothing about time that your
-program was not running. For example, a part of the program that creates
-so much data that it cannot all fit in physical memory at once may run very
-slowly due to thrashing, but @code{gprof} will say it uses little time. On
-the other hand, sampling by run time has the advantage that the amount of
-load due to other users won't directly affect the output you get.
-
-@node File Format,Internals,Implementation,Details
-@section Profiling Data File Format
-
-The old BSD-derived file format used for profile data does not contain a
-magic cookie that allows one to check whether a data file really is a
-gprof file. Furthermore, it does not provide a version number, thus
-rendering changes to the file format almost impossible. @sc{gnu} @code{gprof}
-uses a new file format that provides these features. For backward
-compatibility, @sc{gnu} @code{gprof} continues to support the old BSD-derived
-format, but not all features are supported with it. For example,
-basic-block execution counts cannot be accommodated by the old file
-format.
-
-The new file format is defined in header file @file{gmon_out.h}. It
-consists of a header containing the magic cookie and a version number,
-as well as some spare bytes available for future extensions. All data
-in a profile data file is in the native format of the host on which
-the profile was collected. @sc{gnu} @code{gprof} adapts automatically to the
-byte-order in use.
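-
-In outline, the header looks roughly like this; @file{gmon_out.h} is
-the authoritative definition:
-
-@smallexample
-struct gmon_hdr
-@{
-  char cookie[4];       /* the magic cookie */
-  char version[4];      /* file format version number */
-  char spare[3 * 4];    /* reserved for future extensions */
-@};
-@end smallexample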
-
-In the new file format, the header is followed by a sequence of
-records. Currently, there are three different record types: histogram
-records, call-graph arc records, and basic-block execution count
-records. Each file can contain any number of each record type. When
-reading a file, @sc{gnu} @code{gprof} will ensure records of the same type are
-compatible with each other and compute the union of all records. For
-example, for basic-block execution counts, the union is simply the sum
-of all execution counts for each basic-block.
-
-@subsection Histogram Records
-
-Histogram records consist of a header that is followed by an array of
-bins. The header contains the text-segment range that the histogram
-spans, the size of the histogram in bytes (unlike in the old BSD
-format, this does not include the size of the header), the rate of the
-profiling clock, and the physical dimension that the bin counts
-represent after being scaled by the profiling clock rate. The
-physical dimension is specified in two parts: a long name of up to 15
-characters and a single character abbreviation. For example, a
-histogram representing real-time would specify the long name as
-"seconds" and the abbreviation as "s". This feature is useful for
-architectures that support performance monitor hardware (which,
-fortunately, is becoming increasingly common). For example, under DEC
-OSF/1, the "uprofile" command can be used to produce a histogram of,
-say, instruction cache misses. In this case, the dimension in the
-histogram header could be set to "i-cache misses" and the abbreviation
-could be set to "1" (because it is simply a count, not a physical
-dimension). Also, the profiling rate would have to be set to 1 in
-this case.
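-
-In outline (again, @file{gmon_out.h} is authoritative), the histogram
-header is roughly:
-
-@smallexample
-struct gmon_hist_hdr
-@{
-  char low_pc[sizeof (char *)];    /* base pc address of sampled region */
-  char high_pc[sizeof (char *)];   /* max pc address of sampled region */
-  char hist_size[4];               /* size of the histogram */
-  char prof_rate[4];               /* rate of the profiling clock */
-  char dimen[15];                  /* physical dimension, e.g. "seconds" */
-  char dimen_abbrev;               /* abbreviation, e.g. 's' */
-@};
-@end smallexample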
-
-Histogram bins are 16-bit numbers and each bin represents an equal
-amount of text-space. For example, if the text-segment is one
-thousand bytes long and if there are ten bins in the histogram, each
-bin represents one hundred bytes.
-
-
-@subsection Call-Graph Records
-
-Call-graph records have a format that is identical to the one used in
-the BSD-derived file format. Each record consists of an arc in the
-call graph and a count indicating the number of times the arc was
-traversed during program execution. Arcs are specified by a pair of
-addresses:
-the first must be within the caller's function and the second must be
-within the callee's function. When performing profiling at the
-function level, these addresses can point anywhere within the
-respective function. However, when profiling at the line-level, it is
-better if the addresses are as close to the call-site/entry-point as
-possible. This will ensure that the line-level call-graph is able to
-identify exactly which line of source code performed calls to a
-function.
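-
-In terms of @file{gmon_out.h}, an arc record is roughly:
-
-@smallexample
-struct gmon_cg_arc_record
-@{
-  char from_pc[sizeof (char *)];   /* address within the caller's body */
-  char self_pc[sizeof (char *)];   /* address within the callee's body */
-  char count[4];                   /* number of times the arc was traversed */
-@};
-@end smallexample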
-
-@subsection Basic-Block Execution Count Records
-
-Basic-block execution count records consist of a header followed by a
-sequence of address/count pairs. The header simply specifies the
-length of the sequence. In an address/count pair, the address
-identifies a basic-block and the count specifies the number of times
-that basic-block was executed. Any address within the basic-block can
-be used.
-
-@node Internals,Debugging,File Format,Details
-@section @code{gprof}'s Internal Operation
-
-Like most programs, @code{gprof} begins by processing its options.
-During this stage, it may build its symspec list
-(@code{sym_ids.c:sym_id_add}), if
-options are specified which use symspecs.
-@code{gprof} maintains a single linked list of symspecs,
-which will eventually get turned into 12 symbol tables,
-organized into six include/exclude pairs - one
-pair each for the flat profile (INCL_FLAT/EXCL_FLAT),
-the call graph arcs (INCL_ARCS/EXCL_ARCS),
-printing in the call graph (INCL_GRAPH/EXCL_GRAPH),
-timing propagation in the call graph (INCL_TIME/EXCL_TIME),
-the annotated source listing (INCL_ANNO/EXCL_ANNO),
-and the execution count listing (INCL_EXEC/EXCL_EXEC).
-
-After option processing, @code{gprof} finishes
-building the symspec list by adding all the symspecs in
-@code{default_excluded_list} to the exclude lists
-EXCL_TIME and EXCL_GRAPH, and if line-by-line profiling is specified,
-EXCL_FLAT as well.
-These default excludes are not added to EXCL_ANNO, EXCL_ARCS, and EXCL_EXEC.
-
-Next, the BFD library is called to open the object file,
-verify that it is an object file,
-and read its symbol table (@code{core.c:core_init}),
-using @code{bfd_canonicalize_symtab} after mallocing
-an appropriately sized array of asymbols. At this point,
-function mappings are read (if the @samp{--file-ordering} option
-has been specified), and the core text space is read into
-memory (if the @samp{-c} option was given).
-
-@code{gprof}'s own symbol table, an array of Sym structures,
-is now built.
-This is done in one of two ways, by one of two routines, depending
-on whether line-by-line profiling (@samp{-l} option) has been
-enabled.
-For normal profiling, the BFD canonical symbol table is scanned.
-For line-by-line profiling, every
-text space address is examined, and a new symbol table entry
-gets created every time the line number changes.
-In either case, two passes are made through the symbol
-table - one to count the size of the symbol table required,
-and the other to actually read the symbols. In between the
-two passes, a single array of type @code{Sym} is created of
-the appropriate length.
-Finally, @code{symtab.c:symtab_finalize}
-is called to sort the symbol table and remove duplicate entries
-(entries with the same memory address).
-
-The symbol table must be a contiguous array for two reasons.
-First, the @code{qsort} library function (which sorts an array)
-will be used to sort the symbol table.
-Also, the symbol lookup routine (@code{symtab.c:sym_lookup}),
-which finds symbols
-based on memory address, uses a binary search algorithm
-which requires the symbol table to be a sorted array.
-Function symbols are indicated with an @code{is_func} flag.
-Line number symbols have no special flags set.
-Additionally, a symbol can have an @code{is_static} flag
-to indicate that it is a local symbol.
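-
-As a rough illustration of the kind of lookup this makes possible
-(this is not the actual @code{sym_lookup} source, and the @code{Sym}
-here is a bare stand-in):
-
-@smallexample
-/* Binary search of a sorted symbol array by memory address.  */
-typedef struct @{ unsigned long addr; const char *name; @} Sym;
-
-static const Sym *
-lookup (const Sym *tab, int count, unsigned long address)
-@{
-  int lo = 0, hi = count - 1;
-
-  while (lo <= hi)
-    @{
-      int mid = (lo + hi) / 2;
-
-      /* Match if the address falls between this symbol and the next.  */
-      if (address >= tab[mid].addr
-          && (mid == count - 1 || address < tab[mid + 1].addr))
-        return &tab[mid];
-
-      if (address < tab[mid].addr)
-        hi = mid - 1;
-      else
-        lo = mid + 1;
-    @}
-
-  return 0;   /* address lies below the first symbol */
-@}
-@end smallexample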
-
-With the symbol table read, the symspecs can now be translated
-into Syms (@code{sym_ids.c:sym_id_parse}). Remember that a single
-symspec can match multiple symbols.
-An array of symbol tables
-(@code{syms}) is created, each entry of which is a symbol table
-of Syms to be included or excluded from a particular listing.
-The master symbol table and the symspecs are examined by nested
-loops, and every symbol that matches a symspec is inserted
-into the appropriate syms table. This is done twice, once to
-count the size of each required symbol table, and again to build
-the tables, which have been malloced between passes.
-From now on, to determine whether a symbol is on an include
-or exclude symspec list, @code{gprof} simply uses its
-standard symbol lookup routine on the appropriate table
-in the @code{syms} array.
-
-Now the profile data file(s) themselves are read
-(@code{gmon_io.c:gmon_out_read}),
-first by checking for a new-style @samp{gmon.out} header,
-then assuming this is an old-style BSD @samp{gmon.out}
-if the magic number test failed.
-
-New-style histogram records are read by @code{hist.c:hist_read_rec}.
-For the first histogram record, a memory array is allocated to hold
-all the bins, which are then read in.
-When multiple profile data files (or files with multiple histogram
-records) are read, the starting address, ending address, number
-of bins and sampling rate must match between the various histograms,
-or a fatal error will result.
-If everything matches, just sum the additional histograms into
-the existing in-memory array.
-
-As each call graph record is read (@code{call_graph.c:cg_read_rec}),
-the parent and child addresses
-are matched to symbol table entries, and a call graph arc is
-created by @code{cg_arcs.c:arc_add}, unless the arc fails a symspec
-check against INCL_ARCS/EXCL_ARCS. As each arc is added,
-a linked list is maintained of the parent's child arcs, and of the child's
-parent arcs.
-Both the child's call count and the arc's call count are
-incremented by the record's call count.
-
-Basic-block records are read (@code{basic_blocks.c:bb_read_rec}),
-but only if line-by-line profiling has been selected.
-Each basic-block address is matched to a corresponding line
-symbol in the symbol table, and an entry made in the symbol's
-bb_addr and bb_calls arrays. Again, if multiple basic-block
-records are present for the same address, the call counts
-are cumulative.
-
-A gmon.sum file is dumped, if requested (@code{gmon_io.c:gmon_out_write}).
-
-If histograms were present in the data files, they are assigned to
-symbols (@code{hist.c:hist_assign_samples}) by iterating over all the
-sample bins. Since the symbol table is sorted in order of ascending
-memory addresses, we can simply follow along in the symbol table as we
-make our pass over the sample bins.
-This step includes a symspec check against INCL_FLAT/EXCL_FLAT.
-Depending on the histogram
-scale factor, a sample bin may span multiple symbols,
-in which case a fraction of the sample count is allocated
-to each symbol, proportional to the degree of overlap.
-This effect is rare for normal profiling, but overlaps
-are more common during line-by-line profiling, and can
-cause each of two adjacent lines to be credited with half
-a hit, for example.
-
-If call graph data is present, @code{cg_arcs.c:cg_assemble} is called.
-First, if @samp{-c} was specified, a machine-dependent
-routine (@code{find_call}) scans through each symbol's machine code,
-looking for subroutine call instructions, and adding them
-to the call graph with a zero call count.
-A topological sort is performed by depth-first numbering
-all the symbols (@code{cg_dfn.c:cg_dfn}), so that
-children are always numbered less than their parents,
-then making an array of pointers into the symbol table and sorting it into
-numerical order, which is reverse topological
-order (children appear before parents).
-Cycles are also detected at this point, all members
-of which are assigned the same topological number.
-Two passes are now made through this sorted array of symbol pointers.
-The first pass, from end to beginning (parents to children),
-computes the fraction of child time to propagate to each parent
-and a print flag.
-The print flag reflects symspec handling of INCL_GRAPH/EXCL_GRAPH,
-with a parent's include or exclude (print or no print) property
-being propagated to its children, unless they themselves explicitly appear
-in INCL_GRAPH or EXCL_GRAPH.
-A second pass, from beginning to end (children to parents), actually
-propagates the timings along the call graph, subject
-to a check against INCL_TIME/EXCL_TIME.
-With the print flag, fractions, and timings now stored in the symbol
-structures, the topological sort array is now discarded, and a
-new array of pointers is assembled, this time sorted by propagated time.
-
-Finally, the various outputs the user requested are printed; this is
-now fairly straightforward. The call graph (@code{cg_print.c:cg_print}) and
-flat profile (@code{hist.c:hist_print}) are regurgitations of values
-already computed. The annotated source listing
-(@code{basic_blocks.c:print_annotated_source}) uses basic-block
-information, if present, to label each line of code with call counts,
-otherwise only the function call counts are presented.
-
-The function ordering code is marginally well documented
-in the source code itself (@code{cg_print.c}). Basically,
-the functions with the most use and the most parents are
-placed first, followed by other functions with the most use,
-followed by lower use functions, followed by unused functions
-at the end.
-
-@node Debugging,,Internals,Details
-@section Debugging @code{gprof}
-
-If @code{gprof} was compiled with debugging enabled,
-the @samp{-d} option triggers debugging output
-(to stdout) which can be helpful in understanding its operation.
-The debugging number specified is interpreted as a sum of the following
-options:
-
-@table @asis
-@item 2 - Topological sort
-Monitor depth-first numbering of symbols during call graph analysis
-@item 4 - Cycles
-Shows symbols as they are identified as cycle heads
-@item 16 - Tallying
-As the call graph arcs are read, show each arc and how
-the total calls to each function are tallied
-@item 32 - Call graph arc sorting
-Details sorting individual parents/children within each call graph entry
-@item 64 - Reading histogram and call graph records
-Shows address ranges of histograms as they are read, and each
-call graph arc
-@item 128 - Symbol table
-Reading, classifying, and sorting the symbol table from the object file.
-For line-by-line profiling (@samp{-l} option), also shows line numbers
-being assigned to memory addresses.
-@item 256 - Static call graph
-Trace operation of @samp{-c} option
-@item 512 - Symbol table and arc table lookups
-Detail operation of lookup routines
-@item 1024 - Call graph propagation
-Shows how function times are propagated along the call graph
-@item 2048 - Basic-blocks
-Shows basic-block records as they are read from profile data
-(only meaningful with @samp{-l} option)
-@item 4096 - Symspecs
-Shows symspec-to-symbol pattern matching operation
-@item 8192 - Annotate source
-Tracks operation of @samp{-A} option
-@end table
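-
-For example, to watch arc tallying (16) together with the reading of
-histogram and call graph records (64), you might use a command along
-these lines, assuming @code{gprof} was built with debugging enabled:
-
-@example
-gprof -d80 @var{executable-file} gmon.out
-@end example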
-
-@contents
-@bye
-
-NEEDS AN INDEX
-
--T - "traditional BSD style": How is it different? Should the
-differences be documented?
-
-example flat file adds up to 100.01%...
-
-note: time estimates now only go out to one decimal place (0.0), where
-they used to extend two (78.67).