Copyright 2002-2013 Ericsson AB. All Rights Reserved. The contents of this file are subject to the Erlang Public License, Version 1.1, (the "License"); you may not use this file except in compliance with the License. You should have received a copy of the Erlang Public License along with this software. If not, it can be retrieved online at http://www.erlang.org/. Software distributed under the License is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License.

Trace Tool Builder
Introduction

The Trace Tool Builder is a base for building trace tools for single node or distributed Erlang systems. It requires the runtime_tools application to be available on the traced node.

The main features of the Trace Tool Builder are:

- Start tracing to file ports on several nodes with one function call.
- Write additional information to a trace information file, which is read during formatting.
- Restore previous configurations by maintaining a history buffer and handling configuration files.
- Some simple support for sequential tracing.
- Format binary trace logs and merge logs from multiple nodes.

The intention of the Trace Tool Builder is to serve as a base for tailor-made trace tools, but you may use it directly from the Erlang shell (it can mimic dbg behaviour while still providing useful additions such as match specification shortcuts). The application only allows the use of the file port tracer, so if you would like to use other types of trace clients you will be better off using dbg directly instead.

Getting Started

The ttb module is the interface to all functions in the Trace Tool Builder. To get started, the least you need to do is start a tracer with ttb:tracer/0/1/2 and set the required trace flags on the processes you want to trace with ttb:p/2. When the tracing is completed, you must stop the tracer with ttb:stop/0/1 and format the trace log with ttb:format/1/2 (as long as there is anything to format, of course).

ttb:tracer/0/1/2 opens a trace port on each node that shall be traced. By default, trace messages are written to binary files on the remote nodes (the binary trace log).

ttb:p/2 specifies which processes shall be traced. Trace flags given in this call specify what to trace on each process. You can call this function several times if you want different trace flags set on different processes.

If you want to trace function calls (i.e. if you have the call trace flag set on any of your processes), you must also set trace patterns on the required function(s) with ttb:tp or ttb:tpl. A function is only traced if it has a trace pattern. The trace pattern specifies how to trace the function by using match specifications. Match specifications are described in the User's Guide for the Erlang runtime system (erts).

ttb:stop/0/1 stops tracing on all nodes, deletes all trace patterns and flushes the trace port buffer.

ttb:format/1/2 translates the binary trace logs into something readable. By default ttb presents each trace message as a line of text, but you can also write your own handler to make more complex interpretations of the trace information. A trace log can even be presented graphically via the Event Tracer application. Note that if you give the format option to ttb:stop/1 the formatting is automatically done when stopping ttb.

Example: Tracing the local node from the erlang shell

This small module is used in the example:

-module(m).
-export([f/0]).
f() ->
    receive
        From when is_pid(From) ->
            Now = erlang:now(),
            From ! {self(),Now}
    end.

The following example shows the basic use of ttb from the Erlang shell. Default options are used both for starting the tracer and for formatting (a custom fetch dir is however provided). This gives a trace log named Node-ttb in the newly-created directory, where Node is the name of the node. The default handler prints the formatted trace messages in the shell.

%% First I spawn a process running my test function
(tiger@durin)47> Pid = spawn(m,f,[]).
<0.125.0>
(tiger@durin)48>
(tiger@durin)48> %% Then I start a tracer...
(tiger@durin)48> ttb:tracer().
{ok,[tiger@durin]}
(tiger@durin)49>
(tiger@durin)49> %% and activate the new process for tracing
(tiger@durin)49> %% function calls and sent messages.
(tiger@durin)49> ttb:p(Pid,[call,send]).
{ok,[{<0.125.0>,[{matched,tiger@durin,1}]}]}
(tiger@durin)50>
(tiger@durin)50> %% Here I set a trace pattern on erlang:now/0
(tiger@durin)50> %% The trace pattern is a simple match spec
(tiger@durin)50> %% indicating that the return value should be
(tiger@durin)50> %% traced. Refer to the reference_manual for
(tiger@durin)50> %% the full list of match spec shortcuts
(tiger@durin)50> %% available.
(tiger@durin)51> ttb:tp(erlang,now,return).
{ok,[{matched,tiger@durin,1},{saved,1}]}
(tiger@durin)52>
(tiger@durin)52> %% I run my test (i.e. send a message to
(tiger@durin)52> %% my new process)
(tiger@durin)52> Pid ! self().
<0.72.0>
(tiger@durin)53>
(tiger@durin)53> %% And then I have to stop ttb in order to flush
(tiger@durin)53> %% the trace port buffer
(tiger@durin)53> ttb:stop([return, {fetch_dir, "fetch"}]).
{stopped, "fetch"}
(tiger@durin)54>
(tiger@durin)54> %% Finally I format my trace log
(tiger@durin)54> ttb:format("fetch").
({<0.125.0>,{m,f,0},tiger@durin}) call erlang:now()
({<0.125.0>,{m,f,0},tiger@durin}) returned from erlang:now/0 ->
  {1031,133451,667611}
({<0.125.0>,{m,f,0},tiger@durin}) <0.72.0> !
  {<0.125.0>,{1031,133451,667611}}
ok
Example: Build your own tool

This small example shows a simple tool for "debug tracing", i.e. tracing of function calls with return values.

-module(mydebug).
-export([start/0,trc/1,stop/0,format/1]).
%% Internal exports
-export([print/4]).
%% Include ms_transform.hrl so that dbg:fun2ms/1 can be used to
%% generate match specifications.
-include_lib("stdlib/include/ms_transform.hrl").

%%% -------------Tool API-------------
%%% Start the "mydebug" tool
start() ->
    %% The options specify that the binary log shall be named
    %% <Node>-debug_log and that the print/4 function in this
    %% module shall be used as format handler
    ttb:tracer(all,[{file,"debug_log"},{handler,{{?MODULE,print},0}}]),
    %% All processes (existing and new) shall trace function calls.
    %% We want trace messages to be sorted upon format, which requires
    %% the timestamp flag. The flag is however enabled by default in ttb.
    ttb:p(all,call).

%%% Set trace pattern on function(s)
trc(M) when is_atom(M) ->
    trc({M,'_','_'});
trc({M,F}) when is_atom(M), is_atom(F) ->
    trc({M,F,'_'});
trc({M,F,_A}=MFA) when is_atom(M), is_atom(F) ->
    %% This match spec shortcut specifies that return values shall
    %% be traced.
    MatchSpec = dbg:fun2ms(fun(_) -> return_trace() end),
    ttb:tpl(MFA,MatchSpec).

%%% Format a binary trace log
format(Dir) ->
    ttb:format(Dir).

%%% Stop the "mydebug" tool
stop() ->
    ttb:stop(return).

%%% --------Internal functions--------
%%% Format handler
print(_Out,end_of_trace,_TI,N) ->
    N;
print(Out,Trace,_TI,N) ->
    do_print(Out,Trace,N),
    N+1.

do_print(Out,{trace_ts,P,call,{M,F,A},Ts},N) ->
    io:format(Out,
              "~w: ~w, ~w:~n"
              "Call      : ~w:~w/~w~n"
              "Arguments :~p~n~n",
              [N,Ts,P,M,F,length(A),A]);
do_print(Out,{trace_ts,P,return_from,{M,F,A},R,Ts},N) ->
    io:format(Out,
              "~w: ~w, ~w:~n"
              "Return from  : ~w:~w/~w~n"
              "Return value :~p~n~n",
              [N,Ts,P,M,F,A,R]).

To distinguish trace logs produced with this tool from other logs, the file option is used in tracer/2. The logs will therefore be fetched to a directory named ttb_upload_debug_log-YYYYMMDD-HHMMSS.

By using the handler option when starting the tracer, the information about how to format the file is stored in the trace information file (.ti). This is not necessary, as the handler might be given at the time of formatting instead. It can however be useful if you, for example, want to automatically format your trace logs by using the format option in ttb:stop/1. It also means that you do not need any knowledge of the content of a binary log to be able to format it the way it was intended. If the handler option is given both when starting the tracer and when formatting, the one given when formatting is used.
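For example, a handler supplied at format time overrides one stored at trace start (a sketch; the directory name follows the default Node-ttb naming mentioned earlier, and the handler fun below is only illustrative):

```erlang
%% Override any stored handler with an inline one at format time.
%% A format handler is {Fun/4, InitialState}.
ttb:format("tiger@durin-ttb",
           [{handler, {fun(Fd, Trace, _TI, State) ->
                               io:format(Fd, "~p~n", [Trace]),
                               State
                       end, initial_state}}]).
```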

The call trace flag is set on all processes. This means that any function activated with the trc/1 command is traced on all existing and new processes.
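A session with this tool could look as follows (a sketch, assuming the code above is compiled as a module named mydebug, the name used in its comments; lists:seq/2 is just an arbitrary function to trace):

```erlang
%% Hypothetical session with the "mydebug" tool sketched above
mydebug:start(),                 %% start the tracer, trace calls on all processes
mydebug:trc({lists, seq, 2}),    %% trace lists:seq/2 with return values
lists:seq(1, 10),                %% trigger some trace messages
{stopped, Dir} = mydebug:stop(), %% the return option makes stop return the fetch dir
mydebug:format(Dir).             %% format the fetched logs
```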

Running the Trace Tool Builder against a remote node

The Observer application might not always be available on the node that shall be traced (in the following called the "traced node"). It is still possible to run the Trace Tool Builder from another node (in the following called the "trace control node") as long as

- the Observer application is available on the trace control node, and
- the Runtime Tools application is available on both the trace control node and the traced node.

If the Trace Tool Builder shall be used against a remote node, it is highly recommended to start the trace control node as hidden. This way it can connect to the traced node without the traced node "seeing" it, i.e. if the nodes() BIF is called on the traced node, the trace control node will not be visible. To start a hidden node, add the -hidden option to the erl command, e.g.

% erl -sname trace_control -hidden
Diskless node

If the traced node is diskless, ttb must be started from a trace control node with disk access, and the file option must be given to the tracer/2 function with the value {local, File}, e.g.

(trace_control@durin)1> ttb:tracer(mynode@diskless,{file,{local,{wrap,"mytrace"}}}).
{ok,[mynode@diskless]}
Additional tracing options

When setting up a trace, several features may be turned on:

- time-constrained tracing
- overload protection
- autoresume
Time-constrained tracing

Sometimes it may be helpful to enable tracing for a given period of time (e.g. to monitor a system for 24 hours or for half a second). This may be done by providing the additional {timer, TimerSpec} option. If TimerSpec has the form MSec, the trace is stopped after MSec milliseconds using ttb:stop/0. If additional options are provided (TimerSpec = {MSec, Opts}), ttb:stop/1 is called instead with Opts as the argument. The timer is started with ttb:p/2, so any trace patterns should be set up before that call. ttb:start_trace/4 always sets up all patterns before invoking ttb:p/2. Note that due to network and processing delays the period of tracing is approximate. The example below shows how to set up a trace which will be automatically stopped and formatted after 5 seconds:

(tiger@durin)1> ttb:start_trace([node()],
                                [{erlang, now, []}],
                                {all, call},
                                [{timer, {5000, format}}]).

When tracing live systems, special care must always be taken not to overload a node with too heavy tracing. ttb provides the overload option to help address this problem.

{overload, MSec, Module, Function} instructs the ttb backend (called observer_backend, part of the runtime_tools application) to perform an overload check every MSec milliseconds. If the check (namely Module:Function(check)) returns true, tracing is disabled on that node.

Overload protection activated on one node does not affect other nodes, where tracing continues as normal. ttb:stop/0/1 fetches data from all clients, including everything that was collected before overload protection was activated. Note that changing trace details (with ttb:p and ttb:tp/tpl...) once overload protection has been activated on one of the traced nodes is not permitted, in order to keep the trace setup consistent between nodes.

Module:Function provided with the overload option must handle three calls: init, check and stop. init and stop allow performing any setup and teardown required by the check. An overload check module could look like this (note that check is always called by the same process, so put and get are possible):

-module(overload).
-export([check/1]).

check(init) ->
    Pid = sophisticated_module:start(),
    put(pid, Pid);
check(check) ->
    get(pid) ! is_overloaded,
    receive
        Reply -> Reply
    after 5000 ->
        true
    end;
check(stop) ->
    get(pid) ! stop.
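With such a module available on the traced node, the tracer could then be started with the overload option (a sketch following the {overload, MSec, Module, Function} form described above; the node name and interval are hypothetical):

```erlang
%% Run overload:check(check) every 15 seconds on the traced node;
%% tracing on that node is disabled if the check returns true.
ttb:tracer(mynode@host, [{overload, 15000, overload, check}]).
```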
Autoresume

It is possible that a node (probably a buggy one, hence the tracing) crashes. In order to automatically resume tracing on the node as soon as it comes back up, the resume option has to be used. When it is, the failing node tries to reconnect to the trace control node as soon as runtime_tools is started. This implies that runtime_tools must be included in the traced node's startup chain (if it is not, tracing can still be resumed by starting runtime_tools manually, i.e. by an RPC call).

In order not to lose the data that the failing node stored up to the point of the crash, the control node tries to fetch it before restarting the trace. This must happen within the allowed time frame, or the fetch is aborted (the default is 10 seconds, but it can be customized with {resume, MSec}). The data fetched this way is then merged with all other traces.
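For example (a sketch; the node name is hypothetical):

```erlang
%% Enable autoresume with the default 10 s fetch time frame
ttb:tracer(mynode@host, [resume]).
%% ...or allow up to 60 s for fetching data from the restarted node
ttb:tracer(mynode@host, [{resume, 60000}]).
```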

The autoresume feature requires additional data to be stored on the traced nodes. By default, the data is stored automatically in a file called "ttb_autostart.bin" in the traced node's current working directory. Users may change this behaviour (e.g. on diskless nodes) by specifying their own module to handle autostart data storage and retrieval (via the ttb_autostart_module environment variable of runtime_tools). See the ttb reference manual for the module's API. This example shows the default handler:

-module(ttb_autostart).
-export([read_config/0,
         write_config/1,
         delete_config/0]).

-define(AUTOSTART_FILENAME, "ttb_autostart.bin").

delete_config() ->
    file:delete(?AUTOSTART_FILENAME).

read_config() ->
    case file:read_file(?AUTOSTART_FILENAME) of
        {ok, Data} -> {ok, binary_to_term(Data)};
        Error      -> Error
    end.

write_config(Data) ->
    file:write_file(?AUTOSTART_FILENAME, term_to_binary(Data)).

Remember that file trace ports buffer the data by default. If the node crashes, trace messages are not flushed to the binary log. If the chance of failure is high, it might be a good idea to flush the buffers automatically every now and then. Passing {flush, MSec} as one of the ttb:tracer/2 options flushes all buffers every MSec milliseconds.
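For example, to flush every two seconds (a sketch; the node name is hypothetical):

```erlang
%% Flush all file trace port buffers every 2000 ms
ttb:tracer(mynode@host, [{flush, 2000}]).
```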

dbg mode

The {shell, ShellType} option makes ttb operate similarly to dbg. Using {shell, true} displays all trace messages in the shell before storing them. {shell, only} additionally disables message storage (so that the tool behaves exactly like dbg). This is allowed only with ip trace ports ({file, {local, File}}).
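For example (a sketch; the file name is arbitrary):

```erlang
%% Display trace messages in the shell and also store them in "mytrace"
ttb:tracer(node(), [{file, {local, "mytrace"}}, {shell, true}]).
%% Display only, without storing (pure dbg-like behaviour)
ttb:tracer(node(), [{file, {local, "mytrace"}}, {shell, only}]).
```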

The command ttb:tracer(dbg) is a shortcut for the pure-dbg mode ({shell, only}).

Trace Information and the .ti File

In addition to the trace log file(s), a file with the extension .ti is created when the Trace Tool Builder is started. This is the trace information file. It is a binary file, and it contains process information, the trace flags used, the name of the node to which it belongs, and all information written with the write_trace_info/2 function. .ti files are always fetched with the other logs when the trace is stopped.

Except for the process information, everything in the trace information file is passed on to the handler function when formatting. The TI parameter is a list of {Key,ValueList} tuples. The keys flags, handler, file and node are used for information written directly by ttb.

You can add information to the trace information file by calling write_trace_info/2. Note that ValueList will always be a list, and if you call write_trace_info/2 several times with the same Key, the ValueList is extended with a new value each time. Example:

ttb:write_trace_info(mykey,1) gives the entry {mykey,[1]} in TI. Another call, ttb:write_trace_info(mykey,2), changes this entry to {mykey,[1,2]}.

Wrap Logs

If you want to limit the size of the trace logs, you can use wrap logs. This works almost like a circular buffer. You can specify the maximum number of binary logs and the maximum size of each log. ttb will create a new binary log each time a log reaches the maximum size. When the maximum number of logs is reached, the oldest log is deleted before a new one is created.
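Assuming ttb accepts the same wrap specification as dbg:trace_port/2, a wrap log limited to eight files of 50000 bytes each could be requested like this (a sketch; the sizes are arbitrary):

```erlang
%% At most 8 wrap logs of at most 50000 bytes each
ttb:tracer(node(), {file, {wrap, "trace", 50000, 8}}).
```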

Note that the overall size of data generated by ttb may be greater than the wrap specification suggests: if a traced node restarts and autoresume is enabled, the old wrap log is always stored and a new one is created.

Wrap logs can be formatted one by one or all at once. See Formatting.

Formatting

Formatting can be done automatically when stopping ttb (see Automatically collect and format logs from all nodes), or explicitly by calling the ttb:format/1/2 function.

Formatting means to read a binary log and present it in a readable format. You can use the default format handler in ttb to present each trace message as a line of text, or write your own handler to make more complex interpretations of the trace information. You can even use the Event Tracer et to present the trace log graphically (see Presenting trace logs with Event Tracer).

The first argument to ttb:format/1/2 specifies which binary log(s) to format. This is usually the name of a directory that ttb created during log fetch. Unless the disable_sort option is provided, the logs from different files are always sorted according to the timestamps in the traces.

The second argument to ttb:format/2 is a list of options. The out option specifies the destination where the formatted text shall be written. The default destination is standard_io, but a filename can also be given. The handler option specifies the format handler to use. If this option is not given, the handler option given when starting the tracer is used. If the handler option was not given when starting the tracer either, a default handler is used, which prints each trace message as a line of text. The disable_sort option indicates that the logs should not be merged according to timestamp, but processed one file after another (this might be a bit faster).
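For example (a sketch; the directory and file names are hypothetical):

```erlang
%% Write the formatted text to a file and skip timestamp merging
ttb:format("fetch_dir", [{out, "formatted.txt"}, disable_sort]).
```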

A format handler is a fun taking four arguments. This fun will be called for each trace message in the binary log(s). A simple example which only prints each trace message could be like this:

fun(Fd, Trace, _TraceInfo, State) ->
    io:format(Fd, "Trace: ~p~n", [Trace]),
    State
end.

Fd is the file descriptor for the destination file, or the atom standard_io. _TraceInfo contains information from the trace information file (see Trace Information and the .ti File). State is a state variable for the format handler fun. The initial value of the State variable is given with the handler option, e.g.

ttb:format("tiger@durin-ttb", [{handler, {{Mod,Fun}, initial_state}}])

Another format handler could be used to calculate time spent by the garbage collector:

fun(_Fd,{trace_ts,P,gc_start,_Info,StartTs},_TraceInfo,State) ->
       [{P,StartTs}|State];
   (Fd,{trace_ts,P,gc_end,_Info,EndTs},_TraceInfo,State) ->
       {value,{P,StartTs}} = lists:keysearch(P,1,State),
       Time = diff(StartTs,EndTs),
       io:format(Fd,"GC in process ~w: ~w milliseconds~n",[P,Time]),
       State -- [{P,StartTs}]
end
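The diff/2 helper is not defined in the example. Assuming the timestamps are erlang:now/0-style {MegaSecs,Secs,MicroSecs} tuples, a minimal sketch could be:

```erlang
%% Difference between two timestamps in milliseconds;
%% timer:now_diff/2 returns the difference in microseconds.
diff(StartTs, EndTs) ->
    timer:now_diff(EndTs, StartTs) div 1000.
```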

A more refined version of this format handler is the function handle_gc/4 in the module multitrace.erl which can be found in the src directory of the Observer application.

The actual trace message is passed as the second argument (Trace). The possible values of Trace are:

- all trace messages described in the erlang:trace/3 documentation,
- {drop, N} if the ip tracer is used (see dbg:trace_port/2),
- end_of_trace, received once when all trace messages have been processed.

By giving the format handler ttb:get_et_handler(), you can have the trace log presented graphically with et_viewer in the Event Tracer application (see Presenting trace logs with Event Tracer).

You may always decide not to format the whole trace data contained in the fetch directory, but to analyze single files instead. In order to do so, a single file (or a list of files) has to be passed as the first argument to format/1/2.

Wrap logs can be formatted one by one or all in one go. To format one of the wrap logs in a set, give the exact name of the file. To format the whole set of wrap logs, give the name with '*' instead of the wrap count. An example:

Start tracing:

(tiger@durin)1> ttb:tracer(node(),{file,{wrap,"trace"}}).
{ok,[tiger@durin]}
(tiger@durin)2> ttb:p(...)
...

This will give a set of binary logs, like:

tiger@durin-trace.0.wrp
tiger@durin-trace.1.wrp
tiger@durin-trace.2.wrp
...

Format the whole set of logs:

1> ttb:format("tiger@durin-trace.*.wrp").
....
ok
2>

Format only the first log:

1> ttb:format("tiger@durin-trace.0.wrp").
....
ok
2>

To merge all wrap logs from two nodes:

1> ttb:format(["tiger@durin-trace.*.wrp","lion@durin-trace.*.wrp"]).
....
ok
2>
Presenting trace logs with Event Tracer

For detailed information about the Event Tracer, please turn to the User's Guide and Reference Manuals for the et application.

By giving the format handler ttb:get_et_handler(), you can have the trace log presented graphically with et_viewer in the Event Tracer application. ttb provides a few different filters which can be selected from the Filter menu in the et_viewer window. The filters are named according to the type of actors they present (i.e. what each vertical line in the sequence diagram represents). Interaction between actors is shown as red arrows between two vertical lines, and activities within an actor are shown as blue text to the right of the actor's line.

The processes filter is the only filter which will show all trace messages from a trace log. Each vertical line in the sequence diagram represents a process. Erlang messages, spawn and link/unlink are typical interactions between processes. Function calls, scheduling and garbage collection are typical activities within a process. processes is the default filter.

The rest of the filters will only show function calls and function returns. All other trace messages are discarded. To get the most out of these filters, et_viewer needs to know the caller of each function and the time of return. This can be obtained by using both the call and return_to flags when tracing. Note that the return_to flag only works with local call trace, i.e. when trace patterns are set with ttb:tpl.

The same result can be obtained by using the call flag only and setting a match specification like this on local or global function calls:

1> dbg:fun2ms(fun(_) -> return_trace(),message(caller()) end). [{'_',[],[{return_trace},{message,{caller}}]}]

This should however be done with care, since the {return_trace} function in the match specification will destroy tail recursiveness.

The modules filter shows each module as a vertical line in the sequence diagram. External function calls/returns are shown as interactions between modules and internal function calls/returns are shown as activities within a module.

The functions filter shows each function as a vertical line in the sequence diagram. A function calling itself is shown as an activity within a function, and all other function calls are shown as interactions between functions.

The mods_and_procs and funcs_and_procs filters are equivalent to the modules and functions filters respectively, except that each module or function can have several vertical lines, one for each process it resides on.

In the next example, modules foo and bar are used:

-module(foo).
-export([start/0,go/0]).

start() ->
    spawn(?MODULE, go, []).

go() ->
    receive
        stop -> ok;
        go ->
            bar:f1(),
            go()
    end.

-module(bar).
-export([f1/0,f3/0]).

f1() ->
    f2(),
    ok.
f2() ->
    spawn(?MODULE,f3,[]).
f3() ->
    ok.

Now let's set up the trace.

(tiger@durin)1> %% First we retrieve the Pid to limit the set of traced processes
(tiger@durin)1> Pid = foo:start().
(tiger@durin)2> %% Now we set up tracing
(tiger@durin)2> ttb:tracer().
(tiger@durin)3> ttb:p(Pid, [call, return_to, procs, set_on_spawn]).
(tiger@durin)4> ttb:tpl(bar, []).
(tiger@durin)5> %% Invoke our test function and see the output with et_viewer
(tiger@durin)5> Pid ! go.
(tiger@durin)6> ttb:stop({format, {handler, ttb:get_et_handler()}}).

This should render a result similar to the following:

[Figure: et_viewer with filter "processes"]
[Figure: et_viewer with filter "mods_and_procs"]

Note that we can use the ttb:start_trace/4 function to achieve the same result:

(tiger@durin)1> Pid = foo:start().
(tiger@durin)2> ttb:start_trace([node()],
                                [{bar,[]}],
                                {Pid, [call, return_to, procs, set_on_spawn]},
                                [{handler, ttb:get_et_handler()}]).
(tiger@durin)3> Pid ! go.
(tiger@durin)4> ttb:stop(format).
Automatically collect and format logs from all nodes

By default ttb:stop/1 fetches trace logs and trace information files from all nodes. The logs are stored in a new directory named ttb_upload-Filename-Timestamp under the working directory of the trace control node. Fetching may be disabled by providing the nofetch option to ttb:stop/1. The user can specify a fetch directory of choice by passing the {fetch_dir, Dir} option.
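For example (a sketch; the directory name is hypothetical):

```erlang
%% Fetch logs into "mylogs" instead of an auto-named directory
ttb:stop([{fetch_dir, "mylogs"}]).
%% Or stop without fetching anything
ttb:stop([nofetch]).
```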

If the option format is given to ttb:stop/1, the trace logs are automatically formatted after tracing is stopped.

History and Configuration Files

For the tracing functionality, dbg could be used instead of ttb for setting trace flags on processes and trace patterns for call trace, i.e. the functions p, tp, tpl, ctp, ctpl and ctpg. ttb adds only two things to these functions: all calls are stored in the history buffer, and they can be recalled and stored in a configuration file. This makes it easy to set up the same trace environment, e.g. if you want to compare two test runs. It also reduces the amount of typing when using ttb from the Erlang shell; shortcuts are provided for the most common match specifications (in order not to force the user to use dbg:fun2ms continually).

Use list_history/0 to see the content of the history buffer, and run_history/1 to re-execute one of the entries.

The main purpose of the history buffer is the possibility to create configuration files. Any function stored in the history buffer can be written to a configuration file and used for creating a specific configuration at any time with one single function call.

A configuration file is created or extended with write_config/2/3. Configuration files are binary files and can therefore only be read and written with functions provided by ttb.

You can write the complete content of the history buffer to a config file by calling ttb:write_config(ConfigFile,all). And you can write selected entries from the history by calling ttb:write_config(ConfigFile,NumList), where NumList is a list of integers pointing out the history entries to write. Moreover, the history buffer is always dumped to ttb_last_config when ttb:stop/0/1 is called.

User defined entries can also be written to a config file by calling the function ttb:write_config(ConfigFile,ConfigList) where ConfigList is a list of {Module,Function,Args}.

Any existing file ConfigFile is deleted and a new file is created when write_config/2 is called. The option append can be used if you wish to add something at the end of an existing config file, e.g. ttb:write_config(ConfigFile,What,[append]).

Example: History and configuration files

See the content of the history buffer

(tiger@durin)191> ttb:tracer().
{ok,[tiger@durin]}
(tiger@durin)192> ttb:p(self(),[garbage_collection,call]).
{ok,{[<0.1244.0>],[garbage_collection,call]}}
(tiger@durin)193> ttb:tp(ets,new,2,[]).
{ok,[{matched,1}]}
(tiger@durin)194> ttb:list_history().
[{1,{ttb,tracer,[tiger@durin,[]]}},
 {2,{ttb,p,[<0.1244.0>,[garbage_collection,call]]}},
 {3,{ttb,tp,[ets,new,2,[]]}}]

Execute an entry from the history buffer:

(tiger@durin)195> ttb:ctp(ets,new,2).
{ok,[{matched,1}]}
(tiger@durin)196> ttb:list_history().
[{1,{ttb,tracer,[tiger@durin,[]]}},
 {2,{ttb,p,[<0.1244.0>,[garbage_collection,call]]}},
 {3,{ttb,tp,[ets,new,2,[]]}},
 {4,{ttb,ctp,[ets,new,2]}}]
(tiger@durin)197> ttb:run_history(3).
ttb:tp(ets,new,2,[]) ->
{ok,[{matched,1}]}

Write the content of the history buffer to a configuration file:

(tiger@durin)198> ttb:write_config("myconfig",all).
ok
(tiger@durin)199> ttb:list_config("myconfig").
[{1,{ttb,tracer,[tiger@durin,[]]}},
 {2,{ttb,p,[<0.1244.0>,[garbage_collection,call]]}},
 {3,{ttb,tp,[ets,new,2,[]]}},
 {4,{ttb,ctp,[ets,new,2]}},
 {5,{ttb,tp,[ets,new,2,[]]}}]

Extend an existing configuration:

(tiger@durin)200> ttb:write_config("myconfig",[{ttb,tp,[ets,delete,1,[]]}],
                                   [append]).
ok
(tiger@durin)201> ttb:list_config("myconfig").
[{1,{ttb,tracer,[tiger@durin,[]]}},
 {2,{ttb,p,[<0.1244.0>,[garbage_collection,call]]}},
 {3,{ttb,tp,[ets,new,2,[]]}},
 {4,{ttb,ctp,[ets,new,2]}},
 {5,{ttb,tp,[ets,new,2,[]]}},
 {6,{ttb,tp,[ets,delete,1,[]]}}]

Go back to a previous configuration after stopping Trace Tool Builder:

(tiger@durin)202> ttb:stop().
ok
(tiger@durin)203> ttb:run_config("myconfig").
ttb:tracer(tiger@durin,[]) ->
{ok,[tiger@durin]}
ttb:p(<0.1244.0>,[garbage_collection,call]) ->
{ok,{[<0.1244.0>],[garbage_collection,call]}}
ttb:tp(ets,new,2,[]) ->
{ok,[{matched,1}]}
ttb:ctp(ets,new,2) ->
{ok,[{matched,1}]}
ttb:tp(ets,new,2,[]) ->
{ok,[{matched,1}]}
ttb:tp(ets,delete,1,[]) ->
{ok,[{matched,1}]}
ok

Write selected entries from the history buffer to a configuration file:

(tiger@durin)204> ttb:list_history().
[{1,{ttb,tracer,[tiger@durin,[]]}},
 {2,{ttb,p,[<0.1244.0>,[garbage_collection,call]]}},
 {3,{ttb,tp,[ets,new,2,[]]}},
 {4,{ttb,ctp,[ets,new,2]}},
 {5,{ttb,tp,[ets,new,2,[]]}},
 {6,{ttb,tp,[ets,delete,1,[]]}}]
(tiger@durin)205> ttb:write_config("myconfig",[1,2,3,6]).
ok
(tiger@durin)206> ttb:list_config("myconfig").
[{1,{ttb,tracer,[tiger@durin,[]]}},
 {2,{ttb,p,[<0.1244.0>,[garbage_collection,call]]}},
 {3,{ttb,tp,[ets,new,2,[]]}},
 {4,{ttb,tp,[ets,delete,1,[]]}}]
(tiger@durin)207>
Sequential Tracing

To learn what sequential tracing is and how it can be used, please turn to the reference manual for the seq_trace module in the kernel application.

The support for sequential tracing provided by the Trace Tool Builder includes:

- Initiation of the system tracer. This is automatically done when a trace port is started with ttb:tracer/0/1/2.
- Creation of match specifications which activate sequential tracing.

Starting sequential tracing requires that a tracer has been started with the ttb:tracer/0/1/2 function. Sequential tracing can then either be started via a trigger function with a match specification created with ttb:seq_trigger_ms/0/1, or directly by using the seq_trace module in the kernel application.

Example: Sequential tracing

In the following example, the function dbg:get_tracer/0 is used as trigger for sequential tracing:

(tiger@durin)110> ttb:tracer().
{ok,[tiger@durin]}
(tiger@durin)111> ttb:p(self(),call).
{ok,{[<0.158.0>],[call]}}
(tiger@durin)112> ttb:tp(dbg,get_tracer,0,ttb:seq_trigger_ms(send)).
{ok,[{matched,1},{saved,1}]}
(tiger@durin)113> dbg:get_tracer(), seq_trace:reset_trace().
true
(tiger@durin)114> ttb:stop(format).
({<0.158.0>,{shell,evaluator,3},tiger@durin}) call dbg:get_tracer()
SeqTrace [0]: ({<0.158.0>,{shell,evaluator,3},tiger@durin})
{<0.237.0>,dbg,tiger@durin} ! {<0.158.0>,{get_tracer,tiger@durin}}
[Serial: {0,1}]
SeqTrace [0]: ({<0.237.0>,dbg,tiger@durin})
{<0.158.0>,{shell,evaluator,3},tiger@durin} ! {dbg,{ok,#Port<0.222>}}
[Serial: {1,2}]
ok
(tiger@durin)116>

Starting sequential tracing with a trigger is actually more useful if the trigger function is not called directly from the shell, but rather implicitly within a larger system. When calling a function from the shell, it is simpler to start sequential tracing directly, e.g.

(tiger@durin)116> ttb:tracer().
{ok,[tiger@durin]}
(tiger@durin)117> seq_trace:set_token(send,true),
dbg:get_tracer(), seq_trace:reset_trace().
true
(tiger@durin)118> ttb:stop(format).
SeqTrace [0]: ({<0.158.0>,{shell,evaluator,3},tiger@durin})
{<0.246.0>,dbg,tiger@durin} ! {<0.158.0>,{get_tracer,tiger@durin}}
[Serial: {0,1}]
SeqTrace [0]: ({<0.246.0>,dbg,tiger@durin})
{<0.158.0>,{shell,evaluator,3},tiger@durin} ! {dbg,{ok,#Port<0.229>}}
[Serial: {1,2}]
ok
(tiger@durin)120>

In both examples above, seq_trace:reset_trace/0 resets the trace token immediately after the traced function in order to avoid a lot of trace messages due to the printouts in the Erlang shell.

All functions in the seq_trace module, except set_system_tracer/1, can be used after the trace port has been started with ttb:tracer/0/1/2.

Example: Multipurpose trace tool

The module multitrace.erl which can be found in the src directory of the Observer application implements a small tool with three possible trace settings. The trace messages are written to binary files which can be formatted with the function multitrace:format/1/2.

multitrace:debug(What)
    Start call trace on all processes and trace the given function(s). The format handler used is multitrace:handle_debug/4, which prints each call and return. What must be an item or a list of items to trace, given in the format {Module,Function,Arity}, {Module,Function} or just Module.

multitrace:gc(Procs)
    Trace garbage collection on the given process(es). The format handler used is multitrace:handle_gc/4, which prints start and stop and the time spent for each GC.

multitrace:schedule(Procs)
    Trace in- and out-scheduling on the given process(es). The format handler used is multitrace:handle_schedule/4, which prints each in- and out-scheduling event with process, timestamp and current function. It also prints the total time each traced process was scheduled in.