authorSaya Sugiura <ssugiura@jp.adit-jv.com>2019-06-26 17:54:42 +0900
committerSaya Sugiura <ssugiura@jp.adit-jv.com>2019-06-27 11:02:58 +0900
commitd073bf322cf2f8129c9bce24a3473bf7ddc2b09c (patch)
tree2eb856bd342e1bacc53fbe3f2ba67a3dfe40ea1d
parentf3a018d961af63ca1974e8f2197e441348d2e5a0 (diff)
downloadDLT-daemon-d073bf322cf2f8129c9bce24a3473bf7ddc2b09c.tar.gz
doc: Improve markdown documents
This includes improvement of markdown formatting as well as changes in
description itself.

Signed-off-by: Saya Sugiura <ssugiura@jp.adit-jv.com>
-rw-r--r--  doc/dlt_cdh.md                     41
-rw-r--r--  doc/dlt_extended_network_trace.md  22
-rw-r--r--  doc/dlt_filetransfer.md            84
-rw-r--r--  doc/dlt_kpi.md                     72
-rw-r--r--  doc/dlt_multinode.md               68
-rw-r--r--  doc/dlt_offline_logstorage.md      88
6 files changed, 242 insertions, 133 deletions
diff --git a/doc/dlt_cdh.md b/doc/dlt_cdh.md
index 4433829..235cd9f 100644
--- a/doc/dlt_cdh.md
+++ b/doc/dlt_cdh.md
@@ -4,7 +4,12 @@ Back to [README.md](../README.md)
## Overview
-When a program crash occurs on the system the Core Dump Handler is triggered to extract relevant information from the core dump generated by the system. The handler stores this extracted information in the ECU's file system as Core Dump Handler Files. These files are transported via the link:dlt_filetransfer.html[DLT Filetransfer] mechanism. The transferred information can be combined and integrated into the developer toolchain (gdb, Release SW, etc.).
+When a program crash occurs on the system, the Core Dump Handler is triggered to
+extract relevant information from the core dump generated by the system. It
+stores this extracted information in the ECU's file system as Core Dump Handler
+Files. These files are transported via the [DLT Filetransfer](dlt_filetransfer.md)
+mechanism. The transferred information can be combined and integrated into the
+developer toolchain (gdb, Release SW, etc.).
![alt text](images/dlt_core_dump_handler.png "DLT CDH")
@@ -16,7 +21,8 @@ Add
`-DWITH_DLT_COREDUMPHANDLER=ON -DTARGET_CPU_NAME={i686|x86_64}`
-options to cmake. The core dump handler code currently supports the i686 and x86_64 architecture.
+options to cmake. The core dump handler code currently supports the i686 and
+x86\_64 architectures.
### Temporary activation as replacement for default crash handler until next reboot
@@ -24,7 +30,9 @@ As *root* (not sudo) execute the following:
`echo "|/usr/local/bin/dlt-cdh %t %p %s %e" > /proc/sys/kernel/core_pattern`
-NOTE: replace */usr/local/bin* with the path dlt-cdh has been installed to. This instructs the kernel to pipe a core dump as standard input to dlt-cdh together with the following parameters:
+NOTE: replace */usr/local/bin* with the path dlt-cdh has been installed to. This
+instructs the kernel to pipe a core dump as standard input to dlt-cdh together
+with the following parameters:
- %t time of dump
- %p PID of dumped process
@@ -35,11 +43,12 @@ See
`man core`
-for details
+for details.
### Persistent activation as replacement for default crash handler
-In */usr/lib/sysctl.d/* the file *50-coredump.conf* has to be created which is done automatically by
+In */usr/lib/sysctl.d/* the file *50-coredump.conf* has to be created, which is
+done automatically by
`make install`
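For reference, a sysctl fragment of this kind presumably carries the same
core\_pattern line as the temporary activation above (illustrative content,
check the file actually installed):

```
# /usr/lib/sysctl.d/50-coredump.conf (illustrative, not verified against the install)
kernel.core_pattern=|/usr/local/bin/dlt-cdh %t %p %s %e
```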
@@ -47,13 +56,15 @@ Unfortunately - at least on Fedora systems - abrt has to be removed with
`yum remove abrtd*`
-because it ruthlessly overwrites our change at every boot. The core dump handler can be activated then without reboot by running
+because it ruthlessly overwrites our change at every boot. The core dump handler
+can then be activated without a reboot by running
`sysctl -p /usr/lib/sysctl.d/50-coredump.conf`
-### Configuration of link:dlt_filetransfer.html[DLT Filetransfer] for usage with dlt-cdh
+### Configuration of [DLT Filetransfer](dlt_filetransfer.md) for usage with dlt-cdh
-Make sure the following is set in the "Filetransfer Manager" section of */etc/dlt-system.conf*:
+Make sure the following is set in the "Filetransfer Manager" section of
+*/etc/dlt-system.conf*:
```
...
@@ -65,21 +76,23 @@ FiletransferDirectory = /var/core
### Generation of core dump
-When a crash happens the kernel invokes dlt-cdh and passes it the core dump as standard input. dlt-cdh does the following tasks:
+When a crash happens the kernel invokes dlt-cdh and passes it the core dump as
+standard input. dlt-cdh does the following tasks:
- check if enough disk space available
- create target directories if not existing:
- /var/core
- - /var/core_tmp
- - /tmp/.core_locks
-- clean /var/core_tmp
+ - /var/core\_tmp
+ - /tmp/.core\_locks
+- clean /var/core\_tmp
- retrieve context data mainly from /proc fs of the crashed process to a temporary context file in text format
- initialise core dump
- read ELF headers and notes to temporary core dump output file
- move context file and core dump to /var/core
- create id which identifies the crash
-After the files have been moved to /var/core the [File Transfer](dlt_filetransfer.md) mechanism ensures that they are sent to connected clients.
+After the files have been moved to /var/core the [DLT Filetransfer](dlt_filetransfer.md)
+mechanism ensures that they are sent to connected clients.
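The first task in the list above, the disk space check, could be sketched like
this in C; the threshold and error handling are assumptions for illustration,
not dlt-cdh's actual implementation:

```c
#include <sys/statvfs.h>

/* Minimum free space assumed for this sketch; dlt-cdh's real limit differs. */
#define MIN_FREE_BYTES (64ULL * 1024 * 1024)

/* Returns 1 if the file system holding `dir` has enough free space for
 * core dump files, 0 otherwise (including when `dir` does not exist). */
int enough_disk_space(const char *dir)
{
    struct statvfs vfs;
    if (statvfs(dir, &vfs) != 0)
        return 0;
    unsigned long long free_bytes =
        (unsigned long long)vfs.f_bavail * vfs.f_frsize;
    return free_bytes >= MIN_FREE_BYTES;
}
```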
## AUTHOR
@@ -87,4 +100,4 @@ Lutz Helwing <Lutz_Helwing (at) mentor (dot) com>
## COPYRIGHT
-Copyright (C) 2011 - 2015 BMW AG. License MPL-2.0: Mozilla Public License version 2.0 <http://mozilla.org/MPL/2.0/>. \ No newline at end of file
+Copyright (C) 2011 - 2015 BMW AG. License MPL-2.0: Mozilla Public License version 2.0 <http://mozilla.org/MPL/2.0/>.
diff --git a/doc/dlt_extended_network_trace.md b/doc/dlt_extended_network_trace.md
index 8bc7ea0..bb2f09b 100644
--- a/doc/dlt_extended_network_trace.md
+++ b/doc/dlt_extended_network_trace.md
@@ -4,23 +4,31 @@ Back to [README.md](../README.md)
## Introduction
-The extended network trace allows the user to send or truncate network trace messages that are larger than the normal maximum size of a DLT message.
+The extended network trace allows the user to send or truncate network trace
+messages that are larger than the normal maximum size of a DLT message.
## Protocol
-When truncation of messages is allowed, the truncated messages will be wrapped into a special message which indicates that a network trace message was truncated and what was the original size of the message.
+When truncation of messages is allowed, the truncated messages will be wrapped
+into a special message which indicates that a network trace message was
+truncated and what the original size of the message was.
-Segmented messages are sent in multiple packages. The package stream is prepended with a a start message indicating which contain a unique handle for this stream, size of data to follow, count of segments to follow and segment size.
+Segmented messages are sent in multiple packages. The package stream is
+prepended with a start message which contains a unique handle for this stream,
+the size of the data to follow, the count of segments to follow and the segment
+size.
-Each segment contains the stream handle, segment sequence number, the data and data length.
+Each segment contains the stream handle, segment sequence number, the data and
+data length.
-Finally after sending all the data segments, one more packet is sent to indicate the end of the stream.
+Finally, after sending all the data segments, one more packet is sent to
+indicate the end of the stream.
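The start message fields described above can be sketched as a plain C struct;
the field names, widths and segment-size handling are illustrative assumptions,
not the actual DLT wire format:

```c
#include <stdint.h>

/* Sketch of the start message: a unique handle for the stream, the size of
 * the data to follow, the count of segments to follow and the segment size. */
typedef struct {
    uint32_t handle;        /* unique handle for this stream */
    uint32_t data_size;     /* size of data to follow */
    uint16_t segment_count; /* count of segments to follow */
    uint16_t segment_size;  /* size of each segment */
} NwTraceStart;

/* Build a start message for a payload of `data_size` bytes; the segment
 * count is the number of segments needed to cover the whole payload. */
NwTraceStart make_start(uint32_t handle, uint32_t data_size,
                        uint16_t segment_size)
{
    NwTraceStart s;
    s.handle = handle;
    s.data_size = data_size;
    s.segment_size = segment_size;
    s.segment_count =
        (uint16_t)((data_size + segment_size - 1) / segment_size);
    return s;
}
```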
## Truncated package
Truncated message can be sent using the following function:
-` int dlt_user_trace_network_truncated(DltContext *handle, DltNetworkTraceType nw_trace_type, uint16_t header_len, void *header, uint16_t payload_len, void *payload, int allow_truncate) `
+` int dlt_user_trace_network_truncated(DltContext *handle, DltNetworkTraceType nw_trace_type, uint16_t header_len, void *header, uint16_t payload_len, void *payload, int allow_truncate) `
This will send a packet in the following format:
@@ -88,4 +96,4 @@ Lassi Marttala <Lassi.LM.Marttala (at) partner (dot) bmw (dot) de>
## COPYRIGHT
-Copyright (C) 2011 - 2015 BMW AG. License MPL-2.0: Mozilla Public License version 2.0 <http://mozilla.org/MPL/2.0/>. \ No newline at end of file
+Copyright (C) 2011 - 2015 BMW AG. License MPL-2.0: Mozilla Public License version 2.0 <http://mozilla.org/MPL/2.0/>.
diff --git a/doc/dlt_filetransfer.md b/doc/dlt_filetransfer.md
index c6c0e40..52593b4 100644
--- a/doc/dlt_filetransfer.md
+++ b/doc/dlt_filetransfer.md
@@ -4,21 +4,27 @@ Back to [README.md](../README.md)
## Overview
-DLT is a reusable open source software component for standardized logging and tracing in infotainment ECUs based on the AUTOSAR 4.0 standard.
+DLT is a reusable open source software component for standardized logging and
+tracing in infotainment ECUs based on the AUTOSAR 4.0 standard.
-The goal of DLT is the consolidation of the existing variety of logging and tracing protocols on one format.
+The goal of DLT is the consolidation of the existing variety of logging and
+tracing protocols on one format.
## Introduction to DLT Filetransfer
-With DLT Filetransfer it is possible store the binary data of a file to the automotive dlt log.
+With DLT Filetransfer it is possible to store the binary data of a file to the
+automotive dlt log.
-The file will be read in binary mode and put as several chunks to a DLT_INFO log. With a special plugin of the dlt viewer, you can extract the embedded files from the trace and save them.
+The file will be read in binary mode and put as several chunks to a DLT\_INFO
+log. With a special plugin of the dlt viewer, you can extract the embedded files
+from the trace and save them.
It can be used for smaller files, e.g. HMI screenshots or little coredumps.
## Protocol
-The file transfer is at least one single transaction. This transaction consist of three main types of packages:
+The file transfer is at least one single transaction. This transaction consists
+of three main types of packages:
- header package
- one or more data packages
@@ -45,7 +51,8 @@ FLST | Package flag
## Data Package
-After the header package was sent, at least one or more data packages can be send using:
+After the header package was sent, at least one or more data packages can be
+sent using:
` int dlt_user_log_file_data(DltContext *fileContext,const char *filename,int packageToTransfer, int timeout) `
@@ -61,7 +68,8 @@ FLDA | Package flag
## End Package
-After all data packages were sent, the end package must be sent to indicate that the filetransfer is over using:
+After all data packages were sent, the end package must be sent to indicate that
+the filetransfer is over using:
` int dlt_user_log_file_end(DltContext *fileContext,const char *filename,int deleteFlag) `
@@ -75,7 +83,8 @@ FLFI | Package flag
## File information
-The library offers the user the possibility to log informations about a file using the following method without transferring the file itself using:
+The library offers the user the possibility to log information about a file
+without transferring the file itself, using:
` dlt_user_log_file_infoAbout(DltContext *fileContext, const char *filename) `
@@ -116,7 +125,7 @@ FLIF | Package flag
#define ERROR_PACKAGE_COUNT -800
```
-If an error happens during file transfer, the library will execute the mehtod:
+If an error happens during file transfer, the library will execute the method:
` void dlt_user_log_file_errorMessage(DltContext *fileContext, const char *filename, int errorCode) `
@@ -134,7 +143,8 @@ file creation date | Creation date of the file
number of packages | Counted packages which will be transferred in the data packages
FLER | Package flag
-If the file doesn't exist, the conent of the error package is a little bit different:
+If the file doesn't exist, the content of the error package is a little bit
+different:
Value | Description
:--- | :---
@@ -155,7 +165,7 @@ There are two ways to use the filetransfer
Call
-- dlt_user_log_file_complete
+- dlt\_user\_log\_file\_complete
The method needs the following arguments:
@@ -164,33 +174,39 @@ The method needs the following arguments:
- deleteFlag -> Flag if the file will be deleted after transfer. 1->delete, 0->notDelete
- timeout -> Deprecated.
-The order of the packages is to send at first the header, then one or more data packages (depends on the filesize) and in the end the end package.
-The advantage of this method is, that you must not handle the package ordering by your own.
+The order of the packages is to send at first the header, then one or more data
+packages (depending on the file size) and at the end the end package. The
+advantage of this method is that you do not have to handle the package ordering
+on your own.
-Within dlt_user_log_file_complete the free space of the user buffer will be checked. If the free space of the user buffer < 50% then the
-actual package won't be transferred and a timeout will be executed.
+Within dlt\_user\_log\_file\_complete the free space of the user buffer will be
+checked. If less than 50% of the user buffer is free, the current package
+won't be transferred and a timeout will be executed.
-If the daemon crashes and the user buffer is full -> the automatic method is in an endless loop.
+If the daemon crashes and the user buffer is full, the automatic method ends up
+in an endless loop.
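The 50% guard described above can be sketched as follows; the names and the
buffer accounting are illustrative, the real check lives inside
dlt\_user\_log\_file\_complete:

```c
/* A package is only handed over while at least half of the user buffer is
 * free; otherwise the transfer waits (timeout) before retrying. */
int buffer_allows_send(unsigned int buffer_size, unsigned int used_bytes)
{
    unsigned int free_bytes = buffer_size - used_bytes;
    return 2u * free_bytes >= buffer_size; /* free space >= 50% */
}
```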
### Manual
Manual starting filetransfer with the following commands:
-- dlt_user_log_file_head | Transfers only the header of the file
-- dlt_user_log_file_data | Transfers only one single package of a file
-- dlt_user_log_file_end | Tranfers only the end of the file
+- dlt\_user\_log\_file\_head | Transfers only the header of the file
+- dlt\_user\_log\_file\_data | Transfers only one single package of a file
+- dlt\_user\_log\_file\_end | Transfers only the end of the file
-This ordering is very important, so that you can save the transferred files to hard disk on client side with a dlt viewer plugin.
-The advantage of using several steps to transfer files by your own is, that you are very flexible to integrate the filetransfer
-in your code.
+This ordering is very important, so that you can save the transferred files to
+hard disk on the client side with a dlt viewer plugin. The advantage of
+transferring files in several steps on your own is that you are very flexible
+in integrating the filetransfer in your code.
-An other difference to the automatic method is, that only a timeout will be done. There is no check of the user buffer.
+Another difference to the automatic method is that only a timeout will be
+applied. There is no check of the user buffer.
## Important for integration
-You should care about blocking the main program when you intergrate filetransfer in your code.
-Maybe it's useful to extract the filetransfer in an extra thread.
-Another point is the filesize. The bigger the file is, the longer takes it to log the file to dlt.
+Take care not to block the main program when you integrate filetransfer in your
+code. It may be useful to run the filetransfer in a separate thread. Another
+point is the file size: the bigger the file is, the longer it takes to log the
+file to dlt.
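As a rough feel for why bigger files take longer: the number of DLT messages
grows linearly with the file size. A sketch with an assumed chunk size (the
real value depends on the DLT user buffer configuration):

```c
/* Rough package count for one file transfer: one header package,
 * ceil(file_size / chunk_size) data packages, and one end package. */
unsigned long filetransfer_package_count(unsigned long file_size,
                                         unsigned long chunk_size)
{
    unsigned long data_packages =
        (file_size + chunk_size - 1) / chunk_size;
    return 1 + data_packages + 1;
}
```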
## Example dlt filetransfer
@@ -212,17 +228,21 @@ Options:
## Testing dlt filetransfer
-When you call "sudo make install", some automatic tests will be installed. Start the test using the following command from bash:
+When you call "sudo make install", some automatic tests will be installed. Start
+the test using the following command from bash:
` dlt-test-filetransfer `
-It's important that the dlt-filetransfer example files are installed in /usr/share/dlt-filetransfer which will be done automatically by using "sudo make install".
+It's important that the dlt-filetransfer example files are installed in
+/usr/share/dlt-filetransfer which will be done automatically by using
+"sudo make install". If not, use -t and -i options to specify the path to a text
+file and an image file.
-- testFile1Run1: Test the file transfer with the condition that the transferred file is smaller as the file transfer buffer using dlt_user_log_file_complete.
+- testFile1Run1: Test the file transfer with the condition that the transferred file is smaller than the file transfer buffer using dlt\_user\_log\_file\_complete.
- testFile1Run2: Test the file transfer with the condition that the transferred file is smaller as the file transfer buffer using single package transfer
-- testFile2Run1: Test the file transfer with the condition that the transferred file is bigger as the file transfer buffer using dlt_user_log_file_complete.
+- testFile2Run1: Test the file transfer with the condition that the transferred file is bigger than the file transfer buffer using dlt\_user\_log\_file\_complete.
- testFile2Run2: Test the file transfer with the condition that the transferred file is bigger as the file transfer buffer using single package transfer
-- testFile3Run1: Test the file transfer with the condition that the transferred file does not exist using dlt_user_log_file_complete.
+- testFile3Run1: Test the file transfer with the condition that the transferred file does not exist using dlt\_user\_log\_file\_complete.
- testFile3Run2: Test the file transfer with the condition that the transferred file does not exist using single package transfer
- testFile3Run3: Test which logs some information about the file.
@@ -232,4 +252,4 @@ Christian Muck <Christian (dot) Muck (at) bmw (dot) de>
## COPYRIGHT
-Copyright (C) 2012 - 2015 BMW AG. License MPL-2.0: Mozilla Public License version 2.0 <http://mozilla.org/MPL/2.0/>. \ No newline at end of file
+Copyright (C) 2012 - 2015 BMW AG. License MPL-2.0: Mozilla Public License version 2.0 <http://mozilla.org/MPL/2.0/>.
diff --git a/doc/dlt_kpi.md b/doc/dlt_kpi.md
index 2531bb3..69b4c46 100644
--- a/doc/dlt_kpi.md
+++ b/doc/dlt_kpi.md
@@ -4,22 +4,36 @@ Back to [README.md](../README.md)
## Overview
-*DLT KPI* is a tool to provide log messages about **K**ey **P**erformance **I**ndicators to the DLT Daemon. The log message format is designed to be both readable by humans and to be parsed by DLT Viewer plugins.
-The information source for the dlt-kpi tool is the /proc file system.
+*DLT KPI* is a tool to provide log messages about **K**ey **P**erformance **I**ndicators
+to the DLT Daemon. The log message format is designed to be both readable by
+humans and to be parsed by DLT Viewer plugins. The information source for the
+dlt-kpi tool is the /proc file system.
## Message format
-*DLT KPI* logs its messages as human readable ASCII messages, divided in multiple arguments. The tool will log messages in user defined intervals, which can be set in the configuration file dlt-kpi.conf.
+*DLT KPI* logs its messages as human readable ASCII messages, divided in
+multiple arguments. The tool will log messages in user defined intervals, which
+can be set in the configuration file dlt-kpi.conf.
-## Identifiers and their datasets:
+## Identifiers and their datasets
-The logged messages always start with a three character long identifier as first argument. After this identifier, they can contain multiple datasets separated in the remaining arguments. The datasets contain information separated by semicolons. The order and meaning of those information chunks is defined below.
+The logged messages always start with a three character long identifier as first
+argument. After this identifier, they can contain multiple datasets separated in
+the remaining arguments. The datasets contain information separated by
+semicolons. The order and meaning of those information chunks is defined below.
-The following will explain the meaning to each three-character-identifier and each information chunk of the datasets associated with this identifier. The example messages all contain only one dataset - in real use, many messages will contain multiple datasets (one per argument).
+The following explains the meaning of each three-character identifier and each
+information chunk of the datasets associated with this identifier. The example
+messages all contain only one dataset - in real use, many messages will contain
+multiple datasets (one per argument).
-*NOTE:* Arguments are delimited by spaces when shown in ASCII, but dlt-viewer plugins can easily access each argument separately by certain methods, which makes arguments useful for parsing.
+*NOTE*: Arguments are delimited by spaces when shown in ASCII, but dlt-viewer
+plugins can easily access each argument separately by certain methods, which
+makes arguments useful for parsing.
-*NEW*: This identifies a message that contains datasets describing newly created processes.
+### NEW
+This identifies a message that contains datasets describing newly created
+processes.
The datasets in these messages have the following form:
@@ -29,7 +43,9 @@ Example message:
`NEW 21226;1;/usr/libexec/nm-dispatcher`
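A viewer-side plugin could split such a dataset at the semicolons, e.g. as
below; the field names are assumptions based on the example message, not taken
from the dlt-kpi sources:

```c
#include <stdio.h>

/* Illustrative parse result for one NEW dataset like
 * "21226;1;/usr/libexec/nm-dispatcher": a PID, a second numeric field and
 * the command line, separated by semicolons. */
typedef struct {
    long pid;
    long second_field;
    char cmdline[256];
} NewDataset;

/* Returns 1 when all three chunks could be parsed, 0 otherwise. */
int parse_new_dataset(const char *dataset, NewDataset *out)
{
    return sscanf(dataset, "%ld;%ld;%255[^;]",
                  &out->pid, &out->second_field, out->cmdline) == 3;
}
```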
-*STP*: This identifies a message that contains datasets describing processes that have ended since the last interval.
+### STP
+This identifies a message that contains datasets describing processes
+that have ended since the last interval.
The datasets in these messages have the following form:
@@ -39,7 +55,10 @@ Example message:
`STP 20541`
-*ACT*: This identifies a message that contains datasets describing active processes. These are processes that have consumed CPU time since the last interval.
+### ACT
+This identifies a message that contains datasets describing active
+processes. These are processes that have consumed CPU time since the last
+interval.
The datasets in these messages have the following form:
@@ -49,9 +68,13 @@ Example message:
`ACT 20503;10;389;3;1886649;0`
-NOTE: The *CPU time* value is the active time of a process in milliseconds, divided by the number of CPU cores. So this value should never get greater than 1000ms, which would mean 100% CPU usage.
+*NOTE:* The *CPU time* value is the active time of a process in milliseconds,
+divided by the number of CPU cores. So this value should never get greater than
+1000ms, which would mean 100% CPU usage.
-*CHK*: This identifies a message that is logged for each process in a certain interval. These messages can be used to get a list of currently existing processes and to keep a plugin, that tracks running processes, up to date if messages were lost or if the commandlines have changed.
+### CHK
+This identifies a message that is logged for each process in a certain
+interval. These messages can be used to get a list of currently existing
+processes and to keep a plugin that tracks running processes up to date if
+messages were lost or if the commandlines have changed.
The datasets in these messages have the following form:
@@ -61,7 +84,8 @@ Example message:
`CHK 660;/sbin/audispd`
-*IRQ*: This identifies a message that contains datasets describing the numbers of interrupts that occurred on each CPU.
+### IRQ
+This identifies a message that contains datasets describing the numbers of
+interrupts that occurred on each CPU.
The datasets in these messages have the following form:
@@ -71,9 +95,19 @@ Example message:
`IRQ 0;cpu0:133;cpu1:0; 1;cpu0:76827;cpu1:0;`
-Synchronization messages:
+## Synchronization messages
-Because the messages can get too long for logging and segmented network messages don't allow for individually set arguments, the datasets can be splitted into multiple messages of the same type (i.e. they have the same identifier). This can make it difficult for an observer (human or machine) to keep track of currently valid information. For example, one can't be sure if a process is part of the list of currently active processes or not, or if this message was part of an older interval that simply arrived too late. So, to correctly associate these messages to each other, each group of potentially "segmented" messages is surrounded by two synchronization messages which start with the same identifier, followed by the codes _BEG_ (for the opening sync message) or _END_ (for the closing sync message). Synchronization messages do not contain datasets.
+Because the messages can get too long for logging and segmented network messages
+don't allow for individually set arguments, the datasets can be split into
+multiple messages of the same type (i.e. they have the same identifier). This
+can make it difficult for an observer (human or machine) to keep track of
+currently valid information. For example, one can't be sure if a process is
+part of the list of currently active processes or not, or if this message was
+part of an older interval that simply arrived too late. So, to correctly
+associate these messages to each other, each group of potentially "segmented"
+messages is surrounded by two synchronization messages which start with the same
+identifier, followed by the codes _BEG_ (for the opening sync message) or _END_
+(for the closing sync message). Synchronization messages do not contain datasets.
Example (Messages have been shortened for simplicity):
@@ -84,9 +118,13 @@ ACT 1635;10;10696;8412557;375710810;0 990;10;22027;1176631;0;0
ACT END
```
-Only processes that are part of this group are active at this moment. *ACT* messages that came before this message-group are invalid now.
+Only processes that are part of this group are active at this moment. *ACT*
+messages that came before this message-group are invalid now.
-It can also happen that, between a *BEG* and an *END* sync message, there are messages of other types. So, plugins should not expect these message groups to always be a "solid block", but react on each message individually and dynamically, and store the logged information until the closing *END* message arrives.
+It can also happen that, between a *BEG* and an *END* sync message, there are
+messages of other types. So, plugins should not expect these message groups to
+always be a "solid block", but react on each message individually and
+dynamically, and store the logged information until the closing *END* message
+arrives.
## AUTHOR
diff --git a/doc/dlt_multinode.md b/doc/dlt_multinode.md
index 1a970e7..ff9c85c 100644
--- a/doc/dlt_multinode.md
+++ b/doc/dlt_multinode.md
@@ -4,23 +4,32 @@ Back to [README.md](../README.md)
## Overview
-MultiNode allows to connect DLT Daemons running on different operating systems, e.g. in a virtualized environment.
-The central component is the Gateway DLT Daemon which connects external DLT Clients, like the DLT Viewer running on a host computer with Passive DLT Daemons running on nodes without a physical connection to external DLT clients.
-All communication between passive nodes and DLT Viewer has to be send via the Gateway node. The Gateway node forwards log messages coming from passive nodes to all connected DLT clients.
-The Gateway DLT Daemon also forwards command and control requests coming from DLT clients to the corresponding passive node.
+MultiNode allows connecting DLT Daemons running on different operating systems,
+e.g. in a virtualized environment. The central component is the Gateway DLT
+Daemon which connects external DLT Clients, like the DLT Viewer running on a
+host computer, with Passive DLT Daemons running on nodes without a physical
+connection to external DLT clients. All communication between passive nodes and
+DLT Viewer has to be sent via the Gateway node. The Gateway node forwards log
+messages coming from passive nodes to all connected DLT clients. The Gateway DLT
+Daemon also forwards command and control requests coming from DLT clients to the
+corresponding passive node.
![alt text](images/dlt-multinode.png "DLT MultiNode")
## Precondition
-The dlt.conf configuration file which is read by each DLT Daemon on start-up contains an entry to specify the ECU identifier (node identifier).
-It has to be ensured, that **each DLT Daemon in the System has a unique ECU** identifier specified.
-The ECU identifier is included in every DLT Message and is used to distinguish if a DLT message has to be forwarded to a passive node or handled by the Gateway DLT Daemon itself.
+The dlt.conf configuration file which is read by each DLT Daemon on start-up
+contains an entry to specify the ECU identifier (node identifier). It has to be
+ensured that **each DLT Daemon in the system has a unique ECU identifier**
+specified. The ECU identifier is included in every DLT Message and is used to
+distinguish if a DLT message has to be forwarded to a passive node or handled by
+the Gateway DLT Daemon itself.
## Configuration
-The dlt.conf configuration file provides an option to enable the Gateway functionality of a DLT Daemon.
-The default setting is 0 (Off), which means the Gateway functionality is not available.
+The dlt.conf configuration file provides an option to enable the Gateway
+functionality of a DLT Daemon. The default setting is 0 (Off), which means the
+Gateway functionality is not available.
```
# Enable Gateway mode (Default: 0)
@@ -29,8 +38,8 @@ GatewayMode = 1
### Gateway Configuration File
-The MultiNode configuration file has to be loaded by the Gateway DLT Daemon during startup.
-
+The MultiNode configuration file has to be loaded by the Gateway DLT Daemon
+during startup.
```
[PassiveNode1]
@@ -51,22 +60,27 @@ SendControl=0x03, 0x13
SendSerialHeader=1
```
-The configuration file is written in an INI file format and contains information about different connected passive nodes.
-Each passive node’s connection parameters are specified in a unique numbered separate section ([PassiveNode{1,2, …N}]).
-Because TCP is the only supported communication channel, the IPaddress and Port of the Passive 682 DLT Daemon has to be specified.
-
-With the Connect property it is possible to specify when the Gateway DLT Daemon shall connect to the passive node.
-The following values are allowed:
- - OnStartup
- The Gateway DLT Daemon tries to connect to the Passive DLT Daemon immediately after the Gateway DLT Daemon is started.
- - OnDemand
- The Gateway DLT Daemon tries to connect to the Passive DLT Daemon when it receives a connection request.
-
-The Timeout property specifies the time after which the Gateway DLT Daemon stops connecting attempts to a Passive DLT Daemon.
-If the connection is not established within the specified time, the Gateway DLT Daemon gives up connecting attempts and writes an error messages to its internal log.
-The following control messages are supported to be send to a passive node automatically after connection is established:
- - 0x03: Get Log Info
- - 0x13: Get Software Version
+The configuration file is written in an INI file format and contains information
+about different connected passive nodes. Each passive node’s connection
+parameters are specified in a unique numbered separate section
+([PassiveNode{1,2, …N}]). Because TCP is the only supported communication
+channel, the IPaddress and Port of the Passive DLT Daemon have to be specified.
+
+With the Connect property it is possible to specify when the Gateway DLT Daemon
+shall connect to the passive node. The following values are allowed:
+- OnStartup - The Gateway DLT Daemon tries to connect to the Passive DLT Daemon
+ immediately after the Gateway DLT Daemon is started.
+- OnDemand - The Gateway DLT Daemon tries to connect to the Passive DLT Daemon
+ when it receives a connection request.
+
+The Timeout property specifies the time after which the Gateway DLT Daemon stops
+connecting attempts to a Passive DLT Daemon. If the connection is not
+established within the specified time, the Gateway DLT Daemon gives up
+connecting attempts and writes an error message to its internal log. The
+following control messages are supported to be sent to a passive node
+automatically after connection is established:
+- 0x03: Get Log Info
+- 0x13: Get Software Version
## Using DLT MultiNode
diff --git a/doc/dlt_offline_logstorage.md b/doc/dlt_offline_logstorage.md
index d028862..e21a574 100644
--- a/doc/dlt_offline_logstorage.md
+++ b/doc/dlt_offline_logstorage.md
@@ -4,8 +4,9 @@ Back to [README.md](../README.md)
## Introduction to DLT Offline Logstorage
-Logstorage is a mechanism to store DLT logs on the target system or an external device (e.g. USB stick) connected to the target.
-It can be seen as an improvement of the Offline Trace functionality which is already part of DLT.
+Logstorage is a mechanism to store DLT logs on the target system or an external
+device (e.g. USB stick) connected to the target. It can be seen as an
+improvement of the Offline Trace functionality which is already part of DLT.
Logstorage provides the following features:
- Store logs in sets of log files defined by configuration files
@@ -28,7 +29,8 @@ Logstorage provides the following features:
### General Configuration
-General configuration is done inside dlt.conf. The following configuration options exist:
+General configuration is done inside dlt.conf. The following configuration
+options exist:
```
##############################################################################
@@ -57,7 +59,9 @@ General configuration is done inside dlt.conf. The following configuration optio
### Configuration file format
+For the DLT daemon to store logs, a configuration file named
+“dlt\_logstorage.conf” must be present on the external or internal storage
+device (= given path in the file system).
```
[Filter<unique number>]        # filter configuration name
@@ -72,7 +76,8 @@ EcuID=<ECUid> # Specify ECU identifier
SpecificSize=<spec size in bytes> # Store logs in storage devices after specific size is reached.
```
+The parameters "SyncBehavior", "EcuID" and "SpecificSize" are optional; all
+others are mandatory.
A configuration file might look like:
@@ -110,31 +115,30 @@ EcuID=ECU1
## Usage DLT Offline Logstorage
-Enable OfflineLogstorage by setting ```OfflineLogstorageMaxDevices = 1``` in dlt.conf.
-Be aware that the performance of DLT may drop if multiple Logstorage devices are used; the performance depends on the write speed of the used device, too.
+Enable OfflineLogstorage by setting ```OfflineLogstorageMaxDevices = 1``` in
+dlt.conf. Be aware that the performance of DLT may drop if multiple Logstorage
+devices are used; the performance also depends on the write speed of the
+devices in use.
Create the device folder:
-```
-mkdir -p /var/dltlogs
-```
+```mkdir -p /var/dltlogs```
-Create a configuration file and store it on into that folder or mount an external device containing a configuration file.
+Create a configuration file and store it in that folder, or mount an
+external device containing a configuration file.
-Start the DLT Daemon. This is not necessary if the DLT Daemon was started already with Offline Logstorage enabled.
+Start the DLT Daemon. This is not necessary if the DLT Daemon was already
+started with Offline Logstorage enabled.
Trigger DLT Daemon to use the new logstorage device:
-
```dlt-logstorage-ctrl -c 1 -p /var/dltlogs```
-
-Afterwards, logs that match the filter configuration are stored onto the Logstorage device.
-
+Afterwards, logs that match the filter configuration are stored onto the
+Logstorage device.
```dlt-logstorage-ctrl -c 0 -p /var/dltlogs```
-
The configured logstorage device is disconnected from the DLT Daemon.
@@ -151,6 +155,7 @@ Options:
-e Set ECU ID (Default: ECU1)
-h Usage
-p Mount point path
+ -s Sync Logstorage cache
-t Specify connection timeout (Default: 10s)
-v Set verbose flag (Default:0)
```
@@ -168,16 +173,18 @@ The following procedure can be used to test Offline Logstorage:
```$ mkdir -p /var/dltlog```
-- Create the configuration file "dlt_logstorage.conf" in this folder
+- Create the configuration file "dlt\_logstorage.conf" in this folder
and define filter configuration(s):
- ```$printf "[FILTER1]
+ ```
+ [FILTER1]
LogAppName=LOG
ContextName=TEST
LogLevel=DLT_LOG_WARN
File=example
FileSize=50000
- NOFiles=5" > /tmp/dltlogs/dltlogsdev1/dlt_logstorage.conf```
+ NOFiles=5
+ ```
- Trigger dlt-daemon to use a new device
@@ -188,28 +195,37 @@ The following procedure can be used to test Offline Logstorage:
```$ dlt-example-user Hello123```
- After execution, a log file is created in /var/dltlogs
- e.g. example_001_20150512_133344.dlt
+ e.g. example\_001\_20150512\_133344.dlt
- To check the content of the file open it with dlt-convert or DLT Viewer.
## Logstorage Ring Buffer Implementation
-The DLT Logstorage is mainly used to store a configurable set of logs on an external mass storage device attached to the target.
-In this scenario, writing each incoming log message directly onto the external storage device is appreciate, because the storage device might be un-mounted/suddenly removed at any time.
-Writing each log message immediately avoids the problem of losing too many messages because the file system sync could not be finished before the device has been removed physically from the target.
-On the other hand the DLT Logstorage feature might be used as well to store a configurable set of logs on any internal, nonvolatile memory (e.g. FLASH storage device).
-Due to the reason of limited write cycles of a FLASH device the number of write cycles has to be reduced as much as possible.
-But the drawback of losing log messages in case of an unexpected operating system crash has to be taking into account as well.
-The obvious idea is to cache incoming log messages in memory and write the log data to disk based on a certain strategy.
-Incoming log messages are stored in a data cache with a specific size. Depending on user defined strategy, the data cache is written onto the storage device、without relying on the sync mechanism of the file system.
+The DLT Logstorage is mainly used to store a configurable set of logs on an
+external mass storage device attached to the target. In this scenario, writing
+each incoming log message directly onto the external storage device is
+appropriate, because the storage device might be unmounted or suddenly removed
+at any time. Writing each log message immediately avoids losing too many
+messages because the file system sync could not finish before the device was
+physically removed from the target. On the other hand, the DLT Logstorage
+feature might also be used to store a configurable set of logs on internal,
+nonvolatile memory (e.g. a FLASH storage device). Because a FLASH device
+supports only a limited number of write cycles, the number of writes has to be
+reduced as much as possible. However, the drawback of losing log messages in
+case of an unexpected operating system crash has to be taken into account as
+well. The obvious approach is to cache incoming log messages in memory and
+write the log data to disk based on a certain strategy. Incoming log messages
+are stored in a data cache of a specific size. Depending on the user-defined
+strategy, the data cache is written onto the storage device without relying on
+the sync mechanism of the file system.
The following strategies are implemented:
-- ON_MSG - sync every message(Default)
-- ON_DAEMON_EXIT - sync on daemon exit
-- ON_DEMAND - sync on demand
-- ON_FILE_SIZE - sync on file size reached
-- ON_SPECIFIC_SIZE - sync after specific size is reached
+- ON\_MSG - sync every message (Default)
+- ON\_DAEMON\_EXIT - sync on daemon exit
+- ON\_DEMAND - sync on demand
+- ON\_FILE\_SIZE - sync on file size reached
+- ON\_SPECIFIC\_SIZE - sync after specific size is reached
Note :
-1. Combinations (not allowed: combinations with ON_MSG,combination of ON_FILE_SIZE with ON_SPECIFIC_SIZE)
-2. If on_demand sync strategy alone is specified, it is advised to concatenate the log files in sequential order before viewing it on viewer.
+1. Not all combinations are allowed: ON\_MSG cannot be combined with any other strategy, and ON\_FILE\_SIZE cannot be combined with ON\_SPECIFIC\_SIZE.
+2. If the ON\_DEMAND sync strategy alone is specified, it is advised to concatenate the log files in sequential order before viewing them in the DLT Viewer.
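As an illustration of the strategies above, a filter configuration that caches messages and syncs only after a specific amount of data could look like the following sketch (the keys are those introduced in the configuration file format section; the values are illustrative):

```
[FILTER1]
LogAppName=LOG
ContextName=TEST
LogLevel=DLT_LOG_WARN
File=example
FileSize=50000
NOFiles=5
# Cache incoming messages and write them to the device
# only after roughly 5000 bytes have accumulated
SyncBehavior=ON_SPECIFIC_SIZE
SpecificSize=5000
```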