author    Akihiro Motoki <amotoki@gmail.com>  2017-07-06 21:50:47 +0000
committer Akihiro Motoki <amotoki@gmail.com>  2017-07-06 21:50:47 +0000
commit    b0d66a59040c86fce41cce9bca578561c3ecf57e (patch)
tree      63ff6526a6a59edd5fc1446897b4965576207e12
parent    5d11d7bf5d47ee0ee65f923ae904ca004f5d8ece (diff)
download  osprofiler-b0d66a59040c86fce41cce9bca578561c3ecf57e.tar.gz
doc: Fix formatting
openstackdocstheme shows vertical lines for quote blocks. This commit
removes unnecessary leading spaces.

Change-Id: Ie5651f0510550eb3910e68e433a41e99bf2bfa8a
-rw-r--r--  doc/source/user/api.rst           36
-rw-r--r--  doc/source/user/collectors.rst    51
-rw-r--r--  doc/source/user/integration.rst  134
3 files changed, 111 insertions, 110 deletions
diff --git a/doc/source/user/api.rst b/doc/source/user/api.rst
index 05f55f5..78da4d7 100644
--- a/doc/source/user/api.rst
+++ b/doc/source/user/api.rst
@@ -64,17 +64,17 @@ How profiler works?
* Nested trace points are supported. The sample below produces 2 trace points:
- .. code-block:: python
+ .. code-block:: python
profiler.start("parent_point")
profiler.start("child_point")
profiler.stop()
profiler.stop()
- The implementation is quite simple. Profiler has one stack that contains
- ids of all trace points. E.g.:
+ The implementation is quite simple. Profiler has one stack that contains
+ ids of all trace points. E.g.:
- .. code-block:: python
+ .. code-block:: python
profiler.start("parent_point") # trace_stack.push(<new_uuid>)
# send to collector -> trace_stack[-2:]
@@ -87,8 +87,8 @@ How profiler works?
profiler.stop() # send to collector -> trace_stack[-2:]
# trace_stack.pop()
- It's simple to build a tree of nested trace points, having
- **(parent_id, point_id)** of all trace points.
+   It's simple to build a tree of nested trace points given the
+   **(parent_id, point_id)** pairs of all trace points.
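The stack mechanics described in this hunk can be sketched in plain Python. This is a simplified illustration only: the real profiler sends each point to a collector rather than appending to a local list.

```python
import uuid

# Simplified sketch of the trace stack described above (illustrative
# only; real OSProfiler notifies a collector instead of keeping a list).
trace_stack = ["base"]   # the base_id sits at the bottom of the stack
points = []              # collected (parent_id, point_id, name) tuples

def start(name):
    trace_stack.append(str(uuid.uuid4()))
    # "send to collector -> trace_stack[-2:]"
    parent_id, point_id = trace_stack[-2:]
    points.append((parent_id, point_id, name))

def stop():
    trace_stack.pop()

start("parent_point")    # push <new_uuid>
start("child_point")     # push another <new_uuid>
stop()
stop()

# With (parent_id, point_id) for every point, building the tree is a
# one-pass grouping of children by their parent id.
children = {}
for parent_id, point_id, name in points:
    children.setdefault(parent_id, []).append((point_id, name))
```

The nested ``start``/``stop`` calls above produce exactly two points, and the child's ``parent_id`` equals the parent's ``point_id``, which is all the tree reconstruction needs.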
Process of sending to collector.
--------------------------------
@@ -207,34 +207,34 @@ Available commands:
* Help message with all available commands and their arguments:
- .. parsed-literal::
+ .. parsed-literal::
- $ osprofiler -h/--help
+ $ osprofiler -h/--help
* OSProfiler version:
- .. parsed-literal::
+ .. parsed-literal::
- $ osprofiler -v/--version
+ $ osprofiler -v/--version
* Results of profiling can be obtained in JSON (option: ``--json``) and HTML
(option: ``--html``) formats:
- .. parsed-literal::
+ .. parsed-literal::
- $ osprofiler trace show <trace_id> --json/--html
+ $ osprofiler trace show <trace_id> --json/--html
- hint: option ``--out`` will redirect result of ``osprofiler trace show``
- in specified file:
+    hint: the ``--out`` option will redirect the result of
+    ``osprofiler trace show`` to the specified file:
- .. parsed-literal::
+ .. parsed-literal::
- $ osprofiler trace show <trace_id> --json/--html --out /path/to/file
+ $ osprofiler trace show <trace_id> --json/--html --out /path/to/file
* In the latest versions of OSProfiler with storage drivers (e.g. MongoDB (URI:
``mongodb://``), Messaging (URI: ``messaging://``), and Ceilometer
(URI: ``ceilometer://``)), the ``--connection-string`` parameter should be set:
- .. parsed-literal::
+ .. parsed-literal::
- $ osprofiler trace show <trace_id> --connection-string=<URI> --json/--html
+ $ osprofiler trace show <trace_id> --connection-string=<URI> --json/--html
diff --git a/doc/source/user/collectors.rst b/doc/source/user/collectors.rst
index 2bb7bdf..359d547 100644
--- a/doc/source/user/collectors.rst
+++ b/doc/source/user/collectors.rst
@@ -4,36 +4,37 @@ Collectors
There are a number of drivers to support different collector backends:
-Redis:
-------
- * Overview
- The Redis driver allows profiling data to be collected into a redis
- database instance. The traces are stored as key-value pairs where the
- key is a string built using trace ids and timestamps and the values
- are JSON strings containing the trace information. A second driver is
- included to use Redis Sentinel in addition to single node Redis.
+Redis
+-----
- * Capabilities:
- * Write trace data to the database.
+* Overview
- * Query Traces in database: This allows for pulling trace data
- querying on the keys used to save the data in the database.
+    The Redis driver allows profiling data to be collected into a Redis
+    database instance. The traces are stored as key-value pairs, where the
+    key is a string built from trace ids and timestamps and the values
+    are JSON strings containing the trace information. A second driver is
+    included to use Redis Sentinel in addition to single-node Redis.
- * Generate a report based on the traces stored in the database.
+* Capabilities
- * Supports use of Redis Sentinel for robustness.
+ * Write trace data to the database.
+    * Query traces in the database: this allows pulling trace data by
+      querying on the keys used to save the data in the database.
+ * Generate a report based on the traces stored in the database.
+ * Supports use of Redis Sentinel for robustness.
- * Usage:
- The driver is used by OSProfiler when using a connection-string URL
- of the form redis://<hostname>:<port>. To use the Sentinel version
- use a connection-string of the form redissentinel://<hostname>:<port>
+* Usage
- * Configuration:
- * No config changes are required by for the base Redis driver.
+    The driver is used by OSProfiler when using a connection-string URL
+    of the form redis://<hostname>:<port>. To use the Sentinel version,
+    use a connection string of the form redissentinel://<hostname>:<port>.
- * There are two configuration options for the Redis Sentinel driver:
- * socket_timeout: specifies the sentinel connection socket timeout
- value. Defaults to: 0.1 seconds
+* Configuration
- * sentinel_service_name: The name of the Sentinel service to use.
- Defaults to: "mymaster" \ No newline at end of file
+  * No config changes are required for the base Redis driver.
+ * There are two configuration options for the Redis Sentinel driver:
+
+    * socket_timeout: specifies the Sentinel connection socket timeout
+      value. Defaults to 0.1 seconds.
+    * sentinel_service_name: the name of the Sentinel service to use.
+      Defaults to "mymaster".
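The connection strings described in the Usage section can be dissected with standard URL tooling. This is a minimal illustrative sketch, not the driver's actual parsing code:

```python
from urllib.parse import urlparse

def parse_collector_uri(uri):
    """Split an OSProfiler connection string into (scheme, host, port)."""
    parsed = urlparse(uri)
    return parsed.scheme, parsed.hostname, parsed.port

# The base Redis driver and the Sentinel variant differ only in scheme.
scheme, host, port = parse_collector_uri("redis://localhost:6379")
sentinel_scheme, _, _ = parse_collector_uri("redissentinel://localhost:26379")
```

The scheme is what selects the driver; host and port identify the Redis (or Sentinel) endpoint to connect to.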
diff --git a/doc/source/user/integration.rst b/doc/source/user/integration.rst
index c40e209..dfb0de2 100644
--- a/doc/source/user/integration.rst
+++ b/doc/source/user/integration.rst
@@ -9,12 +9,12 @@ What we should use as a centralized collector?
We primarily decided to use `Ceilometer`_, because:
- * It's already integrated in OpenStack, so it's quite simple to send
- notifications to it from all projects.
+* It's already integrated in OpenStack, so it's quite simple to send
+ notifications to it from all projects.
- * There is an OpenStack API in Ceilometer that allows us to retrieve all
- messages related to one trace. Take a look at
- *osprofiler.drivers.ceilometer.Ceilometer:get_report*
+* There is an OpenStack API in Ceilometer that allows us to retrieve all
+ messages related to one trace. Take a look at
+ *osprofiler.drivers.ceilometer.Ceilometer:get_report*
Starting with OSProfiler 1.4.0, other options (the MongoDB driver in the
1.4.0 release, the Elasticsearch driver added later, etc.) are also available.
@@ -25,12 +25,12 @@ How to setup profiler notifier?
We primarily decided to use oslo.messaging Notifier API, because:
- * `oslo.messaging`_ is integrated in all projects
+* `oslo.messaging`_ is integrated in all projects
- * It's the simplest way to send notification to Ceilometer, take a
- look at: *osprofiler.drivers.messaging.Messaging:notify* method
+* It's the simplest way to send notifications to Ceilometer; take a
+  look at the *osprofiler.drivers.messaging.Messaging:notify* method
- * We don't need to add any new `CONF`_ options in projects
+* We don't need to add any new `CONF`_ options in projects
Starting with OSProfiler 1.4.0, other options (the MongoDB driver in the
1.4.0 release, the Elasticsearch driver added later, etc.) are also available.
@@ -38,93 +38,93 @@ In OSProfiler starting with 1.4.0 version other options (MongoDB driver in
How to initialize profiler, to get one trace across all services?
-----------------------------------------------------------------
- To enable cross service profiling we actually need to do send from caller
- to callee (base_id & trace_id). So callee will be able to init its profiler
- with these values.
+To enable cross-service profiling we need to send two identifiers
+(base_id & trace_id) from the caller to the callee, so the callee will be
+able to init its profiler with these values.
- In case of OpenStack there are 2 kinds of interaction between 2 services:
+In case of OpenStack there are 2 kinds of interaction between 2 services:
- * REST API
+* REST API
- It's well known that there are python clients for every project,
- that generate proper HTTP requests, and parse responses to objects.
+  It's well known that there are Python clients for every project
+  that generate proper HTTP requests and parse responses into objects.
- These python clients are used in 2 cases:
+ These python clients are used in 2 cases:
- * User access -> OpenStack
+ * User access -> OpenStack
- * Service from Project 1 would like to access Service from Project 2
+ * Service from Project 1 would like to access Service from Project 2
- So what we need is to:
+ So what we need is to:
- * Put in python clients headers with trace info (if profiler is inited)
+  * Put trace info into the Python clients' request headers (if the
+    profiler is initialized)
- * Add `OSprofiler WSGI middleware`_ to your service, this initializes
- the profiler, if and only if there are special trace headers, that
- are signed by one of the HMAC keys from api-paste.ini (if multiple
- keys exist the signing process will continue to use the key that was
- accepted during validation).
+  * Add `OSprofiler WSGI middleware`_ to your service; this initializes
+    the profiler if and only if there are special trace headers that
+    are signed by one of the HMAC keys from api-paste.ini (if multiple
+    keys exist, the signing process will continue to use the key that
+    was accepted during validation).
- * The common items that are used to configure the middleware are the
- following (these can be provided when initializing the middleware
- object or when setting up the api-paste.ini file)::
+  * The common items used to configure the middleware are the following
+    (these can be provided when initializing the middleware object or
+    when setting up the api-paste.ini file)::
- hmac_keys = KEY1, KEY2 (can be a single key as well)
+ hmac_keys = KEY1, KEY2 (can be a single key as well)
- Actually the algorithm is a bit more complex. The Python client will
- also sign the trace info with a `HMAC`_ key (lets call that key ``A``)
- passed to profiler.init, and on reception the WSGI middleware will
- check that it's signed with *one of* the HMAC keys (the wsgi
- server should have key ``A`` as well, but may also have keys ``B``
- and ``C``) that are specified in api-paste.ini. This ensures that only
- the user that knows the HMAC key ``A`` in api-paste.ini can init a
- profiler properly and send trace info that will be actually
- processed. This ensures that trace info that is sent in that
- does **not** pass the HMAC validation will be discarded. **NOTE:** The
- application of many possible *validation* keys makes it possible to
- roll out a key upgrade in a non-impactful manner (by adding a key into
- the list and rolling out that change and then removing the older key at
- some time in the future).
+    Actually the algorithm is a bit more complex. The Python client will
+    also sign the trace info with an `HMAC`_ key (let's call that key ``A``)
+    passed to profiler.init, and on reception the WSGI middleware will
+    check that it's signed with *one of* the HMAC keys (the WSGI
+    server should have key ``A`` as well, but may also have keys ``B``
+    and ``C``) that are specified in api-paste.ini. This ensures that only
+    a user who knows the HMAC key ``A`` in api-paste.ini can init a
+    profiler properly and send trace info that will actually be
+    processed; trace info that does **not** pass the HMAC validation
+    will be discarded. **NOTE:** Allowing multiple *validation* keys
+    makes it possible to roll out a key upgrade in a non-impactful manner
+    (by adding a key to the list, rolling out that change, and then
+    removing the older key at some time in the future).
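The multi-key validation scheme described above can be sketched with the standard library's ``hmac`` module. This is a simplified illustration of the idea, not OSProfiler's exact signing format or digest choice:

```python
import hashlib
import hmac

def sign(trace_info: bytes, hmac_key: str) -> str:
    # Client side: sign the trace info with the single key passed to
    # profiler.init (key "A" in the text above). SHA-1 is an assumption
    # here, chosen only for illustration.
    return hmac.new(hmac_key.encode(), trace_info, hashlib.sha1).hexdigest()

def validate(trace_info: bytes, signature: str, hmac_keys) -> bool:
    # Middleware side: accept if *any* configured key verifies the
    # signature -- this is what makes non-impactful key rotation possible.
    return any(
        hmac.compare_digest(sign(trace_info, key), signature)
        for key in hmac_keys
    )

info = b'{"base_id": "...", "parent_id": "..."}'
sig = sign(info, "A")                            # client knows only key A
accepted = validate(info, sig, ["A", "B", "C"])  # server lists A, B and C
rejected = validate(info, sig, ["B", "C"])       # key A rotated out
```

Rotating a key is then just adding the new key to the server-side list, rolling that out, and later removing the old one; clients signed with either key keep validating during the transition.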
- * RPC API
+* RPC API
- RPC calls are used for interaction between services of one project.
- It's well known that projects are using `oslo.messaging`_ to deal with
- RPC. It's very good, because projects deal with RPC in similar way.
+  RPC calls are used for interaction between services of one project.
+  It's well known that projects use `oslo.messaging`_ to deal with
+  RPC. This is very good, because all projects deal with RPC in a
+  similar way.
- So there are 2 required changes:
+ So there are 2 required changes:
-   * On callee side put in request context trace info (if profiler was
-     initialized)
+  * On the caller side, put trace info into the request context (if the
+    profiler was initialized).
-   * On caller side initialize profiler, if there is trace info in request
-     context.
+  * On the callee side, initialize the profiler if there is trace info
+    in the request context.
- * Trace all methods of callee API (can be done via profiler.trace_cls).
+ * Trace all methods of callee API (can be done via profiler.trace_cls).
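A toy sketch of propagating trace info through a request context. The dict-based context and the helper names here are hypothetical, for illustration only; they are not oslo.messaging or osprofiler APIs:

```python
def caller_build_context(profiler_state):
    # Caller side: put trace info into the request context, but only
    # if a profiler was initialized for this request.
    context = {"request_id": "req-1"}
    if profiler_state is not None:
        context["trace_info"] = {
            "base_id": profiler_state["base_id"],
            "parent_id": profiler_state["current_id"],
        }
    return context

def callee_init_profiler(context):
    # Callee side: initialize a profiler only when trace info is
    # present in the incoming request context.
    trace_info = context.get("trace_info")
    if trace_info is None:
        return None
    return {"base_id": trace_info["base_id"],
            "current_id": trace_info["parent_id"]}

ctx = caller_build_context({"base_id": "b-1", "current_id": "p-7"})
callee_profiler = callee_init_profiler(ctx)
```

A request without trace info simply yields no profiler on the callee side, which mirrors the "if and only if trace info is present" behavior of the WSGI middleware.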
What points should be tracked by default?
-----------------------------------------
- I think that for all projects we should include by default 5 kinds of points:
+I think that for all projects we should include by default 5 kinds of
+trace points:
- * All HTTP calls - helps to get information about: what HTTP requests were
- done, duration of calls (latency of service), information about projects
- involved in request.
+* All HTTP calls - helps to get information about which HTTP requests
+  were made, the duration of calls (latency of the service), and the
+  projects involved in the request.
- * All RPC calls - helps to understand duration of parts of request related
- to different services in one project. This information is essential to
- understand which service produce the bottleneck.
+* All RPC calls - helps to understand the duration of the parts of a
+  request related to different services in one project. This information
+  is essential to understand which service produces the bottleneck.
- * All DB API calls - in some cases slow DB query can produce bottleneck. So
- it's quite useful to track how much time request spend in DB layer.
+* All DB API calls - in some cases a slow DB query can produce a bottleneck,
+  so it's quite useful to track how much time a request spends in the DB layer.
- * All driver calls - in case of nova, cinder and others we have vendor
- drivers. Duration
+* All driver calls - in the case of nova, cinder, and others we have vendor
+  drivers. Duration
- * ALL SQL requests (turned off by default, because it produce a lot of
- traffic)
+* ALL SQL requests (turned off by default, because it produces a lot of
+  traffic)
.. _CONF: http://docs.openstack.org/developer/oslo.config/
.. _HMAC: http://en.wikipedia.org/wiki/Hash-based_message_authentication_code