Make '--timestamps' a shorthand for --headers=time.
Add optional headers to dmsetup and dmstats reports. These are
selected by the user in a similar way to report fields; for example,
selecting the 'time' and 'report_count' headers produces output like:

Time: 27/07/15 12:07:56 Count: 0
Name             RgID ArID RRqM/s WRqM/s   R/s   W/s RSz/s   WSz/s  AvRqS QSize SvcTm Util% AWait
vg_hex-lv_home      0    0   0.00   0.00  0.00 43.00     0 416.00k  9.50k  1.00  1.86  8.00 30.44
vg_hex-lv_root      0    0   0.00   0.00  0.00  0.00     0       0      0  0.00  0.00  0.00  0.00
vg_hex-lv_images    0    0   0.00   0.00  0.00  0.00     0       0      0  0.00  0.00  0.00  0.00
Add a switch to optionally print a timestamp before displaying
each report. Use the same format as iostat for now (ISO format
controlled by S_TIME_FORMAT is also easy to add).
Add the ability to output a row of header data before the main
report. This can be used to print a repeating banner including
data such as the time, the report count, and system performance
metrics.
A 'header' behaves in a similar way to a field; they are defined
by passing in an array of header types and selected using a string
of names. This allows programs using dm_report to customize the
available set of headers and allow their display to be configured
by the user.
Headers do not participate in any way in sorting or selection and
can only appear in the special 'header' section of the report.
A row of headers is added to a report by passing in a string of
header names to be parsed. Header output is either written as
soon as it is defined (unbuffered) or when the library user calls
the dm_report_output_header() function.
Add arguments, report types, and a 'stats' command that dm-stats will
use and implement 'clear', 'create', 'delete', 'list', 'print', and
'report' sub-commands.
Adapt _display_info_cols() to allow reporting of statistics with the
DR_STATS report type. Since a single object (device) may have many rows
of statistics to report the call to dm_report_object() is placed inside
a loop over each statistics area present.
For non-stats reports or for devices with a single region spanning the
entire device the body of the loop is executed once.
Regions and the areas that they contain are always traversed in
ascending order beginning with area zero of region zero: all sorting is
handled by the report engine.
Rename these two variables to 'argcp' and 'argvp' to make it clear
we are dealing with pointers to an 'int argc' and 'char **argv'.
Add data structures, type definitions and interfaces to
libdevmapper to work with device-mapper statistics.
To simplify handling gaps in the sequence of region_ids for users
of the library, and to allow for future changes in the data
structures used to contain statistics values in userspace, the data
structures themselves are not exported in libdevmapper.h.
Instead an opaque handle of type struct dm_stats* is obtained by
calling the library and all subsequent statistics operations are
carried out using this handle. A dm_stats object represents the
complete set of available counter sets for an individual mapped
device.
The dm_stats handle contains a pointer to a table of one or more
dm_stats_region objects representing the regions registered with the
@stats_create message. These in turn point to a table of one or more
dm_stats_counters objects containing the counter sets for each defined
area within the region:
dm_stats->dm_stats_region[nr_regions]->dm_stats_counters[nr_areas]
This structure is private to the library and may change in future
versions: all users should make use of the public interface and treat
the dm_stats type as an opaque handle. Accessor methods are provided
to obtain values stored in individual region and area objects.
Ranges and counter sets are stored in order of increasing device
sector.
Public methods are provided to create and destroy handles and to
list, create, and destroy, statistics regions as well as to obtain and
parse the counter data.
Accessor methods are provided to return Linux iostat-style derived
performance metrics:
dm_stats_get_throughput()
dm_stats_get_utilization()
dm_stats_get_service_time()
dm_stats_get_rd_merges_per_sec()
dm_stats_get_wr_merges_per_sec()
dm_stats_get_reads_per_sec()
dm_stats_get_read_sectors_per_sec()
dm_stats_get_writes_per_sec()
dm_stats_get_write_sectors_per_sec()
dm_stats_get_average_request_size()
dm_stats_get_average_queue_size()
dm_stats_get_await()
dm_stats_get_r_await()
dm_stats_get_w_await()
Rename dm_report_headings() to dm_report_column_headings() to make
it clear that it's the column headings being output.
Not releasing objects back to the pool is fine for short-lived
pools since the memory will be freed when dm_pool_destroy() is
called.
Any pool that may be long-lived needs to be more careful to free
objects back to the pool to avoid leaking memory that will not be
reclaimed until the pool is destroyed at process exit time.
The report pool currently leaks each headings line and some row
data.
Although dm_report_output() tries to free the first allocated row,
this may end up freeing a later row due to sorting of the row list
while reporting. Store a pointer to the first allocated row from
_do_report_object() instead and free this at the end of
_output_as_columns(), _output_as_rows(), and dm_report_clear().
Also make sure to call dm_pool_free() for the headings line built
in _report_headings().
Without these changes dmstats reports can leak around 600k in 10m
(exact rate depends on fields and values):
top - 12:11:32 up 4 days, 3:16, 15 users, load average: 0.01, 0.12, 0.14
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6473 root 20 0 130196 3124 2792 S 0.0 0.0 0:00.00 dmstats
top - 12:22:04 up 4 days, 3:26, 15 users, load average: 0.06, 0.11, 0.13
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6498 root 20 0 130836 3712 2752 S 0.0 0.0 0:00.60 dmstats
With this patch no increase in RSS is seen:
top - 13:54:58 up 4 days, 4:59, 15 users, load average: 0.12, 0.14, 0.14
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13962 root 20 0 130196 2996 2688 S 0.0 0.0 0:00.00 dmstats
top - 14:04:31 up 4 days, 5:09, 15 users, load average: 1.02, 0.67, 0.36
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13962 root 20 0 130196 2996 2688 S 0.3 0.0 0:00.32 dmstats
Add a call to clear (abandon) a report's current data. This can
be used by callers that make repeating reports, such as dmstats,
in order to throw away the data of the first iteration (which will
have accumulated over some unknown interval).
Add a function to output the headings of columns-based reports
even if they have already been shown.
This will be used by dmstats reports to produce iostat-like
repeating reports of statistics values.
This patch removes a check for RH_HEADINGS_PRINTED from
_report_headings that prevents headings being displayed if the flag
is already set; this check is redundant since the only existing
caller (_output_as_columns()) already tests the flag before
calling the function.
Add functions for dealing with reports that repeat at a set time
interval:
dm_report_get_interval()
dm_report_set_interval()
dm_report_wait_interval()
dm_report_set_interval() should be called following dm_report_init()
to set the desired interval to be used with this report. Once the
report has been prepared the program should call dm_report_wait_interval()
to suspend execution until the current interval expires. When the
wait function returns the caller should obtain and output report
data for the new interval.
Measure the actual wait interval in dm_report_wait_interval() and
add dm_report_get_last_interval() so that callers can obtain it
to pass to statistics methods.
Make report interval handling consistent everywhere in libdm by
storing the report interval in nanoseconds and adding additional
helper functions to get and set a value in milliseconds. This is
consistent with the other parts of libdm that handle statistics
intervals and removes the need to convert between different
representations within the library - scaling is only needed to
either present a value to the user or to pass to an external
function that expects a particular unit of time (e.g. usleep()).
Use refresh_filters instead of destroy_filters and init_filters in
the refresh_toolcontext fn, which deals with cmd->initialized.filters
correctly on refresh.
- Add missing check_lvmpolld to toplevel Makefile
- Document check_system
When changing an existing VG to lock_type sanlock,
make the sanlock lv large enough to hold all the
locks needed for existing LVs.
. clean up the info output for readability
. remove some internal debug output
. fix the daemon quit option
Just shuffle the items and put them into logical groups so it's
visible at first sight what each group contains - it makes it a bit
easier to make heads or tails of the whole cmd_context monster.
When a command is flagged with the NO_METADATA_PROCESSING flag, it means
the command does not process any metadata and hence it doesn't require
lvmetad or lvmpolld and it can get away with no locking too. These are
mostly simple commands (like lvmconfig/dumpconfig, version, types,
segtypes and other builtin commands that do not process metadata
in any way).
At first, when the lvm command is executed, create the toolcontext
without initializing connections (lvmetad, lvmpolld) and without
initializing filters (which depend on connection init). Instead, delay
this initialization until we know we need it. That is, until the
lvm_run_command fn is called, in which we know what the actual
command to run is and hence we can avoid any connection, filter
or locking initialization for commands that would not make use
of it anyway.
For all the other create_toolcontext calls, we keep the original
behaviour - the filters and connections are initialized together
with the toolcontext.
Make it possible to decide whether we want to initialize connections and
filters together with toolcontext creation.
Add "filters" and "connections" fields to struct
cmd_context_initialized_parts and set these in cmd_context.initialized
instance accordingly.
(For now, all create_toolcontext calls do initialize connections and
filters; we'll change that appropriately in a subsequent patch.)
Move original lvmetad and lvmpolld initialization code from
_process_config fn to their own functions _init_lvmetad and
_init_lvmpolld (both covered by a single _init_connections fn).
Add struct cmd_context_initialized_parts to wrap up information
about which cmd context pieces are initialized and add variable
of this struct type into struct cmd_context.
Also, move existing "config_initialized" variable that was directly
part of cmd_context into the new cmd_context.initialized wrapper.
We'll be adding more items into the struct cmd_context_initialized_parts
with subsequent patches...
When the sanlock VG holding the global lock is removed,
print a warning indicating that the global lock needs to be
enabled in another sanlock VG.
This tries harder to avoid creating duplicate global locks in
sanlock VGs by refusing to create a new sanlock VG with a
global lock if other sanlock VGs exist that may have a gl.
vgsummary information contains provisional VG information
that is obtained without holding the VG lock. This info
can be used to lock the VG, and then read it with vg_read().
After the VG is read properly, the vgsummary info should
be verified.
Add the VG lock_type to the vgsummary. It needs to be
known before the VG can be locked and read.
If there is exactly one '/' and it is not the first character, check
for /dev/vg/lv (as dm_dir()/../$name, i.e. /dev/mapper/../vg/lv).
The "-o help" is now handled as implicit field and it gets processed
just like any other field - all handled by libdevmapper now.
This is a regression introduced by commit
6c0e44d5a2e82aa160d48e83992e7ca342bc4bdf which changed
the way dev_cache_get fn works - before this patch, when a
device was not found, it fired a full rescan to correct the
cache. However, the change coming with that commit missed
this full_rescan call, causing the lvmcache to still contain
info about PVs which should be filtered now.
Such a situation may happen through a coincidence of: an old
persistent cache (/etc/lvm/cache/.cache) which no longer reflects
the actual state, a device name/symlink which now points to a
device which should be filtered, and the fact that we keep info
about usable DM devices in .cache no matter what the filter
setting is.
This bug could be hidden though by changes introduced in
commit f1a000a477558e157532d5f2cd2f9c9139d4f87c as it
calls full_rescan earlier before this problem is hit.
But we need to fix this anyway for the dev_cache_get
to be correct if we happen to use the same code path
again somewhere sometime.
For example, a simple reproducer was (before commit
f1a000a477558e157532d5f2cd2f9c9139d4f87c):
- /dev/sda contains a PV header with UUID y5PzRD-RBAv-7sBx-V3SP-vDmy-DeSq-GUh65M
- lvm.conf: filter = [ "r|.*|" ]
- rm -f .cache (to start with clean state)
- dmsetup create test --table "0 8388608 linear /dev/sda 0" (8388608 is
just the size of the /dev/sda device I use in the reproducer)
- pvs (this will create .cache file which contains
"/dev/disk/by-id/lvm-pv-uuid-y5PzRD-RBAv-7sBx-V3SP-vDmy-DeSq-GUh65M"
as well as "/dev/mapper/test" and the target node "/dev/dm-1" - all the
usable DM mappings (and their symlinks) get into the .cache file even
though the filter is set to "ignore all" - we do this - so far it's OK)
- dmsetup remove test (so we end up with /dev/disk/by-id/lvm-pv-uuid-...
pointing to the /dev/sda now since it's the underlying device
containing the actual PV header)
- now calling "pvs" with such a .cache file, we get:
$ pvs
PV VG Fmt Attr PSize PFree
/dev/disk/by-id/lvm-pv-uuid-y5PzRD-RBAv-7sBx-V3SP-vDmy-DeSq-GUh65M vg lvm2 a-- 4.00g 0
Even though we have set filter = [ "r|.*|" ] in the lvm.conf file!
Moved out from lib/display and a little documentation added.
It's tuned to LVM's requirements historically and its behaviour
might not always be what you would expect.
There is no longer an "enable" option for the global lock,
so remove the bit of code that was checking for it. It
was an optional variation anyway, and not one that was likely
to be used.
Also update the corresponding comment describing global lock
creation.
Stop removing hyphens when = is seen. With an option
like --profile=thin-performance, the hyphen removal
will stop at = and will not remove - after thin.
Stop removing hyphens altogether once a standalone '--' arg
appears.
Simply running concurrent copies of 'pvscan | true' is enough to make
clvmd freeze: pvscan exits on the EPIPE without first releasing the
global lock.
clvmd notices the client disappear but because the cleanup code that
releases the locks is triggered from within some processing after the
next select() returns, and that processing can 'break' after doing just
one action, it sometimes never releases the locks to other clients.
Move the cleanup code before the select.
Check all fds after select().
Improve some debug messages and warn in the unlikely event that
select() capacity could soon be exceeded.
When there are duplicate global locks, check if the gl
is still enabled each time a gl or vg lock is acquired
in the lockspace. Once one of the duplicates is disabled,
then other hosts will recognize that the issue is resolved
without needing to restart the lockspaces.
Move the DEBUG_MEM decision inside libdevmapper.so instead of exposing
it in libdevmapper.h which causes failures if the binary and library
were compiled with opposite debugging settings.
pvscan autoactivation does not work for lockd VGs because
lock start is needed on a lockd VG before locking can be
done for it. Add a check to skip the attempt to autoactivate
rather than calling it, knowing it will fail.
Add a comment explaining why pvscan --cache works fine for
lockd VGs without locks, and why autoactivate is not done.
. the poll check will eventually call finish which will
write the VG, so an ex VG lock is needed from lvmlockd.
. fix missing unlock on poll error path
. remove the lockd locking while monitoring the progress
of the command, as suggested by the earlier FIXME comment,
as it's not needed.