path: root/lib/report

* lvmlockd: fix report of lv_active_exclusively for special lv types
  David Teigland, 2023-01-10 (1 file changed, -7/+6)

  Cover a case missed by the recent commit e0ea0706d "report: query
  lvmlockd for lv_active_exclusively".

  Fix the lv_active_exclusively value reported for thin LVs. It's the
  thin pool that is locked in lvmlockd, and the thin LV state was
  mistakenly being queried and not found. Certain LV types like thin
  can only be activated exclusively, so always report
  lv_active_exclusively true for these when active.

* report: query lvmlockd for lv_active_exclusively
  corubba, 2022-11-11 (1 file changed, -1/+9)

  Query LV lock state in lvmlockd to report lv_active_exclusively for
  active LVs in shared VGs. As with all lvmlockd state, it is from the
  perspective of the local node.

  Signed-off-by: corubba <corubba@gmx.de>

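  For illustration, the new field can be queried like this (VG/LV
  names hypothetical):

    $ lvs -o name,active,lv_active_exclusively vg/lv
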
* report: adjust lv_active_remotely for shared VGs
  corubba, 2022-11-11 (1 file changed, -1/+2)

  Add a note to the manpage that lvmlockd is unable to determine,
  accurately and without side-effects, whether an LV is remotely
  active. Also change the value of the lv_active_remotely option from
  false to undefined for shared VGs to distinctly communicate that
  inability to users. Only for local VGs can it be definitely stated
  that they are not remotely active.

  Signed-off-by: corubba <corubba@gmx.de>

* report: fix lv_active column type from STR to BIN
  Peter Rajnoha, 2022-09-06 (3 files changed, -7/+9)

  Fix lv_active to be of BIN type instead of STR. This allows
  lv_active to follow the report/binary_values_as_numeric setting as
  well as the --binary command line switch. It also makes it possible
  to use -S|--select with either the textual or numeric representation
  of the value, like 'lvs -S active=active' but also 'lvs -S active=1'.

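  For example, with a hypothetical VG named vg, all of the following
  now select or display the same state:

    $ lvs -S active=active vg
    $ lvs -S active=1 vg
    $ lvs -o name,active --binary vg
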
* report: values: add note about self-descriptive values to report
  Peter Rajnoha, 2022-08-26 (2 files changed, -0/+54)

* report: report numeric values (not string synonyms) for NUM and BIN
  fields with json_std format
  Peter Rajnoha, 2022-08-16 (1 file changed, -3/+4)

  Internally, NUM and BIN fields are marked as
  DM_REPORT_FIELD_TYPE_NUM_NUMBER through the libdevmapper API. The
  new 'json_std' format mandates that the report string representing
  such a value must be a number, not an arbitrary string, because
  numeric values in 'json_std' format do not have double quotes around
  them. This practically means we can't use string synonyms ("named
  reserved values") for such values and the report string must always
  represent a proper number.

  With 'json' and 'basic' formats, this is not an issue: the 'basic'
  format doesn't have any structure or typing at all, and the 'json'
  format puts all values in quotes, including numeric ones.

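  A sketch of the difference, assuming a hypothetical VG named vg:

    $ lvs -o name,active --reportformat json vg
    $ lvs -o name,active --reportformat json_std vg
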
* report: fix pe_start column type from NUM to SIZ
  Peter Rajnoha, 2022-08-11 (1 file changed, -1/+1)

  The 'pe_start' column was incorrectly marked as being of type NUM.
  This was not correct as pe_start is actually of type SIZ, which
  means it can have a size suffix and hence it's not a pure numeric
  value.

  Proper column type is important for selection to work correctly, so
  we can also do comparisons while using suffixes. This is also
  important for the new "json_std" output format, which does not put
  double quotes around pure numeric values. With pe_start incorrectly
  marked as NUM instead of SIZ, this produced invalid JSON output like
  '"pe_start" = 1.00m' because it contained the 'm' (or other) size
  suffix. If properly marked as SIZ, this is then put in double quotes
  like '"pe_start" = "1.00m"'.

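  For example (hypothetical PV), selection can now compare pe_start
  using size suffixes, and json_std quotes the value:

    $ pvs -o name,pe_start -S 'pe_start=1.00m'
    $ pvs -o name,pe_start --reportformat json_std
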
* writecache: display block size from lvs
  David Teigland, 2022-02-21 (3 files changed, -0/+23)

  lvs was missing the ability to display the writecache block size;
  now possible with 'lvs -o writecache_block_size'.

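  For example (hypothetical writecache LV):

    $ lvs -o name,writecache_block_size vg/lv
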
* cov: remove unused variable setting
  Zdenek Kabelac, 2021-09-13 (1 file changed, -2/+1)

  Since there is no use for &end after strtol, remove it.

* cov: make it aware we need these headers for muslC
  Zdenek Kabelac, 2021-09-13 (1 file changed, -0/+3)

* cov: keep time calculation ready for 2038
  Zdenek Kabelac, 2021-09-13 (1 file changed, -1/+1)

  Be prepared ;) and keep the arithmetic 64bit ready.

* cov: keep 64bit arithmetic
  Zdenek Kabelac, 2021-09-13 (1 file changed, -1/+1)

  Highly unlikely this case will ever need 64bit math, but just in
  case, keep the expression as 64bit.

* Add metadata-based autoactivation property for VG and LV
  David Teigland, 2021-04-07 (3 files changed, -0/+23)

  The autoactivation property can be specified in lvcreate or vgcreate
  for new LVs/VGs, and the property can be changed by lvchange or
  vgchange for existing LVs/VGs.

  --setautoactivation y|n enables|disables autoactivation of a VG or
  LV.

  Autoactivation is enabled by default, which is consistent with past
  behavior. The disabled state is stored as a new flag in the VG
  metadata, and the absence of the flag allows autoactivation.

  If autoactivation is disabled for the VG, then no LVs in the VG will
  be autoactivated (the LV autoactivation property will have no
  effect.) When autoactivation is enabled for the VG, then
  autoactivation can be controlled on individual LVs.

  The state of this property can be reported for LVs/VGs using the
  "-o autoactivation" option in lvs/vgs commands, which will report
  "enabled", or "" for the disabled state; see the example after this
  entry.

  Previous versions of lvm do not recognize this property. Since
  autoactivation is enabled by default, the disabled setting will have
  no effect in older lvm versions. If the VG is modified by older lvm
  versions, the disabled state will also be dropped from the metadata.

  The autoactivation property is an alternative to using the lvm.conf
  auto_activation_volume_list, which is still applied to VGs/LVs in
  addition to the new property. If VG or LV autoactivation is disabled
  either in metadata or in auto_activation_volume_list, it will not be
  autoactivated.

  An autoactivation command will silently skip activating an LV when
  the autoactivation property is disabled.

  To determine the effective autoactivation behavior for a specific
  LV, multiple settings would need to be checked: the VG
  autoactivation property, the LV autoactivation property, and the
  auto_activation_volume_list. The "activation skip" property would
  also be relevant, since it applies to both normal and auto
  activation.

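  For illustration, setting and reporting the property (VG/LV names
  hypothetical):

    $ vgchange --setautoactivation n vg
    $ lvchange --setautoactivation y vg/lv
    $ vgs -o name,autoactivation vg
    $ lvs -o name,autoactivation vg
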
* cov: ensure settings is set
  Zdenek Kabelac, 2021-03-10 (2 files changed, -11/+15)

* device usage based on devices file
  David Teigland, 2021-02-23 (3 files changed, -0/+42)

  The LVM devices file lists devices that lvm can use. The default
  file is /etc/lvm/devices/system.devices, and the lvmdevices(8)
  command is used to add or remove device entries. If the file does
  not exist, or if lvm.conf includes use_devicesfile=0, then lvm will
  not use a devices file. When the devices file is in use, the regex
  filter is not used, and the filter settings in lvm.conf or on the
  command line are ignored.

  LVM records devices in the devices file using hardware-specific IDs,
  such as the WWID, and attempts to use subsystem-specific IDs for
  virtual device types. These device IDs are also written in the VG
  metadata. When no hardware or virtual ID is available, lvm falls
  back to using the unstable device name as the device ID. When
  devnames are used, lvm performs extra scanning to find devices if
  their devname changes, e.g. after reboot.

  When proper device IDs are used, an lvm command will not look at
  devices outside the devices file, but when devnames are used as a
  fallback, lvm will scan devices outside the devices file to locate
  PVs on renamed devices. A config setting search_for_devnames can be
  used to control the scanning for renamed devname entries.

  Related to the devices file, the new command option
  --devices <devnames> allows a list of devices to be specified for
  the command to use, overriding the devices file. The listed devices
  act as a sort of devices file in terms of limiting which devices lvm
  will see and use. Devices that are not listed will appear to be
  missing to the lvm command.

  Multiple devices files can be kept in /etc/lvm/devices, which allows
  lvm to be used with different sets of devices, e.g. system devices
  do not need to be exposed to a specific application, and the
  application can use lvm on its own set of devices that are not
  exposed to the system. The option --devicesfile <filename> is used
  to select the devices file to use with the command. Without the
  option set, the default system devices file is used. Setting
  --devicesfile "" causes lvm to not use a devices file. An existing,
  empty devices file means lvm will see no devices.

  The new command vgimportdevices adds PVs from a VG to the devices
  file and updates the VG metadata to include the device IDs.
  vgimportdevices -a will import all VGs into the system devices file.

  LVM commands run by dmeventd do not use a devices file by default,
  and will look at all devices on the system. A devices file can be
  created for dmeventd (/etc/lvm/devices/dmeventd.devices); if this
  file exists, lvm commands run by dmeventd will use it. Usage
  sketches follow this entry.

  Internal implementation:
  - device_ids_read - read the devices file
    . add struct dev_use (du) to cmd->use_devices for each devices
      file entry
  - dev_cache_scan - get /dev entries
    . add struct device (dev) to dev_cache for each device on the
      system
  - device_ids_match - match devices file entries to /dev entries
    . match each du on cmd->use_devices to a dev in dev_cache, using
      device ID
    . on match, set du->dev, dev->id, dev->flags MATCHED_USE_ID
  - label_scan - read lvm headers and metadata from devices
    . filters are applied, those that do not need data from the device
    . filter-deviceid skips devs without MATCHED_USE_ID, i.e. skips
      /dev entries that are not listed in the devices file
    . read lvm label from dev
    . filters are applied, those that use data from the device
    . read lvm metadata from dev
    . add info/vginfo structs for PVs/VGs (info is "lvmcache")
  - device_ids_find_renamed_devs - handle devices with unstable
    devname ID where devname changed
    . this step only needed when devs do not have proper device IDs,
      and their dev names change, e.g. after reboot sdb becomes sdc
    . detect incorrect match because PVID in the devices file entry
      does not match the PVID found when the device was read above
    . undo incorrect match between du and dev above
    . search system devices for new location of PVID
    . update devices file with new devnames for PVIDs on renamed
      devices
    . label_scan the renamed devs
  - continue with command processing

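  A few usage sketches (device names and the alternative file name are
  hypothetical):

    $ lvmdevices --adddev /dev/sdb      # add an entry to the devices file
    $ pvs --devicesfile other.devices   # use an alternative devices file
    $ pvs --devices /dev/sdb,/dev/sdc   # override the devices file
    $ vgimportdevices -a                # import all VGs into system.devices
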
* integrity: display total mismatches at raid LV level
  David Teigland, 2020-11-11 (1 file changed, -0/+4)

  Each integrity image in a raid LV reports its own number of
  integrity mismatches, e.g.

    lvs -o integritymismatches vg/lv_rimage_0
    lvs -o integritymismatches vg/lv_rimage_1

  In addition to this, allow the total number of integrity mismatches
  from all images to be displayed for the raid LV:

    lvs -o integritymismatches vg/lv

  shows the number of mismatches from both lv_rimage_0 and
  lv_rimage_1.

* properties: fix data_usage typo
  Zdenek Kabelac, 2020-10-19 (1 file changed, -1/+1)

  Patch 4de6f58085c533c79ce2e0db6cdeb6ed06fe05f8 introduced a typo; we
  need to use data_usage.

  Note: this code was used by the lvmapp library and is currently
  unused.

* thin: use lv_status_thin and lv_status_thin_pool
  Zdenek Kabelac, 2020-09-29 (1 file changed, -22/+38)

  Introduce structures lv_status_thin_pool and lv_status_thin (pairing
  with lv_status_cache and lv_status_vdo).

  Convert lv_thin_percent() -> lv_thin_status() and
  lv_thin_pool_percent() + lv_thin_pool_transaction_id() ->
  lv_thin_pool_status().

  This way a function user can see not only percentages, but also
  other important status info about the thin-pool.

  TODO: This patch tries not to change too many other things, but
  pool_below_threshold() now uses new thin-pool info to return failure
  if the thin-pool cannot actually be modified. This should be handled
  separately in a better way.

* integrity: fix segfault reporting integrity for other lvs
  David Teigland, 2020-09-09 (1 file changed, -0/+3)

* integrity: report mismatches
  David Teigland, 2020-09-01 (3 files changed, -0/+27)

  Reported with 'lvs -o integritymismatches' for integrity images,
  which may report different values.

* integrity: report raidintegritymode and raidintegrityblocksize
  David Teigland, 2020-09-01 (3 files changed, -0/+88)

  Reported for the raid lv and the integrity images.

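  For example (hypothetical VG):

    $ lvs -a -o name,raidintegritymode,raidintegrityblocksize vg
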
* gcc: drop bogus ;
  Zdenek Kabelac, 2020-08-28 (1 file changed, -7/+7)

* cov: use 64bit arithmetic
  Zdenek Kabelac, 2020-06-24 (1 file changed, -1/+1)

  Although the values of VDO block_map_cache_size, index_memory_size
  and slab_size should not overflow here, use proper 64bit math.

* writecache: add settings cleaner and max_age
  David Teigland, 2020-06-10 (1 file changed, -1/+1)

  Available in dm-writecache 1.2.

* writecache: cachesettings in lvchange and lvs
  David Teigland, 2020-06-10 (1 file changed, -0/+10)

  lvchange --cachesettings
  lvs -o+cache_settings

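  For example, assuming a hypothetical writecache LV vg/lv and the
  'cleaner' setting from the commit above:

    $ lvchange --cachesettings 'cleaner=1' vg/lv
    $ lvs -o name,cache_settings vg/lv
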
* writecache: show error in lv_health_status and lv_attr
  David Teigland, 2020-06-10 (1 file changed, -0/+6)

  lv_attr is 'E' and lv_health_status is 'error' when the
  dm-writecache status reports an error.

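  These can be checked with (hypothetical names):

    $ lvs -o name,lv_attr,lv_health_status vg/lv
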
* Allow dm-integrity to be used for raid images
  David Teigland, 2020-04-15 (1 file changed, -1/+1)

  dm-integrity stores checksums of the data written to an LV, and
  returns an error if data read from the LV does not match the
  previously saved checksum. When used on raid images, dm-raid will
  correct the error by reading the block from another image, and the
  device user sees no error. The integrity metadata (checksums) are
  stored on an internal LV allocated by lvm for each linear image. The
  internal LV is allocated on the same PV as the image.

  Create a raid LV with an integrity layer over each raid image (for
  raid levels 1,4,5,6,10):

    lvcreate --type raidN --raidintegrity y [options]

  Add an integrity layer to images of an existing raid LV:

    lvconvert --raidintegrity y LV

  Remove the integrity layer from images of a raid LV:

    lvconvert --raidintegrity n LV

  Settings

  Use --raidintegritymode journal|bitmap (journal is default) to
  configure the method used by dm-integrity to ensure crash
  consistency.

  Initialization

  When integrity is added to an LV, the kernel needs to initialize the
  integrity metadata/checksums for all blocks in the LV. The data
  corruption checking performed by dm-integrity will only operate on
  areas of the LV that are already initialized. The progress of
  integrity initialization is reported by the "syncpercent" LV
  reporting field (and under the Cpy%Sync lvs column.)

  Example: create a raid1 LV with integrity:

  $ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
    Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
    Logical volume "rr_rimage_0_imeta" created.
    Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
    Logical volume "rr_rimage_1_imeta" created.
    Logical volume "rr" created.
  $ lvs -a foo
    LV                  VG  Attr       LSize   Origin              Cpy%Sync
    rr                  foo rwi-a-r---   1.00g                     4.93
    [rr_rimage_0]       foo gwi-aor---   1.00g [rr_rimage_0_iorig] 41.02
    [rr_rimage_0_imeta] foo ewi-ao----  12.00m
    [rr_rimage_0_iorig] foo -wi-ao----   1.00g
    [rr_rimage_1]       foo gwi-aor---   1.00g [rr_rimage_1_iorig] 39.45
    [rr_rimage_1_imeta] foo ewi-ao----  12.00m
    [rr_rimage_1_iorig] foo -wi-ao----   1.00g
    [rr_rmeta_0]        foo ewi-aor---   4.00m
    [rr_rmeta_1]        foo ewi-aor---   4.00m

* writecache: report status fields
  David Teigland, 2020-01-31 (3 files changed, -0/+34)

  Reporting fields (-o) directly from the kernel:

    writecache_total_blocks
    writecache_free_blocks
    writecache_writeback_blocks
    writecache_error

  The data_percent field shows used cache blocks / total cache blocks.

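  For example (hypothetical writecache LV):

    $ lvs -o name,writecache_total_blocks,writecache_free_blocks,writecache_error,data_percent vg/lv
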
* vdo: add lvs fields to query vdo volume properties
  Zdenek Kabelac, 2019-10-04 (4 files changed, -2/+324)

  Add lots of vdo fields:

    vdo_operating_mode       - For vdo pools, its current operating mode.
    vdo_compression_state    - For vdo pools, whether compression is running.
    vdo_index_state          - For vdo pools, state of index for deduplication.
    vdo_used_size            - For vdo pools, currently used space.
    vdo_saving_percent       - For vdo pools, percentage of saved space.
    vdo_compression          - Set for compressed LV (vdopool).
    vdo_deduplication        - Set for deduplicated LV (vdopool).
    vdo_use_metadata_hints   - Use REQ_SYNC for writes (vdopool).
    vdo_minimum_io_size      - Minimum acceptable IO size (vdopool).
    vdo_block_map_cache_size - Allocated caching size (vdopool).
    vdo_block_map_era_length - Speed of cache writes (vdopool).
    vdo_use_sparse_index     - Sparse indexing (vdopool).
    vdo_index_memory_size    - Allocated indexing memory (vdopool).
    vdo_slab_size            - Increment size for growing (vdopool).
    vdo_ack_threads          - Acknowledging threads (vdopool).
    vdo_bio_threads          - IO submitting threads (vdopool).
    vdo_bio_rotation         - IO enqueue (vdopool).
    vdo_cpu_threads          - CPU threads for compression and hashing (vdopool).
    vdo_hash_zone_threads    - Threads for subdivided parts (vdopool).
    vdo_logical_threads      - Logical threads for subdivided parts (vdopool).
    vdo_physical_threads     - Physical threads for subdivided parts (vdopool).
    vdo_max_discard          - Maximum discard size the volume can receive (vdopool).
    vdo_write_policy         - Specified write policy (vdopool).
    vdo_header_size          - Header size at front of vdopool.

  Previously only 'lvdisplay -m' was exposing them.

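  A sample query of a few of these fields (hypothetical vdo pool
  name):

    $ lvs -o name,vdo_operating_mode,vdo_compression_state,vdo_index_state,vdo_used_size,vdo_saving_percent vg/vdopool
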
* vdo: field update
  Zdenek Kabelac, 2019-10-04 (1 file changed, -6/+6)

* lvmcache: renaming functions and variables
  David Teigland, 2019-08-16 (1 file changed, -1/+1)

  Related to duplicates; no functional changes.

* Use "cachevol" to refer to cache on a single LVDavid Teigland2019-02-271-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | and "cachepool" to refer to a cache on a cache pool object. The problem was that the --cachepool option was being used to refer to both a cache pool object, and to a standard LV used for caching. This could be somewhat confusing, and it made it less clear when each kind would be used. By separating them, it's clear when a cachepool or a cachevol should be used. Previously: - lvm would use the cache pool approach when the user passed a cache-pool LV to the --cachepool option. - lvm would use the cache vol approach when the user passed a standard LV in the --cachepool option. Now: - lvm will always use the cache pool approach when the user uses the --cachepool option. - lvm will always use the cache vol approach when the user uses the --cachevol option.
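  For example (hypothetical LV names), each approach is now selected
  by its own option:

    $ lvconvert --type cache --cachepool fastpool vg/main
    $ lvconvert --type cache --cachevol fastlv vg/main
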
* Allow dm-cache cache device to be standard LV
  David Teigland, 2018-11-06 (1 file changed, -3/+17)

  If a single, standard LV is specified as the cache, use it directly
  instead of converting it into a cache-pool object with two separate
  LVs (for data and metadata).

  With a single LV as the cache, lvm will use blocks at the beginning
  for metadata, and the rest for data. Separate dm linear devices are
  set up to point at the metadata and data areas of the LV. These dm
  devs are given to the dm-cache target to use.

  The single LV cache cannot be resized without recreating it.

  If the --poolmetadata option is used to specify an LV for metadata,
  then a cache pool will be created (with separate LVs for data and
  metadata.)

  Usage:

  $ lvcreate -n main -L 128M vg /dev/loop0
  $ lvcreate -n fast -L 64M vg /dev/loop1
  $ lvs -a vg
    LV   VG Attr       LSize   Type   Devices
    main vg -wi-a----- 128.00m linear /dev/loop0(0)
    fast vg -wi-a-----  64.00m linear /dev/loop1(0)
  $ lvconvert --type cache --cachepool fast vg/main
  $ lvs -a vg
    LV           VG Attr       LSize   Origin       Pool   Type   Devices
    [fast]       vg Cwi---C---  64.00m                     linear /dev/loop1(0)
    main         vg Cwi---C--- 128.00m [main_corig] [fast] cache  main_corig(0)
    [main_corig] vg owi---C--- 128.00m                     linear /dev/loop0(0)
  $ lvchange -ay vg/main
  $ dmsetup ls
    vg-fast_cdata (253:4)
    vg-fast_cmeta (253:5)
    vg-main_corig (253:6)
    vg-main       (253:24)
    vg-fast       (253:3)
  $ dmsetup table
    vg-fast_cdata: 0 98304 linear 253:3 32768
    vg-fast_cmeta: 0 32768 linear 253:3 0
    vg-main_corig: 0 262144 linear 7:0 2048
    vg-main: 0 262144 cache 253:5 253:4 253:6 128 2 metadata2 writethrough mq 0
    vg-fast: 0 131072 linear 7:1 2048
  $ lvchange -an vg/main
  $ lvconvert --splitcache vg/main
  $ lvs -a vg
    LV   VG Attr       LSize   Type   Devices
    fast vg -wi-------  64.00m linear /dev/loop1(0)
    main vg -wi------- 128.00m linear /dev/loop0(0)

* cache: factor report functions
  David Teigland, 2018-11-06 (1 file changed, -31/+34)

  To prepare for future additions.

* report: show empty lock_type for none
  David Teigland, 2018-06-15 (1 file changed, -1/+7)

  Sometimes lock_type would be displayed as "none" (after changing it)
  and sometimes as empty. Make it consistently empty.

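  The field can be checked with (hypothetical VG):

    $ vgs -o name,locktype vg
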
* device_mapper: rename libdevmapper.h -> all.h
  Joe Thornber, 2018-06-08 (1 file changed, -1/+1)

  I'm paranoid a file will include the global one in /usr/include by
  accident.

* Remove unused clvm variations for active LVs
  David Teigland, 2018-06-07 (1 file changed, -37/+3)

  Different flavors of activate_lv() and lv_is_active() which are
  meaningful in a clustered VG can be eliminated and replaced with
  whatever that flavor already falls back to in a local VG.

  e.g. lv_is_active_exclusive_locally() is distinct from
  lv_is_active() in a clustered VG, but in a local VG they are
  equivalent. So, all instances of the variant are replaced with the
  basic local equivalent.

  For local VGs, the same behavior remains as before. For shared VGs,
  lvmlockd was written with the explicit requirement of local behavior
  from these functions (lvmlockd requires locking_type 1), so the
  behavior in shared VGs also remains the same.

* Merge branch 'master' into 2018-05-11-fork-libdm
  Joe Thornber, 2018-06-01 (4 files changed, -0/+12)

* vgs: add report field for shared
  David Teigland, 2018-05-31 (4 files changed, -0/+12)

  Equivalent to a non-empty -o locktype.

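  For example:

    $ vgs -o name,shared
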
* device-mapper: Fork libdm internally.
  Joe Thornber, 2018-05-16 (1 file changed, -1/+1)

  The device-mapper directory now holds a copy of the libdm source. At
  the moment this code is identical to libdm. Over time code will
  migrate out to appropriate places (see doc/refactoring.txt).

  The libdm directory still exists and contains the source for the
  libdevmapper shared library, which we will continue to ship (though
  not necessarily update).

  All code using libdm should now use the version in device-mapper.

* build: Don't generate symlinks in include/ dir
  Joe Thornber, 2018-05-14 (4 files changed, -22/+22)

  As we start refactoring the code to break dependencies (see
  doc/refactoring.txt), I want us to use full paths in the includes
  (e.g., #include "base/data-struct/list.h"). This makes it more
  obvious when we're breaking abstraction boundaries, e.g. including a
  file in metadata/ from base/.

* Remove lvm1 and pool disk formats
  David Teigland, 2018-04-30 (1 file changed, -3/+1)

  There are likely more bits of code that can be removed, e.g.
  lvm1/pool-specific bits of code that were identified using FMT
  flags. The vgconvert command can likely be reduced further. The
  lvm1-specific config settings should probably have some other fields
  set for proper deprecation.

* tidy: Add missing underscores to statics.
  Alasdair G Kergon, 2017-10-18 (2 files changed, -10/+9)

* reporting: validate time parsing with strtol
  Zdenek Kabelac, 2017-08-25 (1 file changed, -0/+5)

  Check for out-of-range numbers being the result of strtol parsing.

* tidy: prefer not using else after return
  Zdenek Kabelac, 2017-07-20 (1 file changed, -26/+30)

  clang-tidy: avoiding 'else' after return gives more readable code
  and also saves an indentation level.

* cleanup: add braces in macro
  Zdenek Kabelac, 2017-07-20 (1 file changed, -3/+3)

* report: fix data_offset/new_data_offset reporting
  Heinz Mauelshagen, 2017-07-14 (1 file changed, -2/+2)

* raid: report percent with segtype info
  Zdenek Kabelac, 2017-06-16 (1 file changed, -4/+6)

  Enhance the reporting code so it does not need to do an 'extra'
  ioctl to get the 'status' of normal raid, and provide the percentage
  directly.

  When we have a 'merging' snapshot into a raid origin, we still need
  to get this secondary number with an extra status call; however,
  since 'raid' is always a single-segment LV, we may skip the
  'copy_percent' call as we directly know the percent, and also with
  better precision.

  NOTE: for mirror we still base the reported number on the percentage
  of transferred extents, which might get quite imprecise if a big
  extent size is used while the volume itself is smaller, as the
  reporting jump steps are much bigger than the actual reported number
  provides.

  2nd NOTE: the raid lvs line report already requires quite a few
  extra status calls for the same device, but that fix will need
  slight code improvement.

* build: fix x32 arch
  Mikulas Patocka, 2017-03-27 (1 file changed, -5/+5)

  This patch fixes lvm2 compilation for the x32 arch (using 64bit x86
  cpu features but running in a 32bit address space, so consuming less
  memory in VMs). On the x32 arch, 'time_t' is 64bit while 'long' is
  32bit.

* raid: use 64bit arithmetic
  Zdenek Kabelac, 2017-03-16 (1 file changed, -1/+1)

  Coverity: keep multiplication for size calcs in 64bit (otherwise
  it's just 32b x 32b).