path: root/tools/lvchange.c
Commit message (Author, Age, Files, Lines)
* writecache: support settings metadata_only and pause_writeback  (David Teigland, 2022-12-08, 1 file, -0/+10)
    Two new settings for tuning dm-writecache.
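    A hypothetical invocation, assuming the new writecache settings are
    passed through --cachesettings like the existing ones (VG/LV names
    and values are invented for illustration):

      $ lvchange --cachesettings 'pause_writeback=3000' vg/fast
      $ lvchange --cachesettings 'metadata_only=1' vg/fast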
* lvchange: handle unrecognized writecache setting  (David Teigland, 2022-12-08, 1 file, -0/+5)
    It was being ignored.
* thin: rename internal function  (Zdenek Kabelac, 2022-08-30, 1 file, -1/+1)
    Names match the internal code layout. Functions in thin_manip.c use
    thin_pool in their names. Keep 'pool' only for functions working for
    both cache and thin pools. No change of functionality.
* vdo: support --vdosettings  (Zdenek Kabelac, 2022-05-03, 1 file, -1/+42)
    Allow using --vdosettings with lvcreate, lvconvert and lvchange.
    Supports settings currently only configurable via lvm.conf.
    With lvchange we require an inactive LV for changes to be applied.
    The setting block_map_era_length has the supported alias
    block_map_period.
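    A sketch of usage (names invented; per the message above, lvchange
    applies the setting only to an inactive LV):

      $ lvchange -an vg/vdopool0
      $ lvchange --vdosettings 'block_map_era_length=4096' vg/vdopool0

    The alias block_map_period should be accepted for the same setting.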
* activation: use lv_is_active  (Zdenek Kabelac, 2022-01-31, 1 file, -3/+1)
    Use the existing lv_is_active.
* tools: missing sync after deactivation  (Zdenek Kabelac, 2022-01-31, 1 file, -0/+2)
    Caching of DM states optimisation revealed some missing
    synchronisation points.
* lvchange: fix lvchange refresh failed for dm suspend or resume failed  (Yi Wang, 2021-08-16, 1 file, -1/+1)
    When multiple lvchange refresh processes run at the same time, issuing
    suspend/resume ioctls on the same dm device, some of the commands fail
    because the dm device has already changed status, and the ioctl
    returns EINVAL in the _do_dm_ioctl function. To avoid this problem,
    add the READ_FOR_ACTIVATE flag to the lvchange refresh process; it
    holds the LCK_WRITE lock and prevents concurrent suspend/resume of the
    same dm device.

    Signed-off-by: Long YunJian <long.yunjian@zte.com.cn>
    Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
* devices file: avoid updating entry dev names in some cases  (David Teigland, 2021-08-05, 1 file, -0/+3)
    Avoid thrashing changes to devices file device names by some commands
    that are run during startup when devnames are still being set up.
* cov: add internal error for missing arg  (Zdenek Kabelac, 2021-07-28, 1 file, -0/+5)
    Analyzer is happier.
* Add metadata-based autoactivation property for VG and LV  (David Teigland, 2021-04-07, 1 file, -0/+32)
    The autoactivation property can be specified in lvcreate or vgcreate
    for new LVs/VGs, and the property can be changed by lvchange or
    vgchange for existing LVs/VGs.

    --setautoactivation y|n enables|disables autoactivation of a VG or LV.

    Autoactivation is enabled by default, which is consistent with past
    behavior. The disabled state is stored as a new flag in the VG
    metadata, and the absence of the flag allows autoactivation.

    If autoactivation is disabled for the VG, then no LVs in the VG will
    be autoactivated (the LV autoactivation property will have no effect.)
    When autoactivation is enabled for the VG, then autoactivation can be
    controlled on individual LVs.

    The state of this property can be reported for LVs/VGs using the
    "-o autoactivation" option in lvs/vgs commands, which will report
    "enabled", or "" for the disabled state.

    Previous versions of lvm do not recognize this property. Since
    autoactivation is enabled by default, the disabled setting will have
    no effect in older lvm versions. If the VG is modified by older lvm
    versions, the disabled state will also be dropped from the metadata.

    The autoactivation property is an alternative to using the lvm.conf
    auto_activation_volume_list, which is still applied to VGs/LVs in
    addition to the new property. If VG or LV autoactivation is disabled
    either in metadata or in auto_activation_volume_list, it will not be
    autoactivated.

    An autoactivation command will silently skip activating an LV when the
    autoactivation property is disabled.

    To determine the effective autoactivation behavior for a specific LV,
    multiple settings would need to be checked: the VG autoactivation
    property, the LV autoactivation property, and the
    auto_activation_volume_list. The "activation skip" property would also
    be relevant, since it applies to both normal and auto activation.
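    The new property and report field can be exercised as follows
    (VG and LV names are invented for illustration):

      $ vgchange --setautoactivation n vg
      $ lvchange --setautoactivation y vg/lv
      $ vgs -o name,autoactivation vg
      $ lvs -o name,autoactivation vg/lv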
* cleanup: no backtraces needed after log_error  (Zdenek Kabelac, 2021-03-10, 1 file, -1/+1)
    Reduce double backtracing.
* lvchange: remove unneeded call  (Zdenek Kabelac, 2021-02-17, 1 file, -7/+0)
    Sync is already happening in activate_and_wipe_lvlist().
* lvchange: fix error for foreign vg activation  (David Teigland, 2020-11-17, 1 file, -1/+1)
    It was using ECMD_FAILED instead of 0.
* lvchange: allow syncaction check with integrity  (David Teigland, 2020-10-26, 1 file, -3/+5)
    syncaction check will detect and correct integrity checksum
    mismatches.
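    For example (vg/rr is a hypothetical raid LV created with
    --raidintegrity y):

      $ lvchange --syncaction check vg/rr
      $ lvs -o name,raid_sync_action,raid_mismatch_count vg/rr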
* writecache: add settings cleaner and max_age  (David Teigland, 2020-06-10, 1 file, -0/+10)
    Available in dm-writecache 1.2.
* writecache: cachesettings in lvchange and lvs  (David Teigland, 2020-06-10, 1 file, -0/+75)
    lvchange --cachesettings
    lvs -o+cache_settings
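    Example usage (names invented; the cleaner setting comes from the
    commit above this one):

      $ lvchange --cachesettings 'cleaner=1' vg/fast
      $ lvs -o+cache_settings vg/fast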
* Allow dm-integrity to be used for raid images  (David Teigland, 2020-04-15, 1 file, -0/+5)
    dm-integrity stores checksums of the data written to an LV, and
    returns an error if data read from the LV does not match the
    previously saved checksum. When used on raid images, dm-raid will
    correct the error by reading the block from another image, and the
    device user sees no error.

    The integrity metadata (checksums) are stored on an internal LV
    allocated by lvm for each linear image. The internal LV is allocated
    on the same PV as the image.

    Create a raid LV with an integrity layer over each raid image
    (for raid levels 1,4,5,6,10):
      lvcreate --type raidN --raidintegrity y [options]

    Add an integrity layer to images of an existing raid LV:
      lvconvert --raidintegrity y LV

    Remove the integrity layer from images of a raid LV:
      lvconvert --raidintegrity n LV

    Settings

    Use --raidintegritymode journal|bitmap (journal is default) to
    configure the method used by dm-integrity to ensure crash consistency.

    Initialization

    When integrity is added to an LV, the kernel needs to initialize the
    integrity metadata/checksums for all blocks in the LV. The data
    corruption checking performed by dm-integrity will only operate on
    areas of the LV that are already initialized. The progress of
    integrity initialization is reported by the "syncpercent" LV reporting
    field (and under the Cpy%Sync lvs column.)

    Example: create a raid1 LV with integrity:

    $ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
      Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
      Logical volume "rr_rimage_0_imeta" created.
      Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
      Logical volume "rr_rimage_1_imeta" created.
      Logical volume "rr" created.
    $ lvs -a foo
      LV                  VG  Attr       LSize  Origin              Cpy%Sync
      rr                  foo rwi-a-r---  1.00g                     4.93
      [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
      [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
      [rr_rimage_0_iorig] foo -wi-ao----  1.00g
      [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
      [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
      [rr_rimage_1_iorig] foo -wi-ao----  1.00g
      [rr_rmeta_0]        foo ewi-aor---  4.00m
      [rr_rmeta_1]        foo ewi-aor---  4.00m
* vdo: restore monitoring of vdo pool  (Zdenek Kabelac, 2019-09-30, 1 file, -1/+1)
    After the switch to the layered -vpool name, monitoring needs to track
    the proper device.
* lvchange: allow activating cachevol  (David Teigland, 2019-09-20, 1 file, -0/+14)
* vdo: enhance activation with layer -vpool  (Zdenek Kabelac, 2019-09-17, 1 file, -3/+0)
    Enhance the 'activation' experience for a VDO pool to more closely
    match what happens for thin-pools, where we use a 'fake' LV to keep
    the pool running even when no thin LVs are active. This gives the user
    a choice whether to keep the thin-pool running (without a possibly
    lengthy activation/deactivation process).

    As we do plan to support multiple VDO LVs mapped into a single VDO, we
    want to give the user the same experience and 'use-pattern' as with
    thin-pools. This patch adds the option to activate the VDO pool only,
    without activating the VDO LV.

    Also, due to the 'fake' layering LV, we can protect usage of the VDO
    pool from commands like 'mkfs' which require exclusive access to the
    volume, which is no longer possible.

    Note: the VDO pool contains 1024 initial sectors as an 'empty' header;
    this header is also exposed in the layered LV (as a read-only LV). For
    blkid we are identified as an LV with a UUID suffix, thus a private DM
    device of lvm2, so we do not need to store any extra info in this
    header space (aka zero is good enough).
* cache: warn and prompt for writeback with cachevol  (David Teigland, 2019-07-02, 1 file, -0/+8)
    The cache repair utility does not yet work with a cachevol (where
    metadata and data exist on the same LV). So, warn and prompt if
    writeback is specified with a cachevol.
* scanning: open devs rw when rescanning for write  (David Teigland, 2019-06-21, 1 file, -1/+1)
    When vg_read rescans devices with the intention of writing the VG, the
    label rescan can open the devs RW so they do not need to be closed and
    reopened RW in dev_write_bytes.
* fix command definition for pvchange -a  (David Teigland, 2019-06-10, 1 file, -2/+2)
    The -a was being included in the set of "one or more" options instead
    of an actual required option. Even though the cmd def was not
    implementing the restrictions correctly, the command internally was.
    Adjust the cmd def code, which did not support a command with some
    real required options and a set of "one or more" options.
* vdo: enable caching for vdopool LV and vdo LV  (Zdenek Kabelac, 2019-03-20, 1 file, -0/+3)
    Allow using caching with VDO. The user can cache either a single
    vdopool or a vdo LV; where the caching layer is inserted depends on
    the use case, and it is up to the user to decide which kind of
    speed-up is expected.
* Use "cachevol" to refer to cache on a single LV  (David Teigland, 2019-02-27, 1 file, -1/+1)
    and "cachepool" to refer to a cache on a cache pool object.

    The problem was that the --cachepool option was being used to refer to
    both a cache pool object, and to a standard LV used for caching. This
    could be somewhat confusing, and it made it less clear when each kind
    would be used. By separating them, it's clear when a cachepool or a
    cachevol should be used.

    Previously:
    - lvm would use the cache pool approach when the user passed a
      cache-pool LV to the --cachepool option.
    - lvm would use the cache vol approach when the user passed a standard
      LV in the --cachepool option.

    Now:
    - lvm will always use the cache pool approach when the user uses the
      --cachepool option.
    - lvm will always use the cache vol approach when the user uses the
      --cachevol option.
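    The distinction in practice (hypothetical VG vg with an origin LV
    main, a standard LV fast, and a cache-pool LV fastpool):

      $ lvconvert --type cache --cachevol fast vg/main
      $ lvconvert --type cache --cachepool fastpool vg/main

    The first uses a single LV for both cache data and metadata; the
    second uses a cache pool object with separate data and metadata LVs.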
* raid: fix (de)activation of RaidLVs with visible SubLVs  (Heinz Mauelshagen, 2018-12-11, 1 file, -27/+5)
    There's a small window during creation of a new RaidLV when rmeta
    SubLVs are made visible to wipe them in order to prevent erroneous
    discovery of stale RAID metadata. In case a crash prevents the SubLVs
    from being committed hidden after such wiping, the RaidLV can still be
    activated with the SubLVs visible. During deactivation though, a
    deadlock occurs because the visible SubLVs are deactivated before the
    RaidLV.

    The patch adds _check_raid_sublvs to the raid validation in merge.c,
    an activation check to activate.c (paranoid, because the merge.c check
    will prevent activation in case of visible SubLVs) and shares the
    existing wiping function _clear_lvs in raid_manip.c, moved to
    lv_manip.c and renamed to activate_and_wipe_lvlist, to remove code
    duplication. Whilst on it, introduce activate_and_wipe_lv to share
    with (lvconvert|lvchange).c.

    Resolves: rhbz1633167
* Allow dm-cache cache device to be standard LV  (David Teigland, 2018-11-06, 1 file, -1/+4)
    If a single, standard LV is specified as the cache, use it directly
    instead of converting it into a cache-pool object with two separate
    LVs (for data and metadata).

    With a single LV as the cache, lvm will use blocks at the beginning
    for metadata, and the rest for data. Separate dm linear devices are
    set up to point at the metadata and data areas of the LV. These dm
    devs are given to the dm-cache target to use.

    The single LV cache cannot be resized without recreating it.

    If the --poolmetadata option is used to specify an LV for metadata,
    then a cache pool will be created (with separate LVs for data and
    metadata.)

    Usage:

    $ lvcreate -n main -L 128M vg /dev/loop0
    $ lvcreate -n fast -L 64M vg /dev/loop1
    $ lvs -a vg
      LV   VG Attr       LSize   Type   Devices
      main vg -wi-a----- 128.00m linear /dev/loop0(0)
      fast vg -wi-a-----  64.00m linear /dev/loop1(0)
    $ lvconvert --type cache --cachepool fast vg/main
    $ lvs -a vg
      LV           VG Attr       LSize   Origin       Pool   Type   Devices
      [fast]       vg Cwi---C---  64.00m                     linear /dev/loop1(0)
      main         vg Cwi---C--- 128.00m [main_corig] [fast] cache  main_corig(0)
      [main_corig] vg owi---C--- 128.00m                     linear /dev/loop0(0)
    $ lvchange -ay vg/main
    $ dmsetup ls
      vg-fast_cdata (253:4)
      vg-fast_cmeta (253:5)
      vg-main_corig (253:6)
      vg-main       (253:24)
      vg-fast       (253:3)
    $ dmsetup table
      vg-fast_cdata: 0 98304 linear 253:3 32768
      vg-fast_cmeta: 0 32768 linear 253:3 0
      vg-main_corig: 0 262144 linear 7:0 2048
      vg-main: 0 262144 cache 253:5 253:4 253:6 128 2 metadata2 writethrough mq 0
      vg-fast: 0 131072 linear 7:1 2048
    $ lvchange -an vg/main
    $ lvconvert --splitcache vg/main
    $ lvs -a vg
      LV   VG Attr       LSize   Type   Devices
      fast vg -wi-------  64.00m linear /dev/loop1(0)
      main vg -wi------- 128.00m linear /dev/loop0(0)
* cache: factor lvchange_cache  (David Teigland, 2018-11-06, 1 file, -6/+14)
    To prepare for future additions.
* Remove lvmetad  (David Teigland, 2018-07-11, 1 file, -14/+0)
    Native disk scanning is now both reduced and async/parallel, which
    makes it comparable in performance (and often faster) when compared to
    lvm using lvmetad. Autoactivation now uses local temp files to record
    online PVs, and no longer requires lvmetad.

    There should be no apparent command-level change in behavior.
* lvchange: vdo support compression deduplication change  (Zdenek Kabelac, 2018-07-09, 1 file, -0/+71)
    Add basic support for changing the compression and deduplication state
    of a VDO pool volume, also allowing access via the top-level VDO
    volume.
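    A sketch of usage (vg/vdopool0 is a hypothetical VDO pool):

      $ lvchange --compression n vg/vdopool0
      $ lvchange --deduplication y vg/vdopool0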
* use exclusive file lock on VG for activation  (David Teigland, 2018-06-07, 1 file, -1/+1)
    Make the activation commands vgchange -ay, lvchange -ay and
    pvscan -aay take an exclusive file lock on the VG to serialize
    multiple concurrent activation commands which could otherwise
    interfere with each other.
* Remove unused clvm variations for active LVs  (David Teigland, 2018-06-07, 1 file, -7/+4)
    Different flavors of activate_lv() and lv_is_active() which are
    meaningful in a clustered VG can be eliminated and replaced with
    whatever that flavor already falls back to in a local VG.

    e.g. lv_is_active_exclusive_locally() is distinct from lv_is_active()
    in a clustered VG, but in a local VG they are equivalent. So, all
    instances of the variant are replaced with the basic local equivalent.

    For local VGs, the same behavior remains as before. For shared VGs,
    lvmlockd was written with the explicit requirement of local behavior
    from these functions (lvmlockd requires locking_type 1), so the
    behavior in shared VGs also remains the same.
* Remove clvmd and associated code  (David Teigland, 2018-06-05, 1 file, -29/+0)
    More code reduction and simplification can follow.
* Merge branch 'master' into 2018-05-11-fork-libdm  (Joe Thornber, 2018-06-01, 1 file, -3/+5)
* lvmlockd: do not use an LV lock for some lvchange options  (David Teigland, 2018-05-30, 1 file, -3/+5)
    Some lvchange options can be used even if the LV is active.
* build: Don't generate symlinks in include/ dir  (Joe Thornber, 2018-05-14, 1 file, -1/+1)
    As we start refactoring the code to break dependencies (see
    doc/refactoring.txt), I want us to use full paths in the includes
    (eg, #include "base/data-struct/list.h"). This makes it more obvious
    when we're breaking abstraction boundaries, eg, including a file in
    metadata/ from base/
* mirror: improve table update  (Zdenek Kabelac, 2018-04-30, 1 file, -4/+0)
    Shift refresh of the mirror table right into monitor_dev_for_events().
    Use !vg_write_lock_held() to recognize use of lvchange/vgchange.
    (This shall change if it no longer works, but that requires some
    further API changes.)

    With this patch the dm mirror table is only refreshed when necessary.

    Also update the WARNING message about mirror usage without monitoring,
    and display the LV name.
* lvchange: update mirror table when changing monitoring  (Zdenek Kabelac, 2018-04-23, 1 file, -0/+4)
    Since we let non-monitored mirrors run without error handling, an
    updated table (refresh) is needed when monitoring changes for a
    mirror.
* activation: support activation of component LVs  (Zdenek Kabelac, 2018-03-06, 1 file, -1/+26)
    Occasionally users may need to peek into component devices. Normally
    lvm2 does not let users activate components. This patch adds a special
    mode where the user can activate a component LV in 'read-only' mode,
    i.e.:

      lvchange -ay vg/pool_tdata

    All devices can be deactivated with:

      lvchange -an vg  |  vgchange -an ...
* cleanup: indent  (Zdenek Kabelac, 2018-02-28, 1 file, -1/+2)
* tidy: Add missing underscores to statics.  (Alasdair G Kergon, 2017-10-18, 1 file, -4/+4)
* lvchange: allow changing properties on thin pool data lv  (David Teigland, 2017-05-15, 1 file, -0/+10)
    Add an exception to not allowing lvchange to change properties on
    hidden LVs. When a thin pool data LV is a cache LV, we need to allow
    changing cache properties on the tdata sublv of the thin pool.
* lvchange/lvconvert: fix missing lvmlockd LV locks  (David Teigland, 2017-04-05, 1 file, -10/+19)
    lvchange/lvconvert operations that do not require the LV to already be
    active need to acquire a transient LV lock before changing/converting
    the LV, to ensure the LV is not active on another host. If the LV is
    already active, a persistent LV lock already exists on the host and
    the transient LV lock request is a no-op.

    Some lvmlockd locks in lvchange were lost in the cmd def changes. The
    lvmlockd locks in lvconvert seem to have been missed from the start.
* lvchange: tidy switch code in _lvchange_properties_single()  (Heinz Mauelshagen, 2017-04-05, 1 file, -32/+59)
* lvchange: fix missing return value  (David Teigland, 2017-04-05, 1 file, -0/+2)
    A return value from lvchange_persistent_cmd() was missed in commit
    1c41898c07ad750820fb39770355fded8e9b030a.
* lvchange: fix --poll value when set from option  (David Teigland, 2017-04-04, 1 file, -5/+14)
    The actual value specified by the --poll y|n option was not being
    used. The way the --poll value is used is hidden through an
    indirection where the value is stored in a global variable at the
    start of the command, and then the value is read from there later.
    Setting the global variable early in the command had been lost with
    the cmd def changes.
* vgchange/lvchange: fix poll and monitor use  (David Teigland, 2017-04-04, 1 file, -8/+54)
    Fill in some gaps where old versions of lvm allowed --poll and
    --monitor in combination with other operations, but those combinations
    had been lost since the cmd def work. (The new cmd def code also added
    some combinations that had been missed by the old code.)

    Changes:
    lvchange --activate: add poll and monitor options, and add calls to
      them in implementation.
    lvchange --refresh: add monitor option (poll already there), and call
      to monitor in implementation.
    lvchange <metadata ops>: add poll and monitor options, and add calls
      to them in implementation.
    vgchange <metadata ops>: add poll option (call to poll already in
      implementation).
    vgchange --refresh: remove monitor option (not used by code)
    lvchange --persistent y: add poll and monitor options, and add calls
      to them, and to activate in the implementation. (Making it match the
      main lvchange metadata command.)

    Summary of current usage:
    lvchange --activate: monitor, poll
    vgchange --activate: monitor, poll
    lvchange --refresh: monitor, poll
    vgchange --refresh: poll
    lvchange --monitor: ok
    lvchange --poll: ok
    lvchange --monitor --poll: ok
    vgchange --monitor: ok
    vgchange --poll: ok
    vgchange --monitor --poll: ok
    lvchange <metadata ops>: monitor, poll
    vgchange <metadata ops>: poll
* lvchange: enhance avoiding multiple metadata updates/reloads/backups  (Heinz Mauelshagen, 2017-04-04, 1 file, -68/+179)
    Enhance commit 25b5915c9b5260c59d627bd1f6db8220bd4ad61e to process
    options requiring immediate metadata commits and reloads; after those,
    the remaining options can be grouped together, doing just one commit
    and an optional reload for the whole group.

    Backup metadata after processing options successfully.

    Related: rhbz1437611
* lvchange: avoid multiple metadata updates/reloads/backups  (Heinz Mauelshagen, 2017-04-01, 1 file, -73/+120)
    _lvchange_properties_single() processes multiple command line
    arguments in a loop, causing metadata updates and/or backups per
    argument. Optimize to only perform one update and/or backup per
    command run (plus any necessary interim ones, e.g. for --resync).

    Related: rhbz1437611
* lvchange: reject setting all raid1 images to writemostly  (Heinz Mauelshagen, 2017-03-26, 1 file, -2/+16)
    raid1 doesn't allow setting all images to writemostly, because at
    least one image is required to receive any written data immediately.
    The dm-raid target will detect such an invalid request and fail it
    with a kernel error message. Reject such a request in userspace,
    displaying a respective error message.
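    For illustration (a hypothetical two-leg raid1 LV vg/rr on /dev/sda
    and /dev/sdb): marking one leg writemostly is accepted, while a
    request that would leave no writable image is now rejected in
    userspace:

      $ lvchange --writemostly /dev/sdb vg/rr
      $ lvchange --writemostly /dev/sda vg/rr   (rejected: no image left)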