path: root/lib/config/defaults.h
Commit message (Author, Age, Files, Lines)
* device id: add new types using values from vpd_pg83 (David Teigland, 2022-10-10, 1 file, -0/+2)
The new device_id types are: wwid_naa, wwid_eui, wwid_t10. The new types use the specific wwid type in their name. lvm currently gets the values for these types by reading the device's vpd_pg83 sysfs file (this could change in the future if better methods become available for reading the values.)

If a device is added to the devices file using one of these types, prior versions of lvm will not recognize the types and will be unable to use the devices.

When adding a new device, lvm continues to first use sys_wwid from the sysfs wwid file. If the device has no sysfs wwid file, lvm now attempts to use one of the new types from vpd_pg83. If a devices file entry with type sys_wwid does not match a given device's sysfs wwid file, the sys_wwid value will also be compared to that device's other wwids from its vpd_pg83 file. If the kernel changes the wwid type reported from the sysfs wwid file, e.g. from a device's t10 id to its naa id, then lvm should still be able to match it correctly using the vpd_pg83 data, which will include both ids.
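The sysfs sources involved can be inspected directly; a rough sketch (device name and output are hypothetical, and sg_vpd from sg3_utils is assumed to be available to decode the binary vpd_pg83 page):

    $ cat /sys/block/sdb/device/wwid
    t10.ATA     QEMU HARDDISK    QM00001
    $ sg_vpd --page=di /dev/sdb    # decodes the naa/eui/t10 designators in vpd_pg83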
* add hints interface to the pvs_online file information (David Teigland, 2021-11-04, 1 file, -0/+4)
The information in /run/lvm/pvs_online/<pvid> files can be used to build a list of devices for a given VG. The pvscan -aay command has long used this information to activate a VG while scanning only devices in that VG, which is an important optimization for autoactivation. This patch implements the same thing through the existing device hints interface, so that the optimization can be applied elsewhere. A future patch will take advantage of this optimization in vgchange -aay, which is now used in place of pvscan -aay for event activation.
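For context, the online files are simply named by PVID (the sample ID below is hypothetical):

    $ ls /run/lvm/pvs_online/
    2O5nqyWwEcJ6hGUWJwbaRvpOyLI1dwGU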
* configure: updates (Zdenek Kabelac, 2021-10-14, 1 file, -1/+0)
* fix syslog setting (David Teigland, 2021-10-11, 1 file, -1/+1)
Just setting lvm.conf level=N should not send messages to syslog (now the journal by default). Sending messages to syslog should require setting lvm.conf log { syslog=1 level=N }.
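In lvm.conf terms, the required combination looks like this (level value illustrative):

    log {
        syslog = 1   # required for messages to actually reach syslog/journal
        level = 7    # level alone no longer implies syslog output
    }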
* config: change default use_devicesfile to 1 (David Teigland, 2021-10-07, 1 file, -1/+1)
* devices: rework libudev usage (David Teigland, 2021-07-13, 1 file, -1/+3)
Related to config settings:

obtain_device_list_from_udev (controls if lvm gets a list of devices from readdir /dev or from libudev)
external_device_info_source (controls if lvm asks libudev for device information)

. Make the obtain_device_list_from_udev setting affect only the choice of readdir /dev vs libudev. The setting no longer controls if udev is used for device type checks.

. Change the obtain_device_list_from_udev default to 0. This helps avoid boot timeouts due to slow libudev queries, avoids reported failures from udev_enumerate_scan_devices, and avoids delays from "device not initialized in udev database" errors. Even without errors, for a system booting with 1024 PVs, lvm2-pvscan times improve from about 100 sec to 15 sec, and the pvscan command from about 64 sec to about 4 sec.

. For external_device_info_source="none", remove all libudev device info queries, and use only lvm native device info.

. For external_device_info_source="udev", first check lvm native device info, then check libudev info.

. Remove the sleep/retry loop when attempting libudev queries for device info. udev info will simply be skipped if it's not immediately available.

. Only set up a libudev connection if it will be used by obtain_device_list_from_udev/external_device_info_source.

. For native multipath component detection, use /etc/multipath/wwids. If a device has a wwid matching an entry in the wwids file, then it's considered a multipath component. This is necessary to natively detect multipath components when the mpath device is not set up.
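A minimal lvm.conf sketch of the two settings as described above:

    devices {
        obtain_device_list_from_udev = 0       # readdir /dev instead of libudev enumeration
        external_device_info_source = "none"   # or "udev" to also consult libudev info
    }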
* vdo: support vdo_pool_header_size (Zdenek Kabelac, 2021-06-28, 1 file, -2/+1)
Add a profilable configurable setting for the vdo pool header size, which is used as 'extra' empty space at the front and end of the vdo-pool device, to avoid having a disk in the system that may carry the same data as the real vdo LV. For some conversion cases, however, we may need to allow using a '0' header size.

TODO: in this case we may eventually avoid adding the 'linear' mapping layer in the future - but this requires further modification over the lvm code base.
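A hedged sketch of the setting (the units are assumed here to be KiB, matching other allocation sizes; the message does not state them):

    allocation {
        vdo_pool_header_size = 0   # allow a zero header for conversion cases
    }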
* device usage based on devices file (David Teigland, 2021-02-23, 1 file, -0/+5)
The LVM devices file lists devices that lvm can use. The default file is /etc/lvm/devices/system.devices, and the lvmdevices(8) command is used to add or remove device entries. If the file does not exist, or if lvm.conf includes use_devicesfile=0, then lvm will not use a devices file. When the devices file is in use, the regex filter is not used, and the filter settings in lvm.conf or on the command line are ignored.

LVM records devices in the devices file using hardware-specific IDs, such as the WWID, and attempts to use subsystem-specific IDs for virtual device types. These device IDs are also written in the VG metadata. When no hardware or virtual ID is available, lvm falls back to using the unstable device name as the device ID. When devnames are used, lvm performs extra scanning to find devices if their devname changes, e.g. after reboot. When proper device IDs are used, an lvm command will not look at devices outside the devices file, but when devnames are used as a fallback, lvm will scan devices outside the devices file to locate PVs on renamed devices. A config setting search_for_devnames can be used to control the scanning for renamed devname entries.

Related to the devices file, the new command option --devices <devnames> allows a list of devices to be specified for the command to use, overriding the devices file. The listed devices act as a sort of devices file in terms of limiting which devices lvm will see and use. Devices that are not listed will appear to be missing to the lvm command.

Multiple devices files can be kept in /etc/lvm/devices, which allows lvm to be used with different sets of devices, e.g. system devices do not need to be exposed to a specific application, and the application can use lvm on its own set of devices that are not exposed to the system. The option --devicesfile <filename> is used to select the devices file to use with the command. Without the option set, the default system devices file is used. Setting --devicesfile "" causes lvm to not use a devices file. An existing, empty devices file means lvm will see no devices.

The new command vgimportdevices adds PVs from a VG to the devices file and updates the VG metadata to include the device IDs. vgimportdevices -a will import all VGs into the system devices file.

LVM commands run by dmeventd do not use a devices file by default, and will look at all devices on the system. A devices file can be created for dmeventd (/etc/lvm/devices/dmeventd.devices); if this file exists, lvm commands run by dmeventd will use it.

Internal implementation:

- device_ids_read - read the devices file
  . add struct dev_use (du) to cmd->use_devices for each devices file entry
- dev_cache_scan - get /dev entries
  . add struct device (dev) to dev_cache for each device on the system
- device_ids_match - match devices file entries to /dev entries
  . match each du on cmd->use_devices to a dev in dev_cache, using device ID
  . on match, set du->dev, dev->id, dev->flags MATCHED_USE_ID
- label_scan - read lvm headers and metadata from devices
  . filters are applied, those that do not need data from the device
  . filter-deviceid skips devs without MATCHED_USE_ID, i.e. skips /dev entries that are not listed in the devices file
  . read lvm label from dev
  . filters are applied, those that use data from the device
  . read lvm metadata from dev
  . add info/vginfo structs for PVs/VGs (info is "lvmcache")
- device_ids_find_renamed_devs - handle devices with unstable devname ID where the devname changed
  . this step is only needed when devs do not have proper device IDs and their dev names change, e.g. after reboot sdb becomes sdc
  . detect an incorrect match because the PVID in the devices file entry does not match the PVID found when the device was read above
  . undo the incorrect match between du and dev above
  . search system devices for the new location of the PVID
  . update the devices file with new devnames for PVIDs on renamed devices
  . label_scan the renamed devs
- continue with command processing
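A short usage sketch with the commands and options named above (device names hypothetical):

    $ lvmdevices --adddev /dev/sdb       # add an entry to /etc/lvm/devices/system.devices
    $ vgimportdevices -a                 # import all VGs into the system devices file
    $ pvs --devices /dev/sdb,/dev/sdc    # limit this command to the listed devices
    $ pvs --devicesfile app.devices      # use an alternative file from /etc/lvm/devices/
    $ pvs --devicesfile ""               # run without any devices file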
* thin: improve 16g support for thin pool metadata (Zdenek Kabelac, 2021-02-01, 1 file, -0/+2)
Initial support for thin-pool used a slightly smaller max size of 15.81GiB for thin-pool metadata. However, the real limit later settled at 15.88GiB (the difference is ~64MiB - 16448 4K blocks). lvm2 could not simply increase the size, as it has been using hard cropping of the loaded metadata device to avoid the kernel printing warnings when the size was bigger (i.e. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting: allocation/thin_pool_crop_metadata, which defaults to 0 -> no crop of metadata beyond 15.81GiB. Only users with these sizes of metadata will be affected.

Without cropping, lvm2 now limits the metadata allocation size to 15.88GiB. Any space beyond is currently not used by the thin-pool target, even if e.g. a bigger LV is used for metadata via lvconvert, or a bigger one is allocated because of too large an extent size.

With cropping enabled (=1), lvm2 preserves the old limitation of 15.81GiB and should allow working in an environment with older lvm2 tools (i.e. an older distribution). Thin-pool metadata with a size bigger than 15.81G now uses the CROP_METADATA flag within lvm2 metadata, so older lvm2 recognizes an incompatible thin-pool and cannot activate such a pool! Users should use the uncropped version, as it does not suffer from various issues between thin_repair results and the allocated metadata LV, since the thin_repair limit is 15.88GiB. Users should use cropping only when really needed!

The patch also better handles resize of thin-pool metadata and prevents resize beyond the usable size of 15.88GiB. Resize beyond 15.81GiB automatically switches the pool to the no-crop version; even with existing bigger thin-pool metadata, the command 'lvextend -l+1 vg/pool_tmeta' does the change. The patch gives better control over the 'converted' metadata LV and reports a less confusing message during conversion.

The patch set also moves the code for updating min/max into pool_manip.c for better sharing with the cache_pool code.
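The setting and the resize behaviour described above, sketched (VG/LV names hypothetical):

    allocation {
        thin_pool_crop_metadata = 0   # default: no cropping at 15.81GiB
    }

    $ lvextend -l+1 vg/pool_tmeta     # growing past 15.81GiB switches the pool to no-crop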
* pool: zero metadata (Zdenek Kabelac, 2020-06-24, 1 file, -0/+1)
To avoid pollution of metadata with some 'garbage' content, or eventually some leak of stale data in case a user wants to upload metadata somewhere, ensure upon allocation that the metadata device is fully zeroed. This behaviour may slow down allocation of a thin-pool or cache-pool a bit, so the old behaviour can be restored with the lvm.conf setting: allocation/zero_metadata=0

TODO: add zeroing for extension of the metadata volume.
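Restoring the old behaviour, per the setting named above:

    allocation {
        zero_metadata = 0   # skip full zeroing of newly allocated pool metadata
    }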
* vdo: raise VDO default bio threads to 4 (Zdenek Kabelac, 2019-10-04, 1 file, -1/+1)
Since 'vdo create' tends to use this setting, update lvm2 to provide the same default.
* Additional MD component checking (David Teigland, 2019-06-07, 1 file, -0/+2)
If udev info (which would indicate if a device is an MD component) is missing for a device, then do an end-of-device read to check if a PV is an MD component. (This is skipped when using hints, since we already know devs in hints are good.)

A new config setting md_component_checks can be used to disable the additional end-of-device MD checks, or to always enable end-of-device MD checks.

When both hints and udev info are disabled/unavailable, the end of PVs will now be scanned by default. If md devices with end-of-device superblocks are not being used, the extra I/O overhead can be avoided by setting md_component_checks="start".
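The setting in lvm.conf form, using the value named above:

    devices {
        md_component_checks = "start"   # skip the extra end-of-device MD reads
    }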
* thin: max thin (Zdenek Kabelac, 2019-03-20, 1 file, -0/+2)
* io: increase the default io memory from 4 to 8 MiB (David Teigland, 2019-03-04, 1 file, -1/+1)
This is the default bcache size that is created at the start of the command. It needs to be large enough to hold a single copy of metadata for a given VG, or the VG cannot be read or written (since the entire VG would not fit into available memory). Increasing the default reduces the chances of anyone needing to increase the default to use their VG.

The size can be set in lvm.conf global/io_memory_size; the lower limit is 4 MiB and the upper limit is 128 MiB.
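A sketch of raising the value within the limits above (the setting is assumed here to take KiB, like other lvm.conf sizes):

    global {
        io_memory_size = 16384   # 16 MiB of bcache for metadata; valid range 4 MiB to 128 MiB
    }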
* config: add new setting io_memory_size (David Teigland, 2019-03-04, 1 file, -0/+2)
which defines the amount of memory that lvm will allocate for bcache. Increasing this setting is required if it is smaller than a single copy of VG metadata.
* logging: add command[pid] and timestamp to file and verbose output (David Teigland, 2019-02-26, 1 file, -1/+1)
Without this, the output from different commands in a single log file could not be separated. Change the default "indent" setting to 0 so that the default debug output does not include variable spaces in the middle of debug lines.
* config: change scan_lvs default to 0 (David Teigland, 2019-02-20, 1 file, -1/+1)
so that lvm does not scan LVs for PVs by default.
* add device hints to reduce scanning (David Teigland, 2019-01-15, 1 file, -0/+2)
Save the list of PVs in /run/lvm/hints. These hints are used to reduce scanning in a number of commands to only the PVs on the system, or only the PVs in a requested VG (rather than all devices on the system).
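Disabling hints, should that be needed, goes through a devices/hints setting; the name and values below are an assumption, as the message does not state them:

    devices {
        hints = "none"   # assumed values: "all" (default) or "none"
    }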
* lib: move towards v2 version of VDO format (Zdenek Kabelac, 2018-12-20, 1 file, -4/+4)
Drop the very old original format of the VDO target and focus on the V2 version, so some variables were renamed or replaced. No compatibility is preserved (with the assumption that so far this is an experimental feature and there is no real user).

Note: VDO currently calls this version 6.2.
* Place the first PE at 1 MiB for all defaults (David Teigland, 2018-11-26, 1 file, -3/+12)
. When using default settings, this commit should change nothing. The first PE continues to be placed at 1 MiB, resulting in a metadata area size of 1020 KiB (for 4K page sizes; slightly smaller for larger page sizes).

. When default_data_alignment is disabled in lvm.conf, align pe_start at 1 MiB, based on a default metadata area size that adapts to the page size. Previously, disabling this option would result in an mda_size that was too small for common use, and produced a 64 KiB aligned pe_start.

. Customized pe_start and mda_size values continue to be set as before in lvm.conf and on the command line.

. Remove the configure option for setting default_data_alignment at build time.

. Improve alignment related option descriptions.

. Add a section about alignment to the pvcreate man page.

Previously, DEFAULT_PVMETADATASIZE was 255 sectors. However, the fact that the config setting named "default_data_alignment" has a default value of 1 (MiB) meant that DEFAULT_PVMETADATASIZE was having no effect. The metadata area size is the space between the start of the metadata area (page size offset from the start of the device) and the first PE (1 MiB by default due to default_data_alignment 1). The result is a 1020 KiB metadata area on machines with 4KiB page size (1024 KiB - 4 KiB), and smaller on machines with larger page size.

If default_data_alignment was set to 0 (disabled), then DEFAULT_PVMETADATASIZE 255 would take effect, and produce a metadata area that was 188 KiB and a pe_start of 192 KiB. This was too small for common use. This is fixed by making the default metadata area size a computed value that matches the value produced by default_data_alignment.
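The resulting layout can be checked with the pe_start and metadata area size reporting fields (field names believed current; output illustrative):

    $ pvs -o pv_name,pe_start,pv_mda_size
      PV         1st PE PMdaSize
      /dev/sdb    1.00m 1020.00k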
* io: use sync io if aio fails (David Teigland, 2018-11-20, 1 file, -0/+1)
io_setup() for aio may fail if a system has reached the aio request limit. In this case, fall back to using sync io. Also, lvm use of aio can be disabled entirely with the config setting global/use_aio=0.

The system limit for aio requests can be seen from /proc/sys/fs/aio-max-nr. The current usage of aio requests can be seen from /proc/sys/fs/aio-nr. The system limit for aio requests can be increased by setting fs.aio-max-nr using sysctl.

Also add a last-byte limit to the sync io code.
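Inspecting and raising the aio limits mentioned above (limit value illustrative):

    $ cat /proc/sys/fs/aio-nr          # current usage
    $ cat /proc/sys/fs/aio-max-nr      # system limit
    # sysctl fs.aio-max-nr=1048576     # raise the limit (as root)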
* filter: add config setting to skip scanning LVs (David Teigland, 2018-08-30, 1 file, -0/+2)
devices/scan_lvs (default 1) determines whether lvm will scan LVs for layered PVs. The lvm behavior has always been to scan LVs, but it's rare for LVs to have layered PVs, and much more common for there to be many LVs that substantially slow down scanning with no benefit. This is implemented in the usable filter, and has the same effect as listing all LVs in the global_filter.
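The setting in lvm.conf form (the default shown here was later flipped to 0 by the 2019-02-20 commit above):

    devices {
        scan_lvs = 0   # do not scan LVs for layered PVs
    }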
* dmeventd: lvm vdo support (Zdenek Kabelac, 2018-07-09, 1 file, -0/+4)
* lib: new vdo segment configurable options (Zdenek Kabelac, 2018-07-09, 1 file, -0/+33)
Configurable options for the vdo segment, with their default values. Their ranges are also specified, with minimal and maximal values.
* Remove unused device error counting (David Teigland, 2018-06-15, 1 file, -3/+0)
* lvmcache: simplify metadata cache (David Teigland, 2018-04-20, 1 file, -1/+0)
The copy of VG metadata stored in lvmcache was not being used in general. It pretended to be a generic VG metadata cache, but was not being used except for clvmd activation. There it was used to avoid reading from disk while devices were suspended, i.e. in resume.

This removes the code that attempted to make this look like a generic metadata cache, and replaces it with something narrowly targeted to what it's actually used for: a way of passing the VG from suspend to resume in clvmd. Since in the case of clvmd one caller can't simply pass the same VG to both suspend and resume, suspend needs to stash the VG somewhere that resume can grab it from. (resume doesn't want to read it from disk since devices are suspended.) The lvmcache vginfo struct is used as a convenient place to stash the VG to pass it from suspend to resume, even though it isn't related to the lvmcache or vginfo. These suspended_vg* vginfo fields should not be used or touched anywhere else; they are only to be used for passing the VG data from suspend to resume in clvmd. The VG data being passed between suspend and resume is never modified, and will only exist in the brief period between suspend and resume in clvmd.

suspend has both old (current) and new (precommitted) copies of the VG metadata. It stashes both of these in the vginfo prior to suspending devices. When vg_commit is successful, it sets a flag in vginfo as before, signaling the transition from old to new metadata. resume grabs the VG stashed by suspend. If the vg_commit happened, it grabs the new VG, and if the vg_commit didn't happen it grabs the old VG. The VG is then used to resume LVs.

This isolates clvmd-specific code and usage from the normal lvm vg_read code, making the code simpler and the behavior easier to verify.

Sequence of operations:

- lv_suspend() has both vg_old and vg_new and stashes a copy of each onto the vginfo: lvmcache_save_suspended_vg(vg_old); lvmcache_save_suspended_vg(vg_new);

- vg_commit() happens, which causes all clvmd instances to call lvmcache_commit_metadata(vg). A flag is set in the vginfo indicating the transition from the old to new VG: vginfo->suspended_vg_committed = 1;

- lv_resume() needs either vg_old or vg_new to use in resuming LVs. It doesn't want to read the VG from disk since devices are suspended, so it gets the VG stashed by lv_suspend: vg = lvmcache_get_suspended_vg(vgid); If the vg_commit did not happen, suspended_vg_committed will not be set, and in this case lvmcache_get_suspended_vg() will return the old VG instead of the new VG, and it will resume LVs based on the old metadata.
* [io paths] Unpick agk's aio stuff (Joe Thornber, 2018-04-20, 1 file, -3/+0)
* device: Queue any aio beyond defined limits. (Alasdair G Kergon, 2018-02-08, 1 file, -0/+2)
* device: Basic config and setup to support async I/O. (Alasdair G Kergon, 2018-02-08, 1 file, -0/+1)
* cleanup: define really uses KB (Zdenek Kabelac, 2017-06-09, 1 file, -1/+1)
Also clean up units for the DEFAULT_THIN_POOL_OPTIMAL_METADATA_SIZE define (128MB) and update calculations for it.
* cleanup: use DM limit define (Zdenek Kabelac, 2017-06-08, 1 file, -1/+1)
For the calculation, use the size already defined in libdm, which gives a better estimate of the maximal size of thin pool metadata.
* cleanup: rename internal define (Zdenek Kabelac, 2017-06-08, 1 file, -1/+1)
Use a more descriptive name for the #define.
* lvcreate: raise default raid regionsize to 2MiB (Heinz Mauelshagen, 2017-04-13, 1 file, -1/+1)
Related: rhbz1392947.
* fsadm: support configurable full path (Zdenek Kabelac, 2017-04-12, 1 file, -0/+2)
Just like with other tools lvm2 uses, allow defining a fully configurable path. The default is $PREFIX/sbin/fsadm.
* cache: introduce allocation/cache_metadata_format (Zdenek Kabelac, 2017-03-10, 1 file, -0/+1)
Add a new profilable configuration setting to let the user select which metadata format of a created cache pool they wish to use. By default the 'best' available format is autodetected at runtime, but the user may enforce format 1 or 2 ATM.

The code also detects availability of a cache target supporting metadata2. In case of trouble, the user may easily disable usage of this feature by placing 'metadata2' into the global/cache_disabled_features list.
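The setting and the escape hatch named above, as an lvm.conf sketch:

    allocation {
        cache_metadata_format = 2   # enforce format 2; the default autodetects the best format
    }
    global {
        cache_disabled_features = [ "metadata2" ]   # disable format 2 if it causes trouble
    }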
* lvconvert/lvcreate: raise maximum number of raid images (Heinz Mauelshagen, 2017-02-24, 1 file, -2/+2)
Because of constraints in renaming shifted rimage/rmeta LV names, the current RaidLV limit is a maximum of 10 SubLV pairs. With the previous introduction of the reshaping infrastructure that constraint got removed. The kernel supports 253 since dm-raid target 1.9.0; older kernels support 64.

Raise the maximum number of RaidLV rimage/rmeta pairs to 64. If we want to raise past 64, we have to introduce a check for the kernel supporting it in lvcreate/lvconvert.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
* lvconvert: add infrastructure for RaidLV reshaping support (Heinz Mauelshagen, 2017-02-24, 1 file, -1/+1)
In order to support striped raid5/6/10 LV reshaping (change of LV type, stripesize or number of legs), this patch introduces the changes to call the reshaping infrastructure from lv_raid_convert().

Changes:
- add reshaping calls from lv_raid_convert()
- add command definitions for reshaping to tools/command-lines.in
- fix raid_rimage_extents()
- add 2 new test scripts lvconvert-raid-reshape-linear_to_striped.sh and lvconvert-raid-reshape-striped_to_linear.sh to test the linear <-> striped multi-step conversions
- add lvconvert-raid-reshape.sh reshaping tests
- enhance lvconvert-raid-takeover.sh with new raid10 tests

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
* config: new option dmeventd/thin_command (Zdenek Kabelac, 2017-01-20, 1 file, -0/+1)
This setting allows configuring which command gets executed when thin-pool fullness goes from 50% to 100%.
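A sketch of the option (the value shown is believed to be the shipped default, extending the pool per its policies):

    dmeventd {
        thin_command = "lvm lvextend --use-policies"
    }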
* libdm: add human R|readable units (Zdenek Kabelac, 2017-01-20, 1 file, -1/+1)
When showing sizes with 'H|human' units we use standard rounding. This however confuses users from time to time, when the printed number uses some bigger unit, i.e. GiB, and there is just a tiny fraction of space missing.

So here is a real-life example with the new 'r' unit:

    $ lvs
      LV    VG Attr       LSize  Pool Origin
      lvol0 vg -wi-a----- 1.99g
      lvol1 vg -wi-a----- <2.00g
      lvol2 vg -wi-a----- <2.01g

The meaning is: lvol1 has 'slightly' less than 2.00g - from the sign '<' the user can be aware the LV doesn't have a full 2.00GiB in size, so he will be less surprised that allocation of a 2G volume will not succeed.

    $ vgs
      VG #PV #LV #SN Attr   VSize  VFree
      vg    2   2   0 wz--n- <6,00g <2,01g

Users needing the 'old' undecorated human unit simply continue to use 'H|h' units.

The new R|r may further change when we recognize some other way to improve readability.
* cache: introduce cache_pool_max_chunks (Zdenek Kabelac, 2016-08-29, 1 file, -0/+1)
Introduce a 'hard limit' for the max number of cache chunks, for when the cache target operates with too many chunks (>10e6). When the user is aware of the related possible troubles, he may increase the limit in lvm.conf. Also verbosely inform the user about a possible solution.

The code works for both lvcreate and lvconvert. Lvconvert fully supports change of chunk_size when caching an LV (and validates for compatible settings).
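The limit as an lvm.conf sketch (the 0 = built-in default convention shown here is an assumption):

    allocation {
        cache_pool_max_chunks = 0   # assumed: 0 uses the built-in limit; raise only if aware of the cost
    }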
* lvcreate/lvconvert: fix validation of maximum mirrors/stripes (Heinz Mauelshagen, 2016-08-12, 1 file, -2/+7)
Enforce mirror/raid0/1/10/4/5/6 type specific maximum images when creating LVs or converting them from mirror <-> raid1. Document those maxima in the lvcreate/lvconvert man pages.

- resolves rhbz1366060
* lvcreate: raid0 needs default number of stripes (Heinz Mauelshagen, 2016-07-20, 1 file, -0/+1)
Commit 3928c96a37941d765bf467d82502cd2aec7fd809 introduced new defaults for the raid number of stripes, which may cause backwards compatibility issues with customer scripts.

Add a configurable option 'raid_stripe_all_devices', defaulting to '0' (i.e. off = new behaviour), to select the old behaviour of using all PVs in the VG or those provided on the command line. In case any scripts rely on the old behaviour, just set 'raid_stripe_all_devices = 1'.

- resolves rhbz1354650
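The compatibility switch in lvm.conf form, per the message above:

    allocation {
        raid_stripe_all_devices = 1   # restore the old behaviour: stripe across all usable PVs
    }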
* raid: Infrastructure for raid takeover. (Alasdair G Kergon, 2016-06-28, 1 file, -1/+2)
* conf: add log/command_log_selection config setting (Peter Rajnoha, 2016-06-20, 1 file, -0/+2)
* commands: report: add lvm fullreport command (Peter Rajnoha, 2016-06-20, 1 file, -0/+12)
lvm fullreport executes 5 subreports (vg, pv, lv, pvseg, seg) for each VG (and so takes one VG lock each time) within one command, which makes it easier to produce a full report about LVM entities. Since all 5 subreports for a VG are done under a VG lock, the output is more consistent, mainly in cases where LVM entities may be changed in parallel.
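Usage is a single invocation (a VG name argument to limit the report is assumed to be accepted, as with other reporting commands):

    $ lvm fullreport        # vg, pv, lv, pvseg and seg subreports per VG
    $ lvm fullreport vg0    # limit to one VG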
* conf: add log/report_command_log config setting (Peter Rajnoha, 2016-06-20, 1 file, -0/+1)
* conf: add report/output_format config setting (Peter Rajnoha, 2016-06-20, 1 file, -0/+1)
The new report/output_format configuration sets the output format used for all LVM commands globally. Currently, there are 2 formats recognized:
- basic (the classical basic output with columns and rows, used by default)
- json (output is in json format)
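The global setting as an lvm.conf sketch (a per-command --reportformat override is believed to exist as well):

    report {
        output_format = "json"   # or "basic" (default)
    }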
* report: add CMDLOG report type (Peter Rajnoha, 2016-06-20, 1 file, -0/+2)
This is a preparation for the new CMDLOG report type which is going to be used for reporting the LVM command log.

The new report type introduces several new fields (log_seq_num, log_type, log_context, log_object_type, log_object_group, log_object_id, object_name, log_message, log_errno, log_ret_code) as well as new configuration settings to set this report type (report/command_log_sort and report/command_log_cols lvm.conf settings).

This patch also introduces the internal report_cmdlog helper function, which is a wrapper over dm_report_object to report the command log via the CMDLOG report type and which is going to be used throughout the code to report the log items.
* lvmcache: improve duplicate PV handling (David Teigland, 2016-05-06, 1 file, -0/+1)
Wait to compare and choose alternate duplicate devices until after all devices are scanned. During scanning, the first duplicate dev is kept in lvmcache, and others are kept in a new list (_found_duplicate_devs).

After all devices are scanned, compare all the duplicates available for a given PVID and decide which is best. If the dev used in lvmcache is changed, drop the old dev from lvmcache entirely and rescan the replacement dev. Previously the VG metadata from the old dev was kept in lvmcache and only the dev was replaced.

A new config setting devices/allow_changes_with_duplicate_pvs can be set to 0, which disallows modifying a VG or activating LVs in it when the VG contains PVs with duplicate devices. Set to 1 is the old behavior, which allowed the VG to be changed.

The logic for which of two devs is preferred has changed. The primary goal is to choose a device that is currently in use if the other isn't, e.g. by an active LV.

. prefer dev with fs mounted if the other doesn't, else
. prefer dev that is dm if the other isn't, else
. prefer dev in subsystem if the other isn't

If neither device is preferred by these rules, then don't change devices in lvmcache, leaving the one that was found first.

The previous logic for preferring a device was:

. prefer dev in subsystem if the other isn't, else
. prefer dev without holders if the other has holders, else
. prefer dev that is dm if the other isn't
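The new setting in lvm.conf form, per the message above:

    devices {
        allow_changes_with_duplicate_pvs = 0   # refuse VG changes/activation while duplicates exist
    }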
* lvmetad: preemptively check and rescan in commands (David Teigland, 2016-04-13, 1 file, -0/+1)
Move checking the lvmetad state, and the possible rescan, out of lvmetad_send() to the start of the command. Previously, the token mismatch and rescan would occur within lvmetad_send() for some other request. Now, the token mismatch is detected earlier, so the rescan can be done before the main command is in progress. Rescanning deep within the processing of another command will disturb the lvmcache state of that other command.

A rescan already exists at the start of the command for the case where foreign VGs are going to be read. This same rescan is now also performed when there is an lvmetad token mismatch (from a changed global_filter).

The commands pvscan/vgscan/lvscan/vgimport are excluded from this preemptive checking/rescanning for lvmetad because they want to do rescanning themselves explicitly. If rescanning devices fails, then lvmetad has not been correctly repopulated and should not be used, so make the command revert to not using lvmetad.