* lvmlockd: start and stop VG lockspace [dev-dct-lvmlockd5-startstop] (David Teigland, 2014-11-18, 6 files, -6/+202)

  A lock manager requires an application to "start" or "join" a lockspace
  before using locks from it. Start is the point at which the lock manager
  on a host begins interacting with other hosts to coordinate access to
  locks in the lockspace. Similarly, an application needs to "stop" or
  "leave" a lockspace when it is done using locks from the lockspace, so
  the lock manager can shut down and clean up the lockspace.

  lvmlockd uses a lockspace for each sanlock|dlm VG, and the lockspace for
  each VG needs to be started before lvm can use it. These commands tell
  lvmlockd to start or stop the lockspace for a VG:

    vgchange --lock-start vg_name
    vgchange --lock-stop vg_name

  To start the lockspace for a VG, lvmlockd needs to know which lock
  manager (sanlock or dlm) to use. This is stored in the VG metadata as
  lock_type = "sanlock|dlm", along with data specific to the lock manager
  for the VG, saved as lock_args. For sanlock, lock_args is the location
  of the locks on disk. For dlm, lock_args is the name of the cluster the
  dlm should use.

  So, the process for starting a VG includes:

  - Reading the VG without a lock (no lock can be acquired because the
    lockspace is not started).
  - Taking the lock_type and lock_args strings from the VG metadata.
  - Asking lvmlockd to start the VG lockspace, providing the lock_type
    and lock_args strings which tell lvmlockd exactly which lock manager
    is needed.
  - lvmlockd asks the specific lock manager to join the lockspace.

  The VG read in the first step, without a lock, is not used for anything
  except getting the lock information needed to start the lockspace.
  Subsequent use of the VG would use the VG lock.

  In the case of a sanlock VG, there is an additional step in the
  sequence: between the second and third steps, the vgchange lock-start
  command needs to activate the internal LV in the VG that holds the
  sanlock locks. This LV must be active before sanlock can join the
  lockspace.

  Starting and stopping VGs would typically be done automatically by the
  system, similar to the way LVs are automatically activated by the
  system. But it is always possible to directly start/stop VG lockspaces,
  just as it is always possible to directly activate/deactivate LVs.
  Automatic VG start/stop will be added by a later patch, using the basic
  functionality from this patch.
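  The start sequence above can be sketched as follows. This is a minimal
  illustrative model, not lvm source; `start_vg_lockspace` and
  `fake_lvmlockd` are hypothetical names.

  ```python
  # Hypothetical sketch of the vgchange --lock-start sequence described
  # above. The VG is read without a lock only to obtain lock_type and
  # lock_args; lvmlockd then asks the named lock manager to join.

  def start_vg_lockspace(vg_metadata, lockd):
      lock_type = vg_metadata.get("lock_type", "none")
      lock_args = vg_metadata.get("lock_args")
      if lock_type not in ("sanlock", "dlm"):
          return "no lockspace needed"
      actions = []
      # sanlock-only extra step: the internal lock-holding LV must be
      # active before sanlock can join the lockspace.
      if lock_type == "sanlock":
          actions.append("activate internal lock LV")
      actions.append(lockd(lock_type, lock_args))
      return actions

  def fake_lvmlockd(lock_type, lock_args):
      # Stand-in for asking lvmlockd to join the lockspace.
      return f"join {lock_type} lockspace ({lock_args})"
  ```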
* lvmlockd: vgcreate/vgremove call init_vg/free_vg [dev-dct-lvmlockd4-initfree] (David Teigland, 2014-11-18, 5 files, -1/+655)

  vgcreate calls lvmlockd_init_vg() to do any create/initialize steps
  that are needed in lvmlockd for the given lock_type.

  vgremove calls lvmlockd_free_vg_before() to do any removal/freeing
  steps that are needed in lvmlockd for the given lock_type before the VG
  is removed on disk.

  vgremove calls lvmlockd_free_vg_final() to do any removal/freeing steps
  that are needed in lvmlockd for the given lock_type after the VG is
  removed on disk.

  When the lock_type is sanlock, the init/free steps also include lvm
  client-side steps to create/remove an internal LV on which sanlock will
  store the locks for the VG.
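  The before/after ordering around the on-disk removal can be sketched as
  below. Everything except the lvmlockd_* hook names is hypothetical, not
  the lvm implementation.

  ```python
  # Illustrative ordering of the vgremove hooks described above.
  calls = []

  def lvmlockd_free_vg_before(vg_name):
      calls.append("free_before")     # lock-manager cleanup while VG exists

  def remove_vg_on_disk(vg_name):
      calls.append("remove_on_disk")  # stand-in for erasing VG metadata

  def lvmlockd_free_vg_final(vg_name):
      calls.append("free_final")      # final cleanup after the VG is gone

  def vgremove(vg_name):
      lvmlockd_free_vg_before(vg_name)
      remove_vg_on_disk(vg_name)
      lvmlockd_free_vg_final(vg_name)
  ```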
* VG lock_type and lvmlockd setup [dev-dct-lvmlockd3-locktype] (David Teigland, 2014-11-12, 29 files, -10/+617)

  The locking required to access a VG is a property of the VG, and is
  specified in the VG metadata as the "lock_type". When lvm sees a VG, it
  looks at the VG's lock_type to determine if locks are needed and from
  where:

  - If the VG has no lock_type, or lock_type "none", then no locks are
    needed. This is a "local VG". If the VG is visible to multiple hosts,
    the VG system_id provides basic protection. A VG with an unmatching
    system_id is inaccessible.
  - If the VG has lock_type "sanlock" or "dlm", then locks are needed
    from lvmlockd, which acquires locks from either sanlock or dlm
    respectively. This is a "dlock VG". If lvmlockd or the supporting
    lock manager is not running, then the dlock VG is inaccessible.
  - If the VG has the CLUSTERED status flag (or lock_type "clvm"), then
    locks are needed from clvmd. This is a "clvm VG". If clvmd or the
    supporting clustering or locking is not running, then the clvm VG is
    inaccessible.

  Settings in lvm.conf tell lvm commands which locking daemon to use:

  - global/use_lvmlockd=1: tells lvm to use lvmlockd when accessing VGs
    with lock_type sanlock|dlm.
  - global/locking_type=3: tells lvm to use clvmd when accessing VGs with
    the CLUSTERED flag (or lock_type clvm).

  LVM commands cannot use both lvmlockd and clvmd at the same time:

  - use_lvmlockd=1 should be combined with locking_type=1.
  - locking_type=3 (clvmd) should be combined with use_lvmlockd=0.

  So, different configurations allow access to different VGs:

  - When configured to use lvmlockd, lvm commands can access VGs with
    lock_type sanlock|dlm, and VGs with CLUSTERED are ignored.
  - When configured to use clvmd (locking_type 3), lvm commands can
    access VGs with the CLUSTERED flag, and VGs with lock_type
    sanlock|dlm are ignored.
  - When configured to use neither lvmlockd nor clvmd, lvm commands can
    access only local VGs. lvm will ignore VGs with lock_type
    sanlock|dlm, and will ignore VGs with CLUSTERED (or lock_type clvm).

  A VG is created with a specific lock_type:

  - vgcreate --lock_type <arg> is a new syntax that can specify the
    lock_type directly. <arg> may be: none, clvm, sanlock, dlm.
    sanlock|dlm require lvmlockd to be configured (in lvm.conf) and
    running. clvm requires clvmd to be configured (in lvm.conf) and
    running.
  - vgcreate --clustered y (or -cy) is the old syntax that still works,
    but it is not preferred because the lock_type is not explicit. When
    clvmd is configured, -cy creates a VG with lock_type clvm. When
    lvmlockd is configured, -cy creates a VG with lock_type sanlock, but
    this can be changed to dlm with lvm.conf vgcreate_cy_lock_type.

  Notes:

  The LOCK_TYPE status flag is not strictly necessary, but is an attempt
  to prevent old versions of lvm (pre-lvmlockd) from using a VG with a
  lock_type.

  In the VG metadata, the lock_type string is accompanied by a lock_args
  string. The lock_args string is lock-manager-specific data associated
  with the VG: for sanlock, the location on disk of the locks; for dlm,
  the cluster name.

  In a VG with lock_type sanlock|dlm, each LV also has a lock_type and
  lock_args in the metadata. The LV lock_type currently always matches
  the lock_type of the VG. For sanlock, the LV lock_args specify the disk
  location of the LV lock.
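  The access rules above reduce to a small decision function. This is an
  illustrative sketch of that logic only; `vg_accessible` is a
  hypothetical name, not an lvm function.

  ```python
  # Which VGs a command can use, given its lvm.conf locking configuration,
  # per the rules in the commit message above.

  def vg_accessible(lock_type, use_lvmlockd=False, locking_type=1):
      """Return True if a VG with this lock_type is usable, False if ignored."""
      if lock_type in (None, "none"):
          return True                 # local VG, always usable
      if lock_type in ("sanlock", "dlm"):
          return use_lvmlockd         # dlock VG needs lvmlockd configured
      if lock_type == "clvm":
          return locking_type == 3    # clvm VG needs clvmd (locking_type 3)
      return False
  ```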
* toollib: command should fail if ignoring named arg [dev-dct-lvmlockd1-systemid] (David Teigland, 2014-11-07, 3 files, -21/+20)

  If a foreign VG is ignored when it's included by "all vgs", then the
  command shouldn't fail. If a foreign VG is ignored when it's named
  explicitly as a command arg, then the command should fail.

  Also, remove ignore_vg from the reporter functions because it repeats
  what has already been done in process_each, given the recent new
  version of process_each.
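  The rule can be sketched as below: skipping a foreign VG is only an
  error when the user named it explicitly. This is a hypothetical model,
  not the toollib code.

  ```python
  # Foreign-VG handling per the commit message: 'all vgs' silently skips
  # foreign VGs; an explicitly named foreign VG makes the command fail.

  def process_vgs(vg_names, foreign_vgs, all_vgs=None):
      """Return (processed, failed). Empty vg_names means 'all vgs'."""
      named_explicitly = bool(vg_names)
      candidates = vg_names if named_explicitly else (all_vgs or [])
      processed, failed = [], []
      for name in candidates:
          if name in foreign_vgs:
              if named_explicitly:
                  failed.append(name)   # named arg ignored -> command fails
              continue                  # 'all vgs' silently skips it
          processed.append(name)
      return processed, failed
  ```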
* system_id: use for VG ownership (David Teigland, 2014-11-07, 23 files, -29/+729)

  See the included lvmsystemid(7) for the full description.
* vgextend: use process_each_vg (David Teigland, 2014-11-07, 6 files, -93/+122)
* pvchange: use process_each_pv (David Teigland, 2014-11-07, 1 file, -90/+46)
* cleanup: avoid dm_list size calc in common path (Zdenek Kabelac, 2014-11-05, 1 file, -14/+15)

  Calculate dm_list_size only when there is not just a single one segment
  in the list - so it's only counted on the error path.
* activate: check all snap segs are inactive (Zdenek Kabelac, 2014-11-05, 2 files, -1/+15)

  When deactivating an origin, we may have possibly left the table in a
  broken state, where the origin is not active but a snapshot volume is
  still present. Ensure that deactivation of the origin also detects that
  all associated snapshots are inactive - otherwise do not skip
  deactivation (so e.g. 'vgchange -an' would detect errors).
* lv: lv_active_change add needs_exclusive flag (Zdenek Kabelac, 2014-11-05, 5 files, -9/+9)

  Let's use this function for more activations in the code.
  'needs_exclusive' will enforce exclusive type for any given LV.

  We may want to activate an LV in exclusive mode even when we know the
  LV (as is) supports non-exclusive activation as well.

    lvcreate -ay  -> exclusive & local
    lvcreate -aay -> exclusive & local
    lvcreate -aly -> exclusive & local
    lvcreate -aey -> exclusive (might be on any node)
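  The flag-to-mode table above can be written as a lookup. Illustrative
  only; `lvcreate_activation` is a hypothetical name for the mapping, not
  an lvm function.

  ```python
  # Map an lvcreate -a activation flag to (exclusive, local_only),
  # per the table in the commit message.

  def lvcreate_activation(flag):
      modes = {
          "ay":  (True, True),    # exclusive & local
          "aay": (True, True),    # exclusive & local (autoactivation)
          "aly": (True, True),    # exclusive & local
          "aey": (True, False),   # exclusive, might be on any node
      }
      return modes[flag]
  ```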
* snapshot: no snapshot of any cache type LVs (Zdenek Kabelac, 2014-11-05, 2 files, -0/+7)

  Unsupported as of now.
* cleanup: keep 'fall through' switch case for LVSINFO for compiler to understand this properly (Peter Rajnoha, 2014-11-05, 1 file, -0/+2)
* report: cleanup: simplify LVSINFO detection (Peter Rajnoha, 2014-11-05, 1 file, -8/+3)

  LVSINFO is just a subtype of the LVS report type with an extra "info"
  ioctl called for each LV reported (per output line), so include its
  processing within the "case LVS" switch, not as a completely different
  kind of reporting, which may be misleading when reading the code.

  There's already the "lv_info_needed" flag set in the _report fn, so
  call the appropriate reporting function based on this flag within the
  "case LVS" switch line. The same is already done when an LV is reported
  per segment within the "case SEGS" switch line. This patch makes the
  code more consistent so all cases are processed the same way.

  Also, this is a preparation for new subtypes that will be introduced
  later - the "LVSSTATUS" and "SEGSSTATUS" report types.
* dmeventd: Add basic thread debugging messages. (Alasdair G Kergon, 2014-11-04, 2 files, -2/+90)

  Only with -DDEBUG.
* dmeventd: Include shutdown threads in responses. (Alasdair G Kergon, 2014-11-04, 2 files, -0/+13)

  When responding to DM_EVENT_CMD_GET_REGISTERED_DEVICE, no longer ignore
  threads that have already been unregistered but are still present. This
  means the caller can unregister a device and poll dmeventd to ensure
  the monitoring thread has gone away before removing the device.

  If a device was registered and unregistered in quick succession and
  then removed, WAITEVENT could run in parallel with the REMOVE. Threads
  are moved to the _thread_registry_unused list when they are
  unregistered.
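  The caller-side pattern this change enables might look like the sketch
  below: poll until the monitor thread is gone, then remove the device.
  All names here are hypothetical; this is not the dmeventd client API,
  and a real caller would sleep between polls.

  ```python
  # Poll a GET_REGISTERED_DEVICE-style query until the device's monitor
  # thread has gone away, per the removal sequence described above.

  def wait_until_unmonitored(device, get_registered_devices, attempts=10):
      """Return True once the device no longer reports a monitor thread."""
      for _ in range(attempts):
          # With this change, shutting-down threads are still reported,
          # so an absent device really means the thread has exited.
          if device not in get_registered_devices():
              return True
      return False
  ```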
* dmeventd: Remove redundant checks. (Alasdair G Kergon, 2014-11-04, 1 file, -10/+4)

  The status of threads in _thread_registry is always DM_THREAD_RUNNING
  (zero). DM_EVENT_REGISTRATION_PENDING is never stored in
  thread->events.
* tests: duplicate update of config (Zdenek Kabelac, 2014-11-04, 1 file, -3/+1)
* thin: new pool is activated without overlay (Zdenek Kabelac, 2014-11-04, 2 files, -3/+16)

  Activation of a new/unused/empty thin-pool volume skips the 'overlay'
  part and directly provides a 'visible' thin-pool LV to the user.

  Such a thin pool still gets the 'private' -tpool UUID suffix for easier
  udev detection of protected lvm2 devices, and also gets udev flags to
  avoid any scan. Such a pool device is a 'public' LV with a regular
  /dev/vgname/poolname link, but it's still a 'udev'-hidden device for
  any other use. To display the proper active state we need to do a few
  explicit tests for this condition.

  Before it's used for any lvm2 thin volume, deactivation is now needed
  to avoid any 'race' with external usage.
* thin: check for new pool before creating thin volume (Zdenek Kabelac, 2014-11-04, 2 files, -2/+36)

  Call check_new_thin_pool() to detect an in-use thin-pool.

  This saves an extra reactivation of the thin-pool when the thin pool is
  not active. (It's now a bit more expensive to invoke thin_check for new
  pools.)

  For new pools:

  - We now activate the thin-pool locally and exclusively as a 'public' LV.
  - Validate that transaction_id is still 0.
  - Deactivate.
  - Prepare the create message for the thin-pool and exclusively activate
    the pool.
  - Activate the new thin LV.
  - Deactivate the thin pool if it used to be inactive.
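  The new-pool sequence above can be modeled as an ordered list of steps.
  This is an illustrative sketch under stated assumptions, not lvm code;
  `create_first_thin_lv` is a hypothetical name.

  ```python
  # Model the activation/message sequence for creating the first thin LV
  # in a new pool, per the steps in the commit message.

  def create_first_thin_lv(pool_tid, pool_was_active=False):
      """Return the ordered steps; fail if the pool is already in use."""
      steps = ["activate pool locally+exclusively (public LV)"]
      if pool_tid != 0:
          # check_new_thin_pool(): a used pool has transaction_id != 0
          raise RuntimeError("pool already in use (transaction_id != 0)")
      steps += [
          "deactivate pool",
          "send create message, activate pool exclusively",
          "activate new thin LV",
      ]
      if not pool_was_active:
          steps.append("deactivate pool again")
      return steps
  ```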
* thin: validate unused thin pool (Zdenek Kabelac, 2014-11-04, 2 files, -0/+55)

  The function tests that a given new thin pool is still unused.
* thin: no validation skip of new thin pools (Zdenek Kabelac, 2014-11-04, 1 file, -0/+3)

  Allowing 'external' use of thin-pools requires validating even
  so-far-'unused' new thin pools. Later we may have a 'smarter' way to
  resolve which thin-pools are owned by lvm2 and which are external.
* thin: add lv_is_new_thin_pool (Zdenek Kabelac, 2014-11-04, 1 file, -0/+1)

  Recognize a 'new' (never used) lvm2 thin pool - it has
  transaction_id == 0. (lv_is_used_thin_pool() has a slightly different
  meaning.)
* libdm: allow to activate any pool with tid == 0 (Zdenek Kabelac, 2014-11-04, 2 files, -1/+4)

  When transaction_id is set to 0 for a thin-pool, libdm avoids
  validation of the thin-pool, unless there are real messages to be sent
  to the thin-pool.

  This relaxes the strict policy which always required knowing the
  transaction_id for the kernel target up front. It now allows activating
  a thin-pool with any transaction_id (when transaction_id is passed in).

  It is now up to the application to validate the transaction_id from the
  live thin-pool volume against the transaction_id within its own
  metadata.
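  The relaxed rule reduces to a small predicate. Illustrative sketch
  only, not the libdm API; both function names are hypothetical.

  ```python
  # Skip transaction_id validation when the expected tid is 0 and there
  # are no messages to send, per the commit message above.

  def needs_tid_validation(expected_tid, pending_messages):
      """True if the kernel thin-pool tid must match expected_tid."""
      return expected_tid != 0 or bool(pending_messages)

  def activate_pool(kernel_tid, expected_tid, pending_messages=()):
      if (needs_tid_validation(expected_tid, pending_messages)
              and kernel_tid != expected_tid):
          raise RuntimeError("thin-pool transaction_id mismatch")
      return "activated"
  ```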
* lvconvert: convert missing sizes to extents (Zdenek Kabelac, 2014-11-04, 1 file, -3/+3)

  After the initial 'size' usage is converted to extents, continue to use
  only extents. (In-release fix.)
* tests: thin (Zdenek Kabelac, 2014-11-03, 1 file, -1/+3)
* tests: usage of -m0 -Mn (Zdenek Kabelac, 2014-11-03, 2 files, -1/+18)

  Test that -m0 passes with types. Check --readahead and thins.
* cleanup: use lv_is_pool (Zdenek Kabelac, 2014-11-03, 1 file, -2/+2)

  Use lv_is_pool() to detect both pool versions.
* cleanup: use logical_volume* directly (Zdenek Kabelac, 2014-11-03, 1 file, -9/+9)
* cleanup: consistent name (Zdenek Kabelac, 2014-11-03, 1 file, -1/+1)
* cleanup: shorter code (Zdenek Kabelac, 2014-11-03, 1 file, -2/+1)
* cleanup: rename function (Zdenek Kabelac, 2014-11-03, 1 file, -12/+12)

  Make the dm_info type more clear.
* cleanup: standard params ordering (Zdenek Kabelac, 2014-11-03, 1 file, -6/+6)

  Pass lvconvert_params as the last arg.
* cleanup: init of lcp (Zdenek Kabelac, 2014-11-03, 1 file, -3/+2)

  Use a struct initializer instead of memset().
* cleanup: correcting tracing (Zdenek Kabelac, 2014-11-03, 2 files, -5/+6)

  Use log_error for real errors.
* cleanup: use arg_is_set (Zdenek Kabelac, 2014-11-03, 1 file, -3/+2)
* cache: report stats for cache volumes usage (Zdenek Kabelac, 2014-11-03, 3 files, -4/+49)

  Show some stats with 'lvs'. Display the same info for an active cache
  volume and a cache-pool.

    data% - #used cache blocks    / #total cache blocks
    meta% - #used metadata blocks / #total metadata blocks
    copy% - #dirty                / #used cache blocks

  TODO: maybe there is a better mapping - this should be seen as
  first-try-and-see.
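  The three field definitions above are plain ratios. A minimal sketch of
  the arithmetic (illustrative only; `cache_stats` is a hypothetical
  name, not the lvm reporting code):

  ```python
  # Compute the 'lvs' cache fields as defined in the commit message.

  def cache_stats(used_blocks, total_blocks, used_meta, total_meta, dirty):
      """Return (data%, meta%, copy%)."""
      data_pct = 100.0 * used_blocks / total_blocks  # used / total blocks
      meta_pct = 100.0 * used_meta / total_meta      # used / total metadata
      copy_pct = 100.0 * dirty / used_blocks         # dirty / used blocks
      return data_pct, meta_pct, copy_pct
  ```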
* cache: wipe cache-pool before reuse (Zdenek Kabelac, 2014-11-03, 1 file, -0/+4)

  Before we reuse a cache-pool, we need to ensure the metadata volume has
  a wiped header.
* cache: support activation of empty cache-pool (Zdenek Kabelac, 2014-11-03, 1 file, -8/+16)

  When the cache pool is unused, lvm2 code will internally allow
  activating such a cache-pool. The cache-pool is activated as its
  metadata LV, so lvm2 can easily wipe such a volume before the
  cache-pool is reused.
* cache: lv_cache_status (Zdenek Kabelac, 2014-11-03, 6 files, -154/+60)

  Replace lv_cache_block_info() and lv_cache_policy_info() with
  lv_cache_status(), which directly returns the dm_status_cache structure
  together with some calculated values.

  After use, the mem pool stored inside the lv_status_cache structure
  needs to be destroyed.
* cleanup: add arg to _setup_task (Zdenek Kabelac, 2014-11-03, 1 file, -50/+17)

  Add init of no_open_count into _setup_task(). Report the problem as a
  warning (cannot happen anyway).

  Also drop some duplicated debug messages - we have already printed the
  info about the operation, so make the log a bit shorter.
* cleanup: rename virtual_extents (Zdenek Kabelac, 2014-11-03, 5 files, -32/+28)

  Use standard 'virtual_extents' naming. Move virtual_size out of
  lvcreate_params into the 'lcp' struct.
* cleanup: use extents to pass size to /lib (Zdenek Kabelac, 2014-11-03, 6 files, -54/+70)

  The lib takes sizes in extents - do the same for pool_metadata.
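  Passing sizes as extents implies a byte-size-to-extents conversion at
  the tool boundary. A minimal sketch of that rounding, assuming lvm's
  usual round-up-to-whole-extents behavior (`size_to_extents` is a
  hypothetical name):

  ```python
  # Round a byte size up to a whole number of extents.

  def size_to_extents(size_bytes, extent_size_bytes):
      if extent_size_bytes <= 0:
          raise ValueError("extent size must be positive")
      # Ceiling division: any partial extent counts as a full one.
      return (size_bytes + extent_size_bytes - 1) // extent_size_bytes
  ```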
* cache: add wipe_cache_pool (Zdenek Kabelac, 2014-11-03, 2 files, -0/+42)

  Add a function for wiping a cache pool volume. Only an unused
  cache-pool can be wiped.
* cache: allow deactivation of empty pool (Zdenek Kabelac, 2014-11-03, 3 files, -8/+18)

  The tool will use internal activation of an unused cache pool to clear
  the metadata area before the next use of the cache-pool. So allow
  deactivation of an unused pool in case some error happened and we were
  not able to deactivate the pool right after the metadata wipe.
* cache: convert thin-pool (Zdenek Kabelac, 2014-11-03, 2 files, -9/+9)

  Support caching of a thin-pool.

  lvresize needs to be resolved - so far, the user has to manually drop
  the cache-pool before resizing.
* thin: allow to convert chunksize of empty pool (Zdenek Kabelac, 2014-11-03, 1 file, -1/+2)

  When the pool is not used, allow changing its chunksize.
* thin: reporting of thin volumes simplified (Zdenek Kabelac, 2014-11-03, 2 files, -33/+13)

  Simplify reporting of percentages; this allows easier support for more
  types. Move testing of device availability into activate.c.
* pool: validate sizes (Zdenek Kabelac, 2014-11-03, 1 file, -5/+14)

  Sizes of 0 are not supported, nor are negative sizes.
* filters: change return code (Zdenek Kabelac, 2014-11-03, 1 file, -1/+1)

  No data for writing should be seen as 'dump' success. (Reduces one
  <backtrace> in the log.) It has no other effect.
* lvcreate: tolerate defaults (Zdenek Kabelac, 2014-11-03, 1 file, -11/+12)

  lvcreate -m0 and -Mn go with anything. Read-ahead works either with
  pools or with thin/cache, but not with both.