Commit log (each entry: subject, author, date, files changed, lines -/+):
* lvmlockd: extend the sanlock lv when needed [dev-dct-lvmlockd-AA] (David Teigland, 2015-03-05, 6 files, -56/+165)
* lvcreate: call lockd_init_lv later (David Teigland, 2015-03-05, 3 files, -15/+23)
  lockd_init_lv() needs to be called later, after the place where LV names are generated, because it requires the LV name to be known.
* vgrename: for lockd VGs (David Teigland, 2015-03-05, 7 files, -8/+360)
* lvconvert: prevent splitmirror in lockd VGs (David Teigland, 2015-03-05, 1 file, -0/+7)
  To handle this, a new LV lock needs to be created for the newly split LV name.
* lvmlockd: pass cmd name to daemon (David Teigland, 2015-03-05, 3 files, -1/+10)
  It's helpful to see the command name being processed.
* vgchange: lock-start and lock-stop with no arg (David Teigland, 2015-03-05, 1 file, -2/+2)
  No arg means all vgs.
* lvrename: support lock renaming with lvmlockd (David Teigland, 2015-03-05, 2 files, -6/+47)
  The steps are:
  - lock the old LV name
  - create a lock for the new LV name (creates new lock_args)
  - lock the new LV name (only needed if the LV is active)
  - set new lock_args in the LV
  - rename the LV
  - unlock the old LV name
  - remove the lock for the old LV name (frees old lock_args)
* lvmlockd: systemd service files (David Teigland, 2015-03-05, 2 files, -0/+43)
  lvm2-lvmlockd: start/stop the lvmlockd daemon.
  lvm2-lvmlocking: start/stop lockspaces for lockd VGs, and activate/deactivate LVs in those VGs after/before the lockspaces have started/stopped.
  Starting/joining the VG lockspaces can take quite a while, and require the lock manager used by the VGs to be running.
* pvscan: notify lvmlockd of lvmetad updates (David Teigland, 2015-03-05, 3 files, -0/+47)
  lvmlockd keeps track of local vgs so that it can quickly determine if a vg lock request can be ignored (for a local vg) or is needed (for a lockd vg).
* vgchange: lock-start options for auto and wait (David Teigland, 2015-03-05, 5 files, -2/+111)
  Adds options to use with --lock-start: --lock-opt wait|auto|autowait
    wait:     wait for the start to finish
    auto:     use when the system is running the command
    autowait: both auto and wait
  The auto option enables the use of lvm.conf activation_lock_start_list, which defines VGs that should be automatically started by the system. This is similar to the auto_activation_volume_list used when the system automatically activates LVs.
* vgchange: allow lock_type to be changed (David Teigland, 2015-03-05, 4 files, -5/+250)
  When lvm is using clvm, the following are possible:
  - change lock type from none to clvm
  - change lock type from clvm to none

  When lvm is using lvmlockd, the following are possible:
  - change lock type from none to clvm (with warning)
  - change lock type from clvm to none
  - change lock type from none to a lockd type (sanlock|dlm)
  - change lock type from clvm to a lockd type (sanlock|dlm)
  - change lock type from lockd type to none (TODO)
  - change lock type from lockd type to clvm (TODO)

  The TODO variations are still missing the steps to undo/reverse the existing lockd type, so are currently disabled.

  A special 'vgchange --lock-type none --force' can be used to forcibly clear all locking settings from the VG metadata, skipping any steps that would normally be done to cleanly undo/reverse the existing lock type.
* lvmlockd: add daemon (David Teigland, 2015-03-05, 10 files, -3/+10289)
* libdaemon: allow main processing function to be specified (David Teigland, 2015-03-05, 2 files, -1/+8)
* lvmlockd: LV locking (David Teigland, 2015-03-05, 11 files, -5/+571)
  An lv lock is a simple ex/sh lock that is:
  - acquired before the LV is activated
  - released after the LV is deactivated
  If the lock cannot be acquired, the LV is not activated.

  Unlike gl and vg locks, lv locks are persistent; they remain after the command that acquired them ends. They are held by the lvmlockd process. lvmlockd uses persistent locks in the lock manager, so if lvmlockd exits, the lv locks are not dropped and the LVs remain protected.

  The lvchange/vgchange -a option gains a new "s" arg to represent shared locks ("e" already means exclusive):
    -aey = activate using an ex lock
    -asy = activate using a sh lock
  An unspecified activation mode, e.g. -ay, defaults to the ex mode in lvmlockd VGs.

  If one host has an LV activated with an ex lock, any other host that attempts to activate the LV (in any mode) will fail to acquire the lock and will report the error. If a host has an LV activated with a sh lock, any other host can also activate that LV with a sh lock, but any attempt to activate the LV with an ex lock will fail.

  Commands that modify an LV (as opposed to changing the activation state) also use lv locks. These are lvconvert and other variations of lvchange. When the LV is already active (and locked), the lock requests do nothing, but if the LV is not active, these commands acquire the LV lock prior to changing the LV. If the LV is active and locked on another host, the lvchange/lvconvert command will fail when the LV lock request fails.

  Removing an LV with lvremove must also acquire the lv lock prior to removing the LV. If the LV is active and locked on another host, lvremove will fail.

  A thin pool LV has an lv lock that applies to all usage of that pool. Thin LVs do not have individual LV locks. A cache pool LV does not have an independent lv lock. When the cache pool LV is linked to an origin LV, the lock of the origin LV will protect the combined origin + cache pool.
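  For illustration, the ex/sh activation rules above can be restated as a small compatibility check. This is only a sketch of the semantics described in this message; the enum and function names are made up for illustration and do not exist in the lvm/lvmlockd code.

      /* Sketch of the LV lock compatibility rules described above (names are illustrative). */
      enum lv_lock_mode { LK_NONE, LK_SH, LK_EX };

      /* Return 1 if a host may activate with 'req' while another host holds 'held'. */
      static int lv_lock_compatible(enum lv_lock_mode held, enum lv_lock_mode req)
      {
          if (held == LK_NONE)
              return 1;             /* no other holder: ex or sh both succeed */
          if (held == LK_SH && req == LK_SH)
              return 1;             /* sh holders allow further sh activations */
          return 0;                 /* any combination involving ex fails */
      }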
* lvmlockd: VG locking (David Teigland, 2015-03-05, 19 files, -42/+419)
  The VG lock is a simple read/write lock that protects a VG's metadata. The VG lock is used in shared mode to read the VG and in exclusive mode to modify the VG.

  The function to acquire/release the VG lock is lockd_vg():
  . To acquire the vg lock in ex mode for writing:  lockd_vg(cmd, vg_name, "ex", 0);
  . To acquire the vg lock in sh mode for reading:  lockd_vg(cmd, vg_name, "sh", 0);
  . To release the vg lock:                         lockd_vg(cmd, vg_name, "un", 0);

  The lockd_vg() function sends a message to lvmlockd, asking for the lock in the specified mode. lvmlockd acquires the lock from the underlying lock manager, and sends the result back to the command. When the command exits, or calls lockd_vg("un"), lvmlockd releases the lock in the underlying lock manager. When lvm is compiled without lvmlockd support, all the lockd_vg calls simply compile to success (1).

  Using the vg lock in commands is simple:

  . A command that wants to read the VG should acquire the vg lock in the sh mode, then read the VG:
      lockd_vg(cmd, vg_name, "sh", 0);
      vg = vg_read(cmd, vg_name);
      lockd_vg(cmd, vg_name, "un", 0);

  . A command that wants to write the VG should acquire the vg lock in the ex mode, then make changes to the VG and write it:
      lockd_vg(cmd, vg_name, "ex", 0);
      vg = vg_read(cmd, vg_name);
      ...
      vg_write(vg); vg_commit(vg);
      lockd_vg(cmd, vg_name, "un", 0);

  When a command processes multiple VGs, e.g. using toollib process_each, the VG lock should be explicitly unlocked when the command is done processing each VG, i.e. lockd_vg(cmd, vg_name, "un", 0) as shown above. When a command processes a single VG or a pair of VGs, the command can simply exit and lvmlockd will automatically unlock the VG lock(s) that the command had acquired.

  Locking conflicts:
  When a command calls lockd_vg(), the lock request is passed to lvmlockd. lvmlockd makes the corresponding lock request in the lock manager using a non-blocking request. If another command on another host holds the vg lock in a conflicting mode, the lock request fails, and lvmlockd returns the failure to the command. The command reports the lock conflict and fails.

  A future option may enable lvmlockd to automatically retry lock requests that fail due to conflicts with locks of commands running concurrently on other hosts. (These retries could be disabled or limited to a certain number via a command or config option.) This way, simple, transient locking conflicts between commands on different hosts would be hidden.

  Caching:
  lvmlockd uses the lvb in the VG lock to hold the VG seqno. When a command writes VG metadata with a new seqno (under an ex lock), it sends lvmlockd the new VG seqno by calling lockd_vg_update(vg). When lvmlockd unlocks the ex VG lock, it saves this new seqno in the vg lock's lvb. When other hosts next acquire the VG lock, they will read the lvb, see that the new seqno is higher than the last seqno they saw, and know that their cached copy of the VG is stale. When lvmlockd sees this, it invalidates the cached copy of the VG in lvmetad. When a command then reads the VG from lvmetad, it will see that it's stale, will reread the latest VG from disk, and update the cached copy in lvmetad.

  These commands do not yet work with lvmlockd lock_types (sanlock|dlm):
  . vgsplit
  . vgmerge
  . vgrename
  . lvrename
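  For illustration, here is a minimal C sketch of the two usage patterns quoted above. It uses the simplified call forms shown in this message (lockd_vg, vg_read, vg_write, vg_commit); the real functions take more parameters and do fuller error handling, and lvm's internal headers are assumed for the struct types, so treat this as a sketch rather than the actual implementation.

      /* Sketch only: simplified call forms as quoted in the commit message. */

      /* Read-only access: vg lock in sh mode around the read. */
      static int report_vg(struct cmd_context *cmd, const char *vg_name)
      {
          struct volume_group *vg;

          if (!lockd_vg(cmd, vg_name, "sh", 0))
              return 0;                     /* conflicting lock held on another host */
          vg = vg_read(cmd, vg_name);
          /* ... report from vg ... */
          lockd_vg(cmd, vg_name, "un", 0);  /* or let lvmlockd unlock when the command exits */
          return 1;
      }

      /* Modification: vg lock in ex mode around read/modify/write/commit. */
      static int change_vg(struct cmd_context *cmd, const char *vg_name)
      {
          struct volume_group *vg;

          if (!lockd_vg(cmd, vg_name, "ex", 0))
              return 0;
          vg = vg_read(cmd, vg_name);
          /* ... modify vg, then pass the new seqno via lockd_vg_update(vg) ... */
          vg_write(vg);
          vg_commit(vg);
          lockd_vg(cmd, vg_name, "un", 0);
          return 1;
      }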
* lvmlockd: Global locking (David Teigland, 2015-03-05, 26 files, -3/+512)
  The global lock (gl) is a simple read/write lock that protects global lvm metadata, i.e. metadata or information that is not isolated to a single VG. This global information includes:
  A) The VG name space.
  B) PVs or devices that do not belong to a VG.

  The function to acquire/release the global lock is lockd_gl():
  . To acquire the gl in exclusive mode for writing/changing A or B:  lockd_gl(cmd, "ex", 0);
  . To acquire the gl in shared mode for reading/listing A or B:      lockd_gl(cmd, "sh", 0);
  . To release the gl:                                                lockd_gl(cmd, "un", 0);

  The lockd_gl() function sends a message to lvmlockd, asking for the lock in the specified mode. lvmlockd acquires the lock from the underlying lock manager, and sends the result back to the command. When the command exits, or calls lockd_gl("un"), lvmlockd releases the lock in the underlying lock manager. When lvm is compiled without lvmlockd support, all the lockd_gl() calls simply compile to success (1).

  Using the global lock in commands is simple:
  . A command that wants to get an accurate list of all VG names should acquire the gl in the sh mode, then read the list.
  . A command that wants to add or remove a VG name should acquire the gl in the ex mode, then add/remove the name.
  . A command that wants to get an accurate list of all PVs/devices should acquire the gl in the sh mode, then read the list.
  . A command that wants to change a device to/from a PV or add/remove a PV to/from a VG should acquire the gl in the ex mode, then make the change.
  . A command that wants to read the properties of an orphan PV should acquire the gl in the sh mode, then read the properties.
  . A command that wants to change the properties of an orphan PV should acquire the gl in the ex mode, then change the properties.
  . The gl is acquired at the start of a command before any processing. This is necessary so that the cached information used by the command is valid and up to date (see caching below).
  . A command generally knows at the outset which of the things above it is going to do, so it knows which lock mode to acquire.
  . If a command is given a tag, the tag matching requires a complete and accurate search of all VGs, and therefore implies that the global shared lock is needed.

  Locking conflicts:
  When a command calls lockd_gl(), the lock request is passed to lvmlockd. lvmlockd makes the corresponding lock request in the lock manager using a non-blocking request. If another command on another host holds the gl in a conflicting mode, the lock request fails, and lvmlockd returns the failure to the command. The command reports the lock conflict and fails.

  If a reporting command (sh lock) conflicts with another command using an ex lock (like vgextend), then the reporting command can simply be rerun. A future option may enable retrying sh lock requests within lvmlockd, making simple, incidental conflicts invisible. (These retries could be disabled or limited to a certain number via a command or config option.)

  If a command is changing global state (using an ex lock), the conflict could be with another sh lock (e.g. a reporting command) or another ex lock (another command changing global state). The lock manager does not say which type of conflict it was. In the case of an ex/sh conflict, a retry of the ex request could be automatic, but an ex/ex conflict would generally want inspection before retrying. Uncoordinated commands concurrently changing the same global state would be uncommon, and a warning of the conflict with failure is probably preferred, so the state can be inspected. Still, the same retry options as above could be applied if needed.

  Caching:
  In addition to ex/sh mutual exclusion, the lock manager allows an application to stash a chunk of its own data within a lock. This chunk of data is called the "lock value block" (lvb). This opaque data does not affect the locking behavior, but offers a convenient way for applications to pass around extra information related to the lock. The lvb can be read and written by any host using the lock (the application can read the lvb content when acquiring the lock, and write it when releasing an ex lock). (The dlm lvb is only 32 bytes. The sanlock lvb is up to 512 bytes.) An application can use the lvb for anything it wishes, and it's often used to help manage the cache associated with the lock.

  lvmlockd stores a counter/version number in the lvb. It's incremented when anything protected by the gl (A or B above) is changed. When other hosts see that the version number has increased, they know that A or B have changed. lvmlockd refines this further by using a second version number that is incremented only when the VG namespace (A) changes. The lockd_gl() flag UPDATE_NAMES tells lvmlockd that the VG namespace is being changed.

  When lvmlockd acquires the gl and sees that the counters in the lvb have been incremented, it knows that the objects protected by the gl have been changed by another host. This implies that the local host's cache of this global information is likely out of date, and needs to be refreshed from disk. When this happens, lvmlockd sends a message to lvmetad to invalidate the cached global information in lvmetad. When a command sees that the data from lvmetad is stale, it reads from disk and updates lvmetad.

  All users of the global lock want to use valid information from lvmetad, so they always check that the lvmetad cache is valid before using it, and refresh it if needed. This check and refresh is done by lvmetad_validate_global_cache(). Instead of always calling lockd_gl()+lvmetad_validate_global_cache() back to back, lockd_gl() calls lvmetad_validate_global_cache() as the final step before returning.

  In the future, more optimizations can be made related to global cache updating. Similar to the UPDATE_NAMES method, commands can tell lvmlockd more details about what they are changing under the global lock. lvmlockd can propagate these details to others using the gl lvb. Knowing these details, other hosts can limit their rescanning to only what's necessary given the specific changes.

  vgcreate with sanlock:
  With the sanlock lock_type, the sanlock locks are stored on disk, on a hidden LV within the VG. The first sanlock VG created will hold the global lock. Creating the first sanlock VG is a special case because no global lock will exist until after the VG is created on disk. vgcreate calls lockd_gl_create() to handle this special case. The comments in that function explain the details of how it works.

  command gl requirements:
  Some commands are listed twice if they have two different behaviors (depending on args) that need different gl usage. As listed above (in A, B), the reasons for using the gl are:
  A) reading or changing the VG name space
  B) reading or changing PV orphans (orphan properties or assignment to VGs)

  command:   gl mode used, reason(s) gl is needed
  vgsplit:   ex, add vg name
  vgrename:  ex, add vg name
  vgcreate:  ex, add vg name, rem pv orphan
  vgremove:  ex, rem vg name, add pv orphan
  vgmerge:   ex, rem vg name
  vgextend:  ex, rem pv orphan
  vgreduce:  ex, add pv orphan
  vgscan:    sh, get vg names
  vgs:       sh, get vg names (only if tags used or no args)
  vgchange:  sh, get vg names (only if tags used or no args)
  vgchange:  ex, change vg system_id/uuid/lock_type (equivalent to name)
  pvcreate:  ex, add pv orphan
  pvremove:  ex, rem pv orphan
  pvdisplay: sh, get vg names
  pvscan:    sh, get vg names
  pvresize:  sh, get vg names
  pvresize:  ex, change pv orphan (only if pv is an orphan)
  pvchange:  sh, get vg names
  pvchange:  ex, change pv orphan (only if pv is an orphan)
  lvchange:  sh, get vg names (only if tags used)
  lvscan:    sh, get vg names
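  For illustration, a minimal C sketch of the sh and ex usage patterns above, using the simplified lockd_gl(cmd, mode, flags) form quoted in this message. The UPDATE_NAMES constant is written as named in the text; the exact identifier in the code may differ, and the helper functions here are hypothetical, with lvm's internal headers assumed.

      /* Sketch only: shows where the gl is taken relative to the work it protects. */

      static int list_vg_names(struct cmd_context *cmd)
      {
          if (!lockd_gl(cmd, "sh", 0))      /* sh: reading the VG name space (A) */
              return 0;                     /* conflict reported by lvmlockd */
          /* lockd_gl() has already run lvmetad_validate_global_cache(),
           * so the list read from lvmetad here is current. */
          /* ... read VG names ... */
          lockd_gl(cmd, "un", 0);
          return 1;
      }

      static int add_vg_name(struct cmd_context *cmd)
      {
          /* ex: changing the VG name space, so pass the namespace flag so that
           * lvmlockd bumps the namespace version in the gl lvb on release. */
          if (!lockd_gl(cmd, "ex", UPDATE_NAMES))
              return 0;
          /* ... create the VG and write its metadata ... */
          lockd_gl(cmd, "un", 0);
          return 1;
      }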
* lvmlockd: start and stop VG lockspace (David Teigland, 2015-03-05, 7 files, -6/+206)
  A lock manager requires an application to "start" or "join" a lockspace before using locks from it. Start is the point at which the lock manager on a host begins interacting with other hosts to coordinate access to locks in the lockspace. Similarly, an application needs to "stop" or "leave" a lockspace when it's done using locks from the lockspace so the lock manager can shut down and clean up the lockspace.

  lvmlockd uses a lockspace for each sanlock|dlm VG, and the lockspace for each VG needs to be started before lvm can use it. These commands tell lvmlockd to start or stop the lockspace for a VG:
    vgchange --lock-start vg_name
    vgchange --lock-stop vg_name

  To start the lockspace for a VG, lvmlockd needs to know which lock manager (sanlock or dlm) to use, and this is stored in the VG metadata as lock_type = "sanlock|dlm", along with data that is specific to the lock manager for the VG, saved as lock_args. For sanlock, lock_args is the location of the locks on disk. For dlm, lock_args is the name of the cluster the dlm should use.

  So, the process for starting a VG includes:
  - Reading the VG without a lock (no lock can be acquired because the lockspace is not started).
  - Taking the lock_type and lock_args strings from the VG metadata.
  - Asking lvmlockd to start the VG lockspace, providing the lock_type and lock_args strings which tell lvmlockd exactly which lock manager is needed.
  - lvmlockd will ask the specific lock manager to join the lockspace.

  The VG read in the first step, without a lock, is not used for anything except getting the lock information needed to start the lockspace. Subsequent use of the VG would use the VG lock.

  In the case of a sanlock VG, there is an additional step in the sequence. Between the second and third steps, the vgchange lock-start command needs to activate the internal LV in the VG that holds the sanlock locks. This LV must be active before sanlock can join the lockspace.

  Starting and stopping VGs would typically be done automatically by the system, similar to the way LVs are automatically activated by the system. But it is always possible to directly start/stop VG lockspaces, as it is always possible to directly activate/deactivate LVs. Automatic VG start/stop will be added by a later patch, using the basic functionality from this patch.
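  For illustration, the start sequence listed above written as a hypothetical C helper. The helper names used here (activate_sanlock_lv, lockd_start_vg) are illustrative stand-ins for the steps in this message, not necessarily the actual lvm functions, and lvm's internal headers plus <string.h> are assumed.

      /* Hypothetical sketch of the lockspace start sequence described above. */
      static int start_vg_lockspace(struct cmd_context *cmd, const char *vg_name)
      {
          struct volume_group *vg;

          /* 1. Read the VG without a lock; the lockspace is not started yet, so no
           *    lock can be taken.  This copy is used only for lock_type/lock_args. */
          vg = vg_read(cmd, vg_name);

          /* 2. (sanlock only) Activate the internal LV holding the sanlock locks;
           *    it must be active before sanlock can join the lockspace. */
          if (!strcmp(vg->lock_type, "sanlock"))
              activate_sanlock_lv(cmd, vg);             /* illustrative helper */

          /* 3. Pass lock_type and lock_args to lvmlockd, which asks the matching
           *    lock manager (sanlock or dlm) to join the lockspace. */
          return lockd_start_vg(cmd, vg);               /* illustrative helper */
      }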
* lvmlockd: vgcreate/vgremove call init_vg/free_vg (David Teigland, 2015-03-05, 5 files, -2/+636)
  vgcreate calls lockd_init_vg() to do any create/initialize steps that are needed in lvmlockd for the given lock_type.

  vgremove calls lockd_free_vg_before() to do any removal/freeing steps that are needed in lvmlockd for the given lock_type before the VG is removed on disk.

  vgremove calls lockd_free_vg_final() to do any removal/freeing steps that are needed in lvmlockd for the given lock_type after the VG is removed on disk.

  When the lock_type is sanlock, the init/free also include lvm client-side steps to create/remove an internal LV on which sanlock will store the locks for the VG.
* lvmlockd: lock_type and lvmlockd setup (David Teigland, 2015-03-05, 20 files, -23/+542)
  The locking required to access a VG is a property of the VG, and is specified in the VG metadata as the "lock_type". When lvm sees a VG, it looks at the VG's lock_type to determine if locks are needed and from where:

  - If the VG has no lock_type, or lock_type "none", then no locks are needed. This is a "local VG". If the VG is visible to multiple hosts, the VG system_id provides basic protection. A VG with an unmatching system_id is inaccessible.

  - If the VG has lock_type "sanlock" or "dlm", then locks are needed from lvmlockd, which acquires locks from either sanlock or dlm respectively. This is a "lockd VG". If lvmlockd or the supporting lock manager are not running, then the lockd VG is inaccessible.

  - If the VG has the CLUSTERED status flag (or lock_type "clvm"), then locks are needed from clvmd. This is a "clvm VG". If clvmd or the supporting clustering or locking are not running, then the clvm VG is inaccessible.

  Settings in lvm.conf tell lvm commands which locking daemon to use:
  - global/use_lvmlockd=1: tells lvm to use lvmlockd when accessing VGs with lock_type sanlock|dlm.
  - global/locking_type=3: tells lvm to use clvmd when accessing VGs with the CLUSTERED flag (or lock_type clvm).

  LVM commands cannot use both lvmlockd and clvmd at the same time:
  - use_lvmlockd=1 should be combined with locking_type=1
  - locking_type=3 (clvmd) should be combined with use_lvmlockd=0

  So, different configurations allow access to different VGs:
  - When configured to use lvmlockd, lvm commands can access VGs with lock_type sanlock|dlm, and VGs with CLUSTERED are ignored.
  - When configured to use clvmd (locking_type 3), lvm commands can access VGs with the CLUSTERED flag, and VGs with lock_type sanlock|dlm are ignored.
  - When configured to use neither lvmlockd nor clvmd, lvm commands can access only local VGs. lvm will ignore VGs with lock_type sanlock|dlm, and will ignore VGs with CLUSTERED (or lock_type clvm).

  A VG is created with a specific lock_type:
  - vgcreate --lock_type <arg> is a new syntax that can specify the lock_type directly. <arg> may be: none, clvm, sanlock, dlm. sanlock|dlm require lvmlockd to be configured (in lvm.conf) and running. clvm requires clvmd to be configured (in lvm.conf) and running.
  - vgcreate --clustered y (or -cy) is the old syntax that still works, but it is not preferred because the lock_type is not explicit. When clvmd is configured, -cy creates a VG with lock_type clvm. When lvmlockd is configured, -cy creates a VG with lock_type sanlock, but this can be changed to dlm with lvm.conf vgcreate_cy_lock_type.

  Notes:
  In the VG metadata, the lock_type string is accompanied by a lock_args string. The lock_args string is lock-manager-specific data associated with the VG: for sanlock, the location on disk of the locks; for dlm, the cluster name.

  In a VG with lock_type sanlock|dlm, each LV also has a lock_type and lock_args in the metadata. The LV lock_type currently always matches the lock_type of the VG. For sanlock, the LV lock_args specify the disk location of the LV lock.
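  For illustration, the dispatch rules above can be restated as a small helper. This is only a sketch of the rules in this message; the enum and function are made up for illustration.

      #include <string.h>

      /* Sketch of the lock_type / configuration rules described above. */
      enum vg_locking { LOCKING_NONE, LOCKING_LVMLOCKD, LOCKING_CLVMD, LOCKING_IGNORE_VG };

      static enum vg_locking locking_for_vg(const char *lock_type,
                                            int use_lvmlockd,  /* global/use_lvmlockd */
                                            int locking_type)  /* global/locking_type */
      {
          if (!lock_type || !strcmp(lock_type, "none"))
              return LOCKING_NONE;                      /* local VG; system_id protects it */

          if (!strcmp(lock_type, "sanlock") || !strcmp(lock_type, "dlm"))
              return use_lvmlockd ? LOCKING_LVMLOCKD    /* lockd VG */
                                  : LOCKING_IGNORE_VG;  /* lvmlockd not configured */

          if (!strcmp(lock_type, "clvm"))               /* or the CLUSTERED flag */
              return locking_type == 3 ? LOCKING_CLVMD
                                       : LOCKING_IGNORE_VG;

          return LOCKING_IGNORE_VG;                     /* unknown lock_type */
      }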
* lvmlockd: add lock_type and lock_args (David Teigland, 2015-03-05, 4 files, -1/+28)
  lock_type and lock_args are used in VG and LV structures and in text format.
* lvmcache: reread a VG if the lvmetad copy is stale (David Teigland, 2015-03-05, 4 files, -57/+165)
  ...and update the lvmetad copy after it is reread from disk.

  To test this:
  - Create VG foo that is visible to two hosts and usable by both.
  - On both run 'lvmeta vg_lookup_name foo' to see the cached copy of foo and its seqno. Say the seqno is 8.
  - On host1 run 'lvcreate -n lv1 -L1G foo'.
  - On host1 run 'lvmeta vg_lookup_name foo' to see the new version of foo in lvmetad. It should have seqno 9.
  - On host2 run 'lvmeta vg_lookup_name foo' to see the old cached version of foo. It should have seqno 8.
  - On host2 run 'lvmeta set_vg_version <uuid of foo> 9'.
  - On host2 run 'lvmeta vg_lookup_name foo' to see that the vg_invalid config node is reported along with the old cached version of foo.
  - On host2 run 'lvs foo'. It should reread foo from disk and display lv1.
  - On host2 run 'lvmeta vg_lookup_name foo' to see that the cached version of foo is now updated to seqno 9, and the vg_invalid node is not reported.
* lvmcache: add function to validate and update global cache (David Teigland, 2015-03-05, 2 files, -0/+85)
  Will be used in later patches to check and update the local lvmetad global cache when needed.
* lvmeta: new program to interact with lvmetad (David Teigland, 2015-03-05, 2 files, -1/+188)
  Useful for debugging.
* lvmetad: add invalidation method (David Teigland, 2015-03-05, 1 file, -1/+201)
  Add the ability to invalidate global or individual VG metadata. The invalid state is returned to lvm commands along with the metadata. This allows lvm commands to detect stale metadata from the cache and reread the latest metadata from disk (in a subsequent patch). These changes do not change the protocol or compatibility between lvm commands and lvmetad.

  Global information
  ------------------
  Global information refers to metadata that is not isolated to a single VG, e.g. the list of vg names, or the list of pvs.

  When an external system, e.g. a locking system, detects that global information has been changed from another host (e.g. a new vg has been created), it sends lvmetad the message: set_global_info: global_invalid=1. lvmetad sets the global invalid flag to indicate that its cached data is stale.

  When lvm commands request information from lvmetad, lvmetad returns the cached information, along with an additional top-level config node called "global_invalid". This new info tells the lvm command that the cached information is stale.

  When an lvm command sees global_invalid from lvmetad, it knows it should rescan devices and update lvmetad with the latest information. When this is complete, it sends lvmetad the message: set_global_info: global_invalid=0, and lvmetad clears the global invalid flag. Further lvm commands will use the lvmetad cache until it is invalidated again.

  The most common commands that cause global invalidation are vgcreate and vgextend. These are uncommon compared to commands that report global information, e.g. vgs. So, the percentage of lvmetad replies containing global_invalid should be very small.

  VG information
  --------------
  VG information refers to metadata that is isolated to a single VG, e.g. an LV or the size of an LV.

  When an external system determines that VG information has been changed from another host (e.g. an lvcreate or lvresize), it sends lvmetad the message: set_vg_info: uuid=X version=N. X is the VG uuid, and N is the latest VG seqno that was written. lvmetad checks the seqno of its cached VG, and if the version from the message is newer, it sets an invalid flag for the cached VG. The invalid flag, along with the newer seqno, are saved in a new vg_info struct.

  When lvm commands request VG metadata from lvmetad, lvmetad includes the invalid flag along with the VG metadata. The lvm command checks for this flag, and rereads the VG from disk if it is set. The VG read from disk is sent to lvmetad. lvmetad sees that the seqno in the new version matches the seqno from the last set_vg_info message, and clears the vg invalid flag. Further lvm commands will use the VG metadata from lvmetad until it is next invalidated.
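  For illustration, the per-VG invalidation logic above restated in C. The message mentions a new vg_info struct; the field and function names below are assumptions made only for this sketch.

      #include <stdint.h>

      /* Sketch of the seqno comparison described above (names are illustrative). */
      struct vg_info {
          int64_t  external_version;   /* seqno N from the last set_vg_info message */
          unsigned invalid : 1;        /* cached copy of this VG is stale */
      };

      /* set_vg_info: uuid=X version=N -- mark the cached VG stale if N is newer. */
      static void handle_set_vg_info(struct vg_info *info, int64_t cached_seqno, int64_t n)
      {
          if (n > cached_seqno) {
              info->external_version = n;
              info->invalid = 1;       /* returned to commands along with the metadata */
          }
      }

      /* A command sent back the VG it reread from disk with seqno new_seqno. */
      static void handle_vg_update(struct vg_info *info, int64_t new_seqno)
      {
          if (new_seqno >= info->external_version)
              info->invalid = 0;       /* cache is current again */
      }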
* man: add info to lvmsystemid (David Teigland, 2015-03-05, 1 file, -7/+14)
  ...about losing access to a VG if lvm is downgraded to an earlier version.
* system_id: avoid munging vg and lv fields (David Teigland, 2015-03-05, 1 file, -18/+10)
  Munge the WRITE/WRITE_LOCKED flags in a temp variable instead of in the vg/lv fields.
* system_id: undo the previous changes to the lvm1 code (David Teigland, 2015-03-05, 2 files, -16/+2)
  The system_id and lock_type compat changes do not apply to the lvm1 code.
* system_id: make new VGs read-only for old lvm versions (David Teigland, 2015-03-05, 9 files, -5/+125)
  Previous versions of lvm will not obey the restrictions imposed by the new system_id, and would allow such a VG to be written. So, a VG with a new system_id is further changed to force previous lvm versions to treat it as read-only. This is done by removing the WRITE flag from the metadata status line of these VGs, and putting a new WRITE_LOCKED flag in the flags line of the metadata.

  Versions of lvm that recognize WRITE_LOCKED also obey the new system_id. For these lvm versions, WRITE_LOCKED is identical to WRITE, and the rules associated with matching system_ids are imposed.

  A new VG lock_type field is also added that causes the same WRITE/WRITE_LOCKED transformation when set. A previous version of lvm will also see a VG with lock_type as read-only.

  Versions of lvm that recognize WRITE_LOCKED must also obey the lock_type setting. Until the lock_type feature is added, lvm will fail to read any VG with lock_type set and report an error about an unsupported lock_type. Once the lock_type feature is added, lvm will allow VGs with lock_type to be used according to the rules imposed by the lock_type.

  When both system_id and lock_type settings are removed, a VG is written with the old WRITE status flag, and without the new WRITE_LOCKED flag. This allows old versions of lvm to use the VG as before.
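  For illustration, the status/flags transformation described above as a small C sketch. The flag bit values and the helper are assumptions for this sketch only; the real flag definitions live in lvm's metadata code.

      #include <stdint.h>

      /* Illustrative bit values; not the real definitions. */
      #define WRITE        UINT64_C(0x1)   /* old status flag, obeyed by old lvm versions */
      #define WRITE_LOCKED UINT64_C(0x2)   /* new flag, understood only by newer versions */

      static void adjust_write_flags(uint64_t *status, uint64_t *flags,
                                     int has_new_system_id, int has_lock_type)
      {
          if (has_new_system_id || has_lock_type) {
              /* Old versions see no WRITE flag and treat the VG as read-only;
               * newer versions treat WRITE_LOCKED as WRITE plus the system_id
               * and lock_type rules. */
              *status &= ~WRITE;
              *flags  |= WRITE_LOCKED;
          } else {
              /* Neither setting present: write the old WRITE status flag so
               * old versions can use the VG as before. */
              *status |= WRITE;
              *flags  &= ~WRITE_LOCKED;
          }
      }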
* Revert "systemid: Add ACCESS_NEEDS_SYSTEM_ID VG flag."David Teigland2015-03-057-28/+11
| | | | | | This reverts commit bfbb5d269aa1ed56d9308117b57d4d2da49d53f6. This will be done differently.
* system_id: enable the options in config file and command line (David Teigland, 2015-03-05, 2 files, -6/+9)
* report: fix seg_monitor field to display monitoring status for thick snapshots and mirrors (Peter Rajnoha, 2015-03-05, 3 files, -7/+6)
  The seg_monitor did not display monitored status for thick snapshots and mirrors (with mirror log *not* mirrored). The seg monitor did work correctly even before for other segtypes - thins and raids.

  Before (mirrors and snapshots; only mirrors with mirrored log properly displayed monitoring status):

  [0] f21/~ # lvs -a -o lv_name,lv_layout,lv_role,seg_monitor vg
    LV                                     Layout  Role                          Monitor
    mirror                                 mirror  public
    [mirror_mimage_0]                      linear  private,mirror,image
    [mirror_mimage_1]                      linear  private,mirror,image
    [mirror_mlog]                          linear  private,mirror,log
    mirror_with_mirror_log                 mirror  public                        monitored
    [mirror_with_mirror_log_mimage_0]      linear  private,mirror,image
    [mirror_with_mirror_log_mimage_1]      linear  private,mirror,image
    [mirror_with_mirror_log_mlog]          mirror  private,mirror,log            monitored
    [mirror_with_mirror_log_mlog_mimage_0] linear  private,mirror,image
    [mirror_with_mirror_log_mlog_mimage_1] linear  private,mirror,image
    thick_origin                           linear  public,origin,thickorigin
    thick_snapshot                         linear  public,snapshot,thicksnapshot

  With this patch applied (monitoring status displayed for all mirrors and snapshots):

  [0] f21/~ # lvs -a -o lv_name,lv_layout,lv_role,seg_monitor vg
    LV                                     Layout  Role                          Monitor
    mirror                                 mirror  public                        monitored
    [mirror_mimage_0]                      linear  private,mirror,image
    [mirror_mimage_1]                      linear  private,mirror,image
    [mirror_mlog]                          linear  private,mirror,log
    mirror_with_mirror_log                 mirror  public                        monitored
    [mirror_with_mirror_log_mimage_0]      linear  private,mirror,image
    [mirror_with_mirror_log_mimage_1]      linear  private,mirror,image
    [mirror_with_mirror_log_mlog]          mirror  private,mirror,log            monitored
    [mirror_with_mirror_log_mlog_mimage_0] linear  private,mirror,image
    [mirror_with_mirror_log_mlog_mimage_1] linear  private,mirror,image
    thick_origin                           linear  public,origin,thickorigin
    thick_snapshot                         linear  public,snapshot,thicksnapshot monitored
* post-release (Alasdair G Kergon, 2015-03-04, 4 files, -2/+8)
* pre-release [v2_02_117] (Alasdair G Kergon, 2015-03-04, 4 files, -11/+11)
* cleanup: tools: "or use -S for selection" --> "or use --select for selection" (Peter Rajnoha, 2015-03-04, 5 files, -5/+5)
* systemid: Disable --systemid. (Alasdair G Kergon, 2015-03-04, 1 file, -5/+2)
  Disable use of --systemid for this release.
* config: add CFG_DISABLED flag and mark system_id settings with that flag (Peter Rajnoha, 2015-03-04, 3 files, -20/+41)
  If a configuration setting is marked in config_settings.h with the CFG_DISABLED flag, the default value is always used for that setting, no matter if it's defined by the user (in --config/lvm.conf/lvmlocal.conf). A warning message is displayed if this happens.

  For example:

  [1] f21/~ # lvm dumpconfig --validate
    WARNING: Configuration setting global/system_id_source is disabled. Using default value.
    LVM configuration valid.

  [1] f21/~ # pvs
    WARNING: Configuration setting global/system_id_source is disabled. Using default value.
    PV         VG   Fmt  Attr PSize   PFree
    /dev/sdb        lvm2 ---  128.00m 128.00m
  ...
* vgremove: select: direct selection to be done per-VG, not per-LV (Peter Rajnoha, 2015-03-04, 2 files, -1/+25)
  Though vgremove operates per VG by definition, internally, it actually means iterating over each LV it contains to do the remove. So we need to direct selection a bit in this case so that the selection is done per-VG, not per-LV.

  That means: use processing handle with void_handle.internal_report_for_select=0 for the process_each_lv_in_vg that is called later in vgremove_single fn. We need to disable internal selection for process_each_lv_in_vg here as selection is already done by process_each_vg which calls vgremove_single. Otherwise selection would be done per-LV and not per-VG as we intend!

  An intra-release fix for commit 00744b053f395be79ab1cb80fdf7342548aa79e2.
* systemid: Add ACCESS_NEEDS_SYSTEM_ID VG flag. (Alasdair G Kergon, 2015-03-04, 8 files, -11/+29)
  Set the ACCESS_NEEDS_SYSTEM_ID VG status flag whenever there is a non-lvm1 system_id set. Prevents concurrent access from older LVM2 versions. Not set on VGs that bear a system_id only due to conversion from lvm1 metadata.
* systemid: Init and merge lvm2 and lvm1 fields. (Alasdair G Kergon, 2015-03-04, 8 files, -8/+19)
  Use the system_id field in preference to lvm1_system_id. Initialise both for now.
* vgchange: Prevent lvm1 system ID changes. (Alasdair G Kergon, 2015-03-04, 1 file, -0/+7)
  (This system_id setting code shouldn't be in two places.)
* format1: Export generate_lvm1_system_id. (Alasdair G Kergon, 2015-03-04, 3 files, -4/+11)
  Export _lvm1_system_id as generate_lvm1_system_id and call it in vg_setup() so it is set before writing the metadata to disk and not missing from the initial metadata backup file.
* archives: Preserve format type in file. (Alasdair G Kergon, 2015-03-04, 4 files, -3/+17)
  format_text processes both lvm2 on-disk metadata and metadata read from other sources such as backup files. Add an original_fmt field to retain the format type of the original metadata.

  Before this patch, /etc/lvm/archives would contain backups of lvm1 metadata with format = "lvm2" unless the source was lvm1 on-disk metadata.
* lvchange, vgchange: fix the system_id check (David Teigland, 2015-03-03, 2 files, -2/+4)
  The check for matching system_id needs to check that the system_id is not blank.
* vgchange: deactivate LVs in foreign VG (David Teigland, 2015-03-03, 1 file, -0/+15)
  Apply the same logic as lvchange, which allows deactivating LVs in a foreign VG.
* spec: Add lvmlocal.conf to RPMs. (Petr Rockai, 2015-03-03, 1 file, -0/+1)
* metadata: vg: alloc lvm1_system_id in alloc_vg sooner (Peter Rajnoha, 2015-03-02, 1 file, -6/+6)
* metadata: vg: add missing vg->lvm1_system_id initialization (Peter Rajnoha, 2015-03-02, 1 file, -0/+6)
  The vg->lvm1_system_id needs to be initialized as all the code around counts on that. Just like we initialize lvm1_system_id in vg_create (no matter if it's actually LVM1 or LVM2 format), this patch adds this init in alloc_vg as well so the rest of the code does not segfault when trying to access vg->lvm1_system_id.
* report: check value of args_are_pvs, not the pointer (fix for commit 9ea77b7) (Peter Rajnoha, 2015-03-02, 1 file, -1/+1)
* system_id: apply consistent naming (David Teigland, 2015-02-27, 8 files, -19/+19)
  In log messages refer to it as system ID (not System ID). Do not put quotes around the system_id string when printing. On the command line use systemid. In code, metadata, and config files use system_id. In lvmsystemid refer to the concept/entity as system_id.
* initscripts: lvm2-monitor: use @DMEVENTD_PIDFILE@ instead of hardcoded /var/run/dmeventd.pid (Peter Rajnoha, 2015-02-27, 1 file, -1/+1)