Commit messages
---
A lock manager requires an application to "start" or "join"
a lockspace before using locks from it. Start is the point
at which the lock manager on a host begins interacting with
other hosts to coordinate access to locks in the lockspace.
Similarly, an application needs to "stop" or "leave" a
lockspace when it's done using locks from the lockspace so
the lock manager can shut down and clean up the lockspace.
lvmlockd uses a lockspace for each sanlock|dlm VG, and the
lockspace for each VG needs to be started before lvm can use
it. These commands tell lvmlockd to start or stop the
lockspace for a VG:
vgchange --lock-start vg_name
vgchange --lock-stop vg_name
To start the lockspace for a VG, lvmlockd needs to know which
lock manager (sanlock or dlm) to use, and this is stored in the
VG metadata as lock_type = "sanlock|dlm", along with data that
is specific to the lock manager for the VG, saved as lock_args.
For sanlock, lock_args is the location of the locks on disk.
For dlm, lock_args is the name of the cluster the dlm should use.
So, the process for starting a VG includes:
- Reading the VG without a lock (no lock can be acquired
because the lockspace is not started).
- Taking the lock_type and lock_args strings from the
VG metadata.
- Asking lvmlockd to start the VG lockspace, providing
the lock_type and lock_args strings which tell lvmlockd
exactly which lock manager is needed.
- lvmlockd will ask the specific lock manager to join the
lockspace.
The VG read in the first step, without a lock, is not used
for anything except getting the lock information needed to start
the lockspace. Subsequent use of the VG would use the VG lock.
In the case of a sanlock VG, there is an additional step in the
sequence. Between the second and third steps, the vgchange
lock-start command needs to activate the internal LV in the VG
that holds the sanlock locks. This LV must be active before
sanlock can join the lockspace.
Starting and stopping VGs would typically be done automatically
by the system, similar to the way LVs are automatically activated
by the system. But it is always possible to directly start/stop VG
lockspaces, just as it is always possible to directly activate/deactivate
LVs. Automatic VG start/stop will be added by a later patch, using
the basic functionality from this patch.
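A sketch of the full flow described above on one host; the lvmlockd
service unit name is illustrative, and the option spelling follows this patch:

    systemctl start lvmlockd        # illustrative unit name for the lvmlockd daemon
    vgchange --lock-start vg_name   # passes lock_type/lock_args from the VG metadata to lvmlockd
                                    # (for a sanlock VG, the internal lock LV is activated first)
    lvchange -ay vg_name/lv_name    # subsequent commands acquire their locks from the lockspace
    vgchange --lock-stop vg_name    # leave the lockspace when done with the VG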
---
vgcreate calls lvmlockd_init_vg() to do any create/initialize
steps that are needed in lvmlockd for the given lock_type.
vgremove calls lvmlockd_free_vg_before() to do any removal/freeing
steps that are needed in lvmlockd for the given lock_type
before the VG is removed on disk.
vgremove calls lvmlockd_free_vg_final() to do any removal/freeing
steps that are needed in lvmlockd for the given lock_type
after the VG is removed on disk.
When the lock_type is sanlock, the init/free steps also include
lvm client-side steps to create/remove an internal LV on
which sanlock will store the locks for the VG.
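A hedged sketch of the commands that drive these hooks; the exact name of
the hidden lock LV is illustrative here:

    vgcreate --lock_type sanlock vg_name /dev/sda   # lvmlockd_init_vg() + create internal lock LV
    lvs -a vg_name                                  # the internal sanlock lock LV shows up here
    vgremove vg_name                                # lvmlockd_free_vg_before()/_final() around
                                                    # the on-disk removal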
---
The locking required to access a VG is a property of the VG,
and is specified in the VG metadata as the "lock_type".
When lvm sees a VG, it looks at the VG's lock_type to determine
if locks are needed and from where:
- If the VG has no lock_type, or lock_type "none", then no locks
are needed.
This is a "local VG". If the VG is visible to multiple hosts,
the VG system_id provides basic protection. A VG with a
non-matching system_id is inaccessible.
- If the VG has lock_type "sanlock" or "dlm", then locks
are needed from lvmlockd, which acquires locks from either
sanlock or dlm respectively.
This is a "dlock VG". If lvmlockd or the supporting lock
manager is not running, then the dlock VG is inaccessible.
- If the VG has the CLUSTERED status flag (or lock_type "clvm"),
then locks are needed from clvmd.
This is a "clvm VG". If clvmd or its supporting clustering and
locking services are not running, then the clvm VG is inaccessible.
Settings in lvm.conf tell lvm commands which locking daemon to use:
- global/use_lvmlockd=1: tells lvm to use lvmlockd when accessing
VGs with lock_type sanlock|dlm.
- global/locking_type=3: tells lvm to use clvmd when accessing
VGs with CLUSTERED flag (or lock_type clvm).
LVM commands cannot use both lvmlockd and clvmd at the same time:
- use_lvmlockd=1 should be combined with locking_type=1
- locking_type=3 (clvmd) should be combined with use_lvmlockd=0
So, different configurations allow access to different VGs:
- When configured to use lvmlockd, lvm commands can access VGs
with lock_type sanlock|dlm, and VGs with CLUSTERED are ignored.
- When configured to use clvmd (locking_type 3), lvm commands
can access VGs with the CLUSTERED flag, and VGs with
lock_type sanlock|dlm are ignored.
- When configured to use neither lvmlockd nor clvmd, lvm commands
can access only local VGs. lvm will ignore VGs with lock_type
sanlock|dlm, and will ignore VGs with CLUSTERED (or lock_type clvm).
A VG is created with a specific lock_type:
- vgcreate --lock_type <arg> is a new syntax that can specify the
lock_type directly. <arg> may be: none, clvm, sanlock, dlm.
sanlock|dlm require lvmlockd to be configured (in lvm.conf) and running.
clvm requires clvmd to be configured (in lvm.conf) and running.
- vgcreate --clustered y (or -cy) is the old syntax that still works,
but it is not preferred because the lock_type is not explicit.
When clvmd is configured, -cy creates a VG with lock_type clvm.
When lvmlockd is configured, -cy creates a VG with lock_type sanlock,
but this can be changed to dlm with lvm.conf vgcreate_cy_lock_type.
Notes:
The LOCK_TYPE status flag is not strictly necessary, but is an
attempt to prevent old versions of lvm (pre-lvmlockd) from using
a VG with a lock_type.
In the VG metadata, the lock_type string is accompanied by
a lock_args string. The lock_args string is lock-manager-specific
data associated with the VG: for sanlock, the location on disk
of the locks; for dlm, the cluster name.
In a VG with lock_type sanlock|dlm, each LV also has a lock_type
and lock_args in the metadata. The LV lock_type currently always
matches the lock_type of the VG. For sanlock, the LV lock_args
specify the disk location of the LV lock.
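A hedged sketch of the configurations and creation commands described above
(option spellings follow this patch and may differ in later releases):

    # lvm.conf for lvmlockd-based VGs: use_lvmlockd=1, locking_type=1
    vgcreate --lock_type sanlock vg1 /dev/sdb
    vgcreate --lock_type dlm vg2 /dev/sdc

    # lvm.conf for clvmd-based VGs: use_lvmlockd=0, locking_type=3
    vgcreate --lock_type clvm vg3 /dev/sdd

    # Old syntax; the resulting lock_type depends on which daemon is configured:
    vgcreate -cy vg4 /dev/sde   # clvm with clvmd; sanlock (by default) with lvmlockd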
---
If a foreign VG is ignored when it's included by "all vgs",
then the command shouldn't fail.
If a foreign VG is ignored when it's named explicitly as
a command arg, then the command should fail.
Also, remove ignore_vg from the reporter functions because it
repeats what process_each already does in its recent new
version.
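The intended behavior, as a sketch:

    vgs              # a foreign VG is silently skipped; the command succeeds
    vgs foreign_vg   # a foreign VG named explicitly; the command fails with an error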
---
See the included lvmsystemid(7) man page for a full description.
---
Calculate dm_list_size() only when there is not just a single
segment in the list - so it is only counted on the error path.
---
When deactivating an origin, we may have left the table in a broken state,
where the origin is not active but a snapshot volume is still present.
Ensure that deactivation of the origin also detects whether all associated
snapshots are inactive - otherwise do not skip the deactivation
(so e.g. 'vgchange -an' would detect the errors).
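A sketch of the intended effect:

    vgchange -an vg_name   # now reports an error if a snapshot device was left
                           # behind, instead of silently skipping the origin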
---
Let's use this function for more activations in the code.
'needs_exclusive' will enforce the exclusive type for any given LV.
We may want to activate an LV in exclusive mode even when we know
the LV (as is) supports non-exclusive activation as well.
lvcreate -ay  -> exclusive & local
lvcreate -aay -> exclusive & local
lvcreate -aly -> exclusive & local
lvcreate -aey -> exclusive (might be on any node).
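The same mapping written as full commands (these -a values are standard
lvcreate options):

    lvcreate -ay  -L1G -n lv1 vg   # exclusive & local
    lvcreate -aay -L1G -n lv2 vg   # autoactivation: exclusive & local
    lvcreate -aly -L1G -n lv3 vg   # local: exclusive & local
    lvcreate -aey -L1G -n lv4 vg   # exclusive, may be activated on any node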
---
Unsupported as of now.
---
understand this properly
---
LVSINFO is just a subtype of the LVS report type with an extra "info" ioctl
called for each reported LV (per output line), so include its processing
within the "case LVS" switch, not as a completely different kind of reporting,
which may be misleading when reading the code.
There's already the "lv_info_needed" flag set in the _report fn, so
call the appropriate reporting function based on this flag within the
"case LVS" switch line.
The same is already done when an LV is reported per segment
within the "case SEGS" switch line. So this patch makes the code more
consistent - all the cases are processed the same way.
Also, this is a preparation for new subtypes that will be
introduced later - the "LVSSTATUS" and "SEGSSTATUS" report types.
---
Only with -DDEBUG.
---
When responding to DM_EVENT_CMD_GET_REGISTERED_DEVICE, no longer
ignore threads that have already been unregistered but are
still present.
This means the caller can unregister a device and poll dmeventd
to ensure the monitoring thread has gone away before removing
the device. If a device was registered and unregistered in quick
succession and then removed, WAITEVENT could run in parallel with
the REMOVE.
Threads are moved to the _thread_registry_unused list when they
are unregistered.
---
The status of threads in _thread_registry is always DM_THREAD_RUNNING
(zero).
DM_EVENT_REGISTRATION_PENDING is never stored in thread->events.
---
Activation of a new/unused/empty thin-pool volume skips
the 'overlay' part and directly provides the 'visible' thin-pool LV to the user.
Such a thin-pool still gets the 'private' -tpool UUID suffix for easier
udev detection of protected lvm2 devices, and also gets udev flags to
avoid any scan.
Such a pool device is a 'public' LV with a regular /dev/vgname/poolname link,
but it is still a udev-hidden device for any other use.
To display the proper active state we need a few explicit tests
for this condition.
Before it is used for any lvm2 thin volume, deactivation is
now needed to avoid any race with external usage.
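A sketch of what this looks like from the outside (names are illustrative):

    lvcreate -T -L1G vg/pool     # create a thin-pool with no thin volume yet
    ls -l /dev/vg/pool           # the unused pool is reachable via a regular link
    lvs -o lv_name,lv_active vg  # the pool is reported as active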
---
Call check_new_thin_pool() to detect an in-use thin-pool.
This saves an extra reactivation of the thin-pool when the pool is not active.
(It's now a bit more expensive to invoke thin_check for new pools.)
For new pools:
We now activate the thin-pool locally and exclusively as a 'public' LV.
Validate that the transaction_id is still 0.
Deactivate.
Prepare the create message for the thin-pool and exclusively activate the pool.
Activate the new thin LV.
And deactivate the thin-pool if it used to be inactive.
---
The function tests that a given new thin pool is still unused.
---
Allowing 'external' use of thin-pools requires validating even
the so-far 'unused' new thin-pools.
Later we may have a 'smarter' way to resolve which thin-pools are
owned by lvm2 and which are external.
---
Recognize a 'new' (never used) lvm2 thin pool - it has 'transaction_id' == 0
(lv_is_used_thin_pool() has a slightly different meaning).
---
When the transaction_id is set to 0 for a thin-pool, libdm avoids validation
of the thin-pool unless there are real messages to be sent to it.
This relaxes the strict policy which always required knowing
the transaction_id for the kernel target up front.
It now allows activating a thin-pool with any transaction_id
(when a transaction_id is passed in).
It is now up to the application to validate the transaction_id of the live
thin-pool volume against the transaction_id within its own metadata.
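A sketch of the validation now left to the application: compare the live
transaction id reported by the kernel with the one recorded in the metadata
(the dm device name and the expected value are illustrative):

    live=$(dmsetup status vg-pool-tpool | awk '{ print $4 }')   # 4th field of thin-pool status
    [ "$live" = "$expected_from_metadata" ] || echo "transaction_id mismatch"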
---
After the initial 'size' usage is converted to extents, continue to use
only extents.
(in-release fix)
---
Test that -m0 passes with the various types.
Check --readahead with thins.
---
Use lv_is_pool() to detect both pool versions.
---
Make the dm_info type more clear.
---
Pass lvconvert_params as last arg.
---
Use struct initializer instead of memset().
---
Use log_error for a real error.
---
Show some stats with 'lvs'.
Display the same info for an active cache volume and for a cache-pool.
data% - #used cache blocks / #total cache blocks
meta% - #used metadata blocks / #total metadata blocks
copy% - #dirty / #used cache blocks
TODO: maybe there is a better mapping
- this should be seen as first-try-and-see.
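These map onto the standard lvs report fields:

    lvs -o lv_name,data_percent,metadata_percent,copy_percent vg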
---
Before we reuse a cache-pool, we need to ensure the metadata volume
has a wiped header.
---
When the cache pool is unused, the lvm2 code will internally
allow activating such a cache-pool.
The cache-pool is activated as its metadata LV, so lvm2 can easily
wipe such a volume before the cache-pool is reused.
---
Replace lv_cache_block_info() and lv_cache_policy_info()
with lv_cache_status(), which directly returns the
dm_status_cache structure together with some calculated
values.
After use, the mem pool stored inside the lv_status_cache
structure needs to be destroyed.
---
Add init of no_open_count into _setup_task().
Report the problem as a warning (it cannot happen anyway).
Also drop some duplicated debug messages - we have already
printed the info about the operation, so make the log a bit shorter.
---
Use the standard 'virtual_extents' naming.
Move virtual_size out of lvcreate_params and into the 'lcp' struct.
---
The lib takes sizes in extents - do the same for pool_metadata.
---
Add a function for wiping a cache pool volume.
Only an unused cache-pool can be wiped.
---
The tool will use internal activation of an unused cache pool to
clear the metadata area before the next use of the cache-pool.
So allow deactivation of an unused pool in case some error
happened and we were not able to deactivate the pool
right after the metadata wipe.
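As a sketch, this is now permitted for an unused cache-pool that was left
active after a failed wipe:

    lvchange -an vg/cachepool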
---
Support caching of a thin-pool.
lvresize still needs to be resolved - so far, the user
has to manually drop the cache-pool before resizing.
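A hedged sketch of the new combination (the exact lvconvert spelling may
differ between releases):

    lvconvert --cache --cachepool vg/cpool vg/tpool   # cache the thin-pool's data device
    # lvresize of a cached thin-pool is not handled yet; drop the cache-pool
    # manually before resizing.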
---
When a pool is not used, allow changing its chunk size.
---
Simplify the reporting of percentage values.
This allows easier support for more types.
Move the testing of device availability into activate.c.
---
Size 0 is not supported, nor are negative sizes.
---
No data for writing should be seen as 'dump' success.
(This reduces one <backtrace> in the log.) It has no
other effect.
---
lvcreate -m0 and -Mn go with anything.
Read-ahead works with either pools or thin/cache, but not with both.