diff options
| author | Sebastian Andrzej Siewior <bigeasy@linutronix.de> | 2021-09-22 22:52:01 +0200 |
|---|---|---|
| committer | Sebastian Andrzej Siewior <bigeasy@linutronix.de> | 2021-09-22 22:52:01 +0200 |
| commit | 8448a7b76e99b98c3622848f32d418db931a7eac (patch) | |
| tree | 230cf5eebc7f813cdd311b2fe277c9a69b0f4b25 /patches/locking-rt--Take-RCU-nesting-into-account-for-might_sleep--.patch | |
| parent | d26e8a11d057e46922f191438a4d4ea65e65ce99 (diff) | |
| download | linux-rt-8448a7b76e99b98c3622848f32d418db931a7eac.tar.gz | |
[ANNOUNCE] v5.15-rc2-rt3
Dear RT folks!
I'm pleased to announce the v5.15-rc2-rt3 patch set.
Changes since v5.15-rc2-rt2:
- Remove kernel_fpu_resched(). A few ciphers were restructured, so
  this function no longer has any users and can be removed.
- The cpuset code is using spinlock_t again. Since the mm/slub rework
  there is no longer any need to use raw_spinlock_t.
- Allow enabling CONFIG_RT_GROUP_SCHED on RT again. The original
  issue cannot be reproduced. Please test and report any issues.
- The RCU warning that had been fixed by Valentin Schneider has been
  replaced with a patch by Thomas Gleixner. There is another issue
  still open in that area and Frederic Weisbecker is looking into it.
- RCU lock accounting and checking has been reworked by Thomas
  Gleixner. A direct effect is that might_sleep() now produces a warning
  if invoked within an RCU read-side critical section. Previously it
  would only trigger a warning in schedule() in such a situation.
- The preempt_*_nort() macros have been removed.
- The preempt_enable_no_resched() macro should behave like
  preempt_enable() on PREEMPT_RT but was misplaced in v3.14-rt1.
  This has been corrected now.
Known issues
- netconsole triggers WARN.
- The "Memory controller" (CONFIG_MEMCG) has been disabled.
- Valentin Schneider reported a few splats on ARM64, see
  https://lkml.kernel.org/r/20210810134127.1394269-1-valentin.schneider@arm.com/
The delta patch against v5.15-rc2-rt2 is appended below and can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.15/incr/patch-5.15-rc2-rt2-rt3.patch.xz
You can get this release via the git tree at:
git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v5.15-rc2-rt3
The RT patch against v5.15-rc2 can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.15/older/patch-5.15-rc2-rt3.patch.xz
The split quilt queue is available at:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.15/older/patches-5.15-rc2-rt3.tar.xz
Sebastian
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Diffstat (limited to 'patches/locking-rt--Take-RCU-nesting-into-account-for-might_sleep--.patch')
-rw-r--r-- | patches/locking-rt--Take-RCU-nesting-into-account-for-might_sleep--.patch | 72 |
1 file changed, 72 insertions, 0 deletions
diff --git a/patches/locking-rt--Take-RCU-nesting-into-account-for-might_sleep--.patch b/patches/locking-rt--Take-RCU-nesting-into-account-for-might_sleep--.patch
new file mode 100644
index 000000000000..dc3799062497
--- /dev/null
+++ b/patches/locking-rt--Take-RCU-nesting-into-account-for-might_sleep--.patch
@@ -0,0 +1,72 @@
+Subject: locking/rt: Take RCU nesting into account for might_sleep()
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 22 Sep 2021 12:28:19 +0200
+
+The RT patches contained a cheap hack to ignore the RCU nesting depth in
+might_sleep() checks, which was a pragmatic but incorrect workaround.
+
+The general rule that rcu_read_lock() held sections cannot voluntarily
+sleep does apply even on RT kernels. Though the substitution of spin/rw
+locks on RT enabled kernels has to be exempt from that rule. On !RT a
+spin_lock() can obviously nest inside an RCU read side critical section
+as the lock acquisition is not going to block, but on RT this is no
+longer the case due to the 'sleeping' spin lock substitution.
+
+Instead of generally ignoring the RCU nesting depth in might_sleep()
+checks, pass the rcu_preempt_depth() as offset argument to might_sleep()
+from spin/read/write_lock() which makes the check work correctly even in
+RCU read side critical sections.
+
+The actual blocking on such a substituted lock within an RCU read side
+critical section is already handled correctly in __schedule() by treating
+it as a "preemption" of the RCU read side critical section.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+---
+ kernel/locking/spinlock_rt.c | 14 +++++++++++---
+ 1 file changed, 11 insertions(+), 3 deletions(-)
+
+--- a/kernel/locking/spinlock_rt.c
++++ b/kernel/locking/spinlock_rt.c
+@@ -24,6 +24,14 @@
+ #define RT_MUTEX_BUILD_SPINLOCKS
+ #include "rtmutex.c"
+
++/*
++ * Use ___might_sleep() which skips the state check and take RCU nesting
++ * into account as spin/read/write_lock() can legitimately nest into an RCU
++ * read side critical section:
++ */
++#define rtlock_might_sleep() \
++	___might_sleep(__FILE__, __LINE__, rcu_preempt_depth())
++
+ static __always_inline void rtlock_lock(struct rt_mutex_base *rtm)
+ {
+ 	if (unlikely(!rt_mutex_cmpxchg_acquire(rtm, NULL, current)))
+@@ -32,7 +40,7 @@ static __always_inline void rtlock_lock(
+
+ static __always_inline void __rt_spin_lock(spinlock_t *lock)
+ {
+-	___might_sleep(__FILE__, __LINE__, 0);
++	rtlock_might_sleep();
+ 	rtlock_lock(&lock->lock);
+ 	rcu_read_lock();
+ 	migrate_disable();
+@@ -210,7 +218,7 @@ EXPORT_SYMBOL(rt_write_trylock);
+
+ void __sched rt_read_lock(rwlock_t *rwlock)
+ {
+-	___might_sleep(__FILE__, __LINE__, 0);
++	rtlock_might_sleep();
+ 	rwlock_acquire_read(&rwlock->dep_map, 0, 0, _RET_IP_);
+ 	rwbase_read_lock(&rwlock->rwbase, TASK_RTLOCK_WAIT);
+ 	rcu_read_lock();
+@@ -220,7 +228,7 @@ EXPORT_SYMBOL(rt_read_lock);
+
+ void __sched rt_write_lock(rwlock_t *rwlock)
+ {
+-	___might_sleep(__FILE__, __LINE__, 0);
++	rtlock_might_sleep();
+ 	rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
+ 	rwbase_write_lock(&rwlock->rwbase, TASK_RTLOCK_WAIT);
+ 	rcu_read_lock();