author     Sebastian Andrzej Siewior <bigeasy@linutronix.de>   2021-08-18 10:40:00 +0200
committer  Sebastian Andrzej Siewior <bigeasy@linutronix.de>   2021-08-18 10:40:00 +0200
commit     32ffa5bc13e5bb878b58b8b8f437e5caddc45fe0 (patch)
tree       32ea079b91b8c46e6fe5ffd32df56c2eb4db8c62
parent     7909bbc59ec2a39c42b7b71d0ba1bbfb837c79e5 (diff)
download   linux-rt-32ffa5bc13e5bb878b58b8b8f437e5caddc45fe0.tar.gz
[ANNOUNCE] v5.14-rc6-rt11 (v5.14-rc6-rt11-patches)
Dear RT folks!

I'm pleased to announce the v5.14-rc6-rt11 patch set.

Changes since v5.14-rc6-rt10:

  - The RCU & ARM64 patches by Valentin Schneider have been updated to v3.
    The logic in migratable() for UP has been changed and the function
    itself was renamed (which differs from the version posted to the list).

  - printk.h now includes a locking header directly. This unbreaks the
    POWER and POWER64 builds and makes another patch (an earlier attempt
    to unbreak recursive includes) obsolete.

  - Update the SLUB series by Vlastimil Babka to slub-local-lock-v4r4:

    - Clark Williams reported a crash in the SLUB memory allocator. It has
      been there since the SLUB rework in v5.13-rt1. Patch by Vlastimil Babka.

    - Sven Eckelmann reported a crash on non-RT with LOCKSTAT enabled.
      Patch by Vlastimil Babka.

  - rcutorture works again. Patch by Valentin Schneider.

  - Update RT's locking patches to what has been merged into
    tip/locking/core. A visible change is the definition of local_lock_t
    on PREEMPT_RT. As a result, access to local_lock_t's dep_map is the
    same on RT & !RT.

Known issues
  - netconsole triggers WARN.
  - The "Memory controller" (CONFIG_MEMCG) has been disabled.
  - An RCU and ARM64 warning has been fixed by Valentin Schneider. It is
    still not clear if the RCU related change is correct.
  - Clark Williams reported issues in i915 (execlists_dequeue_irq()).
  - Clark Williams reported issues with kcov enabled.
  - Valentin Schneider reported a few splats on ARM64, see
    https://lkml.kernel.org/r/20210810134127.1394269-1-valentin.schneider@arm.com/

The delta patch against v5.14-rc6-rt10 is appended below and can be found here:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.14/incr/patch-5.14-rc6-rt10-rt11.patch.xz

You can get this release via the git tree at:

    git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v5.14-rc6-rt11

The RT patch against v5.14-rc6 can be found here:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.14/older/patch-5.14-rc6-rt11.patch.xz

The split quilt queue is available at:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.14/older/patches-5.14-rc6-rt11.tar.xz

Sebastian

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
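As an aside on the local_lock_t change mentioned above: the sketch below shows roughly how the tip/locking/core definition reads once the queue is applied. It is recalled from the upstream series rather than quoted from this tree, so treat it as illustrative only; it shows why the dep_map access becomes identical on RT and !RT — on PREEMPT_RT a local_lock_t is simply a spinlock_t, which carries its own dep_map member.

/* Sketch of include/linux/local_lock_internal.h after the update (illustrative). */
#ifdef CONFIG_PREEMPT_RT
typedef spinlock_t local_lock_t;	/* RT: rtmutex-based lock, dep_map is a direct member */
#else
typedef struct {
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
	struct task_struct	*owner;
#endif
} local_lock_t;
#endif

/* Either way lockdep users can write &lock.dep_map, which is why the RT-only
 * selftest workaround (0009-lockdep-selftests-Use-correct-depmap-...) is dropped below. */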
-rw-r--r-- patches/0001-locking-local_lock-Add-missing-owner-initialization.patch (renamed from patches/locking-local_lock--Add-missing-owner-initialization.patch) | 11
-rw-r--r-- patches/0001_cpu_pm_make_notifier_chain_use_a_raw_spinlock_t.patch | 121
-rw-r--r-- patches/0002-locking-rtmutex-Set-proper-wait-context-for-lockdep.patch (renamed from patches/locking-rtmutex--Set-proper-wait-context-for-lockdep.patch) | 9
-rw-r--r-- patches/0002_notifier_remove_atomic_notifier_call_chain_robust.patch | 54
-rw-r--r-- patches/0003-sched-wakeup-Split-out-the-wakeup-__state-check.patch (renamed from patches/sched__Split_out_the_wakeup_state_check.patch) | 24
-rw-r--r-- patches/0004-sched-wakeup-Introduce-the-TASK_RTLOCK_WAIT-state-bi.patch (renamed from patches/sched__Introduce_TASK_RTLOCK_WAIT.patch) | 24
-rw-r--r-- patches/0005-sched-wakeup-Reorganize-the-current-__state-helpers.patch (renamed from patches/sched--Reorganize-current--state-helpers.patch) | 12
-rw-r--r-- patches/0006-sched-wakeup-Prepare-for-RT-sleeping-spin-rwlocks.patch (renamed from patches/sched__Prepare_for_RT_sleeping_spin_rwlocks.patch) | 23
-rw-r--r-- patches/0007-sched-core-Rework-the-__schedule-preempt-argument.patch (renamed from patches/sched__Rework_the___schedule_preempt_argument.patch) | 29
-rw-r--r-- patches/0008-sched-core-Provide-a-scheduling-point-for-RT-locks.patch (renamed from patches/sched__Provide_schedule_point_for_RT_locks.patch) | 27
-rw-r--r-- patches/0009-lockdep-selftests-Use-correct-depmap-for-local_lock-.patch | 30
-rw-r--r-- patches/0009-sched-wake_q-Provide-WAKE_Q_HEAD_INITIALIZER.patch (renamed from patches/sched_wake_q__Provide_WAKE_Q_HEAD_INITIALIZER.patch) | 15
-rw-r--r-- patches/0010-lockdep-selftests-Adapt-ww-tests-for-PREEMPT_RT.patch | 34
-rw-r--r-- patches/0010-media-atomisp-Use-lockdep-instead-of-mutex_is_locked.patch (renamed from patches/media_atomisp_Use_lockdep_instead_of_mutex_is_locked_.patch) | 15
-rw-r--r-- patches/0011-locking-rtmutex-Remove-rt_mutex_is_locked.patch (renamed from patches/rtmutex--Remove-rt_mutex_is_locked--.patch) | 13
-rw-r--r-- patches/0012-locking-rtmutex-Convert-macros-to-inlines.patch (renamed from patches/rtmutex__Convert_macros_to_inlines.patch) | 13
-rw-r--r-- patches/0013-locking-rtmutex-Switch-to-from-cmpxchg_-to-try_cmpxc.patch (renamed from patches/rtmutex--Switch-to-try_cmpxchg--.patch) | 10
-rw-r--r-- patches/0013-mm-slub-do-initial-checks-in-___slab_alloc-with-irqs.patch | 62
-rw-r--r-- patches/0014-locking-rtmutex-Split-API-from-implementation.patch (renamed from patches/rtmutex__Split_API_and_implementation.patch) | 14
-rw-r--r-- patches/0014-mm-slub-move-disabling-irqs-closer-to-get_partial-in.patch | 10
-rw-r--r-- patches/0015-locking-rtmutex-Split-out-the-inner-parts-of-struct-.patch (renamed from patches/rtmutex--Split-out-the-inner-parts-of-struct-rtmutex.patch) | 22
-rw-r--r-- patches/0015-mm-slub-restore-irqs-around-calling-new_slab.patch | 2
-rw-r--r-- patches/0016-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch (renamed from patches/locking_rtmutex__Provide_rt_mutex_slowlock_locked.patch) | 15
-rw-r--r-- patches/0016-mm-slub-validate-slab-from-partial-list-or-page-allo.patch | 8
-rw-r--r-- patches/0017-locking-rtmutex-Provide-rt_mutex_base_is_locked.patch (renamed from patches/rtmutex--Provide-rt_mutex_base_is_locked--.patch) | 11
-rw-r--r-- patches/0017-mm-slub-check-new-pages-with-restored-irqs.patch | 10
-rw-r--r-- patches/0018-locking-rt-Add-base-code-for-RT-rw_semaphore-and-rwl.patch (renamed from patches/locking__Add_base_code_for_RT_rw_semaphore_and_rwlock.patch) | 47
-rw-r--r-- patches/0018-mm-slub-stop-disabling-irqs-around-get_partial.patch | 4
-rw-r--r-- patches/0019-locking-rwsem-Add-rtmutex-based-R-W-semaphore-implem.patch (renamed from patches/locking_rwsem__Add_rtmutex_based_R_W_semaphore_implementation.patch) | 22
-rw-r--r-- patches/0019-mm-slub-move-reset-of-c-page-and-freelist-out-of-dea.patch | 4
-rw-r--r-- patches/0020-locking-rtmutex-Add-wake_state-to-rt_mutex_waiter.patch (renamed from patches/locking_rtmutex__Add_wake_state_to_rt_mutex_waiter.patch) | 19
-rw-r--r-- patches/0021-locking-rtmutex-Provide-rt_wake_q_head-and-helpers.patch (renamed from patches/locking_rtmutex__Provide_rt_mutex_wake_q_and_helpers.patch) | 19
-rw-r--r-- patches/0021-mm-slub-call-deactivate_slab-without-disabling-irqs.patch | 4
-rw-r--r-- patches/0022-locking-rtmutex-Use-rt_mutex_wake_q_head.patch (renamed from patches/locking_rtmutex__Use_rt_mutex_wake_q_head.patch) | 13
-rw-r--r-- patches/0023-locking-rtmutex-Prepare-RT-rt_mutex_wake_q-for-RT-lo.patch (renamed from patches/locking_rtmutex__Prepare_RT_rt_mutex_wake_q_for_RT_locks.patch) | 38
-rw-r--r-- patches/0024-locking-rtmutex-Guard-regular-sleeping-locks-specifi.patch (renamed from patches/locking_rtmutex__Guard_regular_sleeping_locks_specific_functions.patch) | 14
-rw-r--r-- patches/0025-locking-spinlock-Split-the-lock-types-header-and-mov.patch (renamed from patches/locking_spinlock__Split_the_lock_types_header.patch) | 20
-rw-r--r-- patches/0026-locking-rtmutex-Prevent-future-include-recursion-hel.patch (renamed from patches/locking_rtmutex__Prevent_future_include_recursion_hell.patch) | 19
-rw-r--r-- patches/0027-locking-lockdep-Reduce-header-dependencies-in-linux-.patch (renamed from patches/locking_lockdep__Reduce_includes_in_debug_locks.h.patch) | 12
-rw-r--r-- patches/0028-rbtree-Split-out-the-rbtree-type-definitions-into-li.patch (renamed from patches/rbtree__Split_out_the_rbtree_type_definitions.patch) | 36
-rw-r--r-- patches/0029-locking-rtmutex-Reduce-linux-rtmutex.h-header-depend.patch | 36
-rw-r--r-- patches/0029-mm-slub-Move-flush_cpu_slab-invocations-__free_slab-.patch | 10
-rw-r--r-- patches/0030-locking-spinlock-Provide-RT-specific-spinlock_t.patch (renamed from patches/locking_spinlock__Provide_RT_specific_spinlock_type.patch) | 15
-rw-r--r-- patches/0031-locking-spinlock-Provide-RT-variant-header-linux-spi.patch (renamed from patches/locking_spinlock__Provide_RT_variant_header.patch) | 18
-rw-r--r-- patches/0031-mm-slub-optionally-save-restore-irqs-in-slab_-un-loc.patch | 8
-rw-r--r-- patches/0032-locking-rtmutex-Provide-the-spin-rwlock-core-lock-fu.patch (renamed from patches/locking_rtmutex__Provide_the_spin_rwlock_core_lock_function.patch) | 24
-rw-r--r-- patches/0033-locking-spinlock-Provide-RT-variant.patch (renamed from patches/locking_spinlock__Provide_RT_variant.patch) | 23
-rw-r--r-- patches/0034-locking-rwlock-Provide-RT-variant.patch (renamed from patches/locking_rwlock__Provide_RT_variant.patch) | 25
-rw-r--r-- patches/0034-mm-slub-use-migrate_disable-on-PREEMPT_RT.patch | 16
-rw-r--r-- patches/0035-locking-rtmutex-Squash-RT-tasks-to-DEFAULT_PRIO.patch (renamed from patches/rtmutex--Exclude-!RT-tasks-from-PI-boosting.patch) | 10
-rw-r--r-- patches/0035-mm-slub-convert-kmem_cpu_slab-protection-to-local_lo.patch | 51
-rw-r--r-- patches/0036-locking-mutex-Consolidate-core-headers-remove-kernel.patch (renamed from patches/locking_mutex__Consolidate_core_headers.patch) | 12
-rw-r--r-- patches/0037-locking-mutex-Move-the-struct-mutex_waiter-definitio.patch (renamed from patches/locking_mutex__Move_waiter_to_core_header.patch) | 16
-rw-r--r-- patches/0038-locking-ww_mutex-Move-the-ww_mutex-definitions-from-.patch (renamed from patches/locking_ww_mutex__Move_ww_mutex_declarations_into_ww_mutex.h.patch) | 16
-rw-r--r-- patches/0039-locking-mutex-Make-mutex-wait_lock-raw.patch (renamed from patches/locking_mutex__Make_mutex__wait_lock_raw.patch) | 29
-rw-r--r-- patches/0040-locking-ww_mutex-Simplify-lockdep-annotations.patch (renamed from patches/locking_ww_mutex__Simplify_lockdep_annotation.patch) | 12
-rw-r--r-- patches/0041-locking-ww_mutex-Gather-mutex_waiter-initialization.patch (renamed from patches/locking_ww_mutex__Gather_mutex_waiter_initialization.patch) | 12
-rw-r--r-- patches/0042-locking-ww_mutex-Split-up-ww_mutex_unlock.patch (renamed from patches/locking_ww_mutex__Split_up_ww_mutex_unlock.patch) | 62
-rw-r--r-- patches/0043-locking-ww_mutex-Split-out-the-W-W-implementation-lo.patch (renamed from patches/locking_ww_mutex__Split_W_W_implementation_logic.patch) | 55
-rw-r--r-- patches/0044-locking-ww_mutex-Remove-the-__sched-annotation-from-.patch (renamed from patches/locking_ww_mutex__Remove___sched_annotation.patch) | 12
-rw-r--r-- patches/0045-locking-ww_mutex-Abstract-out-the-waiter-iteration.patch (renamed from patches/locking_ww_mutex__Abstract_waiter_iteration.patch) | 11
-rw-r--r-- patches/0046-locking-ww_mutex-Abstract-out-waiter-enqueueing.patch (renamed from patches/locking_ww_mutex__Abstract_waiter_enqueue.patch) | 12
-rw-r--r-- patches/0047-locking-ww_mutex-Abstract-out-mutex-accessors.patch (renamed from patches/locking_ww_mutex__Abstract_mutex_accessors.patch) | 12
-rw-r--r-- patches/0048-locking-ww_mutex-Abstract-out-mutex-types.patch (renamed from patches/locking_ww_mutex__Abstract_mutex_types.patch) | 12
-rw-r--r-- patches/0049-locking-ww_mutex-Abstract-out-internal-lock-accesses.patch (renamed from patches/locking-ww_mutex--Abstract-internal-lock-access.patch) | 7
-rw-r--r-- patches/0050-locking-ww_mutex-Implement-rt_mutex-accessors.patch (renamed from patches/locking_ww_mutex__Implement_rt_mutex_accessors.patch) | 12
-rw-r--r-- patches/0051-locking-ww_mutex-Add-RT-priority-to-W-W-order.patch (renamed from patches/locking_ww_mutex__Add_RT_priority_to_W_W_order.patch) | 16
-rw-r--r-- patches/0052-locking-ww_mutex-Add-rt_mutex-based-lock-type-and-ac.patch (renamed from patches/locking_ww_mutex__Add_ww_rt_mutex_interface.patch) | 15
-rw-r--r-- patches/0053-locking-rtmutex-Extend-the-rtmutex-core-to-support-w.patch (renamed from patches/locking-rtmutex--Extend-the-rtmutex-core-to-support-ww_mutex.patch) | 18
-rw-r--r-- patches/0054-locking-ww_mutex-Implement-rtmutex-based-ww_mutex-AP.patch (renamed from patches/locking_ww_mutex__Implement_ww_rt_mutex.patch) | 17
-rw-r--r-- patches/0055-locking-rtmutex-Add-mutex-variant-for-RT.patch (renamed from patches/locking_rtmutex__Add_mutex_variant_for_RT.patch) | 15
-rw-r--r-- patches/0056-lib-test_lockup-Adapt-to-changed-variables.patch (renamed from patches/lib_test_lockup__Adapt_to_changed_variables..patch) | 24
-rw-r--r-- patches/0057-futex-Validate-waiter-correctly-in-futex_proxy_trylo.patch (renamed from patches/futex__Validate_waiter_correctly_in_futex_proxy_trylock_atomic.patch) | 14
-rw-r--r-- patches/0058-futex-Clean-up-stale-comments.patch (renamed from patches/futex__Cleanup_stale_comments.patch) | 15
-rw-r--r-- patches/0059-futex-Clarify-futex_requeue-PI-handling.patch (renamed from patches/futex--Clarify-futex_requeue---PI-handling.patch) | 17
-rw-r--r-- patches/0060-futex-Remove-bogus-condition-for-requeue-PI.patch (renamed from patches/futex--Remove-bogus-condition-for-requeue-PI.patch) | 15
-rw-r--r-- patches/0061-futex-Correct-the-number-of-requeued-waiters-for-PI.patch (renamed from patches/futex__Correct_the_number_of_requeued_waiters_for_PI.patch) | 11
-rw-r--r-- patches/0062-futex-Restructure-futex_requeue.patch (renamed from patches/futex__Restructure_futex_requeue.patch) | 11
-rw-r--r-- patches/0063-futex-Clarify-comment-in-futex_requeue.patch (renamed from patches/futex__Clarify_comment_in_futex_requeue.patch) | 17
-rw-r--r-- patches/0064-futex-Reorder-sanity-checks-in-futex_requeue.patch (renamed from patches/futex--Reorder-sanity-checks-in-futex_requeue--.patch) | 15
-rw-r--r-- patches/0065-futex-Simplify-handle_early_requeue_pi_wakeup.patch (renamed from patches/futex--Simplify-handle_early_requeue_pi_wakeup--.patch) | 9
-rw-r--r-- patches/0066-futex-Prevent-requeue_pi-lock-nesting-issue-on-RT.patch (renamed from patches/futex__Prevent_requeue_pi_lock_nesting_issue_on_RT.patch) | 13
-rw-r--r-- patches/0067-locking-rtmutex-Prevent-lockdep-false-positive-with-.patch (renamed from patches/rtmutex__Prevent_lockdep_false_positive_with_PI_futexes.patch) | 16
-rw-r--r-- patches/0068-preempt-Adjust-PREEMPT_LOCK_OFFSET-for-RT.patch (renamed from patches/preempt__Adjust_PREEMPT_LOCK_OFFSET_for_RT.patch) | 11
-rw-r--r-- patches/0069-locking-rtmutex-Implement-equal-priority-lock-steali.patch (renamed from patches/locking_rtmutex__Implement_equal_priority_lock_stealing.patch) | 11
-rw-r--r-- patches/0070-locking-rtmutex-Add-adaptive-spinwait-mechanism.patch (renamed from patches/locking_rtmutex__Add_adaptive_spinwait_mechanism.patch) | 16
-rw-r--r-- patches/0071-locking-spinlock-rt-Prepare-for-RT-local_lock.patch (renamed from patches/locking-spinlock-rt--Prepare-for-RT-local_lock.patch) | 9
-rw-r--r-- patches/0072-locking-local_lock-Add-PREEMPT_RT-support.patch (renamed from patches/locking-local_lock--Add-PREEMPT_RT-support.patch) | 36
-rw-r--r-- patches/Add_localversion_for_-RT_release.patch | 2
-rw-r--r-- patches/arm64_mm_Make_arch_faults_on_old_pte_check_for_migratability.patch | 78
-rw-r--r-- patches/arm64_mm_make_arch_faults_on_old_pte_check_for_migratability.patch | 33
-rw-r--r-- patches/locking-Allow-to-include-asm-spinlock_types.h-from-l.patch | 265
-rw-r--r-- patches/locking_rtmutex__Include_only_rbtree_types.patch | 31
-rw-r--r-- patches/notifier__Make_atomic_notifiers_use_raw_spinlock.patch | 118
-rw-r--r-- patches/powerpc__Avoid_recursive_header_includes.patch | 44
-rw-r--r-- patches/rcu_nocb_Check_for_migratability_rather_than_pure_preemptability.patch | 77
-rw-r--r-- patches/rcu_nocb_protect_nocb_state_via_local_lock_under_preempt_rt.patch | 300
-rw-r--r-- patches/rcutorture__Avoid_problematic_critical_section_nesting_on_RT.patch | 43
-rw-r--r-- patches/sched_Introduce_is_pcpu_safe_.patch | 46
-rw-r--r-- patches/sched_introduce_migratable.patch | 45
-rw-r--r-- patches/series | 163
101 files changed, 1768 insertions, 1235 deletions
diff --git a/patches/locking-local_lock--Add-missing-owner-initialization.patch b/patches/0001-locking-local_lock-Add-missing-owner-initialization.patch
index a81ff1f5400a..fc4549a46df9 100644
--- a/patches/locking-local_lock--Add-missing-owner-initialization.patch
+++ b/patches/0001-locking-local_lock-Add-missing-owner-initialization.patch
@@ -1,8 +1,8 @@
-Subject: locking/local_lock: Add missing owner initialization
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Fri, 13 Aug 2021 16:29:08 +0200
+Date: Sun, 15 Aug 2021 23:27:37 +0200
+Subject: [PATCH 01/72] locking/local_lock: Add missing owner initialization
-If CONFIG_DEBUG_LOCK_ALLOC is enabled then local_lock_t has a 'owner'
+If CONFIG_DEBUG_LOCK_ALLOC=y is enabled then local_lock_t has an 'owner'
member which is checked for consistency, but nothing initialized it to
zero explicitly.
@@ -12,8 +12,9 @@ really good practice.
Fixes: 91710728d172 ("locking: Introduce local_lock()")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V5: New patch
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211301.969975279@linutronix.de
---
include/linux/local_lock_internal.h | 42 +++++++++++++++++++-----------------
1 file changed, 23 insertions(+), 19 deletions(-)
diff --git a/patches/0001_cpu_pm_make_notifier_chain_use_a_raw_spinlock_t.patch b/patches/0001_cpu_pm_make_notifier_chain_use_a_raw_spinlock_t.patch
new file mode 100644
index 000000000000..2d3f77768638
--- /dev/null
+++ b/patches/0001_cpu_pm_make_notifier_chain_use_a_raw_spinlock_t.patch
@@ -0,0 +1,121 @@
+From: Valentin Schneider <valentin.schneider@arm.com>
+Subject: cpu_pm: Make notifier chain use a raw_spinlock_t
+Date: Wed, 11 Aug 2021 21:14:31 +0100
+
+Invoking atomic_notifier_chain_notify() requires acquiring a spinlock_t,
+which can block under CONFIG_PREEMPT_RT. Notifications for members of the
+cpu_pm notification chain will be issued by the idle task, which can never
+block.
+
+Making *all* atomic_notifiers use a raw_spinlock is too big of a hammer, as
+only notifications issued by the idle task are problematic.
+
+Special-case cpu_pm_notifier_chain by kludging a raw_notifier and
+raw_spinlock_t together, matching the atomic_notifier behavior with a
+raw_spinlock_t.
+
+Fixes: 70d932985757 ("notifier: Fix broken error handling pattern")
+Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Link: https://lore.kernel.org/r/20210811201432.1976916-2-valentin.schneider@arm.com
+---
+ kernel/cpu_pm.c | 50 ++++++++++++++++++++++++++++++++++++++------------
+ 1 file changed, 38 insertions(+), 12 deletions(-)
+
+--- a/kernel/cpu_pm.c
++++ b/kernel/cpu_pm.c
+@@ -13,19 +13,32 @@
+ #include <linux/spinlock.h>
+ #include <linux/syscore_ops.h>
+
+-static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);
++/*
++ * atomic_notifiers use a spinlock_t, which can block under PREEMPT_RT.
++ * Notifications for cpu_pm will be issued by the idle task itself, which can
++ * never block, IOW it requires using a raw_spinlock_t.
++ */
++static struct {
++ struct raw_notifier_head chain;
++ raw_spinlock_t lock;
++} cpu_pm_notifier = {
++ .chain = RAW_NOTIFIER_INIT(cpu_pm_notifier.chain),
++ .lock = __RAW_SPIN_LOCK_UNLOCKED(cpu_pm_notifier.lock),
++};
+
+ static int cpu_pm_notify(enum cpu_pm_event event)
+ {
+ int ret;
+
+ /*
+- * atomic_notifier_call_chain has a RCU read critical section, which
+- * could be disfunctional in cpu idle. Copy RCU_NONIDLE code to let
+- * RCU know this.
++ * This introduces a RCU read critical section, which could be
++ * disfunctional in cpu idle. Copy RCU_NONIDLE code to let RCU know
++ * this.
+ */
+ rcu_irq_enter_irqson();
+- ret = atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL);
++ rcu_read_lock();
++ ret = raw_notifier_call_chain(&cpu_pm_notifier.chain, event, NULL);
++ rcu_read_unlock();
+ rcu_irq_exit_irqson();
+
+ return notifier_to_errno(ret);
+@@ -33,10 +46,13 @@ static int cpu_pm_notify(enum cpu_pm_eve
+
+ static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event event_down)
+ {
++ unsigned long flags;
+ int ret;
+
+ rcu_irq_enter_irqson();
+- ret = atomic_notifier_call_chain_robust(&cpu_pm_notifier_chain, event_up, event_down, NULL);
++ raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++ ret = raw_notifier_call_chain_robust(&cpu_pm_notifier.chain, event_up, event_down, NULL);
++ raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
+ rcu_irq_exit_irqson();
+
+ return notifier_to_errno(ret);
+@@ -49,12 +65,17 @@ static int cpu_pm_notify_robust(enum cpu
+ * Add a driver to a list of drivers that are notified about
+ * CPU and CPU cluster low power entry and exit.
+ *
+- * This function may sleep, and has the same return conditions as
+- * raw_notifier_chain_register.
++ * This function has the same return conditions as raw_notifier_chain_register.
+ */
+ int cpu_pm_register_notifier(struct notifier_block *nb)
+ {
+- return atomic_notifier_chain_register(&cpu_pm_notifier_chain, nb);
++ unsigned long flags;
++ int ret;
++
++ raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++ ret = raw_notifier_chain_register(&cpu_pm_notifier.chain, nb);
++ raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(cpu_pm_register_notifier);
+
+@@ -64,12 +85,17 @@ EXPORT_SYMBOL_GPL(cpu_pm_register_notifi
+ *
+ * Remove a driver from the CPU PM notifier list.
+ *
+- * This function may sleep, and has the same return conditions as
+- * raw_notifier_chain_unregister.
++ * This function has the same return conditions as raw_notifier_chain_unregister.
+ */
+ int cpu_pm_unregister_notifier(struct notifier_block *nb)
+ {
+- return atomic_notifier_chain_unregister(&cpu_pm_notifier_chain, nb);
++ unsigned long flags;
++ int ret;
++
++ raw_spin_lock_irqsave(&cpu_pm_notifier.lock, flags);
++ ret = raw_notifier_chain_unregister(&cpu_pm_notifier.chain, nb);
++ raw_spin_unlock_irqrestore(&cpu_pm_notifier.lock, flags);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier);
+
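The conversion above is transparent to users of the chain: drivers keep registering through the same API, and the callback still runs from the idle task and therefore must never block. A minimal usage sketch (the device name and callback are hypothetical):

#include <linux/cpu_pm.h>
#include <linux/notifier.h>

/* Hypothetical callback: invoked from the idle path, must not sleep. */
static int mydev_cpu_pm_cb(struct notifier_block *nb,
			   unsigned long action, void *data)
{
	switch (action) {
	case CPU_PM_ENTER:
		/* save device context before the CPU/cluster powers down */
		break;
	case CPU_PM_EXIT:
		/* restore device context after power-up */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block mydev_cpu_pm_nb = {
	.notifier_call = mydev_cpu_pm_cb,
};

/* Registration now takes a raw_spinlock_t internally, so it no longer sleeps:
 *	cpu_pm_register_notifier(&mydev_cpu_pm_nb);
 */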
diff --git a/patches/locking-rtmutex--Set-proper-wait-context-for-lockdep.patch b/patches/0002-locking-rtmutex-Set-proper-wait-context-for-lockdep.patch
index d50055af19e3..865fb4883d52 100644
--- a/patches/locking-rtmutex--Set-proper-wait-context-for-lockdep.patch
+++ b/patches/0002-locking-rtmutex-Set-proper-wait-context-for-lockdep.patch
@@ -1,12 +1,13 @@
-Subject: locking/rtmutex: Set proper wait context for lockdep
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Fri, 13 Aug 2021 23:59:51 +0200
+Date: Sun, 15 Aug 2021 23:27:38 +0200
+Subject: [PATCH 02/72] locking/rtmutex: Set proper wait context for lockdep
RT mutexes belong to the LD_WAIT_SLEEP class. Make them so.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V5: New patch
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.031014562@linutronix.de
---
include/linux/rtmutex.h | 19 ++++++++++++-------
kernel/locking/rtmutex.c | 2 +-
diff --git a/patches/0002_notifier_remove_atomic_notifier_call_chain_robust.patch b/patches/0002_notifier_remove_atomic_notifier_call_chain_robust.patch
new file mode 100644
index 000000000000..958ff7f7adca
--- /dev/null
+++ b/patches/0002_notifier_remove_atomic_notifier_call_chain_robust.patch
@@ -0,0 +1,54 @@
+From: Valentin Schneider <valentin.schneider@arm.com>
+Subject: notifier: Remove atomic_notifier_call_chain_robust()
+Date: Wed, 11 Aug 2021 21:14:32 +0100
+
+This now has no more users, remove it.
+
+Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Link: https://lore.kernel.org/r/20210811201432.1976916-3-valentin.schneider@arm.com
+---
+ include/linux/notifier.h | 2 --
+ kernel/notifier.c | 19 -------------------
+ 2 files changed, 21 deletions(-)
+
+--- a/include/linux/notifier.h
++++ b/include/linux/notifier.h
+@@ -168,8 +168,6 @@ extern int raw_notifier_call_chain(struc
+ extern int srcu_notifier_call_chain(struct srcu_notifier_head *nh,
+ unsigned long val, void *v);
+
+-extern int atomic_notifier_call_chain_robust(struct atomic_notifier_head *nh,
+- unsigned long val_up, unsigned long val_down, void *v);
+ extern int blocking_notifier_call_chain_robust(struct blocking_notifier_head *nh,
+ unsigned long val_up, unsigned long val_down, void *v);
+ extern int raw_notifier_call_chain_robust(struct raw_notifier_head *nh,
+--- a/kernel/notifier.c
++++ b/kernel/notifier.c
+@@ -172,25 +172,6 @@ int atomic_notifier_chain_unregister(str
+ }
+ EXPORT_SYMBOL_GPL(atomic_notifier_chain_unregister);
+
+-int atomic_notifier_call_chain_robust(struct atomic_notifier_head *nh,
+- unsigned long val_up, unsigned long val_down, void *v)
+-{
+- unsigned long flags;
+- int ret;
+-
+- /*
+- * Musn't use RCU; because then the notifier list can
+- * change between the up and down traversal.
+- */
+- spin_lock_irqsave(&nh->lock, flags);
+- ret = notifier_call_chain_robust(&nh->head, val_up, val_down, v);
+- spin_unlock_irqrestore(&nh->lock, flags);
+-
+- return ret;
+-}
+-EXPORT_SYMBOL_GPL(atomic_notifier_call_chain_robust);
+-NOKPROBE_SYMBOL(atomic_notifier_call_chain_robust);
+-
+ /**
+ * atomic_notifier_call_chain - Call functions in an atomic notifier chain
+ * @nh: Pointer to head of the atomic notifier chain
diff --git a/patches/sched__Split_out_the_wakeup_state_check.patch b/patches/0003-sched-wakeup-Split-out-the-wakeup-__state-check.patch
index 9f665def419c..f4035110014e 100644
--- a/patches/sched__Split_out_the_wakeup_state_check.patch
+++ b/patches/0003-sched-wakeup-Split-out-the-wakeup-__state-check.patch
@@ -1,27 +1,29 @@
-Subject: sched: Split out the wakeup state check
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:43 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:27:40 +0200
+Subject: [PATCH 03/72] sched/wakeup: Split out the wakeup ->__state check
RT kernels have a slightly more complicated handling of wakeups due to
'sleeping' spin/rwlocks. If a task is blocked on such a lock then the
-original state of the task is preserved over the blocking and any regular
-(non lock related) wakeup has to be targeted at the saved state to ensure
-that these wakeups are not lost. Once the task acquired the lock it
-restores the task state from the saved state.
+original state of the task is preserved over the blocking period, and
+any regular (non lock related) wakeup has to be targeted at the
+saved state to ensure that these wakeups are not lost.
+
+Once the task acquires the lock it restores the task state from the saved state.
-To avoid cluttering try_to_wake_up() with that logic, split the wake up
+To avoid cluttering try_to_wake_up() with that logic, split the wakeup
state check out into an inline helper and use it at both places where
-task::state is checked against the state argument of try_to_wake_up().
+task::__state is checked against the state argument of try_to_wake_up().
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.088945085@linutronix.de
---
kernel/sched/core.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
----
+
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3562,6 +3562,22 @@ static void ttwu_queue(struct task_struc
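The helper split out by this patch ends up looking roughly like the snippet below on !PREEMPT_RT (the name matches what was merged upstream; the RT variant later grows the saved_state handling added further down this series):

/* kernel/sched/core.c (sketch of the !RT form) */
static __always_inline
bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
{
	if (READ_ONCE(p->__state) & state) {
		*success = 1;
		return true;
	}
	return false;
}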
diff --git a/patches/sched__Introduce_TASK_RTLOCK_WAIT.patch b/patches/0004-sched-wakeup-Introduce-the-TASK_RTLOCK_WAIT-state-bi.patch
index 6f064ffaa0b6..02c4f8056e46 100644
--- a/patches/sched__Introduce_TASK_RTLOCK_WAIT.patch
+++ b/patches/0004-sched-wakeup-Introduce-the-TASK_RTLOCK_WAIT-state-bi.patch
@@ -1,11 +1,9 @@
-Subject: sched: Introduce TASK_RTLOCK_WAIT
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:43 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:27:41 +0200
+Subject: [PATCH 04/72] sched/wakeup: Introduce the TASK_RTLOCK_WAIT state bit
RT kernels have an extra quirk for try_to_wake_up() to handle task state
-preservation across blocking on a 'sleeping' spin/rwlock.
+preservation across periods of blocking on a 'sleeping' spin/rwlock.
For this to function correctly and under all circumstances try_to_wake_up()
must be able to identify whether the wakeup is lock related or not and
@@ -16,32 +14,36 @@ try_to_wake_up() and just use TASK_UNINTERRUPTIBLE for the tasks wait state
and the try_to_wake_up() state argument.
This works in principle, but due to the fact that try_to_wake_up() cannot
-determine whether the task is waiting for a RT lock wakeup or for a regular
+determine whether the task is waiting for an RT lock wakeup or for a regular
wakeup it's suboptimal.
-RT kernels save the original task state when blocking on a RT lock and
+RT kernels save the original task state when blocking on an RT lock and
restore it when the lock has been acquired. Any non lock related wakeup is
checked against the saved state and if it matches the saved state is set to
running so that the wakeup is not lost when the state is restored.
-While the necessary logic for the wake_flag based solution is trivial the
+While the necessary logic for the wake_flag based solution is trivial, the
downside is that any regular wakeup with TASK_UNINTERRUPTIBLE in the state
argument set will wake the task despite the fact that it is still blocked
on the lock. That's not a fatal problem as the lock wait has do deal with
spurious wakeups anyway, but it introduces unnecessary latencies.
Introduce the TASK_RTLOCK_WAIT state bit which will be set when a task
-blocks on a RT lock.
+blocks on an RT lock.
-The lock wakeup will use wake_up_state(TASK_RTLOCK_WAIT) so both the
+The lock wakeup will use wake_up_state(TASK_RTLOCK_WAIT), so both the
waiting state and the wakeup state are distinguishable, which avoids
spurious wakeups and allows better analysis.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.144989915@linutronix.de
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
include/linux/sched.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
----
+
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -95,7 +95,9 @@ struct task_group;
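For orientation, the hunk above adds the new bit next to the existing task states; the values below are recalled from the upstream patch and should be treated as illustrative:

/* include/linux/sched.h (sketch) */
#define TASK_RTLOCK_WAIT	0x1000
#define TASK_STATE_MAX		0x2000

/* The RT lock slow path blocks in TASK_RTLOCK_WAIT and the unlocker wakes it
 * with exactly that state, e.g.:
 *	wake_up_state(waiter_task, TASK_RTLOCK_WAIT);
 * so lock wakeups and regular wakeups stay distinguishable.
 */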
diff --git a/patches/sched--Reorganize-current--state-helpers.patch b/patches/0005-sched-wakeup-Reorganize-the-current-__state-helpers.patch
index 6ec247c58a21..64c292ea335b 100644
--- a/patches/sched--Reorganize-current--state-helpers.patch
+++ b/patches/0005-sched-wakeup-Reorganize-the-current-__state-helpers.patch
@@ -1,15 +1,17 @@
-Subject: sched: Reorganize current::__state helpers
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue, 03 Aug 2021 21:39:32 +0200
+Date: Sun, 15 Aug 2021 23:27:43 +0200
+Subject: [PATCH 05/72] sched/wakeup: Reorganize the current::__state helpers
In order to avoid more duplicate implementations for the debug and
non-debug variants of the state change macros, split the debug portion out
-and make that conditional on CONFIG_DEBUG_ATOMIC_SLEEP.
+and make that conditional on CONFIG_DEBUG_ATOMIC_SLEEP=y.
Suggested-by: Waiman Long <longman@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V3: New patch.
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.200898048@linutronix.de
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
include/linux/sched.h | 48 +++++++++++++++++++++++-------------------------
1 file changed, 23 insertions(+), 25 deletions(-)
diff --git a/patches/sched__Prepare_for_RT_sleeping_spin_rwlocks.patch b/patches/0006-sched-wakeup-Prepare-for-RT-sleeping-spin-rwlocks.patch
index 506e5a4dfc0d..9877dc34acb5 100644
--- a/patches/sched__Prepare_for_RT_sleeping_spin_rwlocks.patch
+++ b/patches/0006-sched-wakeup-Prepare-for-RT-sleeping-spin-rwlocks.patch
@@ -1,22 +1,20 @@
-Subject: sched: Prepare for RT sleeping spin/rwlocks
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:44 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:27:44 +0200
+Subject: [PATCH 06/72] sched/wakeup: Prepare for RT sleeping spin/rwlocks
Waiting for spinlocks and rwlocks on non RT enabled kernels is task::state
preserving. Any wakeup which matches the state is valid.
RT enabled kernels substitutes them with 'sleeping' spinlocks. This creates
-an issue vs. task::state.
+an issue vs. task::__state.
-In order to block on the lock the task has to overwrite task::state and a
+In order to block on the lock, the task has to overwrite task::__state and a
consecutive wakeup issued by the unlocker sets the state back to
TASK_RUNNING. As a consequence the task loses the state which was set
before the lock acquire and also any regular wakeup targeted at the task
while it is blocked on the lock.
-To handle this gracefully add a 'saved_state' member to task_struct which
+To handle this gracefully, add a 'saved_state' member to task_struct which
is used in the following way:
1) When a task blocks on a 'sleeping' spinlock, the current state is saved
@@ -32,18 +30,21 @@ is used in the following way:
might have been woken up from the lock wait and has not yet restored
the saved state.
-To make it complete provide the necessary helpers to save and restore the
+To make it complete, provide the necessary helpers to save and restore the
saved state along with the necessary documentation how the RT lock blocking
is supposed to work.
For non-RT kernels there is no functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.258751046@linutronix.de
---
include/linux/sched.h | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++
kernel/sched/core.c | 33 +++++++++++++++++++++++++
2 files changed, 99 insertions(+)
----
+
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -143,9 +143,22 @@ struct task_group;
@@ -174,12 +175,12 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+#ifdef CONFIG_PREEMPT_RT
+ /*
+ * Saved state preserves the task state across blocking on
-+ * a RT lock. If the state matches, set p::saved_state to
++ * an RT lock. If the state matches, set p::saved_state to
+ * TASK_RUNNING, but do not wake the task because it waits
+ * for a lock wakeup. Also indicate success because from
+ * the regular waker's point of view this has succeeded.
+ *
-+ * After acquiring the lock the task will restore p::state
++ * After acquiring the lock the task will restore p::__state
+ * from p::saved_state which ensures that the regular
+ * wakeup is not lost. The restore will also set
+ * p::saved_state to TASK_RUNNING so any further tests will
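The numbered description above maps onto a pair of helpers added by this patch. A condensed sketch of their logic; the real macros additionally carry lockdep assertions and debug checks:

/* include/linux/sched.h (sketch) */

/* Save the current state and switch to TASK_RTLOCK_WAIT before blocking. */
#define current_save_and_set_rtlock_wait_state()			\
	do {								\
		raw_spin_lock(&current->pi_lock);			\
		current->saved_state = current->__state;		\
		WRITE_ONCE(current->__state, TASK_RTLOCK_WAIT);		\
		raw_spin_unlock(&current->pi_lock);			\
	} while (0)

/* After the lock is acquired, restore the saved state; a regular wakeup that
 * raced in meanwhile has already set saved_state to TASK_RUNNING. */
#define current_restore_rtlock_saved_state()				\
	do {								\
		raw_spin_lock(&current->pi_lock);			\
		WRITE_ONCE(current->__state, current->saved_state);	\
		current->saved_state = TASK_RUNNING;			\
		raw_spin_unlock(&current->pi_lock);			\
	} while (0)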
diff --git a/patches/sched__Rework_the___schedule_preempt_argument.patch b/patches/0007-sched-core-Rework-the-__schedule-preempt-argument.patch
index 0ca621ba4d72..5f16bf2473ca 100644
--- a/patches/sched__Rework_the___schedule_preempt_argument.patch
+++ b/patches/0007-sched-core-Rework-the-__schedule-preempt-argument.patch
@@ -1,31 +1,29 @@
-Subject: sched: Rework the __schedule() preempt argument
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:45 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:27:46 +0200
+Subject: [PATCH 07/72] sched/core: Rework the __schedule() preempt argument
PREEMPT_RT needs to hand a special state into __schedule() when a task
blocks on a 'sleeping' spin/rwlock. This is required to handle
rcu_note_context_switch() correctly without having special casing in the
RCU code. From an RCU point of view the blocking on the sleeping spinlock
-is equivalent to preemption because the task might be in a read side
+is equivalent to preemption, because the task might be in a read side
critical section.
schedule_debug() also has a check which would trigger with the !preempt
case, but that could be handled differently.
To avoid adding another argument and extra checks which cannot be optimized
-out by the compiler the following solution has been chosen:
+out by the compiler, the following solution has been chosen:
- Replace the boolean 'preempt' argument with an unsigned integer
'sched_mode' argument and define constants to hand in:
- (0 == No preemption, 1 = preemption).
+ (0 == no preemption, 1 = preemption).
- - Add two masks to apply on that mode one for the debug/rcu invocations
+ - Add two masks to apply on that mode: one for the debug/rcu invocations,
and one for the actual scheduling decision.
- For a non RT kernel these masks are UINT_MAX, i.e. all bits are set
- which allows the compiler to optimize the AND operation out because it is
+ For a non RT kernel these masks are UINT_MAX, i.e. all bits are set,
+ which allows the compiler to optimize the AND operation out, because it is
not masking out anything. IOW, it's not different from the boolean.
RT enabled kernels will define these masks separately.
@@ -33,12 +31,13 @@ out by the compiler the following solution has been chosen:
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: Simplify the masking logic
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.315473019@linutronix.de
---
kernel/sched/core.c | 34 +++++++++++++++++++++++-----------
1 file changed, 23 insertions(+), 11 deletions(-)
----
+
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5820,6 +5820,18 @@ pick_next_task(struct rq *rq, struct tas
@@ -48,8 +47,8 @@ V2: Simplify the masking logic
+ * Constants for the sched_mode argument of __schedule().
+ *
+ * The mode argument allows RT enabled kernels to differentiate a
-+ * preemption from blocking on an 'sleeping' spin/rwlock. Note, that
-+ * SM_MASK_PREEMPT for !RT has all bits set which allows the compiler to
++ * preemption from blocking on an 'sleeping' spin/rwlock. Note that
++ * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
+ * optimize the AND operation out and just check for zero.
+ */
+#define SM_NONE 0x0
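For completeness, the constant the hunk above starts to add is part of a small set; once the RT definitions from the following patches are in place it reads roughly as below (sketch):

/* kernel/sched/core.c (sketch) */
#define SM_NONE			0x0
#define SM_PREEMPT		0x1
#define SM_RTLOCK_WAIT		0x2

#ifndef CONFIG_PREEMPT_RT
# define SM_MASK_PREEMPT	(~0U)	/* all bits set: the AND is optimized away */
#else
# define SM_MASK_PREEMPT	SM_PREEMPT
#endif

/* Callers hand the mode in explicitly, e.g. __schedule(SM_PREEMPT) from the
 * preemption paths and __schedule(SM_NONE) from plain schedule(). */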
diff --git a/patches/sched__Provide_schedule_point_for_RT_locks.patch b/patches/0008-sched-core-Provide-a-scheduling-point-for-RT-locks.patch
index 59dae55ab72d..61f58a20f59b 100644
--- a/patches/sched__Provide_schedule_point_for_RT_locks.patch
+++ b/patches/0008-sched-core-Provide-a-scheduling-point-for-RT-locks.patch
@@ -1,32 +1,31 @@
-Subject: sched: Provide schedule point for RT locks
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:45 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:27:48 +0200
+Subject: [PATCH 08/72] sched/core: Provide a scheduling point for RT locks
RT enabled kernels substitute spin/rwlocks with 'sleeping' variants based
-on rtmutex. Blocking on such a lock is similar to preemption versus:
+on rtmutexes. Blocking on such a lock is similar to preemption versus:
- - I/O scheduling and worker handling because these functions might block
- on another substituted lock or come from a lock contention within these
+ - I/O scheduling and worker handling, because these functions might block
+ on another substituted lock, or come from a lock contention within these
functions.
- - RCU considers this like a preemption because the task might be in a read
+ - RCU considers this like a preemption, because the task might be in a read
side critical section.
-Add a separate scheduling point for this and hand a new scheduling mode
-argument to __schedule() which allows along with separate mode masks to
-handle this gracefully from within the scheduler without proliferating that
+Add a separate scheduling point for this, and hand a new scheduling mode
+argument to __schedule() which allows, along with separate mode masks, to
+handle this gracefully from within the scheduler, without proliferating that
to other subsystems like RCU.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: Adopt to the simplified mask logic
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.372319055@linutronix.de
---
include/linux/sched.h | 3 +++
kernel/sched/core.c | 20 +++++++++++++++++++-
2 files changed, 22 insertions(+), 1 deletion(-)
----
+
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -288,6 +288,9 @@ extern long schedule_timeout_idle(long t
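The scheduling point this patch adds becomes a small RT-only entry point used by the rtmutex-based spin/rwlock slow path instead of schedule(). A sketch of its core loop as it reads upstream:

/* kernel/sched/core.c (sketch) */
#ifdef CONFIG_PREEMPT_RT
void __sched notrace schedule_rtlock(void)
{
	do {
		preempt_disable();
		__schedule(SM_RTLOCK_WAIT);
		sched_preempt_enable_no_resched();
	} while (need_resched());
}
NOKPROBE_SYMBOL(schedule_rtlock);
#endif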
diff --git a/patches/0009-lockdep-selftests-Use-correct-depmap-for-local_lock-.patch b/patches/0009-lockdep-selftests-Use-correct-depmap-for-local_lock-.patch
deleted file mode 100644
index 060136340be8..000000000000
--- a/patches/0009-lockdep-selftests-Use-correct-depmap-for-local_lock-.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Thu, 12 Aug 2021 16:28:28 +0200
-Subject: [PATCH 09/10] lockdep/selftests: Use correct depmap for local_lock on
- RT
-
-The local_lock_t structure on PREEMPT_RT does not provide a dep_map
-member. The dep_map is available in the inner lock member.
-
-Use the lock.dep_map for local_lock on PREEMPT_RT.
-
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
----
- lib/locking-selftest.c | 6 +++++-
- 1 file changed, 5 insertions(+), 1 deletion(-)
-
---- a/lib/locking-selftest.c
-+++ b/lib/locking-selftest.c
-@@ -1351,7 +1351,11 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_
- # define I_MUTEX(x) lockdep_reset_lock(&mutex_##x.dep_map)
- # define I_RWSEM(x) lockdep_reset_lock(&rwsem_##x.dep_map)
- # define I_WW(x) lockdep_reset_lock(&x.dep_map)
--# define I_LOCAL_LOCK(x) lockdep_reset_lock(this_cpu_ptr(&local_##x.dep_map))
-+# ifdef CONFIG_PREEMPT_RT
-+# define I_LOCAL_LOCK(x) lockdep_reset_lock(this_cpu_ptr(&local_##x.lock.dep_map))
-+# else
-+# define I_LOCAL_LOCK(x) lockdep_reset_lock(this_cpu_ptr(&local_##x.dep_map))
-+# endif
- #ifdef CONFIG_RT_MUTEXES
- # define I_RTMUTEX(x) lockdep_reset_lock(&rtmutex_##x.dep_map)
- #endif
diff --git a/patches/sched_wake_q__Provide_WAKE_Q_HEAD_INITIALIZER.patch b/patches/0009-sched-wake_q-Provide-WAKE_Q_HEAD_INITIALIZER.patch
index db559757a101..2ae0728ff8f7 100644
--- a/patches/sched_wake_q__Provide_WAKE_Q_HEAD_INITIALIZER.patch
+++ b/patches/0009-sched-wake_q-Provide-WAKE_Q_HEAD_INITIALIZER.patch
@@ -1,18 +1,19 @@
-Subject: sched/wake_q: Provide WAKE_Q_HEAD_INITIALIZER
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:45 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:27:49 +0200
+Subject: [PATCH 09/72] sched/wake_q: Provide WAKE_Q_HEAD_INITIALIZER()
The RT specific spin/rwlock implementation requires special handling of the
-to be woken waiters. Provide a WAKE_Q_HEAD_INITIALIZER which can be used by
-the rtmutex code to implement a RT aware wake_q derivative.
+to be woken waiters. Provide a WAKE_Q_HEAD_INITIALIZER(), which can be used by
+the rtmutex code to implement an RT aware wake_q derivative.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.429918071@linutronix.de
---
include/linux/sched/wake_q.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
----
+
--- a/include/linux/sched/wake_q.h
+++ b/include/linux/sched/wake_q.h
@@ -42,8 +42,11 @@ struct wake_q_head {
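The new initializer mirrors what DEFINE_WAKE_Q() already expands to, so the rtmutex code can initialize a wake_q_head embedded in its own RT-aware derivative. A sketch of the resulting header snippet:

/* include/linux/sched/wake_q.h (sketch) */
#define WAKE_Q_TAIL ((struct wake_q_node *) 0x01)

#define WAKE_Q_HEAD_INITIALIZER(name)				\
	{ WAKE_Q_TAIL, &name.first }

#define DEFINE_WAKE_Q(name)					\
	struct wake_q_head name = WAKE_Q_HEAD_INITIALIZER(name)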
diff --git a/patches/0010-lockdep-selftests-Adapt-ww-tests-for-PREEMPT_RT.patch b/patches/0010-lockdep-selftests-Adapt-ww-tests-for-PREEMPT_RT.patch
index 7f39759421e5..e561a7d778c4 100644
--- a/patches/0010-lockdep-selftests-Adapt-ww-tests-for-PREEMPT_RT.patch
+++ b/patches/0010-lockdep-selftests-Adapt-ww-tests-for-PREEMPT_RT.patch
@@ -19,7 +19,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
-@@ -1704,6 +1704,20 @@ static void ww_test_fail_acquire(void)
+@@ -1700,6 +1700,20 @@ static void ww_test_fail_acquire(void)
#endif
}
@@ -40,7 +40,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static void ww_test_normal(void)
{
int ret;
-@@ -1718,50 +1732,50 @@ static void ww_test_normal(void)
+@@ -1714,50 +1728,50 @@ static void ww_test_normal(void)
/* mutex_lock (and indirectly, mutex_lock_nested) */
o.ctx = (void *)~0UL;
@@ -104,7 +104,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
WARN_ON(o.ctx != (void *)~0UL);
}
-@@ -1774,7 +1788,7 @@ static void ww_test_two_contexts(void)
+@@ -1770,7 +1784,7 @@ static void ww_test_two_contexts(void)
static void ww_test_diff_class(void)
{
WWAI(&t);
@@ -113,7 +113,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
t.ww_class = NULL;
#endif
WWL(&o, &t);
-@@ -1838,7 +1852,7 @@ static void ww_test_edeadlk_normal(void)
+@@ -1834,7 +1848,7 @@ static void ww_test_edeadlk_normal(void)
{
int ret;
@@ -122,7 +122,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
o2.ctx = &t2;
mutex_release(&o2.base.dep_map, _THIS_IP_);
-@@ -1854,7 +1868,7 @@ static void ww_test_edeadlk_normal(void)
+@@ -1850,7 +1864,7 @@ static void ww_test_edeadlk_normal(void)
o2.ctx = NULL;
mutex_acquire(&o2.base.dep_map, 0, 1, _THIS_IP_);
@@ -131,7 +131,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
WWU(&o);
WWL(&o2, &t);
-@@ -1864,7 +1878,7 @@ static void ww_test_edeadlk_normal_slow(
+@@ -1860,7 +1874,7 @@ static void ww_test_edeadlk_normal_slow(
{
int ret;
@@ -140,7 +140,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
mutex_release(&o2.base.dep_map, _THIS_IP_);
o2.ctx = &t2;
-@@ -1880,7 +1894,7 @@ static void ww_test_edeadlk_normal_slow(
+@@ -1876,7 +1890,7 @@ static void ww_test_edeadlk_normal_slow(
o2.ctx = NULL;
mutex_acquire(&o2.base.dep_map, 0, 1, _THIS_IP_);
@@ -149,7 +149,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
WWU(&o);
ww_mutex_lock_slow(&o2, &t);
-@@ -1890,7 +1904,7 @@ static void ww_test_edeadlk_no_unlock(vo
+@@ -1886,7 +1900,7 @@ static void ww_test_edeadlk_no_unlock(vo
{
int ret;
@@ -158,7 +158,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
o2.ctx = &t2;
mutex_release(&o2.base.dep_map, _THIS_IP_);
-@@ -1906,7 +1920,7 @@ static void ww_test_edeadlk_no_unlock(vo
+@@ -1902,7 +1916,7 @@ static void ww_test_edeadlk_no_unlock(vo
o2.ctx = NULL;
mutex_acquire(&o2.base.dep_map, 0, 1, _THIS_IP_);
@@ -167,7 +167,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
WWL(&o2, &t);
}
-@@ -1915,7 +1929,7 @@ static void ww_test_edeadlk_no_unlock_sl
+@@ -1911,7 +1925,7 @@ static void ww_test_edeadlk_no_unlock_sl
{
int ret;
@@ -176,7 +176,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
mutex_release(&o2.base.dep_map, _THIS_IP_);
o2.ctx = &t2;
-@@ -1931,7 +1945,7 @@ static void ww_test_edeadlk_no_unlock_sl
+@@ -1927,7 +1941,7 @@ static void ww_test_edeadlk_no_unlock_sl
o2.ctx = NULL;
mutex_acquire(&o2.base.dep_map, 0, 1, _THIS_IP_);
@@ -185,7 +185,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
ww_mutex_lock_slow(&o2, &t);
}
-@@ -1940,7 +1954,7 @@ static void ww_test_edeadlk_acquire_more
+@@ -1936,7 +1950,7 @@ static void ww_test_edeadlk_acquire_more
{
int ret;
@@ -194,7 +194,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
mutex_release(&o2.base.dep_map, _THIS_IP_);
o2.ctx = &t2;
-@@ -1961,7 +1975,7 @@ static void ww_test_edeadlk_acquire_more
+@@ -1957,7 +1971,7 @@ static void ww_test_edeadlk_acquire_more
{
int ret;
@@ -203,7 +203,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
mutex_release(&o2.base.dep_map, _THIS_IP_);
o2.ctx = &t2;
-@@ -1982,11 +1996,11 @@ static void ww_test_edeadlk_acquire_more
+@@ -1978,11 +1992,11 @@ static void ww_test_edeadlk_acquire_more
{
int ret;
@@ -217,7 +217,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
mutex_release(&o3.base.dep_map, _THIS_IP_);
o3.ctx = &t2;
-@@ -2008,11 +2022,11 @@ static void ww_test_edeadlk_acquire_more
+@@ -2004,11 +2018,11 @@ static void ww_test_edeadlk_acquire_more
{
int ret;
@@ -231,7 +231,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
mutex_release(&o3.base.dep_map, _THIS_IP_);
o3.ctx = &t2;
-@@ -2033,7 +2047,7 @@ static void ww_test_edeadlk_acquire_wron
+@@ -2029,7 +2043,7 @@ static void ww_test_edeadlk_acquire_wron
{
int ret;
@@ -240,7 +240,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
mutex_release(&o2.base.dep_map, _THIS_IP_);
o2.ctx = &t2;
-@@ -2058,7 +2072,7 @@ static void ww_test_edeadlk_acquire_wron
+@@ -2054,7 +2068,7 @@ static void ww_test_edeadlk_acquire_wron
{
int ret;
diff --git a/patches/media_atomisp_Use_lockdep_instead_of_mutex_is_locked_.patch b/patches/0010-media-atomisp-Use-lockdep-instead-of-mutex_is_locked.patch
index 70f964e3254a..21f5065bdc00 100644
--- a/patches/media_atomisp_Use_lockdep_instead_of_mutex_is_locked_.patch
+++ b/patches/0010-media-atomisp-Use-lockdep-instead-of-mutex_is_locked.patch
@@ -1,18 +1,15 @@
From: Peter Zijlstra <peterz@infradead.org>
-Subject: media/atomisp: Use lockdep instead of *mutex_is_locked()
-Date: Wed, 14 Jul 2021 12:07:19 +0200
-
-From: Peter Zijlstra <peterz@infradead.org>
-
-Subject: media/atomisp: Use lockdep instead of *mutex_is_locked()
+Date: Sun, 15 Aug 2021 23:27:51 +0200
+Subject: [PATCH 10/72] media/atomisp: Use lockdep instead of
+ *mutex_is_locked()
The only user of rt_mutex_is_locked() is an anti-pattern, remove it.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Link: https://lore.kernel.org/r/20210714100719.GA11408@worktop.programming.kicks-ass.net
----
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.491442626@linutronix.de
---
drivers/staging/media/atomisp/pci/atomisp_ioctl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/patches/rtmutex--Remove-rt_mutex_is_locked--.patch b/patches/0011-locking-rtmutex-Remove-rt_mutex_is_locked.patch
index 69d3a1157c88..213aeb84c024 100644
--- a/patches/rtmutex--Remove-rt_mutex_is_locked--.patch
+++ b/patches/0011-locking-rtmutex-Remove-rt_mutex_is_locked.patch
@@ -1,13 +1,14 @@
-Subject: rtmutex: Remove rt_mutex_is_locked()
From: Peter Zijlstra <peterz@infradead.org>
-Date: Wed, 14 Jul 2021 13:38:08 +0200
+Date: Sun, 15 Aug 2021 23:27:52 +0200
+Subject: [PATCH 11/72] locking/rtmutex: Remove rt_mutex_is_locked()
-From: Peter Zijlstra <peterz@infradead.org>
-
-No more users.
+There are no more users left.
-Signed-off-by: Peter Zijlstra <peterz@infradead.org>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.552218335@linutronix.de
---
include/linux/rtmutex.h | 11 -----------
1 file changed, 11 deletions(-)
diff --git a/patches/rtmutex__Convert_macros_to_inlines.patch b/patches/0012-locking-rtmutex-Convert-macros-to-inlines.patch
index 681b5e0335c1..0cb39fe46e32 100644
--- a/patches/rtmutex__Convert_macros_to_inlines.patch
+++ b/patches/0012-locking-rtmutex-Convert-macros-to-inlines.patch
@@ -1,17 +1,18 @@
-Subject: rtmutex: Convert macros to inlines
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Mon Apr 26 09:40:07 2021 +0200
+Date: Sun, 15 Aug 2021 23:27:54 +0200
+Subject: [PATCH 12/72] locking/rtmutex: Convert macros to inlines
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
-Inlines are typesafe...
+Inlines are type-safe...
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.610830960@linutronix.de
---
kernel/locking/rtmutex.c | 31 +++++++++++++++++++++++++++----
1 file changed, 27 insertions(+), 4 deletions(-)
----
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -141,8 +141,19 @@ static __always_inline void fixup_rt_mut
diff --git a/patches/rtmutex--Switch-to-try_cmpxchg--.patch b/patches/0013-locking-rtmutex-Switch-to-from-cmpxchg_-to-try_cmpxc.patch
index 0ac6441f3e2d..8c05bbc1046b 100644
--- a/patches/rtmutex--Switch-to-try_cmpxchg--.patch
+++ b/patches/0013-locking-rtmutex-Switch-to-from-cmpxchg_-to-try_cmpxc.patch
@@ -1,13 +1,15 @@
-Subject: rtmutex: Switch to try_cmpxchg()
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 12 Jul 2021 14:44:55 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:27:55 +0200
+Subject: [PATCH 13/72] locking/rtmutex: Switch to from cmpxchg_*() to
+ try_cmpxchg_*()
Allows the compiler to generate better code depending on the architecture.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.668958502@linutronix.de
---
kernel/locking/rtmutex.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
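The change itself is a mechanical substitution in the ownership fast-path helpers; roughly (sketch, names per the pre-split rtmutex code):

/* kernel/locking/rtmutex.c (sketch) */
static __always_inline bool rt_mutex_cmpxchg_acquire(struct rt_mutex *lock,
						     struct task_struct *old,
						     struct task_struct *new)
{
	/* was: return cmpxchg_acquire(&lock->owner, old, new) == old; */
	return try_cmpxchg_acquire(&lock->owner, &old, new);
}

try_cmpxchg_*() passes 'old' by reference and updates it on failure, which lets the compiler reuse the flags set by the cmpxchg instruction instead of emitting a separate compare on architectures such as x86.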
diff --git a/patches/0013-mm-slub-do-initial-checks-in-___slab_alloc-with-irqs.patch b/patches/0013-mm-slub-do-initial-checks-in-___slab_alloc-with-irqs.patch
index 06584e4fd355..d9c1bc411bab 100644
--- a/patches/0013-mm-slub-do-initial-checks-in-___slab_alloc-with-irqs.patch
+++ b/patches/0013-mm-slub-do-initial-checks-in-___slab_alloc-with-irqs.patch
@@ -10,16 +10,60 @@ slab and it's suitable for our allocation.
Now we have to recheck c->page after actually disabling irqs as an allocation
in irq handler might have replaced it.
+Because we call pfmemalloc_match() as one of the checks, we might hit
+VM_BUG_ON_PAGE(!PageSlab(page)) in PageSlabPfmemalloc in case we get
+interrupted and the page is freed. Thus introduce a pfmemalloc_match_unsafe()
+variant that lacks the PageSlab check.
+
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
- mm/slub.c | 41 ++++++++++++++++++++++++++++++++---------
- 1 file changed, 32 insertions(+), 9 deletions(-)
+ include/linux/page-flags.h | 9 +++++++
+ mm/slub.c | 54 +++++++++++++++++++++++++++++++++++++--------
+ 2 files changed, 54 insertions(+), 9 deletions(-)
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -815,6 +815,15 @@ static inline int PageSlabPfmemalloc(str
+ return PageActive(page);
+ }
+
++/*
++ * A version of PageSlabPfmemalloc() for opportunistic checks where the page
++ * might have been freed under us and not be a PageSlab anymore.
++ */
++static inline int __PageSlabPfmemalloc(struct page *page)
++{
++ return PageActive(page);
++}
++
+ static inline void SetPageSlabPfmemalloc(struct page *page)
+ {
+ VM_BUG_ON_PAGE(!PageSlab(page), page);
--- a/mm/slub.c
+++ b/mm/slub.c
-@@ -2668,8 +2668,9 @@ static void *___slab_alloc(struct kmem_c
+@@ -2607,6 +2607,19 @@ static inline bool pfmemalloc_match(stru
+ }
+
+ /*
++ * A variant of pfmemalloc_match() that tests page flags without asserting
++ * PageSlab. Intended for opportunistic checks before taking a lock and
++ * rechecking that nobody else freed the page under us.
++ */
++static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags)
++{
++ if (unlikely(__PageSlabPfmemalloc(page)))
++ return gfp_pfmemalloc_allowed(gfpflags);
++
++ return true;
++}
++
++/*
+ * Check the page->freelist of a page and either transfer the freelist to the
+ * per cpu freelist or deactivate the page.
+ *
+@@ -2668,8 +2681,9 @@ static void *___slab_alloc(struct kmem_c
stat(s, ALLOC_SLOWPATH);
@@ -31,7 +75,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (!page) {
/*
* if the node is not online or has no normal memory, just
-@@ -2678,6 +2679,11 @@ static void *___slab_alloc(struct kmem_c
+@@ -2678,6 +2692,11 @@ static void *___slab_alloc(struct kmem_c
if (unlikely(node != NUMA_NO_NODE &&
!node_isset(node, slab_nodes)))
node = NUMA_NO_NODE;
@@ -43,7 +87,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto new_slab;
}
redo:
-@@ -2692,8 +2698,7 @@ static void *___slab_alloc(struct kmem_c
+@@ -2692,8 +2711,7 @@ static void *___slab_alloc(struct kmem_c
goto redo;
} else {
stat(s, ALLOC_NODE_MISMATCH);
@@ -53,7 +97,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
}
-@@ -2702,12 +2707,15 @@ static void *___slab_alloc(struct kmem_c
+@@ -2702,12 +2720,15 @@ static void *___slab_alloc(struct kmem_c
* PFMEMALLOC but right now, we are losing the pfmemalloc
* information when the page leaves the per-cpu allocator
*/
@@ -61,7 +105,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
- deactivate_slab(s, page, c->freelist, c);
- goto new_slab;
- }
-+ if (unlikely(!pfmemalloc_match(page, gfpflags)))
++ if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags)))
+ goto deactivate_slab;
- /* must check again c->freelist in case of cpu migration or IRQ */
@@ -74,7 +118,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
freelist = c->freelist;
if (freelist)
goto load_freelist;
-@@ -2723,6 +2731,9 @@ static void *___slab_alloc(struct kmem_c
+@@ -2723,6 +2744,9 @@ static void *___slab_alloc(struct kmem_c
stat(s, ALLOC_REFILL);
load_freelist:
@@ -84,7 +128,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* freelist is pointing to the list of objects to be used.
* page is pointing to the page from which the objects are obtained.
-@@ -2734,11 +2745,23 @@ static void *___slab_alloc(struct kmem_c
+@@ -2734,11 +2758,23 @@ static void *___slab_alloc(struct kmem_c
local_irq_restore(flags);
return freelist;
diff --git a/patches/rtmutex__Split_API_and_implementation.patch b/patches/0014-locking-rtmutex-Split-API-from-implementation.patch
index 0011c5683845..dcf1e8553ccb 100644
--- a/patches/rtmutex__Split_API_and_implementation.patch
+++ b/patches/0014-locking-rtmutex-Split-API-from-implementation.patch
@@ -1,13 +1,15 @@
-Subject: rtmutex: Split API and implementation
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:46 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:27:57 +0200
+Subject: [PATCH 14/72] locking/rtmutex: Split API from implementation
Prepare for reusing the inner functions of rtmutex for RT lock
-substitutions.
+substitutions: introduce kernel/locking/rtmutex_api.c and move
+them there.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.726560996@linutronix.de
---
kernel/locking/Makefile | 2
kernel/locking/rtmutex.c | 479 +---------------------------------------
@@ -15,7 +17,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
kernel/locking/rtmutex_common.h | 78 +++---
4 files changed, 514 insertions(+), 498 deletions(-)
create mode 100644 kernel/locking/rtmutex_api.c
----
+
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -24,7 +24,7 @@ obj-$(CONFIG_SMP) += spinlock.o
diff --git a/patches/0014-mm-slub-move-disabling-irqs-closer-to-get_partial-in.patch b/patches/0014-mm-slub-move-disabling-irqs-closer-to-get_partial-in.patch
index 12c3971fc931..a4baed5461da 100644
--- a/patches/0014-mm-slub-move-disabling-irqs-closer-to-get_partial-in.patch
+++ b/patches/0014-mm-slub-move-disabling-irqs-closer-to-get_partial-in.patch
@@ -15,7 +15,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/mm/slub.c
+++ b/mm/slub.c
-@@ -2679,11 +2679,6 @@ static void *___slab_alloc(struct kmem_c
+@@ -2692,11 +2692,6 @@ static void *___slab_alloc(struct kmem_c
if (unlikely(node != NUMA_NO_NODE &&
!node_isset(node, slab_nodes)))
node = NUMA_NO_NODE;
@@ -27,7 +27,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto new_slab;
}
redo:
-@@ -2724,6 +2719,7 @@ static void *___slab_alloc(struct kmem_c
+@@ -2737,6 +2732,7 @@ static void *___slab_alloc(struct kmem_c
if (!freelist) {
c->page = NULL;
@@ -35,7 +35,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
stat(s, DEACTIVATE_BYPASS);
goto new_slab;
}
-@@ -2753,12 +2749,19 @@ static void *___slab_alloc(struct kmem_c
+@@ -2766,12 +2762,19 @@ static void *___slab_alloc(struct kmem_c
goto reread_page;
}
deactivate_slab(s, page, c->freelist, c);
@@ -57,7 +57,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
page = c->page = slub_percpu_partial(c);
slub_set_percpu_partial(c, page);
local_irq_restore(flags);
-@@ -2766,6 +2769,16 @@ static void *___slab_alloc(struct kmem_c
+@@ -2779,6 +2782,16 @@ static void *___slab_alloc(struct kmem_c
goto redo;
}
@@ -74,7 +74,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
freelist = get_partial(s, gfpflags, node, &page);
if (freelist) {
c->page = page;
-@@ -2798,15 +2811,18 @@ static void *___slab_alloc(struct kmem_c
+@@ -2811,15 +2824,18 @@ static void *___slab_alloc(struct kmem_c
check_new_page:
if (kmem_cache_debug(s)) {
diff --git a/patches/rtmutex--Split-out-the-inner-parts-of-struct-rtmutex.patch b/patches/0015-locking-rtmutex-Split-out-the-inner-parts-of-struct-.patch
index 10d3e179aeed..9aa6ff3e2584 100644
--- a/patches/rtmutex--Split-out-the-inner-parts-of-struct-rtmutex.patch
+++ b/patches/0015-locking-rtmutex-Split-out-the-inner-parts-of-struct-.patch
@@ -1,20 +1,20 @@
-Subject: rtmutex: Split out the inner parts of struct rtmutex
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Wed, 14 Jul 2021 15:30:47 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:27:58 +0200
+Subject: [PATCH 15/72] locking/rtmutex: Split out the inner parts of 'struct
+ rtmutex'
RT builds substitutions for rwsem, mutex, spinlock and rwlock around
rtmutexes. Split the inner working out so each lock substitution can use
-them with the appropriate lockdep annotations. This avoid having an extra
+them with the appropriate lockdep annotations. This avoids having an extra
unused lockdep map in the wrapped rtmutex.
No functional change.
-Signed-off-by: Peter Zijlstra <peterz@infradead.org>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: New patch
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.784739994@linutronix.de
---
include/linux/rtmutex.h | 23 ++++++++++----
kernel/futex.c | 4 +-
@@ -618,7 +618,7 @@ V2: New patch
}
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
-@@ -558,7 +558,7 @@ rcu_preempt_deferred_qs_irqrestore(struc
+@@ -588,7 +588,7 @@ rcu_preempt_deferred_qs_irqrestore(struc
WRITE_ONCE(rnp->exp_tasks, np);
if (IS_ENABLED(CONFIG_RCU_BOOST)) {
/* Snapshot ->boost_mtx ownership w/rnp->lock held. */
@@ -627,7 +627,7 @@ V2: New patch
if (&t->rcu_node_entry == rnp->boost_tasks)
WRITE_ONCE(rnp->boost_tasks, np);
}
-@@ -585,7 +585,7 @@ rcu_preempt_deferred_qs_irqrestore(struc
+@@ -615,7 +615,7 @@ rcu_preempt_deferred_qs_irqrestore(struc
/* Unboost if we were boosted. */
if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
@@ -636,7 +636,7 @@ V2: New patch
/*
* If this was the last task on the expedited lists,
-@@ -1082,7 +1082,7 @@ static int rcu_boost(struct rcu_node *rn
+@@ -1112,7 +1112,7 @@ static int rcu_boost(struct rcu_node *rn
* section.
*/
t = container_of(tb, struct task_struct, rcu_node_entry);
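
The shape of the split is worth spelling out: the inner state moves into a plain rt_mutex_base, and each lock substitution (mutex, rwsem, spinlock, rwlock) wraps it together with its own lockdep map. A hedged sketch of the resulting layout, reconstructed from the changelog rather than copied from this diff (field names are my reconstruction):

  /* Inner part: only what the locking algorithm itself needs. */
  struct rt_mutex_base {
          raw_spinlock_t          wait_lock;
          struct rb_root_cached   waiters;        /* waiters, sorted by priority */
          struct task_struct      *owner;
  };

  /* The public rtmutex becomes a thin wrapper carrying its own lockdep map. */
  struct rt_mutex {
          struct rt_mutex_base    rtmutex;
  #ifdef CONFIG_DEBUG_LOCK_ALLOC
          struct lockdep_map      dep_map;
  #endif
  };

The futex and RCU tree_plugin.h hunks above are the fallout of that wrapping: the callers now operate on the embedded rt_mutex_base instead of the outer structure.
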
diff --git a/patches/0015-mm-slub-restore-irqs-around-calling-new_slab.patch b/patches/0015-mm-slub-restore-irqs-around-calling-new_slab.patch
index 8b51e8bea23d..09feac10c056 100644
--- a/patches/0015-mm-slub-restore-irqs-around-calling-new_slab.patch
+++ b/patches/0015-mm-slub-restore-irqs-around-calling-new_slab.patch
@@ -34,7 +34,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (!page)
return NULL;
-@@ -2785,16 +2780,17 @@ static void *___slab_alloc(struct kmem_c
+@@ -2798,16 +2793,17 @@ static void *___slab_alloc(struct kmem_c
goto check_new_page;
}
diff --git a/patches/locking_rtmutex__Provide_rt_mutex_slowlock_locked.patch b/patches/0016-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch
index 2a419a3b26d5..7fded673d146 100644
--- a/patches/locking_rtmutex__Provide_rt_mutex_slowlock_locked.patch
+++ b/patches/0016-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch
@@ -1,21 +1,20 @@
-Subject: locking/rtmutex: Provide rt_mutex_slowlock_locked()
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:46 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:00 +0200
+Subject: [PATCH 16/72] locking/rtmutex: Provide rt_mutex_slowlock_locked()
Split the inner workings of rt_mutex_slowlock() out into a separate
-function which can be reused by the upcoming RT lock substitutions,
+function, which can be reused by the upcoming RT lock substitutions,
e.g. for rw_semaphores.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: Add the dropped debug_rt_mutex_free_waiter() - Valentin
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.841971086@linutronix.de
---
kernel/locking/rtmutex.c | 100 ++++++++++++++++++++++++-------------------
kernel/locking/rtmutex_api.c | 2
2 files changed, 59 insertions(+), 43 deletions(-)
----
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1106,7 +1106,7 @@ static void __sched remove_waiter(struct
diff --git a/patches/0016-mm-slub-validate-slab-from-partial-list-or-page-allo.patch b/patches/0016-mm-slub-validate-slab-from-partial-list-or-page-allo.patch
index deb450363a76..3926e84c00e7 100644
--- a/patches/0016-mm-slub-validate-slab-from-partial-list-or-page-allo.patch
+++ b/patches/0016-mm-slub-validate-slab-from-partial-list-or-page-allo.patch
@@ -19,7 +19,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/mm/slub.c
+++ b/mm/slub.c
-@@ -2775,10 +2775,8 @@ static void *___slab_alloc(struct kmem_c
+@@ -2788,10 +2788,8 @@ static void *___slab_alloc(struct kmem_c
lockdep_assert_irqs_disabled();
freelist = get_partial(s, gfpflags, node, &page);
@@ -31,7 +31,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
local_irq_restore(flags);
put_cpu_ptr(s->cpu_slab);
-@@ -2791,9 +2789,6 @@ static void *___slab_alloc(struct kmem_c
+@@ -2804,9 +2802,6 @@ static void *___slab_alloc(struct kmem_c
}
local_irq_save(flags);
@@ -41,7 +41,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* No other reference to the page yet so we can
* muck around with it freely without cmpxchg
-@@ -2802,14 +2797,12 @@ static void *___slab_alloc(struct kmem_c
+@@ -2815,14 +2810,12 @@ static void *___slab_alloc(struct kmem_c
page->freelist = NULL;
stat(s, ALLOC_SLAB);
@@ -56,7 +56,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
local_irq_restore(flags);
goto new_slab;
} else {
-@@ -2828,10 +2821,18 @@ static void *___slab_alloc(struct kmem_c
+@@ -2841,10 +2834,18 @@ static void *___slab_alloc(struct kmem_c
*/
goto return_single;
diff --git a/patches/rtmutex--Provide-rt_mutex_base_is_locked--.patch b/patches/0017-locking-rtmutex-Provide-rt_mutex_base_is_locked.patch
index 813be494c14a..ee54f8b125ca 100644
--- a/patches/rtmutex--Provide-rt_mutex_base_is_locked--.patch
+++ b/patches/0017-locking-rtmutex-Provide-rt_mutex_base_is_locked.patch
@@ -1,13 +1,14 @@
-Subject: rtmutex: Provide rt_mutex_base_is_locked()
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Fri, 16 Jul 2021 16:21:34 +0200
+Date: Sun, 15 Aug 2021 23:28:02 +0200
+Subject: [PATCH 17/72] locking/rtmutex: Provide rt_mutex_base_is_locked()
-Provide rt_mutex_base_is_locked() which will be used for various wrapped
+Provide rt_mutex_base_is_locked(), which will be used for various wrapped
locking primitives for RT.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V4: Use READ_ONCE() - Davidlohr
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.899572818@linutronix.de
---
include/linux/rtmutex.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
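
The helper itself is tiny; a sketch of what it plausibly looks like, reconstructed rather than taken from this diff, assuming the owner-word convention of rt_mutex_base:

  static inline bool rt_mutex_base_is_locked(struct rt_mutex_base *lock)
  {
          /* READ_ONCE(): the owner word changes locklessly under us. */
          return READ_ONCE(lock->owner) != NULL;
  }
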
diff --git a/patches/0017-mm-slub-check-new-pages-with-restored-irqs.patch b/patches/0017-mm-slub-check-new-pages-with-restored-irqs.patch
index b63ef4e8f914..356f13571bc5 100644
--- a/patches/0017-mm-slub-check-new-pages-with-restored-irqs.patch
+++ b/patches/0017-mm-slub-check-new-pages-with-restored-irqs.patch
@@ -24,7 +24,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (!PageSlab(page)) {
slab_err(s, page, "Not a valid slab page");
return 0;
-@@ -2775,10 +2773,10 @@ static void *___slab_alloc(struct kmem_c
+@@ -2788,10 +2786,10 @@ static void *___slab_alloc(struct kmem_c
lockdep_assert_irqs_disabled();
freelist = get_partial(s, gfpflags, node, &page);
@@ -36,7 +36,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
put_cpu_ptr(s->cpu_slab);
page = new_slab(s, gfpflags, node);
c = get_cpu_ptr(s->cpu_slab);
-@@ -2788,7 +2786,6 @@ static void *___slab_alloc(struct kmem_c
+@@ -2801,7 +2799,6 @@ static void *___slab_alloc(struct kmem_c
return NULL;
}
@@ -44,7 +44,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* No other reference to the page yet so we can
* muck around with it freely without cmpxchg
-@@ -2803,7 +2800,6 @@ static void *___slab_alloc(struct kmem_c
+@@ -2816,7 +2813,6 @@ static void *___slab_alloc(struct kmem_c
if (kmem_cache_debug(s)) {
if (!alloc_debug_processing(s, page, freelist, addr)) {
/* Slab failed checks. Next slab needed */
@@ -52,7 +52,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto new_slab;
} else {
/*
-@@ -2821,6 +2817,7 @@ static void *___slab_alloc(struct kmem_c
+@@ -2834,6 +2830,7 @@ static void *___slab_alloc(struct kmem_c
*/
goto return_single;
@@ -60,7 +60,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (unlikely(c->page))
flush_slab(s, c);
c->page = page;
-@@ -2829,6 +2826,7 @@ static void *___slab_alloc(struct kmem_c
+@@ -2842,6 +2839,7 @@ static void *___slab_alloc(struct kmem_c
return_single:
diff --git a/patches/locking__Add_base_code_for_RT_rw_semaphore_and_rwlock.patch b/patches/0018-locking-rt-Add-base-code-for-RT-rw_semaphore-and-rwl.patch
index da1c716d85b8..706134bfda28 100644
--- a/patches/locking__Add_base_code_for_RT_rw_semaphore_and_rwlock.patch
+++ b/patches/0018-locking-rt-Add-base-code-for-RT-rw_semaphore-and-rwl.patch
@@ -1,13 +1,12 @@
-Subject: locking: Add base code for RT rw_semaphore and rwlock
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:46 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:03 +0200
+Subject: [PATCH 18/72] locking/rt: Add base code for RT rw_semaphore and
+ rwlock
-From: Thomas Gleixner <tglx@linutronix.de>
-
-On PREEMPT_RT rw_semaphores and rwlocks are substituted with a rtmutex and
-a reader count. The implementation is writer unfair as it is not feasible
+On PREEMPT_RT, rw_semaphores and rwlocks are substituted with an rtmutex and
+a reader count. The implementation is writer unfair, as it is not feasible
to do priority inheritance on multiple readers, but experience has shown
-that realtime workloads are not the typical workloads which are sensitive
+that real-time workloads are not the typical workloads which are sensitive
to writer starvation.
The inner workings of rw_semaphores and rwlocks on RT are almost identical
@@ -24,19 +23,22 @@ into the relevant rw_semaphore/rwlock base code and compiled for each use
case separately.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211302.957920571@linutronix.de
---
- include/linux/rwbase_rt.h | 38 ++++++
+ include/linux/rwbase_rt.h | 39 ++++++
kernel/locking/rwbase_rt.c | 263 +++++++++++++++++++++++++++++++++++++++++++++
- 2 files changed, 301 insertions(+)
+ 2 files changed, 302 insertions(+)
create mode 100644 include/linux/rwbase_rt.h
create mode 100644 kernel/locking/rwbase_rt.c
----
+
--- /dev/null
+++ b/include/linux/rwbase_rt.h
-@@ -0,0 +1,38 @@
+@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0-only
-+#ifndef _LINUX_RW_BASE_RT_H
-+#define _LINUX_RW_BASE_RT_H
++#ifndef _LINUX_RWBASE_RT_H
++#define _LINUX_RWBASE_RT_H
+
+#include <linux/rtmutex.h>
+#include <linux/atomic.h>
@@ -71,7 +73,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+{
+ return atomic_read(&rwb->readers) > 0;
+}
-+#endif
++
++#endif /* _LINUX_RWBASE_RT_H */
--- /dev/null
+++ b/kernel/locking/rwbase_rt.c
@@ -0,0 +1,263 @@
@@ -83,17 +86,17 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ * down_write/write_lock()
+ * 1) Lock rtmutex
+ * 2) Remove the reader BIAS to force readers into the slow path
-+ * 3) Wait until all readers have left the critical region
++ * 3) Wait until all readers have left the critical section
+ * 4) Mark it write locked
+ *
+ * up_write/write_unlock()
+ * 1) Remove the write locked marker
-+ * 2) Set the reader BIAS so readers can use the fast path again
-+ * 3) Unlock rtmutex to release blocked readers
++ * 2) Set the reader BIAS, so readers can use the fast path again
++ * 3) Unlock rtmutex, to release blocked readers
+ *
+ * down_read/read_lock()
+ * 1) Try fast path acquisition (reader BIAS is set)
-+ * 2) Take tmutex::wait_lock which protects the writelocked flag
++ * 2) Take tmutex::wait_lock, which protects the writelocked flag
+ * 3) If !writelocked, acquire it for read
+ * 4) If writelocked, block on tmutex
+ * 5) unlock rtmutex, goto 1)
@@ -144,7 +147,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+
+ raw_spin_lock_irq(&rtm->wait_lock);
+ /*
-+ * Allow readers as long as the writer has not completely
++ * Allow readers, as long as the writer has not completely
+ * acquired the semaphore for write.
+ */
+ if (atomic_read(&rwb->readers) != WRITER_BIAS) {
@@ -180,7 +183,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ * rtmutex_lock(m)
+ *
+ * That would put Reader1 behind the writer waiting on
-+ * Reader2 to call up_read() which might be unbound.
++ * Reader2 to call up_read(), which might be unbound.
+ */
+
+ /*
@@ -238,7 +241,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+{
+ /*
+ * rwb->readers can only hit 0 when a writer is waiting for the
-+ * active readers to leave the critical region.
++ * active readers to leave the critical section.
+ */
+ if (unlikely(atomic_dec_and_test(&rwb->readers)))
+ __rwbase_read_unlock(rwb, state);
@@ -293,7 +296,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ */
+ rwbase_set_and_save_current_state(state);
+
-+ /* Block until all readers have left the critical region. */
++ /* Block until all readers have left the critical section. */
+ for (; atomic_read(&rwb->readers);) {
+ /* Optimized out for rwlocks */
+ if (rwbase_signal_pending_state(state, current)) {
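
The reader BIAS scheme laid out in the comment block above boils down to a counter that starts out hugely negative. A reduced sketch of the reader fast path under that assumption (illustrative only, not the patch code):

  #define READER_BIAS     (1U << 31)
  #define WRITER_BIAS     (1U << 30)

  /* rwb->readers starts at READER_BIAS, i.e. negative when read as an int. */
  static __always_inline int rwbase_read_trylock(struct rwbase_rt *rwb)
  {
          int r;

          /* Readers only count up while the writer has not removed the BIAS. */
          for (r = atomic_read(&rwb->readers); r < 0;) {
                  if (likely(atomic_try_cmpxchg_acquire(&rwb->readers, &r, r + 1)))
                          return 1;
          }
          /* BIAS gone: a writer is pending, block on rwb->rtmutex instead. */
          return 0;
  }

The writer does the inverse with the rtmutex held: subtract READER_BIAS so new readers fall into the slow path, wait for ->readers to drain to zero (the loop visible in the hunk above), and only then mark the lock write locked.
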
diff --git a/patches/0018-mm-slub-stop-disabling-irqs-around-get_partial.patch b/patches/0018-mm-slub-stop-disabling-irqs-around-get_partial.patch
index 36de2b10c616..147016573331 100644
--- a/patches/0018-mm-slub-stop-disabling-irqs-around-get_partial.patch
+++ b/patches/0018-mm-slub-stop-disabling-irqs-around-get_partial.patch
@@ -54,7 +54,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
return object;
}
-@@ -2752,8 +2753,10 @@ static void *___slab_alloc(struct kmem_c
+@@ -2765,8 +2766,10 @@ static void *___slab_alloc(struct kmem_c
local_irq_restore(flags);
goto reread_page;
}
@@ -66,7 +66,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
page = c->page = slub_percpu_partial(c);
slub_set_percpu_partial(c, page);
-@@ -2762,18 +2765,9 @@ static void *___slab_alloc(struct kmem_c
+@@ -2775,18 +2778,9 @@ static void *___slab_alloc(struct kmem_c
goto redo;
}
diff --git a/patches/locking_rwsem__Add_rtmutex_based_R_W_semaphore_implementation.patch b/patches/0019-locking-rwsem-Add-rtmutex-based-R-W-semaphore-implem.patch
index 2d9af514bb1b..be9813ae5295 100644
--- a/patches/locking_rwsem__Add_rtmutex_based_R_W_semaphore_implementation.patch
+++ b/patches/0019-locking-rwsem-Add-rtmutex-based-R-W-semaphore-implem.patch
@@ -1,11 +1,10 @@
-Subject: locking/rwsem: Add rtmutex based R/W semaphore implementation
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:47 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:05 +0200
+Subject: [PATCH 19/72] locking/rwsem: Add rtmutex based R/W semaphore
+ implementation
The RT specific R/W semaphore implementation used to restrict the number of
-readers to one because a writer cannot block on multiple readers and
+readers to one, because a writer cannot block on multiple readers and
inherit its priority or budget.
The single reader restricting was painful in various ways:
@@ -16,9 +15,9 @@ The single reader restricting was painful in various ways:
- Progress blocker for drivers which are carefully crafted to avoid the
potential reader/writer deadlock in mainline.
-The analysis of the writer code paths shows, that properly written RT tasks
+The analysis of the writer code paths shows that properly written RT tasks
should not take them. Syscalls like mmap(), file access which take mmap sem
-write locked have unbound latencies which are completely unrelated to mmap
+write locked have unbound latencies, which are completely unrelated to mmap
sem. Other R/W sem users like graphics drivers are not suitable for RT tasks
either.
@@ -33,7 +32,7 @@ done in the following way:
- Readers blocked on a writer inherit their priority/budget in the normal
way.
-There is a drawback with this scheme. R/W semaphores become writer unfair
+There is a drawback with this scheme: R/W semaphores become writer unfair
though the applications which have triggered writer starvation (mostly on
mmap_sem) in the past are not really the typical workloads running on a RT
system. So while it's unlikely to hit writer starvation, it's possible. If
@@ -41,13 +40,14 @@ there are unexpected workloads on RT systems triggering it, the problem
has to be revisited.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: Fix indent fail (Peter Z)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.016885947@linutronix.de
---
include/linux/rwsem.h | 78 ++++++++++++++++++++++++++++++-----
kernel/locking/rwsem.c | 108 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 176 insertions(+), 10 deletions(-)
----
+
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -16,6 +16,19 @@
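
On PREEMPT_RT the rw_semaphore then collapses into a thin wrapper around the rwbase_rt introduced in the previous patch, with the lock/unlock entry points forwarding into the rwbase read/write helpers. A hedged sketch of the type, reconstructed from the description:

  struct rw_semaphore {
          struct rwbase_rt        rwbase;
  #ifdef CONFIG_DEBUG_LOCK_ALLOC
          struct lockdep_map      dep_map;
  #endif
  };

down_read()/down_write() become might_sleep() plus the lockdep annotation plus the rwbase lock operation on ->rwbase; the counter-based fast path of the !RT implementation is not used on RT.
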
diff --git a/patches/0019-mm-slub-move-reset-of-c-page-and-freelist-out-of-dea.patch b/patches/0019-mm-slub-move-reset-of-c-page-and-freelist-out-of-dea.patch
index 791c9e50d519..a7beb833a7c1 100644
--- a/patches/0019-mm-slub-move-reset-of-c-page-and-freelist-out-of-dea.patch
+++ b/patches/0019-mm-slub-move-reset-of-c-page-and-freelist-out-of-dea.patch
@@ -67,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/*
-@@ -2742,7 +2748,10 @@ static void *___slab_alloc(struct kmem_c
+@@ -2755,7 +2761,10 @@ static void *___slab_alloc(struct kmem_c
local_irq_restore(flags);
goto reread_page;
}
@@ -79,7 +79,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
local_irq_restore(flags);
new_slab:
-@@ -2821,11 +2830,7 @@ static void *___slab_alloc(struct kmem_c
+@@ -2834,11 +2843,7 @@ static void *___slab_alloc(struct kmem_c
return_single:
local_irq_save(flags);
diff --git a/patches/locking_rtmutex__Add_wake_state_to_rt_mutex_waiter.patch b/patches/0020-locking-rtmutex-Add-wake_state-to-rt_mutex_waiter.patch
index 44bf5dc1bc5b..6033cbbce20b 100644
--- a/patches/locking_rtmutex__Add_wake_state_to_rt_mutex_waiter.patch
+++ b/patches/0020-locking-rtmutex-Add-wake_state-to-rt_mutex_waiter.patch
@@ -1,8 +1,6 @@
-Subject: locking/rtmutex: Add wake_state to rt_mutex_waiter
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:47 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:06 +0200
+Subject: [PATCH 20/72] locking/rtmutex: Add wake_state to rt_mutex_waiter
Regular sleeping locks like mutexes, rtmutexes and rw_semaphores are always
entering and leaving a blocking section with task state == TASK_RUNNING.
@@ -10,9 +8,9 @@ entering and leaving a blocking section with task state == TASK_RUNNING.
On a non-RT kernel spinlocks and rwlocks never affect the task state, but
on RT kernels these locks are converted to rtmutex based 'sleeping' locks.
-So in case of contention the task goes to block which requires to carefully
-preserve the task state and restore it after acquiring the lock taking
-regular wakeups for the task into account which happened while the task was
+So in case of contention the task goes to block, which requires to carefully
+preserve the task state, and restore it after acquiring the lock taking
+regular wakeups for the task into account, which happened while the task was
blocked. This state preserving is achieved by having a separate task state
for blocking on a RT spin/rwlock and a saved_state field in task_struct
along with careful handling of these wakeup scenarios in try_to_wake_up().
@@ -22,13 +20,14 @@ to be used for waking a lock waiter in rt_mutex_waiter which allows to
handle the regular and RT spin/rwlocks by handing it to wake_up_state().
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: Use unsigned int for wake_state (Peter Z.)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.079800739@linutronix.de
---
kernel/locking/rtmutex.c | 2 +-
kernel/locking/rtmutex_common.h | 9 +++++++++
2 files changed, 10 insertions(+), 1 deletion(-)
----
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -692,7 +692,7 @@ static int __sched rt_mutex_adjust_prio_
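
Concretely, the waiter grows a field recording which task state the eventual wakeup has to target, so the wakeup side can treat regular sleeping locks and RT spin/rwlocks uniformly. A hedged sketch, reconstructed from the changelog (the helper name below is illustrative, not the upstream one):

  struct rt_mutex_waiter {
          /* ... existing fields: rbtree nodes, task, lock, prio, deadline ... */
          unsigned int            wake_state;     /* TASK_NORMAL or TASK_RTLOCK_WAIT */
  };

  /* Waking the next waiter becomes state aware: */
  static inline void rt_mutex_wake_up_waiter(struct rt_mutex_waiter *w)
  {
          wake_up_state(w->task, w->wake_state);
  }
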
diff --git a/patches/locking_rtmutex__Provide_rt_mutex_wake_q_and_helpers.patch b/patches/0021-locking-rtmutex-Provide-rt_wake_q_head-and-helpers.patch
index b650c32a0968..e6462613c225 100644
--- a/patches/locking_rtmutex__Provide_rt_mutex_wake_q_and_helpers.patch
+++ b/patches/0021-locking-rtmutex-Provide-rt_wake_q_head-and-helpers.patch
@@ -1,27 +1,26 @@
-Subject: locking/rtmutex: Provide rt_wake_q and helpers
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:47 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:08 +0200
+Subject: [PATCH 21/72] locking/rtmutex: Provide rt_wake_q_head and helpers
-From: Thomas Gleixner <tglx@linutronix.de>
-
-To handle the difference of wakeups for regular sleeping locks (mutex,
+To handle the difference between wakeups for regular sleeping locks (mutex,
rtmutex, rw_semaphore) and the wakeups for 'sleeping' spin/rwlocks on
PREEMPT_RT enabled kernels correctly, it is required to provide a
-wake_q construct which allows to keep them separate.
+wake_q_head construct which allows to keep them separate.
-Provide a wrapper around wake_q and the required helpers, which will be
+Provide a wrapper around wake_q_head and the required helpers, which will be
extended with the state handling later.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: Rename according to PeterZ
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.139337655@linutronix.de
---
kernel/locking/rtmutex.c | 15 +++++++++++++++
kernel/locking/rtmutex_common.h | 14 ++++++++++++++
2 files changed, 29 insertions(+)
----
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -347,6 +347,21 @@ static __always_inline void rt_mutex_adj
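
The wrapper is deliberately boring at this stage: it embeds a wake_q_head and gives the rtmutex code its own define/wake helpers so the later patches can bolt the RT-specific state handling onto it. A hedged sketch, reconstructed from the description (the macro and function names are my reconstruction):

  struct rt_wake_q_head {
          struct wake_q_head      head;
  };

  #define DEFINE_RT_WAKE_Q(name)                                          \
          struct rt_wake_q_head name = {                                  \
                  .head = WAKE_Q_HEAD_INITIALIZER(name.head),             \
          }

  static __always_inline void rt_mutex_wake_up_q(struct rt_wake_q_head *wqh)
  {
          wake_up_q(&wqh->head);  /* state handling arrives later in the series */
  }
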
diff --git a/patches/0021-mm-slub-call-deactivate_slab-without-disabling-irqs.patch b/patches/0021-mm-slub-call-deactivate_slab-without-disabling-irqs.patch
index f1506c91c968..576b90ba7e59 100644
--- a/patches/0021-mm-slub-call-deactivate_slab-without-disabling-irqs.patch
+++ b/patches/0021-mm-slub-call-deactivate_slab-without-disabling-irqs.patch
@@ -23,7 +23,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/mm/slub.c
+++ b/mm/slub.c
-@@ -2752,8 +2752,8 @@ static void *___slab_alloc(struct kmem_c
+@@ -2765,8 +2765,8 @@ static void *___slab_alloc(struct kmem_c
freelist = c->freelist;
c->page = NULL;
c->freelist = NULL;
@@ -33,7 +33,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
new_slab:
-@@ -2821,18 +2821,32 @@ static void *___slab_alloc(struct kmem_c
+@@ -2834,18 +2834,32 @@ static void *___slab_alloc(struct kmem_c
*/
goto return_single;
diff --git a/patches/locking_rtmutex__Use_rt_mutex_wake_q_head.patch b/patches/0022-locking-rtmutex-Use-rt_mutex_wake_q_head.patch
index 19571952220a..4efd0e62df3d 100644
--- a/patches/locking_rtmutex__Use_rt_mutex_wake_q_head.patch
+++ b/patches/0022-locking-rtmutex-Use-rt_mutex_wake_q_head.patch
@@ -1,8 +1,6 @@
-Subject: locking/rtmutex: Use rt_mutex_wake_q_head
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:47 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:09 +0200
+Subject: [PATCH 22/72] locking/rtmutex: Use rt_mutex_wake_q_head
Prepare for the required state aware handling of waiter wakeups via wake_q
and switch the rtmutex code over to the rtmutex specific wrapper.
@@ -10,15 +8,16 @@ and switch the rtmutex code over to the rtmutex specific wrapper.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: Adopt to rename
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.197113263@linutronix.de
---
kernel/futex.c | 8 ++++----
kernel/locking/rtmutex.c | 12 ++++++------
kernel/locking/rtmutex_api.c | 19 ++++++++-----------
kernel/locking/rtmutex_common.h | 4 ++--
4 files changed, 20 insertions(+), 23 deletions(-)
----
+
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1493,11 +1493,11 @@ static void mark_wake_futex(struct wake_
diff --git a/patches/locking_rtmutex__Prepare_RT_rt_mutex_wake_q_for_RT_locks.patch b/patches/0023-locking-rtmutex-Prepare-RT-rt_mutex_wake_q-for-RT-lo.patch
index 45ea7f6f25bc..ef4ef05c4bff 100644
--- a/patches/locking_rtmutex__Prepare_RT_rt_mutex_wake_q_for_RT_locks.patch
+++ b/patches/0023-locking-rtmutex-Prepare-RT-rt_mutex_wake_q-for-RT-lo.patch
@@ -1,40 +1,30 @@
-Subject: locking/rtmutex: Prepare RT rt_mutex_wake_q for RT locks
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:47 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:11 +0200
+Subject: [PATCH 23/72] locking/rtmutex: Prepare RT rt_mutex_wake_q for RT
+ locks
-From: Thomas Gleixner <tglx@linutronix.de>
-
-Add a rtlock_task pointer to rt_mutex_wake_q which allows to handle the RT
+Add an rtlock_task pointer to rt_mutex_wake_q, which allows to handle the RT
specific wakeup for spin/rwlock waiters. The pointer is just consuming 4/8
-bytes on stack so it is provided unconditionaly to avoid #ifdeffery all
+bytes on the stack so it is provided unconditionaly to avoid #ifdeffery all
over the place.
-This cannot use a wake_q because a task can have concurrent wakeups which
-would make it miss either lock or the regular wakeup depending on what gets
-queued first unless task struct gains a separate wake_q_node for this which
-would be overkill because there can only be a single task which gets woken
+This cannot use a regular wake_q, because a task can have concurrent wakeups which
+would make it miss either lock or the regular wakeups, depending on what gets
+queued first, unless task struct gains a separate wake_q_node for this, which
+would be overkill, because there can only be a single task which gets woken
up in the spin/rw_lock unlock path.
No functional change for non-RT enabled kernels.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.253614678@linutronix.de
---
-V3: Switch back to the working version (Mike)
-V2: Make it symmetric (PeterZ)
----
- include/linux/sched/wake_q.h | 1 -
kernel/locking/rtmutex.c | 18 ++++++++++++++++--
kernel/locking/rtmutex_common.h | 5 ++++-
- 3 files changed, 20 insertions(+), 4 deletions(-)
----
---- a/include/linux/sched/wake_q.h
-+++ b/include/linux/sched/wake_q.h
-@@ -62,5 +62,4 @@ static inline bool wake_q_empty(struct w
- extern void wake_q_add(struct wake_q_head *head, struct task_struct *task);
- extern void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task);
- extern void wake_up_q(struct wake_q_head *head);
--
- #endif /* _LINUX_SCHED_WAKE_Q_H */
+ 2 files changed, 20 insertions(+), 3 deletions(-)
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -351,12 +351,26 @@ static __always_inline void rt_mutex_adj
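
Put differently: a task already queued on a wake_q for a regular wakeup cannot also be queued for the lock wakeup, so the single possible spin/rwlock waiter is parked in its own pointer and woken with the RT lock wait state. A hedged sketch of the two helpers, extending the wake path sketched two patches earlier and reconstructed from the changelog (details may differ from the real code):

  static __always_inline void rt_mutex_wake_q_add(struct rt_wake_q_head *wqh,
                                                  struct rt_mutex_waiter *w)
  {
          if (IS_ENABLED(CONFIG_PREEMPT_RT) && w->wake_state != TASK_NORMAL) {
                  /* RT spin/rwlock waiter: park it, there can only be one. */
                  get_task_struct(w->task);
                  wqh->rtlock_task = w->task;
          } else {
                  wake_q_add(&wqh->head, w->task);
          }
  }

  static __always_inline void rt_mutex_wake_up_q(struct rt_wake_q_head *wqh)
  {
          if (IS_ENABLED(CONFIG_PREEMPT_RT) && wqh->rtlock_task) {
                  /* Wake with the lock wait state so regular wakeups survive. */
                  wake_up_state(wqh->rtlock_task, TASK_RTLOCK_WAIT);
                  put_task_struct(wqh->rtlock_task);
                  wqh->rtlock_task = NULL;
          }

          if (!wake_q_empty(&wqh->head))
                  wake_up_q(&wqh->head);
  }
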
diff --git a/patches/locking_rtmutex__Guard_regular_sleeping_locks_specific_functions.patch b/patches/0024-locking-rtmutex-Guard-regular-sleeping-locks-specifi.patch
index ba8bfba3544b..af81502aa67b 100644
--- a/patches/locking_rtmutex__Guard_regular_sleeping_locks_specific_functions.patch
+++ b/patches/0024-locking-rtmutex-Guard-regular-sleeping-locks-specifi.patch
@@ -1,10 +1,9 @@
-Subject: locking/rtmutex: Guard regular sleeping locks specific functions
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:47 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:12 +0200
+Subject: [PATCH 24/72] locking/rtmutex: Guard regular sleeping locks specific
+ functions
-From: Thomas Gleixner <tglx@linutronix.de>
-
-Guard the regular sleeping lock specific functionality which is used for
+Guard the regular sleeping lock specific functionality, which is used for
rtmutex on non-RT enabled kernels and for mutex, rtmutex and semaphores on
RT enabled kernels so the code can be reused for the RT specific
implementation of spinlocks and rwlocks in a different compilation unit.
@@ -12,12 +11,15 @@ implementation of spinlocks and rwlocks in a different compilation unit.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.311535693@linutronix.de
---
kernel/locking/rtmutex.c | 254 ++++++++++++++++++++++---------------------
kernel/locking/rtmutex_api.c | 1
kernel/locking/rwsem.c | 1
3 files changed, 133 insertions(+), 123 deletions(-)
----
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1075,10 +1075,139 @@ static void __sched mark_wakeup_next_wai
diff --git a/patches/locking_spinlock__Split_the_lock_types_header.patch b/patches/0025-locking-spinlock-Split-the-lock-types-header-and-mov.patch
index 41b90fe4bf6b..08602984d97d 100644
--- a/patches/locking_spinlock__Split_the_lock_types_header.patch
+++ b/patches/0025-locking-spinlock-Split-the-lock-types-header-and-mov.patch
@@ -1,18 +1,18 @@
-Subject: locking/spinlock: Split the lock types header
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:48 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:14 +0200
+Subject: [PATCH 25/72] locking/spinlock: Split the lock types header, and move
+ the raw types into <linux/spinlock_types_raw.h>
-From: Thomas Gleixner <tglx@linutronix.de>
-
-Move raw_spinlock into its own file. Prepare for RT 'sleeping spinlocks' to
-avoid header recursion as RT locks require rtmutex.h which in turn requires
+Move raw_spinlock into its own file. Prepare for RT 'sleeping spinlocks', to
+avoid header recursion, as RT locks require rtmutex.h, which in turn requires
the raw spinlock types.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V3: Remove the duplicate defines
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.371269088@linutronix.de
---
include/linux/rwlock_types.h | 4 ++
include/linux/spinlock.h | 4 ++
@@ -20,7 +20,7 @@ V3: Remove the duplicate defines
include/linux/spinlock_types_raw.h | 65 +++++++++++++++++++++++++++++++++++++
4 files changed, 74 insertions(+), 58 deletions(-)
create mode 100644 include/linux/spinlock_types_raw.h
----
+
--- a/include/linux/rwlock_types.h
+++ b/include/linux/rwlock_types.h
@@ -1,6 +1,10 @@
@@ -189,4 +189,4 @@ V3: Remove the duplicate defines
+
+#define DEFINE_RAW_SPINLOCK(x) raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
+
-+#endif
++#endif /* __LINUX_SPINLOCK_TYPES_RAW_H */
diff --git a/patches/locking_rtmutex__Prevent_future_include_recursion_hell.patch b/patches/0026-locking-rtmutex-Prevent-future-include-recursion-hel.patch
index 928987affc0b..12a68ec20a52 100644
--- a/patches/locking_rtmutex__Prevent_future_include_recursion_hell.patch
+++ b/patches/0026-locking-rtmutex-Prevent-future-include-recursion-hel.patch
@@ -1,23 +1,24 @@
-Subject: locking/rtmutex: Prevent future include recursion hell
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Tue Jul 6 16:36:48 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:16 +0200
+Subject: [PATCH 26/72] locking/rtmutex: Prevent future include recursion hell
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
-rtmutex only needs raw_spinlock_t, but it includes spinlock_types.h which
+rtmutex only needs raw_spinlock_t, but it includes spinlock_types.h, which
is not a problem on an non RT enabled kernel.
-RT kernels substitute regular spinlocks with 'sleeping' spinlocks which
-are based on rtmutexes and therefore must be able to include rtmutex.h.
+RT kernels substitute regular spinlocks with 'sleeping' spinlocks, which
+are based on rtmutexes, and therefore must be able to include rtmutex.h.
-Include spinlock_types_raw.h instead.
+Include <linux/spinlock_types_raw.h> instead.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.428224188@linutronix.de
---
include/linux/rtmutex.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
----
+
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -16,7 +16,7 @@
diff --git a/patches/locking_lockdep__Reduce_includes_in_debug_locks.h.patch b/patches/0027-locking-lockdep-Reduce-header-dependencies-in-linux-.patch
index f1c60389eaa4..009ccd84cd38 100644
--- a/patches/locking_lockdep__Reduce_includes_in_debug_locks.h.patch
+++ b/patches/0027-locking-lockdep-Reduce-header-dependencies-in-linux-.patch
@@ -1,8 +1,7 @@
-Subject: locking/lockdep: Reduce includes in debug_locks.h
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Tue Jul 6 16:36:48 2021 +0200
-
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:17 +0200
+Subject: [PATCH 27/72] locking/lockdep: Reduce header dependencies in
+ <linux/debug_locks.h>
The inclusion of printk.h leads to a circular dependency if spinlock_t is
based on rtmutexes on RT enabled kernels.
@@ -12,10 +11,13 @@ what debug_locks.h requires.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.484161136@linutronix.de
---
include/linux/debug_locks.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
----
+
--- a/include/linux/debug_locks.h
+++ b/include/linux/debug_locks.h
@@ -3,8 +3,7 @@
diff --git a/patches/rbtree__Split_out_the_rbtree_type_definitions.patch b/patches/0028-rbtree-Split-out-the-rbtree-type-definitions-into-li.patch
index 5be8cbbfea8e..47292ca7cbea 100644
--- a/patches/rbtree__Split_out_the_rbtree_type_definitions.patch
+++ b/patches/0028-rbtree-Split-out-the-rbtree-type-definitions-into-li.patch
@@ -1,33 +1,41 @@
-Subject: rbtree: Split out the rbtree type definitions
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Tue Jul 6 16:36:48 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:19 +0200
+Subject: [PATCH 28/72] rbtree: Split out the rbtree type definitions into
+ <linux/rbtree_types.h>
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+So we have this header dependency problem on RT:
+
+ - <linux/rtmutex.h> needs the definition of 'struct rb_root_cached'.
+ - <linux/rbtree.h> includes <linux/kernel.h>, which includes <linux/spinlock.h>.
-rtmutex.h needs the definition of struct rb_root_cached. rbtree.h includes
-kernel.h which includes spinlock.h. That works nicely for non-RT enabled
-kernels, but on RT enabled kernels spinlocks are based on rtmutexes which
-creates another circular header dependency as spinlocks.h will require
-rtmutex.h.
+That works nicely for non-RT enabled kernels, but on RT enabled kernels
+spinlocks are based on rtmutexes, which creates another circular header
+dependency, as <linux/spinlocks.h> will require <linux/rtmutex.h>.
Split out the type definitions and move them into their own header file so
the rtmutex header can include just those.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.542123501@linutronix.de
---
- include/linux/rbtree.h | 30 +-----------------------------
+ include/linux/rbtree.h | 31 ++-----------------------------
include/linux/rbtree_types.h | 34 ++++++++++++++++++++++++++++++++++
- 2 files changed, 35 insertions(+), 29 deletions(-)
+ 2 files changed, 36 insertions(+), 29 deletions(-)
create mode 100644 include/linux/rbtree_types.h
----
+
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
-@@ -19,22 +19,11 @@
+@@ -17,24 +17,14 @@
+ #ifndef _LINUX_RBTREE_H
+ #define _LINUX_RBTREE_H
++#include <linux/rbtree_types.h>
++
#include <linux/kernel.h>
#include <linux/stddef.h>
-+#include <linux/rbtree_types.h>
#include <linux/rcupdate.h>
-struct rb_node {
@@ -47,7 +55,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#define rb_entry(ptr, type, member) container_of(ptr, type, member)
#define RB_EMPTY_ROOT(root) (READ_ONCE((root)->rb_node) == NULL)
-@@ -112,23 +101,6 @@ static inline void rb_link_node_rcu(stru
+@@ -112,23 +102,6 @@ static inline void rb_link_node_rcu(stru
typeof(*pos), field); 1; }); \
pos = n)
diff --git a/patches/0029-locking-rtmutex-Reduce-linux-rtmutex.h-header-depend.patch b/patches/0029-locking-rtmutex-Reduce-linux-rtmutex.h-header-depend.patch
new file mode 100644
index 000000000000..f8aca74f3d74
--- /dev/null
+++ b/patches/0029-locking-rtmutex-Reduce-linux-rtmutex.h-header-depend.patch
@@ -0,0 +1,36 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:20 +0200
+Subject: [PATCH 29/72] locking/rtmutex: Reduce <linux/rtmutex.h> header
+ dependencies, only include <linux/rbtree_types.h>
+
+We have the following header dependency problem on RT:
+
+ - <linux/rtmutex.h> needs the definition of 'struct rb_root_cached'.
+ - <linux/rbtree.h> includes <linux/kernel.h>, which includes <linux/spinlock.h>
+
+That works nicely for non-RT enabled kernels, but on RT enabled kernels
+spinlocks are based on rtmutexes, which creates another circular header
+dependency as <linux/spinlocks.h> will require <linux/rtmutex.h>.
+
+Include <linux/rbtree_types.h> instead.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.598003167@linutronix.de
+---
+ include/linux/rtmutex.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/include/linux/rtmutex.h
++++ b/include/linux/rtmutex.h
+@@ -15,7 +15,7 @@
+
+ #include <linux/compiler.h>
+ #include <linux/linkage.h>
+-#include <linux/rbtree.h>
++#include <linux/rbtree_types.h>
+ #include <linux/spinlock_types_raw.h>
+
+ extern int max_lock_depth; /* for sysctl */
diff --git a/patches/0029-mm-slub-Move-flush_cpu_slab-invocations-__free_slab-.patch b/patches/0029-mm-slub-Move-flush_cpu_slab-invocations-__free_slab-.patch
index 0907b8849131..641577dcc7e7 100644
--- a/patches/0029-mm-slub-Move-flush_cpu_slab-invocations-__free_slab-.patch
+++ b/patches/0029-mm-slub-Move-flush_cpu_slab-invocations-__free_slab-.patch
@@ -127,7 +127,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/*
-@@ -4074,7 +4120,7 @@ int __kmem_cache_shutdown(struct kmem_ca
+@@ -4087,7 +4133,7 @@ int __kmem_cache_shutdown(struct kmem_ca
int node;
struct kmem_cache_node *n;
@@ -136,16 +136,16 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* Attempt to free all objects */
for_each_kmem_cache_node(s, node, n) {
free_partial(s, n);
-@@ -4350,7 +4396,7 @@ EXPORT_SYMBOL(kfree);
+@@ -4363,7 +4409,7 @@ EXPORT_SYMBOL(kfree);
* being allocated from last increasing the chance that the last objects
* are freed in them.
*/
-int __kmem_cache_shrink(struct kmem_cache *s)
-+int __kmem_cache_do_shrink(struct kmem_cache *s)
++static int __kmem_cache_do_shrink(struct kmem_cache *s)
{
int node;
int i;
-@@ -4362,7 +4408,6 @@ int __kmem_cache_shrink(struct kmem_cach
+@@ -4375,7 +4421,6 @@ int __kmem_cache_shrink(struct kmem_cach
unsigned long flags;
int ret = 0;
@@ -153,7 +153,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
for_each_kmem_cache_node(s, node, n) {
INIT_LIST_HEAD(&discard);
for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
-@@ -4412,13 +4457,21 @@ int __kmem_cache_shrink(struct kmem_cach
+@@ -4425,13 +4470,21 @@ int __kmem_cache_shrink(struct kmem_cach
return ret;
}
diff --git a/patches/locking_spinlock__Provide_RT_specific_spinlock_type.patch b/patches/0030-locking-spinlock-Provide-RT-specific-spinlock_t.patch
index f1ad420500f5..5a91cc70eee6 100644
--- a/patches/locking_spinlock__Provide_RT_specific_spinlock_type.patch
+++ b/patches/0030-locking-spinlock-Provide-RT-specific-spinlock_t.patch
@@ -1,19 +1,20 @@
-Subject: locking/spinlock: Provide RT specific spinlock type
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:49 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:22 +0200
+Subject: [PATCH 30/72] locking/spinlock: Provide RT specific spinlock_t
-From: Thomas Gleixner <tglx@linutronix.de>
-
-RT replaces spinlocks with a simple wrapper around a rtmutex which turns
+RT replaces spinlocks with a simple wrapper around an rtmutex, which turns
spinlocks on RT into 'sleeping' spinlocks. The actual implementation of the
-spinlock API differs from a regular rtmutex as it does neither handle
+spinlock API differs from a regular rtmutex, as it does neither handle
timeouts nor signals and it is state preserving across the lock operation.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.654230709@linutronix.de
---
include/linux/spinlock_types.h | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
----
+
--- a/include/linux/spinlock_types.h
+++ b/include/linux/spinlock_types.h
@@ -11,6 +11,9 @@
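
For reference, the RT spinlock_t added to spinlock_types.h is just the rt_mutex_base from earlier in the series plus an optional lockdep map. A hedged sketch, reconstructed from the description:

  #ifdef CONFIG_PREEMPT_RT

  typedef struct spinlock {
          struct rt_mutex_base    lock;
  #ifdef CONFIG_DEBUG_LOCK_ALLOC
          struct lockdep_map      dep_map;
  #endif
  } spinlock_t;

  #endif /* CONFIG_PREEMPT_RT */

The API built on top of it later in the series takes no timeouts, ignores signals and preserves the task state across the lock operation, exactly as the changelog states.
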
diff --git a/patches/locking_spinlock__Provide_RT_variant_header.patch b/patches/0031-locking-spinlock-Provide-RT-variant-header-linux-spi.patch
index 11b67bd7eb11..ebb3acb58ef7 100644
--- a/patches/locking_spinlock__Provide_RT_variant_header.patch
+++ b/patches/0031-locking-spinlock-Provide-RT-variant-header-linux-spi.patch
@@ -1,22 +1,22 @@
-Subject: locking/spinlock: Provide RT variant header
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:49 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:23 +0200
+Subject: [PATCH 31/72] locking/spinlock: Provide RT variant header:
+ <linux/spinlock_rt.h>
Provide the necessary wrappers around the actual rtmutex based spinlock
implementation.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V4: spin_unlock() -> rt_spin_unlock() (Peter)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.712897671@linutronix.de
---
include/linux/spinlock.h | 11 ++
include/linux/spinlock_api_smp.h | 3
include/linux/spinlock_rt.h | 149 +++++++++++++++++++++++++++++++++++++++
3 files changed, 162 insertions(+), 1 deletion(-)
create mode 100644 include/linux/spinlock_rt.h
----
+
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -312,8 +312,10 @@ static inline void do_raw_spin_unlock(ra
@@ -35,7 +35,7 @@ V4: spin_unlock() -> rt_spin_unlock() (Peter)
# include <linux/spinlock_api_up.h>
#endif
-+/* Non PREEMPT_RT kernel map to raw spinlocks */
++/* Non PREEMPT_RT kernel, map to raw spinlocks: */
+#ifndef CONFIG_PREEMPT_RT
+
/*
@@ -58,7 +58,7 @@ V4: spin_unlock() -> rt_spin_unlock() (Peter)
return 0;
}
-+/* PREEMPT_RT has it's own rwlock implementation */
++/* PREEMPT_RT has its own rwlock implementation */
+#ifndef CONFIG_PREEMPT_RT
#include <linux/rwlock_api_smp.h>
+#endif
diff --git a/patches/0031-mm-slub-optionally-save-restore-irqs-in-slab_-un-loc.patch b/patches/0031-mm-slub-optionally-save-restore-irqs-in-slab_-un-loc.patch
index 20c099cc8277..e80e03924e66 100644
--- a/patches/0031-mm-slub-optionally-save-restore-irqs-in-slab_-un-loc.patch
+++ b/patches/0031-mm-slub-optionally-save-restore-irqs-in-slab_-un-loc.patch
@@ -107,7 +107,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
spin_unlock_irqrestore(&n->list_lock, flags);
if (!ret)
slab_fix(s, "Object at 0x%p not freed", object);
-@@ -4057,9 +4069,10 @@ static void list_slab_objects(struct kme
+@@ -4070,9 +4082,10 @@ static void list_slab_objects(struct kme
void *addr = page_address(page);
unsigned long *map;
void *p;
@@ -119,7 +119,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
map = get_map(s, page);
for_each_object(p, s, addr, page->objects) {
-@@ -4070,7 +4083,7 @@ static void list_slab_objects(struct kme
+@@ -4083,7 +4096,7 @@ static void list_slab_objects(struct kme
}
}
put_map(map);
@@ -128,7 +128,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
#endif
}
-@@ -4802,8 +4815,9 @@ static void validate_slab(struct kmem_ca
+@@ -4815,8 +4828,9 @@ static void validate_slab(struct kmem_ca
{
void *p;
void *addr = page_address(page);
@@ -139,7 +139,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (!check_slab(s, page) || !on_freelist(s, page, NULL))
goto unlock;
-@@ -4818,7 +4832,7 @@ static void validate_slab(struct kmem_ca
+@@ -4831,7 +4845,7 @@ static void validate_slab(struct kmem_ca
break;
}
unlock:
diff --git a/patches/locking_rtmutex__Provide_the_spin_rwlock_core_lock_function.patch b/patches/0032-locking-rtmutex-Provide-the-spin-rwlock-core-lock-fu.patch
index b52adec54680..691203d008eb 100644
--- a/patches/locking_rtmutex__Provide_the_spin_rwlock_core_lock_function.patch
+++ b/patches/0032-locking-rtmutex-Provide-the-spin-rwlock-core-lock-fu.patch
@@ -1,19 +1,21 @@
-Subject: locking/rtmutex: Provide the spin/rwlock core lock function
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:49 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:25 +0200
+Subject: [PATCH 32/72] locking/rtmutex: Provide the spin/rwlock core lock
+ function
-From: Thomas Gleixner <tglx@linutronix.de>
-
-A simplified version of the rtmutex slowlock function which neither handles
-signals nor timeouts and is careful about preserving the state of the
+A simplified version of the rtmutex slowlock function, which neither handles
+signals nor timeouts, and is careful about preserving the state of the
blocked task across the lock operation.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.770228446@linutronix.de
---
kernel/locking/rtmutex.c | 60 ++++++++++++++++++++++++++++++++++++++++
kernel/locking/rtmutex_common.h | 2 -
2 files changed, 61 insertions(+), 1 deletion(-)
----
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1416,3 +1416,63 @@ static __always_inline int __rt_mutex_lo
@@ -28,7 +30,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+
+/**
+ * rtlock_slowlock_locked - Slow path lock acquisition for RT locks
-+ * @lock: The underlying rt mutex
++ * @lock: The underlying RT mutex
+ */
+static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
+{
@@ -47,7 +49,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ task_blocks_on_rt_mutex(lock, &waiter, current, RT_MUTEX_MIN_CHAINWALK);
+
+ for (;;) {
-+ /* Try to acquire the lock again. */
++ /* Try to acquire the lock again */
+ if (try_to_take_rt_mutex(lock, current, &waiter))
+ break;
+
@@ -63,8 +65,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ current_restore_rtlock_saved_state();
+
+ /*
-+ * try_to_take_rt_mutex() sets the waiter bit unconditionally. We
-+ * might have to fix that up:
++ * try_to_take_rt_mutex() sets the waiter bit unconditionally.
++ * We might have to fix that up:
+ */
+ fixup_rt_mutex_waiters(lock);
+ debug_rt_mutex_free_waiter(&waiter);
diff --git a/patches/locking_spinlock__Provide_RT_variant.patch b/patches/0033-locking-spinlock-Provide-RT-variant.patch
index 8e1569a0a0b0..a24c4e6331c8 100644
--- a/patches/locking_spinlock__Provide_RT_variant.patch
+++ b/patches/0033-locking-spinlock-Provide-RT-variant.patch
@@ -1,21 +1,20 @@
-Subject: locking/spinlock: Provide RT variant
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:49 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:27 +0200
+Subject: [PATCH 33/72] locking/spinlock: Provide RT variant
Provide the actual locking functions which make use of the general and
spinlock specific rtmutex code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V5: Add missing might_sleep() and fix lockdep init (Sebastian)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.826621464@linutronix.de
---
kernel/locking/Makefile | 1
kernel/locking/spinlock_rt.c | 129 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 130 insertions(+)
create mode 100644 kernel/locking/spinlock_rt.c
----
+
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -25,6 +25,7 @@ obj-$(CONFIG_LOCK_SPIN_ON_OWNER) += osq_
@@ -33,16 +32,16 @@ V5: Add missing might_sleep() and fix lockdep init (Sebastian)
+/*
+ * PREEMPT_RT substitution for spin/rw_locks
+ *
-+ * spin_lock and rw_lock on RT are based on rtmutex with a few twists to
-+ * resemble the non RT semantics
++ * spinlocks and rwlocks on RT are based on rtmutexes, with a few twists to
++ * resemble the non RT semantics:
+ *
-+ * - Contrary to a plain rtmutex, spin_lock and rw_lock are state
++ * - Contrary to plain rtmutexes, spinlocks and rwlocks are state
+ * preserving. The task state is saved before blocking on the underlying
-+ * rtmutex and restored when the lock has been acquired. Regular wakeups
++ * rtmutex, and restored when the lock has been acquired. Regular wakeups
+ * during that time are redirected to the saved state so no wake up is
+ * missed.
+ *
-+ * - Non RT spin/rw_locks disable preemption and eventually interrupts.
++ * - Non RT spin/rwlocks disable preemption and eventually interrupts.
+ * Disabling preemption has the side effect of disabling migration and
+ * preventing RCU grace periods.
+ *
diff --git a/patches/locking_rwlock__Provide_RT_variant.patch b/patches/0034-locking-rwlock-Provide-RT-variant.patch
index 66ce98bc62c9..41b69b692901 100644
--- a/patches/locking_rwlock__Provide_RT_variant.patch
+++ b/patches/0034-locking-rwlock-Provide-RT-variant.patch
@@ -1,18 +1,17 @@
-Subject: locking/rwlock: Provide RT variant
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:49 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:28 +0200
+Subject: [PATCH 34/72] locking/rwlock: Provide RT variant
-From: Thomas Gleixner <tglx@linutronix.de>
-
-Similar to rw_semaphores on RT the rwlock substitution is not writer fair
-because it's not feasible to have a writer inherit it's priority to
+Similar to rw_semaphores, on RT the rwlock substitution is not writer fair,
+because it's not feasible to have a writer inherit its priority to
multiple readers. Readers blocked on a writer follow the normal rules of
-priority inheritance. Like RT spinlocks RT rwlocks are state preserving
+priority inheritance. Like RT spinlocks, RT rwlocks are state preserving
across the slow lock operations (contended case).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V5: Add missing might_sleep() and fix lockdep init (Sebastian)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.882793524@linutronix.de
---
include/linux/rwlock_rt.h | 140 ++++++++++++++++++++++++++++++++++++++++
include/linux/rwlock_types.h | 49 ++++++++++----
@@ -23,7 +22,7 @@ V5: Add missing might_sleep() and fix lockdep init (Sebastian)
kernel/locking/spinlock_rt.c | 131 +++++++++++++++++++++++++++++++++++++
7 files changed, 323 insertions(+), 13 deletions(-)
create mode 100644 include/linux/rwlock_rt.h
----
+
--- /dev/null
+++ b/include/linux/rwlock_rt.h
@@ -0,0 +1,140 @@
@@ -32,7 +31,7 @@ V5: Add missing might_sleep() and fix lockdep init (Sebastian)
+#define __LINUX_RWLOCK_RT_H
+
+#ifndef __LINUX_SPINLOCK_RT_H
-+#error Do not include directly. Use spinlock.h
++#error Do not #include directly. Use <linux/spinlock.h>.
+#endif
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
@@ -49,7 +48,7 @@ V5: Add missing might_sleep() and fix lockdep init (Sebastian)
+do { \
+ static struct lock_class_key __key; \
+ \
-+ init_rwbase_rt(&(rwl)->rwbase); \
++ init_rwbase_rt(&(rwl)->rwbase); \
+ __rt_rwlock_init(rwl, #rwl, &__key); \
+} while (0)
+
@@ -166,7 +165,7 @@ V5: Add missing might_sleep() and fix lockdep init (Sebastian)
+
+#define rwlock_is_contended(lock) (((void)(lock), 0))
+
-+#endif
++#endif /* __LINUX_RWLOCK_RT_H */
--- a/include/linux/rwlock_types.h
+++ b/include/linux/rwlock_types.h
@@ -5,9 +5,19 @@
diff --git a/patches/0034-mm-slub-use-migrate_disable-on-PREEMPT_RT.patch b/patches/0034-mm-slub-use-migrate_disable-on-PREEMPT_RT.patch
index 22c20bbbe03a..07772dcd3f4a 100644
--- a/patches/0034-mm-slub-use-migrate_disable-on-PREEMPT_RT.patch
+++ b/patches/0034-mm-slub-use-migrate_disable-on-PREEMPT_RT.patch
@@ -46,8 +46,8 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
#ifdef CONFIG_SLUB_DEBUG
#ifdef CONFIG_SLUB_DEBUG_ON
DEFINE_STATIC_KEY_TRUE(slub_debug_enabled);
-@@ -2815,7 +2835,7 @@ static void *___slab_alloc(struct kmem_c
- if (unlikely(!pfmemalloc_match(page, gfpflags)))
+@@ -2828,7 +2848,7 @@ static void *___slab_alloc(struct kmem_c
+ if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags)))
goto deactivate_slab;
- /* must check again c->page in case IRQ handler changed it */
@@ -55,7 +55,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
local_irq_save(flags);
if (unlikely(page != c->page)) {
local_irq_restore(flags);
-@@ -2874,7 +2894,8 @@ static void *___slab_alloc(struct kmem_c
+@@ -2887,7 +2907,8 @@ static void *___slab_alloc(struct kmem_c
}
if (unlikely(!slub_percpu_partial(c))) {
local_irq_restore(flags);
@@ -65,7 +65,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
page = c->page = slub_percpu_partial(c);
-@@ -2890,9 +2911,9 @@ static void *___slab_alloc(struct kmem_c
+@@ -2903,9 +2924,9 @@ static void *___slab_alloc(struct kmem_c
if (freelist)
goto check_new_page;
@@ -77,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (unlikely(!page)) {
slab_out_of_memory(s, gfpflags, node);
-@@ -2975,12 +2996,12 @@ static void *__slab_alloc(struct kmem_ca
+@@ -2988,12 +3009,12 @@ static void *__slab_alloc(struct kmem_ca
* cpu before disabling preemption. Need to reload cpu area
* pointer.
*/
@@ -92,7 +92,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
#endif
return p;
}
-@@ -3509,7 +3530,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
+@@ -3522,7 +3543,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
* IRQs, which protects against PREEMPT and interrupts
* handlers invoking normal fastpath.
*/
@@ -101,7 +101,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
local_irq_disable();
for (i = 0; i < size; i++) {
-@@ -3555,7 +3576,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
+@@ -3568,7 +3589,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
}
c->tid = next_tid(c->tid);
local_irq_enable();
@@ -110,7 +110,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* memcg and kmem_cache debug support and memory initialization.
-@@ -3565,7 +3586,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
+@@ -3578,7 +3599,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
slab_want_init_on_alloc(flags, s));
return i;
error:
diff --git a/patches/rtmutex--Exclude-!RT-tasks-from-PI-boosting.patch b/patches/0035-locking-rtmutex-Squash-RT-tasks-to-DEFAULT_PRIO.patch
index 56e2e0815bd7..5a8a6cc86dbf 100644
--- a/patches/rtmutex--Exclude-!RT-tasks-from-PI-boosting.patch
+++ b/patches/0035-locking-rtmutex-Squash-RT-tasks-to-DEFAULT_PRIO.patch
@@ -1,6 +1,6 @@
From: Peter Zijlstra <peterz@infradead.org>
-Date: Mon, 09 Aug 2021 14:58:11 +0200
-Subject: locking/rtmutex: Squash !RT tasks to DEFAULT_PRIO
+Date: Sun, 15 Aug 2021 23:28:30 +0200
+Subject: [PATCH 35/72] locking/rtmutex: Squash !RT tasks to DEFAULT_PRIO
Ensure all !RT tasks have the same prio such that they end up in FIFO
order and aren't split up according to nice level.
@@ -11,9 +11,9 @@ deboosting a larger coverage.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V4: Picked up as a new patch
----
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.938676930@linutronix.de
---
kernel/locking/rtmutex.c | 25 ++++++++++++++++++++-----
1 file changed, 20 insertions(+), 5 deletions(-)
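[ Illustration, not part of the delta patch: a small sketch of the prio squash
  the patch above describes. The MAX_RT_PRIO/DEFAULT_PRIO values follow the
  kernel's convention (prios 0..99 are realtime, nice 0 maps to 120); the
  helper name is made up. ]

    /*
     * Any non-RT waiter is treated as DEFAULT_PRIO when ordering rtmutex
     * waiters, so SCHED_OTHER tasks queue up FIFO instead of being sorted
     * by nice level.  Lower value == higher priority, as in the kernel.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_RT_PRIO     100     /* prios 0..99 are realtime */
    #define DEFAULT_PRIO    120     /* static prio of a nice-0 task */

    static bool rt_prio(int prio)
    {
            return prio < MAX_RT_PRIO;
    }

    static int demo_waiter_prio(int task_prio)
    {
            return rt_prio(task_prio) ? task_prio : DEFAULT_PRIO;
    }

    int main(void)
    {
            /* nice -19 (101) and nice 19 (139) both squash to 120, RT prio 10 stays */
            printf("%d %d %d\n", demo_waiter_prio(101),
                   demo_waiter_prio(139), demo_waiter_prio(10));
            return 0;
    }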
diff --git a/patches/0035-mm-slub-convert-kmem_cpu_slab-protection-to-local_lo.patch b/patches/0035-mm-slub-convert-kmem_cpu_slab-protection-to-local_lo.patch
index a976674cc793..5778979fe3c7 100644
--- a/patches/0035-mm-slub-convert-kmem_cpu_slab-protection-to-local_lo.patch
+++ b/patches/0035-mm-slub-convert-kmem_cpu_slab-protection-to-local_lo.patch
@@ -29,8 +29,8 @@ by this series.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
- include/linux/slub_def.h | 2
- mm/slub.c | 146 ++++++++++++++++++++++++++++++++++++-----------
+ include/linux/slub_def.h | 6 +
+ mm/slub.c | 142 ++++++++++++++++++++++++++++++++++++-----------
2 files changed, 115 insertions(+), 33 deletions(-)
--- a/include/linux/slub_def.h
@@ -43,14 +43,25 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
enum stat_item {
ALLOC_FASTPATH, /* Allocation from cpu slab */
-@@ -41,6 +42,7 @@ enum stat_item {
+@@ -40,6 +41,10 @@ enum stat_item {
+ CPU_PARTIAL_DRAIN, /* Drain cpu partial to node partial */
NR_SLUB_STAT_ITEMS };
++/*
++ * When changing the layout, make sure freelist and tid are still compatible
++ * with this_cpu_cmpxchg_double() alignment requirements.
++ */
struct kmem_cache_cpu {
-+ local_lock_t lock; /* Protects the fields below except stat */
void **freelist; /* Pointer to next available object */
unsigned long tid; /* Globally unique transaction id */
- struct page *page; /* The slab from which we are allocating */
+@@ -47,6 +52,7 @@ struct kmem_cache_cpu {
+ #ifdef CONFIG_SLUB_CPU_PARTIAL
+ struct page *partial; /* Partially allocated frozen slabs */
+ #endif
++ local_lock_t lock; /* Protects the fields above */
+ #ifdef CONFIG_SLUB_STATS
+ unsigned stat[NR_SLUB_STAT_ITEMS];
+ #endif
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -46,13 +46,21 @@
@@ -201,7 +212,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (page)
deactivate_slab(s, page, freelist);
-@@ -2836,9 +2878,9 @@ static void *___slab_alloc(struct kmem_c
+@@ -2849,9 +2891,9 @@ static void *___slab_alloc(struct kmem_c
goto deactivate_slab;
/* must check again c->page in case we got preempted and it changed */
@@ -213,7 +224,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto reread_page;
}
freelist = c->freelist;
-@@ -2849,7 +2891,7 @@ static void *___slab_alloc(struct kmem_c
+@@ -2862,7 +2904,7 @@ static void *___slab_alloc(struct kmem_c
if (!freelist) {
c->page = NULL;
@@ -222,20 +233,16 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
stat(s, DEACTIVATE_BYPASS);
goto new_slab;
}
-@@ -2858,7 +2900,11 @@ static void *___slab_alloc(struct kmem_c
+@@ -2871,7 +2913,7 @@ static void *___slab_alloc(struct kmem_c
load_freelist:
- lockdep_assert_irqs_disabled();
-+#ifdef CONFIG_PREEMPT_RT
-+ lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock.lock));
-+#else
+ lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock));
-+#endif
/*
* freelist is pointing to the list of objects to be used.
-@@ -2868,39 +2914,39 @@ static void *___slab_alloc(struct kmem_c
+@@ -2881,39 +2923,39 @@ static void *___slab_alloc(struct kmem_c
VM_BUG_ON(!c->page->frozen);
c->freelist = get_freepointer(s, freelist);
c->tid = next_tid(c->tid);
@@ -283,7 +290,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
stat(s, CPU_PARTIAL_ALLOC);
goto redo;
}
-@@ -2953,7 +2999,7 @@ static void *___slab_alloc(struct kmem_c
+@@ -2966,7 +3008,7 @@ static void *___slab_alloc(struct kmem_c
retry_load_page:
@@ -292,7 +299,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (unlikely(c->page)) {
void *flush_freelist = c->freelist;
struct page *flush_page = c->page;
-@@ -2962,7 +3008,7 @@ static void *___slab_alloc(struct kmem_c
+@@ -2975,7 +3017,7 @@ static void *___slab_alloc(struct kmem_c
c->freelist = NULL;
c->tid = next_tid(c->tid);
@@ -301,7 +308,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
deactivate_slab(s, flush_page, flush_freelist);
-@@ -3081,7 +3127,15 @@ static __always_inline void *slab_alloc_
+@@ -3094,7 +3136,15 @@ static __always_inline void *slab_alloc_
object = c->freelist;
page = c->page;
@@ -318,7 +325,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
object = __slab_alloc(s, gfpflags, node, addr, c);
} else {
void *next_object = get_freepointer_safe(s, object);
-@@ -3341,6 +3395,7 @@ static __always_inline void do_slab_free
+@@ -3354,6 +3404,7 @@ static __always_inline void do_slab_free
barrier();
if (likely(page == c->page)) {
@@ -326,7 +333,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
void **freelist = READ_ONCE(c->freelist);
set_freepointer(s, tail_obj, freelist);
-@@ -3353,6 +3408,31 @@ static __always_inline void do_slab_free
+@@ -3366,6 +3417,31 @@ static __always_inline void do_slab_free
note_cmpxchg_failure("slab_free", s, tid);
goto redo;
}
@@ -358,7 +365,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
stat(s, FREE_FASTPATH);
} else
__slab_free(s, page, head, tail_obj, cnt, addr);
-@@ -3531,7 +3611,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
+@@ -3544,7 +3620,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
* handlers invoking normal fastpath.
*/
c = slub_get_cpu_ptr(s->cpu_slab);
@@ -367,7 +374,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
for (i = 0; i < size; i++) {
void *object = kfence_alloc(s, s->object_size, flags);
-@@ -3552,7 +3632,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
+@@ -3565,7 +3641,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
*/
c->tid = next_tid(c->tid);
@@ -376,7 +383,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* Invoking slow path likely have side-effect
-@@ -3566,7 +3646,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
+@@ -3579,7 +3655,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
c = this_cpu_ptr(s->cpu_slab);
maybe_wipe_obj_freeptr(s, p[i]);
@@ -385,7 +392,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
continue; /* goto for-loop */
}
-@@ -3575,7 +3655,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
+@@ -3588,7 +3664,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
maybe_wipe_obj_freeptr(s, p[i]);
}
c->tid = next_tid(c->tid);
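[ Illustration, not part of the delta patch: a userspace model of the
  local_lock pattern used above for kmem_cache_cpu. pthread mutexes stand in
  for local_lock_t, the array index for the CPU id; field and function names
  are simplified stand-ins. ]

    /*
     * The lock lives inside the per-CPU structure it protects, the slow path
     * takes exactly the instance belonging to the current CPU, and the
     * fast-path code can assert that this instance is held.
     */
    #include <pthread.h>
    #include <stddef.h>

    #define NR_CPUS_DEMO 4

    struct kmem_cache_cpu_demo {
            void **freelist;
            unsigned long tid;
            pthread_mutex_t lock;   /* stands in for local_lock_t */
    };

    static struct kmem_cache_cpu_demo cpu_slab_demo[NR_CPUS_DEMO] = {
            [0 ... NR_CPUS_DEMO - 1] = { .lock = PTHREAD_MUTEX_INITIALIZER },
    };

    static void *demo_slab_alloc_slow(int cpu)
    {
            struct kmem_cache_cpu_demo *c = &cpu_slab_demo[cpu];

            pthread_mutex_lock(&c->lock);   /* local_lock_irqsave(&s->cpu_slab->lock, flags) */
            /* refill / deactivate the per-CPU slab under this CPU's lock */
            c->tid++;
            pthread_mutex_unlock(&c->lock); /* local_unlock_irqrestore(...) */
            return NULL;
    }

    int main(void)
    {
            demo_slab_alloc_slow(0);
            return 0;
    }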
diff --git a/patches/locking_mutex__Consolidate_core_headers.patch b/patches/0036-locking-mutex-Consolidate-core-headers-remove-kernel.patch
index 94f76073b117..3c8b893ba562 100644
--- a/patches/locking_mutex__Consolidate_core_headers.patch
+++ b/patches/0036-locking-mutex-Consolidate-core-headers-remove-kernel.patch
@@ -1,8 +1,7 @@
-Subject: locking/mutex: Consolidate core headers
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:50 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Tue, 17 Aug 2021 16:17:38 +0200
+Subject: [PATCH 36/72] locking/mutex: Consolidate core headers, remove
+ kernel/locking/mutex-debug.h
Having two header files which contain just the non-debug and debug variants
is mostly waste of disc space and has no real value. Stick the debug
@@ -10,6 +9,9 @@ variants into the common mutex.h file as counterpart to the stubs for the
non-debug case.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211303.995350521@linutronix.de
---
kernel/locking/mutex-debug.c | 4 +---
kernel/locking/mutex-debug.h | 29 -----------------------------
@@ -17,7 +19,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
kernel/locking/mutex.h | 37 +++++++++++++++++++++++--------------
4 files changed, 25 insertions(+), 51 deletions(-)
delete mode 100644 kernel/locking/mutex-debug.h
----
+
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -1,6 +1,4 @@
diff --git a/patches/locking_mutex__Move_waiter_to_core_header.patch b/patches/0037-locking-mutex-Move-the-struct-mutex_waiter-definitio.patch
index 4c35116ea052..515bb5a1add6 100644
--- a/patches/locking_mutex__Move_waiter_to_core_header.patch
+++ b/patches/0037-locking-mutex-Move-the-struct-mutex_waiter-definitio.patch
@@ -1,18 +1,22 @@
-Subject: locking/mutex: Move waiter to core header
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:50 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:33 +0200
+Subject: [PATCH 37/72] locking/mutex: Move the 'struct mutex_waiter'
+ definition from <linux/mutex.h> to the internal header
-From: Thomas Gleixner <tglx@linutronix.de>
+Move the mutex waiter declaration from the public <linux/mutex.h> header
+to the internal kernel/locking/mutex.h header.
-Move the mutex waiter declaration from the global to the core local
-header. There is no reason to expose it outside of the core code.
+There is no reason to expose it outside of the core code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.054325923@linutronix.de
---
include/linux/mutex.h | 13 -------------
kernel/locking/mutex.h | 13 +++++++++++++
2 files changed, 13 insertions(+), 13 deletions(-)
----
+
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -74,19 +74,6 @@ struct ww_mutex {
diff --git a/patches/locking_ww_mutex__Move_ww_mutex_declarations_into_ww_mutex.h.patch b/patches/0038-locking-ww_mutex-Move-the-ww_mutex-definitions-from-.patch
index b832a941ea07..19203b8fbdce 100644
--- a/patches/locking_ww_mutex__Move_ww_mutex_declarations_into_ww_mutex.h.patch
+++ b/patches/0038-locking-ww_mutex-Move-the-ww_mutex-definitions-from-.patch
@@ -1,20 +1,22 @@
-Subject: locking/ww_mutex: Move ww_mutex declarations into ww_mutex.h
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:50 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:34 +0200
+Subject: [PATCH 38/72] locking/ww_mutex: Move the ww_mutex definitions from
+ <linux/mutex.h> into <linux/ww_mutex.h>
-From: Thomas Gleixner <tglx@linutronix.de>
-
-Move the ww_mutex declarations into the ww_mutex specific header where they
+Move the ww_mutex definitions into the ww_mutex specific header where they
belong.
-Preparatory change to allow compiling ww_mutex standalone.
+Preparatory change to allow compiling ww_mutexes standalone.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.110216293@linutronix.de
---
include/linux/mutex.h | 11 -----------
include/linux/ww_mutex.h | 8 ++++++++
2 files changed, 8 insertions(+), 11 deletions(-)
----
+
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -20,9 +20,6 @@
diff --git a/patches/locking_mutex__Make_mutex__wait_lock_raw.patch b/patches/0039-locking-mutex-Make-mutex-wait_lock-raw.patch
index 195759c5769d..d8dde9af739f 100644
--- a/patches/locking_mutex__Make_mutex__wait_lock_raw.patch
+++ b/patches/0039-locking-mutex-Make-mutex-wait_lock-raw.patch
@@ -1,18 +1,24 @@
-Subject: locking/mutex: Make mutex::wait_lock raw
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:50 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:28:36 +0200
+Subject: [PATCH 39/72] locking/mutex: Make mutex::wait_lock raw
The wait_lock of mutex is really a low level lock. Convert it to a
raw_spinlock like the wait_lock of rtmutex.
+[ mingo: backmerged the test_lockup.c build fix by bigeasy. ]
+
+Co-developed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.166863404@linutronix.de
---
include/linux/mutex.h | 4 ++--
kernel/locking/mutex.c | 22 +++++++++++-----------
- 2 files changed, 13 insertions(+), 13 deletions(-)
----
+ lib/test_lockup.c | 2 +-
+ 3 files changed, 14 insertions(+), 14 deletions(-)
+
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -50,7 +50,7 @@
@@ -122,3 +128,14 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
wake_up_q(&wake_q);
}
+--- a/lib/test_lockup.c
++++ b/lib/test_lockup.c
+@@ -502,7 +502,7 @@ static int __init test_lockup_init(void)
+ offsetof(rwlock_t, magic),
+ RWLOCK_MAGIC) ||
+ test_magic(lock_mutex_ptr,
+- offsetof(struct mutex, wait_lock.rlock.magic),
++ offsetof(struct mutex, wait_lock.magic),
+ SPINLOCK_MAGIC) ||
+ test_magic(lock_rwsem_ptr,
+ offsetof(struct rw_semaphore, wait_lock.magic),
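[ Illustration, not part of the delta patch: why the offsetof() in
  test_lockup.c changes once mutex::wait_lock becomes a raw_spinlock_t. A
  spinlock_t wraps the raw lock as ->rlock, while a raw_spinlock_t is the raw
  lock itself, so the debug ->magic member moves up one level. The layouts
  below are simplified stand-ins; the real structures carry more debug and
  lockdep members. ]

    #include <stdio.h>
    #include <stddef.h>

    struct raw_spinlock_demo { unsigned int magic; };
    struct spinlock_demo     { struct raw_spinlock_demo rlock; };

    struct mutex_before { struct spinlock_demo wait_lock; };     /* wait_lock.rlock.magic */
    struct mutex_after  { struct raw_spinlock_demo wait_lock; }; /* wait_lock.magic */

    int main(void)
    {
            printf("before: %zu, after: %zu\n",
                   offsetof(struct mutex_before, wait_lock.rlock.magic),
                   offsetof(struct mutex_after, wait_lock.magic));
            return 0;
    }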
diff --git a/patches/locking_ww_mutex__Simplify_lockdep_annotation.patch b/patches/0040-locking-ww_mutex-Simplify-lockdep-annotations.patch
index 60e87f863cff..ad32e4a5759d 100644
--- a/patches/locking_ww_mutex__Simplify_lockdep_annotation.patch
+++ b/patches/0040-locking-ww_mutex-Simplify-lockdep-annotations.patch
@@ -1,18 +1,18 @@
-Subject: locking/ww_mutex: Simplify lockdep annotation
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:54 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:38 +0200
+Subject: [PATCH 40/72] locking/ww_mutex: Simplify lockdep annotations
No functional change.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.222921634@linutronix.de
---
kernel/locking/mutex.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
----
+
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -949,6 +949,10 @@ static __always_inline int __sched
diff --git a/patches/locking_ww_mutex__Gather_mutex_waiter_initialization.patch b/patches/0041-locking-ww_mutex-Gather-mutex_waiter-initialization.patch
index d8b9c118c615..d4cb6e3b4f8a 100644
--- a/patches/locking_ww_mutex__Gather_mutex_waiter_initialization.patch
+++ b/patches/0041-locking-ww_mutex-Gather-mutex_waiter-initialization.patch
@@ -1,17 +1,17 @@
-Subject: locking/ww_mutex: Gather mutex_waiter initialization
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:53 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:39 +0200
+Subject: [PATCH 41/72] locking/ww_mutex: Gather mutex_waiter initialization
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.281927514@linutronix.de
---
kernel/locking/mutex-debug.c | 1 +
kernel/locking/mutex.c | 12 +++---------
2 files changed, 4 insertions(+), 9 deletions(-)
----
+
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -30,6 +30,7 @@ void debug_mutex_lock_common(struct mute
diff --git a/patches/locking_ww_mutex__Split_up_ww_mutex_unlock.patch b/patches/0042-locking-ww_mutex-Split-up-ww_mutex_unlock.patch
index a117129c04c3..06a2051a2306 100644
--- a/patches/locking_ww_mutex__Split_up_ww_mutex_unlock.patch
+++ b/patches/0042-locking-ww_mutex-Split-up-ww_mutex_unlock.patch
@@ -1,22 +1,45 @@
-Subject: locking/ww_mutex: Split up ww_mutex_unlock()
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:52 2021 +0200
-
-From: Peter Zijlstra <peterz@infradead.org>
+From: "Peter Zijlstra (Intel)" <peterz@infradead.org>
+Date: Tue, 17 Aug 2021 16:19:04 +0200
+Subject: [PATCH 42/72] locking/ww_mutex: Split up ww_mutex_unlock()
Split the ww related part out into a helper function so it can be reused
for a rtmutex based ww_mutex implementation.
+[ mingo: Fixed bisection failure. ]
+
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
----
- kernel/locking/mutex.c | 26 +++++++++++++-------------
- 1 file changed, 13 insertions(+), 13 deletions(-)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.340166556@linutronix.de
---
+ kernel/locking/mutex.c | 28 +++++++++++++++-------------
+ 1 file changed, 15 insertions(+), 13 deletions(-)
+
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
-@@ -750,19 +750,7 @@ EXPORT_SYMBOL(mutex_unlock);
+@@ -737,6 +737,20 @@ void __sched mutex_unlock(struct mutex *
+ }
+ EXPORT_SYMBOL(mutex_unlock);
+
++static void __ww_mutex_unlock(struct ww_mutex *lock)
++{
++ /*
++ * The unlocking fastpath is the 0->1 transition from 'locked'
++ * into 'unlocked' state:
++ */
++ if (lock->ctx) {
++ MUTEX_WARN_ON(!lock->ctx->acquired);
++ if (lock->ctx->acquired > 0)
++ lock->ctx->acquired--;
++ lock->ctx = NULL;
++ }
++}
++
+ /**
+ * ww_mutex_unlock - release the w/w mutex
+ * @lock: the mutex to be released
+@@ -750,19 +764,7 @@ EXPORT_SYMBOL(mutex_unlock);
*/
void __sched ww_mutex_unlock(struct ww_mutex *lock)
{
@@ -37,22 +60,3 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
mutex_unlock(&lock->base);
}
EXPORT_SYMBOL(ww_mutex_unlock);
-@@ -915,6 +903,18 @@ static inline int __sched
- return 0;
- }
-
-+static void __ww_mutex_unlock(struct ww_mutex *lock)
-+{
-+ if (lock->ctx) {
-+#ifdef CONFIG_DEBUG_MUTEXES
-+ DEBUG_LOCKS_WARN_ON(!lock->ctx->acquired);
-+#endif
-+ if (lock->ctx->acquired > 0)
-+ lock->ctx->acquired--;
-+ lock->ctx = NULL;
-+ }
-+}
-+
- /*
- * Lock a mutex (possibly interruptible), slowpath:
- */
diff --git a/patches/locking_ww_mutex__Split_W_W_implementation_logic.patch b/patches/0043-locking-ww_mutex-Split-out-the-W-W-implementation-lo.patch
index ed8f81df9c20..80ce2ab5f4e1 100644
--- a/patches/locking_ww_mutex__Split_W_W_implementation_logic.patch
+++ b/patches/0043-locking-ww_mutex-Split-out-the-W-W-implementation-lo.patch
@@ -1,20 +1,22 @@
-Subject: locking/ww_mutex: Split W/W implementation logic
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:50 2021 +0200
+From: "Peter Zijlstra (Intel)" <peterz@infradead.org>
+Date: Tue, 17 Aug 2021 16:31:54 +0200
+Subject: [PATCH 43/72] locking/ww_mutex: Split out the W/W implementation
+ logic into kernel/locking/ww_mutex.h
-From: Peter Zijlstra <peterz@infradead.org>
-
-Split the W/W mutex helper functions out into a separate header file so
+Split the W/W mutex helper functions out into a separate header file, so
they can be shared with a rtmutex based variant later.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.396893399@linutronix.de
---
- kernel/locking/mutex.c | 370 ----------------------------------------------
+ kernel/locking/mutex.c | 372 ----------------------------------------------
kernel/locking/ww_mutex.h | 369 +++++++++++++++++++++++++++++++++++++++++++++
- 2 files changed, 370 insertions(+), 369 deletions(-)
+ 2 files changed, 370 insertions(+), 371 deletions(-)
create mode 100644 kernel/locking/ww_mutex.h
----
+
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -282,215 +282,7 @@ void __sched mutex_lock(struct mutex *lo
@@ -234,7 +236,28 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
-@@ -755,166 +547,6 @@ void __sched ww_mutex_unlock(struct ww_m
+@@ -737,20 +529,6 @@ void __sched mutex_unlock(struct mutex *
+ }
+ EXPORT_SYMBOL(mutex_unlock);
+
+-static void __ww_mutex_unlock(struct ww_mutex *lock)
+-{
+- /*
+- * The unlocking fastpath is the 0->1 transition from 'locked'
+- * into 'unlocked' state:
+- */
+- if (lock->ctx) {
+- MUTEX_WARN_ON(!lock->ctx->acquired);
+- if (lock->ctx->acquired > 0)
+- lock->ctx->acquired--;
+- lock->ctx = NULL;
+- }
+-}
+-
+ /**
+ * ww_mutex_unlock - release the w/w mutex
+ * @lock: the mutex to be released
+@@ -769,154 +547,6 @@ void __sched ww_mutex_unlock(struct ww_m
}
EXPORT_SYMBOL(ww_mutex_unlock);
@@ -386,18 +409,6 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
- return 0;
-}
-
--static void __ww_mutex_unlock(struct ww_mutex *lock)
--{
-- if (lock->ctx) {
--#ifdef CONFIG_DEBUG_MUTEXES
-- DEBUG_LOCKS_WARN_ON(!lock->ctx->acquired);
--#endif
-- if (lock->ctx->acquired > 0)
-- lock->ctx->acquired--;
-- lock->ctx = NULL;
-- }
--}
--
/*
* Lock a mutex (possibly interruptible), slowpath:
*/
diff --git a/patches/locking_ww_mutex__Remove___sched_annotation.patch b/patches/0044-locking-ww_mutex-Remove-the-__sched-annotation-from-.patch
index 3e5f684adb71..cb2811324528 100644
--- a/patches/locking_ww_mutex__Remove___sched_annotation.patch
+++ b/patches/0044-locking-ww_mutex-Remove-the-__sched-annotation-from-.patch
@@ -1,18 +1,20 @@
-Subject: locking/ww_mutex: Remove __sched annotation
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:49 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:44 +0200
+Subject: [PATCH 44/72] locking/ww_mutex: Remove the __sched annotation from
+ ww_mutex APIs
None of these functions will be on the stack when blocking in
schedule(), hence __sched is not needed.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.453235952@linutronix.de
---
kernel/locking/ww_mutex.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
----
+
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -62,7 +62,7 @@ ww_mutex_lock_acquired(struct ww_mutex *
diff --git a/patches/locking_ww_mutex__Abstract_waiter_iteration.patch b/patches/0045-locking-ww_mutex-Abstract-out-the-waiter-iteration.patch
index ce60e5aa12f7..7ee6b1af494f 100644
--- a/patches/locking_ww_mutex__Abstract_waiter_iteration.patch
+++ b/patches/0045-locking-ww_mutex-Abstract-out-the-waiter-iteration.patch
@@ -1,18 +1,19 @@
-Subject: locking/ww_mutex: Abstract waiter iteration
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:48 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:45 +0200
+Subject: [PATCH 45/72] locking/ww_mutex: Abstract out the waiter iteration
Split out the waiter iteration functions so they can be substituted for a
rtmutex based ww_mutex later.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.509186185@linutronix.de
---
kernel/locking/ww_mutex.h | 57 ++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 53 insertions(+), 4 deletions(-)
----
+
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -1,5 +1,49 @@
diff --git a/patches/locking_ww_mutex__Abstract_waiter_enqueue.patch b/patches/0046-locking-ww_mutex-Abstract-out-waiter-enqueueing.patch
index 41f8110161c6..7a67aa989c36 100644
--- a/patches/locking_ww_mutex__Abstract_waiter_enqueue.patch
+++ b/patches/0046-locking-ww_mutex-Abstract-out-waiter-enqueueing.patch
@@ -1,19 +1,19 @@
-Subject: locking/ww_mutex: Abstract waiter enqueueing
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:47 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:47 +0200
+Subject: [PATCH 46/72] locking/ww_mutex: Abstract out waiter enqueueing
The upcoming rtmutex based ww_mutex needs a different handling for
enqueueing a waiter. Split it out into a helper function.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.566318143@linutronix.de
---
kernel/locking/ww_mutex.h | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
----
+
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -44,6 +44,15 @@ static inline struct mutex_waiter *
diff --git a/patches/locking_ww_mutex__Abstract_mutex_accessors.patch b/patches/0047-locking-ww_mutex-Abstract-out-mutex-accessors.patch
index aa1799a498cd..773ec97978a1 100644
--- a/patches/locking_ww_mutex__Abstract_mutex_accessors.patch
+++ b/patches/0047-locking-ww_mutex-Abstract-out-mutex-accessors.patch
@@ -1,19 +1,19 @@
-Subject: locking/ww_mutex: Abstract mutex accessors
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:46 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:49 +0200
+Subject: [PATCH 47/72] locking/ww_mutex: Abstract out mutex accessors
Move the mutex related access from various ww_mutex functions into helper
functions so they can be substituted for rtmutex based ww_mutex later.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.622477030@linutronix.de
---
kernel/locking/ww_mutex.h | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
----
+
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -53,6 +53,18 @@ static inline void
diff --git a/patches/locking_ww_mutex__Abstract_mutex_types.patch b/patches/0048-locking-ww_mutex-Abstract-out-mutex-types.patch
index 5b16e7751824..9aaabc2595e4 100644
--- a/patches/locking_ww_mutex__Abstract_mutex_types.patch
+++ b/patches/0048-locking-ww_mutex-Abstract-out-mutex-types.patch
@@ -1,8 +1,6 @@
-Subject: locking/ww_mutex: Abstract mutex types
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:45 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:50 +0200
+Subject: [PATCH 48/72] locking/ww_mutex: Abstract out mutex types
Some ww_mutex helper functions use pointers for the underlying mutex and
mutex_waiter. The upcoming rtmutex based implementation needs to share
@@ -11,11 +9,13 @@ types in the affected functions.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.678720245@linutronix.de
---
kernel/locking/ww_mutex.h | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)
----
+
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -1,5 +1,8 @@
diff --git a/patches/locking-ww_mutex--Abstract-internal-lock-access.patch b/patches/0049-locking-ww_mutex-Abstract-out-internal-lock-accesses.patch
index 1c27f8e1b5e8..e3401407b50a 100644
--- a/patches/locking-ww_mutex--Abstract-internal-lock-access.patch
+++ b/patches/0049-locking-ww_mutex-Abstract-out-internal-lock-accesses.patch
@@ -1,11 +1,14 @@
-Subject: locking/ww_mutex: Abstract internal lock access
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 26 Jul 2021 11:57:07 +0200
+Date: Sun, 15 Aug 2021 23:28:52 +0200
+Subject: [PATCH 49/72] locking/ww_mutex: Abstract out internal lock accesses
Accessing the internal wait_lock of mutex and rtmutex is slightly
different. Provide helper functions for that.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.734635961@linutronix.de
---
include/linux/ww_mutex.h | 13 +++++++++----
kernel/locking/ww_mutex.h | 23 +++++++++++++++++++----
diff --git a/patches/locking_ww_mutex__Implement_rt_mutex_accessors.patch b/patches/0050-locking-ww_mutex-Implement-rt_mutex-accessors.patch
index d52bd9f962a2..20420adb97d4 100644
--- a/patches/locking_ww_mutex__Implement_rt_mutex_accessors.patch
+++ b/patches/0050-locking-ww_mutex-Implement-rt_mutex-accessors.patch
@@ -1,18 +1,18 @@
-Subject: locking/ww_mutex: Implement rt_mutex accessors
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:44 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:53 +0200
+Subject: [PATCH 50/72] locking/ww_mutex: Implement rt_mutex accessors
Provide the type defines and the helper inlines for rtmutex based ww_mutexes.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.790760545@linutronix.de
---
kernel/locking/ww_mutex.h | 80 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 80 insertions(+)
----
+
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -1,5 +1,7 @@
diff --git a/patches/locking_ww_mutex__Add_RT_priority_to_W_W_order.patch b/patches/0051-locking-ww_mutex-Add-RT-priority-to-W-W-order.patch
index ad1dda2f4ba0..492ab72ec046 100644
--- a/patches/locking_ww_mutex__Add_RT_priority_to_W_W_order.patch
+++ b/patches/0051-locking-ww_mutex-Add-RT-priority-to-W-W-order.patch
@@ -1,19 +1,19 @@
-Subject: locking/ww_mutex: Add RT priority to W/W order
From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:42 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:55 +0200
+Subject: [PATCH 51/72] locking/ww_mutex: Add RT priority to W/W order
-From: Peter Zijlstra <peterz@infradead.org>
-
-RTmutex based ww_mutexes cannot order based on timestamp. They have to
+RT mutex based ww_mutexes cannot order based on timestamps. They have to
order based on priority. Add the necessary decision logic.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.847536630@linutronix.de
---
kernel/locking/ww_mutex.h | 64 +++++++++++++++++++++++++++++++++++-----------
1 file changed, 49 insertions(+), 15 deletions(-)
----
+
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -219,19 +219,54 @@ ww_mutex_lock_acquired(struct ww_mutex *
@@ -33,7 +33,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+__ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
{
+/*
-+ * Can only do the RT prio for WW_RT because task->prio isn't stable due to PI,
++ * Can only do the RT prio for WW_RT, because task->prio isn't stable due to PI,
+ * so the wait_list ordering will go wobbly. rt_mutex re-queues the waiter and
+ * isn't affected by this.
+ */
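[ Illustration, not part of the delta patch: a simplified comparator for the
  ordering rule described above. For WW_RT the scheduling priority decides
  first and the acquire stamp only breaks ties; mutex-based ww_mutexes keep
  using the stamp alone. Field names, the deadline handling and the exact
  "less than" convention of the kernel helper are not reproduced. ]

    #include <stdbool.h>

    struct ww_ctx_demo {
            unsigned long stamp;    /* older (smaller) stamp == earlier ctx */
            int prio;               /* kernel convention: lower value == higher prio */
    };

    static bool demo_ctx_wins(const struct ww_ctx_demo *a,
                              const struct ww_ctx_demo *b, bool ww_rt)
    {
            if (ww_rt && a->prio != b->prio)
                    return a->prio < b->prio;       /* higher RT prio wins */

            return (long)(a->stamp - b->stamp) < 0; /* otherwise: older ctx wins */
    }

    int main(void)
    {
            struct ww_ctx_demo a = { .stamp = 2, .prio = 10 };
            struct ww_ctx_demo b = { .stamp = 1, .prio = 120 };

            /* RT ordering lets the high-prio late-comer win; stamp order would not */
            return demo_ctx_wins(&a, &b, true) ? 0 : 1;
    }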
diff --git a/patches/locking_ww_mutex__Add_ww_rt_mutex_interface.patch b/patches/0052-locking-ww_mutex-Add-rt_mutex-based-lock-type-and-ac.patch
index 51f9dd871255..65fb9d09c6dd 100644
--- a/patches/locking_ww_mutex__Add_ww_rt_mutex_interface.patch
+++ b/patches/0052-locking-ww_mutex-Add-rt_mutex-based-lock-type-and-ac.patch
@@ -1,20 +1,21 @@
-Subject: locking/ww_mutex: Add rt_mutex based lock type and accessors
From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:41 2021 +0200
+Date: Sun, 15 Aug 2021 23:28:56 +0200
+Subject: [PATCH 52/72] locking/ww_mutex: Add rt_mutex based lock type and
+ accessors
-From: Peter Zijlstra <peterz@infradead.org>
-
-Provide the defines for RT mutex based ww_mutexes and fixup the debug logic
+Provide the defines for RT mutex based ww_mutexes and fix up the debug logic
so it's either enabled by DEBUG_MUTEXES or DEBUG_RT_MUTEXES on RT kernels.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.908012566@linutronix.de
---
include/linux/ww_mutex.h | 33 ++++++++++++++++++++++++---------
kernel/locking/ww_mutex.h | 6 +++---
2 files changed, 27 insertions(+), 12 deletions(-)
----
+
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -18,11 +18,24 @@
diff --git a/patches/locking-rtmutex--Extend-the-rtmutex-core-to-support-ww_mutex.patch b/patches/0053-locking-rtmutex-Extend-the-rtmutex-core-to-support-w.patch
index e429439659a0..f355ccdaf0d3 100644
--- a/patches/locking-rtmutex--Extend-the-rtmutex-core-to-support-ww_mutex.patch
+++ b/patches/0053-locking-rtmutex-Extend-the-rtmutex-core-to-support-w.patch
@@ -1,17 +1,17 @@
-Subject: locking/rtmutex: Extend the rtmutex core to support ww_mutex
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:41 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:28:58 +0200
+Subject: [PATCH 53/72] locking/rtmutex: Extend the rtmutex core to support
+ ww_mutex
Add a ww acquire context pointer to the waiter and various functions and
add the ww_mutex related invocations to the proper spots in the locking
-code similar to the mutex based variant.
+code, similar to the mutex based variant.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V4: Simplify __waiter_less() (PeterZ)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211304.966139174@linutronix.de
---
kernel/locking/rtmutex.c | 119 ++++++++++++++++++++++++++++++++++++----
kernel/locking/rtmutex_api.c | 4 -
@@ -111,7 +111,7 @@ V4: Simplify __waiter_less() (PeterZ)
+ if (build_ww_mutex() && ww_ctx) {
+ struct rt_mutex *rtm;
+
-+ /* Check whether the waiter should backout immediately */
++ /* Check whether the waiter should back out immediately */
+ rtm = container_of(lock, struct rt_mutex, rtmutex);
+ res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx);
+ if (res)
@@ -274,7 +274,7 @@ V4: Simplify __waiter_less() (PeterZ)
+ task_blocks_on_rt_mutex(lock, &waiter, current, NULL, RT_MUTEX_MIN_CHAINWALK);
for (;;) {
- /* Try to acquire the lock again. */
+ /* Try to acquire the lock again */
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -267,7 +267,7 @@ int __sched __rt_mutex_start_proxy_lock(
diff --git a/patches/locking_ww_mutex__Implement_ww_rt_mutex.patch b/patches/0054-locking-ww_mutex-Implement-rtmutex-based-ww_mutex-AP.patch
index 8d69d9f0b218..af188ca82f91 100644
--- a/patches/locking_ww_mutex__Implement_ww_rt_mutex.patch
+++ b/patches/0054-locking-ww_mutex-Implement-rtmutex-based-ww_mutex-AP.patch
@@ -1,21 +1,22 @@
-Subject: locking/ww_mutex: Implement rtmutex based ww_mutex API functions
-From: Peter Zijlstra <peterz@infradead.org>
-Date: Fri Jul 16 18:07:38 2021 +0200
-
From: Peter Zijlstra <peterz@infradead.org>
+Date: Sun, 15 Aug 2021 23:29:00 +0200
+Subject: [PATCH 54/72] locking/ww_mutex: Implement rtmutex based ww_mutex API
+ functions
Add the actual ww_mutex API functions which replace the mutex based variant
on RT enabled kernels.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V3: Make lock_interruptible interruptible for real (Mike)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.024057938@linutronix.de
---
kernel/locking/Makefile | 2 -
kernel/locking/ww_rt_mutex.c | 76 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 77 insertions(+), 1 deletion(-)
----
+ create mode 100644 kernel/locking/ww_rt_mutex.c
+
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -25,7 +25,7 @@ obj-$(CONFIG_LOCK_SPIN_ON_OWNER) += osq_
@@ -57,7 +58,7 @@ V3: Make lock_interruptible interruptible for real (Mike)
+
+ /*
+ * Reset the wounded flag after a kill. No other process can
-+ * race and wound us here since they can't have a valid owner
++ * race and wound us here, since they can't have a valid owner
+ * pointer if we don't have any locks held.
+ */
+ if (ww_ctx->acquired == 0)
diff --git a/patches/locking_rtmutex__Add_mutex_variant_for_RT.patch b/patches/0055-locking-rtmutex-Add-mutex-variant-for-RT.patch
index 0ae3711f8978..f1432e7d0536 100644
--- a/patches/locking_rtmutex__Add_mutex_variant_for_RT.patch
+++ b/patches/0055-locking-rtmutex-Add-mutex-variant-for-RT.patch
@@ -1,22 +1,23 @@
-Subject: locking/rtmutex: Add mutex variant for RT
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:52 2021 +0200
+Date: Sun, 15 Aug 2021 23:29:01 +0200
+Subject: [PATCH 55/72] locking/rtmutex: Add mutex variant for RT
-From: Thomas Gleixner <tglx@linutronix.de>
-
-Add the necessary defines, helpers and API functions for replacing mutex on
-a PREEMPT_RT enabled kernel with a rtmutex based variant.
+Add the necessary defines, helpers and API functions for replacing struct mutex on
+a PREEMPT_RT enabled kernel with an rtmutex based variant.
No functional change when CONFIG_PREEMPT_RT=n
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.081517417@linutronix.de
---
include/linux/mutex.h | 66 +++++++++++++++++++----
kernel/locking/mutex.c | 4 +
kernel/locking/rtmutex_api.c | 122 +++++++++++++++++++++++++++++++++++++++++++
lib/Kconfig.debug | 11 ++-
4 files changed, 187 insertions(+), 16 deletions(-)
----
+
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -20,6 +20,18 @@
diff --git a/patches/lib_test_lockup__Adapt_to_changed_variables..patch b/patches/0056-lib-test_lockup-Adapt-to-changed-variables.patch
index 0a97ca8fa18d..cfe81b123f0b 100644
--- a/patches/lib_test_lockup__Adapt_to_changed_variables..patch
+++ b/patches/0056-lib-test_lockup-Adapt-to-changed-variables.patch
@@ -1,8 +1,6 @@
-Subject: lib/test_lockup: Adapt to changed variables.
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Thu Jul 1 17:50:20 2021 +0200
-
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Sun, 15 Aug 2021 23:29:03 +0200
+Subject: [PATCH 56/72] lib/test_lockup: Adapt to changed variables
The inner parts of certain locks (mutex, rwlocks) changed due to a rework for
RT and non RT code. Most users remain unaffected, but those who fiddle around
@@ -12,10 +10,13 @@ Match the struct names to the new layout.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.137982730@linutronix.de
---
- lib/test_lockup.c | 8 ++++----
- 1 file changed, 4 insertions(+), 4 deletions(-)
----
+ lib/test_lockup.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
--- a/lib/test_lockup.c
+++ b/lib/test_lockup.c
@@ -485,13 +485,13 @@ static int __init test_lockup_init(void)
@@ -35,12 +36,3 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
SPINLOCK_MAGIC))
return -EINVAL;
#else
-@@ -502,7 +502,7 @@ static int __init test_lockup_init(void)
- offsetof(rwlock_t, magic),
- RWLOCK_MAGIC) ||
- test_magic(lock_mutex_ptr,
-- offsetof(struct mutex, wait_lock.rlock.magic),
-+ offsetof(struct mutex, wait_lock.magic),
- SPINLOCK_MAGIC) ||
- test_magic(lock_rwsem_ptr,
- offsetof(struct rw_semaphore, wait_lock.magic),
diff --git a/patches/futex__Validate_waiter_correctly_in_futex_proxy_trylock_atomic.patch b/patches/0057-futex-Validate-waiter-correctly-in-futex_proxy_trylo.patch
index 9741df01d595..99f119d09355 100644
--- a/patches/futex__Validate_waiter_correctly_in_futex_proxy_trylock_atomic.patch
+++ b/patches/0057-futex-Validate-waiter-correctly-in-futex_proxy_trylo.patch
@@ -1,19 +1,21 @@
-Subject: futex: Validate waiter correctly in futex_proxy_trylock_atomic()
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:54 2021 +0200
+Date: Sun, 15 Aug 2021 23:29:04 +0200
+Subject: [PATCH 57/72] futex: Validate waiter correctly in
+ futex_proxy_trylock_atomic()
-From: Thomas Gleixner <tglx@linutronix.de>
-
-The loop in futex_requeue() has a sanity check for the waiter which is
+The loop in futex_requeue() has a sanity check for the waiter, which is
missing in futex_proxy_trylock_atomic(). In theory the key2 check is
sufficient, but futexes are cursed so add it for completeness and paranoia
sake.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.193767519@linutronix.de
---
kernel/futex.c | 7 +++++++
1 file changed, 7 insertions(+)
----
+
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1879,6 +1879,13 @@ futex_proxy_trylock_atomic(u32 __user *p
diff --git a/patches/futex__Cleanup_stale_comments.patch b/patches/0058-futex-Clean-up-stale-comments.patch
index 208995ccd6e4..ea205f2b2d4a 100644
--- a/patches/futex__Cleanup_stale_comments.patch
+++ b/patches/0058-futex-Clean-up-stale-comments.patch
@@ -1,19 +1,18 @@
-Subject: futex: Cleanup stale comments
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:54 2021 +0200
+Date: Sun, 15 Aug 2021 23:29:06 +0200
+Subject: [PATCH 58/72] futex: Clean up stale comments
-From: Thomas Gleixner <tglx@linutronix.de>
-
-The futex key reference mechanism is long gone. Cleanup the stale comments
+The futex key reference mechanism is long gone. Clean up the stale comments
which still mention it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V2: Cleanup more key ref comments - Andre
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.249178312@linutronix.de
---
kernel/futex.c | 18 +++++++-----------
1 file changed, 7 insertions(+), 11 deletions(-)
----
+
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1354,7 +1354,7 @@ static int lock_pi_update_atomic(u32 __u
diff --git a/patches/futex--Clarify-futex_requeue---PI-handling.patch b/patches/0059-futex-Clarify-futex_requeue-PI-handling.patch
index 85531ac41f0a..a9fc3d51d452 100644
--- a/patches/futex--Clarify-futex_requeue---PI-handling.patch
+++ b/patches/0059-futex-Clarify-futex_requeue-PI-handling.patch
@@ -1,14 +1,14 @@
-Subject: futex: Clarify futex_requeue() PI handling
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 09 Aug 2021 13:22:19 +0200
+Date: Sun, 15 Aug 2021 23:29:07 +0200
+Subject: [PATCH 59/72] futex: Clarify futex_requeue() PI handling
-When requeuing to a PI futex then the requeue code tries to trylock the PI
+When requeuing to a PI futex, then the requeue code tries to trylock the PI
futex on behalf of the topmost waiter on the inner 'waitqueue' futex. If
-that succeeds then PI state has to be allocated in order to requeue further
+that succeeds, then PI state has to be allocated in order to requeue further
waiters to the PI futex.
-The comment and the code are confusing as the PI state allocation uses
-lookup_pi_state() which either attaches to an existing waiter or to the
+The comment and the code are confusing, as the PI state allocation uses
+lookup_pi_state(), which either attaches to an existing waiter or to the
owner. As the PI futex was just acquired, there cannot be a waiter on the
PI futex because the hash bucket lock is held.
@@ -17,8 +17,9 @@ which behalf the PI futex has been acquired is guaranteed to be alive and
not exiting, this call must succeed. Add a WARN_ON() in case that fails.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V4: New patch
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.305142462@linutronix.de
---
kernel/futex.c | 61 +++++++++++++++++++++------------------------------------
1 file changed, 23 insertions(+), 38 deletions(-)
diff --git a/patches/futex--Remove-bogus-condition-for-requeue-PI.patch b/patches/0060-futex-Remove-bogus-condition-for-requeue-PI.patch
index f1ef52bb8523..158414a75e25 100644
--- a/patches/futex--Remove-bogus-condition-for-requeue-PI.patch
+++ b/patches/0060-futex-Remove-bogus-condition-for-requeue-PI.patch
@@ -1,11 +1,11 @@
-Subject: futex: Remove bogus condition for requeue PI
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 09 Aug 2021 14:47:51 +0200
+Date: Sun, 15 Aug 2021 23:29:09 +0200
+Subject: [PATCH 60/72] futex: Remove bogus condition for requeue PI
For requeue PI it's required to establish PI state for the PI futex to
which waiters are requeued. This either acquires the user space futex on
-behalf of the top most waiter on the inner 'waitqueue' futex or attaches to
-the PI state of an existing waiter or creates on attached to the owner of
+behalf of the top most waiter on the inner 'waitqueue' futex, or attaches to
+the PI state of an existing waiter, or creates on attached to the owner of
the futex.
This code can retry in case of failure, but retry can never happen when the
@@ -19,11 +19,12 @@ which is always true because:
nr_wake = 1
nr_requeue >= 0
-Remove it all together.
+Remove it completely.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V4: New patch
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.362730187@linutronix.de
---
kernel/futex.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/patches/futex__Correct_the_number_of_requeued_waiters_for_PI.patch b/patches/0061-futex-Correct-the-number-of-requeued-waiters-for-PI.patch
index 0cefc5a885bc..feedde19f5e2 100644
--- a/patches/futex__Correct_the_number_of_requeued_waiters_for_PI.patch
+++ b/patches/0061-futex-Correct-the-number-of-requeued-waiters-for-PI.patch
@@ -1,8 +1,6 @@
-Subject: futex: Correct the number of requeued waiters for PI
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:55 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:29:10 +0200
+Subject: [PATCH 61/72] futex: Correct the number of requeued waiters for PI
The accounting is wrong when either the PI sanity check or the
requeue PI operation fails. Adjust it in the failure path.
@@ -10,10 +8,13 @@ requeue PI operation fails. Adjust it in the failure path.
Will be simplified in the next step.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.416427548@linutronix.de
---
kernel/futex.c | 4 ++++
1 file changed, 4 insertions(+)
----
+
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -2116,6 +2116,8 @@ static int futex_requeue(u32 __user *uad
diff --git a/patches/futex__Restructure_futex_requeue.patch b/patches/0062-futex-Restructure-futex_requeue.patch
index 8ead237a3318..5129293f9ebf 100644
--- a/patches/futex__Restructure_futex_requeue.patch
+++ b/patches/0062-futex-Restructure-futex_requeue.patch
@@ -1,8 +1,6 @@
-Subject: futex: Restructure futex_requeue()
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:56 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:29:12 +0200
+Subject: [PATCH 62/72] futex: Restructure futex_requeue()
No point in taking two more 'requeue_pi' conditionals just to get to the
requeue. Same for the requeue_pi case just the other way round.
@@ -10,10 +8,13 @@ requeue. Same for the requeue_pi case just the other way round.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.468835790@linutronix.de
---
kernel/futex.c | 90 +++++++++++++++++++++++++--------------------------------
1 file changed, 41 insertions(+), 49 deletions(-)
----
+
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -2104,20 +2104,17 @@ static int futex_requeue(u32 __user *uad
diff --git a/patches/futex__Clarify_comment_in_futex_requeue.patch b/patches/0063-futex-Clarify-comment-in-futex_requeue.patch
index b08390b5d25a..6c7146172ecb 100644
--- a/patches/futex__Clarify_comment_in_futex_requeue.patch
+++ b/patches/0063-futex-Clarify-comment-in-futex_requeue.patch
@@ -1,17 +1,18 @@
-Subject: futex: Clarify comment in futex_requeue()
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:56 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:29:14 +0200
+Subject: [PATCH 63/72] futex: Clarify comment in futex_requeue()
The comment about the restriction of the number of waiters to wake for the
REQUEUE_PI case is confusing at best. Rewrite it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.524990421@linutronix.de
---
kernel/futex.c | 28 ++++++++++++++++++++--------
1 file changed, 20 insertions(+), 8 deletions(-)
----
+
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1939,15 +1939,27 @@ static int futex_requeue(u32 __user *uad
@@ -30,18 +31,18 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
- * use nr_wake=1.
+ * futex_requeue() allows the caller to define the number
+ * of waiters to wake up via the @nr_wake argument. With
-+ * REQUEUE_PI waking up more than one waiter is creating
++ * REQUEUE_PI, waking up more than one waiter is creating
+ * more problems than it solves. Waking up a waiter makes
+ * only sense if the PI futex @uaddr2 is uncontended as
+ * this allows the requeue code to acquire the futex
+ * @uaddr2 before waking the waiter. The waiter can then
+ * return to user space without further action. A secondary
+ * wakeup would just make the futex_wait_requeue_pi()
-+ * handling more complex because that code would have to
++ * handling more complex, because that code would have to
+ * look up pi_state and do more or less all the handling
+ * which the requeue code has to do for the to be requeued
+ * waiters. So restrict the number of waiters to wake to
-+ * one and only wake it up when the PI futex is
++ * one, and only wake it up when the PI futex is
+ * uncontended. Otherwise requeue it and let the unlock of
+ * the PI futex handle the wakeup.
+ *
diff --git a/patches/futex--Reorder-sanity-checks-in-futex_requeue--.patch b/patches/0064-futex-Reorder-sanity-checks-in-futex_requeue.patch
index a4a04c0fe94f..59e1a554652f 100644
--- a/patches/futex--Reorder-sanity-checks-in-futex_requeue--.patch
+++ b/patches/0064-futex-Reorder-sanity-checks-in-futex_requeue.patch
@@ -1,14 +1,15 @@
-Subject: futex: Reorder sanity checks in futex_requeue()
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 09 Aug 2021 12:55:19 +0200
+Date: Sun, 15 Aug 2021 23:29:15 +0200
+Subject: [PATCH 64/72] futex: Reorder sanity checks in futex_requeue()
-No point in allocating memory when the input parameters are bogus. Validate
-all parameters before proceeding.
+No point in allocating memory when the input parameters are bogus.
+Validate all parameters before proceeding.
Suggested-by: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V4: New patch
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.581789253@linutronix.de
---
kernel/futex.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
@@ -28,7 +29,7 @@ V4: New patch
- /*
* futex_requeue() allows the caller to define the number
* of waiters to wake up via the @nr_wake argument. With
- * REQUEUE_PI waking up more than one waiter is creating
+ * REQUEUE_PI, waking up more than one waiter is creating
@@ -1963,6 +1956,13 @@ static int futex_requeue(u32 __user *uad
*/
if (nr_wake != 1)
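The reordering boils down to doing the cheap argument validation before the allocation; roughly (illustrative sketch, simplified from the patch):

	if (requeue_pi) {
		/* Validate the inputs first ... */
		if (uaddr1 == uaddr2)
			return -EINVAL;
		if (nr_wake != 1)
			return -EINVAL;
		/* ... and only refill the pi_state cache when they are sane. */
		if (refill_pi_state_cache())
			return -ENOMEM;
	}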
diff --git a/patches/futex--Simplify-handle_early_requeue_pi_wakeup--.patch b/patches/0065-futex-Simplify-handle_early_requeue_pi_wakeup.patch
index e07582dbf3ee..a044685b042f 100644
--- a/patches/futex--Simplify-handle_early_requeue_pi_wakeup--.patch
+++ b/patches/0065-futex-Simplify-handle_early_requeue_pi_wakeup.patch
@@ -1,14 +1,15 @@
-Subject: futex: Simplify handle_early_requeue_pi_wakeup()
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue, 03 Aug 2021 23:16:08 +0200
+Date: Sun, 15 Aug 2021 23:29:17 +0200
+Subject: [PATCH 65/72] futex: Simplify handle_early_requeue_pi_wakeup()
Move the futex key match out of handle_early_requeue_pi_wakeup() which
allows to simplify that function. The upcoming state machine for
requeue_pi() will make that go away.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V3: New patch - Suggested by Peter
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.638938670@linutronix.de
---
kernel/futex.c | 50 +++++++++++++++++++++++---------------------------
1 file changed, 23 insertions(+), 27 deletions(-)
diff --git a/patches/futex__Prevent_requeue_pi_lock_nesting_issue_on_RT.patch b/patches/0066-futex-Prevent-requeue_pi-lock-nesting-issue-on-RT.patch
index c234e2741c9b..50a480706a96 100644
--- a/patches/futex__Prevent_requeue_pi_lock_nesting_issue_on_RT.patch
+++ b/patches/0066-futex-Prevent-requeue_pi-lock-nesting-issue-on-RT.patch
@@ -1,8 +1,6 @@
-Subject: futex: Prevent requeue_pi() lock nesting issue on RT
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:57 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:29:18 +0200
+Subject: [PATCH 66/72] futex: Prevent requeue_pi() lock nesting issue on RT
The requeue_pi() operation on RT kernels creates a problem versus the
task::pi_blocked_on state when a waiter is woken early (signal, timeout)
@@ -86,12 +84,13 @@ handled by rcuwait_wait_event() and the corresponding wake up on the
requeue side.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V3: Folded Peter's improvements
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.693317658@linutronix.de
---
kernel/futex.c | 308 +++++++++++++++++++++++++++++++++++++++++++++++----------
1 file changed, 259 insertions(+), 49 deletions(-)
----
+
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -197,6 +197,8 @@ struct futex_pi_state {
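The wait/wake handshake mentioned above uses the generic rcuwait primitives; an illustrative (hypothetical, heavily simplified) pattern of parking an early-woken waiter until the requeue side has finished:

	struct rcuwait wait;
	bool requeue_done;

	rcuwait_init(&wait);

	/* Waiter side: block until the requeue side signals completion. */
	rcuwait_wait_event(&wait, READ_ONCE(requeue_done), TASK_UNINTERRUPTIBLE);

	/* Requeue side: publish the state change, then wake the waiter. */
	WRITE_ONCE(requeue_done, true);
	rcuwait_wake_up(&wait);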
diff --git a/patches/rtmutex__Prevent_lockdep_false_positive_with_PI_futexes.patch b/patches/0067-locking-rtmutex-Prevent-lockdep-false-positive-with-.patch
index 1fcae8ca5bbc..1b53bfa2455e 100644
--- a/patches/rtmutex__Prevent_lockdep_false_positive_with_PI_futexes.patch
+++ b/patches/0067-locking-rtmutex-Prevent-lockdep-false-positive-with-.patch
@@ -1,8 +1,7 @@
-Subject: rtmutex: Prevent lockdep false positive with PI futexes
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:57 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:29:20 +0200
+Subject: [PATCH 67/72] locking/rtmutex: Prevent lockdep false positive with PI
+ futexes
On PREEMPT_RT the futex hashbucket spinlock becomes 'sleeping' and rtmutex
based. That causes a lockdep false positive because some of the futex
@@ -14,10 +13,13 @@ lock recursion.
Give the futex/rtmutex wait_lock a separate key.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.750701219@linutronix.de
---
kernel/locking/rtmutex_api.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
----
+
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -214,7 +214,19 @@ EXPORT_SYMBOL_GPL(__rt_mutex_init);
@@ -29,11 +31,11 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
__rt_mutex_base_init(lock);
+ /*
+ * On PREEMPT_RT the futex hashbucket spinlock becomes 'sleeping'
-+ * and rtmutex based. That causes a lockdep false positive because
++ * and rtmutex based. That causes a lockdep false positive, because
+ * some of the futex functions invoke spin_unlock(&hb->lock) with
+ * the wait_lock of the rtmutex associated to the pi_futex held.
+ * spin_unlock() in turn takes wait_lock of the rtmutex on which
-+ * the spinlock is based which makes lockdep notice a lock
++ * the spinlock is based, which makes lockdep notice a lock
+ * recursion. Give the futex/rtmutex wait_lock a separate key.
+ */
+ lockdep_set_class(&lock->wait_lock, &pi_futex_key);
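Giving one particular lock instance its own lockdep class is the standard way to tell lockdep that such a nesting is intentional; generic pattern (illustrative only, names are made up):

	static struct lock_class_key pi_futex_key;

	static void init_pi_futex_wait_lock(raw_spinlock_t *lock)
	{
		raw_spin_lock_init(lock);
		/*
		 * Distinct class: hb->lock -> wait_lock is no longer reported
		 * as recursion on a single lock class.
		 */
		lockdep_set_class(lock, &pi_futex_key);
	}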
diff --git a/patches/preempt__Adjust_PREEMPT_LOCK_OFFSET_for_RT.patch b/patches/0068-preempt-Adjust-PREEMPT_LOCK_OFFSET-for-RT.patch
index a28e803bfd51..d59f8a6a45a3 100644
--- a/patches/preempt__Adjust_PREEMPT_LOCK_OFFSET_for_RT.patch
+++ b/patches/0068-preempt-Adjust-PREEMPT_LOCK_OFFSET-for-RT.patch
@@ -1,8 +1,6 @@
-Subject: preempt: Adjust PREEMPT_LOCK_OFFSET for RT
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue Jul 6 16:36:57 2021 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:29:22 +0200
+Subject: [PATCH 68/72] preempt: Adjust PREEMPT_LOCK_OFFSET for RT
On PREEMPT_RT regular spinlocks and rwlocks are substituted with rtmutex
based constructs. spin/rwlock held regions are preemptible on PREEMPT_RT,
@@ -10,10 +8,13 @@ so PREEMPT_LOCK_OFFSET has to be 0 to make the various cond_resched_*lock()
functions work correctly.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.804246275@linutronix.de
---
include/linux/preempt.h | 4 ++++
1 file changed, 4 insertions(+)
----
+
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -121,7 +121,11 @@
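The change is essentially one conditional define; a sketch of the idea (the exact non-RT value is whatever preempt.h already uses there):

	/*
	 * Spin/rwlock held regions stay preemptible on PREEMPT_RT, so the
	 * cond_resched_*lock() family must not expect an elevated count.
	 */
	#ifdef CONFIG_PREEMPT_RT
	# define PREEMPT_LOCK_OFFSET	0
	#else
	# define PREEMPT_LOCK_OFFSET	PREEMPT_DISABLE_OFFSET
	#endif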
diff --git a/patches/locking_rtmutex__Implement_equal_priority_lock_stealing.patch b/patches/0069-locking-rtmutex-Implement-equal-priority-lock-steali.patch
index 2deb2cbf6d18..b9b81160b829 100644
--- a/patches/locking_rtmutex__Implement_equal_priority_lock_stealing.patch
+++ b/patches/0069-locking-rtmutex-Implement-equal-priority-lock-steali.patch
@@ -1,8 +1,6 @@
-Subject: locking/rtmutex: Implement equal priority lock stealing
-From: Gregory Haskins <ghaskins@novell.com>
-Date: Tue Jul 6 16:36:57 2021 +0200
-
From: Gregory Haskins <ghaskins@novell.com>
+Date: Sun, 15 Aug 2021 23:29:23 +0200
+Subject: [PATCH 69/72] locking/rtmutex: Implement equal priority lock stealing
The current logic only allows lock stealing to occur if the current task is
of higher priority than the pending owner.
@@ -24,10 +22,13 @@ RT kernel.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.857240222@linutronix.de
---
kernel/locking/rtmutex.c | 52 +++++++++++++++++++++++++++++++----------------
1 file changed, 35 insertions(+), 17 deletions(-)
----
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -338,6 +338,26 @@ static __always_inline int rt_mutex_wait
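In code, the stealing rule reduces to a small helper; a rough sketch of the logic described above (simplified, not the verbatim patch; the real code additionally limits lateral stealing to the spinlock-based variants):

	static bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
				   struct rt_mutex_waiter *top_waiter)
	{
		/* A strictly higher priority waiter may always steal. */
		if (rt_mutex_waiter_less(waiter, top_waiter))
			return true;

		/*
		 * Equal priority (lateral) stealing is only allowed for
		 * non-realtime tasks, so the deterministic classes do not
		 * see unbounded latencies.
		 */
		if (rt_prio(waiter->prio) || dl_prio(waiter->prio))
			return false;

		return rt_mutex_waiter_equal(waiter, top_waiter);
	}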
diff --git a/patches/locking_rtmutex__Add_adaptive_spinwait_mechanism.patch b/patches/0070-locking-rtmutex-Add-adaptive-spinwait-mechanism.patch
index fed32b51eb6d..9abfaa66bc97 100644
--- a/patches/locking_rtmutex__Add_adaptive_spinwait_mechanism.patch
+++ b/patches/0070-locking-rtmutex-Add-adaptive-spinwait-mechanism.patch
@@ -1,8 +1,6 @@
-Subject: locking/rtmutex: Add adaptive spinwait mechanism
-From: Steven Rostedt <rostedt@goodmis.org>
-Date: Tue Jul 6 16:36:57 2021 +0200
-
From: Steven Rostedt <rostedt@goodmis.org>
+Date: Sun, 15 Aug 2021 23:29:25 +0200
+Subject: [PATCH 70/72] locking/rtmutex: Add adaptive spinwait mechanism
Going to sleep when locks are contended can be quite inefficient when the
contention time is short and the lock owner is running on a different CPU.
@@ -19,15 +17,13 @@ spinning to the top priority waiter.
Originally-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V4: Rename to rtmutex_spin_on_owner() (PeterZ)
- Check for top waiter changes and rewrite comment (Davidlohr)
-V3: Fold the extension for regular sleeping locks and add the missing spin
- wait checks (PeterZ)
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.912050691@linutronix.de
---
kernel/locking/rtmutex.c | 67 +++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 65 insertions(+), 2 deletions(-)
----
+
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -8,6 +8,11 @@
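The adaptive part is a spin loop that watches the lock owner; a condensed sketch of the shape of such a loop (illustrative, not the verbatim patch code):

	rcu_read_lock();
	for (;;) {
		/* Owner changed: stop spinning and retry the trylock. */
		if (owner != rt_mutex_owner(lock))
			break;
		/*
		 * Block instead of spinning when the owner is not running,
		 * we should reschedule, or we are no longer the top waiter
		 * (hand the spinning over to the new top waiter).
		 */
		if (!READ_ONCE(owner->on_cpu) || need_resched() ||
		    waiter != rt_mutex_top_waiter(lock)) {
			keep_spinning = false;
			break;
		}
		cpu_relax();
	}
	rcu_read_unlock();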
diff --git a/patches/locking-spinlock-rt--Prepare-for-RT-local_lock.patch b/patches/0071-locking-spinlock-rt-Prepare-for-RT-local_lock.patch
index 55a83438a93c..b39d48c86465 100644
--- a/patches/locking-spinlock-rt--Prepare-for-RT-local_lock.patch
+++ b/patches/0071-locking-spinlock-rt-Prepare-for-RT-local_lock.patch
@@ -1,14 +1,15 @@
-Subject: locking/spinlock/rt: Prepare for RT local_lock
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Fri, 13 Aug 2021 17:00:22 +0200
+Date: Sun, 15 Aug 2021 23:29:27 +0200
+Subject: [PATCH 71/72] locking/spinlock/rt: Prepare for RT local_lock
Add the static and runtime initializer mechanics to support the RT variant
of local_lock, which requires the lock type in the lockdep map to be set
to LD_LOCK_PERCPU.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
-V5: New patch
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211305.967526724@linutronix.de
---
include/linux/spinlock_rt.h | 24 ++++++++++++++++--------
include/linux/spinlock_types.h | 6 ++++++
diff --git a/patches/locking-local_lock--Add-PREEMPT_RT-support.patch b/patches/0072-locking-local_lock-Add-PREEMPT_RT-support.patch
index 6e4bf800a988..6110585267f1 100644
--- a/patches/locking-local_lock--Add-PREEMPT_RT-support.patch
+++ b/patches/0072-locking-local_lock-Add-PREEMPT_RT-support.patch
@@ -1,8 +1,6 @@
-Subject: locking/local_lock: Add PREEMPT_RT support
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Fri, 13 Aug 2021 10:35:01 +0200
-
From: Thomas Gleixner <tglx@linutronix.de>
+Date: Sun, 15 Aug 2021 23:29:28 +0200
+Subject: [PATCH 72/72] locking/local_lock: Add PREEMPT_RT support
On PREEMPT_RT enabled kernels local_lock maps to a per CPU 'sleeping'
spinlock which protects the critical section while staying preemptible. CPU
@@ -10,12 +8,16 @@ locality is established by disabling migration.
Provide the necessary types and macros to substitute the non-RT variant.
+Co-developed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Link: https://lore.kernel.org/r/20210815211306.023630962@linutronix.de
---
-V5: New patch
----
- include/linux/local_lock_internal.h | 48 ++++++++++++++++++++++++++++++++++++
- 1 file changed, 48 insertions(+)
+ include/linux/local_lock_internal.h | 44 ++++++++++++++++++++++++++++++++++++
+ 1 file changed, 44 insertions(+)
+
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -6,6 +6,8 @@
@@ -27,7 +29,7 @@ V5: New patch
typedef struct {
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lockdep_map dep_map;
-@@ -95,3 +97,49 @@ do { \
+@@ -95,3 +97,45 @@ do { \
local_lock_release(this_cpu_ptr(lock)); \
local_irq_restore(flags); \
} while (0)
@@ -35,26 +37,22 @@ V5: New patch
+#else /* !CONFIG_PREEMPT_RT */
+
+/*
-+ * On PREEMPT_RT local_lock maps to a per CPU spinlock which protects the
++ * On PREEMPT_RT local_lock maps to a per CPU spinlock, which protects the
+ * critical section while staying preemptible.
+ */
-+typedef struct {
-+ spinlock_t lock;
-+} local_lock_t;
++typedef spinlock_t local_lock_t;
+
-+#define INIT_LOCAL_LOCK(lockname) { \
-+ __LOCAL_SPIN_LOCK_UNLOCKED((lockname).lock) \
-+ }
++#define INIT_LOCAL_LOCK(lockname) __LOCAL_SPIN_LOCK_UNLOCKED((lockname))
+
+#define __local_lock_init(l) \
+ do { \
-+ local_spin_lock_init(&(l)->lock); \
++ local_spin_lock_init((l)); \
+ } while (0)
+
+#define __local_lock(__lock) \
+ do { \
+ migrate_disable(); \
-+ spin_lock(&(this_cpu_ptr((__lock)))->lock); \
++ spin_lock(this_cpu_ptr((__lock))); \
+ } while (0)
+
+#define __local_lock_irq(lock) __local_lock(lock)
@@ -68,7 +66,7 @@ V5: New patch
+
+#define __local_unlock(__lock) \
+ do { \
-+ spin_unlock(&(this_cpu_ptr((__lock)))->lock); \
++ spin_unlock(this_cpu_ptr((__lock))); \
+ migrate_enable(); \
+ } while (0)
+
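For lock users nothing changes; only the mapping behind the macros differs between the two configurations. A small usage sketch (hypothetical per-CPU data, illustrative only):

	struct my_pcpu {
		local_lock_t	lock;
		unsigned int	count;
	};
	static DEFINE_PER_CPU(struct my_pcpu, my_pcpu) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	static void bump_count(void)
	{
		/* !RT: preempt_disable();  RT: migrate_disable() + spin_lock() */
		local_lock(&my_pcpu.lock);
		__this_cpu_inc(my_pcpu.count);
		/* !RT: preempt_enable();  RT: spin_unlock() + migrate_enable() */
		local_unlock(&my_pcpu.lock);
	}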
diff --git a/patches/Add_localversion_for_-RT_release.patch b/patches/Add_localversion_for_-RT_release.patch
index 6b1364508a7c..34da917f8c9e 100644
--- a/patches/Add_localversion_for_-RT_release.patch
+++ b/patches/Add_localversion_for_-RT_release.patch
@@ -15,4 +15,4 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- /dev/null
+++ b/localversion-rt
@@ -0,0 +1 @@
-+-rt10
++-rt11
diff --git a/patches/arm64_mm_Make_arch_faults_on_old_pte_check_for_migratability.patch b/patches/arm64_mm_Make_arch_faults_on_old_pte_check_for_migratability.patch
deleted file mode 100644
index 18b78e9b4752..000000000000
--- a/patches/arm64_mm_Make_arch_faults_on_old_pte_check_for_migratability.patch
+++ /dev/null
@@ -1,78 +0,0 @@
-From: Valentin Schneider <valentin.schneider@arm.com>
-Subject: arm64: mm: Make arch_faults_on_old_pte() check for migratability
-Date: Wed, 21 Jul 2021 12:51:18 +0100
-
-From: Valentin Schneider <valentin.schneider@arm.com>
-
-Running v5.13-rt1 on my arm64 Juno board triggers:
-
-[ 30.430643] WARNING: CPU: 4 PID: 1 at arch/arm64/include/asm/pgtable.h:985 do_set_pte (./arch/arm64/include/asm/pgtable.h:985 ./arch/arm64/include/asm/pgtable.h:997 mm/memory.c:3830)
-[ 30.430669] Modules linked in:
-[ 30.430679] CPU: 4 PID: 1 Comm: init Tainted: G W 5.13.0-rt1-00002-gcb994ad7c570 #35
-[ 30.430690] Hardware name: ARM Juno development board (r0) (DT)
-[ 30.430695] pstate: 80000005 (Nzcv daif -PAN -UAO -TCO BTYPE=--)
-[ 30.430705] pc : do_set_pte (./arch/arm64/include/asm/pgtable.h:985 ./arch/arm64/include/asm/pgtable.h:997 mm/memory.c:3830)
-[ 30.430713] lr : filemap_map_pages (mm/filemap.c:3222)
-[ 30.430725] sp : ffff800012f4bb90
-[ 30.430729] x29: ffff800012f4bb90 x28: fffffc0025d81900 x27: 0000000000000100
-[ 30.430745] x26: fffffc0025d81900 x25: ffff000803460000 x24: ffff000801bbf428
-[ 30.430760] x23: ffff00080317d900 x22: 0000ffffb4c3e000 x21: fffffc0025d81900
-[ 30.430775] x20: ffff800012f4bd10 x19: 00200009f6064fc3 x18: 000000000000ca01
-[ 30.430790] x17: 0000000000000000 x16: 000000000000ca06 x15: ffff80001240e128
-[ 30.430804] x14: ffff8000124b0128 x13: 000000000000000a x12: ffff80001205e5f0
-[ 30.430819] x11: 0000000000000000 x10: ffff800011a37d28 x9 : 00000000000000c8
-[ 30.430833] x8 : ffff000800160000 x7 : 0000000000000002 x6 : 0000000000000000
-[ 30.430847] x5 : 0000000000000000 x4 : 0000ffffb4c2f000 x3 : 0020000000000fc3
-[ 30.430861] x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000000000000
-[ 30.430874] Call trace:
-[ 30.430878] do_set_pte (./arch/arm64/include/asm/pgtable.h:985 ./arch/arm64/include/asm/pgtable.h:997 mm/memory.c:3830)
-[ 30.430886] filemap_map_pages (mm/filemap.c:3222)
-[ 30.430895] __handle_mm_fault (mm/memory.c:4006 mm/memory.c:4020 mm/memory.c:4153 mm/memory.c:4412 mm/memory.c:4547)
-[ 30.430904] handle_mm_fault (mm/memory.c:4645)
-[ 30.430912] do_page_fault (arch/arm64/mm/fault.c:507 arch/arm64/mm/fault.c:607)
-[ 30.430925] do_translation_fault (arch/arm64/mm/fault.c:692)
-[ 30.430936] do_mem_abort (arch/arm64/mm/fault.c:821)
-[ 30.430946] el0_ia (arch/arm64/kernel/entry-common.c:324)
-[ 30.430959] el0_sync_handler (arch/arm64/kernel/entry-common.c:431)
-[ 30.430967] el0_sync (arch/arm64/kernel/entry.S:744)
-[ 30.430977] irq event stamp: 1228384
-[ 30.430981] hardirqs last enabled at (1228383): lock_page_memcg (mm/memcontrol.c:2005 (discriminator 1))
-[ 30.430993] hardirqs last disabled at (1228384): el1_dbg (arch/arm64/kernel/entry-common.c:144 arch/arm64/kernel/entry-common.c:234)
-[ 30.431007] softirqs last enabled at (1228260): __local_bh_enable_ip (./arch/arm64/include/asm/irqflags.h:85 kernel/softirq.c:262)
-[ 30.431022] softirqs last disabled at (1228232): fpsimd_restore_current_state (./include/linux/bottom_half.h:19 arch/arm64/kernel/fpsimd.c:183 arch/arm64/kernel/fpsimd.c:1182)
-
-CONFIG_PREEMPT_RT turns the PTE lock into a sleepable spinlock. Since
-acquiring such a lock also disables migration, any per-CPU access done
-under the lock remains safe even if preemptible.
-
-This affects:
-
- filemap_map_pages()
- `\
- do_set_pte()
- `\
- arch_wants_old_prefaulted_pte()
-
-which checks preemptible() to figure out if the output of
-cpu_has_hw_af() (IOW the underlying CPU) will remain stable for the
-subsequent operations. Make it use is_pcpu_safe() instead.
-
-Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Link: https://lore.kernel.org/r/20210721115118.729943-4-valentin.schneider@arm.com
-
----
- arch/arm64/include/asm/pgtable.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
---- a/arch/arm64/include/asm/pgtable.h
-+++ b/arch/arm64/include/asm/pgtable.h
-@@ -995,7 +995,7 @@ static inline void update_mmu_cache(stru
- */
- static inline bool arch_faults_on_old_pte(void)
- {
-- WARN_ON(preemptible());
-+ WARN_ON(!is_pcpu_safe());
-
- return !cpu_has_hw_af();
- }
diff --git a/patches/arm64_mm_make_arch_faults_on_old_pte_check_for_migratability.patch b/patches/arm64_mm_make_arch_faults_on_old_pte_check_for_migratability.patch
new file mode 100644
index 000000000000..0c882e494c7e
--- /dev/null
+++ b/patches/arm64_mm_make_arch_faults_on_old_pte_check_for_migratability.patch
@@ -0,0 +1,33 @@
+From: Valentin Schneider <valentin.schneider@arm.com>
+Subject: arm64: mm: Make arch_faults_on_old_pte() check for migratability
+Date: Wed, 11 Aug 2021 21:13:54 +0100
+
+arch_faults_on_old_pte() relies on the calling context being
+non-preemptible. CONFIG_PREEMPT_RT turns the PTE lock into a sleepable
+spinlock, which doesn't disable preemption once acquired, triggering the
+warning in arch_faults_on_old_pte().
+
+It does however disable migration, ensuring the task remains on the same
+CPU during the entirety of the critical section, making the read of
+cpu_has_hw_af() safe and stable.
+
+Make arch_faults_on_old_pte() check migratable() instead of preemptible().
+
+Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Link: https://lore.kernel.org/r/20210811201354.1976839-5-valentin.schneider@arm.com
+---
+ arch/arm64/include/asm/pgtable.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -995,7 +995,7 @@ static inline void update_mmu_cache(stru
+ */
+ static inline bool arch_faults_on_old_pte(void)
+ {
+- WARN_ON(preemptible());
++ WARN_ON(is_migratable());
+
+ return !cpu_has_hw_af();
+ }
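The is_migratable() helper comes from the sched_introduce_migratable.patch further down in the queue; a rough sketch of what such a helper can look like, assuming the description given there (not the verbatim patch):

	/* Can the current task be migrated to another CPU in this context? */
	static inline bool is_migratable(void)
	{
	#ifdef CONFIG_SMP
		return preemptible() && !current->migration_disabled;
	#else
		/* On UP there is nowhere to migrate to. */
		return false;
	#endif
	}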
diff --git a/patches/locking-Allow-to-include-asm-spinlock_types.h-from-l.patch b/patches/locking-Allow-to-include-asm-spinlock_types.h-from-l.patch
new file mode 100644
index 000000000000..b9f9b86a4c3a
--- /dev/null
+++ b/patches/locking-Allow-to-include-asm-spinlock_types.h-from-l.patch
@@ -0,0 +1,265 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Tue, 17 Aug 2021 09:48:31 +0200
+Subject: [PATCH] locking: Allow to include asm/spinlock_types.h from
+ linux/spinlock_types_raw.h
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+The printk header file includes ratelimit_types.h for its __ratelimit()
+based usage and requires it for the static initializer used in
+printk_ratelimited(). ratelimit_types.h uses a raw_spinlock_t and includes
+spinlock_types.h. This makes no difference on non-PREEMPT_RT builds, but
+PREEMPT_RT replaces the inner part of some locks and therefore includes
+rtmutex.h and atomic.h, which leads to recursive includes where defines
+are missing.
+By including only the raw_spinlock_t defines it avoids the atomic.h
+related includes at this stage.
+
+An example on powerpc:
+
+| CALL scripts/atomic/check-atomics.sh
+|In file included from include/linux/bug.h:5,
+| from include/linux/page-flags.h:10,
+| from kernel/bounds.c:10:
+|arch/powerpc/include/asm/page_32.h: In function ‘clear_page’:
+|arch/powerpc/include/asm/bug.h:87:4: error: implicit declaration of function ‘__WARN’ [-Werror=implicit-function-declaration]
+| 87 | __WARN(); \
+| | ^~~~~~
+|arch/powerpc/include/asm/page_32.h:48:2: note: in expansion of macro ‘WARN_ON’
+| 48 | WARN_ON((unsigned long)addr & (L1_CACHE_BYTES - 1));
+| | ^~~~~~~
+|arch/powerpc/include/asm/bug.h:58:17: error: invalid application of ‘sizeof’ to incomplete type ‘struct bug_entry’
+| 58 | "i" (sizeof(struct bug_entry)), \
+| | ^~~~~~
+|arch/powerpc/include/asm/bug.h:89:3: note: in expansion of macro ‘BUG_ENTRY’
+| 89 | BUG_ENTRY(PPC_TLNEI " %4, 0", \
+| | ^~~~~~~~~
+|arch/powerpc/include/asm/page_32.h:48:2: note: in expansion of macro ‘WARN_ON’
+| 48 | WARN_ON((unsigned long)addr & (L1_CACHE_BYTES - 1));
+| | ^~~~~~~
+|In file included from arch/powerpc/include/asm/ptrace.h:298,
+| from arch/powerpc/include/asm/hw_irq.h:12,
+| from arch/powerpc/include/asm/irqflags.h:12,
+| from include/linux/irqflags.h:16,
+| from include/asm-generic/cmpxchg-local.h:6,
+| from arch/powerpc/include/asm/cmpxchg.h:526,
+| from arch/powerpc/include/asm/atomic.h:11,
+| from include/linux/atomic.h:7,
+| from include/linux/rwbase_rt.h:6,
+| from include/linux/rwlock_types.h:55,
+| from include/linux/spinlock_types.h:74,
+| from include/linux/ratelimit_types.h:7,
+| from include/linux/printk.h:10,
+| from include/asm-generic/bug.h:22,
+| from arch/powerpc/include/asm/bug.h:109,
+| from include/linux/bug.h:5,
+| from include/linux/page-flags.h:10,
+| from kernel/bounds.c:10:
+|include/linux/thread_info.h: In function ‘copy_overflow’:
+|include/linux/thread_info.h:210:2: error: implicit declaration of function ‘WARN’ [-Werror=implicit-function-declaration]
+| 210 | WARN(1, "Buffer overflow detected (%d < %lu)!\n", size, count);
+| | ^~~~
+
+The WARN / BUG include pulls in printk.h and then ptrace.h expects WARN
+(from bug.h) which is not yet complete. Even hw_irq.h has WARN_ON()
+statements.
+
+On POWERPC64 there are missing atomic64 defines while building 32bit
+VDSO:
+| VDSO32C arch/powerpc/kernel/vdso32/vgettimeofday.o
+|In file included from include/linux/atomic.h:80,
+| from include/linux/rwbase_rt.h:6,
+| from include/linux/rwlock_types.h:55,
+| from include/linux/spinlock_types.h:74,
+| from include/linux/ratelimit_types.h:7,
+| from include/linux/printk.h:10,
+| from include/linux/kernel.h:19,
+| from arch/powerpc/include/asm/page.h:11,
+| from arch/powerpc/include/asm/vdso/gettimeofday.h:5,
+| from include/vdso/datapage.h:137,
+| from lib/vdso/gettimeofday.c:5,
+| from <command-line>:
+|include/linux/atomic-arch-fallback.h: In function ‘arch_atomic64_inc’:
+|include/linux/atomic-arch-fallback.h:1447:2: error: implicit declaration of function ‘arch_atomic64_add’; did you mean ‘arch_atomic_add’? [-Werror=impl
+|icit-function-declaration]
+| 1447 | arch_atomic64_add(1, v);
+| | ^~~~~~~~~~~~~~~~~
+| | arch_atomic_add
+
+The generic fallback is not included; the atomics themselves are not used. If
+kernel.h does not include printk.h then it comes later from the bug.h
+include.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/alpha/include/asm/spinlock_types.h | 2 +-
+ arch/arm/include/asm/spinlock_types.h | 2 +-
+ arch/arm64/include/asm/spinlock_types.h | 2 +-
+ arch/csky/include/asm/spinlock_types.h | 2 +-
+ arch/hexagon/include/asm/spinlock_types.h | 2 +-
+ arch/ia64/include/asm/spinlock_types.h | 2 +-
+ arch/powerpc/include/asm/simple_spinlock_types.h | 2 +-
+ arch/powerpc/include/asm/spinlock_types.h | 2 +-
+ arch/riscv/include/asm/spinlock_types.h | 2 +-
+ arch/s390/include/asm/spinlock_types.h | 2 +-
+ arch/sh/include/asm/spinlock_types.h | 2 +-
+ arch/xtensa/include/asm/spinlock_types.h | 2 +-
+ include/linux/ratelimit_types.h | 2 +-
+ include/linux/spinlock_types_up.h | 2 +-
+ 14 files changed, 14 insertions(+), 14 deletions(-)
+
+--- a/arch/alpha/include/asm/spinlock_types.h
++++ b/arch/alpha/include/asm/spinlock_types.h
+@@ -2,7 +2,7 @@
+ #ifndef _ALPHA_SPINLOCK_TYPES_H
+ #define _ALPHA_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/arm/include/asm/spinlock_types.h
++++ b/arch/arm/include/asm/spinlock_types.h
+@@ -2,7 +2,7 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/arm64/include/asm/spinlock_types.h
++++ b/arch/arm64/include/asm/spinlock_types.h
+@@ -5,7 +5,7 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#if !defined(__LINUX_SPINLOCK_TYPES_H) && !defined(__ASM_SPINLOCK_H)
++#if !defined(__LINUX_SPINLOCK_TYPES_RAW_H) && !defined(__ASM_SPINLOCK_H)
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/csky/include/asm/spinlock_types.h
++++ b/arch/csky/include/asm/spinlock_types.h
+@@ -3,7 +3,7 @@
+ #ifndef __ASM_CSKY_SPINLOCK_TYPES_H
+ #define __ASM_CSKY_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/hexagon/include/asm/spinlock_types.h
++++ b/arch/hexagon/include/asm/spinlock_types.h
+@@ -8,7 +8,7 @@
+ #ifndef _ASM_SPINLOCK_TYPES_H
+ #define _ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/ia64/include/asm/spinlock_types.h
++++ b/arch/ia64/include/asm/spinlock_types.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_IA64_SPINLOCK_TYPES_H
+ #define _ASM_IA64_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/powerpc/include/asm/simple_spinlock_types.h
++++ b/arch/powerpc/include/asm/simple_spinlock_types.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H
+ #define _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/powerpc/include/asm/spinlock_types.h
++++ b/arch/powerpc/include/asm/spinlock_types.h
+@@ -2,7 +2,7 @@
+ #ifndef _ASM_POWERPC_SPINLOCK_TYPES_H
+ #define _ASM_POWERPC_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/riscv/include/asm/spinlock_types.h
++++ b/arch/riscv/include/asm/spinlock_types.h
+@@ -6,7 +6,7 @@
+ #ifndef _ASM_RISCV_SPINLOCK_TYPES_H
+ #define _ASM_RISCV_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/s390/include/asm/spinlock_types.h
++++ b/arch/s390/include/asm/spinlock_types.h
+@@ -2,7 +2,7 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/sh/include/asm/spinlock_types.h
++++ b/arch/sh/include/asm/spinlock_types.h
+@@ -2,7 +2,7 @@
+ #ifndef __ASM_SH_SPINLOCK_TYPES_H
+ #define __ASM_SH_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
+--- a/arch/xtensa/include/asm/spinlock_types.h
++++ b/arch/xtensa/include/asm/spinlock_types.h
+@@ -2,7 +2,7 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#if !defined(__LINUX_SPINLOCK_TYPES_H) && !defined(__ASM_SPINLOCK_H)
++#if !defined(__LINUX_SPINLOCK_TYPES_RAW_H) && !defined(__ASM_SPINLOCK_H)
+ # error "please don't include this file directly"
+ #endif
+
+--- a/include/linux/ratelimit_types.h
++++ b/include/linux/ratelimit_types.h
+@@ -4,7 +4,7 @@
+
+ #include <linux/bits.h>
+ #include <linux/param.h>
+-#include <linux/spinlock_types.h>
++#include <linux/spinlock_types_raw.h>
+
+ #define DEFAULT_RATELIMIT_INTERVAL (5 * HZ)
+ #define DEFAULT_RATELIMIT_BURST 10
+--- a/include/linux/spinlock_types_up.h
++++ b/include/linux/spinlock_types_up.h
+@@ -1,7 +1,7 @@
+ #ifndef __LINUX_SPINLOCK_TYPES_UP_H
+ #define __LINUX_SPINLOCK_TYPES_UP_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+ # error "please don't include this file directly"
+ #endif
+
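The key observation is that ratelimit_types.h only needs the raw_spinlock_t type for its static initializer, so the raw-only header suffices; roughly (sketch of the relevant part of include/linux/ratelimit_types.h, simplified):

	#include <linux/spinlock_types_raw.h>	/* raw_spinlock_t only, no RT machinery */

	struct ratelimit_state {
		raw_spinlock_t	lock;		/* protect the state */
		int		interval;
		int		burst;
		/* ... */
	};

	#define RATELIMIT_STATE_INIT(name, interval_init, burst_init) {	\
		.lock		= __RAW_SPIN_LOCK_UNLOCKED(name.lock),		\
		.interval	= interval_init,				\
		.burst		= burst_init,					\
	}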
diff --git a/patches/locking_rtmutex__Include_only_rbtree_types.patch b/patches/locking_rtmutex__Include_only_rbtree_types.patch
deleted file mode 100644
index a7dd4755c6c7..000000000000
--- a/patches/locking_rtmutex__Include_only_rbtree_types.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-Subject: locking/rtmutex: Include only rbtree types
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Tue Jul 6 16:36:48 2021 +0200
-
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
-rtmutex.h needs the definition of struct rb_root_cached. rbtree.h includes
-kernel.h which includes spinlock.h. That works nicely for non-RT enabled
-kernels, but on RT enabled kernels spinlocks are based on rtmutexes which
-creates another circular header dependency as spinlocks.h will require
-rtmutex.h.
-
-Include rbtree_types.h instead.
-
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
----
- include/linux/rtmutex.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
----
---- a/include/linux/rtmutex.h
-+++ b/include/linux/rtmutex.h
-@@ -15,7 +15,7 @@
-
- #include <linux/compiler.h>
- #include <linux/linkage.h>
--#include <linux/rbtree.h>
-+#include <linux/rbtree_types.h>
- #include <linux/spinlock_types_raw.h>
-
- extern int max_lock_depth; /* for sysctl */
diff --git a/patches/notifier__Make_atomic_notifiers_use_raw_spinlock.patch b/patches/notifier__Make_atomic_notifiers_use_raw_spinlock.patch
deleted file mode 100644
index 1bd68e8e8f35..000000000000
--- a/patches/notifier__Make_atomic_notifiers_use_raw_spinlock.patch
+++ /dev/null
@@ -1,118 +0,0 @@
-From: Valentin Schneider <valentin.schneider@arm.com>
-Subject: notifier: Make atomic_notifiers use raw_spinlock
-Date: Sun, 22 Nov 2020 20:19:04 +0000
-
-Booting a recent PREEMPT_RT kernel (v5.10-rc3-rt7-rebase) on my arm64 Juno
-leads to the idle task blocking on an RT sleeping spinlock down some
-notifier path:
-
-| BUG: scheduling while atomic: swapper/5/0/0x00000002
-…
-| atomic_notifier_call_chain_robust (kernel/notifier.c:71 kernel/notifier.c:118 kernel/notifier.c:186)
-| cpu_pm_enter (kernel/cpu_pm.c:39 kernel/cpu_pm.c:93)
-| psci_enter_idle_state (drivers/cpuidle/cpuidle-psci.c:52 drivers/cpuidle/cpuidle-psci.c:129)
-| cpuidle_enter_state (drivers/cpuidle/cpuidle.c:238)
-| cpuidle_enter (drivers/cpuidle/cpuidle.c:353)
-| do_idle (kernel/sched/idle.c:132 kernel/sched/idle.c:213 kernel/sched/idle.c:273)
-| cpu_startup_entry (kernel/sched/idle.c:368 (discriminator 1))
-| secondary_start_kernel (arch/arm64/kernel/smp.c:273)
-
-Two points worth noting:
-
-1) That this is conceptually the same issue as pointed out in:
- 313c8c16ee62 ("PM / CPU: replace raw_notifier with atomic_notifier")
-2) Only the _robust() variant of atomic_notifier callchains suffer from
- this
-
-AFAICT only the cpu_pm_notifier_chain really needs to be changed, but
-singling it out would mean introducing a new (truly) non-blocking API. At
-the same time, callers that are fine with any blocking within the call
-chain should use blocking notifiers, so patching up all atomic_notifier's
-doesn't seem *too* crazy to me.
-
-Fixes: 70d932985757 ("notifier: Fix broken error handling pattern")
-Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
-Link: https://lkml.kernel.org/r/20201122201904.30940-1-valentin.schneider@arm.com
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Link: https://lore.kernel.org/r/20210806140718.mxss3cbqijfebdo5@linutronix.de
----
-
-What do we do with this?
-Do we merge this as-is, add another "robust atomic notifier" using only
-raw_spinlock_t for registration and notification (for only
-cpu_pm_notifier_chain) instead of switching to raw_spinlock_t for all
-atomic notifier in -tree?
-
- include/linux/notifier.h | 6 +++---
- kernel/notifier.c | 12 ++++++------
- 2 files changed, 9 insertions(+), 9 deletions(-)
-
---- a/include/linux/notifier.h
-+++ b/include/linux/notifier.h
-@@ -58,7 +58,7 @@ struct notifier_block {
- };
-
- struct atomic_notifier_head {
-- spinlock_t lock;
-+ raw_spinlock_t lock;
- struct notifier_block __rcu *head;
- };
-
-@@ -78,7 +78,7 @@ struct srcu_notifier_head {
- };
-
- #define ATOMIC_INIT_NOTIFIER_HEAD(name) do { \
-- spin_lock_init(&(name)->lock); \
-+ raw_spin_lock_init(&(name)->lock); \
- (name)->head = NULL; \
- } while (0)
- #define BLOCKING_INIT_NOTIFIER_HEAD(name) do { \
-@@ -95,7 +95,7 @@ extern void srcu_init_notifier_head(stru
- cleanup_srcu_struct(&(name)->srcu);
-
- #define ATOMIC_NOTIFIER_INIT(name) { \
-- .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
-+ .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \
- .head = NULL }
- #define BLOCKING_NOTIFIER_INIT(name) { \
- .rwsem = __RWSEM_INITIALIZER((name).rwsem), \
---- a/kernel/notifier.c
-+++ b/kernel/notifier.c
-@@ -142,9 +142,9 @@ int atomic_notifier_chain_register(struc
- unsigned long flags;
- int ret;
-
-- spin_lock_irqsave(&nh->lock, flags);
-+ raw_spin_lock_irqsave(&nh->lock, flags);
- ret = notifier_chain_register(&nh->head, n);
-- spin_unlock_irqrestore(&nh->lock, flags);
-+ raw_spin_unlock_irqrestore(&nh->lock, flags);
- return ret;
- }
- EXPORT_SYMBOL_GPL(atomic_notifier_chain_register);
-@@ -164,9 +164,9 @@ int atomic_notifier_chain_unregister(str
- unsigned long flags;
- int ret;
-
-- spin_lock_irqsave(&nh->lock, flags);
-+ raw_spin_lock_irqsave(&nh->lock, flags);
- ret = notifier_chain_unregister(&nh->head, n);
-- spin_unlock_irqrestore(&nh->lock, flags);
-+ raw_spin_unlock_irqrestore(&nh->lock, flags);
- synchronize_rcu();
- return ret;
- }
-@@ -182,9 +182,9 @@ int atomic_notifier_call_chain_robust(st
- * Musn't use RCU; because then the notifier list can
- * change between the up and down traversal.
- */
-- spin_lock_irqsave(&nh->lock, flags);
-+ raw_spin_lock_irqsave(&nh->lock, flags);
- ret = notifier_call_chain_robust(&nh->head, val_up, val_down, v);
-- spin_unlock_irqrestore(&nh->lock, flags);
-+ raw_spin_unlock_irqrestore(&nh->lock, flags);
-
- return ret;
- }
diff --git a/patches/powerpc__Avoid_recursive_header_includes.patch b/patches/powerpc__Avoid_recursive_header_includes.patch
deleted file mode 100644
index 82bab4bdb187..000000000000
--- a/patches/powerpc__Avoid_recursive_header_includes.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-Subject: powerpc: Avoid recursive header includes
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Fri Jan 8 19:48:21 2021 +0100
-
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
-- The include of bug.h leads to an include of printk.h which gets back
- to spinlock.h and complains then about missing xchg().
- Remove bug.h and add bits.h which is needed for BITS_PER_BYTE.
-
-- Avoid the "please don't include this file directly" error from
- rwlock-rt. Allow an include from/with rtmutex.h.
-
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
-
----
- arch/powerpc/include/asm/cmpxchg.h | 2 +-
- arch/powerpc/include/asm/simple_spinlock_types.h | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
----
---- a/arch/powerpc/include/asm/cmpxchg.h
-+++ b/arch/powerpc/include/asm/cmpxchg.h
-@@ -5,7 +5,7 @@
- #ifdef __KERNEL__
- #include <linux/compiler.h>
- #include <asm/synch.h>
--#include <linux/bug.h>
-+#include <linux/bits.h>
-
- #ifdef __BIG_ENDIAN
- #define BITOFF_CAL(size, off) ((sizeof(u32) - size - off) * BITS_PER_BYTE)
---- a/arch/powerpc/include/asm/simple_spinlock_types.h
-+++ b/arch/powerpc/include/asm/simple_spinlock_types.h
-@@ -2,7 +2,7 @@
- #ifndef _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H
- #define _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H
-
--#ifndef __LINUX_SPINLOCK_TYPES_H
-+#if !defined(__LINUX_SPINLOCK_TYPES_H) && !defined(__LINUX_RT_MUTEX_H)
- # error "please don't include this file directly"
- #endif
-
diff --git a/patches/rcu_nocb_Check_for_migratability_rather_than_pure_preemptability.patch b/patches/rcu_nocb_Check_for_migratability_rather_than_pure_preemptability.patch
deleted file mode 100644
index 028502857f18..000000000000
--- a/patches/rcu_nocb_Check_for_migratability_rather_than_pure_preemptability.patch
+++ /dev/null
@@ -1,77 +0,0 @@
-From: Valentin Schneider <valentin.schneider@arm.com>
-Subject: rcu/nocb: Check for migratability rather than pure preemptability
-Date: Wed, 21 Jul 2021 12:51:17 +0100
-
-From: Valentin Schneider <valentin.schneider@arm.com>
-
-Running v5.13-rt1 on my arm64 Juno board triggers:
-
-[ 0.156302] =============================
-[ 0.160416] WARNING: suspicious RCU usage
-[ 0.164529] 5.13.0-rt1 #20 Not tainted
-[ 0.168300] -----------------------------
-[ 0.172409] kernel/rcu/tree_plugin.h:69 Unsafe read of RCU_NOCB offloaded state!
-[ 0.179920]
-[ 0.179920] other info that might help us debug this:
-[ 0.179920]
-[ 0.188037]
-[ 0.188037] rcu_scheduler_active = 1, debug_locks = 1
-[ 0.194677] 3 locks held by rcuc/0/11:
-[ 0.198448] #0: ffff00097ef10cf8 ((softirq_ctrl.lock).lock){+.+.}-{2:2}, at: __local_bh_disable_ip (./include/linux/rcupdate.h:662 kernel/softirq.c:171)
-[ 0.208709] #1: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock (kernel/locking/spinlock_rt.c:43 (discriminator 4))
-[ 0.217134] #2: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip (kernel/softirq.c:169)
-[ 0.226428]
-[ 0.226428] stack backtrace:
-[ 0.230889] CPU: 0 PID: 11 Comm: rcuc/0 Not tainted 5.13.0-rt1 #20
-[ 0.237100] Hardware name: ARM Juno development board (r0) (DT)
-[ 0.243041] Call trace:
-[ 0.245497] dump_backtrace (arch/arm64/kernel/stacktrace.c:163)
-[ 0.249185] show_stack (arch/arm64/kernel/stacktrace.c:219)
-[ 0.252522] dump_stack (lib/dump_stack.c:122)
-[ 0.255947] lockdep_rcu_suspicious (kernel/locking/lockdep.c:6439)
-[ 0.260328] rcu_rdp_is_offloaded (kernel/rcu/tree_plugin.h:69 kernel/rcu/tree_plugin.h:58)
-[ 0.264537] rcu_core (kernel/rcu/tree.c:2332 kernel/rcu/tree.c:2398 kernel/rcu/tree.c:2777)
-[ 0.267786] rcu_cpu_kthread (./include/linux/bottom_half.h:32 kernel/rcu/tree.c:2876)
-[ 0.271644] smpboot_thread_fn (kernel/smpboot.c:165 (discriminator 3))
-[ 0.275767] kthread (kernel/kthread.c:321)
-[ 0.279013] ret_from_fork (arch/arm64/kernel/entry.S:1005)
-
-In this case, this is the RCU core kthread accessing the local CPU's
-rdp. Before that, rcu_cpu_kthread() invokes local_bh_disable().
-
-Under !CONFIG_PREEMPT_RT (and rcutree.use_softirq=0), this ends up
-incrementing the preempt_count, which satisfies the "local non-preemptible
-read" of rcu_rdp_is_offloaded().
-
-Under CONFIG_PREEMPT_RT however, this becomes
-
- local_lock(&softirq_ctrl.lock)
-
-which, under the same config, is migrate_disable() + rt_spin_lock().
-This *does* prevent the task from migrating away, but not in a way
-rcu_rdp_is_offloaded() can notice. Note that the invoking task is an
-smpboot thread, and thus cannot be migrated away in the first place.
-
-Check is_pcpu_safe() here rather than preemptible().
-
-Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Acked-by: Paul E. McKenney <paulmck@kernel.org>
-Link: https://lore.kernel.org/r/20210721115118.729943-3-valentin.schneider@arm.com
-
----
- kernel/rcu/tree_plugin.h | 3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
---- a/kernel/rcu/tree_plugin.h
-+++ b/kernel/rcu/tree_plugin.h
-@@ -61,8 +61,7 @@ static bool rcu_rdp_is_offloaded(struct
- !(lockdep_is_held(&rcu_state.barrier_mutex) ||
- (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
- rcu_lockdep_is_held_nocb(rdp) ||
-- (rdp == this_cpu_ptr(&rcu_data) &&
-- !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
-+ (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||
- rcu_current_is_nocb_kthread(rdp)),
- "Unsafe read of RCU_NOCB offloaded state"
- );
diff --git a/patches/rcu_nocb_protect_nocb_state_via_local_lock_under_preempt_rt.patch b/patches/rcu_nocb_protect_nocb_state_via_local_lock_under_preempt_rt.patch
new file mode 100644
index 000000000000..47df24bb99e1
--- /dev/null
+++ b/patches/rcu_nocb_protect_nocb_state_via_local_lock_under_preempt_rt.patch
@@ -0,0 +1,300 @@
+From: Valentin Schneider <valentin.schneider@arm.com>
+Subject: rcu/nocb: Protect NOCB state via local_lock() under PREEMPT_RT
+Date: Wed, 11 Aug 2021 21:13:53 +0100
+
+Warning
+=======
+
+Running v5.13-rt1 on my arm64 Juno board triggers:
+
+[ 0.156302] =============================
+[ 0.160416] WARNING: suspicious RCU usage
+[ 0.164529] 5.13.0-rt1 #20 Not tainted
+[ 0.168300] -----------------------------
+[ 0.172409] kernel/rcu/tree_plugin.h:69 Unsafe read of RCU_NOCB offloaded state!
+[ 0.179920]
+[ 0.179920] other info that might help us debug this:
+[ 0.179920]
+[ 0.188037]
+[ 0.188037] rcu_scheduler_active = 1, debug_locks = 1
+[ 0.194677] 3 locks held by rcuc/0/11:
+[ 0.198448] #0: ffff00097ef10cf8 ((softirq_ctrl.lock).lock){+.+.}-{2:2}, at: __local_bh_disable_ip (./include/linux/rcupdate.h:662 kernel/softirq.c:171)
+[ 0.208709] #1: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock (kernel/locking/spinlock_rt.c:43 (discriminator 4))
+[ 0.217134] #2: ffff80001205e5f0 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip (kernel/softirq.c:169)
+[ 0.226428]
+[ 0.226428] stack backtrace:
+[ 0.230889] CPU: 0 PID: 11 Comm: rcuc/0 Not tainted 5.13.0-rt1 #20
+[ 0.237100] Hardware name: ARM Juno development board (r0) (DT)
+[ 0.243041] Call trace:
+[ 0.245497] dump_backtrace (arch/arm64/kernel/stacktrace.c:163)
+[ 0.249185] show_stack (arch/arm64/kernel/stacktrace.c:219)
+[ 0.252522] dump_stack (lib/dump_stack.c:122)
+[ 0.255947] lockdep_rcu_suspicious (kernel/locking/lockdep.c:6439)
+[ 0.260328] rcu_rdp_is_offloaded (kernel/rcu/tree_plugin.h:69 kernel/rcu/tree_plugin.h:58)
+[ 0.264537] rcu_core (kernel/rcu/tree.c:2332 kernel/rcu/tree.c:2398 kernel/rcu/tree.c:2777)
+[ 0.267786] rcu_cpu_kthread (./include/linux/bottom_half.h:32 kernel/rcu/tree.c:2876)
+[ 0.271644] smpboot_thread_fn (kernel/smpboot.c:165 (discriminator 3))
+[ 0.275767] kthread (kernel/kthread.c:321)
+[ 0.279013] ret_from_fork (arch/arm64/kernel/entry.S:1005)
+
+In this case, this is the RCU core kthread accessing the local CPU's
+rdp. Before that, rcu_cpu_kthread() invokes local_bh_disable().
+
+Under !CONFIG_PREEMPT_RT (and rcutree.use_softirq=0), this ends up
+incrementing the preempt_count, which satisfies the "local non-preemptible
+read" of rcu_rdp_is_offloaded().
+
+Under CONFIG_PREEMPT_RT however, this becomes
+
+ local_lock(&softirq_ctrl.lock)
+
+which, under the same config, is migrate_disable() + rt_spin_lock(). As
+pointed out by Frederic, this is not sufficient to safely access an rdp's
+offload state, as the RCU core kthread can be preempted by a kworker
+executing rcu_nocb_rdp_offload() [1].
+
+Introduce a local_lock to serialize an rdp's offload state while the rdp's
+associated core kthread is executing rcu_core().
+
+rcu_core() preemptability considerations
+========================================
+
+As pointed out by Paul [2], keeping rcu_check_quiescent_state() preemptible
+(which is the case under CONFIG_PREEMPT_RT) requires some consideration.
+
+note_gp_changes() itself runs with irqs off, and enters
+__note_gp_changes() with rnp->lock held (raw_spinlock), thus is safe vs
+preemption.
+
+rdp->core_needs_qs *could* change after being read by the RCU core
+kthread if it then gets preempted. Consider, with
+CONFIG_RCU_STRICT_GRACE_PERIOD:
+
+ rcuc/x task_foo
+
+ rcu_check_quiescent_state()
+ `\
+ rdp->core_needs_qs == true
+ <PREEMPT>
+ rcu_read_unlock()
+ `\
+ rcu_preempt_deferred_qs_irqrestore()
+ `\
+ rcu_report_qs_rdp()
+ `\
+ rdp->core_needs_qs := false;
+
+This would let rcuc/x's rcu_check_quiescent_state() proceed further down to
+rcu_report_qs_rdp(), but task_foo's earlier rcu_report_qs_rdp()
+invocation would have cleared the rdp grpmask from the rnp mask, so
+rcuc/x's invocation would simply bail.
+
+Since rcu_report_qs_rdp() can be safely invoked, even if rdp->core_needs_qs
+changed, it appears safe to keep rcu_check_quiescent_state() preemptible.
+
+[1]: http://lore.kernel.org/r/20210727230814.GC283787@lothringen
+[2]: http://lore.kernel.org/r/20210729010445.GO4397@paulmck-ThinkPad-P17-Gen-1
+
+Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Link: https://lore.kernel.org/r/20210811201354.1976839-4-valentin.schneider@arm.com
+---
+ kernel/rcu/tree.c | 4 ++
+ kernel/rcu/tree.h | 4 ++
+ kernel/rcu/tree_plugin.h | 76 +++++++++++++++++++++++++++++++++++++++++------
+ 3 files changed, 75 insertions(+), 9 deletions(-)
+
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -87,6 +87,7 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(str
+ .dynticks = ATOMIC_INIT(RCU_DYNTICK_CTRL_CTR),
+ #ifdef CONFIG_RCU_NOCB_CPU
+ .cblist.flags = SEGCBLIST_SOFTIRQ_ONLY,
++ .nocb_local_lock = INIT_LOCAL_LOCK(nocb_local_lock),
+ #endif
+ };
+ static struct rcu_state rcu_state = {
+@@ -2853,10 +2854,12 @@ static void rcu_cpu_kthread(unsigned int
+ {
+ unsigned int *statusp = this_cpu_ptr(&rcu_data.rcu_cpu_kthread_status);
+ char work, *workp = this_cpu_ptr(&rcu_data.rcu_cpu_has_work);
++ struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
+ int spincnt;
+
+ trace_rcu_utilization(TPS("Start CPU kthread@rcu_run"));
+ for (spincnt = 0; spincnt < 10; spincnt++) {
++ rcu_nocb_local_lock(rdp);
+ local_bh_disable();
+ *statusp = RCU_KTHREAD_RUNNING;
+ local_irq_disable();
+@@ -2866,6 +2869,7 @@ static void rcu_cpu_kthread(unsigned int
+ if (work)
+ rcu_core();
+ local_bh_enable();
++ rcu_nocb_local_unlock(rdp);
+ if (*workp == 0) {
+ trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
+ *statusp = RCU_KTHREAD_WAITING;
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -210,6 +210,8 @@ struct rcu_data {
+ struct timer_list nocb_timer; /* Enforce finite deferral. */
+ unsigned long nocb_gp_adv_time; /* Last call_rcu() CB adv (jiffies). */
+
++ local_lock_t nocb_local_lock;
++
+ /* The following fields are used by call_rcu, hence own cacheline. */
+ raw_spinlock_t nocb_bypass_lock ____cacheline_internodealigned_in_smp;
+ struct rcu_cblist nocb_bypass; /* Lock-contention-bypass CB list. */
+@@ -445,6 +447,8 @@ static void rcu_nocb_unlock(struct rcu_d
+ static void rcu_nocb_unlock_irqrestore(struct rcu_data *rdp,
+ unsigned long flags);
+ static void rcu_lockdep_assert_cblist_protected(struct rcu_data *rdp);
++static void rcu_nocb_local_lock(struct rcu_data *rdp);
++static void rcu_nocb_local_unlock(struct rcu_data *rdp);
+ #ifdef CONFIG_RCU_NOCB_CPU
+ static void __init rcu_organize_nocb_kthreads(void);
+ #define rcu_nocb_lock_irqsave(rdp, flags) \
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -21,6 +21,11 @@ static inline int rcu_lockdep_is_held_no
+ return lockdep_is_held(&rdp->nocb_lock);
+ }
+
++static inline int rcu_lockdep_is_held_nocb_local(struct rcu_data *rdp)
++{
++ return lockdep_is_held(&rdp->nocb_local_lock);
++}
++
+ static inline bool rcu_current_is_nocb_kthread(struct rcu_data *rdp)
+ {
+ /* Race on early boot between thread creation and assignment */
+@@ -38,7 +43,10 @@ static inline int rcu_lockdep_is_held_no
+ {
+ return 0;
+ }
+-
++static inline int rcu_lockdep_is_held_nocb_local(struct rcu_data *rdp)
++{
++ return 0;
++}
+ static inline bool rcu_current_is_nocb_kthread(struct rcu_data *rdp)
+ {
+ return false;
+@@ -46,23 +54,44 @@ static inline bool rcu_current_is_nocb_k
+
+ #endif /* #ifdef CONFIG_RCU_NOCB_CPU */
+
++/*
++ * Is a local read of the rdp's offloaded state safe and stable?
++ * See rcu_nocb_local_lock() & family.
++ */
++static inline bool rcu_local_offload_access_safe(struct rcu_data *rdp)
++{
++ if (!preemptible())
++ return true;
++
++ if (!is_migratable()) {
++ if (!IS_ENABLED(CONFIG_RCU_NOCB))
++ return true;
++
++ return rcu_lockdep_is_held_nocb_local(rdp);
++ }
++
++ return false;
++}
++
+ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
+ {
+ /*
+- * In order to read the offloaded state of an rdp is a safe
+- * and stable way and prevent from its value to be changed
+- * under us, we must either hold the barrier mutex, the cpu
+- * hotplug lock (read or write) or the nocb lock. Local
+- * non-preemptible reads are also safe. NOCB kthreads and
+- * timers have their own means of synchronization against the
+- * offloaded state updaters.
++ * In order to read the offloaded state of an rdp is a safe and stable
++ * way and prevent from its value to be changed under us, we must either...
+ */
+ RCU_LOCKDEP_WARN(
++ // ...hold the barrier mutex...
+ !(lockdep_is_held(&rcu_state.barrier_mutex) ||
++ // ... the cpu hotplug lock (read or write)...
+ (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
++ // ... or the NOCB lock.
+ rcu_lockdep_is_held_nocb(rdp) ||
++ // Local reads still require the local state to remain stable
++ // (preemption disabled / local lock held)
+ (rdp == this_cpu_ptr(&rcu_data) &&
+- !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
++ rcu_local_offload_access_safe(rdp)) ||
++ // NOCB kthreads and timers have their own means of synchronization
++ // against the offloaded state updaters.
+ rcu_current_is_nocb_kthread(rdp)),
+ "Unsafe read of RCU_NOCB offloaded state"
+ );
+@@ -1629,6 +1658,22 @@ static void rcu_nocb_unlock_irqrestore(s
+ }
+ }
+
++/*
++ * The invocation of rcu_core() within the RCU core kthreads remains preemptible
++ * under PREEMPT_RT, thus the offload state of a CPU could change while
++ * said kthreads are preempted. Prevent this from happening by protecting the
++ * offload state with a local_lock().
++ */
++static void rcu_nocb_local_lock(struct rcu_data *rdp)
++{
++ local_lock(&rcu_data.nocb_local_lock);
++}
++
++static void rcu_nocb_local_unlock(struct rcu_data *rdp)
++{
++ local_unlock(&rcu_data.nocb_local_lock);
++}
++
+ /* Lockdep check that ->cblist may be safely accessed. */
+ static void rcu_lockdep_assert_cblist_protected(struct rcu_data *rdp)
+ {
+@@ -2396,6 +2441,7 @@ static int rdp_offload_toggle(struct rcu
+ if (rdp->nocb_cb_sleep)
+ rdp->nocb_cb_sleep = false;
+ rcu_nocb_unlock_irqrestore(rdp, flags);
++ rcu_nocb_local_unlock(rdp);
+
+ /*
+ * Ignore former value of nocb_cb_sleep and force wake up as it could
+@@ -2427,6 +2473,7 @@ static long rcu_nocb_rdp_deoffload(void
+
+ pr_info("De-offloading %d\n", rdp->cpu);
+
++ rcu_nocb_local_lock(rdp);
+ rcu_nocb_lock_irqsave(rdp, flags);
+ /*
+ * Flush once and for all now. This suffices because we are
+@@ -2509,6 +2556,7 @@ static long rcu_nocb_rdp_offload(void *a
+ * Can't use rcu_nocb_lock_irqsave() while we are in
+ * SEGCBLIST_SOFTIRQ_ONLY mode.
+ */
++ rcu_nocb_local_lock(rdp);
+ raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
+
+ /*
+@@ -2868,6 +2916,16 @@ static void rcu_nocb_unlock_irqrestore(s
+ local_irq_restore(flags);
+ }
+
++/* No ->nocb_local_lock to acquire. */
++static void rcu_nocb_local_lock(struct rcu_data *rdp)
++{
++}
++
++/* No ->nocb_local_lock to release. */
++static void rcu_nocb_local_unlock(struct rcu_data *rdp)
++{
++}
++
+ /* Lockdep check that ->cblist may be safely accessed. */
+ static void rcu_lockdep_assert_cblist_protected(struct rcu_data *rdp)
+ {
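The pattern introduced here is the generic "protect per-CPU state against same-CPU preemption with a local_lock" idiom; a stripped-down illustration with hypothetical names (not the rdp code itself):

	static DEFINE_PER_CPU(local_lock_t, state_lock) =
		INIT_LOCAL_LOCK(state_lock);
	static DEFINE_PER_CPU(bool, offloaded);

	/*
	 * Preemptible reader: the state cannot change underneath it because a
	 * same-CPU updater has to take the same per-CPU lock first.
	 */
	static bool state_is_offloaded(void)
	{
		bool ret;

		local_lock(&state_lock);
		ret = __this_cpu_read(offloaded);
		local_unlock(&state_lock);
		return ret;
	}

	/* Updater, running on the same CPU (e.g. a CPU-bound kworker). */
	static void state_set_offloaded(bool val)
	{
		local_lock(&state_lock);
		__this_cpu_write(offloaded, val);
		local_unlock(&state_lock);
	}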
diff --git a/patches/rcutorture__Avoid_problematic_critical_section_nesting_on_RT.patch b/patches/rcutorture__Avoid_problematic_critical_section_nesting_on_RT.patch
index 55c7406a11d7..852b23cfed8d 100644
--- a/patches/rcutorture__Avoid_problematic_critical_section_nesting_on_RT.patch
+++ b/patches/rcutorture__Avoid_problematic_critical_section_nesting_on_RT.patch
@@ -7,19 +7,7 @@ From: Scott Wood <swood@redhat.com>
rcutorture was generating some nesting scenarios that are not
reasonable. Constrain the state selection to avoid them.
-Example #1:
-
-1. preempt_disable()
-2. local_bh_disable()
-3. preempt_enable()
-4. local_bh_enable()
-
-On PREEMPT_RT, BH disabling takes a local lock only when called in
-non-atomic context. Thus, atomic context must be retained until after BH
-is re-enabled. Likewise, if BH is initially disabled in non-atomic
-context, it cannot be re-enabled in atomic context.
-
-Example #2:
+Example:
1. rcu_read_lock()
2. local_irq_disable()
@@ -36,13 +24,13 @@ kernels, until debug checks are added to ensure that they are not
happening elsewhere.
Signed-off-by: Scott Wood <swood@redhat.com>
+[valentin.schneider@arm.com: Don't disable BH in atomic context]
+[bigeasy: remove 'preempt_disable(); local_bh_disable(); preempt_enable();
+ local_bh_enable();' from the examples because this works on RT now. ]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
-
---
- kernel/rcu/rcutorture.c | 97 +++++++++++++++++++++++++++++++++++++++++-------
- 1 file changed, 83 insertions(+), 14 deletions(-)
+ kernel/rcu/rcutorture.c | 94 ++++++++++++++++++++++++++++++++++++++++--------
+ 1 file changed, 80 insertions(+), 14 deletions(-)
---
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -136,7 +124,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
WARN_ON_ONCE(mask >> RCUTORTURE_RDR_SHIFT);
/* Mostly only one bit (need preemption!), sometimes lots of bits. */
-@@ -1503,11 +1534,49 @@ rcutorture_extend_mask(int oldmask, stru
+@@ -1503,11 +1534,46 @@ rcutorture_extend_mask(int oldmask, stru
mask = mask & randmask2;
else
mask = mask & (1 << (randmask2 % RCUTORTURE_RDR_NBITS));
@@ -160,6 +148,13 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ */
+ if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+ /*
++ * Can't disable bh in atomic context if bh was already
++ * disabled by another task on the same CPU. Instead of
++ * attempting to track this, just avoid disabling bh in atomic
++ * context.
++ */
++ mask &= ~atomic_bhs;
++ /*
+ * Can't release the outermost rcu lock in an irq disabled
+ * section without preemption also being disabled, if irqs
+ * had ever been enabled during this RCU critical section
@@ -170,16 +165,6 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ !(mask & preempts))
+ mask |= RCUTORTURE_RDR_RCU;
+
-+ /* Can't modify atomic bh in non-atomic context */
-+ if ((oldmask & atomic_bhs) && (mask & atomic_bhs) &&
-+ !(mask & preempts_irq)) {
-+ mask |= oldmask & preempts_irq;
-+ if (mask & RCUTORTURE_RDR_IRQ)
-+ mask |= oldmask & tmp;
-+ }
-+ if ((mask & atomic_bhs) && !(mask & preempts_irq))
-+ mask |= RCUTORTURE_RDR_PREEMPT;
-+
+ /* Can't modify non-atomic bh in atomic context */
+ tmp = nonatomic_bhs;
+ if (oldmask & preempts_irq)
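
For context, the changelog example that remains corresponds to a sequence
like the sketch below (illustrative only, not part of the patch). On
PREEMPT_RT the constrained mask avoids releasing the outermost
rcu_read_lock() inside an IRQ-disabled region unless preemption is
disabled as well, because quiescent-state work deferred during the
read-side section cannot be handled with IRQs off:

/* Illustrative only: a nesting rcutorture no longer generates on RT. */
static void problematic_nesting(void)
{
	rcu_read_lock();
	local_irq_disable();
	rcu_read_unlock();	/* outermost unlock in an IRQ-off region */
	local_irq_enable();
}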
diff --git a/patches/sched_Introduce_is_pcpu_safe_.patch b/patches/sched_Introduce_is_pcpu_safe_.patch
deleted file mode 100644
index a64a5789e95a..000000000000
--- a/patches/sched_Introduce_is_pcpu_safe_.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-From: Valentin Schneider <valentin.schneider@arm.com>
-Subject: sched: Introduce is_pcpu_safe()
-Date: Wed, 21 Jul 2021 12:51:16 +0100
-
-From: Valentin Schneider <valentin.schneider@arm.com>
-
-Some areas use preempt_disable() + preempt_enable() to safely access
-per-CPU data. The PREEMPT_RT folks have shown this can also be done by
-keeping preemption enabled and instead disabling migration (and acquiring a
-sleepable lock, if relevant).
-
-Introduce a helper which checks whether the current task can safely access
-per-CPU data, IOW if the task's context guarantees the accesses will target
-a single CPU. This accounts for preemption, CPU affinity, and migrate
-disable - note that the CPU affinity check also mandates the presence of
-PF_NO_SETAFFINITY, as otherwise userspace could concurrently render the
-upcoming per-CPU access(es) unsafe.
-
-Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Acked-by: Paul E. McKenney <paulmck@kernel.org>
-Link: https://lore.kernel.org/r/20210721115118.729943-2-valentin.schneider@arm.com
-
----
- include/linux/sched.h | 10 ++++++++++
- 1 file changed, 10 insertions(+)
-
---- a/include/linux/sched.h
-+++ b/include/linux/sched.h
-@@ -1646,6 +1646,16 @@ static inline bool is_percpu_thread(void
- #endif
- }
-
-+/* Is the current task guaranteed not to be migrated elsewhere? */
-+static inline bool is_pcpu_safe(void)
-+{
-+#ifdef CONFIG_SMP
-+ return !preemptible() || is_percpu_thread() || current->migration_disabled;
-+#else
-+ return true;
-+#endif
-+}
-+
- /* Per-process atomic flags. */
- #define PFA_NO_NEW_PRIVS 0 /* May not gain new privileges. */
- #define PFA_SPREAD_PAGE 1 /* Spread page cache over cpuset */
diff --git a/patches/sched_introduce_migratable.patch b/patches/sched_introduce_migratable.patch
new file mode 100644
index 000000000000..87b06e306777
--- /dev/null
+++ b/patches/sched_introduce_migratable.patch
@@ -0,0 +1,45 @@
+From: Valentin Schneider <valentin.schneider@arm.com>
+Subject: sched: Introduce migratable()
+Date: Wed, 11 Aug 2021 21:13:52 +0100
+
+Some areas use preempt_disable() + preempt_enable() to safely access
+per-CPU data. The PREEMPT_RT folks have shown this can also be done by
+keeping preemption enabled and instead disabling migration (and acquiring a
+sleepable lock, if relevant).
+
+Introduce a helper which checks whether the current task can be migrated
+elsewhere, IOW whether it is not pinned to its local CPU in the current
+context. This can help determine whether per-CPU properties can be safely
+accessed.
+
+Note that CPU affinity is not checked here, as a preemptible task can have
+its affinity changed at any given time (including if it has
+PF_NO_SETAFFINITY, when hotplug gets involved).
+
+Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
+[bigeasy: Return false on UP, call it is_migratable().]
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Link: https://lore.kernel.org/r/20210811201354.1976839-3-valentin.schneider@arm.com
+---
+ include/linux/sched.h | 10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1646,6 +1646,16 @@ static inline bool is_percpu_thread(void
+ #endif
+ }
+
++/* Can the current task be migrated to another CPU? */
++static inline bool is_migratable(void)
++{
++#ifdef CONFIG_SMP
++ return preemptible() && !current->migration_disabled;
++#else
++ return false;
++#endif
++}
++
+ /* Per-process atomic flags. */
+ #define PFA_NO_NEW_PRIVS 0 /* May not gain new privileges. */
+ #define PFA_SPREAD_PAGE 1 /* Spread page cache over cpuset */
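
Note that is_migratable() returns true while the task may still be moved
to another CPU (preemptible and migration not disabled), so callers
interested in lockless per-CPU access act on it returning false. A
hypothetical caller, with a made-up per-CPU counter that is not part of
the patch:

/* Hypothetical example; assumes <linux/percpu.h> and <linux/sched.h>. */
DEFINE_PER_CPU(unsigned long, example_counter);

static bool try_bump_example_counter(void)
{
	if (is_migratable())
		return false;	/* may move: caller must migrate_disable() first */

	__this_cpu_inc(example_counter);	/* safe: we stay on this CPU */
	return true;
}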
diff --git a/patches/series b/patches/series
index cdc8aaa88579..cfcf73bb21ee 100644
--- a/patches/series
+++ b/patches/series
@@ -1,10 +1,15 @@
+# Applied upstream
+# PM-Tree, v5.15
+0001_cpu_pm_make_notifier_chain_use_a_raw_spinlock_t.patch
+0002_notifier_remove_atomic_notifier_call_chain_robust.patch
+
###########################################################################
# Valentin's PCP fixes
###########################################################################
eventfd-Make-signal-recursion-protection-a-task-bit.patch
-sched_Introduce_is_pcpu_safe_.patch
-rcu_nocb_Check_for_migratability_rather_than_pure_preemptability.patch
-arm64_mm_Make_arch_faults_on_old_pte_check_for_migratability.patch
+sched_introduce_migratable.patch
+rcu_nocb_protect_nocb_state_via_local_lock_under_preempt_rt.patch
+arm64_mm_make_arch_faults_on_old_pte_check_for_migratability.patch
###########################################################################
# John's printk queue
@@ -32,7 +37,7 @@ printk__Enhance_the_condition_check_of_msleep_in_pr_flush.patch
###########################################################################
# mm bits polished by Mel and Vlastimil
-# slub-local-lock-v4r1
+# slub-local-lock-v4r3
###########################################################################
0001-mm-slub-don-t-call-flush_all-from-slab_debug_trace_o.patch
0002-mm-slub-allocate-private-object-map-for-debugfs-list.patch
@@ -80,7 +85,6 @@ highmem-Don-t-disable-preemption-on-RT-in-kmap_atomi.patch
###########################################################################
kthread__Move_prio_affinite_change_into_the_newly_created_thread.patch
genirq__Move_prio_assignment_into_the_newly_created_thread.patch
-notifier__Make_atomic_notifiers_use_raw_spinlock.patch
cgroup__use_irqsave_in_cgroup_rstat_flush_locked.patch
mm__workingset__replace_IRQ-off_check_with_a_lockdep_assert..patch
shmem__Use_raw_spinlock_t_for_-stat_lock.patch
@@ -126,78 +130,78 @@ debugobjects__Make_RT_aware.patch
###########################################################################
# Locking core
###########################################################################
-locking-local_lock--Add-missing-owner-initialization.patch
-locking-rtmutex--Set-proper-wait-context-for-lockdep.patch
-sched__Split_out_the_wakeup_state_check.patch
-sched__Introduce_TASK_RTLOCK_WAIT.patch
-sched--Reorganize-current--state-helpers.patch
-sched__Prepare_for_RT_sleeping_spin_rwlocks.patch
-sched__Rework_the___schedule_preempt_argument.patch
-sched__Provide_schedule_point_for_RT_locks.patch
-sched_wake_q__Provide_WAKE_Q_HEAD_INITIALIZER.patch
-media_atomisp_Use_lockdep_instead_of_mutex_is_locked_.patch
-rtmutex--Remove-rt_mutex_is_locked--.patch
-rtmutex__Convert_macros_to_inlines.patch
-rtmutex--Switch-to-try_cmpxchg--.patch
-rtmutex__Split_API_and_implementation.patch
-rtmutex--Split-out-the-inner-parts-of-struct-rtmutex.patch
-locking_rtmutex__Provide_rt_mutex_slowlock_locked.patch
-rtmutex--Provide-rt_mutex_base_is_locked--.patch
-locking__Add_base_code_for_RT_rw_semaphore_and_rwlock.patch
-locking_rwsem__Add_rtmutex_based_R_W_semaphore_implementation.patch
-locking_rtmutex__Add_wake_state_to_rt_mutex_waiter.patch
-locking_rtmutex__Provide_rt_mutex_wake_q_and_helpers.patch
-locking_rtmutex__Use_rt_mutex_wake_q_head.patch
-locking_rtmutex__Prepare_RT_rt_mutex_wake_q_for_RT_locks.patch
-locking_rtmutex__Guard_regular_sleeping_locks_specific_functions.patch
-locking_spinlock__Split_the_lock_types_header.patch
-locking_rtmutex__Prevent_future_include_recursion_hell.patch
-locking_lockdep__Reduce_includes_in_debug_locks.h.patch
-rbtree__Split_out_the_rbtree_type_definitions.patch
-locking_rtmutex__Include_only_rbtree_types.patch
-locking_spinlock__Provide_RT_specific_spinlock_type.patch
-locking_spinlock__Provide_RT_variant_header.patch
-locking_rtmutex__Provide_the_spin_rwlock_core_lock_function.patch
-locking_spinlock__Provide_RT_variant.patch
-locking_rwlock__Provide_RT_variant.patch
-rtmutex--Exclude-!RT-tasks-from-PI-boosting.patch
-locking_mutex__Consolidate_core_headers.patch
-locking_mutex__Move_waiter_to_core_header.patch
-locking_ww_mutex__Move_ww_mutex_declarations_into_ww_mutex.h.patch
-locking_mutex__Make_mutex__wait_lock_raw.patch
-locking_ww_mutex__Simplify_lockdep_annotation.patch
-locking_ww_mutex__Gather_mutex_waiter_initialization.patch
-locking_ww_mutex__Split_up_ww_mutex_unlock.patch
-locking_ww_mutex__Split_W_W_implementation_logic.patch
-locking_ww_mutex__Remove___sched_annotation.patch
-locking_ww_mutex__Abstract_waiter_iteration.patch
-locking_ww_mutex__Abstract_waiter_enqueue.patch
-locking_ww_mutex__Abstract_mutex_accessors.patch
-locking_ww_mutex__Abstract_mutex_types.patch
-locking-ww_mutex--Abstract-internal-lock-access.patch
-locking_ww_mutex__Implement_rt_mutex_accessors.patch
-locking_ww_mutex__Add_RT_priority_to_W_W_order.patch
-locking_ww_mutex__Add_ww_rt_mutex_interface.patch
-locking-rtmutex--Extend-the-rtmutex-core-to-support-ww_mutex.patch
-locking_ww_mutex__Implement_ww_rt_mutex.patch
-locking_rtmutex__Add_mutex_variant_for_RT.patch
-lib_test_lockup__Adapt_to_changed_variables..patch
-futex__Validate_waiter_correctly_in_futex_proxy_trylock_atomic.patch
-futex__Cleanup_stale_comments.patch
-futex--Clarify-futex_requeue---PI-handling.patch
-futex--Remove-bogus-condition-for-requeue-PI.patch
-futex__Correct_the_number_of_requeued_waiters_for_PI.patch
-futex__Restructure_futex_requeue.patch
-futex__Clarify_comment_in_futex_requeue.patch
-futex--Reorder-sanity-checks-in-futex_requeue--.patch
-futex--Simplify-handle_early_requeue_pi_wakeup--.patch
-futex__Prevent_requeue_pi_lock_nesting_issue_on_RT.patch
-rtmutex__Prevent_lockdep_false_positive_with_PI_futexes.patch
-preempt__Adjust_PREEMPT_LOCK_OFFSET_for_RT.patch
-locking_rtmutex__Implement_equal_priority_lock_stealing.patch
-locking_rtmutex__Add_adaptive_spinwait_mechanism.patch
-locking-spinlock-rt--Prepare-for-RT-local_lock.patch
-locking-local_lock--Add-PREEMPT_RT-support.patch
+0001-locking-local_lock-Add-missing-owner-initialization.patch
+0002-locking-rtmutex-Set-proper-wait-context-for-lockdep.patch
+0003-sched-wakeup-Split-out-the-wakeup-__state-check.patch
+0004-sched-wakeup-Introduce-the-TASK_RTLOCK_WAIT-state-bi.patch
+0005-sched-wakeup-Reorganize-the-current-__state-helpers.patch
+0006-sched-wakeup-Prepare-for-RT-sleeping-spin-rwlocks.patch
+0007-sched-core-Rework-the-__schedule-preempt-argument.patch
+0008-sched-core-Provide-a-scheduling-point-for-RT-locks.patch
+0009-sched-wake_q-Provide-WAKE_Q_HEAD_INITIALIZER.patch
+0010-media-atomisp-Use-lockdep-instead-of-mutex_is_locked.patch
+0011-locking-rtmutex-Remove-rt_mutex_is_locked.patch
+0012-locking-rtmutex-Convert-macros-to-inlines.patch
+0013-locking-rtmutex-Switch-to-from-cmpxchg_-to-try_cmpxc.patch
+0014-locking-rtmutex-Split-API-from-implementation.patch
+0015-locking-rtmutex-Split-out-the-inner-parts-of-struct-.patch
+0016-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch
+0017-locking-rtmutex-Provide-rt_mutex_base_is_locked.patch
+0018-locking-rt-Add-base-code-for-RT-rw_semaphore-and-rwl.patch
+0019-locking-rwsem-Add-rtmutex-based-R-W-semaphore-implem.patch
+0020-locking-rtmutex-Add-wake_state-to-rt_mutex_waiter.patch
+0021-locking-rtmutex-Provide-rt_wake_q_head-and-helpers.patch
+0022-locking-rtmutex-Use-rt_mutex_wake_q_head.patch
+0023-locking-rtmutex-Prepare-RT-rt_mutex_wake_q-for-RT-lo.patch
+0024-locking-rtmutex-Guard-regular-sleeping-locks-specifi.patch
+0025-locking-spinlock-Split-the-lock-types-header-and-mov.patch
+0026-locking-rtmutex-Prevent-future-include-recursion-hel.patch
+0027-locking-lockdep-Reduce-header-dependencies-in-linux-.patch
+0028-rbtree-Split-out-the-rbtree-type-definitions-into-li.patch
+0029-locking-rtmutex-Reduce-linux-rtmutex.h-header-depend.patch
+0030-locking-spinlock-Provide-RT-specific-spinlock_t.patch
+0031-locking-spinlock-Provide-RT-variant-header-linux-spi.patch
+0032-locking-rtmutex-Provide-the-spin-rwlock-core-lock-fu.patch
+0033-locking-spinlock-Provide-RT-variant.patch
+0034-locking-rwlock-Provide-RT-variant.patch
+0035-locking-rtmutex-Squash-RT-tasks-to-DEFAULT_PRIO.patch
+0036-locking-mutex-Consolidate-core-headers-remove-kernel.patch
+0037-locking-mutex-Move-the-struct-mutex_waiter-definitio.patch
+0038-locking-ww_mutex-Move-the-ww_mutex-definitions-from-.patch
+0039-locking-mutex-Make-mutex-wait_lock-raw.patch
+0040-locking-ww_mutex-Simplify-lockdep-annotations.patch
+0041-locking-ww_mutex-Gather-mutex_waiter-initialization.patch
+0042-locking-ww_mutex-Split-up-ww_mutex_unlock.patch
+0043-locking-ww_mutex-Split-out-the-W-W-implementation-lo.patch
+0044-locking-ww_mutex-Remove-the-__sched-annotation-from-.patch
+0045-locking-ww_mutex-Abstract-out-the-waiter-iteration.patch
+0046-locking-ww_mutex-Abstract-out-waiter-enqueueing.patch
+0047-locking-ww_mutex-Abstract-out-mutex-accessors.patch
+0048-locking-ww_mutex-Abstract-out-mutex-types.patch
+0049-locking-ww_mutex-Abstract-out-internal-lock-accesses.patch
+0050-locking-ww_mutex-Implement-rt_mutex-accessors.patch
+0051-locking-ww_mutex-Add-RT-priority-to-W-W-order.patch
+0052-locking-ww_mutex-Add-rt_mutex-based-lock-type-and-ac.patch
+0053-locking-rtmutex-Extend-the-rtmutex-core-to-support-w.patch
+0054-locking-ww_mutex-Implement-rtmutex-based-ww_mutex-AP.patch
+0055-locking-rtmutex-Add-mutex-variant-for-RT.patch
+0056-lib-test_lockup-Adapt-to-changed-variables.patch
+0057-futex-Validate-waiter-correctly-in-futex_proxy_trylo.patch
+0058-futex-Clean-up-stale-comments.patch
+0059-futex-Clarify-futex_requeue-PI-handling.patch
+0060-futex-Remove-bogus-condition-for-requeue-PI.patch
+0061-futex-Correct-the-number-of-requeued-waiters-for-PI.patch
+0062-futex-Restructure-futex_requeue.patch
+0063-futex-Clarify-comment-in-futex_requeue.patch
+0064-futex-Reorder-sanity-checks-in-futex_requeue.patch
+0065-futex-Simplify-handle_early_requeue_pi_wakeup.patch
+0066-futex-Prevent-requeue_pi-lock-nesting-issue-on-RT.patch
+0067-locking-rtmutex-Prevent-lockdep-false-positive-with-.patch
+0068-preempt-Adjust-PREEMPT_LOCK_OFFSET-for-RT.patch
+0069-locking-rtmutex-Implement-equal-priority-lock-steali.patch
+0070-locking-rtmutex-Add-adaptive-spinwait-mechanism.patch
+0071-locking-spinlock-rt-Prepare-for-RT-local_lock.patch
+0072-locking-local_lock-Add-PREEMPT_RT-support.patch
###########################################################################
# Locking: RT bits. Need review
@@ -210,9 +214,11 @@ lockdep-selftests-Avoid-using-local_lock_-acquire-re.patch
0006-lockdep-selftests-Add-rtmutex-to-the-last-column.patch
0007-lockdep-selftests-Unbalanced-migrate_disable-rcu_rea.patch
0008-lockdep-selftests-Skip-the-softirq-related-tests-on-.patch
-0009-lockdep-selftests-Use-correct-depmap-for-local_lock-.patch
0010-lockdep-selftests-Adapt-ww-tests-for-PREEMPT_RT.patch
+# Unbreaks powerpc
+locking-Allow-to-include-asm-spinlock_types.h-from-l.patch
+
###########################################################################
# preempt: Conditional variants
###########################################################################
@@ -376,7 +382,6 @@ powerpc__traps__Use_PREEMPT_RT.patch
powerpc_pseries_iommu__Use_a_locallock_instead_local_irq_save.patch
powerpc_kvm__Disable_in-kernel_MPIC_emulation_for_PREEMPT_RT.patch
powerpc_stackprotector__work_around_stack-guard_init_from_atomic.patch
-powerpc__Avoid_recursive_header_includes.patch
POWERPC__Allow_to_enable_RT.patch
###########################################################################