author    | Sebastian Andrzej Siewior <bigeasy@linutronix.de> | 2020-10-28 20:55:55 +0100
committer | Sebastian Andrzej Siewior <bigeasy@linutronix.de> | 2020-10-28 20:55:55 +0100
commit    | 9082365bd2c373340b3b1628bd01bb4d54d5724e (patch)
tree      | 77dcfcac187344234cd3a5582056efd22306f5b8 /patches
parent    | e8077fadb604e13404de3e31ed2d674b8fca3e5c (diff)
download  | linux-rt-9082365bd2c373340b3b1628bd01bb4d54d5724e.tar.gz
[ANNOUNCE] v5.9.1-rt20 (tags: v5.9.1-rt20, v5.9.1-rt20-patches)
Dear RT folks!
I'm pleased to announce the v5.9.1-rt20 patch set.
Changes since v5.9.1-rt19:
- Tiny update to the rtmutex patches (make __read_rt_trylock()
static).
- The test_lockup module failed to compile. Reported by Fernando
Lopez-Lezcano.
- The `kcompactd' daemon together with MEMCG could have accessed
  per-CPU variables in preemptible context.
- The patch for the crash in the block layer (previously reported by
David Runge) has been replaced with another set of patches which
were submitted upstream.
Known issues
- It has been pointed out that due to changes to the printk code the
internal buffer representation changed. This is only an issue if tools
like `crash' are used to extract the printk buffer from a kernel memory
image.
The delta patch against v5.9.1-rt19 is appended below and can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.9/incr/patch-5.9.1-rt19-rt20.patch.xz
You can get this release via the git tree at:
git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v5.9.1-rt20
The RT patch against v5.9.1 can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.9/older/patch-5.9.1-rt20.patch.xz
The split quilt queue is available at:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.9/older/patches-5.9.1-rt20.tar.xz
Sebastian
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Diffstat (limited to 'patches')
35 files changed, 402 insertions, 118 deletions
diff --git a/patches/0001-blk-mq-Don-t-complete-on-a-remote-CPU-in-force-threa.patch b/patches/0001-blk-mq-Don-t-complete-on-a-remote-CPU-in-force-threa.patch
new file mode 100644
index 000000000000..14890678d37b
--- /dev/null
+++ b/patches/0001-blk-mq-Don-t-complete-on-a-remote-CPU-in-force-threa.patch
@@ -0,0 +1,37 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 28 Oct 2020 11:07:44 +0100
+Subject: [PATCH 1/3] blk-mq: Don't complete on a remote CPU in force threaded
+ mode
+
+With force threaded interrupts enabled, raising softirq from an SMP
+function call will always result in waking the ksoftirqd thread. This is
+not optimal given that the thread runs at SCHED_OTHER priority.
+
+Completing the request in hard IRQ-context on PREEMPT_RT (which enforces
+the force threaded mode) is bad because the completion handler may
+acquire sleeping locks which violate the locking context.
+
+Disable request completing on a remote CPU in force threaded mode.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ block/blk-mq.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -648,6 +648,14 @@ static inline bool blk_mq_complete_need_
+ 	if (!IS_ENABLED(CONFIG_SMP) ||
+ 	    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
+ 		return false;
++	/*
++	 * With force threaded interrupts enabled, raising softirq from an SMP
++	 * function call will always result in waking the ksoftirqd thread.
++	 * This is probably worse than completing the request on a different
++	 * cache domain.
++	 */
++	if (force_irqthreads)
++		return false;
+ 
+ 	/* same CPU or cache domain? Complete locally */
+ 	if (cpu == rq->mq_ctx->cpu ||
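The decision this first patch adds boils down to a short predicate. Below is a
minimal user-space sketch of that logic, not the kernel function itself: only
the force_irqthreads test and the queue-flag/same-CPU checks mirror the hunk
above, the struct and the remaining scaffolding are illustrative.

    #include <stdbool.h>

    struct req_sketch {
        bool same_comp;     /* stands in for QUEUE_FLAG_SAME_COMP */
        int completion_cpu; /* stands in for rq->mq_ctx->cpu */
    };

    /* true when booted with threadirqs; always true on PREEMPT_RT */
    static bool force_irqthreads;

    static bool complete_need_ipi(struct req_sketch *rq, int this_cpu)
    {
        if (!rq->same_comp)
            return false;
        /*
         * A remote completion would raise the softirq from the SMP function
         * call and therefore wake ksoftirqd (SCHED_OTHER), which costs more
         * than losing cache locality: complete locally instead.
         */
        if (force_irqthreads)
            return false;
        return rq->completion_cpu != this_cpu;
    }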
diff --git a/patches/0001-locking-rtmutex-Remove-cruft.patch b/patches/0001-locking-rtmutex-Remove-cruft.patch
index d353ef8aca37..7e9c2e00ab7d 100644
--- a/patches/0001-locking-rtmutex-Remove-cruft.patch
+++ b/patches/0001-locking-rtmutex-Remove-cruft.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Tue, 29 Sep 2020 15:21:17 +0200
-Subject: [PATCH 01/23] locking/rtmutex: Remove cruft
+Subject: [PATCH 01/22] locking/rtmutex: Remove cruft
 
 Most of this is around since the very beginning. I'm not sure if this
 was used while the rtmutex-deadlock-tester was around but today it seems
diff --git a/patches/0002-blk-mq-Always-complete-remote-completions-requests-i.patch b/patches/0002-blk-mq-Always-complete-remote-completions-requests-i.patch
new file mode 100644
index 000000000000..b96b949edc61
--- /dev/null
+++ b/patches/0002-blk-mq-Always-complete-remote-completions-requests-i.patch
@@ -0,0 +1,38 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 28 Oct 2020 11:07:09 +0100
+Subject: [PATCH 2/3] blk-mq: Always complete remote completions requests in
+ softirq
+
+Controllers with multiple queues have their IRQ-handlers pinned to a
+CPU. The core shouldn't need to complete the request on a remote CPU.
+
+Remove this case and always raise the softirq to complete the request.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ block/blk-mq.c | 14 +-------------
+ 1 file changed, 1 insertion(+), 13 deletions(-)
+
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -626,19 +626,7 @@ static void __blk_mq_complete_request_re
+ {
+ 	struct request *rq = data;
+ 
+-	/*
+-	 * For most of single queue controllers, there is only one irq vector
+-	 * for handling I/O completion, and the only irq's affinity is set
+-	 * to all possible CPUs. On most of ARCHs, this affinity means the irq
+-	 * is handled on one specific CPU.
+-	 *
+-	 * So complete I/O requests in softirq context in case of single queue
+-	 * devices to avoid degrading I/O performance due to irqsoff latency.
+-	 */
+-	if (rq->q->nr_hw_queues == 1)
+-		blk_mq_trigger_softirq(rq);
+-	else
+-		rq->q->mq_ops->complete(rq);
++	blk_mq_trigger_softirq(rq);
+ }
+ 
+ static inline bool blk_mq_complete_need_ipi(struct request *rq)
diff --git a/patches/0002-locking-rtmutex-Remove-output-from-deadlock-detector.patch b/patches/0002-locking-rtmutex-Remove-output-from-deadlock-detector.patch
index 5ba0b7240ea9..b317425feb04 100644
--- a/patches/0002-locking-rtmutex-Remove-output-from-deadlock-detector.patch
+++ b/patches/0002-locking-rtmutex-Remove-output-from-deadlock-detector.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Tue, 29 Sep 2020 16:05:11 +0200
-Subject: [PATCH 02/23] locking/rtmutex: Remove output from deadlock detector.
+Subject: [PATCH 02/22] locking/rtmutex: Remove output from deadlock detector.
 
 In commit
 	f5694788ad8da ("rt_mutex: Add lockdep annotations")
diff --git a/patches/0003-blk-mq-Use-llist_head-for-blk_cpu_done.patch b/patches/0003-blk-mq-Use-llist_head-for-blk_cpu_done.patch
new file mode 100644
index 000000000000..e80b8253bdef
--- /dev/null
+++ b/patches/0003-blk-mq-Use-llist_head-for-blk_cpu_done.patch
@@ -0,0 +1,165 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 28 Oct 2020 11:08:21 +0100
+Subject: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done
+
+With llist_head it is possible to avoid the locking (the irq-off region)
+when items are added. This makes it possible to add items on a remote
+CPU.
+llist_add() returns true if the list was previously empty. This can be
+used to invoke the SMP function call / raise softirq only if the first
+item was added (otherwise it is already pending).
+This simplifies the code a little and reduces the IRQ-off regions. With
+this change it is possible to reduce the SMP-function call to a simple
+__raise_softirq_irqoff().
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ block/blk-mq.c         | 78 +++++++++++++++----------------------------
+ include/linux/blkdev.h |  2 -
+ 2 files changed, 26 insertions(+), 54 deletions(-)
+
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -41,7 +41,7 @@
+ #include "blk-mq-sched.h"
+ #include "blk-rq-qos.h"
+ 
+-static DEFINE_PER_CPU(struct list_head, blk_cpu_done);
++static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+ 
+ static void blk_mq_poll_stats_start(struct request_queue *q);
+ static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
+@@ -565,68 +565,32 @@ void blk_mq_end_request(struct request *
+ }
+ EXPORT_SYMBOL(blk_mq_end_request);
+ 
+-/*
+- * Softirq action handler - move entries to local list and loop over them
+- * while passing them to the queue registered handler.
+- */
+-static __latent_entropy void blk_done_softirq(struct softirq_action *h)
++static void blk_complete_reqs(struct llist_head *cpu_list)
+ {
+-	struct list_head *cpu_list, local_list;
++	struct llist_node *entry;
++	struct request *rq, *rq_next;
+ 
+-	local_irq_disable();
+-	cpu_list = this_cpu_ptr(&blk_cpu_done);
+-	list_replace_init(cpu_list, &local_list);
+-	local_irq_enable();
++	entry = llist_del_all(cpu_list);
++	entry = llist_reverse_order(entry);
+ 
+-	while (!list_empty(&local_list)) {
+-		struct request *rq;
+-
+-		rq = list_entry(local_list.next, struct request, ipi_list);
+-		list_del_init(&rq->ipi_list);
++	llist_for_each_entry_safe(rq, rq_next, entry, ipi_list)
+ 		rq->q->mq_ops->complete(rq);
+-	}
+ }
+ 
+-static void blk_mq_trigger_softirq(struct request *rq)
++static __latent_entropy void blk_done_softirq(struct softirq_action *h)
+ {
+-	struct list_head *list;
+-	unsigned long flags;
+-
+-	local_irq_save(flags);
+-	list = this_cpu_ptr(&blk_cpu_done);
+-	list_add_tail(&rq->ipi_list, list);
+-
+-	/*
+-	 * If the list only contains our just added request, signal a raise of
+-	 * the softirq. If there are already entries there, someone already
+-	 * raised the irq but it hasn't run yet.
+-	 */
+-	if (list->next == &rq->ipi_list)
+-		raise_softirq_irqoff(BLOCK_SOFTIRQ);
+-	local_irq_restore(flags);
++	blk_complete_reqs(this_cpu_ptr(&blk_cpu_done));
+ }
+ 
+ static int blk_softirq_cpu_dead(unsigned int cpu)
+ {
+-	/*
+-	 * If a CPU goes away, splice its entries to the current CPU
+-	 * and trigger a run of the softirq
+-	 */
+-	local_irq_disable();
+-	list_splice_init(&per_cpu(blk_cpu_done, cpu),
+-			 this_cpu_ptr(&blk_cpu_done));
+-	raise_softirq_irqoff(BLOCK_SOFTIRQ);
+-	local_irq_enable();
+-
++	blk_complete_reqs(&per_cpu(blk_cpu_done, cpu));
+ 	return 0;
+ }
+ 
+-
+ static void __blk_mq_complete_request_remote(void *data)
+ {
+-	struct request *rq = data;
+-
+-	blk_mq_trigger_softirq(rq);
++	__raise_softirq_irqoff(BLOCK_SOFTIRQ);
+ }
+ 
+ static inline bool blk_mq_complete_need_ipi(struct request *rq)
+@@ -657,6 +621,7 @@ static inline bool blk_mq_complete_need_
+ 
+ bool blk_mq_complete_request_remote(struct request *rq)
+ {
++	struct llist_head *cpu_list;
+ 	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
+ 
+ 	/*
+@@ -667,14 +632,21 @@ bool blk_mq_complete_request_remote(stru
+ 		return false;
+ 
+ 	if (blk_mq_complete_need_ipi(rq)) {
+-		rq->csd.func = __blk_mq_complete_request_remote;
+-		rq->csd.info = rq;
+-		rq->csd.flags = 0;
+-		smp_call_function_single_async(rq->mq_ctx->cpu, &rq->csd);
++		unsigned int cpu;
++
++		cpu = rq->mq_ctx->cpu;
++		cpu_list = &per_cpu(blk_cpu_done, cpu);
++		if (llist_add(&rq->ipi_list, cpu_list)) {
++			rq->csd.func = __blk_mq_complete_request_remote;
++			rq->csd.flags = 0;
++			smp_call_function_single_async(cpu, &rq->csd);
++		}
+ 	} else {
+ 		if (rq->q->nr_hw_queues > 1)
+ 			return false;
+-		blk_mq_trigger_softirq(rq);
++		cpu_list = this_cpu_ptr(&blk_cpu_done);
++		if (llist_add(&rq->ipi_list, cpu_list))
++			raise_softirq(BLOCK_SOFTIRQ);
+ 	}
+ 
+ 	return true;
+@@ -3877,7 +3849,7 @@ static int __init blk_mq_init(void)
+ 	int i;
+ 
+ 	for_each_possible_cpu(i)
+-		INIT_LIST_HEAD(&per_cpu(blk_cpu_done, i));
++		init_llist_head(&per_cpu(blk_cpu_done, i));
+ 	open_softirq(BLOCK_SOFTIRQ, blk_done_softirq);
+ 
+ 	cpuhp_setup_state_nocalls(CPUHP_BLOCK_SOFTIRQ_DEAD,
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -154,7 +154,7 @@ struct request {
+ 	 */
+ 	union {
+ 		struct hlist_node hash;	/* merge hash */
+-		struct list_head ipi_list;
++		struct llist_node ipi_list;
+ 	};
+ 
+ 	/*
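The "notify only on the first insert" idiom this patch relies on generalizes
beyond blk-mq. Here is a hedged sketch of it; the done_item type and the
notify_cpu() helper are illustrative, only the llist_* calls are the real
<linux/llist.h> API used in the hunks above.

    #include <linux/llist.h>
    #include <linux/percpu.h>

    struct done_item {
        struct llist_node node;
        void (*complete)(struct done_item *item);
    };

    static DEFINE_PER_CPU(struct llist_head, done_list);

    /* stands in for raise_softirq() / smp_call_function_single_async() */
    static void notify_cpu(int cpu);

    static void queue_done(struct done_item *item, int cpu)
    {
        /*
         * llist_add() needs no lock or irq-off region and is safe from
         * remote CPUs; it returns true only when the list was empty,
         * so the notification fires exactly once per batch.
         */
        if (llist_add(&item->node, &per_cpu(done_list, cpu)))
            notify_cpu(cpu);
    }

    static void flush_done(struct llist_head *list)
    {
        struct done_item *item, *next;
        /* llist_del_all() yields newest-first; restore FIFO order */
        struct llist_node *entry = llist_reverse_order(llist_del_all(list));

        llist_for_each_entry_safe(item, next, entry, node)
            item->complete(item);
    }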
diff --git a/patches/0003-locking-rtmutex-Move-rt_mutex_init-outside-of-CONFIG.patch b/patches/0003-locking-rtmutex-Move-rt_mutex_init-outside-of-CONFIG.patch
index 6a89e32343ba..6cbb41b8b7bd 100644
--- a/patches/0003-locking-rtmutex-Move-rt_mutex_init-outside-of-CONFIG.patch
+++ b/patches/0003-locking-rtmutex-Move-rt_mutex_init-outside-of-CONFIG.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Tue, 29 Sep 2020 16:32:49 +0200
-Subject: [PATCH 03/23] locking/rtmutex: Move rt_mutex_init() outside of
+Subject: [PATCH 03/22] locking/rtmutex: Move rt_mutex_init() outside of
  CONFIG_DEBUG_RT_MUTEXES
 
 rt_mutex_init() only initializes lockdep if CONFIG_DEBUG_RT_MUTEXES is
diff --git a/patches/0004-locking-rtmutex-Remove-rt_mutex_timed_lock.patch b/patches/0004-locking-rtmutex-Remove-rt_mutex_timed_lock.patch
index f58a7401300c..a7c2235c44b4 100644
--- a/patches/0004-locking-rtmutex-Remove-rt_mutex_timed_lock.patch
+++ b/patches/0004-locking-rtmutex-Remove-rt_mutex_timed_lock.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Wed, 7 Oct 2020 12:11:33 +0200
-Subject: [PATCH 04/23] locking/rtmutex: Remove rt_mutex_timed_lock()
+Subject: [PATCH 04/22] locking/rtmutex: Remove rt_mutex_timed_lock()
 
 rt_mutex_timed_lock() has no callers since commit
 	c051b21f71d1f ("rtmutex: Confine deadlock logic to futex")
diff --git a/patches/0005-locking-rtmutex-Handle-the-various-new-futex-race-co.patch b/patches/0005-locking-rtmutex-Handle-the-various-new-futex-race-co.patch
index 0853ac5edb8e..02687998046c 100644
--- a/patches/0005-locking-rtmutex-Handle-the-various-new-futex-race-co.patch
+++ b/patches/0005-locking-rtmutex-Handle-the-various-new-futex-race-co.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Fri, 10 Jun 2011 11:04:15 +0200
-Subject: [PATCH 05/23] locking/rtmutex: Handle the various new futex race
+Subject: [PATCH 05/22] locking/rtmutex: Handle the various new futex race
  conditions
 
 RT opens a few new interesting race conditions in the rtmutex/futex
diff --git a/patches/0006-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch b/patches/0006-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
index 7a6645c7c6a6..26d919fdb2fc 100644
--- a/patches/0006-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
+++ b/patches/0006-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
@@ -1,6 +1,6 @@
 From: Steven Rostedt <rostedt@goodmis.org>
 Date: Tue, 14 Jul 2015 14:26:34 +0200
-Subject: [PATCH 06/23] futex: Fix bug on when a requeued RT task times out
+Subject: [PATCH 06/22] futex: Fix bug on when a requeued RT task times out
 
 Requeue with timeout causes a bug with PREEMPT_RT.
diff --git a/patches/0008-locking-rtmutex-Make-lock_killable-work.patch b/patches/0007-locking-rtmutex-Make-lock_killable-work.patch
index 8e871378cb80..f1e672e9f1bb 100644
--- a/patches/0008-locking-rtmutex-Make-lock_killable-work.patch
+++ b/patches/0007-locking-rtmutex-Make-lock_killable-work.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Sat, 1 Apr 2017 12:50:59 +0200
-Subject: [PATCH 08/23] locking/rtmutex: Make lock_killable work
+Subject: [PATCH 07/22] locking/rtmutex: Make lock_killable work
 
 Locking an rt mutex killable does not work because signal handling is
 restricted to TASK_INTERRUPTIBLE.
diff --git a/patches/0009-locking-spinlock-Split-the-lock-types-header.patch b/patches/0008-locking-spinlock-Split-the-lock-types-header.patch
index 029dd86e567e..d6b9e9d20504 100644
--- a/patches/0009-locking-spinlock-Split-the-lock-types-header.patch
+++ b/patches/0008-locking-spinlock-Split-the-lock-types-header.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Wed, 29 Jun 2011 19:34:01 +0200
-Subject: [PATCH 09/23] locking/spinlock: Split the lock types header
+Subject: [PATCH 08/22] locking/spinlock: Split the lock types header
 
 Split raw_spinlock into its own file and the remaining spinlock_t into
 its own non-RT header. The non-RT header will be replaced later by sleeping
diff --git a/patches/0010-locking-rtmutex-Avoid-include-hell.patch b/patches/0009-locking-rtmutex-Avoid-include-hell.patch
index 9b305295caf8..4eb12e8898da 100644
--- a/patches/0010-locking-rtmutex-Avoid-include-hell.patch
+++ b/patches/0009-locking-rtmutex-Avoid-include-hell.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Wed, 29 Jun 2011 20:06:39 +0200
-Subject: [PATCH 10/23] locking/rtmutex: Avoid include hell
+Subject: [PATCH 09/22] locking/rtmutex: Avoid include hell
 
 Include only the required raw types. This avoids pulling in the
 complete spinlock header which in turn requires rtmutex.h at some point.
diff --git a/patches/0011-lockdep-Reduce-header-files-in-debug_locks.h.patch b/patches/0010-lockdep-Reduce-header-files-in-debug_locks.h.patch
index fe0a6fad4153..fe0a6fad4153 100644
--- a/patches/0011-lockdep-Reduce-header-files-in-debug_locks.h.patch
+++ b/patches/0010-lockdep-Reduce-header-files-in-debug_locks.h.patch
diff --git a/patches/0012-locking-split-out-the-rbtree-definition.patch b/patches/0011-locking-split-out-the-rbtree-definition.patch
index 7dab8848df37..cb0ab1fb16e8 100644
--- a/patches/0012-locking-split-out-the-rbtree-definition.patch
+++ b/patches/0011-locking-split-out-the-rbtree-definition.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Fri, 14 Aug 2020 17:08:41 +0200
-Subject: [PATCH 12/23] locking: split out the rbtree definition
+Subject: [PATCH 11/22] locking: split out the rbtree definition
 
 rtmutex.h needs the definition for rb_root_cached. By including kernel.h
 we will get to spinlock.h which requires rtmutex.h again.
diff --git a/patches/0013-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch b/patches/0012-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch
index 3a27371b9df0..682db58002dc 100644
--- a/patches/0013-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch
+++ b/patches/0012-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 12 Oct 2017 16:14:22 +0200
-Subject: [PATCH 13/23] locking/rtmutex: Provide rt_mutex_slowlock_locked()
+Subject: [PATCH 12/22] locking/rtmutex: Provide rt_mutex_slowlock_locked()
 
 This is the inner-part of rt_mutex_slowlock(), required for rwsem-rt.
diff --git a/patches/0014-locking-rtmutex-export-lockdep-less-version-of-rt_mu.patch b/patches/0013-locking-rtmutex-export-lockdep-less-version-of-rt_mu.patch
index da06ec23a1bd..11d1fdbedf82 100644
--- a/patches/0014-locking-rtmutex-export-lockdep-less-version-of-rt_mu.patch
+++ b/patches/0013-locking-rtmutex-export-lockdep-less-version-of-rt_mu.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 12 Oct 2017 16:36:39 +0200
-Subject: [PATCH 14/23] locking/rtmutex: export lockdep-less version of
+Subject: [PATCH 13/22] locking/rtmutex: export lockdep-less version of
  rt_mutex's lock, trylock and unlock
 
 Required for lock implementation ontop of rtmutex.
diff --git a/patches/0015-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch b/patches/0014-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch
index 20af7d2fe22b..c02ef85dddcb 100644
--- a/patches/0015-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch
+++ b/patches/0014-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Sat, 25 Jun 2011 09:21:04 +0200
-Subject: [PATCH 15/23] sched: Add saved_state for tasks blocked on sleeping
+Subject: [PATCH 14/22] sched: Add saved_state for tasks blocked on sleeping
  locks
 
 Spinlocks are state preserving in !RT. RT changes the state when a
diff --git a/patches/0016-locking-rtmutex-add-sleeping-lock-implementation.patch b/patches/0015-locking-rtmutex-add-sleeping-lock-implementation.patch
index 3830e36e4126..af0f0fcdfaea 100644
--- a/patches/0016-locking-rtmutex-add-sleeping-lock-implementation.patch
+++ b/patches/0015-locking-rtmutex-add-sleeping-lock-implementation.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 12 Oct 2017 17:11:19 +0200
-Subject: [PATCH 16/23] locking/rtmutex: add sleeping lock implementation
+Subject: [PATCH 15/22] locking/rtmutex: add sleeping lock implementation
 
 Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
diff --git a/patches/0017-locking-rtmutex-Allow-rt_mutex_trylock-on-PREEMPT_RT.patch b/patches/0016-locking-rtmutex-Allow-rt_mutex_trylock-on-PREEMPT_RT.patch
index cc03f0bb1b3d..0412101446f0 100644
--- a/patches/0017-locking-rtmutex-Allow-rt_mutex_trylock-on-PREEMPT_RT.patch
+++ b/patches/0016-locking-rtmutex-Allow-rt_mutex_trylock-on-PREEMPT_RT.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Wed, 2 Dec 2015 11:34:07 +0100
-Subject: [PATCH 17/23] locking/rtmutex: Allow rt_mutex_trylock() on PREEMPT_RT
+Subject: [PATCH 16/22] locking/rtmutex: Allow rt_mutex_trylock() on PREEMPT_RT
 
 Non PREEMPT_RT kernel can deadlock on rt_mutex_trylock() in softirq
 context.
diff --git a/patches/0018-locking-rtmutex-add-mutex-implementation-based-on-rt.patch b/patches/0017-locking-rtmutex-add-mutex-implementation-based-on-rt.patch
index 21037b2580d8..06198cd72d91 100644
--- a/patches/0018-locking-rtmutex-add-mutex-implementation-based-on-rt.patch
+++ b/patches/0017-locking-rtmutex-add-mutex-implementation-based-on-rt.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 12 Oct 2017 17:17:03 +0200
-Subject: [PATCH 18/23] locking/rtmutex: add mutex implementation based on
+Subject: [PATCH 17/22] locking/rtmutex: add mutex implementation based on
  rtmutex
 
 Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
diff --git a/patches/0019-locking-rtmutex-add-rwsem-implementation-based-on-rt.patch b/patches/0018-locking-rtmutex-add-rwsem-implementation-based-on-rt.patch
index c3d3173593f4..a3f5e00e9a26 100644
--- a/patches/0019-locking-rtmutex-add-rwsem-implementation-based-on-rt.patch
+++ b/patches/0018-locking-rtmutex-add-rwsem-implementation-based-on-rt.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 12 Oct 2017 17:28:34 +0200
-Subject: [PATCH 19/23] locking/rtmutex: add rwsem implementation based on
+Subject: [PATCH 18/22] locking/rtmutex: add rwsem implementation based on
  rtmutex
 
 The RT specific R/W semaphore implementation restricts the number of readers
diff --git a/patches/0020-locking-rtmutex-add-rwlock-implementation-based-on-r.patch b/patches/0019-locking-rtmutex-add-rwlock-implementation-based-on-r.patch
index c9ab69d70be1..99f27069ce8f 100644
--- a/patches/0020-locking-rtmutex-add-rwlock-implementation-based-on-r.patch
+++ b/patches/0019-locking-rtmutex-add-rwlock-implementation-based-on-r.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 12 Oct 2017 17:18:06 +0200
-Subject: [PATCH 20/23] locking/rtmutex: add rwlock implementation based on
+Subject: [PATCH 19/22] locking/rtmutex: add rwlock implementation based on
  rtmutex
 
 The implementation is bias-based, similar to the rwsem implementation.
@@ -265,7 +265,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 +	lock->rtmutex.save_state = 1;
 +}
 +
-+int __read_rt_trylock(struct rt_rw_lock *lock)
++static int __read_rt_trylock(struct rt_rw_lock *lock)
 +{
 +	int r, old;
 +
diff --git a/patches/0021-locking-rtmutex-wire-up-RT-s-locking.patch b/patches/0020-locking-rtmutex-wire-up-RT-s-locking.patch
index d64a9058c47c..3faa45082217 100644
--- a/patches/0021-locking-rtmutex-wire-up-RT-s-locking.patch
+++ b/patches/0020-locking-rtmutex-wire-up-RT-s-locking.patch
@@ -1,6 +1,6 @@
 From: Thomas Gleixner <tglx@linutronix.de>
 Date: Thu, 12 Oct 2017 17:31:14 +0200
-Subject: [PATCH 21/23] locking/rtmutex: wire up RT's locking
+Subject: [PATCH 20/22] locking/rtmutex: wire up RT's locking
 
 Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
diff --git a/patches/0022-locking-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch b/patches/0021-locking-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch
index e59b8c3c3c11..05720343f8e9 100644
--- a/patches/0022-locking-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch
+++ b/patches/0021-locking-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Thu, 12 Oct 2017 17:34:38 +0200
-Subject: [PATCH 22/23] locking/rtmutex: add ww_mutex addon for mutex-rt
+Subject: [PATCH 21/22] locking/rtmutex: add ww_mutex addon for mutex-rt
 
 Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 ---
diff --git a/patches/0023-locking-rtmutex-Use-custom-scheduling-function-for-s.patch b/patches/0022-locking-rtmutex-Use-custom-scheduling-function-for-s.patch
index 0837da1c0e57..a2bc95cb8ec1 100644
--- a/patches/0023-locking-rtmutex-Use-custom-scheduling-function-for-s.patch
+++ b/patches/0022-locking-rtmutex-Use-custom-scheduling-function-for-s.patch
@@ -1,6 +1,6 @@
 From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 Date: Tue, 6 Oct 2020 13:07:17 +0200
-Subject: [PATCH 23/23] locking/rtmutex: Use custom scheduling function for
+Subject: [PATCH 22/22] locking/rtmutex: Use custom scheduling function for
  spin-schedule()
 
 PREEMPT_RT builds the rwsem, mutex, spinlock and rwlock typed locks on
diff --git a/patches/blk-mq-Don-t-IPI-requests-on-PREEMPT_RT.patch b/patches/blk-mq-Don-t-IPI-requests-on-PREEMPT_RT.patch
deleted file mode 100644
index 4754e90fd194..000000000000
--- a/patches/blk-mq-Don-t-IPI-requests-on-PREEMPT_RT.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Fri, 23 Oct 2020 12:21:51 +0200
-Subject: [PATCH] blk-mq: Don't IPI requests on PREEMPT_RT
-
-blk_mq_complete_request_remote() will dispatch request completion to
-another CPU via IPI if the CPU belongs to a different cache domain.
-
-This breaks on PREEMPT_RT because the IPI function will complete the
-request in IRQ context which includes acquiring spinlock_t typed locks.
-Completing the IPI request in softirq on the remote CPU is probably less
-efficient because it would require to wake ksoftirqd for this task
-(which runs at SCHED_OTHER).
-
-Ignoring the IPI request and completing the request locally is probably
-the best option. It be completed either in the IRQ-thread or at the end
-of the routine in softirq context.
-
-Let blk_mq_complete_need_ipi() return that there is no need for IPI on
-PREEMPT_RT.
-
-Reported-by: David Runge <dave@sleepmap.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
----
- block/blk-mq.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
---- a/block/blk-mq.c
-+++ b/block/blk-mq.c
-@@ -645,7 +645,7 @@ static inline bool blk_mq_complete_need_
- {
- 	int cpu = raw_smp_processor_id();
- 
--	if (!IS_ENABLED(CONFIG_SMP) ||
-+	if (!IS_ENABLED(CONFIG_SMP) || IS_ENABLED(CONFIG_PREEMPT_RT) ||
- 	    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
- 		return false;
- 
diff --git a/patches/block-mq-drop-preempt-disable.patch b/patches/block-mq-drop-preempt-disable.patch
index e04415e39b40..349623235226 100644
--- a/patches/block-mq-drop-preempt-disable.patch
+++ b/patches/block-mq-drop-preempt-disable.patch
@@ -13,7 +13,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 
 --- a/block/blk-mq.c
 +++ b/block/blk-mq.c
-@@ -1605,14 +1605,14 @@ static void __blk_mq_delay_run_hw_queue(
+@@ -1571,14 +1571,14 @@ static void __blk_mq_delay_run_hw_queue(
 		return;
 
 	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
diff --git a/patches/lib-test_lockup-Minimum-fix-to-get-it-compiled-on-PR.patch b/patches/lib-test_lockup-Minimum-fix-to-get-it-compiled-on-PR.patch
new file mode 100644
index 000000000000..04b1d80ad4ca
--- /dev/null
+++ b/patches/lib-test_lockup-Minimum-fix-to-get-it-compiled-on-PR.patch
@@ -0,0 +1,57 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 28 Oct 2020 18:55:27 +0100
+Subject: [PATCH] lib/test_lockup: Minimum fix to get it compiled on PREEMPT_RT
+
+On PREEMPT_RT the locks are quite different so they can't be tested as
+it is done below. The alternative is to test for the waitlock within
+rtmutex.
+
+This is the bare minimum to get it compiled. Problems which exist on
+PREEMPT_RT:
+- none of the locks (spinlock_t, rwlock_t, mutex_t, rw_semaphore) may be
+  acquired with disabled preemption or interrupts.
+  If I read the code correctly then it is possible to acquire a mutex
+  with disabled interrupts.
+  I don't know how to obtain a lock pointer. Technically they are not
+  exported to userland.
+
+- memory can not be allocated with disabled preemption or interrupts
+  even with GFP_ATOMIC.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ lib/test_lockup.c | 16 ++++++++++++++++
+ 1 file changed, 16 insertions(+)
+
+--- a/lib/test_lockup.c
++++ b/lib/test_lockup.c
+@@ -480,6 +480,21 @@ static int __init test_lockup_init(void)
+ 		return -EINVAL;
+ 
+ #ifdef CONFIG_DEBUG_SPINLOCK
++#ifdef CONFIG_PREEMPT_RT
++	if (test_magic(lock_spinlock_ptr,
++		       offsetof(spinlock_t, lock.wait_lock.magic),
++		       SPINLOCK_MAGIC) ||
++	    test_magic(lock_rwlock_ptr,
++		       offsetof(rwlock_t, rtmutex.wait_lock.magic),
++		       SPINLOCK_MAGIC) ||
++	    test_magic(lock_mutex_ptr,
++		       offsetof(struct mutex, lock.wait_lock.magic),
++		       SPINLOCK_MAGIC) ||
++	    test_magic(lock_rwsem_ptr,
++		       offsetof(struct rw_semaphore, rtmutex.wait_lock.magic),
++		       SPINLOCK_MAGIC))
++		return -EINVAL;
++#else
+ 	if (test_magic(lock_spinlock_ptr,
+ 		       offsetof(spinlock_t, rlock.magic),
+ 		       SPINLOCK_MAGIC) ||
+@@ -494,6 +509,7 @@ static int __init test_lockup_init(void)
+ 		       SPINLOCK_MAGIC))
+ 		return -EINVAL;
+ #endif
++#endif
+ 
+ 	if ((wait_state != TASK_RUNNING ||
+ 	    (call_cond_resched && !reacquire_locks) ||
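For readers unfamiliar with the probe being extended above: test_magic() peeks
at a known byte offset inside a user supplied lock pointer and compares it
against the debug magic. A rough user-space model of the idea follows; the
helper below is a paraphrase, not the lib/test_lockup.c code itself.

    #include <stddef.h>

    /* debug magic planted by CONFIG_DEBUG_SPINLOCK */
    #define SPINLOCK_MAGIC 0xdead4ead

    /* returns non-zero when the supplied pointer does not look like a lock */
    static int check_magic(unsigned long addr, int offset, unsigned int expected)
    {
        if (!addr)
            return 0; /* no lock supplied: nothing to validate */

        return *(unsigned int *)(addr + offset) != expected;
    }

On PREEMPT_RT the magic no longer lives at rlock.magic because spinlock_t
wraps an rtmutex, hence the offsetof(spinlock_t, lock.wait_lock.magic) chain
in the hunk above.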
diff --git a/patches/localversion.patch b/patches/localversion.patch
index 19d7ea05016c..d7c1a50b87ee 100644
--- a/patches/localversion.patch
+++ b/patches/localversion.patch
@@ -10,4 +10,4 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 --- /dev/null
 +++ b/localversion-rt
 @@ -0,0 +1 @@
-+-rt19
++-rt20
diff --git a/patches/mm-memcontrol-Disable-preemption-in-__mod_memcg_lruv.patch b/patches/mm-memcontrol-Disable-preemption-in-__mod_memcg_lruv.patch
new file mode 100644
index 000000000000..0ba8d2d99d95
--- /dev/null
+++ b/patches/mm-memcontrol-Disable-preemption-in-__mod_memcg_lruv.patch
@@ -0,0 +1,37 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 28 Oct 2020 18:15:32 +0100
+Subject: [PATCH] mm/memcontrol: Disable preemption in
+ __mod_memcg_lruvec_state()
+
+The callers expect disabled preemption/interrupts while invoking
+__mod_memcg_lruvec_state(). This works in mainline because a lock of
+some kind is acquired.
+
+Use preempt_disable_rt() where per-CPU variables are accessed and a
+stable pointer is expected. This is also done in __mod_zone_page_state()
+for the same reason.
+
+Cc: stable-rt@vger.kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ mm/memcontrol.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -821,6 +821,7 @@ void __mod_memcg_lruvec_state(struct lru
+ 	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+ 	memcg = pn->memcg;
+ 
++	preempt_disable_rt();
+ 	/* Update memcg */
+ 	__mod_memcg_state(memcg, idx, val);
+ 
+@@ -840,6 +841,7 @@ void __mod_memcg_lruvec_state(struct lru
+ 		x = 0;
+ 	}
+ 	__this_cpu_write(pn->lruvec_stat_cpu->count[idx], x);
++	preempt_enable_rt();
+ }
+ 
+ /**
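The pattern applied above is the same one __mod_zone_page_state() uses: pin
the task to the CPU while a per-CPU counter is read, modified and written
back. Sketched generically below with illustrative names; note that
preempt_disable_rt()/preempt_enable_rt() are helpers from the RT queue, not
mainline, and compile to nothing on !PREEMPT_RT (where the callers already
hold a lock or have interrupts disabled).

    #include <linux/atomic.h>
    #include <linux/percpu.h>
    #include <linux/preempt.h>

    static DEFINE_PER_CPU(long, stat_cpu);  /* per-CPU batch counter */
    static atomic_long_t stat_total = ATOMIC_LONG_INIT(0);

    static void mod_stat(long delta)
    {
        long x;

        preempt_disable_rt();   /* keep this CPU's copy stable on RT */
        x = delta + __this_cpu_read(stat_cpu);
        if (x > 64 || x < -64) {        /* arbitrary batch threshold */
            atomic_long_add(x, &stat_total);
            x = 0;
        }
        __this_cpu_write(stat_cpu, x);
        preempt_enable_rt();
    }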
diff --git a/patches/mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch b/patches/mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch
index d2486ad392f5..fbdee090e8c5 100644
--- a/patches/mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch
+++ b/patches/mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch
@@ -48,7 +48,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 
 --- a/mm/memcontrol.c
 +++ b/mm/memcontrol.c
-@@ -2301,7 +2301,7 @@ static void drain_all_stock(struct mem_c
+@@ -2303,7 +2303,7 @@ static void drain_all_stock(struct mem_c
 	 * as well as workers from this path always operate on the local
 	 * per-cpu data. CPU up doesn't touch memcg_stock at all.
 	 */
-@@ -2324,7 +2324,7 @@ static void drain_all_stock(struct mem_c
+@@ -2326,7 +2326,7 @@ static void drain_all_stock(struct mem_c
 		schedule_work_on(cpu, &stock->work);
 	}
 }
diff --git a/patches/mm-memcontrol-Provide-a-local_lock-for-per-CPU-memcg.patch b/patches/mm-memcontrol-Provide-a-local_lock-for-per-CPU-memcg.patch
index effaabd7d665..f82c37d5b343 100644
--- a/patches/mm-memcontrol-Provide-a-local_lock-for-per-CPU-memcg.patch
+++ b/patches/mm-memcontrol-Provide-a-local_lock-for-per-CPU-memcg.patch
@@ -20,7 +20,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 
 --- a/mm/memcontrol.c
 +++ b/mm/memcontrol.c
-@@ -2154,6 +2154,7 @@ void unlock_page_memcg(struct page *page
+@@ -2156,6 +2156,7 @@ void unlock_page_memcg(struct page *page
  EXPORT_SYMBOL(unlock_page_memcg);
 
  struct memcg_stock_pcp {
-@@ -2205,7 +2206,7 @@ static bool consume_stock(struct mem_cgr
+@@ -2207,7 +2208,7 @@ static bool consume_stock(struct mem_cgr
  	if (nr_pages > MEMCG_CHARGE_BATCH)
  		return ret;
 
-@@ -2213,7 +2214,7 @@ static bool consume_stock(struct mem_cgr
+@@ -2215,7 +2216,7 @@ static bool consume_stock(struct mem_cgr
  		ret = true;
  	}
 
-@@ -2248,14 +2249,14 @@ static void drain_local_stock(struct wor
+@@ -2250,14 +2251,14 @@ static void drain_local_stock(struct wor
  	 * The only protection from memory hotplug vs. drain_stock races is
  	 * that we always operate on local CPU stock here with IRQ disabled
  	 */
-@@ -2267,7 +2268,7 @@ static void refill_stock(struct mem_cgro
+@@ -2269,7 +2270,7 @@ static void refill_stock(struct mem_cgro
  	struct memcg_stock_pcp *stock;
  	unsigned long flags;
 
-@@ -2280,7 +2281,7 @@ static void refill_stock(struct mem_cgro
+@@ -2282,7 +2283,7 @@ static void refill_stock(struct mem_cgro
  	if (stock->nr_pages > MEMCG_CHARGE_BATCH)
  		drain_stock(stock);
 
-@@ -3084,7 +3085,7 @@ static bool consume_obj_stock(struct obj
+@@ -3086,7 +3087,7 @@ static bool consume_obj_stock(struct obj
  	unsigned long flags;
  	bool ret = false;
 
-@@ -3092,7 +3093,7 @@ static bool consume_obj_stock(struct obj
+@@ -3094,7 +3095,7 @@ static bool consume_obj_stock(struct obj
  		ret = true;
  	}
 
-@@ -3151,7 +3152,7 @@ static void refill_obj_stock(struct obj_
+@@ -3153,7 +3154,7 @@ static void refill_obj_stock(struct obj_
  	struct memcg_stock_pcp *stock;
  	unsigned long flags;
 
-@@ -3165,7 +3166,7 @@ static void refill_obj_stock(struct obj_
+@@ -3167,7 +3168,7 @@ static void refill_obj_stock(struct obj_
  	if (stock->nr_bytes > PAGE_SIZE)
  		drain_obj_stock(stock);
 
-@@ -7050,9 +7051,13 @@ static int __init mem_cgroup_init(void)
+@@ -7052,9 +7053,13 @@ static int __init mem_cgroup_init(void)
  	cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead", NULL,
  				  memcg_hotplug_cpu_dead);
diff --git a/patches/mm-memcontrol-do_not_disable_irq.patch b/patches/mm-memcontrol-do_not_disable_irq.patch
index 7a9663c1a82b..e5cdba20e942 100644
--- a/patches/mm-memcontrol-do_not_disable_irq.patch
+++ b/patches/mm-memcontrol-do_not_disable_irq.patch
@@ -36,7 +36,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 /* Whether legacy memory+swap accounting is active */
 static bool do_memsw_account(void)
 {
-@@ -5682,12 +5690,12 @@ static int mem_cgroup_move_account(struc
+@@ -5684,12 +5692,12 @@ static int mem_cgroup_move_account(struc
 	ret = 0;
 
-
 out_unlock:
 	unlock_page(page);
 out:
-@@ -6723,10 +6731,10 @@ int mem_cgroup_charge(struct page *page,
+@@ -6725,10 +6733,10 @@ int mem_cgroup_charge(struct page *page,
 	css_get(&memcg->css);
 	commit_charge(page, memcg);
 
-
 	if (PageSwapCache(page)) {
 		swp_entry_t entry = { .val = page_private(page) };
-@@ -6770,11 +6778,11 @@ static void uncharge_batch(const struct
+@@ -6772,11 +6780,11 @@ static void uncharge_batch(const struct
 		memcg_oom_recover(ug->memcg);
 	}
 
-
@@ -78,7 +78,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 
 	/* drop reference from uncharge_page */
 	css_put(&ug->memcg->css);
-@@ -6928,10 +6936,10 @@ void mem_cgroup_migrate(struct page *old
+@@ -6930,10 +6938,10 @@ void mem_cgroup_migrate(struct page *old
 	css_get(&memcg->css);
 	commit_charge(newpage, memcg);
 
-
 }
 
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
-@@ -7106,6 +7114,7 @@ void mem_cgroup_swapout(struct page *pag
+@@ -7108,6 +7116,7 @@ void mem_cgroup_swapout(struct page *pag
 	struct mem_cgroup *memcg, *swap_memcg;
 	unsigned int nr_entries;
 	unsigned short oldid;
-
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 	VM_BUG_ON_PAGE(page_count(page), page);
-@@ -7151,9 +7160,13 @@ void mem_cgroup_swapout(struct page *pag
+@@ -7153,9 +7162,13 @@ void mem_cgroup_swapout(struct page *pag
 	 * important here to have the interrupts disabled because it is the
 	 * only synchronisation we have for updating the per-CPU variables.
 	 */
diff --git a/patches/series b/patches/series
index 4a85452aa9e4..c6e540434395 100644
--- a/patches/series
+++ b/patches/series
@@ -79,8 +79,13 @@ io_wq-Make-io_wqe-lock-a-raw_spinlock_t.patch
 # 20200915074816.52zphpywj4zidspk@linutronix.de
 bus-mhi-Remove-include-of-rwlock_types.h.patch
 
-# 20201023110400.bx3uzsb7xy5jtsea@linutronix.de
-blk-mq-Don-t-IPI-requests-on-PREEMPT_RT.patch
+# 20201028141251.3608598-1-bigeasy@linutronix.de
+0001-blk-mq-Don-t-complete-on-a-remote-CPU-in-force-threa.patch
+0002-blk-mq-Always-complete-remote-completions-requests-i.patch
+0003-blk-mq-Use-llist_head-for-blk_cpu_done.patch
+
+# 20201028181041.xyeothhkouc3p4md@linutronix.de
+lib-test_lockup-Minimum-fix-to-get-it-compiled-on-PR.patch
 
 ############################################################
 # Ready for posting
@@ -146,22 +151,22 @@ tasklets-Use-static-line-for-functions.patch
 0004-locking-rtmutex-Remove-rt_mutex_timed_lock.patch
 0005-locking-rtmutex-Handle-the-various-new-futex-race-co.patch
 0006-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch
-0008-locking-rtmutex-Make-lock_killable-work.patch
-0009-locking-spinlock-Split-the-lock-types-header.patch
-0010-locking-rtmutex-Avoid-include-hell.patch
-0011-lockdep-Reduce-header-files-in-debug_locks.h.patch
-0012-locking-split-out-the-rbtree-definition.patch
-0013-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch
-0014-locking-rtmutex-export-lockdep-less-version-of-rt_mu.patch
-0015-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch
-0016-locking-rtmutex-add-sleeping-lock-implementation.patch
-0017-locking-rtmutex-Allow-rt_mutex_trylock-on-PREEMPT_RT.patch
-0018-locking-rtmutex-add-mutex-implementation-based-on-rt.patch
-0019-locking-rtmutex-add-rwsem-implementation-based-on-rt.patch
-0020-locking-rtmutex-add-rwlock-implementation-based-on-r.patch
-0021-locking-rtmutex-wire-up-RT-s-locking.patch
-0022-locking-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch
-0023-locking-rtmutex-Use-custom-scheduling-function-for-s.patch
+0007-locking-rtmutex-Make-lock_killable-work.patch
+0008-locking-spinlock-Split-the-lock-types-header.patch
+0009-locking-rtmutex-Avoid-include-hell.patch
+0010-lockdep-Reduce-header-files-in-debug_locks.h.patch
+0011-locking-split-out-the-rbtree-definition.patch
+0012-locking-rtmutex-Provide-rt_mutex_slowlock_locked.patch
+0013-locking-rtmutex-export-lockdep-less-version-of-rt_mu.patch
+0014-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch
+0015-locking-rtmutex-add-sleeping-lock-implementation.patch
+0016-locking-rtmutex-Allow-rt_mutex_trylock-on-PREEMPT_RT.patch
+0017-locking-rtmutex-add-mutex-implementation-based-on-rt.patch
+0018-locking-rtmutex-add-rwsem-implementation-based-on-rt.patch
+0019-locking-rtmutex-add-rwlock-implementation-based-on-r.patch
+0020-locking-rtmutex-wire-up-RT-s-locking.patch
+0021-locking-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch
+0022-locking-rtmutex-Use-custom-scheduling-function-for-s.patch
 
 ###############################################################
 # Stuff broken upstream and upstream wants something different
@@ -179,6 +184,7 @@ signal-revert-ptrace-preempt-magic.patch
 # PREEMPT NORT
 preempt-nort-rt-variants.patch
 mm-make-vmstat-rt-aware.patch
+mm-memcontrol-Disable-preemption-in-__mod_memcg_lruv.patch
 
 # seqcount
 # https://lkml.kernel.org/r/20200817000200.20993-1-rdunlap@infradead.org
diff --git a/patches/softirq-preempt-fix-3-re.patch b/patches/softirq-preempt-fix-3-re.patch
index 9448f670bb2c..367a926cead5 100644
--- a/patches/softirq-preempt-fix-3-re.patch
+++ b/patches/softirq-preempt-fix-3-re.patch
@@ -14,30 +14,11 @@ Reported-by: Carsten Emde <cbe@osadl.org>
 Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 
 ---
- block/blk-mq.c          |    2 ++
  include/linux/preempt.h |    3 +++
  lib/irq_poll.c          |    5 +++++
  net/core/dev.c          |    7 +++++++
- 4 files changed, 17 insertions(+)
+ 3 files changed, 15 insertions(+)
 
---- a/block/blk-mq.c
-+++ b/block/blk-mq.c
-@@ -604,6 +604,7 @@ static void blk_mq_trigger_softirq(struc
- 	if (list->next == &rq->ipi_list)
- 		raise_softirq_irqoff(BLOCK_SOFTIRQ);
- 	local_irq_restore(flags);
-+	preempt_check_resched_rt();
- }
- 
- static int blk_softirq_cpu_dead(unsigned int cpu)
-@@ -617,6 +618,7 @@ static int blk_softirq_cpu_dead(unsigned
- 		 this_cpu_ptr(&blk_cpu_done));
- 	raise_softirq_irqoff(BLOCK_SOFTIRQ);
- 	local_irq_enable();
-+	preempt_check_resched_rt();
- 
- 	return 0;
- }
 --- a/include/linux/preempt.h
 +++ b/include/linux/preempt.h
 @@ -187,8 +187,10 @@ do { \