author    Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2018-08-02 17:35:05 +0200
committer Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2018-08-02 17:35:05 +0200
commit    4b3ce7fa614d094486109e64cd40550dd8172c2f
tree      07b9468109fe8442aefe364eaf41a2804e375d42
parent    30255b10d2525de53ec1e348b0525a9176ebabd9
download  linux-rt-4b3ce7fa614d094486109e64cd40550dd8172c2f.tar.gz
[ANNOUNCE] v4.16.18-rt12
Dear RT folks!
I'm pleased to announce the v4.16.18-rt12 patch set.
Changes since v4.16.18-rt11:
- Mark RCU's "rcu_iw" irqwork to be invoked in hardirq context as
expected by RCU. Reported by John Ogness.
- Drop the "is_special_task_state()" check from rtmutex's custom
set_state function. This avoids a warning when the sleeping-lock code
restores a task back to this "special" state.
- If a kworker invokes schedule() it is possible that it wakes another
kworker and invokes schedule() again. Try to avoid the second
schedule(). Reported and patched by Daniel Bristot de Oliveira.
- Enable XEN on ARM64. Iain Hunter reported that there are no problems
with it, so there is no reason to keep it disabled.
Known issues
- A warning triggered in "rcu_note_context_switch", originating from
SyS_timer_gettime(). The issue was always there; it is only now
visible. Reported by Grygorii Strashko and Daniel Wagner.
The delta patch against v4.16.18-rt11 is appended below and can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.16/incr/patch-4.16.18-rt11-rt12.patch.xz
You can get this release via the git tree at:
git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.16.18-rt12
The RT patch against v4.16.18 can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.16/older/patch-4.16.18-rt12.patch.xz
The split quilt queue is available at:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.16/older/patches-4.16.18-rt12.tar.xz
Sebastian
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
 patches/Revert-arm64-xen-Make-XEN-depend-on-RT.patch               | 29
 patches/localversion.patch                                         |  2
 patches/rcu-mark-rcu_iw_handler-as-IRQ_WORK_HARD_IRQ.patch         | 31
 patches/sched-core-Avoid-__schedule-being-called-twice-in-a-.patch | 52
 patches/sched-drop-is_special_task_state-check-from-__set_cu.patch | 49
 patches/series                                                     |  4
 patches/workqueue-prevent-deadlock-stall.patch                     | 13
 7 files changed, 172 insertions(+), 8 deletions(-)
diff --git a/patches/Revert-arm64-xen-Make-XEN-depend-on-RT.patch b/patches/Revert-arm64-xen-Make-XEN-depend-on-RT.patch
new file mode 100644
index 000000000000..e7bef17576a1
--- /dev/null
+++ b/patches/Revert-arm64-xen-Make-XEN-depend-on-RT.patch
@@ -0,0 +1,29 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Thu, 2 Aug 2018 17:11:01 +0200
+Subject: [PATCH] Revert "arm64/xen: Make XEN depend on !RT"
+
+Iain Hunter reported that there are no problems with it so there is no
+reason to keep it disabled.
+
+Reported-by: Iain Hunter <drhunter95@gmail.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/arm64/Kconfig | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index d2370c7829451..3a6f0ae739d39 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -861,7 +861,7 @@ config XEN_DOM0
+ 
+ config XEN
+ 	bool "Xen guest support on ARM64"
+-	depends on ARM64 && OF && !PREEMPT_RT_FULL
++	depends on ARM64 && OF
+ 	select SWIOTLB_XEN
+ 	select PARAVIRT
+ 	help
+-- 
+2.18.0
+
diff --git a/patches/localversion.patch b/patches/localversion.patch
index 58842b503a27..12bd473a33f5 100644
--- a/patches/localversion.patch
+++ b/patches/localversion.patch
@@ -10,4 +10,4 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 --- /dev/null
 +++ b/localversion-rt
 @@ -0,0 +1 @@
-+-rt11
++-rt12
diff --git a/patches/rcu-mark-rcu_iw_handler-as-IRQ_WORK_HARD_IRQ.patch b/patches/rcu-mark-rcu_iw_handler-as-IRQ_WORK_HARD_IRQ.patch
new file mode 100644
index 000000000000..075b37c20bb8
--- /dev/null
+++ b/patches/rcu-mark-rcu_iw_handler-as-IRQ_WORK_HARD_IRQ.patch
@@ -0,0 +1,31 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Mon, 30 Jul 2018 18:26:48 +0200
+Subject: [PATCH] rcu: mark rcu_iw_handler() as IRQ_WORK_HARD_IRQ
+
+RCU's rcu_iw irq-work (rcu_iw_handler()) acquires the raw spinlock
+rnp->lock without disabling interrupts. The lock is held normally with
+disabled interrupts for a short time.
+Mark irq-work as IRQ_WORK_HARD_IRQ so it is invoked in IRQ context like
+on !RT.
+
+Reported-by: John Ogness <john.ogness@linutronix.de>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/rcu/tree.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 5a9c5abb17da..4fb983f4b9fd 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1294,6 +1294,7 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
+ 	    !rdp->rcu_iw_pending && rdp->rcu_iw_gpnum != rnp->gpnum &&
+ 	    (rnp->ffmask & rdp->grpmask)) {
+ 		init_irq_work(&rdp->rcu_iw, rcu_iw_handler);
++		rdp->rcu_iw.flags = IRQ_WORK_HARD_IRQ;
+ 		rdp->rcu_iw_pending = true;
+ 		rdp->rcu_iw_gpnum = rnp->gpnum;
+ 		irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);
+-- 
+2.18.0
+
diff --git a/patches/sched-core-Avoid-__schedule-being-called-twice-in-a-.patch b/patches/sched-core-Avoid-__schedule-being-called-twice-in-a-.patch
new file mode 100644
index 000000000000..c9b4fb8ab55a
--- /dev/null
+++ b/patches/sched-core-Avoid-__schedule-being-called-twice-in-a-.patch
@@ -0,0 +1,52 @@
+From: Daniel Bristot de Oliveira <bristot@redhat.com>
+Date: Mon, 30 Jul 2018 15:00:00 +0200
+Subject: [PATCH] sched/core: Avoid __schedule() being called twice in a row
+
+If a worker invokes schedule() then we may have the call chain:
+  schedule()
+   -> sched_submit_work()
+    -> wq_worker_sleeping()
+     -> wake_up_worker()
+      -> wake_up_process()
+
+The last wakeup may cause a schedule which is unnecessary because we are
+already in schedule() and do it anyway.
+
+Add a preempt_disable() + preempt_enable_no_resched() around
+wq_worker_sleeping() so the context switch could be delayed until
+__schedule().
+
+Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
+Cc: Clark Williams <williams@redhat.com>
+Cc: Tommaso Cucinotta <tommaso.cucinotta@sssup.it>
+Cc: Romulo da Silva de Oliveira <romulo.deoliveira@ufsc.br>
+Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Cc: Steven Rostedt <rostedt@goodmis.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Ingo Molnar <mingo@redhat.com>
+Cc: Peter Zijlstra <peterz@infradead.org>
+[bigeasy: rewrite changelog]
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/sched/core.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -3500,9 +3500,15 @@ static inline void sched_submit_work(str
+ 	/*
+ 	 * If a worker went to sleep, notify and ask workqueue whether
+ 	 * it wants to wake up a task to maintain concurrency.
++	 * As this function is called inside the schedule() context,
++	 * we disable preemption to avoid it calling schedule() again
++	 * in the possible wakeup of a kworker.
+ 	 */
+-	if (tsk->flags & PF_WQ_WORKER)
++	if (tsk->flags & PF_WQ_WORKER) {
++		preempt_disable();
+ 		wq_worker_sleeping(tsk);
++		preempt_enable_no_resched();
++	}
+ 
+ 	/*
+ 	 * If we are going to sleep and we have plugged IO queued,
diff --git a/patches/sched-drop-is_special_task_state-check-from-__set_cu.patch b/patches/sched-drop-is_special_task_state-check-from-__set_cu.patch
new file mode 100644
index 000000000000..3e028a304346
--- /dev/null
+++ b/patches/sched-drop-is_special_task_state-check-from-__set_cu.patch
@@ -0,0 +1,49 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 1 Aug 2018 10:33:05 +0200
+Subject: [PATCH] sched: drop is_special_task_state() check from
+ __set_current_state_no_track()
+
+The is_special_task_state() check in __set_current_state_no_track()
+has been wrongly placed. __set_current_state_no_track() is used in RT
+while a sleeping lock is acquired. It is used at the begin of the wait
+loop with TASK_UNINTERRUPTIBLE and while leaving it and restoring the
+original state. The latter part triggers the warning.
+
+Drop the special state check. This is only used within the sleeping lock
+implementation and the assignment happens while the PI lock is held.
+While at it, drop set_current_state_no_track() because it has no users.
+
+Cc: stable-rt@vger.kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/linux/sched.h | 12 +-----------
+ 1 file changed, 1 insertion(+), 11 deletions(-)
+
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -134,16 +134,7 @@ struct task_group;
+ 	} while (0)
+ 
+ #define __set_current_state_no_track(state_value)	\
+-	do {						\
+-		WARN_ON_ONCE(is_special_task_state(state_value));\
+-		current->state = (state_value);		\
+-	} while (0)
+-
+-#define set_current_state_no_track(state_value)	\
+-	do {						\
+-		WARN_ON_ONCE(is_special_task_state(state_value));\
+-		smp_store_mb(current->state, (state_value));	\
+-	} while (0)
++	current->state = (state_value);
+ 
+ #define set_special_state(state_value)			\
+ 	do {						\
+@@ -199,7 +190,6 @@ struct task_group;
+ 	smp_store_mb(current->state, (state_value))
+ 
+ #define __set_current_state_no_track(state_value)	__set_current_state(state_value)
+-#define set_current_state_no_track(state_value)	set_current_state(state_value)
+ 
+ /*
+  * set_special_state() should be used for those states when the blocking task
diff --git a/patches/series b/patches/series
index 134482620004..6a238889eedb 100644
--- a/patches/series
+++ b/patches/series
@@ -235,6 +235,7 @@ sched-disable-rt-group-sched-on-rt.patch
 net_disable_NET_RX_BUSY_POLL.patch
 arm-disable-NEON-in-kernel-mode.patch
 arm64-xen--Make-XEN-depend-on-non-rt.patch
+Revert-arm64-xen-Make-XEN-depend-on-RT.patch
 power-use-generic-rwsem-on-rt.patch
 powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch
 power-disable-highmem-on-rt.patch
@@ -384,6 +385,7 @@ rtmutex_dont_include_rcu.patch
 rtmutex-Provide-rt_mutex_slowlock_locked.patch
 rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch
 rtmutex-add-sleeping-lock-implementation.patch
+sched-drop-is_special_task_state-check-from-__set_cu.patch
 rtmutex-add-mutex-implementation-based-on-rtmutex.patch
 rtmutex-add-rwsem-implementation-based-on-rtmutex.patch
 rtmutex-add-rwlock-implementation-based-on-rtmutex.patch
@@ -448,6 +450,7 @@ workqueue-use-rcu.patch
 workqueue-use-locallock.patch
 work-queue-work-around-irqsafe-timer-optimization.patch
 workqueue-distangle-from-rq-lock.patch
+sched-core-Avoid-__schedule-being-called-twice-in-a-.patch
 
 # DEBUGOBJECTS
 debugobjects-rt.patch
@@ -474,6 +477,7 @@ net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch
 
 # irqwork
 irqwork-push_most_work_into_softirq_context.patch
 irqwork-Move-irq-safe-work-to-irq-context.patch
+rcu-mark-rcu_iw_handler-as-IRQ_WORK_HARD_IRQ.patch
 
 # CONSOLE. NEEDS more thought !!!
 printk-rt-aware.patch
diff --git a/patches/workqueue-prevent-deadlock-stall.patch b/patches/workqueue-prevent-deadlock-stall.patch
index b5c60e78ee99..d376f19ba75d 100644
--- a/patches/workqueue-prevent-deadlock-stall.patch
+++ b/patches/workqueue-prevent-deadlock-stall.patch
@@ -37,13 +37,13 @@ Cc: Richard Weinberger <richard.weinberger@gmail.com>
 Cc: Steven Rostedt <rostedt@goodmis.org>
 ---
- kernel/sched/core.c |  7 ++++--
+ kernel/sched/core.c |  6 +++--
  kernel/workqueue.c  | 60 ++++++++++++++++++++++++++++++++++++++++------------
- 2 files changed, 52 insertions(+), 15 deletions(-)
+ 2 files changed, 51 insertions(+), 15 deletions(-)
 
 --- a/kernel/sched/core.c
 +++ b/kernel/sched/core.c
-@@ -3550,9 +3550,8 @@ void __noreturn do_task_dead(void)
+@@ -3540,9 +3540,8 @@ void __noreturn do_task_dead(void)
  static inline void sched_submit_work(struct task_struct *tsk)
  {
@@ -54,11 +54,10 @@ Cc: Steven Rostedt <rostedt@goodmis.org>
  	/*
  	 * If a worker went to sleep, notify and ask workqueue whether
  	 * it wants to wake up a task to maintain concurrency.
-@@ -3560,6 +3559,10 @@ static inline void sched_submit_work(str
- 	if (tsk->flags & PF_WQ_WORKER)
- 		wq_worker_sleeping(tsk);
+@@ -3556,6 +3555,9 @@ static inline void sched_submit_work(str
+ 		preempt_enable_no_resched();
+ 	}
-+
 +	if (tsk_is_pi_blocked(tsk))
 +		return;
 +