author		Sebastian Andrzej Siewior <bigeasy@linutronix.de>	2020-12-04 18:17:39 +0100
committer	Sebastian Andrzej Siewior <bigeasy@linutronix.de>	2020-12-04 18:17:39 +0100
commit		f14f9210377d5756b475e0e9993c9c954037569b (patch)
tree		d017cd964f59b4bc8364a6585c58b278cdfb5805
parent		0162011ad222d71ed173bc3c32f847aa245555fa (diff)
download	linux-rt-5.10-rc6-rt14-patches.tar.gz
[ANNOUNCE] v5.10-rc6-rt14 (tag: v5.10-rc6-rt14-patches)
Dear RT folks!

I'm pleased to announce the v5.10-rc6-rt14 patch set.

Changes since v5.10-rc6-rt13:

  - Update Thomas Gleixner's softirq patches. This updated version has
    been posted as v2. This update also includes a handful of patches by
    Frederic Weisbecker which were required as a prerequisite.

Known issues
  - None.

The delta patch against v5.10-rc6-rt13 is appended below and can be found here:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.10/incr/patch-5.10-rc6-rt13-rt14.patch.xz

You can get this release via the git tree at:

    git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v5.10-rc6-rt14

The RT patch against v5.10-rc6 can be found here:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.10/older/patch-5.10-rc6-rt14.patch.xz

The split quilt queue is available at:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.10/older/patches-5.10-rc6-rt14.tar.xz

Sebastian

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-rw-r--r--patches/0001-parisc-Remove-bogus-__IRQ_STAT-macro.patch9
-rw-r--r--patches/0001-sched-cputime-Remove-symbol-exports-from-IRQ-time-ac.patch64
-rw-r--r--patches/0002-s390-vtime-Use-the-generic-IRQ-entry-accounting.patch111
-rw-r--r--patches/0002-sh-Get-rid-of-nmi_count.patch9
-rw-r--r--patches/0003-irqstat-Get-rid-of-nmi_count-and-__IRQ_STAT.patch6
-rw-r--r--patches/0003-sched-vtime-Consolidate-IRQ-time-accounting.patch288
-rw-r--r--patches/0004-irqtime-Move-irqtime-entry-accounting-after-irq-offs.patch200
-rw-r--r--patches/0004-um-irqstat-Get-rid-of-the-duplicated-declarations.patch10
-rw-r--r--patches/0005-ARM-irqstat-Get-rid-of-duplicated-declaration.patch11
-rw-r--r--patches/0005-irq-Call-tick_irq_enter-inside-HARDIRQ_OFFSET.patch44
-rw-r--r--patches/0006-arm64-irqstat-Get-rid-of-duplicated-declaration.patch12
-rw-r--r--patches/0007-asm-generic-irqstat-Add-optional-__nmi_count-member.patch6
-rw-r--r--patches/0008-sh-irqstat-Use-the-generic-irq_cpustat_t.patch9
-rw-r--r--patches/0009-irqstat-Move-declaration-into-asm-generic-hardirq.h.patch6
-rw-r--r--patches/0010-preempt-Cleanup-the-macro-maze-a-bit.patch6
-rw-r--r--patches/0011-softirq-Move-related-code-into-one-section.patch6
-rw-r--r--patches/0012-sh-irq-Add-missing-closing-parentheses-in-arch_show_.patch34
-rw-r--r--patches/irqtime-Use-irq_count-instead-of-preempt_count.patch32
-rw-r--r--patches/localversion.patch2
-rw-r--r--patches/rcu_Prevent_false_positive_softirq_warning_on_RT.patch (renamed from patches/0016-rcu-Prevent-false-positive-softirq-warning-on-RT.patch)7
-rw-r--r--patches/series29
-rw-r--r--patches/softirq_Add_RT_specific_softirq_accounting.patch (renamed from patches/0012-softirq-Add-RT-specific-softirq-accounting.patch)21
-rw-r--r--patches/softirq_Make_softirq_control_and_processing_RT_aware.patch (renamed from patches/0014-softirq-Make-softirq-control-and-processing-RT-aware.patch)66
-rw-r--r--patches/softirq_Move_various_protections_into_inline_helpers.patch (renamed from patches/0013-softirq-Move-various-protections-into-inline-helpers.patch)49
-rw-r--r--patches/softirq_Replace_barrier_with_cpu_relax_in_tasklet_unlock_wait_.patch (renamed from patches/0017-softirq-Replace-barrier-with-cpu_relax-in-tasklet_un.patch)8
-rw-r--r--patches/tasklets_Prevent_kill_unlock_wait_deadlock_on_RT.patch (renamed from patches/0019-tasklets-Prevent-kill-unlock_wait-deadlock-on-RT.patch)25
-rw-r--r--patches/tasklets_Use_static_inlines_for_stub_implementations.patch (renamed from patches/0018-tasklets-Use-static-inlines-for-stub-implementations.patch)7
-rw-r--r--patches/tick_sched_Prevent_false_positive_softirq_pending_warnings_on_RT.patch (renamed from patches/0015-tick-sched-Prevent-false-positive-softirq-pending-wa.patch)8
28 files changed, 926 insertions, 159 deletions
diff --git a/patches/0001-parisc-Remove-bogus-__IRQ_STAT-macro.patch b/patches/0001-parisc-Remove-bogus-__IRQ_STAT-macro.patch
index 3f1cd47d14a7..ac5257e2d055 100644
--- a/patches/0001-parisc-Remove-bogus-__IRQ_STAT-macro.patch
+++ b/patches/0001-parisc-Remove-bogus-__IRQ_STAT-macro.patch
@@ -1,13 +1,12 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:22:54 +0100
-Subject: [PATCH 01/19] parisc: Remove bogus __IRQ_STAT macro
+Date: Fri, 13 Nov 2020 15:02:08 +0100
+Subject: [PATCH 01/12] parisc: Remove bogus __IRQ_STAT macro
This is a leftover from a historical array based implementation and unused.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
-Cc: Helge Deller <deller@gmx.de>
-Cc: linux-parisc@vger.kernel.org
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141732.680780121@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/parisc/include/asm/hardirq.h | 1 -
diff --git a/patches/0001-sched-cputime-Remove-symbol-exports-from-IRQ-time-ac.patch b/patches/0001-sched-cputime-Remove-symbol-exports-from-IRQ-time-ac.patch
new file mode 100644
index 000000000000..f16e003a4f43
--- /dev/null
+++ b/patches/0001-sched-cputime-Remove-symbol-exports-from-IRQ-time-ac.patch
@@ -0,0 +1,64 @@
+From: Frederic Weisbecker <frederic@kernel.org>
+Date: Wed, 2 Dec 2020 12:57:28 +0100
+Subject: [PATCH 1/5] sched/cputime: Remove symbol exports from IRQ time
+ accounting
+
+account_irq_enter_time() and account_irq_exit_time() are not called
+from modules. EXPORT_SYMBOL_GPL() can be safely removed from the IRQ
+cputime accounting functions called from there.
+
+Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20201202115732.27827-2-frederic@kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/s390/kernel/vtime.c | 10 +++++-----
+ kernel/sched/cputime.c | 2 --
+ 2 files changed, 5 insertions(+), 7 deletions(-)
+
+--- a/arch/s390/kernel/vtime.c
++++ b/arch/s390/kernel/vtime.c
+@@ -226,7 +226,7 @@ void vtime_flush(struct task_struct *tsk
+ * Update process times based on virtual cpu times stored by entry.S
+ * to the lowcore fields user_timer, system_timer & steal_clock.
+ */
+-void vtime_account_irq_enter(struct task_struct *tsk)
++void vtime_account_kernel(struct task_struct *tsk)
+ {
+ u64 timer;
+
+@@ -245,12 +245,12 @@ void vtime_account_irq_enter(struct task
+
+ virt_timer_forward(timer);
+ }
+-EXPORT_SYMBOL_GPL(vtime_account_irq_enter);
+-
+-void vtime_account_kernel(struct task_struct *tsk)
+-__attribute__((alias("vtime_account_irq_enter")));
+ EXPORT_SYMBOL_GPL(vtime_account_kernel);
+
++void vtime_account_irq_enter(struct task_struct *tsk)
++__attribute__((alias("vtime_account_kernel")));
++
++
+ /*
+ * Sorted add to a list. List is linear searched until first bigger
+ * element is found.
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -71,7 +71,6 @@ void irqtime_account_irq(struct task_str
+ else if (in_serving_softirq() && curr != this_cpu_ksoftirqd())
+ irqtime_account_delta(irqtime, delta, CPUTIME_SOFTIRQ);
+ }
+-EXPORT_SYMBOL_GPL(irqtime_account_irq);
+
+ static u64 irqtime_tick_accounted(u64 maxtime)
+ {
+@@ -434,7 +433,6 @@ void vtime_account_irq_enter(struct task
+ else
+ vtime_account_kernel(tsk);
+ }
+-EXPORT_SYMBOL_GPL(vtime_account_irq_enter);
+ #endif /* __ARCH_HAS_VTIME_ACCOUNT */
+
+ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
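Note that the s390 hunk above also flips which symbol is the real function and which is the alias, so the remaining EXPORT_SYMBOL_GPL() sits on the actual implementation. For reference, a minimal sketch of the GCC alias attribute this relies on (hypothetical names, not kernel code):

/* Both names resolve to the same machine code; only the canonical
 * symbol needs to carry an export. */
void real_impl(void)
{
        /* ... actual work ... */
}

void alias_name(void) __attribute__((alias("real_impl")));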
diff --git a/patches/0002-s390-vtime-Use-the-generic-IRQ-entry-accounting.patch b/patches/0002-s390-vtime-Use-the-generic-IRQ-entry-accounting.patch
new file mode 100644
index 000000000000..c76ff7547ae6
--- /dev/null
+++ b/patches/0002-s390-vtime-Use-the-generic-IRQ-entry-accounting.patch
@@ -0,0 +1,111 @@
+From: Frederic Weisbecker <frederic@kernel.org>
+Date: Wed, 2 Dec 2020 12:57:29 +0100
+Subject: [PATCH 2/5] s390/vtime: Use the generic IRQ entry accounting
+
+s390 has its own version of IRQ entry accounting because it doesn't
+account the idle time the same way the other architectures do. Only
+the actual idle sleep time is accounted as idle time, the rest of the
+idle task execution is accounted as system time.
+
+Make the generic IRQ entry accounting aware of architectures that have
+their own way of accounting idle time and convert s390 to use it.
+
+This prepares s390 to get involved in further consolidations of IRQ
+time accounting.
+
+Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20201202115732.27827-3-frederic@kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/Kconfig | 7 ++++++-
+ arch/s390/Kconfig | 1 +
+ arch/s390/include/asm/vtime.h | 1 -
+ arch/s390/kernel/vtime.c | 4 ----
+ kernel/sched/cputime.c | 13 ++-----------
+ 5 files changed, 9 insertions(+), 17 deletions(-)
+
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -627,6 +627,12 @@ config HAVE_TIF_NOHZ
+ config HAVE_VIRT_CPU_ACCOUNTING
+ bool
+
++config HAVE_VIRT_CPU_ACCOUNTING_IDLE
++ bool
++ help
++ Architecture has its own way to account idle CPU time and therefore
++ doesn't implement vtime_account_idle().
++
+ config ARCH_HAS_SCALED_CPUTIME
+ bool
+
+@@ -641,7 +647,6 @@ config HAVE_VIRT_CPU_ACCOUNTING_GEN
+ some 32-bit arches may require multiple accesses, so proper
+ locking is needed to protect against concurrent accesses.
+
+-
+ config HAVE_IRQ_TIME_ACCOUNTING
+ bool
+ help
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -181,6 +181,7 @@ config S390
+ select HAVE_RSEQ
+ select HAVE_SYSCALL_TRACEPOINTS
+ select HAVE_VIRT_CPU_ACCOUNTING
++ select HAVE_VIRT_CPU_ACCOUNTING_IDLE
+ select IOMMU_HELPER if PCI
+ select IOMMU_SUPPORT if PCI
+ select MODULES_USE_ELF_RELA
+--- a/arch/s390/include/asm/vtime.h
++++ b/arch/s390/include/asm/vtime.h
+@@ -2,7 +2,6 @@
+ #ifndef _S390_VTIME_H
+ #define _S390_VTIME_H
+
+-#define __ARCH_HAS_VTIME_ACCOUNT
+ #define __ARCH_HAS_VTIME_TASK_SWITCH
+
+ #endif /* _S390_VTIME_H */
+--- a/arch/s390/kernel/vtime.c
++++ b/arch/s390/kernel/vtime.c
+@@ -247,10 +247,6 @@ void vtime_account_kernel(struct task_st
+ }
+ EXPORT_SYMBOL_GPL(vtime_account_kernel);
+
+-void vtime_account_irq_enter(struct task_struct *tsk)
+-__attribute__((alias("vtime_account_kernel")));
+-
+-
+ /*
+ * Sorted add to a list. List is linear searched until first bigger
+ * element is found.
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -417,23 +417,14 @@ void vtime_task_switch(struct task_struc
+ }
+ # endif
+
+-/*
+- * Archs that account the whole time spent in the idle task
+- * (outside irq) as idle time can rely on this and just implement
+- * vtime_account_kernel() and vtime_account_idle(). Archs that
+- * have other meaning of the idle time (s390 only includes the
+- * time spent by the CPU when it's in low power mode) must override
+- * vtime_account().
+- */
+-#ifndef __ARCH_HAS_VTIME_ACCOUNT
+ void vtime_account_irq_enter(struct task_struct *tsk)
+ {
+- if (!in_interrupt() && is_idle_task(tsk))
++ if (!IS_ENABLED(CONFIG_HAVE_VIRT_CPU_ACCOUNTING_IDLE) &&
++ !in_interrupt() && is_idle_task(tsk))
+ vtime_account_idle(tsk);
+ else
+ vtime_account_kernel(tsk);
+ }
+-#endif /* __ARCH_HAS_VTIME_ACCOUNT */
+
+ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ u64 *ut, u64 *st)
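The IS_ENABLED() test above replaces the old __ARCH_HAS_VTIME_ACCOUNT #ifdef. As a sketch of the pattern: IS_ENABLED(CONFIG_*) expands to a constant 1 or 0, so the dead branch is still type-checked but compiled out, and no #ifdef is needed around otherwise shared C code:

if (!IS_ENABLED(CONFIG_HAVE_VIRT_CPU_ACCOUNTING_IDLE) &&
    !in_interrupt() && is_idle_task(tsk))
        vtime_account_idle(tsk); /* archs without their own idle accounting */
else
        vtime_account_kernel(tsk);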
diff --git a/patches/0002-sh-Get-rid-of-nmi_count.patch b/patches/0002-sh-Get-rid-of-nmi_count.patch
index fe2ebefeca63..2493189d3c08 100644
--- a/patches/0002-sh-Get-rid-of-nmi_count.patch
+++ b/patches/0002-sh-Get-rid-of-nmi_count.patch
@@ -1,14 +1,13 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:25:34 +0100
-Subject: [PATCH 02/19] sh: Get rid of nmi_count()
+Date: Fri, 13 Nov 2020 15:02:09 +0100
+Subject: [PATCH 02/12] sh: Get rid of nmi_count()
nmi_count() is a historical leftover and SH is the only user. Replace it
with regular per cpu accessors.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
-Cc: Rich Felker <dalias@libc.org>
-Cc: linux-sh@vger.kernel.org
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141732.844232404@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/sh/kernel/irq.c | 2 +-
diff --git a/patches/0003-irqstat-Get-rid-of-nmi_count-and-__IRQ_STAT.patch b/patches/0003-irqstat-Get-rid-of-nmi_count-and-__IRQ_STAT.patch
index d017013b185b..85064cd7668c 100644
--- a/patches/0003-irqstat-Get-rid-of-nmi_count-and-__IRQ_STAT.patch
+++ b/patches/0003-irqstat-Get-rid-of-nmi_count-and-__IRQ_STAT.patch
@@ -1,10 +1,12 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:27:38 +0100
-Subject: [PATCH 03/19] irqstat: Get rid of nmi_count() and __IRQ_STAT()
+Date: Fri, 13 Nov 2020 15:02:10 +0100
+Subject: [PATCH 03/12] irqstat: Get rid of nmi_count() and __IRQ_STAT()
Nothing uses this anymore.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141733.005212732@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/irq_cpustat.h | 4 ----
diff --git a/patches/0003-sched-vtime-Consolidate-IRQ-time-accounting.patch b/patches/0003-sched-vtime-Consolidate-IRQ-time-accounting.patch
new file mode 100644
index 000000000000..79d6db6f5740
--- /dev/null
+++ b/patches/0003-sched-vtime-Consolidate-IRQ-time-accounting.patch
@@ -0,0 +1,288 @@
+From: Frederic Weisbecker <frederic@kernel.org>
+Date: Wed, 2 Dec 2020 12:57:30 +0100
+Subject: [PATCH 3/5] sched/vtime: Consolidate IRQ time accounting
+
+The 3 architectures implementing CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+all have their own version of irq time accounting that dispatch the
+cputime to the appropriate index: hardirq, softirq, system, idle,
+guest... from an all-in-one function.
+
+Instead of having these ad-hoc versions, move the cputime destination
+dispatch decision to the core code and leave only the actual per-index
+cputime accounting to the architecture.
+
+Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20201202115732.27827-4-frederic@kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/ia64/kernel/time.c | 20 ++++++++++++----
+ arch/powerpc/kernel/time.c | 56 ++++++++++++++++++++++++++++++++-------------
+ arch/s390/kernel/vtime.c | 45 +++++++++++++++++++++++++-----------
+ include/linux/vtime.h | 16 ++++--------
+ kernel/sched/cputime.c | 13 +++++++---
+ 5 files changed, 102 insertions(+), 48 deletions(-)
+
+--- a/arch/ia64/kernel/time.c
++++ b/arch/ia64/kernel/time.c
+@@ -138,12 +138,8 @@ void vtime_account_kernel(struct task_st
+ struct thread_info *ti = task_thread_info(tsk);
+ __u64 stime = vtime_delta(tsk);
+
+- if ((tsk->flags & PF_VCPU) && !irq_count())
++ if (tsk->flags & PF_VCPU)
+ ti->gtime += stime;
+- else if (hardirq_count())
+- ti->hardirq_time += stime;
+- else if (in_serving_softirq())
+- ti->softirq_time += stime;
+ else
+ ti->stime += stime;
+ }
+@@ -156,6 +152,20 @@ void vtime_account_idle(struct task_stru
+ ti->idle_time += vtime_delta(tsk);
+ }
+
++void vtime_account_softirq(struct task_struct *tsk)
++{
++ struct thread_info *ti = task_thread_info(tsk);
++
++ ti->softirq_time += vtime_delta(tsk);
++}
++
++void vtime_account_hardirq(struct task_struct *tsk)
++{
++ struct thread_info *ti = task_thread_info(tsk);
++
++ ti->hardirq_time += vtime_delta(tsk);
++}
++
+ #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
+
+ static irqreturn_t
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -311,12 +311,11 @@ static unsigned long vtime_delta_scaled(
+ return stime_scaled;
+ }
+
+-static unsigned long vtime_delta(struct task_struct *tsk,
++static unsigned long vtime_delta(struct cpu_accounting_data *acct,
+ unsigned long *stime_scaled,
+ unsigned long *steal_time)
+ {
+ unsigned long now, stime;
+- struct cpu_accounting_data *acct = get_accounting(tsk);
+
+ WARN_ON_ONCE(!irqs_disabled());
+
+@@ -331,29 +330,30 @@ static unsigned long vtime_delta(struct
+ return stime;
+ }
+
++static void vtime_delta_kernel(struct cpu_accounting_data *acct,
++ unsigned long *stime, unsigned long *stime_scaled)
++{
++ unsigned long steal_time;
++
++ *stime = vtime_delta(acct, stime_scaled, &steal_time);
++ *stime -= min(*stime, steal_time);
++ acct->steal_time += steal_time;
++}
++
+ void vtime_account_kernel(struct task_struct *tsk)
+ {
+- unsigned long stime, stime_scaled, steal_time;
+ struct cpu_accounting_data *acct = get_accounting(tsk);
++ unsigned long stime, stime_scaled;
+
+- stime = vtime_delta(tsk, &stime_scaled, &steal_time);
+-
+- stime -= min(stime, steal_time);
+- acct->steal_time += steal_time;
++ vtime_delta_kernel(acct, &stime, &stime_scaled);
+
+- if ((tsk->flags & PF_VCPU) && !irq_count()) {
++ if (tsk->flags & PF_VCPU) {
+ acct->gtime += stime;
+ #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
+ acct->utime_scaled += stime_scaled;
+ #endif
+ } else {
+- if (hardirq_count())
+- acct->hardirq_time += stime;
+- else if (in_serving_softirq())
+- acct->softirq_time += stime;
+- else
+- acct->stime += stime;
+-
++ acct->stime += stime;
+ #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
+ acct->stime_scaled += stime_scaled;
+ #endif
+@@ -366,10 +366,34 @@ void vtime_account_idle(struct task_stru
+ unsigned long stime, stime_scaled, steal_time;
+ struct cpu_accounting_data *acct = get_accounting(tsk);
+
+- stime = vtime_delta(tsk, &stime_scaled, &steal_time);
++ stime = vtime_delta(acct, &stime_scaled, &steal_time);
+ acct->idle_time += stime + steal_time;
+ }
+
++static void vtime_account_irq_field(struct cpu_accounting_data *acct,
++ unsigned long *field)
++{
++ unsigned long stime, stime_scaled;
++
++ vtime_delta_kernel(acct, &stime, &stime_scaled);
++ *field += stime;
++#ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
++ acct->stime_scaled += stime_scaled;
++#endif
++}
++
++void vtime_account_softirq(struct task_struct *tsk)
++{
++ struct cpu_accounting_data *acct = get_accounting(tsk);
++ vtime_account_irq_field(acct, &acct->softirq_time);
++}
++
++void vtime_account_hardirq(struct task_struct *tsk)
++{
++ struct cpu_accounting_data *acct = get_accounting(tsk);
++ vtime_account_irq_field(acct, &acct->hardirq_time);
++}
++
+ static void vtime_flush_scaled(struct task_struct *tsk,
+ struct cpu_accounting_data *acct)
+ {
+--- a/arch/s390/kernel/vtime.c
++++ b/arch/s390/kernel/vtime.c
+@@ -222,31 +222,50 @@ void vtime_flush(struct task_struct *tsk
+ S390_lowcore.avg_steal_timer = avg_steal;
+ }
+
++static u64 vtime_delta(void)
++{
++ u64 timer = S390_lowcore.last_update_timer;
++
++ S390_lowcore.last_update_timer = get_vtimer();
++
++ return timer - S390_lowcore.last_update_timer;
++}
++
+ /*
+ * Update process times based on virtual cpu times stored by entry.S
+ * to the lowcore fields user_timer, system_timer & steal_clock.
+ */
+ void vtime_account_kernel(struct task_struct *tsk)
+ {
+- u64 timer;
+-
+- timer = S390_lowcore.last_update_timer;
+- S390_lowcore.last_update_timer = get_vtimer();
+- timer -= S390_lowcore.last_update_timer;
++ u64 delta = vtime_delta();
+
+- if ((tsk->flags & PF_VCPU) && (irq_count() == 0))
+- S390_lowcore.guest_timer += timer;
+- else if (hardirq_count())
+- S390_lowcore.hardirq_timer += timer;
+- else if (in_serving_softirq())
+- S390_lowcore.softirq_timer += timer;
++ if (tsk->flags & PF_VCPU)
++ S390_lowcore.guest_timer += delta;
+ else
+- S390_lowcore.system_timer += timer;
++ S390_lowcore.system_timer += delta;
+
+- virt_timer_forward(timer);
++ virt_timer_forward(delta);
+ }
+ EXPORT_SYMBOL_GPL(vtime_account_kernel);
+
++void vtime_account_softirq(struct task_struct *tsk)
++{
++ u64 delta = vtime_delta();
++
++ S390_lowcore.softirq_timer += delta;
++
++ virt_timer_forward(delta);
++}
++
++void vtime_account_hardirq(struct task_struct *tsk)
++{
++ u64 delta = vtime_delta();
++
++ S390_lowcore.hardirq_timer += delta;
++
++ virt_timer_forward(delta);
++}
++
+ /*
+ * Sorted add to a list. List is linear searched until first bigger
+ * element is found.
+--- a/include/linux/vtime.h
++++ b/include/linux/vtime.h
+@@ -83,16 +83,12 @@ static inline void vtime_init_idle(struc
+ #endif
+
+ #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+-extern void vtime_account_irq_enter(struct task_struct *tsk);
+-static inline void vtime_account_irq_exit(struct task_struct *tsk)
+-{
+- /* On hard|softirq exit we always account to hard|softirq cputime */
+- vtime_account_kernel(tsk);
+-}
++extern void vtime_account_irq(struct task_struct *tsk);
++extern void vtime_account_softirq(struct task_struct *tsk);
++extern void vtime_account_hardirq(struct task_struct *tsk);
+ extern void vtime_flush(struct task_struct *tsk);
+ #else /* !CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
+-static inline void vtime_account_irq_enter(struct task_struct *tsk) { }
+-static inline void vtime_account_irq_exit(struct task_struct *tsk) { }
++static inline void vtime_account_irq(struct task_struct *tsk) { }
+ static inline void vtime_flush(struct task_struct *tsk) { }
+ #endif
+
+@@ -105,13 +101,13 @@ static inline void irqtime_account_irq(s
+
+ static inline void account_irq_enter_time(struct task_struct *tsk)
+ {
+- vtime_account_irq_enter(tsk);
++ vtime_account_irq(tsk);
+ irqtime_account_irq(tsk);
+ }
+
+ static inline void account_irq_exit_time(struct task_struct *tsk)
+ {
+- vtime_account_irq_exit(tsk);
++ vtime_account_irq(tsk);
+ irqtime_account_irq(tsk);
+ }
+
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -417,13 +417,18 @@ void vtime_task_switch(struct task_struc
+ }
+ # endif
+
+-void vtime_account_irq_enter(struct task_struct *tsk)
++void vtime_account_irq(struct task_struct *tsk)
+ {
+- if (!IS_ENABLED(CONFIG_HAVE_VIRT_CPU_ACCOUNTING_IDLE) &&
+- !in_interrupt() && is_idle_task(tsk))
++ if (hardirq_count()) {
++ vtime_account_hardirq(tsk);
++ } else if (in_serving_softirq()) {
++ vtime_account_softirq(tsk);
++ } else if (!IS_ENABLED(CONFIG_HAVE_VIRT_CPU_ACCOUNTING_IDLE) &&
++ is_idle_task(tsk)) {
+ vtime_account_idle(tsk);
+- else
++ } else {
+ vtime_account_kernel(tsk);
++ }
+ }
+
+ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
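One point worth spelling out about the new core dispatch: the order of the checks matters because a hard interrupt can fire while a softirq handler runs, and the delta measured since the last update belongs to the innermost context. A condensed restatement of vtime_account_irq() as introduced above (the HAVE_VIRT_CPU_ACCOUNTING_IDLE special case is elided):

void vtime_account_irq(struct task_struct *tsk)
{
        if (hardirq_count())                    /* in a hard interrupt */
                vtime_account_hardirq(tsk);
        else if (in_serving_softirq())          /* in a softirq handler */
                vtime_account_softirq(tsk);
        else if (is_idle_task(tsk))             /* idle loop, outside irq */
                vtime_account_idle(tsk);
        else                                    /* plain kernel time */
                vtime_account_kernel(tsk);
}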
diff --git a/patches/0004-irqtime-Move-irqtime-entry-accounting-after-irq-offs.patch b/patches/0004-irqtime-Move-irqtime-entry-accounting-after-irq-offs.patch
new file mode 100644
index 000000000000..ce09230ad865
--- /dev/null
+++ b/patches/0004-irqtime-Move-irqtime-entry-accounting-after-irq-offs.patch
@@ -0,0 +1,200 @@
+From: Frederic Weisbecker <frederic@kernel.org>
+Date: Wed, 2 Dec 2020 12:57:31 +0100
+Subject: [PATCH 4/5] irqtime: Move irqtime entry accounting after irq offset
+ incrementation
+
+IRQ time entry is currently accounted before HARDIRQ_OFFSET or
+SOFTIRQ_OFFSET are incremented. This is convenient to decide to which
+index the cputime to account is dispatched.
+
+Unfortunately it prevents tick_irq_enter() from being called under
+HARDIRQ_OFFSET because tick_irq_enter() has to be called before the IRQ
+entry accounting due to the necessary clock catch up. As a result we
+don't benefit from appropriate lockdep coverage on tick_irq_enter().
+
+To prepare for fixing this, move the IRQ entry cputime accounting after
+the preempt offset is incremented. This requires the cputime dispatch
+code to handle the extra offset.
+
+Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Link: https://lore.kernel.org/r/20201202115732.27827-5-frederic@kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/linux/hardirq.h | 4 ++--
+ include/linux/vtime.h | 34 ++++++++++++++++++++++++----------
+ kernel/sched/cputime.c | 18 +++++++++++-------
+ kernel/softirq.c | 6 +++---
+ 4 files changed, 40 insertions(+), 22 deletions(-)
+
+--- a/include/linux/hardirq.h
++++ b/include/linux/hardirq.h
+@@ -32,9 +32,9 @@ static __always_inline void rcu_irq_ente
+ */
+ #define __irq_enter() \
+ do { \
+- account_irq_enter_time(current); \
+ preempt_count_add(HARDIRQ_OFFSET); \
+ lockdep_hardirq_enter(); \
++ account_hardirq_enter(current); \
+ } while (0)
+
+ /*
+@@ -62,8 +62,8 @@ void irq_enter_rcu(void);
+ */
+ #define __irq_exit() \
+ do { \
++ account_hardirq_exit(current); \
+ lockdep_hardirq_exit(); \
+- account_irq_exit_time(current); \
+ preempt_count_sub(HARDIRQ_OFFSET); \
+ } while (0)
+
+--- a/include/linux/vtime.h
++++ b/include/linux/vtime.h
+@@ -83,32 +83,46 @@ static inline void vtime_init_idle(struc
+ #endif
+
+ #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+-extern void vtime_account_irq(struct task_struct *tsk);
++extern void vtime_account_irq(struct task_struct *tsk, unsigned int offset);
+ extern void vtime_account_softirq(struct task_struct *tsk);
+ extern void vtime_account_hardirq(struct task_struct *tsk);
+ extern void vtime_flush(struct task_struct *tsk);
+ #else /* !CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
+-static inline void vtime_account_irq(struct task_struct *tsk) { }
++static inline void vtime_account_irq(struct task_struct *tsk, unsigned int offset) { }
++static inline void vtime_account_softirq(struct task_struct *tsk) { }
++static inline void vtime_account_hardirq(struct task_struct *tsk) { }
+ static inline void vtime_flush(struct task_struct *tsk) { }
+ #endif
+
+
+ #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+-extern void irqtime_account_irq(struct task_struct *tsk);
++extern void irqtime_account_irq(struct task_struct *tsk, unsigned int offset);
+ #else
+-static inline void irqtime_account_irq(struct task_struct *tsk) { }
++static inline void irqtime_account_irq(struct task_struct *tsk, unsigned int offset) { }
+ #endif
+
+-static inline void account_irq_enter_time(struct task_struct *tsk)
++static inline void account_softirq_enter(struct task_struct *tsk)
+ {
+- vtime_account_irq(tsk);
+- irqtime_account_irq(tsk);
++ vtime_account_irq(tsk, SOFTIRQ_OFFSET);
++ irqtime_account_irq(tsk, SOFTIRQ_OFFSET);
+ }
+
+-static inline void account_irq_exit_time(struct task_struct *tsk)
++static inline void account_softirq_exit(struct task_struct *tsk)
+ {
+- vtime_account_irq(tsk);
+- irqtime_account_irq(tsk);
++ vtime_account_softirq(tsk);
++ irqtime_account_irq(tsk, 0);
++}
++
++static inline void account_hardirq_enter(struct task_struct *tsk)
++{
++ vtime_account_irq(tsk, HARDIRQ_OFFSET);
++ irqtime_account_irq(tsk, HARDIRQ_OFFSET);
++}
++
++static inline void account_hardirq_exit(struct task_struct *tsk)
++{
++ vtime_account_hardirq(tsk);
++ irqtime_account_irq(tsk, 0);
+ }
+
+ #endif /* _LINUX_KERNEL_VTIME_H */
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -44,12 +44,13 @@ static void irqtime_account_delta(struct
+ }
+
+ /*
+- * Called before incrementing preempt_count on {soft,}irq_enter
++ * Called after incrementing preempt_count on {soft,}irq_enter
+ * and before decrementing preempt_count on {soft,}irq_exit.
+ */
+-void irqtime_account_irq(struct task_struct *curr)
++void irqtime_account_irq(struct task_struct *curr, unsigned int offset)
+ {
+ struct irqtime *irqtime = this_cpu_ptr(&cpu_irqtime);
++ unsigned int pc;
+ s64 delta;
+ int cpu;
+
+@@ -59,6 +60,7 @@ void irqtime_account_irq(struct task_str
+ cpu = smp_processor_id();
+ delta = sched_clock_cpu(cpu) - irqtime->irq_start_time;
+ irqtime->irq_start_time += delta;
++ pc = preempt_count() - offset;
+
+ /*
+ * We do not account for softirq time from ksoftirqd here.
+@@ -66,9 +68,9 @@ void irqtime_account_irq(struct task_str
+ * in that case, so as not to confuse scheduler with a special task
+ * that do not consume any time, but still wants to run.
+ */
+- if (hardirq_count())
++ if (pc & HARDIRQ_MASK)
+ irqtime_account_delta(irqtime, delta, CPUTIME_IRQ);
+- else if (in_serving_softirq() && curr != this_cpu_ksoftirqd())
++ else if ((pc & SOFTIRQ_OFFSET) && curr != this_cpu_ksoftirqd())
+ irqtime_account_delta(irqtime, delta, CPUTIME_SOFTIRQ);
+ }
+
+@@ -417,11 +419,13 @@ void vtime_task_switch(struct task_struc
+ }
+ # endif
+
+-void vtime_account_irq(struct task_struct *tsk)
++void vtime_account_irq(struct task_struct *tsk, unsigned int offset)
+ {
+- if (hardirq_count()) {
++ unsigned int pc = preempt_count() - offset;
++
++ if (pc & HARDIRQ_OFFSET) {
+ vtime_account_hardirq(tsk);
+- } else if (in_serving_softirq()) {
++ } else if (pc & SOFTIRQ_OFFSET) {
+ vtime_account_softirq(tsk);
+ } else if (!IS_ENABLED(CONFIG_HAVE_VIRT_CPU_ACCOUNTING_IDLE) &&
+ is_idle_task(tsk)) {
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -315,10 +315,10 @@ asmlinkage __visible void __softirq_entr
+ current->flags &= ~PF_MEMALLOC;
+
+ pending = local_softirq_pending();
+- account_irq_enter_time(current);
+
+ __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
+ in_hardirq = lockdep_softirq_start();
++ account_softirq_enter(current);
+
+ restart:
+ /* Reset the pending bitmask before enabling irqs */
+@@ -365,8 +365,8 @@ asmlinkage __visible void __softirq_entr
+ wakeup_softirqd();
+ }
+
++ account_softirq_exit(current);
+ lockdep_softirq_end(in_hardirq);
+- account_irq_exit_time(current);
+ __local_bh_enable(SOFTIRQ_OFFSET);
+ WARN_ON_ONCE(in_interrupt());
+ current_restore_flags(old_flags, PF_MEMALLOC);
+@@ -418,7 +418,7 @@ static inline void __irq_exit_rcu(void)
+ #else
+ lockdep_assert_irqs_disabled();
+ #endif
+- account_irq_exit_time(current);
++ account_hardirq_exit(current);
+ preempt_count_sub(HARDIRQ_OFFSET);
+ if (!in_interrupt() && local_softirq_pending())
+ invoke_softirq();
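Because the accounting now runs after the preempt count was incremented, the callers pass the offset of the section just entered and the handlers subtract it to recover the interrupted context. A worked example (illustrative, following the hunks above):

/*
 * Task context takes a hard interrupt:
 *
 *   preempt_count() == 0
 *   __irq_enter():  preempt_count_add(HARDIRQ_OFFSET)
 *   account_hardirq_enter() -> irqtime_account_irq(curr, HARDIRQ_OFFSET)
 *
 *   pc = preempt_count() - HARDIRQ_OFFSET;   // == 0: came from a task
 *
 *   pc & HARDIRQ_MASK   -> false: not a nested hardirq, so the elapsed
 *                          delta is not charged as CPUTIME_IRQ
 *   pc & SOFTIRQ_OFFSET -> false: no softirq handler was interrupted
 *
 * Had the interrupt preempted a softirq handler instead, pc would
 * carry SOFTIRQ_OFFSET and the delta would go to CPUTIME_SOFTIRQ.
 */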
diff --git a/patches/0004-um-irqstat-Get-rid-of-the-duplicated-declarations.patch b/patches/0004-um-irqstat-Get-rid-of-the-duplicated-declarations.patch
index b70502cda2a6..e5500c23b77a 100644
--- a/patches/0004-um-irqstat-Get-rid-of-the-duplicated-declarations.patch
+++ b/patches/0004-um-irqstat-Get-rid-of-the-duplicated-declarations.patch
@@ -1,15 +1,13 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:36:28 +0100
-Subject: [PATCH 04/19] um/irqstat: Get rid of the duplicated declarations
+Date: Fri, 13 Nov 2020 15:02:11 +0100
+Subject: [PATCH 04/12] um/irqstat: Get rid of the duplicated declarations
irq_cpustat_t and ack_bad_irq() are exactly the same as the asm-generic
ones.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Cc: Jeff Dike <jdike@addtoit.com>
-Cc: Richard Weinberger <richard@nod.at>
-Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
-Cc: linux-um@lists.infradead.org
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141733.156361337@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/um/include/asm/hardirq.h | 17 +----------------
diff --git a/patches/0005-ARM-irqstat-Get-rid-of-duplicated-declaration.patch b/patches/0005-ARM-irqstat-Get-rid-of-duplicated-declaration.patch
index 87a41f93329d..a4a017db0ac5 100644
--- a/patches/0005-ARM-irqstat-Get-rid-of-duplicated-declaration.patch
+++ b/patches/0005-ARM-irqstat-Get-rid-of-duplicated-declaration.patch
@@ -1,15 +1,14 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:41:11 +0100
-Subject: [PATCH 05/19] ARM: irqstat: Get rid of duplicated declaration
+Date: Fri, 13 Nov 2020 15:02:12 +0100
+Subject: [PATCH 05/12] ARM: irqstat: Get rid of duplicated declaration
irq_cpustat_t is exactly the same as the asm-generic one. Define
ack_bad_irq so the generic header does not emit the generic version of it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Cc: Russell King <linux@armlinux.org.uk>
-Cc: Marc Zyngier <maz@kernel.org>
-Cc: Valentin Schneider <valentin.schneider@arm.com>
-Cc: linux-arm-kernel@lists.infradead.org
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
+Link: https://lore.kernel.org/r/20201113141733.276505871@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm/include/asm/hardirq.h | 11 +++--------
diff --git a/patches/0005-irq-Call-tick_irq_enter-inside-HARDIRQ_OFFSET.patch b/patches/0005-irq-Call-tick_irq_enter-inside-HARDIRQ_OFFSET.patch
new file mode 100644
index 000000000000..a8a253432eab
--- /dev/null
+++ b/patches/0005-irq-Call-tick_irq_enter-inside-HARDIRQ_OFFSET.patch
@@ -0,0 +1,44 @@
+From: Frederic Weisbecker <frederic@kernel.org>
+Date: Wed, 2 Dec 2020 12:57:32 +0100
+Subject: [PATCH 5/5] irq: Call tick_irq_enter() inside HARDIRQ_OFFSET
+
+Now that account_hardirq_enter() is called after HARDIRQ_OFFSET has
+been incremented, there is nothing left that prevents us from also
+moving tick_irq_enter() after HARDIRQ_OFFSET is incremented.
+
+The desired outcome is to remove the nasty hack that prevents softirqs
+from being raised through ksoftirqd instead of the hardirq bottom half.
+Also tick_irq_enter() then becomes appropriately covered by lockdep.
+
+Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20201202115732.27827-6-frederic@kernel.org
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/softirq.c | 14 +++++---------
+ 1 file changed, 5 insertions(+), 9 deletions(-)
+
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -377,16 +377,12 @@ asmlinkage __visible void __softirq_entr
+ */
+ void irq_enter_rcu(void)
+ {
+- if (is_idle_task(current) && !in_interrupt()) {
+- /*
+- * Prevent raise_softirq from needlessly waking up ksoftirqd
+- * here, as softirq will be serviced on return from interrupt.
+- */
+- local_bh_disable();
++ __irq_enter_raw();
++
++ if (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET))
+ tick_irq_enter();
+- _local_bh_enable();
+- }
+- __irq_enter();
++
++ account_hardirq_enter(current);
+ }
+
+ /**
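The new condition deserves a note. In this kernel, irq_count() is preempt_count() masked down to the NMI, hardirq and softirq bits, so equality with HARDIRQ_OFFSET holds only for a first-level interrupt that nested into nothing else. Sketched:

/*
 * irq_count() == (preempt_count() & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_MASK))
 *
 * After __irq_enter_raw() the hardirq count is at least HARDIRQ_OFFSET.
 * Equality means: exactly one hardirq level, zero softirq count
 * (neither serving nor disabled) and no NMI - i.e. the interrupt hit
 * the idle loop directly, the one case where the tick may have been
 * stopped and the clock needs catching up.
 */
if (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET))
        tick_irq_enter();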
diff --git a/patches/0006-arm64-irqstat-Get-rid-of-duplicated-declaration.patch b/patches/0006-arm64-irqstat-Get-rid-of-duplicated-declaration.patch
index 5e1aa7a16172..c142a7df1595 100644
--- a/patches/0006-arm64-irqstat-Get-rid-of-duplicated-declaration.patch
+++ b/patches/0006-arm64-irqstat-Get-rid-of-duplicated-declaration.patch
@@ -1,15 +1,15 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:43:46 +0100
-Subject: [PATCH 06/19] arm64: irqstat: Get rid of duplicated declaration
+Date: Fri, 13 Nov 2020 15:02:13 +0100
+Subject: [PATCH 06/12] arm64: irqstat: Get rid of duplicated declaration
irq_cpustat_t is exactly the same as the asm-generic one. Define
ack_bad_irq so the generic header does not emit the generic version of it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Cc: Catalin Marinas <catalin.marinas@arm.com>
-Cc: Will Deacon <will@kernel.org>
-Cc: Marc Zyngier <maz@kernel.org>
-Cc: linux-arm-kernel@lists.infradead.org
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Acked-by: Will Deacon <will@kernel.org>
+Acked-by: Marc Zyngier <maz@kernel.org>
+Link: https://lore.kernel.org/r/20201113141733.392015387@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/arm64/include/asm/hardirq.h | 7 ++-----
diff --git a/patches/0007-asm-generic-irqstat-Add-optional-__nmi_count-member.patch b/patches/0007-asm-generic-irqstat-Add-optional-__nmi_count-member.patch
index 449eea83f51f..614a5eb64735 100644
--- a/patches/0007-asm-generic-irqstat-Add-optional-__nmi_count-member.patch
+++ b/patches/0007-asm-generic-irqstat-Add-optional-__nmi_count-member.patch
@@ -1,11 +1,13 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:46:50 +0100
-Subject: [PATCH 07/19] asm-generic/irqstat: Add optional __nmi_count member
+Date: Fri, 13 Nov 2020 15:02:14 +0100
+Subject: [PATCH 07/12] asm-generic/irqstat: Add optional __nmi_count member
Add an optional __nmi_count member to irq_cpustat_t so more architectures
can use the generic version.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141733.501611990@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/asm-generic/hardirq.h | 3 +++
diff --git a/patches/0008-sh-irqstat-Use-the-generic-irq_cpustat_t.patch b/patches/0008-sh-irqstat-Use-the-generic-irq_cpustat_t.patch
index 71c2b50a6878..75979197f895 100644
--- a/patches/0008-sh-irqstat-Use-the-generic-irq_cpustat_t.patch
+++ b/patches/0008-sh-irqstat-Use-the-generic-irq_cpustat_t.patch
@@ -1,14 +1,13 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:47:51 +0100
-Subject: [PATCH 08/19] sh: irqstat: Use the generic irq_cpustat_t
+Date: Fri, 13 Nov 2020 15:02:15 +0100
+Subject: [PATCH 08/12] sh: irqstat: Use the generic irq_cpustat_t
SH can now use the generic irq_cpustat_t. Define ack_bad_irq so the generic
header does not emit the generic version of it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
-Cc: Rich Felker <dalias@libc.org>
-Cc: linux-sh@vger.kernel.org
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141733.625146223@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/sh/include/asm/hardirq.h | 14 ++++----------
diff --git a/patches/0009-irqstat-Move-declaration-into-asm-generic-hardirq.h.patch b/patches/0009-irqstat-Move-declaration-into-asm-generic-hardirq.h.patch
index 2be24eb7c3bd..fcfdf1401983 100644
--- a/patches/0009-irqstat-Move-declaration-into-asm-generic-hardirq.h.patch
+++ b/patches/0009-irqstat-Move-declaration-into-asm-generic-hardirq.h.patch
@@ -1,11 +1,13 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:50:13 +0100
-Subject: [PATCH 09/19] irqstat: Move declaration into asm-generic/hardirq.h
+Date: Fri, 13 Nov 2020 15:02:16 +0100
+Subject: [PATCH 09/12] irqstat: Move declaration into asm-generic/hardirq.h
Move the declaration of the irq_cpustat per cpu variable to
asm-generic/hardirq.h and remove the now empty linux/irq_cpustat.h header.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141733.737377332@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/asm-generic/hardirq.h | 3 ++-
diff --git a/patches/0010-preempt-Cleanup-the-macro-maze-a-bit.patch b/patches/0010-preempt-Cleanup-the-macro-maze-a-bit.patch
index 1bf87ddbd290..a91220dbf90a 100644
--- a/patches/0010-preempt-Cleanup-the-macro-maze-a-bit.patch
+++ b/patches/0010-preempt-Cleanup-the-macro-maze-a-bit.patch
@@ -1,6 +1,6 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 16:52:38 +0100
-Subject: [PATCH 10/19] preempt: Cleanup the macro maze a bit
+Date: Fri, 13 Nov 2020 15:02:17 +0100
+Subject: [PATCH 10/12] preempt: Cleanup the macro maze a bit
Make the macro maze consistent and prepare it for adding the RT variant for
BH accounting.
@@ -12,6 +12,8 @@ BH accounting.
- Update comments and move the deprecated macros aside
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141733.864469886@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/preempt.h | 30 ++++++++++++++++--------------
diff --git a/patches/0011-softirq-Move-related-code-into-one-section.patch b/patches/0011-softirq-Move-related-code-into-one-section.patch
index 6824f0c7ccf4..531e099c60a9 100644
--- a/patches/0011-softirq-Move-related-code-into-one-section.patch
+++ b/patches/0011-softirq-Move-related-code-into-one-section.patch
@@ -1,12 +1,14 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 15:36:05 +0100
-Subject: [PATCH 11/19] softirq: Move related code into one section
+Date: Fri, 13 Nov 2020 15:02:18 +0100
+Subject: [PATCH 11/12] softirq: Move related code into one section
To prepare for adding a RT aware variant of softirq serialization and
processing move related code into one section so the necessary #ifdeffery
is reduced to one.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
+Link: https://lore.kernel.org/r/20201113141733.974214480@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/softirq.c | 107 +++++++++++++++++++++++++++----------------------------
diff --git a/patches/0012-sh-irq-Add-missing-closing-parentheses-in-arch_show_.patch b/patches/0012-sh-irq-Add-missing-closing-parentheses-in-arch_show_.patch
new file mode 100644
index 000000000000..74af652bb8df
--- /dev/null
+++ b/patches/0012-sh-irq-Add-missing-closing-parentheses-in-arch_show_.patch
@@ -0,0 +1,34 @@
+From 15b8d9372f27c47e17c91f6f16d359314cf11404 Mon Sep 17 00:00:00 2001
+From: Geert Uytterhoeven <geert+renesas@glider.be>
+Date: Tue, 24 Nov 2020 14:06:56 +0100
+Subject: [PATCH 12/12] sh/irq: Add missing closing parentheses in
+ arch_show_interrupts()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+ arch/sh/kernel/irq.c: In function ‘arch_show_interrupts’:
+ arch/sh/kernel/irq.c:47:58: error: expected ‘)’ before ‘;’ token
+ 47 | seq_printf(p, "%10u ", per_cpu(irq_stat.__nmi_count, j);
+ | ^
+
+Fixes: fe3f1d5d7cd3062c ("sh: Get rid of nmi_count()")
+Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20201124130656.2741743-1-geert+renesas@glider.be
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/sh/kernel/irq.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/sh/kernel/irq.c
++++ b/arch/sh/kernel/irq.c
+@@ -44,7 +44,7 @@ int arch_show_interrupts(struct seq_file
+
+ seq_printf(p, "%*s: ", prec, "NMI");
+ for_each_online_cpu(j)
+- seq_printf(p, "%10u ", per_cpu(irq_stat.__nmi_count, j);
++ seq_printf(p, "%10u ", per_cpu(irq_stat.__nmi_count, j));
+ seq_printf(p, " Non-maskable interrupts\n");
+
+ seq_printf(p, "%*s: %10u\n", prec, "ERR", atomic_read(&irq_err_count));
diff --git a/patches/irqtime-Use-irq_count-instead-of-preempt_count.patch b/patches/irqtime-Use-irq_count-instead-of-preempt_count.patch
new file mode 100644
index 000000000000..2097a520c75e
--- /dev/null
+++ b/patches/irqtime-Use-irq_count-instead-of-preempt_count.patch
@@ -0,0 +1,32 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 4 Dec 2020 18:00:31 +0100
+Subject: [PATCH] irqtime: Use irq_count() instead of preempt_count()
+
+preempt_count() does not contain the softirq bits on a PREEMPT_RT
+kernel. irq_count() does.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/sched/cputime.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -60,7 +60,7 @@ void irqtime_account_irq(struct task_str
+ cpu = smp_processor_id();
+ delta = sched_clock_cpu(cpu) - irqtime->irq_start_time;
+ irqtime->irq_start_time += delta;
+- pc = preempt_count() - offset;
++ pc = irq_count() - offset;
+
+ /*
+ * We do not account for softirq time from ksoftirqd here.
+@@ -421,7 +421,7 @@ void vtime_task_switch(struct task_struc
+
+ void vtime_account_irq(struct task_struct *tsk, unsigned int offset)
+ {
+- unsigned int pc = preempt_count() - offset;
++ unsigned int pc = irq_count() - offset;
+
+ if (pc & HARDIRQ_OFFSET) {
+ vtime_account_hardirq(tsk);
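For context on this RT-only change: with the RT softirq rework queued below, the SOFTIRQ bits move out of the per-CPU preempt counter into the task, so preempt_count() - offset would silently lose the softirq component. A hedged sketch of the RT-side definitions this relies on (simplified from the RT preempt.h changes):

/* RT: softirq state is tracked per task; irq_count() stitches the two
 * sources back together so the offset arithmetic keeps working. */
#define softirq_count() ((unsigned int)current->softirq_disable_cnt & SOFTIRQ_MASK)
#define irq_count()     (nmi_count() | hardirq_count() | softirq_count())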
diff --git a/patches/localversion.patch b/patches/localversion.patch
index 25e5fadbaae8..e1f3b8d87864 100644
--- a/patches/localversion.patch
+++ b/patches/localversion.patch
@@ -10,4 +10,4 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- /dev/null
+++ b/localversion-rt
@@ -0,0 +1 @@
-+-rt13
++-rt14
diff --git a/patches/0016-rcu-Prevent-false-positive-softirq-warning-on-RT.patch b/patches/rcu_Prevent_false_positive_softirq_warning_on_RT.patch
index 753694b38a91..1ac6defa709e 100644
--- a/patches/0016-rcu-Prevent-false-positive-softirq-warning-on-RT.patch
+++ b/patches/rcu_Prevent_false_positive_softirq_warning_on_RT.patch
@@ -1,13 +1,14 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 31 Aug 2020 17:26:08 +0200
-Subject: [PATCH 16/19] rcu: Prevent false positive softirq warning on RT
+Subject: rcu: Prevent false positive softirq warning on RT
+Date: Fri, 13 Nov 2020 15:02:23 +0100
+
+From: Thomas Gleixner <tglx@linutronix.de>
Soft interrupt disabled sections can legitimately be preempted or schedule
out when blocking on a lock on RT enabled kernels so the RCU preempt check
warning has to be disabled for RT kernels.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/rcupdate.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/patches/series b/patches/series
index b2b8ba0b94bb..1acbc4b59749 100644
--- a/patches/series
+++ b/patches/series
@@ -159,6 +159,7 @@ net--Move-lockdep-where-it-belongs.patch
tcp-Remove-superfluous-BH-disable-around-listening_h.patch
# SoftIRQ
+# TIP 9f112156f8da016df2dcbe77108e5b070aa58992 and later
0001-parisc-Remove-bogus-__IRQ_STAT-macro.patch
0002-sh-Get-rid-of-nmi_count.patch
0003-irqstat-Get-rid-of-nmi_count-and-__IRQ_STAT.patch
@@ -170,14 +171,26 @@ tcp-Remove-superfluous-BH-disable-around-listening_h.patch
0009-irqstat-Move-declaration-into-asm-generic-hardirq.h.patch
0010-preempt-Cleanup-the-macro-maze-a-bit.patch
0011-softirq-Move-related-code-into-one-section.patch
-0012-softirq-Add-RT-specific-softirq-accounting.patch
-0013-softirq-Move-various-protections-into-inline-helpers.patch
-0014-softirq-Make-softirq-control-and-processing-RT-aware.patch
-0015-tick-sched-Prevent-false-positive-softirq-pending-wa.patch
-0016-rcu-Prevent-false-positive-softirq-warning-on-RT.patch
-0017-softirq-Replace-barrier-with-cpu_relax-in-tasklet_un.patch
-0018-tasklets-Use-static-inlines-for-stub-implementations.patch
-0019-tasklets-Prevent-kill-unlock_wait-deadlock-on-RT.patch
+0012-sh-irq-Add-missing-closing-parentheses-in-arch_show_.patch
+
+# TIP 7197688b2006357da75a014e0a76be89ca9c2d46 and later
+0001-sched-cputime-Remove-symbol-exports-from-IRQ-time-ac.patch
+0002-s390-vtime-Use-the-generic-IRQ-entry-accounting.patch
+0003-sched-vtime-Consolidate-IRQ-time-accounting.patch
+0004-irqtime-Move-irqtime-entry-accounting-after-irq-offs.patch
+0005-irq-Call-tick_irq_enter-inside-HARDIRQ_OFFSET.patch
+
+# WIP
+softirq_Add_RT_specific_softirq_accounting.patch
+softirq_Move_various_protections_into_inline_helpers.patch
+softirq_Make_softirq_control_and_processing_RT_aware.patch
+tick_sched_Prevent_false_positive_softirq_pending_warnings_on_RT.patch
+rcu_Prevent_false_positive_softirq_warning_on_RT.patch
+softirq_Replace_barrier_with_cpu_relax_in_tasklet_unlock_wait_.patch
+tasklets_Use_static_inlines_for_stub_implementations.patch
+tasklets_Prevent_kill_unlock_wait_deadlock_on_RT.patch
+#
+irqtime-Use-irq_count-instead-of-preempt_count.patch
# TIP 5f0c71278d6848b4809f83af90f28196e1505ab1
x86-fpu-Simplify-fpregs_-un-lock.patch
diff --git a/patches/0012-softirq-Add-RT-specific-softirq-accounting.patch b/patches/softirq_Add_RT_specific_softirq_accounting.patch
index 419be259b505..759cb05d3602 100644
--- a/patches/0012-softirq-Add-RT-specific-softirq-accounting.patch
+++ b/patches/softirq_Add_RT_specific_softirq_accounting.patch
@@ -1,16 +1,25 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 9 Nov 2020 15:46:23 +0100
-Subject: [PATCH 12/19] softirq: Add RT specific softirq accounting
+Subject: softirq: Add RT specific softirq accounting
+Date: Fri, 13 Nov 2020 15:02:19 +0100
-RT requires the softirq to be preemptible and uses a per CPU local lock to
-protect BH disabled sections and softirq processing. Therefore RT cannot
-use the preempt counter to keep track of BH disabled/serving.
+From: Thomas Gleixner <tglx@linutronix.de>
+
+RT requires the softirq processing and local bottomhalf disabled regions to
+be preemptible. Using the normal preempt count based serialization is
+therefore not possible because this implicitly disables preemption.
+
+RT kernels use a per CPU local lock to serialize bottomhalfs. As
+local_bh_disable() can nest the lock can only be acquired on the outermost
+invocation of local_bh_disable() and released when the nest count becomes
+zero. Tasks which hold the local lock can be preempted so it's required to
+keep track of the nest count per task.
Add a RT only counter to task struct and adjust the relevant macros in
preempt.h.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+V2: Rewrote changelog.
---
include/linux/hardirq.h | 1 +
include/linux/preempt.h | 6 +++++-
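A minimal sketch of the nesting rule the new per-task counter enforces (simplified; the real helpers in the next patch also handle lockdep, the per-CPU counter and non-preemptible callers):

/* Only the outermost disable takes the per-CPU local lock; inner
 * levels merely adjust the nest count carried in the task, which is
 * what allows a task inside a BH-disabled section to be preempted. */
void rt_local_bh_disable(void)
{
        if (current->softirq_disable_cnt == 0)
                local_lock(&softirq_ctrl.lock);         /* outermost */
        current->softirq_disable_cnt += SOFTIRQ_DISABLE_OFFSET;
}

void rt_local_bh_enable(void)
{
        current->softirq_disable_cnt -= SOFTIRQ_DISABLE_OFFSET;
        if (current->softirq_disable_cnt == 0)
                local_unlock(&softirq_ctrl.lock);       /* outermost */
}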
diff --git a/patches/0014-softirq-Make-softirq-control-and-processing-RT-aware.patch b/patches/softirq_Make_softirq_control_and_processing_RT_aware.patch
index 2e270cd226ef..a4c56cdf50d8 100644
--- a/patches/0014-softirq-Make-softirq-control-and-processing-RT-aware.patch
+++ b/patches/softirq_Make_softirq_control_and_processing_RT_aware.patch
@@ -1,6 +1,8 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 21 Sep 2020 17:26:19 +0200
-Subject: [PATCH 14/19] softirq: Make softirq control and processing RT aware
+Subject: softirq: Make softirq control and processing RT aware
+Date: Fri, 13 Nov 2020 15:02:21 +0100
+
+From: Thomas Gleixner <tglx@linutronix.de>
Provide a local lock based serialization for soft interrupts on RT which
allows the local_bh_disabled() sections and servicing soft interrupts to be
@@ -10,11 +12,12 @@ Provide the necessary inline helpers which allow to reuse the bulk of the
softirq processing code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+V2: Adjusted to Frederic's changes
---
include/linux/bottom_half.h | 2
- kernel/softirq.c | 207 ++++++++++++++++++++++++++++++++++++++++++--
- 2 files changed, 201 insertions(+), 8 deletions(-)
+ kernel/softirq.c | 188 ++++++++++++++++++++++++++++++++++++++++++--
+ 2 files changed, 182 insertions(+), 8 deletions(-)
--- a/include/linux/bottom_half.h
+++ b/include/linux/bottom_half.h
@@ -37,7 +40,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
#include <linux/mm.h>
#include <linux/notifier.h>
#include <linux/percpu.h>
-@@ -100,20 +101,208 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirq_contex
+@@ -100,20 +101,189 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirq_contex
#endif
/*
@@ -60,10 +63,8 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
* softirq and whether we just have bh disabled.
*/
+#ifdef CONFIG_PREEMPT_RT
-
--#ifdef CONFIG_TRACE_IRQFLAGS
- /*
-- * This is for softirq.c-internal use, where hardirqs are disabled
++
++/*
+ * RT accounts for BH disabled sections in task::softirqs_disabled_cnt and
+ * also in per CPU softirq_ctrl::cnt. This is necessary to allow tasks in a
+ * softirq disabled section to be preempted.
@@ -95,18 +96,18 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ if (!current->softirq_disable_cnt) {
+ if (preemptible()) {
+ local_lock(&softirq_ctrl.lock);
++ /* Required to meet the RCU bottomhalf requirements. */
+ rcu_read_lock();
+ } else {
+ DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
+ }
+ }
+
-+ preempt_disable();
+ /*
+ * Track the per CPU softirq disabled state. On RT this is per CPU
+ * state to allow preemption of bottom half disabled sections.
+ */
-+ newcnt = this_cpu_add_return(softirq_ctrl.cnt, cnt);
++ newcnt = __this_cpu_add_return(softirq_ctrl.cnt, cnt);
+ /*
+ * Reflect the result in the task state to prevent recursion on the
+ * local lock and to make softirq_count() & al work.
@@ -118,7 +119,6 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ lockdep_softirqs_off(ip);
+ raw_local_irq_restore(flags);
+ }
-+ preempt_enable();
+}
+EXPORT_SYMBOL(__local_bh_disable_ip);
+
@@ -130,16 +130,14 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ DEBUG_LOCKS_WARN_ON(current->softirq_disable_cnt !=
+ this_cpu_read(softirq_ctrl.cnt));
+
-+ preempt_disable();
+ if (IS_ENABLED(CONFIG_TRACE_IRQFLAGS) && softirq_count() == cnt) {
+ raw_local_irq_save(flags);
+ lockdep_softirqs_on(_RET_IP_);
+ raw_local_irq_restore(flags);
+ }
+
-+ newcnt = this_cpu_sub_return(softirq_ctrl.cnt, cnt);
++ newcnt = __this_cpu_sub_return(softirq_ctrl.cnt, cnt);
+ current->softirq_disable_cnt = newcnt;
-+ preempt_enable();
+
+ if (!newcnt && unlock) {
+ rcu_read_unlock();
@@ -195,22 +193,6 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+EXPORT_SYMBOL(__local_bh_enable_ip);
+
+/*
-+ * Invoked from irq_enter_rcu() to prevent that tick_irq_enter()
-+ * pointlessly wakes the softirq daemon. That's handled in __irq_exit_rcu().
-+ * None of the above logic in the regular bh_disable/enable functions is
-+ * required here.
-+ */
-+static inline void local_bh_disable_irq_enter(void)
-+{
-+ this_cpu_add(softirq_ctrl.cnt, SOFTIRQ_DISABLE_OFFSET);
-+}
-+
-+static inline void local_bh_enable_irq_enter(void)
-+{
-+ this_cpu_sub(softirq_ctrl.cnt, SOFTIRQ_DISABLE_OFFSET);
-+}
-+
-+/*
+ * Invoked from ksoftirqd_run() outside of the interrupt disabled section
+ * to acquire the per CPU local lock for reentrancy protection.
+ */
@@ -231,20 +213,22 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+static inline void softirq_handle_begin(void) { }
+static inline void softirq_handle_end(void) { }
+
-+static inline void invoke_softirq(void)
++static inline bool should_wake_ksoftirqd(void)
+{
-+ if (!this_cpu_read(softirq_ctrl.cnt))
-+ wakeup_softirqd();
++ return !this_cpu_read(softirq_ctrl.cnt);
+}
+
-+static inline bool should_wake_ksoftirqd(void)
++static inline void invoke_softirq(void)
+{
-+ return !this_cpu_read(softirq_ctrl.cnt);
++ if (should_wake_ksoftirqd())
++ wakeup_softirqd();
+}
+
+#else /* CONFIG_PREEMPT_RT */
-+
-+/*
+
+-#ifdef CONFIG_TRACE_IRQFLAGS
+ /*
+- * This is for softirq.c-internal use, where hardirqs are disabled
+ * This one is for softirq.c-internal use, where hardirqs are disabled
* legitimately:
*/
@@ -252,7 +236,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
{
unsigned long flags;
-@@ -284,6 +473,8 @@ asmlinkage __visible void do_softirq(voi
+@@ -274,6 +444,8 @@ asmlinkage __visible void do_softirq(voi
local_irq_restore(flags);
}
@@ -261,7 +245,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
* but break the loop if need_resched() is set or after 2 ms.
-@@ -388,8 +579,10 @@ asmlinkage __visible void __softirq_entr
+@@ -378,8 +550,10 @@ asmlinkage __visible void __softirq_entr
pending >>= softirq_bit;
}
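The v2 delta above also derives invoke_softirq() from should_wake_ksoftirqd() instead of duplicating the check. Restated from the hunk:

/* On RT, a non-zero per-CPU softirq_ctrl.cnt means some task on this
 * CPU is inside a BH-disabled section; pending softirqs will be
 * processed when that section ends, so waking ksoftirqd is pointless. */
static inline bool should_wake_ksoftirqd(void)
{
        return !this_cpu_read(softirq_ctrl.cnt);
}

static inline void invoke_softirq(void)
{
        if (should_wake_ksoftirqd())
                wakeup_softirqd();
}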
diff --git a/patches/0013-softirq-Move-various-protections-into-inline-helpers.patch b/patches/softirq_Move_various_protections_into_inline_helpers.patch
index a8ad78bc5431..959a06de1166 100644
--- a/patches/0013-softirq-Move-various-protections-into-inline-helpers.patch
+++ b/patches/softirq_Move_various_protections_into_inline_helpers.patch
@@ -1,33 +1,24 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Tue, 10 Nov 2020 16:19:16 +0100
-Subject: [PATCH 13/19] softirq: Move various protections into inline helpers
+Subject: softirq: Move various protections into inline helpers
+Date: Fri, 13 Nov 2020 15:02:20 +0100
To allow reuse of the bulk of softirq processing code for RT and to avoid
#ifdeffery all over the place, split protections for various code sections
out into inline helpers so the RT variant can just replace them in one go.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
- kernel/softirq.c | 53 ++++++++++++++++++++++++++++++++++++++++++++---------
- 1 file changed, 44 insertions(+), 9 deletions(-)
+V2: Adapt to Frederic's rework
+---
+ kernel/softirq.c | 39 ++++++++++++++++++++++++++++++++-------
+ 1 file changed, 32 insertions(+), 7 deletions(-)
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
-@@ -204,6 +204,42 @@ void __local_bh_enable_ip(unsigned long
+@@ -204,6 +204,32 @@ void __local_bh_enable_ip(unsigned long
}
EXPORT_SYMBOL(__local_bh_enable_ip);
-+static inline void local_bh_disable_irq_enter(void)
-+{
-+ local_bh_disable();
-+}
-+
-+static inline void local_bh_enable_irq_enter(void)
-+{
-+ _local_bh_enable();
-+}
-+
+static inline void softirq_handle_begin(void)
+{
+ __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
@@ -57,38 +48,26 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static inline void invoke_softirq(void)
{
if (ksoftirqd_running(local_softirq_pending()))
-@@ -317,7 +353,7 @@ asmlinkage __visible void __softirq_entr
+@@ -316,7 +342,7 @@ asmlinkage __visible void __softirq_entr
+
pending = local_softirq_pending();
- account_irq_enter_time(current);
- __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
+ softirq_handle_begin();
in_hardirq = lockdep_softirq_start();
+ account_softirq_enter(current);
- restart:
-@@ -367,8 +403,7 @@ asmlinkage __visible void __softirq_entr
+@@ -367,8 +393,7 @@ asmlinkage __visible void __softirq_entr
+ account_softirq_exit(current);
lockdep_softirq_end(in_hardirq);
- account_irq_exit_time(current);
- __local_bh_enable(SOFTIRQ_OFFSET);
- WARN_ON_ONCE(in_interrupt());
+ softirq_handle_end();
current_restore_flags(old_flags, PF_MEMALLOC);
}
-@@ -382,9 +417,9 @@ void irq_enter_rcu(void)
- * Prevent raise_softirq from needlessly waking up ksoftirqd
- * here, as softirq will be serviced on return from interrupt.
- */
-- local_bh_disable();
-+ local_bh_disable_irq_enter();
- tick_irq_enter();
-- _local_bh_enable();
-+ local_bh_enable_irq_enter();
- }
- __irq_enter();
- }
-@@ -467,7 +502,7 @@ inline void raise_softirq_irqoff(unsigne
+@@ -463,7 +488,7 @@ inline void raise_softirq_irqoff(unsigne
* Otherwise we wake up ksoftirqd to make sure we
* schedule the softirq soon.
*/
@@ -97,7 +76,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
wakeup_softirqd();
}
-@@ -645,18 +680,18 @@ static int ksoftirqd_should_run(unsigned
+@@ -641,18 +666,18 @@ static int ksoftirqd_should_run(unsigned
static void run_ksoftirqd(unsigned int cpu)
{
diff --git a/patches/0017-softirq-Replace-barrier-with-cpu_relax-in-tasklet_un.patch b/patches/softirq_Replace_barrier_with_cpu_relax_in_tasklet_unlock_wait_.patch
index 871e9dcde55a..6c5bfbcdcd55 100644
--- a/patches/0017-softirq-Replace-barrier-with-cpu_relax-in-tasklet_un.patch
+++ b/patches/softirq_Replace_barrier_with_cpu_relax_in_tasklet_unlock_wait_.patch
@@ -1,14 +1,14 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 31 Aug 2020 15:12:38 +0200
-Subject: [PATCH 17/19] softirq: Replace barrier() with cpu_relax() in
- tasklet_unlock_wait()
+Subject: softirq: Replace barrier() with cpu_relax() in tasklet_unlock_wait()
+Date: Fri, 13 Nov 2020 15:02:24 +0100
+
+From: Thomas Gleixner <tglx@linutronix.de>
A barrier() in a tight loop which waits for something to happen on a remote
CPU is a pointless exercise. Replace it with cpu_relax() which allows HT
siblings to make progress.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/interrupt.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
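
The change is a one-liner; paraphrased (not the verbatim hunk), the wait
loop becomes:

  /* Spin until the remote CPU clears TASKLET_STATE_RUN. */
  while (test_bit(TASKLET_STATE_RUN, &(t)->state))
          cpu_relax();    /* was barrier(); lets SMT siblings progress */
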
diff --git a/patches/0019-tasklets-Prevent-kill-unlock_wait-deadlock-on-RT.patch b/patches/tasklets_Prevent_kill_unlock_wait_deadlock_on_RT.patch
index 40bb6fe62379..029987bd40d4 100644
--- a/patches/0019-tasklets-Prevent-kill-unlock_wait-deadlock-on-RT.patch
+++ b/patches/tasklets_Prevent_kill_unlock_wait_deadlock_on_RT.patch
@@ -1,6 +1,8 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 21 Sep 2020 17:47:34 +0200
-Subject: [PATCH 19/19] tasklets: Prevent kill/unlock_wait deadlock on RT
+Subject: tasklets: Prevent kill/unlock_wait deadlock on RT
+Date: Fri, 13 Nov 2020 15:02:26 +0100
+
+From: Thomas Gleixner <tglx@linutronix.de>
tasklet_kill() and tasklet_unlock_wait() spin and wait for the
TASKLET_STATE_SCHED resp. TASKLET_STATE_RUN bit in the tasklet state to be
@@ -8,15 +10,16 @@ cleared. This works on !RT nicely because the corresponding execution can
only happen on a different CPU.
On RT softirq processing is preemptible, therefore a task preempting the
-softirq processing thread can spin forever. Prevent this by invoking
-local_bh_disable()/enable() inside the loop. In case that the softirq
-processing thread was preempted by the current task, current will block on
-the local lock which yields the CPU to the preempted softirq processing
-thread. If the tasklet is processed on a different CPU then the
-local_bh_disable()/enable() pair is just a waste of processor cycles.
+softirq processing thread can spin forever.
+
+Prevent this by invoking local_bh_disable()/enable() inside the loop. In
+case that the softirq processing thread was preempted by the current task,
+current will block on the local lock which yields the CPU to the preempted
+softirq processing thread. If the tasklet is processed on a different CPU
+then the local_bh_disable()/enable() pair is just a waste of processor
+cycles.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/interrupt.h | 8 ++------
kernel/softirq.c | 38 +++++++++++++++++++++++++++++++++++++-
@@ -48,7 +51,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static inline void tasklet_unlock(struct tasklet_struct *t) { }
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
-@@ -851,6 +851,29 @@ void tasklet_init(struct tasklet_struct
+@@ -818,6 +818,29 @@ void tasklet_init(struct tasklet_struct
}
EXPORT_SYMBOL(tasklet_init);
@@ -78,7 +81,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
void tasklet_kill(struct tasklet_struct *t)
{
if (in_interrupt())
-@@ -858,7 +881,20 @@ void tasklet_kill(struct tasklet_struct
+@@ -825,7 +848,20 @@ void tasklet_kill(struct tasklet_struct
while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
do {
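
Paraphrased as code (names and surrounding structure simplified, not the
verbatim hunk), the RT-safe wait loop the changelog describes looks like:

  /* If the softirq thread running the tasklet was preempted on this
   * CPU, local_bh_disable() blocks on the softirq local lock and so
   * yields the CPU to that thread; on a remote CPU the disable/enable
   * pair is merely wasted cycles, as the changelog notes. */
  while (test_bit(TASKLET_STATE_RUN, &t->state)) {
          local_bh_disable();
          local_bh_enable();
  }
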
diff --git a/patches/0018-tasklets-Use-static-inlines-for-stub-implementations.patch b/patches/tasklets_Use_static_inlines_for_stub_implementations.patch
index 63f715ddd5e8..6b92dba44d51 100644
--- a/patches/0018-tasklets-Use-static-inlines-for-stub-implementations.patch
+++ b/patches/tasklets_Use_static_inlines_for_stub_implementations.patch
@@ -1,11 +1,12 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 7 Sep 2020 22:57:32 +0200
-Subject: [PATCH 18/19] tasklets: Use static inlines for stub implementations
+Subject: tasklets: Use static inlines for stub implementations
+Date: Fri, 13 Nov 2020 15:02:25 +0100
+
+From: Thomas Gleixner <tglx@linutronix.de>
Inlines exist for a reason.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/interrupt.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
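
For illustration, the kind of one-line conversion this patch applies to
the stubs in include/linux/interrupt.h (a sketch; the exact stub set is
in the hunks):

  /* Before: an empty macro accepts any argument, silently. */
  #define tasklet_unlock(t)       do { } while (0)

  /* After: a typed static inline gets argument checking and debug info. */
  static inline void tasklet_unlock(struct tasklet_struct *t) { }
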
diff --git a/patches/0015-tick-sched-Prevent-false-positive-softirq-pending-wa.patch b/patches/tick_sched_Prevent_false_positive_softirq_pending_warnings_on_RT.patch
index a40d9406f779..5f25f1289c64 100644
--- a/patches/0015-tick-sched-Prevent-false-positive-softirq-pending-wa.patch
+++ b/patches/tick_sched_Prevent_false_positive_softirq_pending_warnings_on_RT.patch
@@ -1,7 +1,8 @@
From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 31 Aug 2020 17:02:36 +0200
-Subject: [PATCH 15/19] tick/sched: Prevent false positive softirq pending
- warnings on RT
+Subject: tick/sched: Prevent false positive softirq pending warnings on RT
+Date: Fri, 13 Nov 2020 15:02:22 +0100
+
+From: Thomas Gleixner <tglx@linutronix.de>
On RT a task which has soft interrupts disabled can block on a lock and
schedule out to idle while soft interrupts are pending. This triggers the
@@ -14,7 +15,6 @@ To prevent that check the per CPU state which indicates that a scheduled
out task has soft interrupts disabled.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/bottom_half.h | 6 ++++++
kernel/softirq.c | 15 +++++++++++++++