Diffstat (limited to 'patches/crypto__limit_more_FPU-enabled_sections.patch')
-rw-r--r--	patches/crypto__limit_more_FPU-enabled_sections.patch	67
1 file changed, 0 insertions, 67 deletions
diff --git a/patches/crypto__limit_more_FPU-enabled_sections.patch b/patches/crypto__limit_more_FPU-enabled_sections.patch
deleted file mode 100644
index c09e6227c51f..000000000000
--- a/patches/crypto__limit_more_FPU-enabled_sections.patch
+++ /dev/null
@@ -1,67 +0,0 @@
-Subject: crypto: limit more FPU-enabled sections
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Date: Thu Nov 30 13:40:10 2017 +0100
-
-From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
-Those crypto drivers use SSE/AVX/… for their crypto work and in order to
-do so in the kernel they need to enable the "FPU" in kernel mode, which
-disables preemption.
-There are two problems with the way they are used:
-- the while loop which processes X bytes may create latency spikes and
- should be avoided or limited.
-- the cipher-walk-next part may allocate/free memory and may use
- kmap_atomic().
-
-The whole kernel_fpu_begin()/end() processing probably isn't that cheap,
-so it most likely makes sense to process as much data as possible in one
-go. The new *_fpu_sched_rt() schedules only if a RT task is pending.
-
-We should probably measure the performance of those ciphers in pure SW
-mode and with these optimisations to see if it makes sense to keep them
-for RT.
-
-This kernel_fpu_resched() makes the code more preemptible, which might
-hurt performance.
-
-Cc: stable-rt@vger.kernel.org
-Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
-
----
- arch/x86/include/asm/fpu/api.h | 1 +
- arch/x86/kernel/fpu/core.c | 12 ++++++++++++
- 2 files changed, 13 insertions(+)
----
---- a/arch/x86/include/asm/fpu/api.h
-+++ b/arch/x86/include/asm/fpu/api.h
-@@ -28,6 +28,7 @@ extern void kernel_fpu_begin_mask(unsign
- extern void kernel_fpu_end(void);
- extern bool irq_fpu_usable(void);
- extern void fpregs_mark_activate(void);
-+extern void kernel_fpu_resched(void);
-
- /* Code that is unaware of kernel_fpu_begin_mask() can use this */
- static inline void kernel_fpu_begin(void)
---- a/arch/x86/kernel/fpu/core.c
-+++ b/arch/x86/kernel/fpu/core.c
-@@ -185,6 +185,18 @@ void kernel_fpu_end(void)
- }
- EXPORT_SYMBOL_GPL(kernel_fpu_end);
-
-+void kernel_fpu_resched(void)
-+{
-+ WARN_ON_FPU(!this_cpu_read(in_kernel_fpu));
-+
-+ if (should_resched(PREEMPT_OFFSET)) {
-+ kernel_fpu_end();
-+ cond_resched();
-+ kernel_fpu_begin();
-+ }
-+}
-+EXPORT_SYMBOL_GPL(kernel_fpu_resched);
-+
- /*
- * Sync the FPU register state to current's memory register state when the
- * current task owns the FPU. The hardware register state is preserved.