author | Peter Zijlstra <peterz@infradead.org> | 2017-08-10 17:10:26 +0200
---|---|---
committer | Ingo Molnar <mingo@kernel.org> | 2017-08-25 11:12:20 +0200
commit | bbdacdfed2f5fa50a2cc9f500a36e05990a0837d (patch) |
tree | 0ada5cb03b7133b9a42a456cede81e8e1b7a2ba9 /kernel/sched/sched.h |
parent | 09e0dd8e0f2e197690d34fed8cb4737114d3dd5f (diff) |
download | linux-rt-bbdacdfed2f5fa50a2cc9f500a36e05990a0837d.tar.gz |
sched/debug: Optimize sched_domain sysctl generation
Currently we unconditionally destroy all sysctl bits and regenerate
them after we've rebuilt the domains (even if that rebuild is a
no-op).
And since we unconditionally (re)build the sysctl tables for all
possible CPUs, onlining all CPUs takes O(n^2) time. Instead, change
this to only rebuild the bits for CPUs we've actually installed new
domains on.
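The mechanism behind this, in short: keep a cpumask of CPUs whose
sysctl entries have gone stale and regenerate only those. A minimal
sketch of that pattern, assuming the kernel/sched/debug.c side of this
patch (not shown in this sched.h-limited view, so details may differ):

```c
#include <linux/cpumask.h>

/*
 * Sketch only: the real kernel/sched/debug.c side also allocates and
 * resizes the sysctl table itself; that bookkeeping is elided here.
 */
static cpumask_var_t sd_sysctl_cpus;	/* CPUs with stale sysctl entries */

void dirty_sched_domain_sysctl(int cpu)
{
	/* Record that this CPU's sysctl entries must be regenerated. */
	if (cpumask_available(sd_sysctl_cpus))
		__cpumask_set_cpu(cpu, sd_sysctl_cpus);
}

void register_sched_domain_sysctl(void)
{
	int i;

	/* Walk only the dirty CPUs instead of every possible CPU. */
	for_each_cpu(i, sd_sysctl_cpus) {
		/* ... (re)generate the sched_domain table for CPU i ... */
		__cpumask_clear_cpu(i, sd_sysctl_cpus);
	}
}
```

With this shape, an online event no longer pays for every possible
CPU, only for the CPUs whose domains actually changed.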
Reported-by: Ofer Levi(SW) <oferle@mellanox.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/sched.h')
-rw-r--r-- | kernel/sched/sched.h | 4 |
1 file changed, 4 insertions, 0 deletions
```diff
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index eeef1a3086d1..25e5cb1107f3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1120,11 +1120,15 @@ extern int group_balance_cpu(struct sched_group *sg);
 
 #if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
 void register_sched_domain_sysctl(void);
+void dirty_sched_domain_sysctl(int cpu);
 void unregister_sched_domain_sysctl(void);
 #else
 static inline void register_sched_domain_sysctl(void)
 {
 }
+static inline void dirty_sched_domain_sysctl(int cpu)
+{
+}
 static inline void unregister_sched_domain_sysctl(void)
 {
 }
```
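The call site for the new hook is outside this diffstat-limited view.
A hedged sketch of where it presumably lands, namely when a CPU gets a
new domain hierarchy attached (the real hunk would be in
kernel/sched/topology.c; the surrounding code here is an assumption):

```c
/*
 * Sketch of the call-site side; assumed to sit at the end of
 * cpu_attach_domain() in kernel/sched/topology.c, with the actual
 * attach logic elided.
 */
static void
cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
{
	/* ... validate, trim and install the new domain hierarchy ... */

	/* Mark only this CPU's sysctl entries stale for regeneration. */
	dirty_sched_domain_sysctl(cpu);
}
```

A later register_sched_domain_sysctl() call then regenerates entries
for the dirtied CPUs alone, which is what keeps the rebuild cheap when
most domains are untouched.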