author     Steven Rostedt <rostedt@goodmis.org>      2011-09-29 12:24:30 -0500
committer  Daniel Wagner <wagi@monom.org>            2018-07-26 06:42:51 +0200
commit     d7ca1f4b4092fee9a9ef18d17de6770f9774d468 (patch)
tree       081811486db7df5e662b93669c6086c004f5be54 /kernel/sched
parent     b855db3515ea060844a9e59d82bb6d284f266c39 (diff)
download   linux-rt-d7ca1f4b4092fee9a9ef18d17de6770f9774d468.tar.gz
tracing: Account for preempt off in preempt_schedule()
preempt_schedule() uses the preempt_disable_notrace() version because the traced version can cause infinite recursion via the function tracer: the function tracer uses preempt_enable_notrace(), which may call back into the preempt_schedule() code while NEED_RESCHED is still set and PREEMPT_ACTIVE has not yet been set. See commit d1f74e20b5b064a130cd0743a256c2d3cfe84010, which made this change.

The preemptoff and preemptirqsoff latency tracers require the first and last preempt count modifiers in order to enable tracing, but the _notrace versions skip those checks. Since we cannot convert them back to the non-notrace versions, we can use the idle() hooks for the latency tracers here. That is, start/stop_critical_timings() works well to manually start and stop the latency tracer for preempt-off timings.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Clark Williams <williams@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
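As an illustration only, below is a minimal user-space sketch of the pattern the message describes: the preempt count is changed with _notrace helpers that never inform the preempt-off latency tracer, so the preempt-off window is reported to the tracer by hand around the schedule call. Every *_sketch name and the printf() "tracer" are hypothetical stand-ins invented for this sketch; only preempt_disable_notrace()/preempt_enable_notrace(), start/stop_critical_timings() and __schedule() refer to real kernel functions named in the commit.

/*
 * Minimal user-space sketch (not kernel code) of the pattern described
 * above.  All *_sketch names are illustrative stand-ins, not kernel APIs.
 */
#include <stdio.h>

static int preempt_count_sketch;	/* stands in for the preempt counter */

/*
 * "notrace" variants: they change the counter but deliberately skip any
 * tracer hook, so the preempt-off latency tracer never sees this section.
 */
static void preempt_disable_notrace_sketch(void) { preempt_count_sketch++; }
static void preempt_enable_notrace_sketch(void)  { preempt_count_sketch--; }

/* Manual hooks standing in for start/stop_critical_timings(). */
static void start_critical_timings_sketch(void)
{
	printf("latency tracer: preempt-off section starts\n");
}

static void stop_critical_timings_sketch(void)
{
	printf("latency tracer: preempt-off section ends\n");
}

static void schedule_sketch(void)
{
	printf("__schedule() runs, preempt_count=%d\n", preempt_count_sketch);
}

/* Models the shape of preempt_schedule_notrace() after the patch. */
static void preempt_schedule_notrace_sketch(void)
{
	preempt_disable_notrace_sketch();	/* untraced: tracer is not told */
	start_critical_timings_sketch();	/* ...so tell it manually       */
	schedule_sketch();
	stop_critical_timings_sketch();		/* close the reported window    */
	preempt_enable_notrace_sketch();	/* untraced again               */
}

int main(void)
{
	preempt_schedule_notrace_sketch();
	return 0;
}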
Diffstat (limited to 'kernel/sched')
-rw-r--r--    kernel/sched/core.c    9
1 file changed, 9 insertions, 0 deletions
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 65ed3501c2ca..178042f66370 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3333,7 +3333,16 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 	 * an infinite recursion.
 	 */
 	prev_ctx = exception_enter();
+	/*
+	 * The add/subtract must not be traced by the function
+	 * tracer. But we still want to account for the
+	 * preempt off latency tracer. Since the _notrace versions
+	 * of add/subtract skip the accounting for latency tracer
+	 * we must force it manually.
+	 */
+	start_critical_timings();
 	__schedule(true);
+	stop_critical_timings();
 	exception_exit(prev_ctx);
 	preempt_enable_no_resched_notrace();