author     Steven Rostedt (Red Hat) <rostedt@goodmis.org>   2016-11-23 11:36:30 -0500
committer  Steven Rostedt <rostedt@goodmis.org>             2016-11-23 11:36:30 -0500
commit     fa7ffb39efccd574163ebc5dbfe4ff066186f261 (patch)
tree       e4782a1fae2f735ae3d8ecbda2d37a3e58ee6f1a /kernel
parent     3e9a8aadca4807b4eadd33a50014c9b2767a4f1f (diff)
download   linux-next-fa7ffb39efccd574163ebc5dbfe4ff066186f261.tar.gz
ring-buffer: Make rb_reserve_next_event() always inlined
The function rb_reserve_next_event() is called by two functions:
ring_buffer_lock_reserve() and ring_buffer_write(). It sits in a very hot
path of the tracing code, and it is best that it not be a separate function
call. The two callers are basically thin wrappers around
rb_reserve_next_event(); inlining it removes the extra call and saves
execution time in the tracing hot path.

Link: http://lkml.kernel.org/r/20161121183700.GW26852@two.firstfloor.org

Reported-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/trace/ring_buffer.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 9c143739b8d7..1f3580cee6cc 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2733,7 +2733,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
 	return event;
 }
-static struct ring_buffer_event *
+static __always_inline struct ring_buffer_event *
 rb_reserve_next_event(struct ring_buffer *buffer,
 		      struct ring_buffer_per_cpu *cpu_buffer,
 		      unsigned long length)
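
The patch above relies on __always_inline, which (on GCC/Clang) forces the
compiler to fold the helper into its callers even when its own inlining
heuristics would emit an out-of-line copy plus a call. Below is a minimal
userspace sketch of the same pattern, not kernel code: the names hot_buffer,
reserve_next and lock_reserve are made up for illustration, and
__always_inline is defined locally the way the kernel's compiler headers
define it.

/* Illustrative sketch only -- mimics the pattern applied by the patch:
 * a hot-path helper marked __always_inline is folded into its thin
 * wrapper, removing one function call per event. */
#include <stddef.h>

#define __always_inline inline __attribute__((__always_inline__))

struct hot_buffer {
	size_t head;
	size_t size;
};

/* Hot-path helper: plain "static inline" is only a hint, and the compiler
 * may still emit a call; the attribute makes inlining mandatory. */
static __always_inline size_t reserve_next(struct hot_buffer *b, size_t len)
{
	size_t pos = b->head;

	b->head = (pos + len) % b->size;
	return pos;
}

/* Thin wrapper, analogous to ring_buffer_lock_reserve(): after inlining,
 * calling it costs one function call instead of two. */
size_t lock_reserve(struct hot_buffer *b, size_t len)
{
	return reserve_next(b, len);
}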