author	Linus Torvalds <torvalds@linux-foundation.org>	2012-10-01 13:28:49 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-10-01 13:28:49 -0400
commit	7e92daaefa68e5ef1e1732e45231e73adbb724e7 (patch)
tree	8e7f8ac9d82654df4c65939c6682f95510e22977 /kernel/trace
parent	7a68294278ae714ce2632a54f0f46916dca64f56 (diff)
parent	1d787d37c8ff6612b8151c6dff15bfa7347bcbdf (diff)
Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf update from Ingo Molnar:
"Lots of changes in this cycle as well, with hundreds of commits from
over 30 contributors. Most of the activity was on the tooling side.
Higher level changes:
- New 'perf kvm' analysis tool, from Xiao Guangrong.
- New 'perf trace' system-wide tracing tool
- uprobes fixes + cleanups from Oleg Nesterov.
- Lots of patches to make perf build on Android out of the box, from
Irina Tirdea
- Extend the ftrace function tracing utility to be more dynamic for its
users. It allows for data passing to the callback functions, as
well as reading regs as if a breakpoint were to trigger at function
entry; a minimal sketch of the new callback signature follows the
shortlog below.
The main goal of this patch series was to allow kprobes to use
ftrace as an optimized probe point when a probe is placed on an
ftrace nop. With lots of help from Masami Hiramatsu, and going
through lots of iterations, we finally came up with a good
solution.
- Add cpumask for uncore pmu, use it in 'stat', from Yan, Zheng.
- Various tracing updates from Steve Rostedt
- Clean up and improve 'perf sched' performance by eliminating lots
of needless calls to libtraceevent.
- Event group parsing support, from Jiri Olsa
- UI/gtk refactorings and improvements from Namhyung Kim
- Add support for non-tracepoint events in perf script python, from
Feng Tang
- Add --symbols to 'script', similar to the one in 'report', from
Feng Tang.
Infrastructure enhancements and fixes:
- Convert the trace builtins to use the growing evsel/evlist
tracepoint infrastructure, removing several open-coded constructs
such as switch-like series of strcmp calls to dispatch events.
Basically what had already been showcased in 'perf sched'.
- Add an evsel constructor for tracepoints that uses libtraceevent just
to parse the /format events file; use it in a new 'perf test' to
make sure libtraceevent format-parsing regressions can be more
readily caught.
- Some strange errors were happening in some builds, but not on the
next build, reported by several people; the problem was that some
parser-related files, generated during the build, didn't have proper
make deps. Fix from Eric Sandeen.
- Introduce struct and cache information about the environment where
a perf.data file was captured, from Namhyung Kim.
- Fix handling of unresolved samples when --symbols is used in
'report', from Feng Tang.
- Add union member access support to 'probe', from Hyeoncheol Lee.
- Fixups to die() removal, from Namhyung Kim.
- Render fixes for the TUI, from Namhyung Kim.
- Don't enable annotation in non symbolic view, from Namhyung Kim.
- Fix pipe mode in 'report', from Namhyung Kim.
- Move related stats code from stat to util/, will be used by the
'stat' kvm tool, from Xiao Guangrong.
- Remove die()/exit() calls from several tools.
- Resolve vdso callchains, from Jiri Olsa
- Don't pass const char pointers to basename, so that we can
unconditionally use libgen.h and thus avoid ifdef BIONIC lines,
from David Ahern
- Refactor hist formatting so that it can be reused with the GTK
browser, from Namhyung Kim
- Fix build for another rbtree.c change, from Adrian Hunter.
- Make 'perf diff' command work with evsel hists, from Jiri Olsa.
- Use the only field_sep var that is set up: symbol_conf.field_sep,
fix from Jiri Olsa.
- .gitignore compiled python binaries, from Namhyung Kim.
- Get rid of die() in more libtraceevent places, from Namhyung Kim.
- Rename libtraceevent 'private' struct member to 'priv' so that it
works in C++, from Steven Rostedt
- Remove lots of exit()/die() calls from tools so that the main perf
exit routine can take place, from David Ahern
- Fix x86 build on x86-64, from David Ahern.
- {int,str,rb}list fixes from Suzuki K Poulose
- perf.data header fixes from Namhyung Kim
- Allow user to indicate objdump path, needed in cross environments,
from Maciek Borzecki
- Fix hardware cache event name generation, fix from Jiri Olsa
- Add a round-trip test for sw, hw and cache event names, catching the
problem Jiri fixed; after Jiri's patch, the test passes
successfully.
- Clean target should do clean for lib/traceevent too, fix from David
Ahern
- Check the right variable for allocation failure, fix from Namhyung
Kim
- Set up evsel->tp_format regardless of evsel->name being set
already, fix from Namhyung Kim
- Oprofile fixes from Robert Richter.
- Remove perf_event_attr needless version inflation, from Jiri Olsa
- Introduce libtraceevent strerror like error reporting facility,
from Namhyung Kim
- Add pmu mappings to perf.data header and use event names from cmd
line, from Robert Richter
- Fix include order for bison/flex-generated C files, from Ben
Hutchings
- Build fixes and documentation corrections from David Ahern
- Assorted cleanups from Robert Richter
- Let O= make invocations handle relative paths, from Steven Rostedt
- perf script python fixes, from Feng Tang.
- Initial bash completion support, from Frederic Weisbecker
- Allow building without libelf, from Namhyung Kim.
- Support DWARF CFI based unwind to have callchains when %bp based
unwinding is not possible, from Jiri Olsa.
- Symbol resolution fixes: while fixing support for PPC64 files with an
.opt ELF section was the end goal, several fixes and cleanups for
code that handles all architectures are included, from Cody
Schafer.
- Assorted fixes for Documentation and build in 32 bit, from Robert
Richter
- Cache the libtraceevent event_format associated with each evsel
early, so that we avoid relookups, i.e. calling pevent_find_event
repeatedly when processing tracepoint events.
[ This is to reduce the surface contact with libtraceevent and
make clear what it is that the perf tools need from that lib: so
far, parsing the common and per-event fields. ]
- Don't stop the build if the audit libraries are not installed, fix
from Namhyung Kim.
- Fix bfd.h/libbfd detection with recent binutils, from Markus
Trippelsdorf.
- Improve warning message when libunwind devel packages not present,
from Jiri Olsa"
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (282 commits)
perf trace: Add aliases for some syscalls
perf probe: Print an enum type variable in "enum variable-name" format when showing accessible variables
perf tools: Check libaudit availability for perf-trace builtin
perf hists: Add missing period_* fields when collapsing a hist entry
perf trace: New tool
perf evsel: Export the event_format constructor
perf evsel: Introduce rawptr() method
perf tools: Use perf_evsel__newtp in the event parser
perf evsel: The tracepoint constructor should store sys:name
perf evlist: Introduce set_filter() method
perf evlist: Renane set_filters method to apply_filters
perf test: Add test to check we correctly parse and match syscall open parms
perf evsel: Handle endianity in intval method
perf evsel: Know if byte swap is needed
perf tools: Allow handling a NULL cpu_map as meaning "all cpus"
perf evsel: Improve tracepoint constructor setup
tools lib traceevent: Fix error path on pevent_parse_event
perf test: Fix build failure
trace: Move trace event enable from fs_initcall to core_initcall
tracing: Add an option for disabling markers
...
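For orientation, a minimal sketch (not part of this merge) of the new-style
callback described in the ftrace item above. It uses only the ftrace_func_t
signature and FTRACE_OPS_FL_* flags visible in the kernel/trace diff below;
my_callback and my_ops are hypothetical names:

	#include <linux/ftrace.h>

	/* New 4-argument callback. On arches without full register support,
	 * regs may be NULL, so callbacks that use it must check first. */
	static void notrace my_callback(unsigned long ip, unsigned long parent_ip,
					struct ftrace_ops *op, struct pt_regs *regs)
	{
		if (!regs)
			return;	/* arch did not save registers for us */
		/* inspect regs here as if a breakpoint fired at function entry */
	}

	static struct ftrace_ops my_ops = {
		.func	= my_callback,
		/* RECURSION_SAFE: the callback protects against its own recursion;
		 * SAVE_REGS_IF_SUPPORTED: request pt_regs where the arch can
		 * provide them, instead of failing registration outright. */
		.flags	= FTRACE_OPS_FL_RECURSION_SAFE |
			  FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED,
	};

Registration itself is unchanged: register_ftrace_function(&my_ops).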
Diffstat (limited to 'kernel/trace')
-rw-r--r--	kernel/trace/Kconfig	10
-rw-r--r--	kernel/trace/Makefile	8
-rw-r--r--	kernel/trace/ftrace.c	322
-rw-r--r--	kernel/trace/ring_buffer.c	4
-rw-r--r--	kernel/trace/trace.c	12
-rw-r--r--	kernel/trace/trace.h	3
-rw-r--r--	kernel/trace/trace_event_perf.c	3
-rw-r--r--	kernel/trace/trace_events.c	116
-rw-r--r--	kernel/trace/trace_events_filter.c	2
-rw-r--r--	kernel/trace/trace_functions.c	14
-rw-r--r--	kernel/trace/trace_functions_graph.c	5
-rw-r--r--	kernel/trace/trace_irqsoff.c	5
-rw-r--r--	kernel/trace/trace_sched_wakeup.c	5
-rw-r--r--	kernel/trace/trace_selftest.c	304
-rw-r--r--	kernel/trace/trace_stack.c	4
-rw-r--r--	kernel/trace/trace_syscalls.c	2
16 files changed, 656 insertions, 163 deletions
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 8c4c07071cc5..4cea4f41c1d9 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -49,6 +49,11 @@ config HAVE_SYSCALL_TRACEPOINTS
 	help
 	  See Documentation/trace/ftrace-design.txt
 
+config HAVE_FENTRY
+	bool
+	help
+	  Arch supports the gcc options -pg with -mfentry
+
 config HAVE_C_RECORDMCOUNT
 	bool
 	help
@@ -57,8 +62,12 @@ config HAVE_C_RECORDMCOUNT
 config TRACER_MAX_TRACE
 	bool
 
+config TRACE_CLOCK
+	bool
+
 config RING_BUFFER
 	bool
+	select TRACE_CLOCK
 
 config FTRACE_NMI_ENTER
 	bool
@@ -109,6 +118,7 @@ config TRACING
 	select NOP_TRACER
 	select BINARY_PRINTF
 	select EVENT_TRACING
+	select TRACE_CLOCK
 
 config GENERIC_TRACER
 	bool
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index b831087c8200..d7e2068e4b71 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -5,10 +5,12 @@ ifdef CONFIG_FUNCTION_TRACER
 ORIG_CFLAGS := $(KBUILD_CFLAGS)
 KBUILD_CFLAGS = $(subst -pg,,$(ORIG_CFLAGS))
 
+ifdef CONFIG_FTRACE_SELFTEST
 # selftest needs instrumentation
 CFLAGS_trace_selftest_dynamic.o = -pg
 obj-y += trace_selftest_dynamic.o
 endif
+endif
 
 # If unlikely tracing is enabled, do not trace these files
 ifdef CONFIG_TRACING_BRANCHES
@@ -17,11 +19,7 @@ endif
 
 CFLAGS_trace_events_filter.o := -I$(src)
 
-#
-# Make the trace clocks available generally: it's infrastructure
-# relied on by ptrace for example:
-#
-obj-y += trace_clock.o
+obj-$(CONFIG_TRACE_CLOCK) += trace_clock.o
 
 obj-$(CONFIG_FUNCTION_TRACER) += libftrace.o
 obj-$(CONFIG_RING_BUFFER) += ring_buffer.o
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b4f20fba09fc..9dcf15d38380 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -64,12 +64,20 @@
 
 #define FL_GLOBAL_CONTROL_MASK (FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_CONTROL)
 
+static struct ftrace_ops ftrace_list_end __read_mostly = {
+	.func = ftrace_stub,
+	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
+};
+
 /* ftrace_enabled is a method to turn ftrace on or off */
 int ftrace_enabled __read_mostly;
 static int last_ftrace_enabled;
 
 /* Quick disabling of function tracer. */
-int function_trace_stop;
+int function_trace_stop __read_mostly;
+
+/* Current function tracing op */
+struct ftrace_ops *function_trace_op __read_mostly = &ftrace_list_end;
 
 /* List for set_ftrace_pid's pids. */
 LIST_HEAD(ftrace_pids);
@@ -86,22 +94,43 @@ static int ftrace_disabled __read_mostly;
 
 static DEFINE_MUTEX(ftrace_lock);
 
-static struct ftrace_ops ftrace_list_end __read_mostly = {
-	.func = ftrace_stub,
-};
-
 static struct ftrace_ops *ftrace_global_list __read_mostly = &ftrace_list_end;
 static struct ftrace_ops *ftrace_control_list __read_mostly = &ftrace_list_end;
 static struct ftrace_ops *ftrace_ops_list __read_mostly = &ftrace_list_end;
 ftrace_func_t ftrace_trace_function __read_mostly = ftrace_stub;
-static ftrace_func_t __ftrace_trace_function_delay __read_mostly = ftrace_stub;
-ftrace_func_t __ftrace_trace_function __read_mostly = ftrace_stub;
 ftrace_func_t ftrace_pid_function __read_mostly = ftrace_stub;
 static struct ftrace_ops global_ops;
 static struct ftrace_ops control_ops;
 
-static void
-ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip);
+#if ARCH_SUPPORTS_FTRACE_OPS
+static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+				 struct ftrace_ops *op, struct pt_regs *regs);
+#else
+/* See comment below, where ftrace_ops_list_func is defined */
+static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip);
+#define ftrace_ops_list_func ((ftrace_func_t)ftrace_ops_no_ops)
+#endif
+
+/**
+ * ftrace_nr_registered_ops - return number of ops registered
+ *
+ * Returns the number of ftrace_ops registered and tracing functions
+ */
+int ftrace_nr_registered_ops(void)
+{
+	struct ftrace_ops *ops;
+	int cnt = 0;
+
+	mutex_lock(&ftrace_lock);
+
+	for (ops = ftrace_ops_list;
+	     ops != &ftrace_list_end; ops = ops->next)
+		cnt++;
+
+	mutex_unlock(&ftrace_lock);
+
+	return cnt;
+}
 
 /*
  * Traverse the ftrace_global_list, invoking all entries. The reason that we
@@ -112,29 +141,29 @@ ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip);
  *
  * Silly Alpha and silly pointer-speculation compiler optimizations!
  */
-static void ftrace_global_list_func(unsigned long ip,
-				    unsigned long parent_ip)
+static void
+ftrace_global_list_func(unsigned long ip, unsigned long parent_ip,
+			struct ftrace_ops *op, struct pt_regs *regs)
 {
-	struct ftrace_ops *op;
-
 	if (unlikely(trace_recursion_test(TRACE_GLOBAL_BIT)))
 		return;
 
 	trace_recursion_set(TRACE_GLOBAL_BIT);
 	op = rcu_dereference_raw(ftrace_global_list); /*see above*/
 	while (op != &ftrace_list_end) {
-		op->func(ip, parent_ip);
+		op->func(ip, parent_ip, op, regs);
 		op = rcu_dereference_raw(op->next); /*see above*/
 	};
 	trace_recursion_clear(TRACE_GLOBAL_BIT);
 }
 
-static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)
+static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip,
+			    struct ftrace_ops *op, struct pt_regs *regs)
 {
 	if (!test_tsk_trace_trace(current))
 		return;
 
-	ftrace_pid_function(ip, parent_ip);
+	ftrace_pid_function(ip, parent_ip, op, regs);
 }
 
 static void set_ftrace_pid_function(ftrace_func_t func)
@@ -153,25 +182,9 @@ static void set_ftrace_pid_function(ftrace_func_t func)
 void clear_ftrace_function(void)
 {
 	ftrace_trace_function = ftrace_stub;
-	__ftrace_trace_function = ftrace_stub;
-	__ftrace_trace_function_delay = ftrace_stub;
 	ftrace_pid_function = ftrace_stub;
 }
 
-#ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
-/*
- * For those archs that do not test ftrace_trace_stop in their
- * mcount call site, we need to do it from C.
- */
-static void ftrace_test_stop_func(unsigned long ip, unsigned long parent_ip)
-{
-	if (function_trace_stop)
-		return;
-
-	__ftrace_trace_function(ip, parent_ip);
-}
-#endif
-
 static void control_ops_disable_all(struct ftrace_ops *ops)
 {
 	int cpu;
@@ -230,28 +243,27 @@ static void update_ftrace_function(void)
 
 	/*
 	 * If we are at the end of the list and this ops is
-	 * not dynamic, then have the mcount trampoline call
-	 * the function directly
+	 * recursion safe and not dynamic and the arch supports passing ops,
+	 * then have the mcount trampoline call the function directly.
 	 */
 	if (ftrace_ops_list == &ftrace_list_end ||
 	    (ftrace_ops_list->next == &ftrace_list_end &&
-	     !(ftrace_ops_list->flags & FTRACE_OPS_FL_DYNAMIC)))
+	     !(ftrace_ops_list->flags & FTRACE_OPS_FL_DYNAMIC) &&
+	     (ftrace_ops_list->flags & FTRACE_OPS_FL_RECURSION_SAFE) &&
+	     !FTRACE_FORCE_LIST_FUNC)) {
+		/* Set the ftrace_ops that the arch callback uses */
+		if (ftrace_ops_list == &global_ops)
+			function_trace_op = ftrace_global_list;
+		else
+			function_trace_op = ftrace_ops_list;
 		func = ftrace_ops_list->func;
-	else
+	} else {
+		/* Just use the default ftrace_ops */
+		function_trace_op = &ftrace_list_end;
 		func = ftrace_ops_list_func;
+	}
 
-#ifdef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
 	ftrace_trace_function = func;
-#else
-#ifdef CONFIG_DYNAMIC_FTRACE
-	/* do not update till all functions have been modified */
-	__ftrace_trace_function_delay = func;
-#else
-	__ftrace_trace_function = func;
-#endif
-	ftrace_trace_function =
-		(func == ftrace_stub) ? func : ftrace_test_stop_func;
-#endif
 }
 
 static void add_ftrace_ops(struct ftrace_ops **list, struct ftrace_ops *ops)
@@ -325,6 +337,20 @@ static int __register_ftrace_function(struct ftrace_ops *ops)
 	if ((ops->flags & FL_GLOBAL_CONTROL_MASK) == FL_GLOBAL_CONTROL_MASK)
 		return -EINVAL;
 
+#ifndef ARCH_SUPPORTS_FTRACE_SAVE_REGS
+	/*
+	 * If the ftrace_ops specifies SAVE_REGS, then it only can be used
+	 * if the arch supports it, or SAVE_REGS_IF_SUPPORTED is also set.
+	 * Setting SAVE_REGS_IF_SUPPORTED makes SAVE_REGS irrelevant.
+	 */
+	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS &&
+	    !(ops->flags & FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED))
+		return -EINVAL;
+
+	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED)
+		ops->flags |= FTRACE_OPS_FL_SAVE_REGS;
+#endif
+
 	if (!core_kernel_data((unsigned long)ops))
 		ops->flags |= FTRACE_OPS_FL_DYNAMIC;
 
@@ -773,7 +799,8 @@ ftrace_profile_alloc(struct ftrace_profile_stat *stat, unsigned long ip)
 }
 
 static void
-function_profile_call(unsigned long ip, unsigned long parent_ip)
+function_profile_call(unsigned long ip, unsigned long parent_ip,
+		      struct ftrace_ops *ops, struct pt_regs *regs)
 {
 	struct ftrace_profile_stat *stat;
 	struct ftrace_profile *rec;
@@ -803,7 +830,7 @@ function_profile_call(unsigned long ip, unsigned long parent_ip)
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 static int profile_graph_entry(struct ftrace_graph_ent *trace)
 {
-	function_profile_call(trace->func, 0);
+	function_profile_call(trace->func, 0, NULL, NULL);
 	return 1;
 }
 
@@ -863,6 +890,7 @@ static void unregister_ftrace_profiler(void)
 #else
 static struct ftrace_ops ftrace_profile_ops __read_mostly = {
 	.func = function_profile_call,
+	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static int register_ftrace_profiler(void)
@@ -1045,6 +1073,7 @@ static struct ftrace_ops global_ops = {
 	.func = ftrace_stub,
 	.notrace_hash = EMPTY_HASH,
 	.filter_hash = EMPTY_HASH,
+	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static DEFINE_MUTEX(ftrace_regex_lock);
@@ -1525,6 +1554,12 @@ static void __ftrace_hash_rec_update(struct ftrace_ops *ops,
 		rec->flags++;
 		if (FTRACE_WARN_ON((rec->flags & ~FTRACE_FL_MASK) == FTRACE_REF_MAX))
 			return;
+		/*
+		 * If any ops wants regs saved for this function
+		 * then all ops will get saved regs.
+		 */
+		if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
+			rec->flags |= FTRACE_FL_REGS;
 	} else {
 		if (FTRACE_WARN_ON((rec->flags & ~FTRACE_FL_MASK) == 0))
 			return;
@@ -1616,18 +1651,59 @@ static int ftrace_check_record(struct dyn_ftrace *rec, int enable, int update)
 	if (enable && (rec->flags & ~FTRACE_FL_MASK))
 		flag = FTRACE_FL_ENABLED;
 
+	/*
+	 * If enabling and the REGS flag does not match the REGS_EN, then
+	 * do not ignore this record. Set flags to fail the compare against
+	 * ENABLED.
+	 */
+	if (flag &&
+	    (!(rec->flags & FTRACE_FL_REGS) != !(rec->flags & FTRACE_FL_REGS_EN)))
+		flag |= FTRACE_FL_REGS;
+
 	/* If the state of this record hasn't changed, then do nothing */
 	if ((rec->flags & FTRACE_FL_ENABLED) == flag)
 		return FTRACE_UPDATE_IGNORE;
 
 	if (flag) {
-		if (update)
+		/* Save off if rec is being enabled (for return value) */
+		flag ^= rec->flags & FTRACE_FL_ENABLED;
+
+		if (update) {
 			rec->flags |= FTRACE_FL_ENABLED;
-		return FTRACE_UPDATE_MAKE_CALL;
+			if (flag & FTRACE_FL_REGS) {
+				if (rec->flags & FTRACE_FL_REGS)
+					rec->flags |= FTRACE_FL_REGS_EN;
+				else
+					rec->flags &= ~FTRACE_FL_REGS_EN;
+			}
+		}
+
+		/*
+		 * If this record is being updated from a nop, then
+		 * return UPDATE_MAKE_CALL.
+		 * Otherwise, if the EN flag is set, then return
+		 * UPDATE_MODIFY_CALL_REGS to tell the caller to convert
+		 * from the non-save regs, to a save regs function.
+		 * Otherwise,
+		 * return UPDATE_MODIFY_CALL to tell the caller to convert
+		 * from the save regs, to a non-save regs function.
+		 */
+		if (flag & FTRACE_FL_ENABLED)
+			return FTRACE_UPDATE_MAKE_CALL;
+		else if (rec->flags & FTRACE_FL_REGS_EN)
+			return FTRACE_UPDATE_MODIFY_CALL_REGS;
+		else
+			return FTRACE_UPDATE_MODIFY_CALL;
 	}
 
-	if (update)
-		rec->flags &= ~FTRACE_FL_ENABLED;
+	if (update) {
+		/* If there's no more users, clear all flags */
+		if (!(rec->flags & ~FTRACE_FL_MASK))
+			rec->flags = 0;
+		else
+			/* Just disable the record (keep REGS state) */
+			rec->flags &= ~FTRACE_FL_ENABLED;
+	}
 
 	return FTRACE_UPDATE_MAKE_NOP;
 }
@@ -1662,13 +1738,17 @@ int ftrace_test_record(struct dyn_ftrace *rec, int enable)
 static int
 __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
 {
+	unsigned long ftrace_old_addr;
 	unsigned long ftrace_addr;
 	int ret;
 
-	ftrace_addr = (unsigned long)FTRACE_ADDR;
-
 	ret = ftrace_update_record(rec, enable);
 
+	if (rec->flags & FTRACE_FL_REGS)
+		ftrace_addr = (unsigned long)FTRACE_REGS_ADDR;
+	else
+		ftrace_addr = (unsigned long)FTRACE_ADDR;
+
 	switch (ret) {
 	case FTRACE_UPDATE_IGNORE:
 		return 0;
@@ -1678,6 +1758,15 @@ __ftrace_replace_code(struct dyn_ftrace *rec, int enable)
 
 	case FTRACE_UPDATE_MAKE_NOP:
 		return ftrace_make_nop(NULL, rec, ftrace_addr);
+
+	case FTRACE_UPDATE_MODIFY_CALL_REGS:
+	case FTRACE_UPDATE_MODIFY_CALL:
+		if (rec->flags & FTRACE_FL_REGS)
+			ftrace_old_addr = (unsigned long)FTRACE_ADDR;
+		else
+			ftrace_old_addr = (unsigned long)FTRACE_REGS_ADDR;
+
+		return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
 	}
 
 	return -1; /* unknow ftrace bug */
@@ -1882,16 +1971,6 @@ static void ftrace_run_update_code(int command)
 	 */
 	arch_ftrace_update_code(command);
 
-#ifndef CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST
-	/*
-	 * For archs that call ftrace_test_stop_func(), we must
-	 * wait till after we update all the function callers
-	 * before we update the callback. This keeps different
-	 * ops that record different functions from corrupting
-	 * each other.
-	 */
-	__ftrace_trace_function = __ftrace_trace_function_delay;
-#endif
 	function_trace_stop--;
 
 	ret = ftrace_arch_code_modify_post_process();
@@ -2441,8 +2520,9 @@ static int t_show(struct seq_file *m, void *v)
 
 	seq_printf(m, "%ps", (void *)rec->ip);
 	if (iter->flags & FTRACE_ITER_ENABLED)
-		seq_printf(m, " (%ld)",
-			   rec->flags & ~FTRACE_FL_MASK);
+		seq_printf(m, " (%ld)%s",
+			   rec->flags & ~FTRACE_FL_MASK,
+			   rec->flags & FTRACE_FL_REGS ? " R" : "");
 	seq_printf(m, "\n");
 
 	return 0;
@@ -2790,8 +2870,8 @@ static int __init ftrace_mod_cmd_init(void)
 }
 device_initcall(ftrace_mod_cmd_init);
 
-static void
-function_trace_probe_call(unsigned long ip, unsigned long parent_ip)
+static void function_trace_probe_call(unsigned long ip, unsigned long parent_ip,
+				      struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	struct ftrace_func_probe *entry;
 	struct hlist_head *hhd;
@@ -3162,8 +3242,27 @@ ftrace_notrace_write(struct file *file, const char __user *ubuf,
 }
 
 static int
-ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
-		 int reset, int enable)
+ftrace_match_addr(struct ftrace_hash *hash, unsigned long ip, int remove)
+{
+	struct ftrace_func_entry *entry;
+
+	if (!ftrace_location(ip))
+		return -EINVAL;
+
+	if (remove) {
+		entry = ftrace_lookup_ip(hash, ip);
+		if (!entry)
+			return -ENOENT;
+		free_hash_entry(hash, entry);
+		return 0;
+	}
+
+	return add_hash_entry(hash, ip);
+}
+
+static int
+ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
+		unsigned long ip, int remove, int reset, int enable)
 {
 	struct ftrace_hash **orig_hash;
 	struct ftrace_hash *hash;
@@ -3192,6 +3291,11 @@ ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
 		ret = -EINVAL;
 		goto out_regex_unlock;
 	}
+	if (ip) {
+		ret = ftrace_match_addr(hash, ip, remove);
+		if (ret < 0)
+			goto out_regex_unlock;
+	}
 
 	mutex_lock(&ftrace_lock);
 	ret = ftrace_hash_move(ops, enable, orig_hash, hash);
@@ -3208,6 +3312,37 @@ ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
 	return ret;
 }
 
+static int
+ftrace_set_addr(struct ftrace_ops *ops, unsigned long ip, int remove,
+		int reset, int enable)
+{
+	return ftrace_set_hash(ops, 0, 0, ip, remove, reset, enable);
+}
+
+/**
+ * ftrace_set_filter_ip - set a function to filter on in ftrace by address
+ * @ops - the ops to set the filter with
+ * @ip - the address to add to or remove from the filter.
+ * @remove - non zero to remove the ip from the filter
+ * @reset - non zero to reset all filters before applying this filter.
+ *
+ * Filters denote which functions should be enabled when tracing is enabled
+ * If @ip is NULL, it failes to update filter.
+ */
+int ftrace_set_filter_ip(struct ftrace_ops *ops, unsigned long ip,
+			 int remove, int reset)
+{
+	return ftrace_set_addr(ops, ip, remove, reset, 1);
+}
+EXPORT_SYMBOL_GPL(ftrace_set_filter_ip);
+
+static int
+ftrace_set_regex(struct ftrace_ops *ops, unsigned char *buf, int len,
+		 int reset, int enable)
+{
+	return ftrace_set_hash(ops, buf, len, 0, 0, reset, enable);
+}
+
 /**
  * ftrace_set_filter - set a function to filter on in ftrace
  * @ops - the ops to set the filter with
@@ -3912,6 +4047,7 @@ void __init ftrace_init(void)
 
 static struct ftrace_ops global_ops = {
 	.func = ftrace_stub,
+	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static int __init ftrace_nodyn_init(void)
@@ -3942,10 +4078,9 @@ ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip)
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 static void
-ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip)
+ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip,
+			struct ftrace_ops *op, struct pt_regs *regs)
 {
-	struct ftrace_ops *op;
-
 	if (unlikely(trace_recursion_test(TRACE_CONTROL_BIT)))
 		return;
 
@@ -3959,7 +4094,7 @@ ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip)
 	while (op != &ftrace_list_end) {
 		if (!ftrace_function_local_disabled(op) &&
 		    ftrace_ops_test(op, ip))
-			op->func(ip, parent_ip);
+			op->func(ip, parent_ip, op, regs);
 
 		op = rcu_dereference_raw(op->next);
 	};
@@ -3969,13 +4104,18 @@ ftrace_ops_control_func(unsigned long ip, unsigned long parent_ip)
 
 static struct ftrace_ops control_ops = {
 	.func = ftrace_ops_control_func,
+	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
-static void
-ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip)
+static inline void
+__ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+		       struct ftrace_ops *ignored, struct pt_regs *regs)
 {
 	struct ftrace_ops *op;
 
+	if (function_trace_stop)
+		return;
+
 	if (unlikely(trace_recursion_test(TRACE_INTERNAL_BIT)))
 		return;
 
@@ -3988,13 +4128,39 @@ ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip)
 	op = rcu_dereference_raw(ftrace_ops_list);
 	while (op != &ftrace_list_end) {
 		if (ftrace_ops_test(op, ip))
-			op->func(ip, parent_ip);
+			op->func(ip, parent_ip, op, regs);
 		op = rcu_dereference_raw(op->next);
 	};
 	preempt_enable_notrace();
 	trace_recursion_clear(TRACE_INTERNAL_BIT);
 }
 
+/*
+ * Some archs only support passing ip and parent_ip. Even though
+ * the list function ignores the op parameter, we do not want any
+ * C side effects, where a function is called without the caller
+ * sending a third parameter.
+ * Archs are to support both the regs and ftrace_ops at the same time.
+ * If they support ftrace_ops, it is assumed they support regs.
+ * If call backs want to use regs, they must either check for regs
+ * being NULL, or ARCH_SUPPORTS_FTRACE_SAVE_REGS.
+ * Note, ARCH_SUPPORT_SAVE_REGS expects a full regs to be saved.
+ * An architecture can pass partial regs with ftrace_ops and still
+ * set the ARCH_SUPPORT_FTARCE_OPS.
+ */
+#if ARCH_SUPPORTS_FTRACE_OPS
+static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+				 struct ftrace_ops *op, struct pt_regs *regs)
+{
+	__ftrace_ops_list_func(ip, parent_ip, NULL, regs);
+}
+#else
+static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip)
+{
+	__ftrace_ops_list_func(ip, parent_ip, NULL, NULL);
+}
+#endif
+
 static void clear_ftrace_swapper(void)
 {
 	struct task_struct *p;
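A hedged sketch of how the ftrace_set_filter_ip() interface exported above
might be used: aim an ftrace_ops at exactly one function's mcount/fentry site,
the mechanism kprobes builds on to use an ftrace nop as an optimized probe
point. attach_probe is a hypothetical helper; error handling is minimal:

	#include <linux/errno.h>
	#include <linux/ftrace.h>
	#include <linux/kallsyms.h>

	/* Hypothetical helper: filter ops down to one function, then register. */
	static int attach_probe(struct ftrace_ops *ops, const char *name)
	{
		unsigned long ip = kallsyms_lookup_name(name);
		int ret;

		if (!ip)
			return -ENOENT;

		/* remove=0: add this ip; reset=1: clear any previous filter */
		ret = ftrace_set_filter_ip(ops, ip, 0, 1);
		if (ret)
			return ret;

		return register_ftrace_function(ops);
	}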
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 49491fa7daa2..b32ed0e385a5 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2816,7 +2816,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_record_enable);
  * to the buffer after this will fail and return NULL.
  *
  * This is different than ring_buffer_record_disable() as
- * it works like an on/off switch, where as the disable() verison
+ * it works like an on/off switch, where as the disable() version
  * must be paired with a enable().
  */
 void ring_buffer_record_off(struct ring_buffer *buffer)
@@ -2839,7 +2839,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_record_off);
  * ring_buffer_record_off().
  *
  * This is different than ring_buffer_record_enable() as
- * it works like an on/off switch, where as the enable() verison
+ * it works like an on/off switch, where as the enable() version
  * must be paired with a disable().
  */
 void ring_buffer_record_on(struct ring_buffer *buffer)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 5c38c81496ce..1ec5c1dab629 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -328,7 +328,7 @@ static DECLARE_WAIT_QUEUE_HEAD(trace_wait);
 unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK |
 	TRACE_ITER_ANNOTATE | TRACE_ITER_CONTEXT_INFO | TRACE_ITER_SLEEP_TIME |
 	TRACE_ITER_GRAPH_TIME | TRACE_ITER_RECORD_CMD | TRACE_ITER_OVERWRITE |
-	TRACE_ITER_IRQ_INFO;
+	TRACE_ITER_IRQ_INFO | TRACE_ITER_MARKERS;
 
 static int trace_stop_count;
 static DEFINE_RAW_SPINLOCK(tracing_start_lock);
@@ -426,15 +426,15 @@ __setup("trace_buf_size=", set_buf_size);
 
 static int __init set_tracing_thresh(char *str)
 {
-	unsigned long threshhold;
+	unsigned long threshold;
 	int ret;
 
 	if (!str)
 		return 0;
-	ret = strict_strtoul(str, 0, &threshhold);
+	ret = strict_strtoul(str, 0, &threshold);
 	if (ret < 0)
 		return 0;
-	tracing_thresh = threshhold * 1000;
+	tracing_thresh = threshold * 1000;
 	return 1;
 }
 __setup("tracing_thresh=", set_tracing_thresh);
@@ -470,6 +470,7 @@ static const char *trace_options[] = {
 	"overwrite",
 	"disable_on_free",
 	"irq-info",
+	"markers",
 	NULL
 };
 
@@ -3886,6 +3887,9 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
 	if (tracing_disabled)
 		return -EINVAL;
 
+	if (!(trace_flags & TRACE_ITER_MARKERS))
+		return -EINVAL;
+
 	if (cnt > TRACE_BUF_SIZE)
 		cnt = TRACE_BUF_SIZE;
 
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 55e1f7f0db12..63a2da0b9a6e 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -472,11 +472,11 @@ extern void trace_find_cmdline(int pid, char comm[]);
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 extern unsigned long ftrace_update_tot_cnt;
+#endif
 #define DYN_FTRACE_TEST_NAME trace_selftest_dynamic_test_func
 extern int DYN_FTRACE_TEST_NAME(void);
 #define DYN_FTRACE_TEST_NAME2 trace_selftest_dynamic_test_func2
 extern int DYN_FTRACE_TEST_NAME2(void);
-#endif
 
 extern int ring_buffer_expanded;
 extern bool tracing_selftest_disabled;
@@ -680,6 +680,7 @@ enum trace_iterator_flags {
 	TRACE_ITER_OVERWRITE		= 0x200000,
 	TRACE_ITER_STOP_ON_FREE		= 0x400000,
 	TRACE_ITER_IRQ_INFO		= 0x800000,
+	TRACE_ITER_MARKERS		= 0x1000000,
 };
 
 /*
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 8a6d2ee2086c..84b1e045faba 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -258,7 +258,8 @@ EXPORT_SYMBOL_GPL(perf_trace_buf_prepare);
 
 #ifdef CONFIG_FUNCTION_TRACER
 static void
-perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip)
+perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
+			  struct ftrace_ops *ops, struct pt_regs *pt_regs)
 {
 	struct ftrace_entry *entry;
 	struct hlist_head *head;
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 29111da1d100..d608d09d08c0 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1199,6 +1199,31 @@ event_create_dir(struct ftrace_event_call *call, struct dentry *d_events,
 	return 0;
 }
 
+static void event_remove(struct ftrace_event_call *call)
+{
+	ftrace_event_enable_disable(call, 0);
+	if (call->event.funcs)
+		__unregister_ftrace_event(&call->event);
+	list_del(&call->list);
+}
+
+static int event_init(struct ftrace_event_call *call)
+{
+	int ret = 0;
+
+	if (WARN_ON(!call->name))
+		return -EINVAL;
+
+	if (call->class->raw_init) {
+		ret = call->class->raw_init(call);
+		if (ret < 0 && ret != -ENOSYS)
+			pr_warn("Could not initialize trace events/%s\n",
+				call->name);
+	}
+
+	return ret;
+}
+
 static int
 __trace_add_event_call(struct ftrace_event_call *call, struct module *mod,
 		       const struct file_operations *id,
@@ -1209,19 +1234,9 @@ __trace_add_event_call(struct ftrace_event_call *call, struct module *mod,
 	struct dentry *d_events;
 	int ret;
 
-	/* The linker may leave blanks */
-	if (!call->name)
-		return -EINVAL;
-
-	if (call->class->raw_init) {
-		ret = call->class->raw_init(call);
-		if (ret < 0) {
-			if (ret != -ENOSYS)
-				pr_warning("Could not initialize trace events/%s\n",
-					   call->name);
-			return ret;
-		}
-	}
+	ret = event_init(call);
+	if (ret < 0)
+		return ret;
 
 	d_events = event_trace_events_dir();
 	if (!d_events)
@@ -1272,13 +1287,10 @@ static void remove_subsystem_dir(const char *name)
  */
 static void __trace_remove_event_call(struct ftrace_event_call *call)
 {
-	ftrace_event_enable_disable(call, 0);
-	if (call->event.funcs)
-		__unregister_ftrace_event(&call->event);
-	debugfs_remove_recursive(call->dir);
-	list_del(&call->list);
+	event_remove(call);
 	trace_destroy_fields(call);
 	destroy_preds(call);
+	debugfs_remove_recursive(call->dir);
 	remove_subsystem_dir(call->class->system);
 }
 
@@ -1450,15 +1462,43 @@ static __init int setup_trace_event(char *str)
 }
 __setup("trace_event=", setup_trace_event);
 
+static __init int event_trace_enable(void)
+{
+	struct ftrace_event_call **iter, *call;
+	char *buf = bootup_event_buf;
+	char *token;
+	int ret;
+
+	for_each_event(iter, __start_ftrace_events, __stop_ftrace_events) {
+
+		call = *iter;
+		ret = event_init(call);
+		if (!ret)
+			list_add(&call->list, &ftrace_events);
+	}
+
+	while (true) {
+		token = strsep(&buf, ",");
+
+		if (!token)
+			break;
+		if (!*token)
+			continue;
+
+		ret = ftrace_set_clr_event(token, 1);
+		if (ret)
+			pr_warn("Failed to enable trace event: %s\n", token);
+	}
+	return 0;
+}
+
 static __init int event_trace_init(void)
 {
-	struct ftrace_event_call **call;
+	struct ftrace_event_call *call;
 	struct dentry *d_tracer;
 	struct dentry *entry;
 	struct dentry *d_events;
 	int ret;
-	char *buf = bootup_event_buf;
-	char *token;
 
 	d_tracer = tracing_init_dentry();
 	if (!d_tracer)
@@ -1497,24 +1537,19 @@ static __init int event_trace_init(void)
 	if (trace_define_common_fields())
 		pr_warning("tracing: Failed to allocate common fields");
 
-	for_each_event(call, __start_ftrace_events, __stop_ftrace_events) {
-		__trace_add_event_call(*call, NULL, &ftrace_event_id_fops,
+	/*
+	 * Early initialization already enabled ftrace event.
+	 * Now it's only necessary to create the event directory.
+	 */
+	list_for_each_entry(call, &ftrace_events, list) {
+
+		ret = event_create_dir(call, d_events,
+				       &ftrace_event_id_fops,
 				       &ftrace_enable_fops,
 				       &ftrace_event_filter_fops,
 				       &ftrace_event_format_fops);
-	}
-
-	while (true) {
-		token = strsep(&buf, ",");
-
-		if (!token)
-			break;
-		if (!*token)
-			continue;
-
-		ret = ftrace_set_clr_event(token, 1);
-		if (ret)
-			pr_warning("Failed to enable trace event: %s\n", token);
+		if (ret < 0)
+			event_remove(call);
 	}
 
 	ret = register_module_notifier(&trace_module_nb);
@@ -1523,6 +1558,7 @@ static __init int event_trace_init(void)
 
 	return 0;
 }
+core_initcall(event_trace_enable);
 fs_initcall(event_trace_init);
 
 #ifdef CONFIG_FTRACE_STARTUP_TEST
@@ -1646,9 +1682,11 @@ static __init void event_trace_self_tests(void)
 		event_test_stuff();
 
 		ret = __ftrace_set_clr_event(NULL, system->name, NULL, 0);
-		if (WARN_ON_ONCE(ret))
+		if (WARN_ON_ONCE(ret)) {
 			pr_warning("error disabling system %s\n",
 				   system->name);
+			continue;
+		}
 
 		pr_cont("OK\n");
 	}
@@ -1681,7 +1719,8 @@ static __init void event_trace_self_tests(void)
 static DEFINE_PER_CPU(atomic_t, ftrace_test_event_disable);
 
 static void
-function_test_events_call(unsigned long ip, unsigned long parent_ip)
+function_test_events_call(unsigned long ip, unsigned long parent_ip,
+			  struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	struct ring_buffer_event *event;
 	struct ring_buffer *buffer;
@@ -1720,6 +1759,7 @@ function_test_events_call(unsigned long ip, unsigned long parent_ip)
 static struct ftrace_ops trace_ops __initdata =
 {
 	.func = function_test_events_call,
+	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static __init void event_trace_self_test_with_function(void)
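The trace_events.c changes split event setup across two initcall levels:
core_initcall(event_trace_enable) initializes and enables events early in
boot, while fs_initcall(event_trace_init) later only creates the debugfs
directories. A standalone sketch of that ordering pattern (function names are
hypothetical; core_initcall level functions run before fs_initcall ones):

	#include <linux/init.h>
	#include <linux/printk.h>

	static int __init my_early_setup(void)
	{
		pr_info("core_initcall: init state, enable boot-time events\n");
		return 0;
	}
	core_initcall(my_early_setup);

	static int __init my_late_setup(void)
	{
		pr_info("fs_initcall: filesystems are up, create debugfs dirs\n");
		return 0;
	}
	fs_initcall(my_late_setup);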
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 431dba8b7542..c154797a7ff7 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -2002,7 +2002,7 @@ static int ftrace_function_set_regexp(struct ftrace_ops *ops, int filter,
 static int __ftrace_function_set_filter(int filter, char *buf, int len,
 					struct function_filter_data *data)
 {
-	int i, re_cnt, ret;
+	int i, re_cnt, ret = -EINVAL;
 	int *reset;
 	char **re;
 
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index a426f410c060..483162a9f908 100644
--- a/kernel/trace/trace_functions.c
+++ b/kernel/trace/trace_functions.c
@@ -49,7 +49,8 @@ static void function_trace_start(struct trace_array *tr)
 }
 
 static void
-function_trace_call_preempt_only(unsigned long ip, unsigned long parent_ip)
+function_trace_call_preempt_only(unsigned long ip, unsigned long parent_ip,
+				 struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	struct trace_array *tr = func_trace;
 	struct trace_array_cpu *data;
@@ -84,7 +85,9 @@ enum {
 static struct tracer_flags func_flags;
 
 static void
-function_trace_call(unsigned long ip, unsigned long parent_ip)
+function_trace_call(unsigned long ip, unsigned long parent_ip,
+		    struct ftrace_ops *op, struct pt_regs *pt_regs)
+
 {
 	struct trace_array *tr = func_trace;
 	struct trace_array_cpu *data;
@@ -121,7 +124,8 @@ function_trace_call(unsigned long ip, unsigned long parent_ip)
 }
 
 static void
-function_stack_trace_call(unsigned long ip, unsigned long parent_ip)
+function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
+			  struct ftrace_ops *op, struct pt_regs *pt_regs)
 {
 	struct trace_array *tr = func_trace;
 	struct trace_array_cpu *data;
@@ -164,13 +168,13 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip)
 static struct ftrace_ops trace_ops __read_mostly =
 {
 	.func = function_trace_call,
-	.flags = FTRACE_OPS_FL_GLOBAL,
+	.flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static struct ftrace_ops trace_stack_ops __read_mostly =
 {
 	.func = function_stack_trace_call,
-	.flags = FTRACE_OPS_FL_GLOBAL,
+	.flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static struct tracer_opt func_opts[] = {
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c index ce27c8ba8d31..99b4378393d5 100644 --- a/kernel/trace/trace_functions_graph.c +++ b/kernel/trace/trace_functions_graph.c | |||
@@ -143,7 +143,7 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, | |||
143 | return; | 143 | return; |
144 | } | 144 | } |
145 | 145 | ||
146 | #ifdef CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST | 146 | #if defined(CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST) && !defined(CC_USING_FENTRY) |
147 | /* | 147 | /* |
148 | * The arch may choose to record the frame pointer used | 148 | * The arch may choose to record the frame pointer used |
149 | * and check it here to make sure that it is what we expect it | 149 | * and check it here to make sure that it is what we expect it |
@@ -154,6 +154,9 @@ ftrace_pop_return_trace(struct ftrace_graph_ret *trace, unsigned long *ret, | |||
154 | * | 154 | * |
155 | * Currently, x86_32 with optimize for size (-Os) makes the latest | 155 | * Currently, x86_32 with optimize for size (-Os) makes the latest |
156 | * gcc do the above. | 156 | * gcc do the above. |
157 | * | ||
158 | * Note, -mfentry does not use frame pointers, and this test | ||
159 | * is not needed if CC_USING_FENTRY is set. | ||
157 | */ | 160 | */ |
158 | if (unlikely(current->ret_stack[index].fp != frame_pointer)) { | 161 | if (unlikely(current->ret_stack[index].fp != frame_pointer)) { |
159 | ftrace_graph_stop(); | 162 | ftrace_graph_stop(); |
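The CC_USING_FENTRY guard reflects where the compiler places the tracing call. With the classic -pg/mcount scheme the call is emitted after the function prologue, so a valid frame pointer exists to record at entry and verify at return; with -mfentry the call is the very first instruction of the function. A rough x86-64 picture, written as C comments (illustrative, not actual compiler output):

/*
 * -pg (mcount):  push %rbp; mov %rsp,%rbp; ...; call mcount
 *                -> the frame pointer is live at the call site
 * -mfentry:      call __fentry__; push %rbp; mov %rsp,%rbp; ...
 *                -> no frame exists yet, so the fp sanity check
 *                   above would have nothing meaningful to compare
 */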
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c index 99d20e920368..d98ee8283b29 100644 --- a/kernel/trace/trace_irqsoff.c +++ b/kernel/trace/trace_irqsoff.c | |||
@@ -136,7 +136,8 @@ static int func_prolog_dec(struct trace_array *tr, | |||
136 | * irqsoff uses its own tracer function to keep the overhead down: | 136 | * irqsoff uses its own tracer function to keep the overhead down: |
137 | */ | 137 | */ |
138 | static void | 138 | static void |
139 | irqsoff_tracer_call(unsigned long ip, unsigned long parent_ip) | 139 | irqsoff_tracer_call(unsigned long ip, unsigned long parent_ip, |
140 | struct ftrace_ops *op, struct pt_regs *pt_regs) | ||
140 | { | 141 | { |
141 | struct trace_array *tr = irqsoff_trace; | 142 | struct trace_array *tr = irqsoff_trace; |
142 | struct trace_array_cpu *data; | 143 | struct trace_array_cpu *data; |
@@ -153,7 +154,7 @@ irqsoff_tracer_call(unsigned long ip, unsigned long parent_ip) | |||
153 | static struct ftrace_ops trace_ops __read_mostly = | 154 | static struct ftrace_ops trace_ops __read_mostly = |
154 | { | 155 | { |
155 | .func = irqsoff_tracer_call, | 156 | .func = irqsoff_tracer_call, |
156 | .flags = FTRACE_OPS_FL_GLOBAL, | 157 | .flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE, |
157 | }; | 158 | }; |
158 | #endif /* CONFIG_FUNCTION_TRACER */ | 159 | #endif /* CONFIG_FUNCTION_TRACER */ |
159 | 160 | ||
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c index ff791ea48b57..02170c00c413 100644 --- a/kernel/trace/trace_sched_wakeup.c +++ b/kernel/trace/trace_sched_wakeup.c | |||
@@ -108,7 +108,8 @@ out_enable: | |||
108 | * wakeup uses its own tracer function to keep the overhead down: | 108 | * wakeup uses its own tracer function to keep the overhead down: |
109 | */ | 109 | */ |
110 | static void | 110 | static void |
111 | wakeup_tracer_call(unsigned long ip, unsigned long parent_ip) | 111 | wakeup_tracer_call(unsigned long ip, unsigned long parent_ip, |
112 | struct ftrace_ops *op, struct pt_regs *pt_regs) | ||
112 | { | 113 | { |
113 | struct trace_array *tr = wakeup_trace; | 114 | struct trace_array *tr = wakeup_trace; |
114 | struct trace_array_cpu *data; | 115 | struct trace_array_cpu *data; |
@@ -129,7 +130,7 @@ wakeup_tracer_call(unsigned long ip, unsigned long parent_ip) | |||
129 | static struct ftrace_ops trace_ops __read_mostly = | 130 | static struct ftrace_ops trace_ops __read_mostly = |
130 | { | 131 | { |
131 | .func = wakeup_tracer_call, | 132 | .func = wakeup_tracer_call, |
132 | .flags = FTRACE_OPS_FL_GLOBAL, | 133 | .flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE, |
133 | }; | 134 | }; |
134 | #endif /* CONFIG_FUNCTION_TRACER */ | 135 | #endif /* CONFIG_FUNCTION_TRACER */ |
135 | 136 | ||
diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c index 288541f977fb..2c00a691a540 100644 --- a/kernel/trace/trace_selftest.c +++ b/kernel/trace/trace_selftest.c | |||
@@ -103,54 +103,67 @@ static inline void warn_failed_init_tracer(struct tracer *trace, int init_ret) | |||
103 | 103 | ||
104 | static int trace_selftest_test_probe1_cnt; | 104 | static int trace_selftest_test_probe1_cnt; |
105 | static void trace_selftest_test_probe1_func(unsigned long ip, | 105 | static void trace_selftest_test_probe1_func(unsigned long ip, |
106 | unsigned long pip) | 106 | unsigned long pip, |
107 | struct ftrace_ops *op, | ||
108 | struct pt_regs *pt_regs) | ||
107 | { | 109 | { |
108 | trace_selftest_test_probe1_cnt++; | 110 | trace_selftest_test_probe1_cnt++; |
109 | } | 111 | } |
110 | 112 | ||
111 | static int trace_selftest_test_probe2_cnt; | 113 | static int trace_selftest_test_probe2_cnt; |
112 | static void trace_selftest_test_probe2_func(unsigned long ip, | 114 | static void trace_selftest_test_probe2_func(unsigned long ip, |
113 | unsigned long pip) | 115 | unsigned long pip, |
116 | struct ftrace_ops *op, | ||
117 | struct pt_regs *pt_regs) | ||
114 | { | 118 | { |
115 | trace_selftest_test_probe2_cnt++; | 119 | trace_selftest_test_probe2_cnt++; |
116 | } | 120 | } |
117 | 121 | ||
118 | static int trace_selftest_test_probe3_cnt; | 122 | static int trace_selftest_test_probe3_cnt; |
119 | static void trace_selftest_test_probe3_func(unsigned long ip, | 123 | static void trace_selftest_test_probe3_func(unsigned long ip, |
120 | unsigned long pip) | 124 | unsigned long pip, |
125 | struct ftrace_ops *op, | ||
126 | struct pt_regs *pt_regs) | ||
121 | { | 127 | { |
122 | trace_selftest_test_probe3_cnt++; | 128 | trace_selftest_test_probe3_cnt++; |
123 | } | 129 | } |
124 | 130 | ||
125 | static int trace_selftest_test_global_cnt; | 131 | static int trace_selftest_test_global_cnt; |
126 | static void trace_selftest_test_global_func(unsigned long ip, | 132 | static void trace_selftest_test_global_func(unsigned long ip, |
127 | unsigned long pip) | 133 | unsigned long pip, |
134 | struct ftrace_ops *op, | ||
135 | struct pt_regs *pt_regs) | ||
128 | { | 136 | { |
129 | trace_selftest_test_global_cnt++; | 137 | trace_selftest_test_global_cnt++; |
130 | } | 138 | } |
131 | 139 | ||
132 | static int trace_selftest_test_dyn_cnt; | 140 | static int trace_selftest_test_dyn_cnt; |
133 | static void trace_selftest_test_dyn_func(unsigned long ip, | 141 | static void trace_selftest_test_dyn_func(unsigned long ip, |
134 | unsigned long pip) | 142 | unsigned long pip, |
143 | struct ftrace_ops *op, | ||
144 | struct pt_regs *pt_regs) | ||
135 | { | 145 | { |
136 | trace_selftest_test_dyn_cnt++; | 146 | trace_selftest_test_dyn_cnt++; |
137 | } | 147 | } |
138 | 148 | ||
139 | static struct ftrace_ops test_probe1 = { | 149 | static struct ftrace_ops test_probe1 = { |
140 | .func = trace_selftest_test_probe1_func, | 150 | .func = trace_selftest_test_probe1_func, |
151 | .flags = FTRACE_OPS_FL_RECURSION_SAFE, | ||
141 | }; | 152 | }; |
142 | 153 | ||
143 | static struct ftrace_ops test_probe2 = { | 154 | static struct ftrace_ops test_probe2 = { |
144 | .func = trace_selftest_test_probe2_func, | 155 | .func = trace_selftest_test_probe2_func, |
156 | .flags = FTRACE_OPS_FL_RECURSION_SAFE, | ||
145 | }; | 157 | }; |
146 | 158 | ||
147 | static struct ftrace_ops test_probe3 = { | 159 | static struct ftrace_ops test_probe3 = { |
148 | .func = trace_selftest_test_probe3_func, | 160 | .func = trace_selftest_test_probe3_func, |
161 | .flags = FTRACE_OPS_FL_RECURSION_SAFE, | ||
149 | }; | 162 | }; |
150 | 163 | ||
151 | static struct ftrace_ops test_global = { | 164 | static struct ftrace_ops test_global = { |
152 | .func = trace_selftest_test_global_func, | 165 | .func = trace_selftest_test_global_func, |
153 | .flags = FTRACE_OPS_FL_GLOBAL, | 166 | .flags = FTRACE_OPS_FL_GLOBAL | FTRACE_OPS_FL_RECURSION_SAFE, |
154 | }; | 167 | }; |
155 | 168 | ||
156 | static void print_counts(void) | 169 | static void print_counts(void) |
@@ -393,10 +406,253 @@ int trace_selftest_startup_dynamic_tracing(struct tracer *trace, | |||
393 | 406 | ||
394 | return ret; | 407 | return ret; |
395 | } | 408 | } |
409 | |||
410 | static int trace_selftest_recursion_cnt; | ||
411 | static void trace_selftest_test_recursion_func(unsigned long ip, | ||
412 | unsigned long pip, | ||
413 | struct ftrace_ops *op, | ||
414 | struct pt_regs *pt_regs) | ||
415 | { | ||
416 | /* | ||
417 | * This function is registered without the recursion safe flag. | ||
418 | * The ftrace infrastructure should provide the recursion | ||
419 | * protection. If not, this will crash the kernel! | ||
420 | */ | ||
421 | trace_selftest_recursion_cnt++; | ||
422 | DYN_FTRACE_TEST_NAME(); | ||
423 | } | ||
424 | |||
425 | static void trace_selftest_test_recursion_safe_func(unsigned long ip, | ||
426 | unsigned long pip, | ||
427 | struct ftrace_ops *op, | ||
428 | struct pt_regs *pt_regs) | ||
429 | { | ||
430 | /* | ||
431 | * We said we would provide our own recursion protection. By | ||
432 | * calling this function again, we should recurse back into this | ||
433 | * function and count again. But this only happens if the arch | ||
434 | * supports all ftrace features and nothing else is using the | ||
435 | * function tracing utility. | ||
436 | */ | ||
437 | if (trace_selftest_recursion_cnt++) | ||
438 | return; | ||
439 | DYN_FTRACE_TEST_NAME(); | ||
440 | } | ||
441 | |||
442 | static struct ftrace_ops test_rec_probe = { | ||
443 | .func = trace_selftest_test_recursion_func, | ||
444 | }; | ||
445 | |||
446 | static struct ftrace_ops test_recsafe_probe = { | ||
447 | .func = trace_selftest_test_recursion_safe_func, | ||
448 | .flags = FTRACE_OPS_FL_RECURSION_SAFE, | ||
449 | }; | ||
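The two ops above are the core of the test: test_rec_probe leaves recursion protection to the ftrace infrastructure, while test_recsafe_probe claims to handle it itself. Conceptually the core's protection behaves like the wrapper below; this is an illustrative sketch only, not the actual ftrace internals (real_callback is hypothetical):

static void real_callback(unsigned long ip, unsigned long pip,
			  struct ftrace_ops *op, struct pt_regs *pt_regs);

static DEFINE_PER_CPU(int, in_callback);

static void guarded_call(unsigned long ip, unsigned long pip,
			 struct ftrace_ops *op, struct pt_regs *pt_regs)
{
	/* ftrace callbacks run with preemption disabled, so this is safe */
	if (this_cpu_inc_return(in_callback) == 1)
		real_callback(ip, pip, op, pt_regs);
	this_cpu_dec(in_callback);
}

With such a guard in place, the nested DYN_FTRACE_TEST_NAME() call from the unsafe callback is dropped and its count stays at 1; the recursion-safe callback is allowed to re-enter, which is why the test below expects a count of 2 when the arch supports the non-list fast path.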
450 | |||
451 | static int | ||
452 | trace_selftest_function_recursion(void) | ||
453 | { | ||
454 | int save_ftrace_enabled = ftrace_enabled; | ||
455 | int save_tracer_enabled = tracer_enabled; | ||
456 | char *func_name; | ||
457 | int len; | ||
458 | int ret; | ||
459 | int cnt; | ||
460 | |||
461 | /* The previous test PASSED */ | ||
462 | pr_cont("PASSED\n"); | ||
463 | pr_info("Testing ftrace recursion: "); | ||
464 | |||
465 | |||
466 | /* enable tracing, and record the filter function */ | ||
467 | ftrace_enabled = 1; | ||
468 | tracer_enabled = 1; | ||
469 | |||
470 | /* Handle PPC64 '.' name */ | ||
471 | func_name = "*" __stringify(DYN_FTRACE_TEST_NAME); | ||
472 | len = strlen(func_name); | ||
473 | |||
474 | ret = ftrace_set_filter(&test_rec_probe, func_name, len, 1); | ||
475 | if (ret) { | ||
476 | pr_cont("*Could not set filter* "); | ||
477 | goto out; | ||
478 | } | ||
479 | |||
480 | ret = register_ftrace_function(&test_rec_probe); | ||
481 | if (ret) { | ||
482 | pr_cont("*could not register callback* "); | ||
483 | goto out; | ||
484 | } | ||
485 | |||
486 | DYN_FTRACE_TEST_NAME(); | ||
487 | |||
488 | unregister_ftrace_function(&test_rec_probe); | ||
489 | |||
490 | ret = -1; | ||
491 | if (trace_selftest_recursion_cnt != 1) { | ||
492 | pr_cont("*callback not called once (%d)* ", | ||
493 | trace_selftest_recursion_cnt); | ||
494 | goto out; | ||
495 | } | ||
496 | |||
497 | trace_selftest_recursion_cnt = 1; | ||
498 | |||
499 | pr_cont("PASSED\n"); | ||
500 | pr_info("Testing ftrace recursion safe: "); | ||
501 | |||
502 | ret = ftrace_set_filter(&test_recsafe_probe, func_name, len, 1); | ||
503 | if (ret) { | ||
504 | pr_cont("*Could not set filter* "); | ||
505 | goto out; | ||
506 | } | ||
507 | |||
508 | ret = register_ftrace_function(&test_recsafe_probe); | ||
509 | if (ret) { | ||
510 | pr_cont("*could not register callback* "); | ||
511 | goto out; | ||
512 | } | ||
513 | |||
514 | DYN_FTRACE_TEST_NAME(); | ||
515 | |||
516 | unregister_ftrace_function(&test_recsafe_probe); | ||
517 | |||
518 | /* | ||
519 | * If the arch supports all ftrace features, and no other task | ||
520 | * was on the list, we should be fine. | ||
521 | */ | ||
522 | if (!ftrace_nr_registered_ops() && !FTRACE_FORCE_LIST_FUNC) | ||
523 | cnt = 2; /* Should have recursed */ | ||
524 | else | ||
525 | cnt = 1; | ||
526 | |||
527 | ret = -1; | ||
528 | if (trace_selftest_recursion_cnt != cnt) { | ||
529 | pr_cont("*callback not called expected %d times (%d)* ", | ||
530 | cnt, trace_selftest_recursion_cnt); | ||
531 | goto out; | ||
532 | } | ||
533 | |||
534 | ret = 0; | ||
535 | out: | ||
536 | ftrace_enabled = save_ftrace_enabled; | ||
537 | tracer_enabled = save_tracer_enabled; | ||
538 | |||
539 | return ret; | ||
540 | } | ||
396 | #else | 541 | #else |
397 | # define trace_selftest_startup_dynamic_tracing(trace, tr, func) ({ 0; }) | 542 | # define trace_selftest_startup_dynamic_tracing(trace, tr, func) ({ 0; }) |
543 | # define trace_selftest_function_recursion() ({ 0; }) | ||
398 | #endif /* CONFIG_DYNAMIC_FTRACE */ | 544 | #endif /* CONFIG_DYNAMIC_FTRACE */ |
399 | 545 | ||
546 | static enum { | ||
547 | TRACE_SELFTEST_REGS_START, | ||
548 | TRACE_SELFTEST_REGS_FOUND, | ||
549 | TRACE_SELFTEST_REGS_NOT_FOUND, | ||
550 | } trace_selftest_regs_stat; | ||
551 | |||
552 | static void trace_selftest_test_regs_func(unsigned long ip, | ||
553 | unsigned long pip, | ||
554 | struct ftrace_ops *op, | ||
555 | struct pt_regs *pt_regs) | ||
556 | { | ||
557 | if (pt_regs) | ||
558 | trace_selftest_regs_stat = TRACE_SELFTEST_REGS_FOUND; | ||
559 | else | ||
560 | trace_selftest_regs_stat = TRACE_SELFTEST_REGS_NOT_FOUND; | ||
561 | } | ||
562 | |||
563 | static struct ftrace_ops test_regs_probe = { | ||
564 | .func = trace_selftest_test_regs_func, | ||
565 | .flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_SAVE_REGS, | ||
566 | }; | ||
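FTRACE_OPS_FL_SAVE_REGS asks the core to hand the callback a complete pt_regs, and, as the test below verifies, registration fails outright on an arch that cannot provide it; the _IF_SUPPORTED variant registers anyway and passes NULL instead. A small consumer sketch (the names are made up):

static unsigned long last_regs_ip;

static void regs_peek_func(unsigned long ip, unsigned long parent_ip,
			   struct ftrace_ops *op, struct pt_regs *pt_regs)
{
	/* pt_regs may be NULL when the arch could not save registers */
	if (pt_regs)
		last_regs_ip = instruction_pointer(pt_regs);
}

static struct ftrace_ops regs_peek_ops = {
	.func	= regs_peek_func,
	.flags	= FTRACE_OPS_FL_RECURSION_SAFE |
		  FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED,
};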
567 | |||
568 | static int | ||
569 | trace_selftest_function_regs(void) | ||
570 | { | ||
571 | int save_ftrace_enabled = ftrace_enabled; | ||
572 | int save_tracer_enabled = tracer_enabled; | ||
573 | char *func_name; | ||
574 | int len; | ||
575 | int ret; | ||
576 | int supported = 0; | ||
577 | |||
578 | #ifdef ARCH_SUPPORTS_FTRACE_SAVE_REGS | ||
579 | supported = 1; | ||
580 | #endif | ||
581 | |||
582 | /* The previous test PASSED */ | ||
583 | pr_cont("PASSED\n"); | ||
584 | pr_info("Testing ftrace regs%s: ", | ||
585 | !supported ? "(no arch support)" : ""); | ||
586 | |||
587 | /* enable tracing, and record the filter function */ | ||
588 | ftrace_enabled = 1; | ||
589 | tracer_enabled = 1; | ||
590 | |||
591 | /* Handle PPC64 '.' name */ | ||
592 | func_name = "*" __stringify(DYN_FTRACE_TEST_NAME); | ||
593 | len = strlen(func_name); | ||
594 | |||
595 | ret = ftrace_set_filter(&test_regs_probe, func_name, len, 1); | ||
596 | /* | ||
597 | * If DYNAMIC_FTRACE is not set, then we just trace all functions. | ||
598 | * This test really doesn't care. | ||
599 | */ | ||
600 | if (ret && ret != -ENODEV) { | ||
601 | pr_cont("*Could not set filter* "); | ||
602 | goto out; | ||
603 | } | ||
604 | |||
605 | ret = register_ftrace_function(&test_regs_probe); | ||
606 | /* | ||
607 | * Now if the arch does not support passing regs, then this should | ||
608 | * have failed. | ||
609 | */ | ||
610 | if (!supported) { | ||
611 | if (!ret) { | ||
612 | pr_cont("*registered save-regs without arch support* "); | ||
613 | goto out; | ||
614 | } | ||
615 | test_regs_probe.flags |= FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED; | ||
616 | ret = register_ftrace_function(&test_regs_probe); | ||
617 | } | ||
618 | if (ret) { | ||
619 | pr_cont("*could not register callback* "); | ||
620 | goto out; | ||
621 | } | ||
622 | |||
623 | |||
624 | DYN_FTRACE_TEST_NAME(); | ||
625 | |||
626 | unregister_ftrace_function(&test_regs_probe); | ||
627 | |||
628 | ret = -1; | ||
629 | |||
630 | switch (trace_selftest_regs_stat) { | ||
631 | case TRACE_SELFTEST_REGS_START: | ||
632 | pr_cont("*callback never called* "); | ||
633 | goto out; | ||
634 | |||
635 | case TRACE_SELFTEST_REGS_FOUND: | ||
636 | if (supported) | ||
637 | break; | ||
638 | pr_cont("*callback received regs without arch support* "); | ||
639 | goto out; | ||
640 | |||
641 | case TRACE_SELFTEST_REGS_NOT_FOUND: | ||
642 | if (!supported) | ||
643 | break; | ||
644 | pr_cont("*callback received NULL regs* "); | ||
645 | goto out; | ||
646 | } | ||
647 | |||
648 | ret = 0; | ||
649 | out: | ||
650 | ftrace_enabled = save_ftrace_enabled; | ||
651 | tracer_enabled = save_tracer_enabled; | ||
652 | |||
653 | return ret; | ||
654 | } | ||
655 | |||
400 | /* | 656 | /* |
401 | * Simple verification test of ftrace function tracer. | 657 | * Simple verification test of ftrace function tracer. |
402 | * Enable ftrace, sleep 1/10 second, and then read the trace | 658 | * Enable ftrace, sleep 1/10 second, and then read the trace |
@@ -442,7 +698,14 @@ trace_selftest_startup_function(struct tracer *trace, struct trace_array *tr) | |||
442 | 698 | ||
443 | ret = trace_selftest_startup_dynamic_tracing(trace, tr, | 699 | ret = trace_selftest_startup_dynamic_tracing(trace, tr, |
444 | DYN_FTRACE_TEST_NAME); | 700 | DYN_FTRACE_TEST_NAME); |
701 | if (ret) | ||
702 | goto out; | ||
445 | 703 | ||
704 | ret = trace_selftest_function_recursion(); | ||
705 | if (ret) | ||
706 | goto out; | ||
707 | |||
708 | ret = trace_selftest_function_regs(); | ||
446 | out: | 709 | out: |
447 | ftrace_enabled = save_ftrace_enabled; | 710 | ftrace_enabled = save_ftrace_enabled; |
448 | tracer_enabled = save_tracer_enabled; | 711 | tracer_enabled = save_tracer_enabled; |
@@ -778,6 +1041,8 @@ static int trace_wakeup_test_thread(void *data) | |||
778 | set_current_state(TASK_INTERRUPTIBLE); | 1041 | set_current_state(TASK_INTERRUPTIBLE); |
779 | schedule(); | 1042 | schedule(); |
780 | 1043 | ||
1044 | complete(x); | ||
1045 | |||
781 | /* we are awake, now wait to disappear */ | 1046 | /* we are awake, now wait to disappear */ |
782 | while (!kthread_should_stop()) { | 1047 | while (!kthread_should_stop()) { |
783 | /* | 1048 | /* |
@@ -821,24 +1086,21 @@ trace_selftest_startup_wakeup(struct tracer *trace, struct trace_array *tr) | |||
821 | /* reset the max latency */ | 1086 | /* reset the max latency */ |
822 | tracing_max_latency = 0; | 1087 | tracing_max_latency = 0; |
823 | 1088 | ||
824 | /* sleep to let the RT thread sleep too */ | 1089 | while (p->on_rq) { |
825 | msleep(100); | 1090 | /* |
1091 | * Sleep to make sure the RT thread is asleep too. | ||
1092 | * On virtual machines we can't rely on timings, | ||
1093 | * but we want to make sure this test still works. | ||
1094 | */ | ||
1095 | msleep(100); | ||
1096 | } | ||
826 | 1097 | ||
827 | /* | 1098 | init_completion(&isrt); |
828 | * Yes this is slightly racy. It is possible that for some | ||
829 | * strange reason that the RT thread we created, did not | ||
830 | * call schedule for 100ms after doing the completion, | ||
831 | * and we do a wakeup on a task that already is awake. | ||
832 | * But that is extremely unlikely, and the worst thing that | ||
833 | * happens in such a case, is that we disable tracing. | ||
834 | * Honestly, if this race does happen something is horribly | ||
835 | * wrong with the system. | ||
836 | */ | ||
837 | 1099 | ||
838 | wake_up_process(p); | 1100 | wake_up_process(p); |
839 | 1101 | ||
840 | /* give a little time to let the thread wake up */ | 1102 | /* Wait for the task to wake up */ |
841 | msleep(100); | 1103 | wait_for_completion(&isrt); |
842 | 1104 | ||
843 | /* stop the tracing. */ | 1105 | /* stop the tracing. */ |
844 | tracing_stop(); | 1106 | tracing_stop(); |
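The rework above replaces a fixed msleep() and its documented race with an explicit handshake: the RT thread signals a completion the moment it is running again, so the test can neither wake a task that never slept nor stop tracing too early. The underlying pattern, sketched with hypothetical names:

#include <linux/completion.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static DECLARE_COMPLETION(woken);

static int sleeper_thread(void *data)
{
	set_current_state(TASK_INTERRUPTIBLE);
	schedule();		/* sleep until the test wakes us up */
	complete(&woken);	/* signal: we are provably awake now */
	while (!kthread_should_stop())
		schedule_timeout_interruptible(HZ / 10);
	return 0;
}

static void wake_and_wait(struct task_struct *task)
{
	wake_up_process(task);
	wait_for_completion(&woken);	/* replaces the timing-based sleep */
}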
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c index d4545f49242e..0c1b165778e5 100644 --- a/kernel/trace/trace_stack.c +++ b/kernel/trace/trace_stack.c | |||
@@ -111,7 +111,8 @@ static inline void check_stack(void) | |||
111 | } | 111 | } |
112 | 112 | ||
113 | static void | 113 | static void |
114 | stack_trace_call(unsigned long ip, unsigned long parent_ip) | 114 | stack_trace_call(unsigned long ip, unsigned long parent_ip, |
115 | struct ftrace_ops *op, struct pt_regs *pt_regs) | ||
115 | { | 116 | { |
116 | int cpu; | 117 | int cpu; |
117 | 118 | ||
@@ -136,6 +137,7 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip) | |||
136 | static struct ftrace_ops trace_ops __read_mostly = | 137 | static struct ftrace_ops trace_ops __read_mostly = |
137 | { | 138 | { |
138 | .func = stack_trace_call, | 139 | .func = stack_trace_call, |
140 | .flags = FTRACE_OPS_FL_RECURSION_SAFE, | ||
139 | }; | 141 | }; |
140 | 142 | ||
141 | static ssize_t | 143 | static ssize_t |
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c index 6b245f64c8dd..2485a7d09b11 100644 --- a/kernel/trace/trace_syscalls.c +++ b/kernel/trace/trace_syscalls.c | |||
@@ -487,7 +487,7 @@ int __init init_ftrace_syscalls(void) | |||
487 | 487 | ||
488 | return 0; | 488 | return 0; |
489 | } | 489 | } |
490 | core_initcall(init_ftrace_syscalls); | 490 | early_initcall(init_ftrace_syscalls); |
491 | 491 | ||
492 | #ifdef CONFIG_PERF_EVENTS | 492 | #ifdef CONFIG_PERF_EVENTS |
493 | 493 | ||
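The switch from core_initcall() to early_initcall() moves syscall-event setup forward in the boot sequence: early initcalls run from do_pre_smp_initcalls(), ahead of all the numbered initcall levels. Roughly (ordering per include/linux/init.h, middle levels elided):

/*
 * Boot-time initcall ordering, earliest first:
 *   early_initcall()  - before SMP bring-up and the leveled initcalls
 *   pure_initcall()   - level 0
 *   core_initcall()   - level 1
 *   ...
 *   late_initcall()   - level 7
 */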