author    Linus Torvalds <torvalds@linux-foundation.org>  2010-03-05 13:50:22 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2010-03-05 13:50:22 -0500
commit    660f6a360be399f4ebdd6572a3d24afe54e9bb1c (patch)
tree      9c16463c495a656e34577d59c97b58997b61d242
parent    586fac13f8685bf9dfb32e1ee98bfb14f0dd0061 (diff)
parent    e5a11016643d1ab7172193591506d33a844734cc (diff)
Merge branch 'perf-probes-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'perf-probes-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Issue at least one memory barrier in stop_machine_text_poke()
  perf probe: Correct probe syntax on command line help
  perf probe: Add lazy line matching support
  perf probe: Show more lines after last line
  perf probe: Check function address range strictly in line finder
  perf probe: Use libdw callback routines
  perf probe: Use elfutils-libdw for analyzing debuginfo
  perf probe: Rename probe finder functions
  perf probe: Fix bugs in line range finder
  perf probe: Update perf probe document
  perf probe: Do not show --line option without dwarf support
  kprobes: Add documents of jump optimization
  kprobes/x86: Support kprobes jump optimization on x86
  x86: Add text_poke_smp for SMP cross modifying code
  kprobes/x86: Cleanup save/restore registers
  kprobes/x86: Boost probes when reentering
  kprobes: Jump optimization sysctl interface
  kprobes: Introduce kprobes jump optimization
  kprobes: Introduce generic insn_slot framework
  kprobes/x86: Cleanup RELATIVEJUMP_INSTRUCTION to RELATIVEJUMP_OPCODE
-rw-r--r--  Documentation/kprobes.txt               |  207
-rw-r--r--  arch/Kconfig                            |   13
-rw-r--r--  arch/x86/Kconfig                        |    1
-rw-r--r--  arch/x86/include/asm/alternative.h      |    4
-rw-r--r--  arch/x86/include/asm/kprobes.h          |   31
-rw-r--r--  arch/x86/kernel/alternative.c           |   60
-rw-r--r--  arch/x86/kernel/kprobes.c               |  609
-rw-r--r--  include/linux/kprobes.h                 |   44
-rw-r--r--  kernel/kprobes.c                        |  647
-rw-r--r--  kernel/sysctl.c                         |   12
-rw-r--r--  tools/perf/Documentation/perf-probe.txt |   58
-rw-r--r--  tools/perf/Makefile                     |   10
-rw-r--r--  tools/perf/builtin-probe.c              |   36
-rw-r--r--  tools/perf/util/probe-event.c           |   55
-rw-r--r--  tools/perf/util/probe-finder.c          | 1002
-rw-r--r--  tools/perf/util/probe-finder.h          |   53
-rw-r--r--  tools/perf/util/string.c                |   55
-rw-r--r--  tools/perf/util/string.h                |    1
18 files changed, 2063 insertions(+), 835 deletions(-)
diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt
index 053037a1fe6d..2f9115c0ae62 100644
--- a/Documentation/kprobes.txt
+++ b/Documentation/kprobes.txt
@@ -1,6 +1,7 @@
1Title : Kernel Probes (Kprobes) 1Title : Kernel Probes (Kprobes)
2Authors : Jim Keniston <jkenisto@us.ibm.com> 2Authors : Jim Keniston <jkenisto@us.ibm.com>
3 : Prasanna S Panchamukhi <prasanna@in.ibm.com> 3 : Prasanna S Panchamukhi <prasanna.panchamukhi@gmail.com>
4 : Masami Hiramatsu <mhiramat@redhat.com>
4 5
5CONTENTS 6CONTENTS
6 7
@@ -15,6 +16,7 @@ CONTENTS
159. Jprobes Example 169. Jprobes Example
1610. Kretprobes Example 1710. Kretprobes Example
17Appendix A: The kprobes debugfs interface 18Appendix A: The kprobes debugfs interface
19Appendix B: The kprobes sysctl interface
18 20
191. Concepts: Kprobes, Jprobes, Return Probes 211. Concepts: Kprobes, Jprobes, Return Probes
20 22
@@ -42,13 +44,13 @@ registration/unregistration of a group of *probes. These functions
42can speed up unregistration process when you have to unregister 44can speed up unregistration process when you have to unregister
43a lot of probes at once. 45a lot of probes at once.
44 46
45The next three subsections explain how the different types of 47The next four subsections explain how the different types of
46probes work. They explain certain things that you'll need to 48probes work and how jump optimization works. They explain certain
47know in order to make the best use of Kprobes -- e.g., the 49things that you'll need to know in order to make the best use of
48difference between a pre_handler and a post_handler, and how 50Kprobes -- e.g., the difference between a pre_handler and
49to use the maxactive and nmissed fields of a kretprobe. But 51a post_handler, and how to use the maxactive and nmissed fields of
50if you're in a hurry to start using Kprobes, you can skip ahead 52a kretprobe. But if you're in a hurry to start using Kprobes, you
51to section 2. 53can skip ahead to section 2.
52 54
531.1 How Does a Kprobe Work? 551.1 How Does a Kprobe Work?
54 56
@@ -161,13 +163,125 @@ In case probed function is entered but there is no kretprobe_instance
161object available, then in addition to incrementing the nmissed count, 163object available, then in addition to incrementing the nmissed count,
162the user entry_handler invocation is also skipped. 164the user entry_handler invocation is also skipped.
163 165
1661.4 How Does Jump Optimization Work?
167
168If you configured your kernel with CONFIG_OPTPROBES=y (currently
169this option is supported on x86/x86-64 with non-preemptive kernels) and
170the "debug.kprobes_optimization" sysctl knob is set to 1 (see
171sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
172instruction instead of a breakpoint instruction at each probepoint.
173
1741.4.1 Init a Kprobe
175
176When a probe is registered, before attempting this optimization,
177Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
178address. So, even if it's not possible to optimize this particular
179probepoint, there'll be a probe there.
180
1811.4.2 Safety Check
182
183Before optimizing a probe, Kprobes performs the following safety checks:
184
185- Kprobes verifies that the region that will be replaced by the jump
186instruction (the "optimized region") lies entirely within one function.
187(A jump instruction is multiple bytes, and so may overlay multiple
188instructions.)
189
190- Kprobes analyzes the entire function and verifies that there is no
191jump into the optimized region. Specifically:
192 - the function contains no indirect jump;
193 - the function contains no instruction that causes an exception (since
194 the fixup code triggered by the exception could jump back into the
195 optimized region -- Kprobes checks the exception tables to verify this);
196 and
197 - there is no near jump to the optimized region (other than to the first
198 byte).
199
200- For each instruction in the optimized region, Kprobes verifies that
201the instruction can be executed out of line.
202
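For illustration, the jump-into-region part of these checks can be
sketched in a few lines of userspace C. The struct and numbers below are
toy stand-ins invented for the example (the real check, working on the
kernel's struct insn, is insn_jump_into_range() in the x86 patch later
in this commit):

#include <stdio.h>

/* Simplified stand-in for the kernel's struct insn. */
struct toy_insn {
	unsigned long next_byte;	/* address of the following instruction */
	long imm;			/* signed relative displacement */
};

/* A relative branch lands at next_byte + imm. */
static int jumps_into_range(const struct toy_insn *insn,
			    unsigned long start, int len)
{
	unsigned long target = insn->next_byte + insn->imm;

	return start <= target && target <= start + len;
}

int main(void)
{
	/* A short backward jump sitting just after the probed region. */
	struct toy_insn jmp = { .next_byte = 0x1010, .imm = -12 };

	/* Region to protect: the 4 address bytes after the int3 at 0x1000. */
	printf("%d\n", jumps_into_range(&jmp, 0x1001, 4));	/* prints 1: unsafe */
	return 0;
}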
2031.4.3 Preparing Detour Buffer
204
205Next, Kprobes prepares a "detour" buffer, which contains the following
206instruction sequence:
207- code to push the CPU's registers (emulating a breakpoint trap)
208- a call to the trampoline code which calls user's probe handlers.
209- code to restore registers
210- the instructions from the optimized region
211- a jump back to the original execution path.
212
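As a rough byte-level sketch of that layout (dummy NOP bytes, not real
machine code, and toy sizes), the buffer is assembled in exactly this
order; compare arch_prepare_optimized_kprobe() in the x86 patch below:

#include <stdio.h>
#include <string.h>

#define RELATIVEJUMP_SIZE 5	/* jmp rel32: 1 opcode byte + 4 address bytes */

int main(void)
{
	unsigned char buf[64];
	unsigned char tmpl[16];		/* save regs, call handlers, restore regs */
	unsigned char copied[7];	/* relocated insns from the optimized region */

	memset(tmpl, 0x90, sizeof(tmpl));
	memset(copied, 0x90, sizeof(copied));

	memcpy(buf, tmpl, sizeof(tmpl));			/* steps 1-3 above */
	memcpy(buf + sizeof(tmpl), copied, sizeof(copied));	/* step 4 */
	buf[sizeof(tmpl) + sizeof(copied)] = 0xe9;		/* step 5: jmp back */
	/* ...the 4 displacement bytes of the return jump would follow here */

	printf("detour buffer: %zu template + %zu copied + %d jump bytes\n",
	       sizeof(tmpl), sizeof(copied), RELATIVEJUMP_SIZE);
	return 0;
}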
2131.4.4 Pre-optimization
214
215After preparing the detour buffer, Kprobes verifies that none of the
216following situations exist:
217- The probe has either a break_handler (i.e., it's a jprobe) or a
218post_handler.
219- Other instructions in the optimized region are probed.
220- The probe is disabled.
221In any of the above cases, Kprobes won't start optimizing the probe.
222Since these are temporary situations, Kprobes tries to start
223optimizing it again once the situation changes.
224
225If the kprobe can be optimized, Kprobes enqueues the kprobe to an
226optimizing list, and kicks the kprobe-optimizer workqueue to optimize
227it. If the to-be-optimized probepoint is hit before being optimized,
228Kprobes returns control to the original instruction path by setting
229the CPU's instruction pointer to the copied code in the detour buffer
230-- thus at least avoiding the single-step.
231
2321.4.5 Optimization
233
234The Kprobe-optimizer doesn't insert the jump instruction immediately;
235rather, it calls synchronize_sched() for safety first, because it's
236possible for a CPU to be interrupted in the middle of executing the
237optimized region(*). As you know, synchronize_sched() can ensure
238that all interruptions that were active when synchronize_sched()
239was called are done, but only if CONFIG_PREEMPT=n. So, this version
240of kprobe optimization supports only kernels with CONFIG_PREEMPT=n.(**)
241
242After that, the Kprobe-optimizer calls stop_machine() to replace
243the optimized region with a jump instruction to the detour buffer,
244using text_poke_smp().
245
2461.4.6 Unoptimization
247
248When an optimized kprobe is unregistered, disabled, or blocked by
249another kprobe, it will be unoptimized. If this happens before
250the optimization is complete, the kprobe is just dequeued from the
251optimized list. If the optimization has been done, the jump is
252replaced with the original code (except for an int3 breakpoint in
253the first byte) by using text_poke_smp().
254
255(*)Please imagine that the 2nd instruction is interrupted and then
256the optimizer replaces the 2nd instruction with the jump *address*
257while the interrupt handler is running. When the interrupt
258returns to the original address, there is no valid instruction,
259and it causes an unexpected result.
260
261(**)This optimization-safety checking may be replaced with the
262stop-machine method that ksplice uses for supporting a CONFIG_PREEMPT=y
263kernel.
264
265NOTE for geeks:
266The jump optimization changes the kprobe's pre_handler behavior.
267Without optimization, the pre_handler can change the kernel's execution
268path by changing regs->ip and returning 1. However, when the probe
269is optimized, that modification is ignored. Thus, if you want to
270tweak the kernel's execution path, you need to suppress optimization,
271using one of the following techniques:
272- Specify an empty function for the kprobe's post_handler or break_handler.
273 or
274- Config CONFIG_OPTPROBES=n.
275 or
276- Execute 'sysctl -w debug.kprobes_optimization=n'
277
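For example, the first technique looks like this in a module. This is
only a sketch: "do_fork" is an arbitrary example symbol and the handlers
do nothing useful; the point is that the mere presence of a post_handler
keeps the probe unoptimized, so a regs->ip change in the pre_handler is
honored.

#include <linux/module.h>
#include <linux/kprobes.h>

static int my_pre(struct kprobe *p, struct pt_regs *regs)
{
	/* may modify regs->ip and return 1 here, since probe is unoptimized */
	return 0;
}

/* Empty post_handler: its presence alone suppresses jump optimization. */
static void my_post(struct kprobe *p, struct pt_regs *regs,
		    unsigned long flags)
{
}

static struct kprobe my_kp = {
	.symbol_name  = "do_fork",
	.pre_handler  = my_pre,
	.post_handler = my_post,
};

static int __init my_init(void)
{
	return register_kprobe(&my_kp);
}

static void __exit my_exit(void)
{
	unregister_kprobe(&my_kp);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");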
1642. Architectures Supported 2782. Architectures Supported
165 279
166Kprobes, jprobes, and return probes are implemented on the following 280Kprobes, jprobes, and return probes are implemented on the following
167architectures: 281architectures:
168 282
169- i386 283- i386 (Supports jump optimization)
170- x86_64 (AMD-64, EM64T) 284- x86_64 (AMD-64, EM64T) (Supports jump optimization)
171- ppc64 285- ppc64
172- ia64 (Does not support probes on instruction slot1.) 286- ia64 (Does not support probes on instruction slot1.)
173- sparc64 (Return probes not yet implemented.) 287- sparc64 (Return probes not yet implemented.)
@@ -193,6 +307,10 @@ it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
193so you can use "objdump -d -l vmlinux" to see the source-to-object 307so you can use "objdump -d -l vmlinux" to see the source-to-object
194code mapping. 308code mapping.
195 309
310If you want to reduce probing overhead, set "Kprobes jump optimization
311support" (CONFIG_OPTPROBES) to "y". You can find this option under the
312"Kprobes" line.
313
1964. API Reference 3144. API Reference
197 315
198The Kprobes API includes a "register" function and an "unregister" 316The Kprobes API includes a "register" function and an "unregister"
@@ -389,7 +507,10 @@ the probe which has been registered.
389 507
390Kprobes allows multiple probes at the same address. Currently, 508Kprobes allows multiple probes at the same address. Currently,
391however, there cannot be multiple jprobes on the same function at 509however, there cannot be multiple jprobes on the same function at
392the same time. 510the same time. Also, a probepoint for which there is a jprobe or
511a post_handler cannot be optimized. So if you install a jprobe,
512or a kprobe with a post_handler, at an optimized probepoint, the
513probepoint will be unoptimized automatically.
393 514
394In general, you can install a probe anywhere in the kernel. 515In general, you can install a probe anywhere in the kernel.
395In particular, you can probe interrupt handlers. Known exceptions 516In particular, you can probe interrupt handlers. Known exceptions
@@ -453,6 +574,38 @@ reason, Kprobes doesn't support return probes (or kprobes or jprobes)
453on the x86_64 version of __switch_to(); the registration functions 574on the x86_64 version of __switch_to(); the registration functions
454return -EINVAL. 575return -EINVAL.
455 576
577On x86/x86-64, since the Jump Optimization of Kprobes modifies
578instructions widely, there are some limitations to optimization. To
579explain them, we introduce some terminology. Imagine a 3-instruction
580sequence consisting of two 2-byte instructions and one 3-byte
581instruction.
582
583        IA
584         |
585[-2][-1][0][1][2][3][4][5][6][7]
586        [ins1][ins2][  ins3 ]
587        [<-     DCR       ->]
588           [<- JTPR ->]
589
590ins1: 1st Instruction
591ins2: 2nd Instruction
592ins3: 3rd Instruction
593IA: Insertion Address
594JTPR: Jump Target Prohibition Region
595DCR: Detoured Code Region
596
597The instructions in DCR are copied to the out-of-line buffer
598of the kprobe, because the bytes in DCR are replaced by
599a 5-byte jump instruction. So there are several limitations.
600
601a) The instructions in DCR must be relocatable.
602b) The instructions in DCR must not include a call instruction.
603c) JTPR must not be targeted by any jump or call instruction.
604d) DCR must not straddle the border between functions.
605
606Anyway, these limitations are checked by the in-kernel instruction
607decoder, so you don't need to worry about them.
608
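As a userspace sketch of what actually lands on DCR: the 5-byte jump is
an 0xe9 opcode plus a 32-bit displacement measured from the end of the
jump itself. This mirrors __synthesize_relative_insn() in the x86 patch
below; the addresses are made up for the example.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static void synthesize_jump(uint8_t *buf, uint64_t from, uint64_t to)
{
	int32_t rel = (int32_t)(to - (from + 5));	/* measured from next insn */

	buf[0] = 0xe9;					/* jmp rel32 opcode */
	memcpy(buf + 1, &rel, sizeof(rel));		/* little-endian displacement */
}

int main(void)
{
	uint8_t jmp[5];

	synthesize_jump(jmp, 0x1000, 0x2000);		/* 0x2000 - 0x1005 = 0xffb */
	for (int i = 0; i < 5; i++)
		printf("%02x ", jmp[i]);
	printf("\n");					/* prints: e9 fb 0f 00 00 */
	return 0;
}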
4566. Probe Overhead 6096. Probe Overhead
457 610
458On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0 611On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
@@ -476,6 +629,19 @@ k = 0.49 usec; j = 0.76; r = 0.80; kr = 0.82; jr = 1.07
476ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU) 629ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
477k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99 630k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99
478 631
6326.1 Optimized Probe Overhead
633
634Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
635process. Here are sample overhead figures (in usec) for x86 architectures.
636k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
637r = unoptimized kretprobe, rb = boosted kretprobe, ro = optimized kretprobe.
638
639i386: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
640k = 0.80 usec; b = 0.33; o = 0.05; r = 1.10; rb = 0.61; ro = 0.33
641
642x86-64: Intel(R) Xeon(R) E5410, 2.33GHz, 4656.90 bogomips
643k = 0.99 usec; b = 0.43; o = 0.06; r = 1.24; rb = 0.68; ro = 0.30
644
4797. TODO 6457. TODO
480 646
481a. SystemTap (http://sourceware.org/systemtap): Provides a simplified 647a. SystemTap (http://sourceware.org/systemtap): Provides a simplified
@@ -523,7 +689,8 @@ is also specified. Following columns show probe status. If the probe is on
523a virtual address that is no longer valid (module init sections, module 689a virtual address that is no longer valid (module init sections, module
524virtual addresses that correspond to modules that've been unloaded), 690virtual addresses that correspond to modules that've been unloaded),
525such probes are marked with [GONE]. If the probe is temporarily disabled, 691such probes are marked with [GONE]. If the probe is temporarily disabled,
526such probes are marked with [DISABLED]. 692such probes are marked with [DISABLED]. If the probe is optimized, it is
693marked with [OPTIMIZED].
527 694
528/sys/kernel/debug/kprobes/enabled: Turn kprobes ON/OFF forcibly. 695/sys/kernel/debug/kprobes/enabled: Turn kprobes ON/OFF forcibly.
529 696
@@ -533,3 +700,19 @@ registered probes will be disarmed, till such time a "1" is echoed to this
533file. Note that this knob just disarms and arms all kprobes and doesn't 700file. Note that this knob just disarms and arms all kprobes and doesn't
534change each probe's disabling state. This means that disabled kprobes (marked 701change each probe's disabling state. This means that disabled kprobes (marked
535[DISABLED]) will not be enabled if you turn ON all kprobes by this knob. 702[DISABLED]) will not be enabled if you turn ON all kprobes by this knob.
703
704
705Appendix B: The kprobes sysctl interface
706
707/proc/sys/debug/kprobes-optimization: Turn kprobes optimization ON/OFF.
708
709When CONFIG_OPTPROBES=y, this sysctl interface appears and it provides
710a knob to globally and forcibly turn jump optimization (see section
7111.4) ON or OFF. By default, jump optimization is allowed (ON).
712If you echo "0" to this file or set "debug.kprobes_optimization" to
7130 via sysctl, all optimized probes will be unoptimized, and any new
714probes registered after that will not be optimized. Note that this
715knob *changes* the optimized state. This means that optimized probes
716(marked [OPTIMIZED]) will be unoptimized (the [OPTIMIZED] tag will be
717removed). If the knob is turned on, they will be optimized again.
718
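The same knob can also be flipped programmatically by writing the procfs
file directly, equivalent to the sysctl command above. A minimal sketch
(requires root, and the file only exists when CONFIG_OPTPROBES=y):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/debug/kprobes-optimization", "w");

	if (!f) {
		perror("kprobes-optimization");
		return 1;
	}
	fputs("0\n", f);	/* 0 = unoptimize all probes, 1 = allow again */
	fclose(f);
	return 0;
}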
diff --git a/arch/Kconfig b/arch/Kconfig
index 215e46073c45..e5eb1337a537 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -41,6 +41,17 @@ config KPROBES
41 for kernel debugging, non-intrusive instrumentation and testing. 41 for kernel debugging, non-intrusive instrumentation and testing.
42 If in doubt, say "N". 42 If in doubt, say "N".
43 43
44config OPTPROBES
45 bool "Kprobes jump optimization support (EXPERIMENTAL)"
46 default y
47 depends on KPROBES
48 depends on !PREEMPT
49 depends on HAVE_OPTPROBES
50 select KALLSYMS_ALL
51 help
52 This option will allow kprobes to optimize breakpoint to
53 a jump for reducing its overhead.
54
44config HAVE_EFFICIENT_UNALIGNED_ACCESS 55config HAVE_EFFICIENT_UNALIGNED_ACCESS
45 bool 56 bool
46 help 57 help
@@ -83,6 +94,8 @@ config HAVE_KPROBES
83config HAVE_KRETPROBES 94config HAVE_KRETPROBES
84 bool 95 bool
85 96
97config HAVE_OPTPROBES
98 bool
86# 99#
87# An arch should select this if it provides all these things: 100# An arch should select this if it provides all these things:
88# 101#
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 57ccdcec1469..f15f37bfbd62 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -31,6 +31,7 @@ config X86
31 select ARCH_WANT_FRAME_POINTERS 31 select ARCH_WANT_FRAME_POINTERS
32 select HAVE_DMA_ATTRS 32 select HAVE_DMA_ATTRS
33 select HAVE_KRETPROBES 33 select HAVE_KRETPROBES
34 select HAVE_OPTPROBES
34 select HAVE_FTRACE_MCOUNT_RECORD 35 select HAVE_FTRACE_MCOUNT_RECORD
35 select HAVE_DYNAMIC_FTRACE 36 select HAVE_DYNAMIC_FTRACE
36 select HAVE_FUNCTION_TRACER 37 select HAVE_FUNCTION_TRACER
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index f1e253ceba4b..b09ec55650b3 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -165,10 +165,12 @@ static inline void apply_paravirt(struct paravirt_patch_site *start,
165 * invalid instruction possible) or if the instructions are changed from a 165 * invalid instruction possible) or if the instructions are changed from a
166 * consistent state to another consistent state atomically. 166 * consistent state to another consistent state atomically.
167 * More care must be taken when modifying code in the SMP case because of 167 * More care must be taken when modifying code in the SMP case because of
168 * Intel's errata. 168 * Intel's errata. text_poke_smp() takes care of that errata, but still
169 * doesn't support modifying NMI/MCE handler code.
169 * On the local CPU you need to be protected against NMI or MCE handlers seeing an 170 * On the local CPU you need to be protected against NMI or MCE handlers seeing an
170 * inconsistent instruction while you patch. 171 * inconsistent instruction while you patch.
171 */ 172 */
172extern void *text_poke(void *addr, const void *opcode, size_t len); 173extern void *text_poke(void *addr, const void *opcode, size_t len);
174extern void *text_poke_smp(void *addr, const void *opcode, size_t len);
173 175
174#endif /* _ASM_X86_ALTERNATIVE_H */ 176#endif /* _ASM_X86_ALTERNATIVE_H */
diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 4fe681de1e76..4ffa345a8ccb 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -32,7 +32,10 @@ struct kprobe;
32 32
33typedef u8 kprobe_opcode_t; 33typedef u8 kprobe_opcode_t;
34#define BREAKPOINT_INSTRUCTION 0xcc 34#define BREAKPOINT_INSTRUCTION 0xcc
35#define RELATIVEJUMP_INSTRUCTION 0xe9 35#define RELATIVEJUMP_OPCODE 0xe9
36#define RELATIVEJUMP_SIZE 5
37#define RELATIVECALL_OPCODE 0xe8
38#define RELATIVE_ADDR_SIZE 4
36#define MAX_INSN_SIZE 16 39#define MAX_INSN_SIZE 16
37#define MAX_STACK_SIZE 64 40#define MAX_STACK_SIZE 64
38#define MIN_STACK_SIZE(ADDR) \ 41#define MIN_STACK_SIZE(ADDR) \
@@ -44,6 +47,17 @@ typedef u8 kprobe_opcode_t;
44 47
45#define flush_insn_slot(p) do { } while (0) 48#define flush_insn_slot(p) do { } while (0)
46 49
50/* optinsn template addresses */
51extern kprobe_opcode_t optprobe_template_entry;
52extern kprobe_opcode_t optprobe_template_val;
53extern kprobe_opcode_t optprobe_template_call;
54extern kprobe_opcode_t optprobe_template_end;
55#define MAX_OPTIMIZED_LENGTH (MAX_INSN_SIZE + RELATIVE_ADDR_SIZE)
56#define MAX_OPTINSN_SIZE \
57 (((unsigned long)&optprobe_template_end - \
58 (unsigned long)&optprobe_template_entry) + \
59 MAX_OPTIMIZED_LENGTH + RELATIVEJUMP_SIZE)
60
47extern const int kretprobe_blacklist_size; 61extern const int kretprobe_blacklist_size;
48 62
49void arch_remove_kprobe(struct kprobe *p); 63void arch_remove_kprobe(struct kprobe *p);
@@ -64,6 +78,21 @@ struct arch_specific_insn {
64 int boostable; 78 int boostable;
65}; 79};
66 80
81struct arch_optimized_insn {
82 /* copy of the original instructions */
83 kprobe_opcode_t copied_insn[RELATIVE_ADDR_SIZE];
84 /* detour code buffer */
85 kprobe_opcode_t *insn;
86 /* the size of instructions copied to detour code buffer */
87 size_t size;
88};
89
90/* Return true (!0) if optinsn is prepared for optimization. */
91static inline int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
92{
93 return optinsn->size;
94}
95
67struct prev_kprobe { 96struct prev_kprobe {
68 struct kprobe *kp; 97 struct kprobe *kp;
69 unsigned long status; 98 unsigned long status;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index e6ea0342c8f8..3a4bf35c179b 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -7,6 +7,7 @@
7#include <linux/mm.h> 7#include <linux/mm.h>
8#include <linux/vmalloc.h> 8#include <linux/vmalloc.h>
9#include <linux/memory.h> 9#include <linux/memory.h>
10#include <linux/stop_machine.h>
10#include <asm/alternative.h> 11#include <asm/alternative.h>
11#include <asm/sections.h> 12#include <asm/sections.h>
12#include <asm/pgtable.h> 13#include <asm/pgtable.h>
@@ -572,3 +573,62 @@ void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
572 local_irq_restore(flags); 573 local_irq_restore(flags);
573 return addr; 574 return addr;
574} 575}
576
577/*
578 * Cross-modifying kernel text with stop_machine().
579 * This code originally comes from immediate value.
580 */
581static atomic_t stop_machine_first;
582static int wrote_text;
583
584struct text_poke_params {
585 void *addr;
586 const void *opcode;
587 size_t len;
588};
589
590static int __kprobes stop_machine_text_poke(void *data)
591{
592 struct text_poke_params *tpp = data;
593
594 if (atomic_dec_and_test(&stop_machine_first)) {
595 text_poke(tpp->addr, tpp->opcode, tpp->len);
596 smp_wmb(); /* Make sure other cpus see that this has run */
597 wrote_text = 1;
598 } else {
599 while (!wrote_text)
600 cpu_relax();
601 smp_mb(); /* Load wrote_text before following execution */
602 }
603
604 flush_icache_range((unsigned long)tpp->addr,
605 (unsigned long)tpp->addr + tpp->len);
606 return 0;
607}
608
609/**
610 * text_poke_smp - Update instructions on a live kernel on SMP
611 * @addr: address to modify
612 * @opcode: source of the copy
613 * @len: length to copy
614 *
615 * Modify multi-byte instructions by using stop_machine() on SMP. This allows
616 * a user to poke/set multi-byte text on SMP. Only non-NMI/MCE code modification
617 * should be allowed, since stop_machine() does _not_ protect code against
618 * NMI and MCE.
619 *
620 * Note: Must be called under get_online_cpus() and text_mutex.
621 */
622void *__kprobes text_poke_smp(void *addr, const void *opcode, size_t len)
623{
624 struct text_poke_params tpp;
625
626 tpp.addr = addr;
627 tpp.opcode = opcode;
628 tpp.len = len;
629 atomic_set(&stop_machine_first, 1);
630 wrote_text = 0;
631 stop_machine(stop_machine_text_poke, (void *)&tpp, NULL);
632 return addr;
633}
634
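The rendezvous in stop_machine_text_poke() above -- one CPU performs the
write, the rest spin until it is visible -- can be sketched in userspace
with C11 atomics. This is purely illustrative and patches no code;
stop_machine() is what actually keeps all CPUs out of the modified text,
and the seq_cst atomics here stand in for smp_wmb()/smp_mb().

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int first = 1;
static atomic_int wrote = 0;
static int shared_text;		/* stands in for the patched bytes */

static void *poke(void *arg)
{
	if (atomic_fetch_sub(&first, 1) == 1) {
		shared_text = 42;		/* the "text_poke" */
		atomic_store(&wrote, 1);	/* publish, like smp_wmb() */
	} else {
		while (!atomic_load(&wrote))	/* wait, like cpu_relax() loop */
			;
		/* seq_cst load orders the read below, like smp_mb() */
	}
	printf("thread %ld sees %d\n", (long)arg, shared_text);
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (long i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, poke, (void *)i);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	return 0;
}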
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index 5de9f4a9c3fd..b43bbaebe2c0 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -49,6 +49,7 @@
49#include <linux/module.h> 49#include <linux/module.h>
50#include <linux/kdebug.h> 50#include <linux/kdebug.h>
51#include <linux/kallsyms.h> 51#include <linux/kallsyms.h>
52#include <linux/ftrace.h>
52 53
53#include <asm/cacheflush.h> 54#include <asm/cacheflush.h>
54#include <asm/desc.h> 55#include <asm/desc.h>
@@ -106,16 +107,22 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = {
106}; 107};
107const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist); 108const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist);
108 109
109/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/ 110static void __kprobes __synthesize_relative_insn(void *from, void *to, u8 op)
110static void __kprobes set_jmp_op(void *from, void *to)
111{ 111{
112 struct __arch_jmp_op { 112 struct __arch_relative_insn {
113 char op; 113 u8 op;
114 s32 raddr; 114 s32 raddr;
115 } __attribute__((packed)) * jop; 115 } __attribute__((packed)) *insn;
116 jop = (struct __arch_jmp_op *)from; 116
117 jop->raddr = (s32)((long)(to) - ((long)(from) + 5)); 117 insn = (struct __arch_relative_insn *)from;
118 jop->op = RELATIVEJUMP_INSTRUCTION; 118 insn->raddr = (s32)((long)(to) - ((long)(from) + 5));
119 insn->op = op;
120}
121
122/* Insert a jump instruction at address 'from', which jumps to address 'to'.*/
123static void __kprobes synthesize_reljump(void *from, void *to)
124{
125 __synthesize_relative_insn(from, to, RELATIVEJUMP_OPCODE);
119} 126}
120 127
121/* 128/*
@@ -202,7 +209,7 @@ static int recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
202 /* 209 /*
203 * Basically, kp->ainsn.insn has an original instruction. 210 * Basically, kp->ainsn.insn has an original instruction.
204 * However, RIP-relative instruction can not do single-stepping 211 * However, RIP-relative instruction can not do single-stepping
205 * at different place, fix_riprel() tweaks the displacement of 212 * at different place, __copy_instruction() tweaks the displacement of
206 * that instruction. In that case, we can't recover the instruction 213 * that instruction. In that case, we can't recover the instruction
207 * from the kp->ainsn.insn. 214 * from the kp->ainsn.insn.
208 * 215 *
@@ -284,21 +291,37 @@ static int __kprobes is_IF_modifier(kprobe_opcode_t *insn)
284} 291}
285 292
286/* 293/*
287 * Adjust the displacement if the instruction uses the %rip-relative 294 * Copy an instruction and adjust the displacement if the instruction
288 * addressing mode. 295 * uses the %rip-relative addressing mode.
289 * If it does, Return the address of the 32-bit displacement word. 296 * If it does, Return the address of the 32-bit displacement word.
290 * If not, return null. 297 * If not, return null.
291 * Only applicable to 64-bit x86. 298 * Only applicable to 64-bit x86.
292 */ 299 */
293static void __kprobes fix_riprel(struct kprobe *p) 300static int __kprobes __copy_instruction(u8 *dest, u8 *src, int recover)
294{ 301{
295#ifdef CONFIG_X86_64
296 struct insn insn; 302 struct insn insn;
297 kernel_insn_init(&insn, p->ainsn.insn); 303 int ret;
304 kprobe_opcode_t buf[MAX_INSN_SIZE];
298 305
306 kernel_insn_init(&insn, src);
307 if (recover) {
308 insn_get_opcode(&insn);
309 if (insn.opcode.bytes[0] == BREAKPOINT_INSTRUCTION) {
310 ret = recover_probed_instruction(buf,
311 (unsigned long)src);
312 if (ret)
313 return 0;
314 kernel_insn_init(&insn, buf);
315 }
316 }
317 insn_get_length(&insn);
318 memcpy(dest, insn.kaddr, insn.length);
319
320#ifdef CONFIG_X86_64
299 if (insn_rip_relative(&insn)) { 321 if (insn_rip_relative(&insn)) {
300 s64 newdisp; 322 s64 newdisp;
301 u8 *disp; 323 u8 *disp;
324 kernel_insn_init(&insn, dest);
302 insn_get_displacement(&insn); 325 insn_get_displacement(&insn);
303 /* 326 /*
304 * The copied instruction uses the %rip-relative addressing 327 * The copied instruction uses the %rip-relative addressing
@@ -312,20 +335,23 @@ static void __kprobes fix_riprel(struct kprobe *p)
312 * extension of the original signed 32-bit displacement would 335 * extension of the original signed 32-bit displacement would
313 * have given. 336 * have given.
314 */ 337 */
315 newdisp = (u8 *) p->addr + (s64) insn.displacement.value - 338 newdisp = (u8 *) src + (s64) insn.displacement.value -
316 (u8 *) p->ainsn.insn; 339 (u8 *) dest;
317 BUG_ON((s64) (s32) newdisp != newdisp); /* Sanity check. */ 340 BUG_ON((s64) (s32) newdisp != newdisp); /* Sanity check. */
318 disp = (u8 *) p->ainsn.insn + insn_offset_displacement(&insn); 341 disp = (u8 *) dest + insn_offset_displacement(&insn);
319 *(s32 *) disp = (s32) newdisp; 342 *(s32 *) disp = (s32) newdisp;
320 } 343 }
321#endif 344#endif
345 return insn.length;
322} 346}
323 347
324static void __kprobes arch_copy_kprobe(struct kprobe *p) 348static void __kprobes arch_copy_kprobe(struct kprobe *p)
325{ 349{
326 memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); 350 /*
327 351 * Copy an instruction without recovering int3, because it will be
328 fix_riprel(p); 352 * put by another subsystem.
353 */
354 __copy_instruction(p->ainsn.insn, p->addr, 0);
329 355
330 if (can_boost(p->addr)) 356 if (can_boost(p->addr))
331 p->ainsn.boostable = 0; 357 p->ainsn.boostable = 0;
@@ -406,18 +432,6 @@ static void __kprobes restore_btf(void)
406 update_debugctlmsr(current->thread.debugctlmsr); 432 update_debugctlmsr(current->thread.debugctlmsr);
407} 433}
408 434
409static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
410{
411 clear_btf();
412 regs->flags |= X86_EFLAGS_TF;
413 regs->flags &= ~X86_EFLAGS_IF;
414 /* single step inline if the instruction is an int3 */
415 if (p->opcode == BREAKPOINT_INSTRUCTION)
416 regs->ip = (unsigned long)p->addr;
417 else
418 regs->ip = (unsigned long)p->ainsn.insn;
419}
420
421void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri, 435void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
422 struct pt_regs *regs) 436 struct pt_regs *regs)
423{ 437{
@@ -429,20 +443,50 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
429 *sara = (unsigned long) &kretprobe_trampoline; 443 *sara = (unsigned long) &kretprobe_trampoline;
430} 444}
431 445
446#ifdef CONFIG_OPTPROBES
447static int __kprobes setup_detour_execution(struct kprobe *p,
448 struct pt_regs *regs,
449 int reenter);
450#else
451#define setup_detour_execution(p, regs, reenter) (0)
452#endif
453
432static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs, 454static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs,
433 struct kprobe_ctlblk *kcb) 455 struct kprobe_ctlblk *kcb, int reenter)
434{ 456{
457 if (setup_detour_execution(p, regs, reenter))
458 return;
459
435#if !defined(CONFIG_PREEMPT) 460#if !defined(CONFIG_PREEMPT)
436 if (p->ainsn.boostable == 1 && !p->post_handler) { 461 if (p->ainsn.boostable == 1 && !p->post_handler) {
437 /* Boost up -- we can execute copied instructions directly */ 462 /* Boost up -- we can execute copied instructions directly */
438 reset_current_kprobe(); 463 if (!reenter)
464 reset_current_kprobe();
465 /*
466 * Reentering boosted probe doesn't reset current_kprobe,
467 * nor set current_kprobe, because it doesn't use single
468 * stepping.
469 */
439 regs->ip = (unsigned long)p->ainsn.insn; 470 regs->ip = (unsigned long)p->ainsn.insn;
440 preempt_enable_no_resched(); 471 preempt_enable_no_resched();
441 return; 472 return;
442 } 473 }
443#endif 474#endif
444 prepare_singlestep(p, regs); 475 if (reenter) {
445 kcb->kprobe_status = KPROBE_HIT_SS; 476 save_previous_kprobe(kcb);
477 set_current_kprobe(p, regs, kcb);
478 kcb->kprobe_status = KPROBE_REENTER;
479 } else
480 kcb->kprobe_status = KPROBE_HIT_SS;
481 /* Prepare real single stepping */
482 clear_btf();
483 regs->flags |= X86_EFLAGS_TF;
484 regs->flags &= ~X86_EFLAGS_IF;
485 /* single step inline if the instruction is an int3 */
486 if (p->opcode == BREAKPOINT_INSTRUCTION)
487 regs->ip = (unsigned long)p->addr;
488 else
489 regs->ip = (unsigned long)p->ainsn.insn;
446} 490}
447 491
448/* 492/*
@@ -456,11 +500,8 @@ static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
456 switch (kcb->kprobe_status) { 500 switch (kcb->kprobe_status) {
457 case KPROBE_HIT_SSDONE: 501 case KPROBE_HIT_SSDONE:
458 case KPROBE_HIT_ACTIVE: 502 case KPROBE_HIT_ACTIVE:
459 save_previous_kprobe(kcb);
460 set_current_kprobe(p, regs, kcb);
461 kprobes_inc_nmissed_count(p); 503 kprobes_inc_nmissed_count(p);
462 prepare_singlestep(p, regs); 504 setup_singlestep(p, regs, kcb, 1);
463 kcb->kprobe_status = KPROBE_REENTER;
464 break; 505 break;
465 case KPROBE_HIT_SS: 506 case KPROBE_HIT_SS:
466 /* A probe has been hit in the codepath leading up to, or just 507 /* A probe has been hit in the codepath leading up to, or just
@@ -535,13 +576,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
535 * more here. 576 * more here.
536 */ 577 */
537 if (!p->pre_handler || !p->pre_handler(p, regs)) 578 if (!p->pre_handler || !p->pre_handler(p, regs))
538 setup_singlestep(p, regs, kcb); 579 setup_singlestep(p, regs, kcb, 0);
539 return 1; 580 return 1;
540 } 581 }
541 } else if (kprobe_running()) { 582 } else if (kprobe_running()) {
542 p = __get_cpu_var(current_kprobe); 583 p = __get_cpu_var(current_kprobe);
543 if (p->break_handler && p->break_handler(p, regs)) { 584 if (p->break_handler && p->break_handler(p, regs)) {
544 setup_singlestep(p, regs, kcb); 585 setup_singlestep(p, regs, kcb, 0);
545 return 1; 586 return 1;
546 } 587 }
547 } /* else: not a kprobe fault; let the kernel handle it */ 588 } /* else: not a kprobe fault; let the kernel handle it */
@@ -550,6 +591,69 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
550 return 0; 591 return 0;
551} 592}
552 593
594#ifdef CONFIG_X86_64
595#define SAVE_REGS_STRING \
596 /* Skip cs, ip, orig_ax. */ \
597 " subq $24, %rsp\n" \
598 " pushq %rdi\n" \
599 " pushq %rsi\n" \
600 " pushq %rdx\n" \
601 " pushq %rcx\n" \
602 " pushq %rax\n" \
603 " pushq %r8\n" \
604 " pushq %r9\n" \
605 " pushq %r10\n" \
606 " pushq %r11\n" \
607 " pushq %rbx\n" \
608 " pushq %rbp\n" \
609 " pushq %r12\n" \
610 " pushq %r13\n" \
611 " pushq %r14\n" \
612 " pushq %r15\n"
613#define RESTORE_REGS_STRING \
614 " popq %r15\n" \
615 " popq %r14\n" \
616 " popq %r13\n" \
617 " popq %r12\n" \
618 " popq %rbp\n" \
619 " popq %rbx\n" \
620 " popq %r11\n" \
621 " popq %r10\n" \
622 " popq %r9\n" \
623 " popq %r8\n" \
624 " popq %rax\n" \
625 " popq %rcx\n" \
626 " popq %rdx\n" \
627 " popq %rsi\n" \
628 " popq %rdi\n" \
629 /* Skip orig_ax, ip, cs */ \
630 " addq $24, %rsp\n"
631#else
632#define SAVE_REGS_STRING \
633 /* Skip cs, ip, orig_ax and gs. */ \
634 " subl $16, %esp\n" \
635 " pushl %fs\n" \
636 " pushl %ds\n" \
637 " pushl %es\n" \
638 " pushl %eax\n" \
639 " pushl %ebp\n" \
640 " pushl %edi\n" \
641 " pushl %esi\n" \
642 " pushl %edx\n" \
643 " pushl %ecx\n" \
644 " pushl %ebx\n"
645#define RESTORE_REGS_STRING \
646 " popl %ebx\n" \
647 " popl %ecx\n" \
648 " popl %edx\n" \
649 " popl %esi\n" \
650 " popl %edi\n" \
651 " popl %ebp\n" \
652 " popl %eax\n" \
653 /* Skip ds, es, fs, gs, orig_ax, and ip. Note: don't pop cs here*/\
654 " addl $24, %esp\n"
655#endif
656
553/* 657/*
554 * When a retprobed function returns, this code saves registers and 658 * When a retprobed function returns, this code saves registers and
555 * calls trampoline_handler(), which calls the kretprobe's handler. 659 * calls trampoline_handler(), which calls the kretprobe's handler.
@@ -563,65 +667,16 @@ static void __used __kprobes kretprobe_trampoline_holder(void)
563 /* We don't bother saving the ss register */ 667 /* We don't bother saving the ss register */
564 " pushq %rsp\n" 668 " pushq %rsp\n"
565 " pushfq\n" 669 " pushfq\n"
566 /* 670 SAVE_REGS_STRING
567 * Skip cs, ip, orig_ax.
568 * trampoline_handler() will plug in these values
569 */
570 " subq $24, %rsp\n"
571 " pushq %rdi\n"
572 " pushq %rsi\n"
573 " pushq %rdx\n"
574 " pushq %rcx\n"
575 " pushq %rax\n"
576 " pushq %r8\n"
577 " pushq %r9\n"
578 " pushq %r10\n"
579 " pushq %r11\n"
580 " pushq %rbx\n"
581 " pushq %rbp\n"
582 " pushq %r12\n"
583 " pushq %r13\n"
584 " pushq %r14\n"
585 " pushq %r15\n"
586 " movq %rsp, %rdi\n" 671 " movq %rsp, %rdi\n"
587 " call trampoline_handler\n" 672 " call trampoline_handler\n"
588 /* Replace saved sp with true return address. */ 673 /* Replace saved sp with true return address. */
589 " movq %rax, 152(%rsp)\n" 674 " movq %rax, 152(%rsp)\n"
590 " popq %r15\n" 675 RESTORE_REGS_STRING
591 " popq %r14\n"
592 " popq %r13\n"
593 " popq %r12\n"
594 " popq %rbp\n"
595 " popq %rbx\n"
596 " popq %r11\n"
597 " popq %r10\n"
598 " popq %r9\n"
599 " popq %r8\n"
600 " popq %rax\n"
601 " popq %rcx\n"
602 " popq %rdx\n"
603 " popq %rsi\n"
604 " popq %rdi\n"
605 /* Skip orig_ax, ip, cs */
606 " addq $24, %rsp\n"
607 " popfq\n" 676 " popfq\n"
608#else 677#else
609 " pushf\n" 678 " pushf\n"
610 /* 679 SAVE_REGS_STRING
611 * Skip cs, ip, orig_ax and gs.
612 * trampoline_handler() will plug in these values
613 */
614 " subl $16, %esp\n"
615 " pushl %fs\n"
616 " pushl %es\n"
617 " pushl %ds\n"
618 " pushl %eax\n"
619 " pushl %ebp\n"
620 " pushl %edi\n"
621 " pushl %esi\n"
622 " pushl %edx\n"
623 " pushl %ecx\n"
624 " pushl %ebx\n"
625 " movl %esp, %eax\n" 680 " movl %esp, %eax\n"
626 " call trampoline_handler\n" 681 " call trampoline_handler\n"
627 /* Move flags to cs */ 682 /* Move flags to cs */
@@ -629,15 +684,7 @@ static void __used __kprobes kretprobe_trampoline_holder(void)
629 " movl %edx, 52(%esp)\n" 684 " movl %edx, 52(%esp)\n"
630 /* Replace saved flags with true return address. */ 685 /* Replace saved flags with true return address. */
631 " movl %eax, 56(%esp)\n" 686 " movl %eax, 56(%esp)\n"
632 " popl %ebx\n" 687 RESTORE_REGS_STRING
633 " popl %ecx\n"
634 " popl %edx\n"
635 " popl %esi\n"
636 " popl %edi\n"
637 " popl %ebp\n"
638 " popl %eax\n"
639 /* Skip ds, es, fs, gs, orig_ax and ip */
640 " addl $24, %esp\n"
641 " popf\n" 688 " popf\n"
642#endif 689#endif
643 " ret\n"); 690 " ret\n");
@@ -805,8 +852,8 @@ static void __kprobes resume_execution(struct kprobe *p,
805 * These instructions can be executed directly if it 852 * These instructions can be executed directly if it
806 * jumps back to correct address. 853 * jumps back to correct address.
807 */ 854 */
808 set_jmp_op((void *)regs->ip, 855 synthesize_reljump((void *)regs->ip,
809 (void *)orig_ip + (regs->ip - copy_ip)); 856 (void *)orig_ip + (regs->ip - copy_ip));
810 p->ainsn.boostable = 1; 857 p->ainsn.boostable = 1;
811 } else { 858 } else {
812 p->ainsn.boostable = -1; 859 p->ainsn.boostable = -1;
@@ -1033,6 +1080,358 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
1033 return 0; 1080 return 0;
1034} 1081}
1035 1082
1083
1084#ifdef CONFIG_OPTPROBES
1085
1086/* Insert a call instruction at address 'from', which calls address 'to'.*/
1087static void __kprobes synthesize_relcall(void *from, void *to)
1088{
1089 __synthesize_relative_insn(from, to, RELATIVECALL_OPCODE);
1090}
1091
1092/* Insert a move instruction which sets a pointer to eax/rdi (1st arg). */
1093static void __kprobes synthesize_set_arg1(kprobe_opcode_t *addr,
1094 unsigned long val)
1095{
1096#ifdef CONFIG_X86_64
1097 *addr++ = 0x48;
1098 *addr++ = 0xbf;
1099#else
1100 *addr++ = 0xb8;
1101#endif
1102 *(unsigned long *)addr = val;
1103}
1104
1105void __kprobes kprobes_optinsn_template_holder(void)
1106{
1107 asm volatile (
1108 ".global optprobe_template_entry\n"
1109 "optprobe_template_entry: \n"
1110#ifdef CONFIG_X86_64
1111 /* We don't bother saving the ss register */
1112 " pushq %rsp\n"
1113 " pushfq\n"
1114 SAVE_REGS_STRING
1115 " movq %rsp, %rsi\n"
1116 ".global optprobe_template_val\n"
1117 "optprobe_template_val: \n"
1118 ASM_NOP5
1119 ASM_NOP5
1120 ".global optprobe_template_call\n"
1121 "optprobe_template_call: \n"
1122 ASM_NOP5
1123 /* Move flags to rsp */
1124 " movq 144(%rsp), %rdx\n"
1125 " movq %rdx, 152(%rsp)\n"
1126 RESTORE_REGS_STRING
1127 /* Skip flags entry */
1128 " addq $8, %rsp\n"
1129 " popfq\n"
1130#else /* CONFIG_X86_32 */
1131 " pushf\n"
1132 SAVE_REGS_STRING
1133 " movl %esp, %edx\n"
1134 ".global optprobe_template_val\n"
1135 "optprobe_template_val: \n"
1136 ASM_NOP5
1137 ".global optprobe_template_call\n"
1138 "optprobe_template_call: \n"
1139 ASM_NOP5
1140 RESTORE_REGS_STRING
1141 " addl $4, %esp\n" /* skip cs */
1142 " popf\n"
1143#endif
1144 ".global optprobe_template_end\n"
1145 "optprobe_template_end: \n");
1146}
1147
1148#define TMPL_MOVE_IDX \
1149 ((long)&optprobe_template_val - (long)&optprobe_template_entry)
1150#define TMPL_CALL_IDX \
1151 ((long)&optprobe_template_call - (long)&optprobe_template_entry)
1152#define TMPL_END_IDX \
1153 ((long)&optprobe_template_end - (long)&optprobe_template_entry)
1154
1155#define INT3_SIZE sizeof(kprobe_opcode_t)
1156
1157/* Optimized kprobe call back function: called from optinsn */
1158static void __kprobes optimized_callback(struct optimized_kprobe *op,
1159 struct pt_regs *regs)
1160{
1161 struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
1162
1163 preempt_disable();
1164 if (kprobe_running()) {
1165 kprobes_inc_nmissed_count(&op->kp);
1166 } else {
1167 /* Save skipped registers */
1168#ifdef CONFIG_X86_64
1169 regs->cs = __KERNEL_CS;
1170#else
1171 regs->cs = __KERNEL_CS | get_kernel_rpl();
1172 regs->gs = 0;
1173#endif
1174 regs->ip = (unsigned long)op->kp.addr + INT3_SIZE;
1175 regs->orig_ax = ~0UL;
1176
1177 __get_cpu_var(current_kprobe) = &op->kp;
1178 kcb->kprobe_status = KPROBE_HIT_ACTIVE;
1179 opt_pre_handler(&op->kp, regs);
1180 __get_cpu_var(current_kprobe) = NULL;
1181 }
1182 preempt_enable_no_resched();
1183}
1184
1185static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
1186{
1187 int len = 0, ret;
1188
1189 while (len < RELATIVEJUMP_SIZE) {
1190 ret = __copy_instruction(dest + len, src + len, 1);
1191 if (!ret || !can_boost(dest + len))
1192 return -EINVAL;
1193 len += ret;
1194 }
1195 /* Check whether the address range is reserved */
1196 if (ftrace_text_reserved(src, src + len - 1) ||
1197 alternatives_text_reserved(src, src + len - 1))
1198 return -EBUSY;
1199
1200 return len;
1201}
1202
1203/* Check whether insn is indirect jump */
1204static int __kprobes insn_is_indirect_jump(struct insn *insn)
1205{
1206 return ((insn->opcode.bytes[0] == 0xff &&
1207 (X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */
1208 insn->opcode.bytes[0] == 0xea); /* Segment based jump */
1209}
1210
1211/* Check whether insn jumps into specified address range */
1212static int insn_jump_into_range(struct insn *insn, unsigned long start, int len)
1213{
1214 unsigned long target = 0;
1215
1216 switch (insn->opcode.bytes[0]) {
1217 case 0xe0: /* loopne */
1218 case 0xe1: /* loope */
1219 case 0xe2: /* loop */
1220 case 0xe3: /* jcxz */
1221 case 0xe9: /* near relative jump */
1222 case 0xeb: /* short relative jump */
1223 break;
1224 case 0x0f:
1225 if ((insn->opcode.bytes[1] & 0xf0) == 0x80) /* jcc near */
1226 break;
1227 return 0;
1228 default:
1229 if ((insn->opcode.bytes[0] & 0xf0) == 0x70) /* jcc short */
1230 break;
1231 return 0;
1232 }
1233 target = (unsigned long)insn->next_byte + insn->immediate.value;
1234
1235 return (start <= target && target <= start + len);
1236}
1237
1238/* Decode the whole function to ensure no instruction jumps into the target */
1239static int __kprobes can_optimize(unsigned long paddr)
1240{
1241 int ret;
1242 unsigned long addr, size = 0, offset = 0;
1243 struct insn insn;
1244 kprobe_opcode_t buf[MAX_INSN_SIZE];
1245 /* Dummy buffers for lookup_symbol_attrs */
1246 static char __dummy_buf[KSYM_NAME_LEN];
1247
1248 /* Lookup symbol including addr */
1249 if (!kallsyms_lookup(paddr, &size, &offset, NULL, __dummy_buf))
1250 return 0;
1251
1252 /* Check there is enough space for a relative jump. */
1253 if (size - offset < RELATIVEJUMP_SIZE)
1254 return 0;
1255
1256 /* Decode instructions */
1257 addr = paddr - offset;
1258 while (addr < paddr - offset + size) { /* Decode until function end */
1259 if (search_exception_tables(addr))
1260 /*
1261 * Since some fixup code will jump into this function,
1262 * we can't optimize kprobes in this function.
1263 */
1264 return 0;
1265 kernel_insn_init(&insn, (void *)addr);
1266 insn_get_opcode(&insn);
1267 if (insn.opcode.bytes[0] == BREAKPOINT_INSTRUCTION) {
1268 ret = recover_probed_instruction(buf, addr);
1269 if (ret)
1270 return 0;
1271 kernel_insn_init(&insn, buf);
1272 }
1273 insn_get_length(&insn);
1274 /* Recover address */
1275 insn.kaddr = (void *)addr;
1276 insn.next_byte = (void *)(addr + insn.length);
1277 /* Check that no instruction jumps into the target */
1278 if (insn_is_indirect_jump(&insn) ||
1279 insn_jump_into_range(&insn, paddr + INT3_SIZE,
1280 RELATIVE_ADDR_SIZE))
1281 return 0;
1282 addr += insn.length;
1283 }
1284
1285 return 1;
1286}
1287
1288/* Check optimized_kprobe can actually be optimized. */
1289int __kprobes arch_check_optimized_kprobe(struct optimized_kprobe *op)
1290{
1291 int i;
1292 struct kprobe *p;
1293
1294 for (i = 1; i < op->optinsn.size; i++) {
1295 p = get_kprobe(op->kp.addr + i);
1296 if (p && !kprobe_disabled(p))
1297 return -EEXIST;
1298 }
1299
1300 return 0;
1301}
1302
1303/* Check the addr is within the optimized instructions. */
1304int __kprobes arch_within_optimized_kprobe(struct optimized_kprobe *op,
1305 unsigned long addr)
1306{
1307 return ((unsigned long)op->kp.addr <= addr &&
1308 (unsigned long)op->kp.addr + op->optinsn.size > addr);
1309}
1310
1311/* Free optimized instruction slot */
1312static __kprobes
1313void __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
1314{
1315 if (op->optinsn.insn) {
1316 free_optinsn_slot(op->optinsn.insn, dirty);
1317 op->optinsn.insn = NULL;
1318 op->optinsn.size = 0;
1319 }
1320}
1321
1322void __kprobes arch_remove_optimized_kprobe(struct optimized_kprobe *op)
1323{
1324 __arch_remove_optimized_kprobe(op, 1);
1325}
1326
1327/*
1328 * Copy replacing target instructions
1329 * Target instructions MUST be relocatable (checked inside)
1330 */
1331int __kprobes arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
1332{
1333 u8 *buf;
1334 int ret;
1335 long rel;
1336
1337 if (!can_optimize((unsigned long)op->kp.addr))
1338 return -EILSEQ;
1339
1340 op->optinsn.insn = get_optinsn_slot();
1341 if (!op->optinsn.insn)
1342 return -ENOMEM;
1343
1344 /*
1345 * Verify if the address gap is in 2GB range, because this uses
1346 * a relative jump.
1347 */
1348 rel = (long)op->optinsn.insn - (long)op->kp.addr + RELATIVEJUMP_SIZE;
1349 if (abs(rel) > 0x7fffffff)
1350 return -ERANGE;
1351
1352 buf = (u8 *)op->optinsn.insn;
1353
1354 /* Copy instructions into the out-of-line buffer */
1355 ret = copy_optimized_instructions(buf + TMPL_END_IDX, op->kp.addr);
1356 if (ret < 0) {
1357 __arch_remove_optimized_kprobe(op, 0);
1358 return ret;
1359 }
1360 op->optinsn.size = ret;
1361
1362 /* Copy arch-dep-instance from template */
1363 memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
1364
1365 /* Set probe information */
1366 synthesize_set_arg1(buf + TMPL_MOVE_IDX, (unsigned long)op);
1367
1368 /* Set probe function call */
1369 synthesize_relcall(buf + TMPL_CALL_IDX, optimized_callback);
1370
1371 /* Set returning jmp instruction at the tail of out-of-line buffer */
1372 synthesize_reljump(buf + TMPL_END_IDX + op->optinsn.size,
1373 (u8 *)op->kp.addr + op->optinsn.size);
1374
1375 flush_icache_range((unsigned long) buf,
1376 (unsigned long) buf + TMPL_END_IDX +
1377 op->optinsn.size + RELATIVEJUMP_SIZE);
1378 return 0;
1379}
1380
1381/* Replace a breakpoint (int3) with a relative jump. */
1382int __kprobes arch_optimize_kprobe(struct optimized_kprobe *op)
1383{
1384 unsigned char jmp_code[RELATIVEJUMP_SIZE];
1385 s32 rel = (s32)((long)op->optinsn.insn -
1386 ((long)op->kp.addr + RELATIVEJUMP_SIZE));
1387
1388 /* Backup instructions which will be replaced by jump address */
1389 memcpy(op->optinsn.copied_insn, op->kp.addr + INT3_SIZE,
1390 RELATIVE_ADDR_SIZE);
1391
1392 jmp_code[0] = RELATIVEJUMP_OPCODE;
1393 *(s32 *)(&jmp_code[1]) = rel;
1394
1395 /*
1396 * text_poke_smp doesn't support NMI/MCE code modifying.
1397 * However, since kprobes itself also doesn't support NMI/MCE
1398 * code probing, it's not a problem.
1399 */
1400 text_poke_smp(op->kp.addr, jmp_code, RELATIVEJUMP_SIZE);
1401 return 0;
1402}
1403
1404/* Replace a relative jump with a breakpoint (int3). */
1405void __kprobes arch_unoptimize_kprobe(struct optimized_kprobe *op)
1406{
1407 u8 buf[RELATIVEJUMP_SIZE];
1408
1409 /* Set int3 to first byte for kprobes */
1410 buf[0] = BREAKPOINT_INSTRUCTION;
1411 memcpy(buf + 1, op->optinsn.copied_insn, RELATIVE_ADDR_SIZE);
1412 text_poke_smp(op->kp.addr, buf, RELATIVEJUMP_SIZE);
1413}
1414
1415static int __kprobes setup_detour_execution(struct kprobe *p,
1416 struct pt_regs *regs,
1417 int reenter)
1418{
1419 struct optimized_kprobe *op;
1420
1421 if (p->flags & KPROBE_FLAG_OPTIMIZED) {
1422 /* This kprobe is really able to run optimized path. */
1423 op = container_of(p, struct optimized_kprobe, kp);
1424 /* Detour through copied instructions */
1425 regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
1426 if (!reenter)
1427 reset_current_kprobe();
1428 preempt_enable_no_resched();
1429 return 1;
1430 }
1431 return 0;
1432}
1433#endif
1434
1036int __init arch_init_kprobes(void) 1435int __init arch_init_kprobes(void)
1037{ 1436{
1038 return 0; 1437 return 0;
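To make the template plumbing above concrete, here is a userspace sketch
of the byte sequence synthesize_set_arg1() emits: on x86-64 a REX.W
movabs into %rdi, on i386 a mov into %eax (the kernel's regparm(3) ABI
passes the first argument there). The value is arbitrary; this is an
illustration of the encoding, not kernel code.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static size_t set_arg1(uint8_t *buf, unsigned long val)
{
	size_t i = 0;

#ifdef __x86_64__
	buf[i++] = 0x48;			/* REX.W prefix */
	buf[i++] = 0xbf;			/* mov imm64, %rdi */
#else
	buf[i++] = 0xb8;			/* mov imm32, %eax */
#endif
	memcpy(buf + i, &val, sizeof(val));	/* immediate operand */
	return i + sizeof(val);
}

int main(void)
{
	uint8_t insn[10];
	size_t len = set_arg1(insn, 0xdeadbeefUL);

	for (size_t i = 0; i < len; i++)
		printf("%02x ", insn[i]);
	printf("\n");
	return 0;
}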
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 1b672f74a32f..e7d1b2e0070d 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -122,6 +122,11 @@ struct kprobe {
122/* Kprobe status flags */ 122/* Kprobe status flags */
123#define KPROBE_FLAG_GONE 1 /* breakpoint has already gone */ 123#define KPROBE_FLAG_GONE 1 /* breakpoint has already gone */
124#define KPROBE_FLAG_DISABLED 2 /* probe is temporarily disabled */ 124#define KPROBE_FLAG_DISABLED 2 /* probe is temporarily disabled */
125#define KPROBE_FLAG_OPTIMIZED 4 /*
126 * probe is really optimized.
127 * NOTE:
128 * this flag is only for optimized_kprobe.
129 */
125 130
126/* Has this kprobe gone ? */ 131/* Has this kprobe gone ? */
127static inline int kprobe_gone(struct kprobe *p) 132static inline int kprobe_gone(struct kprobe *p)
@@ -134,6 +139,12 @@ static inline int kprobe_disabled(struct kprobe *p)
134{ 139{
135 return p->flags & (KPROBE_FLAG_DISABLED | KPROBE_FLAG_GONE); 140 return p->flags & (KPROBE_FLAG_DISABLED | KPROBE_FLAG_GONE);
136} 141}
142
143/* Is this kprobe really running optimized path ? */
144static inline int kprobe_optimized(struct kprobe *p)
145{
146 return p->flags & KPROBE_FLAG_OPTIMIZED;
147}
137/* 148/*
138 * Special probe type that uses setjmp-longjmp type tricks to resume 149 * Special probe type that uses setjmp-longjmp type tricks to resume
139 * execution at a specified entry with a matching prototype corresponding 150 * execution at a specified entry with a matching prototype corresponding
@@ -249,6 +260,39 @@ extern kprobe_opcode_t *get_insn_slot(void);
249extern void free_insn_slot(kprobe_opcode_t *slot, int dirty); 260extern void free_insn_slot(kprobe_opcode_t *slot, int dirty);
250extern void kprobes_inc_nmissed_count(struct kprobe *p); 261extern void kprobes_inc_nmissed_count(struct kprobe *p);
251 262
263#ifdef CONFIG_OPTPROBES
264/*
265 * Internal structure for direct jump optimized probe
266 */
267struct optimized_kprobe {
268 struct kprobe kp;
269 struct list_head list; /* list for optimizing queue */
270 struct arch_optimized_insn optinsn;
271};
272
273/* Architecture dependent functions for direct jump optimization */
274extern int arch_prepared_optinsn(struct arch_optimized_insn *optinsn);
275extern int arch_check_optimized_kprobe(struct optimized_kprobe *op);
276extern int arch_prepare_optimized_kprobe(struct optimized_kprobe *op);
277extern void arch_remove_optimized_kprobe(struct optimized_kprobe *op);
278extern int arch_optimize_kprobe(struct optimized_kprobe *op);
279extern void arch_unoptimize_kprobe(struct optimized_kprobe *op);
280extern kprobe_opcode_t *get_optinsn_slot(void);
281extern void free_optinsn_slot(kprobe_opcode_t *slot, int dirty);
282extern int arch_within_optimized_kprobe(struct optimized_kprobe *op,
283 unsigned long addr);
284
285extern void opt_pre_handler(struct kprobe *p, struct pt_regs *regs);
286
287#ifdef CONFIG_SYSCTL
288extern int sysctl_kprobes_optimization;
289extern int proc_kprobes_optimization_handler(struct ctl_table *table,
290 int write, void __user *buffer,
291 size_t *length, loff_t *ppos);
292#endif
293
294#endif /* CONFIG_OPTPROBES */
295
252/* Get the kprobe at this addr (if any) - called with preemption disabled */ 296/* Get the kprobe at this addr (if any) - called with preemption disabled */
253struct kprobe *get_kprobe(void *addr); 297struct kprobe *get_kprobe(void *addr);
254void kretprobe_hash_lock(struct task_struct *tsk, 298void kretprobe_hash_lock(struct task_struct *tsk,
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index ccec774c716d..fa034d29cf73 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -42,9 +42,11 @@
42#include <linux/freezer.h> 42#include <linux/freezer.h>
43#include <linux/seq_file.h> 43#include <linux/seq_file.h>
44#include <linux/debugfs.h> 44#include <linux/debugfs.h>
45#include <linux/sysctl.h>
45#include <linux/kdebug.h> 46#include <linux/kdebug.h>
46#include <linux/memory.h> 47#include <linux/memory.h>
47#include <linux/ftrace.h> 48#include <linux/ftrace.h>
49#include <linux/cpu.h>
48 50
49#include <asm-generic/sections.h> 51#include <asm-generic/sections.h>
50#include <asm/cacheflush.h> 52#include <asm/cacheflush.h>
@@ -105,57 +107,74 @@ static struct kprobe_blackpoint kprobe_blacklist[] = {
105 * stepping on the instruction on a vmalloced/kmalloced/data page 107 * stepping on the instruction on a vmalloced/kmalloced/data page
106 * is a recipe for disaster 108 * is a recipe for disaster
107 */ 109 */
108#define INSNS_PER_PAGE (PAGE_SIZE/(MAX_INSN_SIZE * sizeof(kprobe_opcode_t)))
109
110struct kprobe_insn_page { 110struct kprobe_insn_page {
111 struct list_head list; 111 struct list_head list;
112 kprobe_opcode_t *insns; /* Page of instruction slots */ 112 kprobe_opcode_t *insns; /* Page of instruction slots */
113 char slot_used[INSNS_PER_PAGE];
114 int nused; 113 int nused;
115 int ngarbage; 114 int ngarbage;
115 char slot_used[];
116};
117
118#define KPROBE_INSN_PAGE_SIZE(slots) \
119 (offsetof(struct kprobe_insn_page, slot_used) + \
120 (sizeof(char) * (slots)))
121
122struct kprobe_insn_cache {
123 struct list_head pages; /* list of kprobe_insn_page */
124 size_t insn_size; /* size of instruction slot */
125 int nr_garbage;
116}; 126};
117 127
128static int slots_per_page(struct kprobe_insn_cache *c)
129{
130 return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
131}
132
118enum kprobe_slot_state { 133enum kprobe_slot_state {
119 SLOT_CLEAN = 0, 134 SLOT_CLEAN = 0,
120 SLOT_DIRTY = 1, 135 SLOT_DIRTY = 1,
121 SLOT_USED = 2, 136 SLOT_USED = 2,
122}; 137};
123 138
124static DEFINE_MUTEX(kprobe_insn_mutex); /* Protects kprobe_insn_pages */ 139static DEFINE_MUTEX(kprobe_insn_mutex); /* Protects kprobe_insn_slots */
125static LIST_HEAD(kprobe_insn_pages); 140static struct kprobe_insn_cache kprobe_insn_slots = {
126static int kprobe_garbage_slots; 141 .pages = LIST_HEAD_INIT(kprobe_insn_slots.pages),
127static int collect_garbage_slots(void); 142 .insn_size = MAX_INSN_SIZE,
143 .nr_garbage = 0,
144};
145static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c);
128 146
129/** 147/**
130 * __get_insn_slot() - Find a slot on an executable page for an instruction. 148 * __get_insn_slot() - Find a slot on an executable page for an instruction.
131 * We allocate an executable page if there's no room on existing ones. 149 * We allocate an executable page if there's no room on existing ones.
132 */ 150 */
133static kprobe_opcode_t __kprobes *__get_insn_slot(void) 151static kprobe_opcode_t __kprobes *__get_insn_slot(struct kprobe_insn_cache *c)
134{ 152{
135 struct kprobe_insn_page *kip; 153 struct kprobe_insn_page *kip;
136 154
137 retry: 155 retry:
138 list_for_each_entry(kip, &kprobe_insn_pages, list) { 156 list_for_each_entry(kip, &c->pages, list) {
139 if (kip->nused < INSNS_PER_PAGE) { 157 if (kip->nused < slots_per_page(c)) {
140 int i; 158 int i;
141 for (i = 0; i < INSNS_PER_PAGE; i++) { 159 for (i = 0; i < slots_per_page(c); i++) {
142 if (kip->slot_used[i] == SLOT_CLEAN) { 160 if (kip->slot_used[i] == SLOT_CLEAN) {
143 kip->slot_used[i] = SLOT_USED; 161 kip->slot_used[i] = SLOT_USED;
144 kip->nused++; 162 kip->nused++;
145 return kip->insns + (i * MAX_INSN_SIZE); 163 return kip->insns + (i * c->insn_size);
146 } 164 }
147 } 165 }
148 /* Surprise! No unused slots. Fix kip->nused. */ 166 /* kip->nused is broken. Fix it. */
149 kip->nused = INSNS_PER_PAGE; 167 kip->nused = slots_per_page(c);
168 WARN_ON(1);
150 } 169 }
151 } 170 }
152 171
 153	 /* If there are any garbage slots, collect them and try again. */ 172	 /* If there are any garbage slots, collect them and try again. */
154 if (kprobe_garbage_slots && collect_garbage_slots() == 0) { 173 if (c->nr_garbage && collect_garbage_slots(c) == 0)
155 goto retry; 174 goto retry;
156 } 175
157 /* All out of space. Need to allocate a new page. Use slot 0. */ 176 /* All out of space. Need to allocate a new page. */
158 kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL); 177 kip = kmalloc(KPROBE_INSN_PAGE_SIZE(slots_per_page(c)), GFP_KERNEL);
159 if (!kip) 178 if (!kip)
160 return NULL; 179 return NULL;
161 180
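
The restructured kprobe_insn_page above relies on a C99 flexible array member: slot_used[] moves to the struct's tail so each cache can size its per-slot bookkeeping to its own slot count in a single allocation. A minimal user-space sketch of that idiom (names are illustrative, not the kernel's):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    struct insn_page {
        int nused;
        char slot_used[];   /* flexible array member, sized at alloc time */
    };

    #define INSN_PAGE_SIZE(slots) \
        (offsetof(struct insn_page, slot_used) + sizeof(char) * (slots))

    static struct insn_page *alloc_insn_page(size_t slots)
    {
        /* one allocation covers the header plus per-slot state bytes */
        struct insn_page *kip = malloc(INSN_PAGE_SIZE(slots));

        if (kip) {
            kip->nused = 0;
            memset(kip->slot_used, 0, slots);   /* all slots clean */
        }
        return kip;
    }
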
@@ -170,20 +189,23 @@ static kprobe_opcode_t __kprobes *__get_insn_slot(void)
170 return NULL; 189 return NULL;
171 } 190 }
172 INIT_LIST_HEAD(&kip->list); 191 INIT_LIST_HEAD(&kip->list);
173 list_add(&kip->list, &kprobe_insn_pages); 192 memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
174 memset(kip->slot_used, SLOT_CLEAN, INSNS_PER_PAGE);
175 kip->slot_used[0] = SLOT_USED; 193 kip->slot_used[0] = SLOT_USED;
176 kip->nused = 1; 194 kip->nused = 1;
177 kip->ngarbage = 0; 195 kip->ngarbage = 0;
196 list_add(&kip->list, &c->pages);
178 return kip->insns; 197 return kip->insns;
179} 198}
180 199
200
181kprobe_opcode_t __kprobes *get_insn_slot(void) 201kprobe_opcode_t __kprobes *get_insn_slot(void)
182{ 202{
183 kprobe_opcode_t *ret; 203 kprobe_opcode_t *ret = NULL;
204
184 mutex_lock(&kprobe_insn_mutex); 205 mutex_lock(&kprobe_insn_mutex);
185 ret = __get_insn_slot(); 206 ret = __get_insn_slot(&kprobe_insn_slots);
186 mutex_unlock(&kprobe_insn_mutex); 207 mutex_unlock(&kprobe_insn_mutex);
208
187 return ret; 209 return ret;
188} 210}
189 211
@@ -199,7 +221,7 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
199 * so as not to have to set it up again the 221 * so as not to have to set it up again the
200 * next time somebody inserts a probe. 222 * next time somebody inserts a probe.
201 */ 223 */
202 if (!list_is_singular(&kprobe_insn_pages)) { 224 if (!list_is_singular(&kip->list)) {
203 list_del(&kip->list); 225 list_del(&kip->list);
204 module_free(NULL, kip->insns); 226 module_free(NULL, kip->insns);
205 kfree(kip); 227 kfree(kip);
@@ -209,51 +231,84 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx)
209 return 0; 231 return 0;
210} 232}
211 233
212static int __kprobes collect_garbage_slots(void) 234static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c)
213{ 235{
214 struct kprobe_insn_page *kip, *next; 236 struct kprobe_insn_page *kip, *next;
215 237
 216	 /* Ensure no one is interrupted while on a garbage slot */ 238	 /* Ensure no one is interrupted while on a garbage slot */
217 synchronize_sched(); 239 synchronize_sched();
218 240
219 list_for_each_entry_safe(kip, next, &kprobe_insn_pages, list) { 241 list_for_each_entry_safe(kip, next, &c->pages, list) {
220 int i; 242 int i;
221 if (kip->ngarbage == 0) 243 if (kip->ngarbage == 0)
222 continue; 244 continue;
223 kip->ngarbage = 0; /* we will collect all garbages */ 245 kip->ngarbage = 0; /* we will collect all garbages */
224 for (i = 0; i < INSNS_PER_PAGE; i++) { 246 for (i = 0; i < slots_per_page(c); i++) {
225 if (kip->slot_used[i] == SLOT_DIRTY && 247 if (kip->slot_used[i] == SLOT_DIRTY &&
226 collect_one_slot(kip, i)) 248 collect_one_slot(kip, i))
227 break; 249 break;
228 } 250 }
229 } 251 }
230 kprobe_garbage_slots = 0; 252 c->nr_garbage = 0;
231 return 0; 253 return 0;
232} 254}
233 255
234void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty) 256static void __kprobes __free_insn_slot(struct kprobe_insn_cache *c,
257 kprobe_opcode_t *slot, int dirty)
235{ 258{
236 struct kprobe_insn_page *kip; 259 struct kprobe_insn_page *kip;
237 260
238 mutex_lock(&kprobe_insn_mutex); 261 list_for_each_entry(kip, &c->pages, list) {
239 list_for_each_entry(kip, &kprobe_insn_pages, list) { 262 long idx = ((long)slot - (long)kip->insns) / c->insn_size;
240 if (kip->insns <= slot && 263 if (idx >= 0 && idx < slots_per_page(c)) {
241 slot < kip->insns + (INSNS_PER_PAGE * MAX_INSN_SIZE)) { 264 WARN_ON(kip->slot_used[idx] != SLOT_USED);
242 int i = (slot - kip->insns) / MAX_INSN_SIZE;
243 if (dirty) { 265 if (dirty) {
244 kip->slot_used[i] = SLOT_DIRTY; 266 kip->slot_used[idx] = SLOT_DIRTY;
245 kip->ngarbage++; 267 kip->ngarbage++;
268 if (++c->nr_garbage > slots_per_page(c))
269 collect_garbage_slots(c);
246 } else 270 } else
247 collect_one_slot(kip, i); 271 collect_one_slot(kip, idx);
248 break; 272 return;
249 } 273 }
250 } 274 }
275 /* Could not free this slot. */
276 WARN_ON(1);
277}
251 278
252 if (dirty && ++kprobe_garbage_slots > INSNS_PER_PAGE) 279void __kprobes free_insn_slot(kprobe_opcode_t * slot, int dirty)
253 collect_garbage_slots(); 280{
254 281 mutex_lock(&kprobe_insn_mutex);
282 __free_insn_slot(&kprobe_insn_slots, slot, dirty);
255 mutex_unlock(&kprobe_insn_mutex); 283 mutex_unlock(&kprobe_insn_mutex);
256} 284}
285#ifdef CONFIG_OPTPROBES
286/* For optimized_kprobe buffer */
287static DEFINE_MUTEX(kprobe_optinsn_mutex); /* Protects kprobe_optinsn_slots */
288static struct kprobe_insn_cache kprobe_optinsn_slots = {
289 .pages = LIST_HEAD_INIT(kprobe_optinsn_slots.pages),
290 /* .insn_size is initialized later */
291 .nr_garbage = 0,
292};
293/* Get a slot for optimized_kprobe buffer */
294kprobe_opcode_t __kprobes *get_optinsn_slot(void)
295{
296 kprobe_opcode_t *ret = NULL;
297
298 mutex_lock(&kprobe_optinsn_mutex);
299 ret = __get_insn_slot(&kprobe_optinsn_slots);
300 mutex_unlock(&kprobe_optinsn_mutex);
301
302 return ret;
303}
304
305void __kprobes free_optinsn_slot(kprobe_opcode_t * slot, int dirty)
306{
307 mutex_lock(&kprobe_optinsn_mutex);
308 __free_insn_slot(&kprobe_optinsn_slots, slot, dirty);
309 mutex_unlock(&kprobe_optinsn_mutex);
310}
311#endif
257#endif 312#endif
258 313
259/* We have preemption disabled.. so it is safe to use __ versions */ 314/* We have preemption disabled.. so it is safe to use __ versions */
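
In __free_insn_slot() above, a slot pointer is mapped back to its index by plain pointer arithmetic, and a bounds check replaces the old address-range comparison; an out-of-range index simply means the slot lives on another page in the list. A self-contained sketch of that check (the constants are illustrative, not the kernel's):

    #define SLOTS_PER_PAGE  64
    #define INSN_SIZE       16  /* opcode units per slot */

    /* Return the slot index of 'slot' within the page at 'base',
     * or -1 if the pointer does not fall inside this page. */
    static long slot_index(const char *base, const char *slot)
    {
        long idx = (slot - base) / INSN_SIZE;

        return (idx >= 0 && idx < SLOTS_PER_PAGE) ? idx : -1;
    }
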
@@ -284,23 +339,401 @@ struct kprobe __kprobes *get_kprobe(void *addr)
284 if (p->addr == addr) 339 if (p->addr == addr)
285 return p; 340 return p;
286 } 341 }
342
343 return NULL;
344}
345
346static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs);
347
348/* Return true if the kprobe is an aggregator */
349static inline int kprobe_aggrprobe(struct kprobe *p)
350{
351 return p->pre_handler == aggr_pre_handler;
352}
353
354/*
355 * Keep all fields in the kprobe consistent
356 */
357static inline void copy_kprobe(struct kprobe *old_p, struct kprobe *p)
358{
359 memcpy(&p->opcode, &old_p->opcode, sizeof(kprobe_opcode_t));
360 memcpy(&p->ainsn, &old_p->ainsn, sizeof(struct arch_specific_insn));
361}
362
363#ifdef CONFIG_OPTPROBES
364/* NOTE: change this value only with kprobe_mutex held */
365static bool kprobes_allow_optimization;
366
367/*
 368 * Call all pre_handlers on the list, ignoring their return values.
369 * This must be called from arch-dep optimized caller.
370 */
371void __kprobes opt_pre_handler(struct kprobe *p, struct pt_regs *regs)
372{
373 struct kprobe *kp;
374
375 list_for_each_entry_rcu(kp, &p->list, list) {
376 if (kp->pre_handler && likely(!kprobe_disabled(kp))) {
377 set_kprobe_instance(kp);
378 kp->pre_handler(kp, regs);
379 }
380 reset_kprobe_instance();
381 }
382}
383
384/* Return true(!0) if the kprobe is ready for optimization. */
385static inline int kprobe_optready(struct kprobe *p)
386{
387 struct optimized_kprobe *op;
388
389 if (kprobe_aggrprobe(p)) {
390 op = container_of(p, struct optimized_kprobe, kp);
391 return arch_prepared_optinsn(&op->optinsn);
392 }
393
394 return 0;
395}
396
397/*
398 * Return an optimized kprobe whose optimizing code replaces
 399 * instructions that include addr (excluding the breakpoint itself).
400 */
401struct kprobe *__kprobes get_optimized_kprobe(unsigned long addr)
402{
403 int i;
404 struct kprobe *p = NULL;
405 struct optimized_kprobe *op;
406
407 /* Don't check i == 0, since that is a breakpoint case. */
408 for (i = 1; !p && i < MAX_OPTIMIZED_LENGTH; i++)
409 p = get_kprobe((void *)(addr - i));
410
411 if (p && kprobe_optready(p)) {
412 op = container_of(p, struct optimized_kprobe, kp);
413 if (arch_within_optimized_kprobe(op, addr))
414 return p;
415 }
416
287 return NULL; 417 return NULL;
288} 418}
289 419
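
get_optimized_kprobe() must look behind the address: a jump planted at addr - i may have replaced the very instruction bytes that contain addr. A compilable toy version of the backward scan, with a flat table and lookup() standing in for the kprobe hash:

    #include <stddef.h>

    #define MAX_OPT_LEN 5   /* assumed maximum bytes replaced by the jump */

    struct probe { unsigned long addr; unsigned long len; };

    static struct probe *lookup(struct probe *tab, size_t n, unsigned long a)
    {
        for (size_t i = 0; i < n; i++)
            if (tab[i].addr == a)
                return &tab[i];
        return NULL;
    }

    /* Scan each possible start address before 'addr' and test whether
     * the probe planted there covers 'addr' with its replaced bytes. */
    static struct probe *covering_probe(struct probe *tab, size_t n,
                                        unsigned long addr)
    {
        for (unsigned long i = 1; i < MAX_OPT_LEN; i++) {
            struct probe *p = lookup(tab, n, addr - i);

            if (p && addr < p->addr + p->len)
                return p;
        }
        return NULL;
    }
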
420/* Optimization staging list, protected by kprobe_mutex */
421static LIST_HEAD(optimizing_list);
422
423static void kprobe_optimizer(struct work_struct *work);
424static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
425#define OPTIMIZE_DELAY 5
426
427/* Kprobe jump optimizer */
428static __kprobes void kprobe_optimizer(struct work_struct *work)
429{
430 struct optimized_kprobe *op, *tmp;
431
432 /* Lock modules while optimizing kprobes */
433 mutex_lock(&module_mutex);
434 mutex_lock(&kprobe_mutex);
435 if (kprobes_all_disarmed || !kprobes_allow_optimization)
436 goto end;
437
438 /*
 439	 * Wait for a quiescence period to ensure all running interrupts
 440	 * are done. Because an optprobe may modify multiple instructions,
 441	 * there is a chance that the Nth instruction is interrupted. In
 442	 * that case, a running interrupt can return into the 2nd-Nth byte
 443	 * of the jump instruction. This wait avoids that.
444 */
445 synchronize_sched();
446
447 /*
 448	 * Optimization/unoptimization refers to online_cpus via
 449	 * stop_machine(), while cpu-hotplug modifies online_cpus; at the
 450	 * same time, text_mutex is held both by cpu-hotplug and here.
 451	 * This combination can deadlock: cpu-hotplug tries to lock
 452	 * text_mutex, while stop_machine() cannot proceed because
 453	 * online_cpus has changed.
 454	 * To avoid this deadlock, we call get_online_cpus() to keep
 455	 * cpu-hotplug away while text_mutex is locked.
456 */
457 get_online_cpus();
458 mutex_lock(&text_mutex);
459 list_for_each_entry_safe(op, tmp, &optimizing_list, list) {
460 WARN_ON(kprobe_disabled(&op->kp));
461 if (arch_optimize_kprobe(op) < 0)
462 op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
463 list_del_init(&op->list);
464 }
465 mutex_unlock(&text_mutex);
466 put_online_cpus();
467end:
468 mutex_unlock(&kprobe_mutex);
469 mutex_unlock(&module_mutex);
470}
471
472/* Optimize kprobe if p is ready to be optimized */
473static __kprobes void optimize_kprobe(struct kprobe *p)
474{
475 struct optimized_kprobe *op;
476
477 /* Check if the kprobe is disabled or not ready for optimization. */
478 if (!kprobe_optready(p) || !kprobes_allow_optimization ||
479 (kprobe_disabled(p) || kprobes_all_disarmed))
480 return;
481
 482	 /* Neither break_handler nor post_handler is supported. */
483 if (p->break_handler || p->post_handler)
484 return;
485
486 op = container_of(p, struct optimized_kprobe, kp);
487
 488	 /* Check that no other kprobes sit within the optimized instructions */
489 if (arch_check_optimized_kprobe(op) < 0)
490 return;
491
492 /* Check if it is already optimized. */
493 if (op->kp.flags & KPROBE_FLAG_OPTIMIZED)
494 return;
495
496 op->kp.flags |= KPROBE_FLAG_OPTIMIZED;
497 list_add(&op->list, &optimizing_list);
498 if (!delayed_work_pending(&optimizing_work))
499 schedule_delayed_work(&optimizing_work, OPTIMIZE_DELAY);
500}
501
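
Note that optimize_kprobe() only queues: probes registered within the OPTIMIZE_DELAY window are coalesced so the expensive quiescence wait in the optimizer runs once per batch, not once per probe. A single-threaded sketch of the coalescing pattern, with a direct worker call standing in for schedule_delayed_work():

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int id; struct node *next; };

    static struct node *optimizing_list;
    static int work_pending;

    /* Worker: drains everything queued so far in one pass. */
    static void optimizer(void)
    {
        while (optimizing_list) {
            struct node *n = optimizing_list;

            optimizing_list = n->next;
            printf("optimizing probe %d\n", n->id);
            free(n);
        }
        work_pending = 0;
    }

    /* Producer: enqueue, and kick the worker only if no run is pending. */
    static void queue_probe(int id)
    {
        struct node *n = malloc(sizeof(*n));

        n->id = id;
        n->next = optimizing_list;
        optimizing_list = n;
        if (!work_pending)
            work_pending = 1;   /* real code: schedule_delayed_work() */
    }

    int main(void)
    {
        queue_probe(1);
        queue_probe(2);
        optimizer();    /* stands in for the delayed work firing later */
        return 0;
    }
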
502/* Unoptimize a kprobe if p is optimized */
503static __kprobes void unoptimize_kprobe(struct kprobe *p)
504{
505 struct optimized_kprobe *op;
506
507 if ((p->flags & KPROBE_FLAG_OPTIMIZED) && kprobe_aggrprobe(p)) {
508 op = container_of(p, struct optimized_kprobe, kp);
509 if (!list_empty(&op->list))
510 /* Dequeue from the optimization queue */
511 list_del_init(&op->list);
512 else
513 /* Replace jump with break */
514 arch_unoptimize_kprobe(op);
515 op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
516 }
517}
518
519/* Remove optimized instructions */
520static void __kprobes kill_optimized_kprobe(struct kprobe *p)
521{
522 struct optimized_kprobe *op;
523
524 op = container_of(p, struct optimized_kprobe, kp);
525 if (!list_empty(&op->list)) {
526 /* Dequeue from the optimization queue */
527 list_del_init(&op->list);
528 op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
529 }
530 /* Don't unoptimize, because the target code will be freed. */
531 arch_remove_optimized_kprobe(op);
532}
533
534/* Try to prepare optimized instructions */
535static __kprobes void prepare_optimized_kprobe(struct kprobe *p)
536{
537 struct optimized_kprobe *op;
538
539 op = container_of(p, struct optimized_kprobe, kp);
540 arch_prepare_optimized_kprobe(op);
541}
542
543/* Free optimized instructions and optimized_kprobe */
544static __kprobes void free_aggr_kprobe(struct kprobe *p)
545{
546 struct optimized_kprobe *op;
547
548 op = container_of(p, struct optimized_kprobe, kp);
549 arch_remove_optimized_kprobe(op);
550 kfree(op);
551}
552
553/* Allocate new optimized_kprobe and try to prepare optimized instructions */
554static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
555{
556 struct optimized_kprobe *op;
557
558 op = kzalloc(sizeof(struct optimized_kprobe), GFP_KERNEL);
559 if (!op)
560 return NULL;
561
562 INIT_LIST_HEAD(&op->list);
563 op->kp.addr = p->addr;
564 arch_prepare_optimized_kprobe(op);
565
566 return &op->kp;
567}
568
569static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p);
570
571/*
572 * Prepare an optimized_kprobe and optimize it
573 * NOTE: p must be a normal registered kprobe
574 */
575static __kprobes void try_to_optimize_kprobe(struct kprobe *p)
576{
577 struct kprobe *ap;
578 struct optimized_kprobe *op;
579
580 ap = alloc_aggr_kprobe(p);
581 if (!ap)
582 return;
583
584 op = container_of(ap, struct optimized_kprobe, kp);
585 if (!arch_prepared_optinsn(&op->optinsn)) {
 586	 /* If setting up the optimized insns failed, fall back to a plain kprobe */
587 free_aggr_kprobe(ap);
588 return;
589 }
590
591 init_aggr_kprobe(ap, p);
592 optimize_kprobe(ap);
593}
594
595#ifdef CONFIG_SYSCTL
596static void __kprobes optimize_all_kprobes(void)
597{
598 struct hlist_head *head;
599 struct hlist_node *node;
600 struct kprobe *p;
601 unsigned int i;
602
603 /* If optimization is already allowed, just return */
604 if (kprobes_allow_optimization)
605 return;
606
607 kprobes_allow_optimization = true;
608 mutex_lock(&text_mutex);
609 for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
610 head = &kprobe_table[i];
611 hlist_for_each_entry_rcu(p, node, head, hlist)
612 if (!kprobe_disabled(p))
613 optimize_kprobe(p);
614 }
615 mutex_unlock(&text_mutex);
616 printk(KERN_INFO "Kprobes globally optimized\n");
617}
618
619static void __kprobes unoptimize_all_kprobes(void)
620{
621 struct hlist_head *head;
622 struct hlist_node *node;
623 struct kprobe *p;
624 unsigned int i;
625
626 /* If optimization is already prohibited, just return */
627 if (!kprobes_allow_optimization)
628 return;
629
630 kprobes_allow_optimization = false;
631 printk(KERN_INFO "Kprobes globally unoptimized\n");
632 get_online_cpus(); /* For avoiding text_mutex deadlock */
633 mutex_lock(&text_mutex);
634 for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
635 head = &kprobe_table[i];
636 hlist_for_each_entry_rcu(p, node, head, hlist) {
637 if (!kprobe_disabled(p))
638 unoptimize_kprobe(p);
639 }
640 }
641
642 mutex_unlock(&text_mutex);
643 put_online_cpus();
644 /* Allow all currently running kprobes to complete */
645 synchronize_sched();
646}
647
648int sysctl_kprobes_optimization;
649int proc_kprobes_optimization_handler(struct ctl_table *table, int write,
650 void __user *buffer, size_t *length,
651 loff_t *ppos)
652{
653 int ret;
654
655 mutex_lock(&kprobe_mutex);
656 sysctl_kprobes_optimization = kprobes_allow_optimization ? 1 : 0;
657 ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
658
659 if (sysctl_kprobes_optimization)
660 optimize_all_kprobes();
661 else
662 unoptimize_all_kprobes();
663 mutex_unlock(&kprobe_mutex);
664
665 return ret;
666}
667#endif /* CONFIG_SYSCTL */
668
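
Together with the debug_table entry added in kernel/sysctl.c below, this handler surfaces as /proc/sys/debug/kprobes-optimization. A small user-space helper that flips the knob (error handling kept minimal):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int set_kprobes_optimization(int on)
    {
        int fd = open("/proc/sys/debug/kprobes-optimization", O_WRONLY);

        if (fd < 0) {
            perror("open");
            return -1;
        }
        /* the handler accepts 0 or 1, clamped by .extra1/.extra2 */
        if (write(fd, on ? "1" : "0", 1) != 1)
            perror("write");
        close(fd);
        return 0;
    }

    int main(int argc, char **argv)
    {
        return set_kprobes_optimization(argc > 1 && argv[1][0] == '1');
    }
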
669static void __kprobes __arm_kprobe(struct kprobe *p)
670{
671 struct kprobe *old_p;
672
673 /* Check collision with other optimized kprobes */
674 old_p = get_optimized_kprobe((unsigned long)p->addr);
675 if (unlikely(old_p))
676 unoptimize_kprobe(old_p); /* Fallback to unoptimized kprobe */
677
678 arch_arm_kprobe(p);
679 optimize_kprobe(p); /* Try to optimize (add kprobe to a list) */
680}
681
682static void __kprobes __disarm_kprobe(struct kprobe *p)
683{
684 struct kprobe *old_p;
685
686 unoptimize_kprobe(p); /* Try to unoptimize */
687 arch_disarm_kprobe(p);
688
689 /* If another kprobe was blocked, optimize it. */
690 old_p = get_optimized_kprobe((unsigned long)p->addr);
691 if (unlikely(old_p))
692 optimize_kprobe(old_p);
693}
694
695#else /* !CONFIG_OPTPROBES */
696
697#define optimize_kprobe(p) do {} while (0)
698#define unoptimize_kprobe(p) do {} while (0)
699#define kill_optimized_kprobe(p) do {} while (0)
700#define prepare_optimized_kprobe(p) do {} while (0)
701#define try_to_optimize_kprobe(p) do {} while (0)
702#define __arm_kprobe(p) arch_arm_kprobe(p)
703#define __disarm_kprobe(p) arch_disarm_kprobe(p)
704
705static __kprobes void free_aggr_kprobe(struct kprobe *p)
706{
707 kfree(p);
708}
709
710static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p)
711{
712 return kzalloc(sizeof(struct kprobe), GFP_KERNEL);
713}
714#endif /* CONFIG_OPTPROBES */
715
290/* Arm a kprobe with text_mutex */ 716/* Arm a kprobe with text_mutex */
291static void __kprobes arm_kprobe(struct kprobe *kp) 717static void __kprobes arm_kprobe(struct kprobe *kp)
292{ 718{
719 /*
720 * Here, since __arm_kprobe() doesn't use stop_machine(),
721 * this doesn't cause deadlock on text_mutex. So, we don't
722 * need get_online_cpus().
723 */
293 mutex_lock(&text_mutex); 724 mutex_lock(&text_mutex);
294 arch_arm_kprobe(kp); 725 __arm_kprobe(kp);
295 mutex_unlock(&text_mutex); 726 mutex_unlock(&text_mutex);
296} 727}
297 728
298/* Disarm a kprobe with text_mutex */ 729/* Disarm a kprobe with text_mutex */
299static void __kprobes disarm_kprobe(struct kprobe *kp) 730static void __kprobes disarm_kprobe(struct kprobe *kp)
300{ 731{
732 get_online_cpus(); /* For avoiding text_mutex deadlock */
301 mutex_lock(&text_mutex); 733 mutex_lock(&text_mutex);
302 arch_disarm_kprobe(kp); 734 __disarm_kprobe(kp);
303 mutex_unlock(&text_mutex); 735 mutex_unlock(&text_mutex);
736 put_online_cpus();
304} 737}
305 738
306/* 739/*
@@ -369,7 +802,7 @@ static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs)
369void __kprobes kprobes_inc_nmissed_count(struct kprobe *p) 802void __kprobes kprobes_inc_nmissed_count(struct kprobe *p)
370{ 803{
371 struct kprobe *kp; 804 struct kprobe *kp;
372 if (p->pre_handler != aggr_pre_handler) { 805 if (!kprobe_aggrprobe(p)) {
373 p->nmissed++; 806 p->nmissed++;
374 } else { 807 } else {
375 list_for_each_entry_rcu(kp, &p->list, list) 808 list_for_each_entry_rcu(kp, &p->list, list)
@@ -493,21 +926,16 @@ static void __kprobes cleanup_rp_inst(struct kretprobe *rp)
493} 926}
494 927
495/* 928/*
496 * Keep all fields in the kprobe consistent
497 */
498static inline void copy_kprobe(struct kprobe *old_p, struct kprobe *p)
499{
500 memcpy(&p->opcode, &old_p->opcode, sizeof(kprobe_opcode_t));
501 memcpy(&p->ainsn, &old_p->ainsn, sizeof(struct arch_specific_insn));
502}
503
504/*
505* Add the new probe to ap->list. Fail if this is the 929* Add the new probe to ap->list. Fail if this is the
506* second jprobe at the address - two jprobes can't coexist 930* second jprobe at the address - two jprobes can't coexist
507*/ 931*/
508static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p) 932static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
509{ 933{
510 BUG_ON(kprobe_gone(ap) || kprobe_gone(p)); 934 BUG_ON(kprobe_gone(ap) || kprobe_gone(p));
935
936 if (p->break_handler || p->post_handler)
937 unoptimize_kprobe(ap); /* Fall back to normal kprobe */
938
511 if (p->break_handler) { 939 if (p->break_handler) {
512 if (ap->break_handler) 940 if (ap->break_handler)
513 return -EEXIST; 941 return -EEXIST;
@@ -522,7 +950,7 @@ static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
522 ap->flags &= ~KPROBE_FLAG_DISABLED; 950 ap->flags &= ~KPROBE_FLAG_DISABLED;
523 if (!kprobes_all_disarmed) 951 if (!kprobes_all_disarmed)
524 /* Arm the breakpoint again. */ 952 /* Arm the breakpoint again. */
525 arm_kprobe(ap); 953 __arm_kprobe(ap);
526 } 954 }
527 return 0; 955 return 0;
528} 956}
@@ -531,12 +959,13 @@ static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p)
531 * Fill in the required fields of the "manager kprobe". Replace the 959 * Fill in the required fields of the "manager kprobe". Replace the
532 * earlier kprobe in the hlist with the manager kprobe 960 * earlier kprobe in the hlist with the manager kprobe
533 */ 961 */
534static inline void add_aggr_kprobe(struct kprobe *ap, struct kprobe *p) 962static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
535{ 963{
964 /* Copy p's insn slot to ap */
536 copy_kprobe(p, ap); 965 copy_kprobe(p, ap);
537 flush_insn_slot(ap); 966 flush_insn_slot(ap);
538 ap->addr = p->addr; 967 ap->addr = p->addr;
539 ap->flags = p->flags; 968 ap->flags = p->flags & ~KPROBE_FLAG_OPTIMIZED;
540 ap->pre_handler = aggr_pre_handler; 969 ap->pre_handler = aggr_pre_handler;
541 ap->fault_handler = aggr_fault_handler; 970 ap->fault_handler = aggr_fault_handler;
 542	 /* We don't care about kprobes which have gone. */ 971	 /* We don't care about kprobes which have gone. */
@@ -546,8 +975,9 @@ static inline void add_aggr_kprobe(struct kprobe *ap, struct kprobe *p)
546 ap->break_handler = aggr_break_handler; 975 ap->break_handler = aggr_break_handler;
547 976
548 INIT_LIST_HEAD(&ap->list); 977 INIT_LIST_HEAD(&ap->list);
549 list_add_rcu(&p->list, &ap->list); 978 INIT_HLIST_NODE(&ap->hlist);
550 979
980 list_add_rcu(&p->list, &ap->list);
551 hlist_replace_rcu(&p->hlist, &ap->hlist); 981 hlist_replace_rcu(&p->hlist, &ap->hlist);
552} 982}
553 983
@@ -561,12 +991,12 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
561 int ret = 0; 991 int ret = 0;
562 struct kprobe *ap = old_p; 992 struct kprobe *ap = old_p;
563 993
564 if (old_p->pre_handler != aggr_pre_handler) { 994 if (!kprobe_aggrprobe(old_p)) {
565 /* If old_p is not an aggr_probe, create new aggr_kprobe. */ 995 /* If old_p is not an aggr_kprobe, create new aggr_kprobe. */
566 ap = kzalloc(sizeof(struct kprobe), GFP_KERNEL); 996 ap = alloc_aggr_kprobe(old_p);
567 if (!ap) 997 if (!ap)
568 return -ENOMEM; 998 return -ENOMEM;
569 add_aggr_kprobe(ap, old_p); 999 init_aggr_kprobe(ap, old_p);
570 } 1000 }
571 1001
572 if (kprobe_gone(ap)) { 1002 if (kprobe_gone(ap)) {
@@ -585,6 +1015,9 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
585 */ 1015 */
586 return ret; 1016 return ret;
587 1017
1018 /* Prepare optimized instructions if possible. */
1019 prepare_optimized_kprobe(ap);
1020
588 /* 1021 /*
589 * Clear gone flag to prevent allocating new slot again, and 1022 * Clear gone flag to prevent allocating new slot again, and
590 * set disabled flag because it is not armed yet. 1023 * set disabled flag because it is not armed yet.
@@ -593,6 +1026,7 @@ static int __kprobes register_aggr_kprobe(struct kprobe *old_p,
593 | KPROBE_FLAG_DISABLED; 1026 | KPROBE_FLAG_DISABLED;
594 } 1027 }
595 1028
1029 /* Copy ap's insn slot to p */
596 copy_kprobe(ap, p); 1030 copy_kprobe(ap, p);
597 return add_new_kprobe(ap, p); 1031 return add_new_kprobe(ap, p);
598} 1032}
@@ -743,27 +1177,34 @@ int __kprobes register_kprobe(struct kprobe *p)
743 p->nmissed = 0; 1177 p->nmissed = 0;
744 INIT_LIST_HEAD(&p->list); 1178 INIT_LIST_HEAD(&p->list);
745 mutex_lock(&kprobe_mutex); 1179 mutex_lock(&kprobe_mutex);
1180
1181 get_online_cpus(); /* For avoiding text_mutex deadlock. */
1182 mutex_lock(&text_mutex);
1183
746 old_p = get_kprobe(p->addr); 1184 old_p = get_kprobe(p->addr);
747 if (old_p) { 1185 if (old_p) {
 1186	 /* Since this may unoptimize old_p, text_mutex is locked here. */
748 ret = register_aggr_kprobe(old_p, p); 1187 ret = register_aggr_kprobe(old_p, p);
749 goto out; 1188 goto out;
750 } 1189 }
751 1190
752 mutex_lock(&text_mutex);
753 ret = arch_prepare_kprobe(p); 1191 ret = arch_prepare_kprobe(p);
754 if (ret) 1192 if (ret)
755 goto out_unlock_text; 1193 goto out;
756 1194
757 INIT_HLIST_NODE(&p->hlist); 1195 INIT_HLIST_NODE(&p->hlist);
758 hlist_add_head_rcu(&p->hlist, 1196 hlist_add_head_rcu(&p->hlist,
759 &kprobe_table[hash_ptr(p->addr, KPROBE_HASH_BITS)]); 1197 &kprobe_table[hash_ptr(p->addr, KPROBE_HASH_BITS)]);
760 1198
761 if (!kprobes_all_disarmed && !kprobe_disabled(p)) 1199 if (!kprobes_all_disarmed && !kprobe_disabled(p))
762 arch_arm_kprobe(p); 1200 __arm_kprobe(p);
1201
1202 /* Try to optimize kprobe */
1203 try_to_optimize_kprobe(p);
763 1204
764out_unlock_text:
765 mutex_unlock(&text_mutex);
766out: 1205out:
1206 mutex_unlock(&text_mutex);
1207 put_online_cpus();
767 mutex_unlock(&kprobe_mutex); 1208 mutex_unlock(&kprobe_mutex);
768 1209
769 if (probed_mod) 1210 if (probed_mod)
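
register_kprobe() now takes the CPU-hotplug read lock before text_mutex, the same ordering used by disarm_kprobe() above and disarm_all_kprobes() below; taking them in the other order could deadlock against a hotplug operation. The discipline in isolation, as a sketch using only the calls that appear in these hunks:

    #include <linux/cpu.h>
    #include <linux/memory.h>   /* text_mutex */
    #include <linux/mutex.h>

    static void modify_kernel_text(void (*poke)(void))
    {
        get_online_cpus();          /* pin the online CPU set first... */
        mutex_lock(&text_mutex);    /* ...then take the text lock */
        poke();
        mutex_unlock(&text_mutex);
        put_online_cpus();
    }
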
@@ -785,7 +1226,7 @@ static int __kprobes __unregister_kprobe_top(struct kprobe *p)
785 return -EINVAL; 1226 return -EINVAL;
786 1227
787 if (old_p == p || 1228 if (old_p == p ||
788 (old_p->pre_handler == aggr_pre_handler && 1229 (kprobe_aggrprobe(old_p) &&
789 list_is_singular(&old_p->list))) { 1230 list_is_singular(&old_p->list))) {
790 /* 1231 /*
791 * Only probe on the hash list. Disarm only if kprobes are 1232 * Only probe on the hash list. Disarm only if kprobes are
@@ -793,7 +1234,7 @@ static int __kprobes __unregister_kprobe_top(struct kprobe *p)
793 * already have been removed. We save on flushing icache. 1234 * already have been removed. We save on flushing icache.
794 */ 1235 */
795 if (!kprobes_all_disarmed && !kprobe_disabled(old_p)) 1236 if (!kprobes_all_disarmed && !kprobe_disabled(old_p))
796 disarm_kprobe(p); 1237 disarm_kprobe(old_p);
797 hlist_del_rcu(&old_p->hlist); 1238 hlist_del_rcu(&old_p->hlist);
798 } else { 1239 } else {
799 if (p->break_handler && !kprobe_gone(p)) 1240 if (p->break_handler && !kprobe_gone(p))
@@ -809,8 +1250,13 @@ noclean:
809 list_del_rcu(&p->list); 1250 list_del_rcu(&p->list);
810 if (!kprobe_disabled(old_p)) { 1251 if (!kprobe_disabled(old_p)) {
811 try_to_disable_aggr_kprobe(old_p); 1252 try_to_disable_aggr_kprobe(old_p);
812 if (!kprobes_all_disarmed && kprobe_disabled(old_p)) 1253 if (!kprobes_all_disarmed) {
813 disarm_kprobe(old_p); 1254 if (kprobe_disabled(old_p))
1255 disarm_kprobe(old_p);
1256 else
1257 /* Try to optimize this probe again */
1258 optimize_kprobe(old_p);
1259 }
814 } 1260 }
815 } 1261 }
816 return 0; 1262 return 0;
@@ -827,7 +1273,7 @@ static void __kprobes __unregister_kprobe_bottom(struct kprobe *p)
827 old_p = list_entry(p->list.next, struct kprobe, list); 1273 old_p = list_entry(p->list.next, struct kprobe, list);
828 list_del(&p->list); 1274 list_del(&p->list);
829 arch_remove_kprobe(old_p); 1275 arch_remove_kprobe(old_p);
830 kfree(old_p); 1276 free_aggr_kprobe(old_p);
831 } 1277 }
832} 1278}
833 1279
@@ -1123,7 +1569,7 @@ static void __kprobes kill_kprobe(struct kprobe *p)
1123 struct kprobe *kp; 1569 struct kprobe *kp;
1124 1570
1125 p->flags |= KPROBE_FLAG_GONE; 1571 p->flags |= KPROBE_FLAG_GONE;
1126 if (p->pre_handler == aggr_pre_handler) { 1572 if (kprobe_aggrprobe(p)) {
1127 /* 1573 /*
1128 * If this is an aggr_kprobe, we have to list all the 1574 * If this is an aggr_kprobe, we have to list all the
1129 * chained probes and mark them GONE. 1575 * chained probes and mark them GONE.
@@ -1132,6 +1578,7 @@ static void __kprobes kill_kprobe(struct kprobe *p)
1132 kp->flags |= KPROBE_FLAG_GONE; 1578 kp->flags |= KPROBE_FLAG_GONE;
1133 p->post_handler = NULL; 1579 p->post_handler = NULL;
1134 p->break_handler = NULL; 1580 p->break_handler = NULL;
1581 kill_optimized_kprobe(p);
1135 } 1582 }
1136 /* 1583 /*
1137 * Here, we can remove insn_slot safely, because no thread calls 1584 * Here, we can remove insn_slot safely, because no thread calls
@@ -1241,6 +1688,15 @@ static int __init init_kprobes(void)
1241 } 1688 }
1242 } 1689 }
1243 1690
1691#if defined(CONFIG_OPTPROBES)
1692#if defined(__ARCH_WANT_KPROBES_INSN_SLOT)
1693 /* Init kprobe_optinsn_slots */
1694 kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
1695#endif
1696 /* By default, kprobes can be optimized */
1697 kprobes_allow_optimization = true;
1698#endif
1699
1244 /* By default, kprobes are armed */ 1700 /* By default, kprobes are armed */
1245 kprobes_all_disarmed = false; 1701 kprobes_all_disarmed = false;
1246 1702
@@ -1259,7 +1715,7 @@ static int __init init_kprobes(void)
1259 1715
1260#ifdef CONFIG_DEBUG_FS 1716#ifdef CONFIG_DEBUG_FS
1261static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p, 1717static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p,
1262 const char *sym, int offset,char *modname) 1718 const char *sym, int offset, char *modname, struct kprobe *pp)
1263{ 1719{
1264 char *kprobe_type; 1720 char *kprobe_type;
1265 1721
@@ -1269,19 +1725,21 @@ static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p,
1269 kprobe_type = "j"; 1725 kprobe_type = "j";
1270 else 1726 else
1271 kprobe_type = "k"; 1727 kprobe_type = "k";
1728
1272 if (sym) 1729 if (sym)
1273 seq_printf(pi, "%p %s %s+0x%x %s %s%s\n", 1730 seq_printf(pi, "%p %s %s+0x%x %s ",
1274 p->addr, kprobe_type, sym, offset, 1731 p->addr, kprobe_type, sym, offset,
1275 (modname ? modname : " "), 1732 (modname ? modname : " "));
1276 (kprobe_gone(p) ? "[GONE]" : ""),
1277 ((kprobe_disabled(p) && !kprobe_gone(p)) ?
1278 "[DISABLED]" : ""));
1279 else 1733 else
1280 seq_printf(pi, "%p %s %p %s%s\n", 1734 seq_printf(pi, "%p %s %p ",
1281 p->addr, kprobe_type, p->addr, 1735 p->addr, kprobe_type, p->addr);
1282 (kprobe_gone(p) ? "[GONE]" : ""), 1736
1283 ((kprobe_disabled(p) && !kprobe_gone(p)) ? 1737 if (!pp)
1284 "[DISABLED]" : "")); 1738 pp = p;
1739 seq_printf(pi, "%s%s%s\n",
1740 (kprobe_gone(p) ? "[GONE]" : ""),
1741 ((kprobe_disabled(p) && !kprobe_gone(p)) ? "[DISABLED]" : ""),
1742 (kprobe_optimized(pp) ? "[OPTIMIZED]" : ""));
1285} 1743}
1286 1744
1287static void __kprobes *kprobe_seq_start(struct seq_file *f, loff_t *pos) 1745static void __kprobes *kprobe_seq_start(struct seq_file *f, loff_t *pos)
@@ -1317,11 +1775,11 @@ static int __kprobes show_kprobe_addr(struct seq_file *pi, void *v)
1317 hlist_for_each_entry_rcu(p, node, head, hlist) { 1775 hlist_for_each_entry_rcu(p, node, head, hlist) {
1318 sym = kallsyms_lookup((unsigned long)p->addr, NULL, 1776 sym = kallsyms_lookup((unsigned long)p->addr, NULL,
1319 &offset, &modname, namebuf); 1777 &offset, &modname, namebuf);
1320 if (p->pre_handler == aggr_pre_handler) { 1778 if (kprobe_aggrprobe(p)) {
1321 list_for_each_entry_rcu(kp, &p->list, list) 1779 list_for_each_entry_rcu(kp, &p->list, list)
1322 report_probe(pi, kp, sym, offset, modname); 1780 report_probe(pi, kp, sym, offset, modname, p);
1323 } else 1781 } else
1324 report_probe(pi, p, sym, offset, modname); 1782 report_probe(pi, p, sym, offset, modname, NULL);
1325 } 1783 }
1326 preempt_enable(); 1784 preempt_enable();
1327 return 0; 1785 return 0;
@@ -1399,12 +1857,13 @@ int __kprobes enable_kprobe(struct kprobe *kp)
1399 goto out; 1857 goto out;
1400 } 1858 }
1401 1859
1402 if (!kprobes_all_disarmed && kprobe_disabled(p))
1403 arm_kprobe(p);
1404
1405 p->flags &= ~KPROBE_FLAG_DISABLED;
1406 if (p != kp) 1860 if (p != kp)
1407 kp->flags &= ~KPROBE_FLAG_DISABLED; 1861 kp->flags &= ~KPROBE_FLAG_DISABLED;
1862
1863 if (!kprobes_all_disarmed && kprobe_disabled(p)) {
1864 p->flags &= ~KPROBE_FLAG_DISABLED;
1865 arm_kprobe(p);
1866 }
1408out: 1867out:
1409 mutex_unlock(&kprobe_mutex); 1868 mutex_unlock(&kprobe_mutex);
1410 return ret; 1869 return ret;
@@ -1424,12 +1883,13 @@ static void __kprobes arm_all_kprobes(void)
1424 if (!kprobes_all_disarmed) 1883 if (!kprobes_all_disarmed)
1425 goto already_enabled; 1884 goto already_enabled;
1426 1885
 1886	 /* Arming kprobes doesn't optimize the kprobe itself */
1427 mutex_lock(&text_mutex); 1887 mutex_lock(&text_mutex);
1428 for (i = 0; i < KPROBE_TABLE_SIZE; i++) { 1888 for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
1429 head = &kprobe_table[i]; 1889 head = &kprobe_table[i];
1430 hlist_for_each_entry_rcu(p, node, head, hlist) 1890 hlist_for_each_entry_rcu(p, node, head, hlist)
1431 if (!kprobe_disabled(p)) 1891 if (!kprobe_disabled(p))
1432 arch_arm_kprobe(p); 1892 __arm_kprobe(p);
1433 } 1893 }
1434 mutex_unlock(&text_mutex); 1894 mutex_unlock(&text_mutex);
1435 1895
@@ -1456,16 +1916,23 @@ static void __kprobes disarm_all_kprobes(void)
1456 1916
1457 kprobes_all_disarmed = true; 1917 kprobes_all_disarmed = true;
1458 printk(KERN_INFO "Kprobes globally disabled\n"); 1918 printk(KERN_INFO "Kprobes globally disabled\n");
1919
1920 /*
1921 * Here we call get_online_cpus() for avoiding text_mutex deadlock,
1922 * because disarming may also unoptimize kprobes.
1923 */
1924 get_online_cpus();
1459 mutex_lock(&text_mutex); 1925 mutex_lock(&text_mutex);
1460 for (i = 0; i < KPROBE_TABLE_SIZE; i++) { 1926 for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
1461 head = &kprobe_table[i]; 1927 head = &kprobe_table[i];
1462 hlist_for_each_entry_rcu(p, node, head, hlist) { 1928 hlist_for_each_entry_rcu(p, node, head, hlist) {
1463 if (!arch_trampoline_kprobe(p) && !kprobe_disabled(p)) 1929 if (!arch_trampoline_kprobe(p) && !kprobe_disabled(p))
1464 arch_disarm_kprobe(p); 1930 __disarm_kprobe(p);
1465 } 1931 }
1466 } 1932 }
1467 1933
1468 mutex_unlock(&text_mutex); 1934 mutex_unlock(&text_mutex);
1935 put_online_cpus();
1469 mutex_unlock(&kprobe_mutex); 1936 mutex_unlock(&kprobe_mutex);
1470 /* Allow all currently running kprobes to complete */ 1937 /* Allow all currently running kprobes to complete */
1471 synchronize_sched(); 1938 synchronize_sched();
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 33e7a38b6eb9..0ef19c614f6d 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -50,6 +50,7 @@
50#include <linux/ftrace.h> 50#include <linux/ftrace.h>
51#include <linux/slow-work.h> 51#include <linux/slow-work.h>
52#include <linux/perf_event.h> 52#include <linux/perf_event.h>
53#include <linux/kprobes.h>
53 54
54#include <asm/uaccess.h> 55#include <asm/uaccess.h>
55#include <asm/processor.h> 56#include <asm/processor.h>
@@ -1450,6 +1451,17 @@ static struct ctl_table debug_table[] = {
1450 .proc_handler = proc_dointvec 1451 .proc_handler = proc_dointvec
1451 }, 1452 },
1452#endif 1453#endif
1454#if defined(CONFIG_OPTPROBES)
1455 {
1456 .procname = "kprobes-optimization",
1457 .data = &sysctl_kprobes_optimization,
1458 .maxlen = sizeof(int),
1459 .mode = 0644,
1460 .proc_handler = proc_kprobes_optimization_handler,
1461 .extra1 = &zero,
1462 .extra2 = &one,
1463 },
1464#endif
1453 { } 1465 { }
1454}; 1466};
1455 1467
diff --git a/tools/perf/Documentation/perf-probe.txt b/tools/perf/Documentation/perf-probe.txt
index 2de34075f6a4..34202b1be0bb 100644
--- a/tools/perf/Documentation/perf-probe.txt
+++ b/tools/perf/Documentation/perf-probe.txt
@@ -41,7 +41,8 @@ OPTIONS
41 41
42-d:: 42-d::
43--del=:: 43--del=::
 44 Delete a probe event. 44 Delete probe events. This accepts glob wildcards ('*', '?') and character
 45 classes (e.g. [a-z], [!A-Z]).
45 46
46-l:: 47-l::
47--list:: 48--list::
@@ -50,17 +51,29 @@ OPTIONS
50-L:: 51-L::
51--line=:: 52--line=::
52 Show source code lines which can be probed. This needs an argument 53 Show source code lines which can be probed. This needs an argument
53 which specifies a range of the source code. 54 which specifies a range of the source code. (see LINE SYNTAX for detail)
55
56-f::
57--force::
58 Forcibly add events with existing name.
54 59
55PROBE SYNTAX 60PROBE SYNTAX
56------------ 61------------
57Probe points are defined by following syntax. 62Probe points are defined by following syntax.
58 63
59 "[EVENT=]FUNC[+OFFS|:RLN|%return][@SRC]|SRC:ALN [ARG ...]" 64 1) Define event based on function name
65 [EVENT=]FUNC[@SRC][:RLN|+OFFS|%return|;PTN] [ARG ...]
66
67 2) Define event based on source file with line number
68 [EVENT=]SRC:ALN [ARG ...]
69
70 3) Define event based on source file with lazy pattern
71 [EVENT=]SRC;PTN [ARG ...]
72
60 73
 61'EVENT' specifies the name of the new event; if omitted, it is set to the name of the probed function. Currently, the event group name is fixed to 'probe'. 74
62'FUNC' specifies a probed function name, and it may have one of the following options; '+OFFS' is the offset from function entry address in bytes, 'RLN' is the relative-line number from function entry line, and '%return' means that it probes function return. In addition, 'SRC' specifies a source file which has that function. 75'FUNC' specifies a probed function name, and it may have one of the following options; '+OFFS' is the offset from function entry address in bytes, ':RLN' is the relative-line number from function entry line, and '%return' means that it probes function return. And ';PTN' means lazy matching pattern (see LAZY MATCHING). Note that ';PTN' must be the end of the probe point definition. In addition, '@SRC' specifies a source file which has that function.
63It is also possible to specify a probe point by the source line number by using 'SRC:ALN' syntax, where 'SRC' is the source file path and 'ALN' is the line number. 76It is also possible to specify a probe point by the source line number or lazy matching by using 'SRC:ALN' or 'SRC;PTN' syntax, where 'SRC' is the source file path, ':ALN' is the line number and ';PTN' is the lazy matching pattern.
64'ARG' specifies the arguments of this probe point. You can use the name of local variable, or kprobe-tracer argument format (e.g. $retval, %ax, etc). 77'ARG' specifies the arguments of this probe point. You can use the name of local variable, or kprobe-tracer argument format (e.g. $retval, %ax, etc).
65 78
66LINE SYNTAX 79LINE SYNTAX
@@ -76,6 +89,41 @@ and 'ALN2' is end line number in the file. It is also possible to specify how
76many lines to show by using 'NUM'. 89many lines to show by using 'NUM'.
 77So, "source.c:100-120" shows the lines from the 100th to the 120th of source.c, and "func:10+20" shows 20 lines starting from the 10th line of func. 90
78 91
92LAZY MATCHING
93-------------
 94 Lazy line matching is similar to glob matching, but it ignores spaces in both the pattern and the target. So it accepts wildcards ('*', '?') and character classes (e.g. [a-z], [!A-Z]).
95
96e.g.
 97 'a=*' can match 'a=b', 'a = b', 'a == b' and so on.
98
 99This gives probe point definitions some flexibility and robustness against minor code changes. For example, the actual 10th line of schedule() can easily move when schedule() is modified, but a line matching 'rq=cpu_rq*' will likely still exist in the function.
100
101
102EXAMPLES
103--------
104Display which lines in schedule() can be probed:
105
106 ./perf probe --line schedule
107
 108Add a probe at the 12th line of schedule(), recording the 'cpu' local variable:
109
110 ./perf probe schedule:12 cpu
111 or
112 ./perf probe --add='schedule:12 cpu'
113
 114 This will add one or more probes whose names start with "schedule".
115
 116 Add probes on all lines in schedule() which call update_rq_clock():
117
118 ./perf probe 'schedule;update_rq_clock*'
119 or
120 ./perf probe --add='schedule;update_rq_clock*'
121
 122Delete all probes on schedule():
123
124 ./perf probe --del='schedule*'
125
126
79SEE ALSO 127SEE ALSO
80-------- 128--------
81linkperf:perf-trace[1], linkperf:perf-record[1] 129linkperf:perf-trace[1], linkperf:perf-record[1]
diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index 54a5b50ff312..2d537382c686 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -500,12 +500,12 @@ else
500 msg := $(error No libelf.h/libelf found, please install libelf-dev/elfutils-libelf-devel and glibc-dev[el]); 500 msg := $(error No libelf.h/libelf found, please install libelf-dev/elfutils-libelf-devel and glibc-dev[el]);
501endif 501endif
502 502
503ifneq ($(shell sh -c "(echo '\#ifndef _MIPS_SZLONG'; echo '\#define _MIPS_SZLONG 0'; echo '\#endif'; echo '\#include <dwarf.h>'; echo '\#include <libdwarf.h>'; echo 'int main(void) { Dwarf_Debug dbg; Dwarf_Error err; Dwarf_Ranges *rng; dwarf_init(0, DW_DLC_READ, 0, 0, &dbg, &err); dwarf_get_ranges(dbg, 0, &rng, 0, 0, &err); return (long)dbg; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/libdwarf -ldwarf -lelf -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 503ifneq ($(shell sh -c "(echo '\#include <dwarf.h>'; echo '\#include <libdw.h>'; echo 'int main(void) { Dwarf *dbg; dbg = dwarf_begin(0, DWARF_C_READ); return (long)dbg; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/elfutils -ldw -lelf -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y)
504 msg := $(warning No libdwarf.h found or old libdwarf.h found, disables dwarf support. Please install libdwarf-dev/libdwarf-devel >= 20081231); 504 msg := $(warning No libdw.h found or old libdw.h found, disables dwarf support. Please install elfutils-devel/elfutils-dev);
505 BASIC_CFLAGS += -DNO_LIBDWARF 505 BASIC_CFLAGS += -DNO_DWARF_SUPPORT
506else 506else
507 BASIC_CFLAGS += -I/usr/include/libdwarf 507 BASIC_CFLAGS += -I/usr/include/elfutils
508 EXTLIBS += -lelf -ldwarf 508 EXTLIBS += -lelf -ldw
509 LIB_OBJS += util/probe-finder.o 509 LIB_OBJS += util/probe-finder.o
510endif 510endif
511 511
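
Unrolled from the shell one-liner above, the feature test that the Makefile now compiles and links with -ldw is essentially this program; if it builds, elfutils' libdw is usable:

    #include <dwarf.h>
    #include <libdw.h>

    int main(void)
    {
        Dwarf *dbg = dwarf_begin(0, DWARF_C_READ);

        return (long)dbg;
    }
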
diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c
index ad47bd4c50ef..c30a33592340 100644
--- a/tools/perf/builtin-probe.c
+++ b/tools/perf/builtin-probe.c
@@ -128,7 +128,7 @@ static void evaluate_probe_point(struct probe_point *pp)
128 pp->function); 128 pp->function);
129} 129}
130 130
131#ifndef NO_LIBDWARF 131#ifndef NO_DWARF_SUPPORT
132static int open_vmlinux(void) 132static int open_vmlinux(void)
133{ 133{
134 if (map__load(session.kmaps[MAP__FUNCTION], NULL) < 0) { 134 if (map__load(session.kmaps[MAP__FUNCTION], NULL) < 0) {
@@ -156,14 +156,16 @@ static const char * const probe_usage[] = {
156 "perf probe [<options>] --add 'PROBEDEF' [--add 'PROBEDEF' ...]", 156 "perf probe [<options>] --add 'PROBEDEF' [--add 'PROBEDEF' ...]",
157 "perf probe [<options>] --del '[GROUP:]EVENT' ...", 157 "perf probe [<options>] --del '[GROUP:]EVENT' ...",
158 "perf probe --list", 158 "perf probe --list",
159#ifndef NO_DWARF_SUPPORT
159 "perf probe --line 'LINEDESC'", 160 "perf probe --line 'LINEDESC'",
161#endif
160 NULL 162 NULL
161}; 163};
162 164
163static const struct option options[] = { 165static const struct option options[] = {
164 OPT_BOOLEAN('v', "verbose", &verbose, 166 OPT_BOOLEAN('v', "verbose", &verbose,
165 "be more verbose (show parsed arguments, etc)"), 167 "be more verbose (show parsed arguments, etc)"),
166#ifndef NO_LIBDWARF 168#ifndef NO_DWARF_SUPPORT
167 OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name, 169 OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name,
168 "file", "vmlinux pathname"), 170 "file", "vmlinux pathname"),
169#endif 171#endif
@@ -172,30 +174,32 @@ static const struct option options[] = {
172 OPT_CALLBACK('d', "del", NULL, "[GROUP:]EVENT", "delete a probe event.", 174 OPT_CALLBACK('d', "del", NULL, "[GROUP:]EVENT", "delete a probe event.",
173 opt_del_probe_event), 175 opt_del_probe_event),
174 OPT_CALLBACK('a', "add", NULL, 176 OPT_CALLBACK('a', "add", NULL,
175#ifdef NO_LIBDWARF 177#ifdef NO_DWARF_SUPPORT
176 "[EVENT=]FUNC[+OFFS|%return] [ARG ...]", 178 "[EVENT=]FUNC[+OFF|%return] [ARG ...]",
177#else 179#else
178 "[EVENT=]FUNC[+OFFS|%return|:RLN][@SRC]|SRC:ALN [ARG ...]", 180 "[EVENT=]FUNC[@SRC][+OFF|%return|:RL|;PT]|SRC:AL|SRC;PT"
181 " [ARG ...]",
179#endif 182#endif
180 "probe point definition, where\n" 183 "probe point definition, where\n"
181 "\t\tGROUP:\tGroup name (optional)\n" 184 "\t\tGROUP:\tGroup name (optional)\n"
182 "\t\tEVENT:\tEvent name\n" 185 "\t\tEVENT:\tEvent name\n"
183 "\t\tFUNC:\tFunction name\n" 186 "\t\tFUNC:\tFunction name\n"
184 "\t\tOFFS:\tOffset from function entry (in byte)\n" 187 "\t\tOFF:\tOffset from function entry (in byte)\n"
185 "\t\t%return:\tPut the probe at function return\n" 188 "\t\t%return:\tPut the probe at function return\n"
186#ifdef NO_LIBDWARF 189#ifdef NO_DWARF_SUPPORT
187 "\t\tARG:\tProbe argument (only \n" 190 "\t\tARG:\tProbe argument (only \n"
188#else 191#else
189 "\t\tSRC:\tSource code path\n" 192 "\t\tSRC:\tSource code path\n"
190 "\t\tRLN:\tRelative line number from function entry.\n" 193 "\t\tRL:\tRelative line number from function entry.\n"
191 "\t\tALN:\tAbsolute line number in file.\n" 194 "\t\tAL:\tAbsolute line number in file.\n"
195 "\t\tPT:\tLazy expression of line code.\n"
192 "\t\tARG:\tProbe argument (local variable name or\n" 196 "\t\tARG:\tProbe argument (local variable name or\n"
193#endif 197#endif
194 "\t\t\tkprobe-tracer argument format.)\n", 198 "\t\t\tkprobe-tracer argument format.)\n",
195 opt_add_probe_event), 199 opt_add_probe_event),
196 OPT_BOOLEAN('f', "force", &session.force_add, "forcibly add events" 200 OPT_BOOLEAN('f', "force", &session.force_add, "forcibly add events"
197 " with existing name"), 201 " with existing name"),
198#ifndef NO_LIBDWARF 202#ifndef NO_DWARF_SUPPORT
199 OPT_CALLBACK('L', "line", NULL, 203 OPT_CALLBACK('L', "line", NULL,
200 "FUNC[:RLN[+NUM|:RLN2]]|SRC:ALN[+NUM|:ALN2]", 204 "FUNC[:RLN[+NUM|:RLN2]]|SRC:ALN[+NUM|:ALN2]",
201 "Show source code lines.", opt_show_lines), 205 "Show source code lines.", opt_show_lines),
@@ -223,7 +227,7 @@ static void init_vmlinux(void)
223int cmd_probe(int argc, const char **argv, const char *prefix __used) 227int cmd_probe(int argc, const char **argv, const char *prefix __used)
224{ 228{
225 int i, ret; 229 int i, ret;
226#ifndef NO_LIBDWARF 230#ifndef NO_DWARF_SUPPORT
227 int fd; 231 int fd;
228#endif 232#endif
229 struct probe_point *pp; 233 struct probe_point *pp;
@@ -259,7 +263,7 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used)
259 return 0; 263 return 0;
260 } 264 }
261 265
262#ifndef NO_LIBDWARF 266#ifndef NO_DWARF_SUPPORT
263 if (session.show_lines) { 267 if (session.show_lines) {
264 if (session.nr_probe != 0 || session.dellist) { 268 if (session.nr_probe != 0 || session.dellist) {
265 pr_warning(" Error: Don't use --line with" 269 pr_warning(" Error: Don't use --line with"
@@ -290,9 +294,9 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used)
290 init_vmlinux(); 294 init_vmlinux();
291 295
292 if (session.need_dwarf) 296 if (session.need_dwarf)
293#ifdef NO_LIBDWARF 297#ifdef NO_DWARF_SUPPORT
294 die("Debuginfo-analysis is not supported"); 298 die("Debuginfo-analysis is not supported");
295#else /* !NO_LIBDWARF */ 299#else /* !NO_DWARF_SUPPORT */
296 pr_debug("Some probes require debuginfo.\n"); 300 pr_debug("Some probes require debuginfo.\n");
297 301
298 fd = open_vmlinux(); 302 fd = open_vmlinux();
@@ -312,7 +316,7 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used)
312 continue; 316 continue;
313 317
314 lseek(fd, SEEK_SET, 0); 318 lseek(fd, SEEK_SET, 0);
315 ret = find_probepoint(fd, pp); 319 ret = find_probe_point(fd, pp);
316 if (ret > 0) 320 if (ret > 0)
317 continue; 321 continue;
318 if (ret == 0) { /* No error but failed to find probe point. */ 322 if (ret == 0) { /* No error but failed to find probe point. */
@@ -333,7 +337,7 @@ int cmd_probe(int argc, const char **argv, const char *prefix __used)
333 close(fd); 337 close(fd);
334 338
335end_dwarf: 339end_dwarf:
336#endif /* !NO_LIBDWARF */ 340#endif /* !NO_DWARF_SUPPORT */
337 341
338 /* Synthesize probes without dwarf */ 342 /* Synthesize probes without dwarf */
339 for (i = 0; i < session.nr_probe; i++) { 343 for (i = 0; i < session.nr_probe; i++) {
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 8f0568849691..c971e81e9cbf 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -119,14 +119,14 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
119 char c, nc = 0; 119 char c, nc = 0;
120 /* 120 /*
121 * <Syntax> 121 * <Syntax>
122 * perf probe [EVENT=]SRC:LN 122 * perf probe [EVENT=]SRC[:LN|;PTN]
123 * perf probe [EVENT=]FUNC[+OFFS|%return][@SRC] 123 * perf probe [EVENT=]FUNC[@SRC][+OFFS|%return|:LN|;PAT]
124 * 124 *
125 * TODO:Group name support 125 * TODO:Group name support
126 */ 126 */
127 127
128 ptr = strchr(arg, '='); 128 ptr = strpbrk(arg, ";=@+%");
129 if (ptr) { /* Event name */ 129 if (ptr && *ptr == '=') { /* Event name */
130 *ptr = '\0'; 130 *ptr = '\0';
131 tmp = ptr + 1; 131 tmp = ptr + 1;
132 ptr = strchr(arg, ':'); 132 ptr = strchr(arg, ':');
@@ -139,7 +139,7 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
139 arg = tmp; 139 arg = tmp;
140 } 140 }
141 141
142 ptr = strpbrk(arg, ":+@%"); 142 ptr = strpbrk(arg, ";:+@%");
143 if (ptr) { 143 if (ptr) {
144 nc = *ptr; 144 nc = *ptr;
145 *ptr++ = '\0'; 145 *ptr++ = '\0';
@@ -156,7 +156,11 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
156 while (ptr) { 156 while (ptr) {
157 arg = ptr; 157 arg = ptr;
158 c = nc; 158 c = nc;
159 ptr = strpbrk(arg, ":+@%"); 159 if (c == ';') { /* Lazy pattern must be the last part */
160 pp->lazy_line = strdup(arg);
161 break;
162 }
163 ptr = strpbrk(arg, ";:+@%");
160 if (ptr) { 164 if (ptr) {
161 nc = *ptr; 165 nc = *ptr;
162 *ptr++ = '\0'; 166 *ptr++ = '\0';
@@ -165,13 +169,13 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
165 case ':': /* Line number */ 169 case ':': /* Line number */
166 pp->line = strtoul(arg, &tmp, 0); 170 pp->line = strtoul(arg, &tmp, 0);
167 if (*tmp != '\0') 171 if (*tmp != '\0')
168 semantic_error("There is non-digit charactor" 172 semantic_error("There is non-digit char"
169 " in line number."); 173 " in line number.");
170 break; 174 break;
171 case '+': /* Byte offset from a symbol */ 175 case '+': /* Byte offset from a symbol */
172 pp->offset = strtoul(arg, &tmp, 0); 176 pp->offset = strtoul(arg, &tmp, 0);
173 if (*tmp != '\0') 177 if (*tmp != '\0')
174 semantic_error("There is non-digit charactor" 178 semantic_error("There is non-digit character"
175 " in offset."); 179 " in offset.");
176 break; 180 break;
177 case '@': /* File name */ 181 case '@': /* File name */
@@ -179,9 +183,6 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
179 semantic_error("SRC@SRC is not allowed."); 183 semantic_error("SRC@SRC is not allowed.");
180 pp->file = strdup(arg); 184 pp->file = strdup(arg);
181 DIE_IF(pp->file == NULL); 185 DIE_IF(pp->file == NULL);
182 if (ptr)
183 semantic_error("@SRC must be the last "
184 "option.");
185 break; 186 break;
186 case '%': /* Probe places */ 187 case '%': /* Probe places */
187 if (strcmp(arg, "return") == 0) { 188 if (strcmp(arg, "return") == 0) {
@@ -196,11 +197,18 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
196 } 197 }
197 198
198 /* Exclusion check */ 199 /* Exclusion check */
200 if (pp->lazy_line && pp->line)
201 semantic_error("Lazy pattern can't be used with line number.");
202
203 if (pp->lazy_line && pp->offset)
204 semantic_error("Lazy pattern can't be used with offset.");
205
199 if (pp->line && pp->offset) 206 if (pp->line && pp->offset)
200 semantic_error("Offset can't be used with line number."); 207 semantic_error("Offset can't be used with line number.");
201 208
202 if (!pp->line && pp->file && !pp->function) 209 if (!pp->line && !pp->lazy_line && pp->file && !pp->function)
203 semantic_error("File always requires line number."); 210 semantic_error("File always requires line number or "
211 "lazy pattern.");
204 212
205 if (pp->offset && !pp->function) 213 if (pp->offset && !pp->function)
206 semantic_error("Offset requires an entry function."); 214 semantic_error("Offset requires an entry function.");
@@ -208,11 +216,13 @@ static void parse_perf_probe_probepoint(char *arg, struct probe_point *pp)
208 if (pp->retprobe && !pp->function) 216 if (pp->retprobe && !pp->function)
209 semantic_error("Return probe requires an entry function."); 217 semantic_error("Return probe requires an entry function.");
210 218
211 if ((pp->offset || pp->line) && pp->retprobe) 219 if ((pp->offset || pp->line || pp->lazy_line) && pp->retprobe)
212 semantic_error("Offset/Line can't be used with return probe."); 220 semantic_error("Offset/Line/Lazy pattern can't be used with "
221 "return probe.");
213 222
214 pr_debug("symbol:%s file:%s line:%d offset:%d, return:%d\n", 223 pr_debug("symbol:%s file:%s line:%d offset:%d return:%d lazy:%s\n",
215 pp->function, pp->file, pp->line, pp->offset, pp->retprobe); 224 pp->function, pp->file, pp->line, pp->offset, pp->retprobe,
225 pp->lazy_line);
216} 226}
217 227
218/* Parse perf-probe event definition */ 228/* Parse perf-probe event definition */
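
The parser above is driven entirely by strpbrk(): each separator found classifies the token that follows it, and ';' consumes the remainder of the string as the lazy pattern. A compilable toy version of the loop, printing tokens instead of filling struct probe_point:

    #include <stdio.h>
    #include <string.h>

    static void parse_point(char *arg)
    {
        char *next, c, nc = '\0';

        next = strpbrk(arg, ";:+@%");
        if (next) {
            nc = *next;
            *next++ = '\0';
        }
        printf("function: %s\n", arg);
        while (next) {
            arg = next;
            c = nc;
            if (c == ';') {     /* lazy pattern takes the rest */
                printf("lazy pattern: %s\n", arg);
                break;
            }
            next = strpbrk(arg, ";:+@%");
            if (next) {
                nc = *next;
                *next++ = '\0';
            }
            printf("part '%c': %s\n", c, arg);
        }
    }

    int main(void)
    {
        char buf[] = "schedule@kernel/sched.c:12";

        parse_point(buf);
        return 0;
    }
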
@@ -458,6 +468,8 @@ static void clear_probe_point(struct probe_point *pp)
458 free(pp->function); 468 free(pp->function);
459 if (pp->file) 469 if (pp->file)
460 free(pp->file); 470 free(pp->file);
471 if (pp->lazy_line)
472 free(pp->lazy_line);
461 for (i = 0; i < pp->nr_args; i++) 473 for (i = 0; i < pp->nr_args; i++)
462 free(pp->args[i]); 474 free(pp->args[i]);
463 if (pp->args) 475 if (pp->args)
@@ -719,6 +731,7 @@ void del_trace_kprobe_events(struct strlist *dellist)
719} 731}
720 732
721#define LINEBUF_SIZE 256 733#define LINEBUF_SIZE 256
734#define NR_ADDITIONAL_LINES 2
722 735
723static void show_one_line(FILE *fp, unsigned int l, bool skip, bool show_num) 736static void show_one_line(FILE *fp, unsigned int l, bool skip, bool show_num)
724{ 737{
@@ -779,5 +792,11 @@ void show_line_range(struct line_range *lr)
779 show_one_line(fp, (l++) - lr->offset, false, false); 792 show_one_line(fp, (l++) - lr->offset, false, false);
780 show_one_line(fp, (l++) - lr->offset, false, true); 793 show_one_line(fp, (l++) - lr->offset, false, true);
781 } 794 }
795
796 if (lr->end == INT_MAX)
797 lr->end = l + NR_ADDITIONAL_LINES;
798 while (l < lr->end && !feof(fp))
799 show_one_line(fp, (l++) - lr->offset, false, false);
800
782 fclose(fp); 801 fclose(fp);
783} 802}
diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
index 1b2124d12f68..e77dc886760e 100644
--- a/tools/perf/util/probe-finder.c
+++ b/tools/perf/util/probe-finder.c
@@ -32,21 +32,13 @@
32#include <stdarg.h> 32#include <stdarg.h>
33#include <ctype.h> 33#include <ctype.h>
34 34
35#include "string.h"
35#include "event.h" 36#include "event.h"
36#include "debug.h" 37#include "debug.h"
37#include "util.h" 38#include "util.h"
38#include "probe-finder.h" 39#include "probe-finder.h"
39 40
40 41
41/* Dwarf_Die Linkage to parent Die */
42struct die_link {
43 struct die_link *parent; /* Parent die */
44 Dwarf_Die die; /* Current die */
45};
46
47static Dwarf_Debug __dw_debug;
48static Dwarf_Error __dw_error;
49
50/* 42/*
51 * Generic dwarf analysis helpers 43 * Generic dwarf analysis helpers
52 */ 44 */
@@ -113,281 +105,190 @@ static int strtailcmp(const char *s1, const char *s2)
113 return 0; 105 return 0;
114} 106}
115 107
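
strtailcmp(), named in the hunk header above, matches file names from the tail so that a short name like 'string.c' matches a full build path. A reconstruction of the comparison (a sketch matching the described behaviour, not quoted from the patch):

    #include <string.h>

    /* Compare two strings from their tails: "util/string.c" matches
     * "/home/user/linux/tools/perf/util/string.c". */
    static int strtailcmp(const char *s1, const char *s2)
    {
        int i1 = strlen(s1);
        int i2 = strlen(s2);

        while (--i1 >= 0 && --i2 >= 0)
            if (s1[i1] != s2[i2])
                return s1[i1] - s2[i2];
        return 0;
    }
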
116/* Find the fileno of the target file. */ 108/* Line number list operations */
117static Dwarf_Unsigned cu_find_fileno(Dwarf_Die cu_die, const char *fname)
118{
119 Dwarf_Signed cnt, i;
120 Dwarf_Unsigned found = 0;
121 char **srcs;
122 int ret;
123 109
124 if (!fname) 110/* Add a line to line number list */
125 return 0; 111static void line_list__add_line(struct list_head *head, unsigned int line)
112{
113 struct line_node *ln;
114 struct list_head *p;
126 115
127 ret = dwarf_srcfiles(cu_die, &srcs, &cnt, &__dw_error); 116 /* Reverse search, because new line will be the last one */
128 if (ret == DW_DLV_OK) { 117 list_for_each_entry_reverse(ln, head, list) {
129 for (i = 0; i < cnt && !found; i++) { 118 if (ln->line < line) {
130 if (strtailcmp(srcs[i], fname) == 0) 119 p = &ln->list;
131 found = i + 1; 120 goto found;
132 dwarf_dealloc(__dw_debug, srcs[i], DW_DLA_STRING); 121 } else if (ln->line == line) /* Already exist */
133 } 122 return ;
134 for (; i < cnt; i++)
135 dwarf_dealloc(__dw_debug, srcs[i], DW_DLA_STRING);
136 dwarf_dealloc(__dw_debug, srcs, DW_DLA_LIST);
137 } 123 }
138 if (found) 124 /* List is empty, or the smallest entry */
139 pr_debug("found fno: %d\n", (int)found); 125 p = head;
140 return found; 126found:
127 pr_debug("line list: add a line %u\n", line);
128 ln = zalloc(sizeof(struct line_node));
129 DIE_IF(ln == NULL);
130 ln->line = line;
131 INIT_LIST_HEAD(&ln->list);
132 list_add(&ln->list, p);
141} 133}
142 134
143static int cu_get_filename(Dwarf_Die cu_die, Dwarf_Unsigned fno, char **buf) 135/* Check if the line is in the line number list */
136static int line_list__has_line(struct list_head *head, unsigned int line)
144{ 137{
145 Dwarf_Signed cnt, i; 138 struct line_node *ln;
146 char **srcs; 139
147 int ret = 0; 140 /* Simple forward search; the list is kept sorted */
148 141 list_for_each_entry(ln, head, list)
149 if (!buf || !fno) 142 if (ln->line == line)
150 return -EINVAL; 143 return 1;
151 144
152 ret = dwarf_srcfiles(cu_die, &srcs, &cnt, &__dw_error); 145 return 0;
153 if (ret == DW_DLV_OK) {
154 if ((Dwarf_Unsigned)cnt > fno - 1) {
155 *buf = strdup(srcs[fno - 1]);
156 ret = 0;
157 pr_debug("found filename: %s\n", *buf);
158 } else
159 ret = -ENOENT;
160 for (i = 0; i < cnt; i++)
161 dwarf_dealloc(__dw_debug, srcs[i], DW_DLA_STRING);
162 dwarf_dealloc(__dw_debug, srcs, DW_DLA_LIST);
163 } else
164 ret = -EINVAL;
165 return ret;
166} 146}
167 147
168/* Compare diename and tname */ 148/* Init line number list */
169static int die_compare_name(Dwarf_Die dw_die, const char *tname) 149static void line_list__init(struct list_head *head)
170{ 150{
171 char *name; 151 INIT_LIST_HEAD(head);
172 int ret;
173 ret = dwarf_diename(dw_die, &name, &__dw_error);
174 DIE_IF(ret == DW_DLV_ERROR);
175 if (ret == DW_DLV_OK) {
176 ret = strcmp(tname, name);
177 dwarf_dealloc(__dw_debug, name, DW_DLA_STRING);
178 } else
179 ret = -1;
180 return ret;
181} 152}
182 153
183/* Check the address is in the subprogram(function). */ 154/* Free line number list */
184static int die_within_subprogram(Dwarf_Die sp_die, Dwarf_Addr addr, 155static void line_list__free(struct list_head *head)
185 Dwarf_Signed *offs)
186{ 156{
187 Dwarf_Addr lopc, hipc; 157 struct line_node *ln;
188 int ret; 158 while (!list_empty(head)) {
189 159 ln = list_first_entry(head, struct line_node, list);
190 /* TODO: check ranges */ 160 list_del(&ln->list);
191 ret = dwarf_lowpc(sp_die, &lopc, &__dw_error); 161 free(ln);
192 DIE_IF(ret == DW_DLV_ERROR); 162 }
193 if (ret == DW_DLV_NO_ENTRY)
194 return 0;
195 ret = dwarf_highpc(sp_die, &hipc, &__dw_error);
196 DIE_IF(ret != DW_DLV_OK);
197 if (lopc <= addr && addr < hipc) {
198 *offs = addr - lopc;
199 return 1;
200 } else
201 return 0;
202} 163}
203 164
204/* Check the die is inlined function */ 165/* Dwarf wrappers */
205static Dwarf_Bool die_inlined_subprogram(Dwarf_Die dw_die) 166
167/* Find the realpath of the target file. */
168static const char *cu_find_realpath(Dwarf_Die *cu_die, const char *fname)
206{ 169{
207 /* TODO: check strictly */ 170 Dwarf_Files *files;
208 Dwarf_Bool inl; 171 size_t nfiles, i;
172 const char *src;
209 int ret; 173 int ret;
210 174
211 ret = dwarf_hasattr(dw_die, DW_AT_inline, &inl, &__dw_error); 175 if (!fname)
212 DIE_IF(ret == DW_DLV_ERROR); 176 return NULL;
213 return inl;
214}
215 177
216/* Get the offset of abstract_origin */ 178 ret = dwarf_getsrcfiles(cu_die, &files, &nfiles);
217static Dwarf_Off die_get_abstract_origin(Dwarf_Die dw_die) 179 if (ret != 0)
218{ 180 return NULL;
219 Dwarf_Attribute attr;
220 Dwarf_Off cu_offs;
221 int ret;
222 181
223 ret = dwarf_attr(dw_die, DW_AT_abstract_origin, &attr, &__dw_error); 182 for (i = 0; i < nfiles; i++) {
224 DIE_IF(ret != DW_DLV_OK); 183 src = dwarf_filesrc(files, i, NULL, NULL);
225 ret = dwarf_formref(attr, &cu_offs, &__dw_error); 184 if (strtailcmp(src, fname) == 0)
226 DIE_IF(ret != DW_DLV_OK); 185 break;
227 dwarf_dealloc(__dw_debug, attr, DW_DLA_ATTR); 186 }
228 return cu_offs; 187 return src;
229} 188}
230 189
231/* Get entry pc(or low pc, 1st entry of ranges) of the die */ 190struct __addr_die_search_param {
232static Dwarf_Addr die_get_entrypc(Dwarf_Die dw_die) 191 Dwarf_Addr addr;
192 Dwarf_Die *die_mem;
193};
194
195static int __die_search_func_cb(Dwarf_Die *fn_die, void *data)
233{ 196{
234 Dwarf_Attribute attr; 197 struct __addr_die_search_param *ad = data;
235 Dwarf_Addr addr;
236 Dwarf_Off offs;
237 Dwarf_Ranges *ranges;
238 Dwarf_Signed cnt;
239 int ret;
240 198
241 /* Try to get entry pc */ 199 if (dwarf_tag(fn_die) == DW_TAG_subprogram &&
242 ret = dwarf_attr(dw_die, DW_AT_entry_pc, &attr, &__dw_error); 200 dwarf_haspc(fn_die, ad->addr)) {
243 DIE_IF(ret == DW_DLV_ERROR); 201 memcpy(ad->die_mem, fn_die, sizeof(Dwarf_Die));
244 if (ret == DW_DLV_OK) { 202 return DWARF_CB_ABORT;
245 ret = dwarf_formaddr(attr, &addr, &__dw_error);
246 DIE_IF(ret != DW_DLV_OK);
247 dwarf_dealloc(__dw_debug, attr, DW_DLA_ATTR);
248 return addr;
249 } 203 }
204 return DWARF_CB_OK;
205}
250 206
251 /* Try to get low pc */ 207/* Search a real subprogram including this line. */
252 ret = dwarf_lowpc(dw_die, &addr, &__dw_error); 208static Dwarf_Die *die_get_real_subprogram(Dwarf_Die *cu_die, Dwarf_Addr addr,
253 DIE_IF(ret == DW_DLV_ERROR); 209 Dwarf_Die *die_mem)
254 if (ret == DW_DLV_OK) 210{
255 return addr; 211 struct __addr_die_search_param ad;
256 212 ad.addr = addr;
257 /* Try to get ranges */ 213 ad.die_mem = die_mem;
258 ret = dwarf_attr(dw_die, DW_AT_ranges, &attr, &__dw_error); 214 /* dwarf_getscopes can't find subprogram. */
259 DIE_IF(ret != DW_DLV_OK); 215 if (!dwarf_getfuncs(cu_die, __die_search_func_cb, &ad, 0))
260 ret = dwarf_formref(attr, &offs, &__dw_error); 216 return NULL;
261 DIE_IF(ret != DW_DLV_OK); 217 else
262 ret = dwarf_get_ranges(__dw_debug, offs, &ranges, &cnt, NULL, 218 return die_mem;
263 &__dw_error);
264 DIE_IF(ret != DW_DLV_OK);
265 addr = ranges[0].dwr_addr1;
266 dwarf_ranges_dealloc(__dw_debug, ranges, cnt);
267 return addr;
268} 219}
269 220
270/* 221/* Similar to dwarf_getfuncs, but returns inlined_subroutine if exists. */
271 * Search a Die from Die tree. 222static Dwarf_Die *die_get_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
272 * Note: cur_link->die should be deallocated in this function. 223 Dwarf_Die *die_mem)
273 */
274static int __search_die_tree(struct die_link *cur_link,
275 int (*die_cb)(struct die_link *, void *),
276 void *data)
277{ 224{
278 Dwarf_Die new_die; 225 Dwarf_Die child_die;
279 struct die_link new_link;
280 int ret; 226 int ret;
281 227
282 if (!die_cb) 228 ret = dwarf_child(sp_die, die_mem);
283 return 0; 229 if (ret != 0)
284 230 return NULL;
285 /* Check current die */
286 while (!(ret = die_cb(cur_link, data))) {
287 /* Check child die */
288 ret = dwarf_child(cur_link->die, &new_die, &__dw_error);
289 DIE_IF(ret == DW_DLV_ERROR);
290 if (ret == DW_DLV_OK) {
291 new_link.parent = cur_link;
292 new_link.die = new_die;
293 ret = __search_die_tree(&new_link, die_cb, data);
294 if (ret)
295 break;
296 }
297 231
298 /* Move to next sibling */ 232 do {
299 ret = dwarf_siblingof(__dw_debug, cur_link->die, &new_die, 233 if (dwarf_tag(die_mem) == DW_TAG_inlined_subroutine &&
300 &__dw_error); 234 dwarf_haspc(die_mem, addr))
301 DIE_IF(ret == DW_DLV_ERROR); 235 return die_mem;
302 dwarf_dealloc(__dw_debug, cur_link->die, DW_DLA_DIE);
303 cur_link->die = new_die;
304 if (ret == DW_DLV_NO_ENTRY)
305 return 0;
306 }
307 dwarf_dealloc(__dw_debug, cur_link->die, DW_DLA_DIE);
308 return ret;
309}
310 236
311/* Search a die in its children's die tree */ 237 if (die_get_inlinefunc(die_mem, addr, &child_die)) {
312static int search_die_from_children(Dwarf_Die parent_die, 238 memcpy(die_mem, &child_die, sizeof(Dwarf_Die));
313 int (*die_cb)(struct die_link *, void *), 239 return die_mem;
314 void *data) 240 }
315{ 241 } while (dwarf_siblingof(die_mem, die_mem) == 0);
316 struct die_link new_link;
317 int ret;
318 242
319 new_link.parent = NULL; 243 return NULL;
320 ret = dwarf_child(parent_die, &new_link.die, &__dw_error);
321 DIE_IF(ret == DW_DLV_ERROR);
322 if (ret == DW_DLV_OK)
323 return __search_die_tree(&new_link, die_cb, data);
324 else
325 return 0;
326} 244}
327 245
328/* Find a locdesc corresponding to the address */ 246/* Compare diename and tname */
329static int attr_get_locdesc(Dwarf_Attribute attr, Dwarf_Locdesc *desc, 247static bool die_compare_name(Dwarf_Die *dw_die, const char *tname)
330 Dwarf_Addr addr)
331{ 248{
332 Dwarf_Signed lcnt; 249 const char *name;
333 Dwarf_Locdesc **llbuf; 250 name = dwarf_diename(dw_die);
334 int ret, i; 251 DIE_IF(name == NULL);
335 252 return strcmp(tname, name);
336 ret = dwarf_loclist_n(attr, &llbuf, &lcnt, &__dw_error);
337 DIE_IF(ret != DW_DLV_OK);
338 ret = DW_DLV_NO_ENTRY;
339 for (i = 0; i < lcnt; ++i) {
340 if (llbuf[i]->ld_lopc <= addr &&
341 llbuf[i]->ld_hipc > addr) {
342 memcpy(desc, llbuf[i], sizeof(Dwarf_Locdesc));
343 desc->ld_s =
344 malloc(sizeof(Dwarf_Loc) * llbuf[i]->ld_cents);
345 DIE_IF(desc->ld_s == NULL);
346 memcpy(desc->ld_s, llbuf[i]->ld_s,
347 sizeof(Dwarf_Loc) * llbuf[i]->ld_cents);
348 ret = DW_DLV_OK;
349 break;
350 }
351 dwarf_dealloc(__dw_debug, llbuf[i]->ld_s, DW_DLA_LOC_BLOCK);
352 dwarf_dealloc(__dw_debug, llbuf[i], DW_DLA_LOCDESC);
353 }
354 /* Releasing loop */
355 for (; i < lcnt; ++i) {
356 dwarf_dealloc(__dw_debug, llbuf[i]->ld_s, DW_DLA_LOC_BLOCK);
357 dwarf_dealloc(__dw_debug, llbuf[i], DW_DLA_LOCDESC);
358 }
359 dwarf_dealloc(__dw_debug, llbuf, DW_DLA_LIST);
360 return ret;
361} 253}
362 254
363/* Get decl_file attribute value (file number) */ 255/* Get entry pc(or low pc, 1st entry of ranges) of the die */
364static Dwarf_Unsigned die_get_decl_file(Dwarf_Die sp_die) 256static Dwarf_Addr die_get_entrypc(Dwarf_Die *dw_die)
365{ 257{
366 Dwarf_Attribute attr; 258 Dwarf_Addr epc;
367 Dwarf_Unsigned fno;
368 int ret; 259 int ret;
369 260
370 ret = dwarf_attr(sp_die, DW_AT_decl_file, &attr, &__dw_error); 261 ret = dwarf_entrypc(dw_die, &epc);
371 DIE_IF(ret != DW_DLV_OK); 262 DIE_IF(ret == -1);
372 dwarf_formudata(attr, &fno, &__dw_error); 263 return epc;
373 DIE_IF(ret != DW_DLV_OK);
374 dwarf_dealloc(__dw_debug, attr, DW_DLA_ATTR);
375 return fno;
376} 264}
377 265
378/* Get decl_line attribute value (line number) */ 266/* Get a variable die */
379static Dwarf_Unsigned die_get_decl_line(Dwarf_Die sp_die) 267static Dwarf_Die *die_find_variable(Dwarf_Die *sp_die, const char *name,
268 Dwarf_Die *die_mem)
380{ 269{
381 Dwarf_Attribute attr; 270 Dwarf_Die child_die;
382 Dwarf_Unsigned lno; 271 int tag;
383 int ret; 272 int ret;
384 273
385 ret = dwarf_attr(sp_die, DW_AT_decl_line, &attr, &__dw_error); 274 ret = dwarf_child(sp_die, die_mem);
386 DIE_IF(ret != DW_DLV_OK); 275 if (ret != 0)
387 dwarf_formudata(attr, &lno, &__dw_error); 276 return NULL;
388 DIE_IF(ret != DW_DLV_OK); 277
389 dwarf_dealloc(__dw_debug, attr, DW_DLA_ATTR); 278 do {
390 return lno; 279 tag = dwarf_tag(die_mem);
280 if ((tag == DW_TAG_formal_parameter ||
281 tag == DW_TAG_variable) &&
282 (die_compare_name(die_mem, name) == 0))
283 return die_mem;
284
285 if (die_find_variable(die_mem, name, &child_die)) {
286 memcpy(die_mem, &child_die, sizeof(Dwarf_Die));
287 return die_mem;
288 }
289 } while (dwarf_siblingof(die_mem, die_mem) == 0);
290
291 return NULL;
391} 292}
392 293
393/* 294/*
@@ -395,47 +296,45 @@ static Dwarf_Unsigned die_get_decl_line(Dwarf_Die sp_die)
395 */ 296 */
396 297
397/* Show a location */ 298/* Show a location */
398static void show_location(Dwarf_Loc *loc, struct probe_finder *pf) 299static void show_location(Dwarf_Op *op, struct probe_finder *pf)
399{ 300{
400 Dwarf_Small op; 301 unsigned int regn;
401 Dwarf_Unsigned regn; 302 Dwarf_Word offs = 0;
402 Dwarf_Signed offs;
403 int deref = 0, ret; 303 int deref = 0, ret;
404 const char *regs; 304 const char *regs;
405 305
406 op = loc->lr_atom; 306 /* TODO: support CFA */
407
408 /* If this is based on frame buffer, set the offset */ 307 /* If this is based on frame buffer, set the offset */
409 if (op == DW_OP_fbreg) { 308 if (op->atom == DW_OP_fbreg) {
309 if (pf->fb_ops == NULL)
310 die("The attribute of frame base is not supported.\n");
410 deref = 1; 311 deref = 1;
411 offs = (Dwarf_Signed)loc->lr_number; 312 offs = op->number;
412 op = pf->fbloc.ld_s[0].lr_atom; 313 op = &pf->fb_ops[0];
413 loc = &pf->fbloc.ld_s[0]; 314 }
414 } else
415 offs = 0;
416 315
417 if (op >= DW_OP_breg0 && op <= DW_OP_breg31) { 316 if (op->atom >= DW_OP_breg0 && op->atom <= DW_OP_breg31) {
418 regn = op - DW_OP_breg0; 317 regn = op->atom - DW_OP_breg0;
419 offs += (Dwarf_Signed)loc->lr_number; 318 offs += op->number;
420 deref = 1; 319 deref = 1;
421 } else if (op >= DW_OP_reg0 && op <= DW_OP_reg31) { 320 } else if (op->atom >= DW_OP_reg0 && op->atom <= DW_OP_reg31) {
422 regn = op - DW_OP_reg0; 321 regn = op->atom - DW_OP_reg0;
423 } else if (op == DW_OP_bregx) { 322 } else if (op->atom == DW_OP_bregx) {
424 regn = loc->lr_number; 323 regn = op->number;
425 offs += (Dwarf_Signed)loc->lr_number2; 324 offs += op->number2;
426 deref = 1; 325 deref = 1;
427 } else if (op == DW_OP_regx) { 326 } else if (op->atom == DW_OP_regx) {
428 regn = loc->lr_number; 327 regn = op->number;
429 } else 328 } else
430 die("Dwarf_OP %d is not supported.", op); 329 die("DW_OP %d is not supported.", op->atom);
431 330
432 regs = get_arch_regstr(regn); 331 regs = get_arch_regstr(regn);
433 if (!regs) 332 if (!regs)
434 die("%lld exceeds max register number.", regn); 333 die("%u exceeds max register number.", regn);
435 334
436 if (deref) 335 if (deref)
437 ret = snprintf(pf->buf, pf->len, 336 ret = snprintf(pf->buf, pf->len, " %s=+%ju(%s)",
438 " %s=%+lld(%s)", pf->var, offs, regs); 337 pf->var, (uintmax_t)offs, regs);
439 else 338 else
440 ret = snprintf(pf->buf, pf->len, " %s=%s", pf->var, regs); 339 ret = snprintf(pf->buf, pf->len, " %s=%s", pf->var, regs);
441 DIE_IF(ret < 0); 340 DIE_IF(ret < 0);
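show_location() above reduces a DWARF location expression to either "var=REG" or "var=+OFFS(REG)". The register/offset cases it accepts can be isolated into a stand-alone decoder; struct loc_op below is a hypothetical stand-in for the Dwarf_Op fields the code reads, and the DW_OP_* constants come from <dwarf.h>:

#include <dwarf.h>

struct loc_op { int atom; long number, number2; };

/* Decode one op into register number + offset; returns 1 if deref. */
static int decode_loc(const struct loc_op *op, unsigned int *regn, long *offs)
{
	if (op->atom >= DW_OP_breg0 && op->atom <= DW_OP_breg31) {
		*regn = op->atom - DW_OP_breg0;
		*offs = op->number;
		return 1;		/* register + offset, dereferenced */
	}
	if (op->atom >= DW_OP_reg0 && op->atom <= DW_OP_reg31) {
		*regn = op->atom - DW_OP_reg0;
		*offs = 0;
		return 0;		/* value lives in the register itself */
	}
	if (op->atom == DW_OP_bregx) {
		*regn = op->number;
		*offs = op->number2;
		return 1;
	}
	if (op->atom == DW_OP_regx) {
		*regn = op->number;
		*offs = 0;
		return 0;
	}
	return -1;			/* unsupported, e.g. DW_OP_fbreg chains */
}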
@@ -443,52 +342,37 @@ static void show_location(Dwarf_Loc *loc, struct probe_finder *pf)
443} 342}
444 343
445/* Show a variable in kprobe event format */ 344
446static void show_variable(Dwarf_Die vr_die, struct probe_finder *pf) 345static void show_variable(Dwarf_Die *vr_die, struct probe_finder *pf)
447{ 346{
448 Dwarf_Attribute attr; 347 Dwarf_Attribute attr;
449 Dwarf_Locdesc ld; 348 Dwarf_Op *expr;
349 size_t nexpr;
450 int ret; 350 int ret;
451 351
452 ret = dwarf_attr(vr_die, DW_AT_location, &attr, &__dw_error); 352 if (dwarf_attr(vr_die, DW_AT_location, &attr) == NULL)
453 if (ret != DW_DLV_OK)
454 goto error; 353 goto error;
455 ret = attr_get_locdesc(attr, &ld, (pf->addr - pf->cu_base)); 354 /* TODO: handle more than 1 exprs */
456 if (ret != DW_DLV_OK) 355 ret = dwarf_getlocation_addr(&attr, (pf->addr - pf->cu_base),
356 &expr, &nexpr, 1);
357 if (ret <= 0 || nexpr == 0)
457 goto error; 358 goto error;
458 /* TODO? */ 359
459 DIE_IF(ld.ld_cents != 1); 360 show_location(expr, pf);
460 show_location(&ld.ld_s[0], pf); 361 /* *expr will be cached in libdw. Don't free it. */
461 free(ld.ld_s);
462 dwarf_dealloc(__dw_debug, attr, DW_DLA_ATTR);
463 return ; 362 return ;
464error: 363error:
364 /* TODO: Support const_value */
465 die("Failed to find the location of %s at this address.\n" 365 die("Failed to find the location of %s at this address.\n"
466 " Perhaps, it has been optimized out.", pf->var); 366 " Perhaps, it has been optimized out.", pf->var);
467} 367}
468 368
469static int variable_callback(struct die_link *dlink, void *data)
470{
471 struct probe_finder *pf = (struct probe_finder *)data;
472 Dwarf_Half tag;
473 int ret;
474
475 ret = dwarf_tag(dlink->die, &tag, &__dw_error);
476 DIE_IF(ret == DW_DLV_ERROR);
477 if ((tag == DW_TAG_formal_parameter ||
478 tag == DW_TAG_variable) &&
479 (die_compare_name(dlink->die, pf->var) == 0)) {
480 show_variable(dlink->die, pf);
481 return 1;
482 }
483 /* TODO: Support struct members and arrays */
484 return 0;
485}
486
487/* Find a variable in a subprogram die */ 369/* Find a variable in a subprogram die */
488static void find_variable(Dwarf_Die sp_die, struct probe_finder *pf) 370static void find_variable(Dwarf_Die *sp_die, struct probe_finder *pf)
489{ 371{
490 int ret; 372 int ret;
373 Dwarf_Die vr_die;
491 374
375 /* TODO: Support struct members and arrays */
492 if (!is_c_varname(pf->var)) { 376 if (!is_c_varname(pf->var)) {
493 /* Output raw parameters */ 377 /* Output raw parameters */
494 ret = snprintf(pf->buf, pf->len, " %s", pf->var); 378 ret = snprintf(pf->buf, pf->len, " %s", pf->var);
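The show_variable() rewrite above is the core libdw sequence for locating a variable: fetch DW_AT_location, then ask dwarf_getlocation_addr() for the expression covering a single PC. Isolated as a sketch (assumes elfutils headers and a valid DIE/address; the returned ops are cached by libdw and must not be freed):

#include <elfutils/libdw.h>
#include <dwarf.h>

/* Return the first location op for 'vr_die' at 'addr', or NULL. */
static Dwarf_Op *first_location_op(Dwarf_Die *vr_die, Dwarf_Addr addr)
{
	Dwarf_Attribute attr;
	Dwarf_Op *expr;
	size_t nexpr;

	if (dwarf_attr(vr_die, DW_AT_location, &attr) == NULL)
		return NULL;
	/* expr points into libdw's cache; do not free it. */
	if (dwarf_getlocation_addr(&attr, addr, &expr, &nexpr, 1) <= 0 ||
	    nexpr == 0)
		return NULL;
	return expr;
}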
@@ -499,58 +383,51 @@ static void find_variable(Dwarf_Die sp_die, struct probe_finder *pf)
499 383
500 pr_debug("Searching '%s' variable in context.\n", pf->var); 384 pr_debug("Searching '%s' variable in context.\n", pf->var);
501 /* Search child die for local variables and parameters. */ 385 /* Search child die for local variables and parameters. */
502 ret = search_die_from_children(sp_die, variable_callback, pf); 386 if (!die_find_variable(sp_die, pf->var, &vr_die))
503 if (!ret)
504 die("Failed to find '%s' in this function.", pf->var); 387 die("Failed to find '%s' in this function.", pf->var);
505}
506
507/* Get a frame base on the address */
508static void get_current_frame_base(Dwarf_Die sp_die, struct probe_finder *pf)
509{
510 Dwarf_Attribute attr;
511 int ret;
512 388
513 ret = dwarf_attr(sp_die, DW_AT_frame_base, &attr, &__dw_error); 389 show_variable(&vr_die, pf);
514 DIE_IF(ret != DW_DLV_OK);
515 ret = attr_get_locdesc(attr, &pf->fbloc, (pf->addr - pf->cu_base));
516 DIE_IF(ret != DW_DLV_OK);
517 dwarf_dealloc(__dw_debug, attr, DW_DLA_ATTR);
518}
519
520static void free_current_frame_base(struct probe_finder *pf)
521{
522 free(pf->fbloc.ld_s);
523 memset(&pf->fbloc, 0, sizeof(Dwarf_Locdesc));
524} 390}
525 391
526/* Show a probe point to output buffer */ 392/* Show a probe point to output buffer */
527static void show_probepoint(Dwarf_Die sp_die, Dwarf_Signed offs, 393static void show_probe_point(Dwarf_Die *sp_die, struct probe_finder *pf)
528 struct probe_finder *pf)
529{ 394{
530 struct probe_point *pp = pf->pp; 395 struct probe_point *pp = pf->pp;
531 char *name; 396 Dwarf_Addr eaddr;
397 Dwarf_Die die_mem;
398 const char *name;
532 char tmp[MAX_PROBE_BUFFER]; 399 char tmp[MAX_PROBE_BUFFER];
533 int ret, i, len; 400 int ret, i, len;
401 Dwarf_Attribute fb_attr;
402 size_t nops;
403
404 /* If no real subprogram, find a real one */
405 if (!sp_die || dwarf_tag(sp_die) != DW_TAG_subprogram) {
406 sp_die = die_get_real_subprogram(&pf->cu_die,
407 pf->addr, &die_mem);
408 if (!sp_die)
409 die("Probe point is not found in subprograms.");
410 }
534 411
535 /* Output name of probe point */ 412 /* Output name of probe point */
536 ret = dwarf_diename(sp_die, &name, &__dw_error); 413 name = dwarf_diename(sp_die);
537 DIE_IF(ret == DW_DLV_ERROR); 414 if (name) {
538 if (ret == DW_DLV_OK) { 415 dwarf_entrypc(sp_die, &eaddr);
539 ret = snprintf(tmp, MAX_PROBE_BUFFER, "%s+%u", name, 416 ret = snprintf(tmp, MAX_PROBE_BUFFER, "%s+%lu", name,
540 (unsigned int)offs); 417 (unsigned long)(pf->addr - eaddr));
541 /* Copy the function name if possible */ 418 /* Copy the function name if possible */
542 if (!pp->function) { 419 if (!pp->function) {
543 pp->function = strdup(name); 420 pp->function = strdup(name);
544 pp->offset = offs; 421 pp->offset = (size_t)(pf->addr - eaddr);
545 } 422 }
546 dwarf_dealloc(__dw_debug, name, DW_DLA_STRING);
547 } else { 423 } else {
548 /* This function has no name. */ 424 /* This function has no name. */
549 ret = snprintf(tmp, MAX_PROBE_BUFFER, "0x%llx", pf->addr); 425 ret = snprintf(tmp, MAX_PROBE_BUFFER, "0x%jx",
426 (uintmax_t)pf->addr);
550 if (!pp->function) { 427 if (!pp->function) {
551 /* TODO: Use _stext */ 428 /* TODO: Use _stext */
552 pp->function = strdup(""); 429 pp->function = strdup("");
553 pp->offset = (int)pf->addr; 430 pp->offset = (size_t)pf->addr;
554 } 431 }
555 } 432 }
556 DIE_IF(ret < 0); 433 DIE_IF(ret < 0);
@@ -558,8 +435,15 @@ static void show_probepoint(Dwarf_Die sp_die, Dwarf_Signed offs,
558 len = ret; 435 len = ret;
559 pr_debug("Probe point found: %s\n", tmp); 436 pr_debug("Probe point found: %s\n", tmp);
560 437
438 /* Get the frame base attribute/ops */
439 dwarf_attr(sp_die, DW_AT_frame_base, &fb_attr);
440 ret = dwarf_getlocation_addr(&fb_attr, (pf->addr - pf->cu_base),
441 &pf->fb_ops, &nops, 1);
442 if (ret <= 0 || nops == 0)
443 pf->fb_ops = NULL;
444
561 /* Find each argument */ 445 /* Find each argument */
562 get_current_frame_base(sp_die, pf); 446 /* TODO: use dwarf_cfi_addrframe */
563 for (i = 0; i < pp->nr_args; i++) { 447 for (i = 0; i < pp->nr_args; i++) {
564 pf->var = pp->args[i]; 448 pf->var = pp->args[i];
565 pf->buf = &tmp[len]; 449 pf->buf = &tmp[len];
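The "TODO: use dwarf_cfi_addrframe" above points at libdw's call-frame-information API as the eventual replacement for reading DW_AT_frame_base directly. A sketch of that route, assuming elfutils; dwarf_cfi_addrframe() mallocs the frame, which this sketch deliberately does not free because the returned ops point into it:

#include <elfutils/libdw.h>

/* Look up the CFA expression covering 'addr' via call frame info. */
static int cfa_ops_at(Dwarf *dbg, Dwarf_Addr addr, Dwarf_Op **ops, size_t *nops)
{
	Dwarf_CFI *cfi = dwarf_getcfi(dbg);
	Dwarf_Frame *frame = NULL;

	if (!cfi || dwarf_cfi_addrframe(cfi, addr, &frame) != 0)
		return -1;
	/* 'frame' stays allocated; '*ops' points into it. */
	return dwarf_frame_cfa(frame, ops, nops);
}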
@@ -567,289 +451,327 @@ static void show_probepoint(Dwarf_Die sp_die, Dwarf_Signed offs,
567 find_variable(sp_die, pf); 451 find_variable(sp_die, pf);
568 len += strlen(pf->buf); 452 len += strlen(pf->buf);
569 } 453 }
570 free_current_frame_base(pf); 454
455 /* *pf->fb_ops will be cached in libdw. Don't free it. */
456 pf->fb_ops = NULL;
571 457
572 pp->probes[pp->found] = strdup(tmp); 458 pp->probes[pp->found] = strdup(tmp);
573 pp->found++; 459 pp->found++;
574} 460}
575 461
576static int probeaddr_callback(struct die_link *dlink, void *data) 462/* Find probe point from its line number */
463static void find_probe_point_by_line(struct probe_finder *pf)
577{ 464{
578 struct probe_finder *pf = (struct probe_finder *)data; 465 Dwarf_Lines *lines;
579 Dwarf_Half tag; 466 Dwarf_Line *line;
580 Dwarf_Signed offs; 467 size_t nlines, i;
468 Dwarf_Addr addr;
469 int lineno;
581 int ret; 470 int ret;
582 471
583 ret = dwarf_tag(dlink->die, &tag, &__dw_error); 472 ret = dwarf_getsrclines(&pf->cu_die, &lines, &nlines);
584 DIE_IF(ret == DW_DLV_ERROR); 473 DIE_IF(ret != 0);
585 /* Check the address is in this subprogram */ 474
586 if (tag == DW_TAG_subprogram && 475 for (i = 0; i < nlines; i++) {
587 die_within_subprogram(dlink->die, pf->addr, &offs)) { 476 line = dwarf_onesrcline(lines, i);
588 show_probepoint(dlink->die, offs, pf); 477 dwarf_lineno(line, &lineno);
589 return 1; 478 if (lineno != pf->lno)
479 continue;
480
481 /* TODO: Get fileno from line, but how? */
482 if (strtailcmp(dwarf_linesrc(line, NULL, NULL), pf->fname) != 0)
483 continue;
484
485 ret = dwarf_lineaddr(line, &addr);
486 DIE_IF(ret != 0);
487 pr_debug("Probe line found: line[%d]:%d addr:0x%jx\n",
488 (int)i, lineno, (uintmax_t)addr);
489 pf->addr = addr;
490
491 show_probe_point(NULL, pf);
492 /* Continuing, because target line might be inlined. */
590 } 493 }
591 return 0;
592} 494}
593 495
594/* Find probe point from its line number */ 496/* Find lines which match lazy pattern */
595static void find_probe_point_by_line(struct probe_finder *pf) 497static int find_lazy_match_lines(struct list_head *head,
498 const char *fname, const char *pat)
596{ 499{
597 Dwarf_Signed cnt, i, clm; 500 char *fbuf, *p1, *p2;
598 Dwarf_Line *lines; 501 int fd, line, nlines = 0;
599 Dwarf_Unsigned lineno = 0; 502 struct stat st;
503
504 fd = open(fname, O_RDONLY);
505 if (fd < 0)
506 die("failed to open %s", fname);
507 DIE_IF(fstat(fd, &st) < 0);
508 fbuf = malloc(st.st_size + 2);
509 DIE_IF(fbuf == NULL);
510 DIE_IF(read(fd, fbuf, st.st_size) < 0);
511 close(fd);
512 fbuf[st.st_size] = '\n'; /* Dummy line */
513 fbuf[st.st_size + 1] = '\0';
514 p1 = fbuf;
515 line = 1;
516 while ((p2 = strchr(p1, '\n')) != NULL) {
517 *p2 = '\0';
518 if (strlazymatch(p1, pat)) {
519 line_list__add_line(head, line);
520 nlines++;
521 }
522 line++;
523 p1 = p2 + 1;
524 }
525 free(fbuf);
526 return nlines;
527}
528
529/* Find probe points from lazy pattern */
530static void find_probe_point_lazy(Dwarf_Die *sp_die, struct probe_finder *pf)
531{
532 Dwarf_Lines *lines;
533 Dwarf_Line *line;
534 size_t nlines, i;
600 Dwarf_Addr addr; 535 Dwarf_Addr addr;
601 Dwarf_Unsigned fno; 536 Dwarf_Die die_mem;
537 int lineno;
602 int ret; 538 int ret;
603 539
604 ret = dwarf_srclines(pf->cu_die, &lines, &cnt, &__dw_error); 540 if (list_empty(&pf->lcache)) {
605 DIE_IF(ret != DW_DLV_OK); 541 /* Matching lazy line pattern */
542 ret = find_lazy_match_lines(&pf->lcache, pf->fname,
543 pf->pp->lazy_line);
544 if (ret <= 0)
545 die("No matched lines found in %s.", pf->fname);
546 }
547
548 ret = dwarf_getsrclines(&pf->cu_die, &lines, &nlines);
549 DIE_IF(ret != 0);
550 for (i = 0; i < nlines; i++) {
551 line = dwarf_onesrcline(lines, i);
606 552
607 for (i = 0; i < cnt; i++) { 553 dwarf_lineno(line, &lineno);
608 ret = dwarf_line_srcfileno(lines[i], &fno, &__dw_error); 554 if (!line_list__has_line(&pf->lcache, lineno))
609 DIE_IF(ret != DW_DLV_OK);
610 if (fno != pf->fno)
611 continue; 555 continue;
612 556
613 ret = dwarf_lineno(lines[i], &lineno, &__dw_error); 557 /* TODO: Get fileno from line, but how? */
614 DIE_IF(ret != DW_DLV_OK); 558 if (strtailcmp(dwarf_linesrc(line, NULL, NULL), pf->fname) != 0)
615 if (lineno != pf->lno)
616 continue; 559 continue;
617 560
618 ret = dwarf_lineoff(lines[i], &clm, &__dw_error); 561 ret = dwarf_lineaddr(line, &addr);
619 DIE_IF(ret != DW_DLV_OK); 562 DIE_IF(ret != 0);
563 if (sp_die) {
564 /* Address filtering 1: does sp_die include addr? */
565 if (!dwarf_haspc(sp_die, addr))
566 continue;
567 /* Address filtering 2: No child include addr? */
568 if (die_get_inlinefunc(sp_die, addr, &die_mem))
569 continue;
570 }
620 571
621 ret = dwarf_lineaddr(lines[i], &addr, &__dw_error); 572 pr_debug("Probe line found: line[%d]:%d addr:0x%llx\n",
622 DIE_IF(ret != DW_DLV_OK); 573 (int)i, lineno, (unsigned long long)addr);
623 pr_debug("Probe line found: line[%d]:%u,%d addr:0x%llx\n",
624 (int)i, (unsigned)lineno, (int)clm, addr);
625 pf->addr = addr; 574 pf->addr = addr;
626 /* Search a real subprogram including this line, */ 575
627 ret = search_die_from_children(pf->cu_die, 576 show_probe_point(sp_die, pf);
628 probeaddr_callback, pf);
629 if (ret == 0)
630 die("Probe point is not found in subprograms.");
631 /* Continuing, because target line might be inlined. */ 577 /* Continuing, because target line might be inlined. */
632 } 578 }
633 dwarf_srclines_dealloc(__dw_debug, lines, cnt); 579 /* TODO: deallocate lines, but how? */
580}
581
582static int probe_point_inline_cb(Dwarf_Die *in_die, void *data)
583{
584 struct probe_finder *pf = (struct probe_finder *)data;
585 struct probe_point *pp = pf->pp;
586
587 if (pp->lazy_line)
588 find_probe_point_lazy(in_die, pf);
589 else {
590 /* Get probe address */
591 pf->addr = die_get_entrypc(in_die);
592 pf->addr += pp->offset;
593 pr_debug("found inline addr: 0x%jx\n",
594 (uintmax_t)pf->addr);
595
596 show_probe_point(in_die, pf);
597 }
598
599 return DWARF_CB_OK;
634} 600}
635 601
636/* Search function from function name */ 602/* Search function from function name */
637static int probefunc_callback(struct die_link *dlink, void *data) 603static int probe_point_search_cb(Dwarf_Die *sp_die, void *data)
638{ 604{
639 struct probe_finder *pf = (struct probe_finder *)data; 605 struct probe_finder *pf = (struct probe_finder *)data;
640 struct probe_point *pp = pf->pp; 606 struct probe_point *pp = pf->pp;
641 struct die_link *lk;
642 Dwarf_Signed offs;
643 Dwarf_Half tag;
644 int ret;
645 607
646 ret = dwarf_tag(dlink->die, &tag, &__dw_error); 608 /* Check tag and diename */
647 DIE_IF(ret == DW_DLV_ERROR); 609 if (dwarf_tag(sp_die) != DW_TAG_subprogram ||
648 if (tag == DW_TAG_subprogram) { 610 die_compare_name(sp_die, pp->function) != 0)
649 if (die_compare_name(dlink->die, pp->function) == 0) { 611 return 0;
650 if (pp->line) { /* Function relative line */ 612
651 pf->fno = die_get_decl_file(dlink->die); 613 pf->fname = dwarf_decl_file(sp_die);
652 pf->lno = die_get_decl_line(dlink->die) 614 if (pp->line) { /* Function relative line */
653 + pp->line; 615 dwarf_decl_line(sp_die, &pf->lno);
654 find_probe_point_by_line(pf); 616 pf->lno += pp->line;
655 return 1; 617 find_probe_point_by_line(pf);
656 } 618 } else if (!dwarf_func_inline(sp_die)) {
657 if (die_inlined_subprogram(dlink->die)) { 619 /* Real function */
658 /* Inlined function, save it. */ 620 if (pp->lazy_line)
659 ret = dwarf_die_CU_offset(dlink->die, 621 find_probe_point_lazy(sp_die, pf);
660 &pf->inl_offs, 622 else {
661 &__dw_error); 623 pf->addr = die_get_entrypc(sp_die);
662 DIE_IF(ret != DW_DLV_OK);
663 pr_debug("inline definition offset %lld\n",
664 pf->inl_offs);
665 return 0; /* Continue to search */
666 }
667 /* Get probe address */
668 pf->addr = die_get_entrypc(dlink->die);
669 pf->addr += pp->offset; 624 pf->addr += pp->offset;
670 /* TODO: Check the address in this function */ 625 /* TODO: Check the address in this function */
671 show_probepoint(dlink->die, pp->offset, pf); 626 show_probe_point(sp_die, pf);
672 return 1; /* Exit; no same symbol in this CU. */
673 }
674 } else if (tag == DW_TAG_inlined_subroutine && pf->inl_offs) {
675 if (die_get_abstract_origin(dlink->die) == pf->inl_offs) {
676 /* Get probe address */
677 pf->addr = die_get_entrypc(dlink->die);
678 pf->addr += pp->offset;
679 pr_debug("found inline addr: 0x%llx\n", pf->addr);
680 /* Inlined function. Get a real subprogram */
681 for (lk = dlink->parent; lk != NULL; lk = lk->parent) {
682 tag = 0;
683 dwarf_tag(lk->die, &tag, &__dw_error);
684 DIE_IF(ret == DW_DLV_ERROR);
685 if (tag == DW_TAG_subprogram &&
686 !die_inlined_subprogram(lk->die))
687 goto found;
688 }
689 die("Failed to find real subprogram.");
690found:
691 /* Get offset from subprogram */
692 ret = die_within_subprogram(lk->die, pf->addr, &offs);
693 DIE_IF(!ret);
694 show_probepoint(lk->die, offs, pf);
695 /* Continue to search */
696 } 627 }
697 } 628 } else
698 return 0; 629 /* Inlined function: search instances */
630 dwarf_func_inline_instances(sp_die, probe_point_inline_cb, pf);
631
632 return 1; /* Exit; no same symbol in this CU. */
699} 633}
700 634
701static void find_probe_point_by_func(struct probe_finder *pf) 635static void find_probe_point_by_func(struct probe_finder *pf)
702{ 636{
703 search_die_from_children(pf->cu_die, probefunc_callback, pf); 637 dwarf_getfuncs(&pf->cu_die, probe_point_search_cb, pf, 0);
704} 638}
705 639
706/* Find a probe point */ 640/* Find a probe point */
707int find_probepoint(int fd, struct probe_point *pp) 641int find_probe_point(int fd, struct probe_point *pp)
708{ 642{
709 Dwarf_Half addr_size = 0;
710 Dwarf_Unsigned next_cuh = 0;
711 int cu_number = 0, ret;
712 struct probe_finder pf = {.pp = pp}; 643 struct probe_finder pf = {.pp = pp};
644 int ret;
645 Dwarf_Off off, noff;
646 size_t cuhl;
647 Dwarf_Die *diep;
648 Dwarf *dbg;
713 649
714 ret = dwarf_init(fd, DW_DLC_READ, 0, 0, &__dw_debug, &__dw_error); 650 dbg = dwarf_begin(fd, DWARF_C_READ);
715 if (ret != DW_DLV_OK) 651 if (!dbg)
716 return -ENOENT; 652 return -ENOENT;
717 653
718 pp->found = 0; 654 pp->found = 0;
719 while (++cu_number) { 655 off = 0;
720 /* Search CU (Compilation Unit) */ 656 line_list__init(&pf.lcache);
721 ret = dwarf_next_cu_header(__dw_debug, NULL, NULL, NULL, 657 /* Loop on CUs (Compilation Unit) */
722 &addr_size, &next_cuh, &__dw_error); 658 while (!dwarf_nextcu(dbg, off, &noff, &cuhl, NULL, NULL, NULL)) {
723 DIE_IF(ret == DW_DLV_ERROR);
724 if (ret == DW_DLV_NO_ENTRY)
725 break;
726
727 /* Get the DIE(Debugging Information Entry) of this CU */ 659 /* Get the DIE(Debugging Information Entry) of this CU */
728 ret = dwarf_siblingof(__dw_debug, 0, &pf.cu_die, &__dw_error); 660 diep = dwarf_offdie(dbg, off + cuhl, &pf.cu_die);
729 DIE_IF(ret != DW_DLV_OK); 661 if (!diep)
662 continue;
730 663
731 /* Check if target file is included. */ 664 /* Check if target file is included. */
732 if (pp->file) 665 if (pp->file)
733 pf.fno = cu_find_fileno(pf.cu_die, pp->file); 666 pf.fname = cu_find_realpath(&pf.cu_die, pp->file);
667 else
668 pf.fname = NULL;
734 669
735 if (!pp->file || pf.fno) { 670 if (!pp->file || pf.fname) {
736 /* Save CU base address (for frame_base) */ 671 /* Save CU base address (for frame_base) */
737 ret = dwarf_lowpc(pf.cu_die, &pf.cu_base, &__dw_error); 672 ret = dwarf_lowpc(&pf.cu_die, &pf.cu_base);
738 DIE_IF(ret == DW_DLV_ERROR); 673 if (ret != 0)
739 if (ret == DW_DLV_NO_ENTRY)
740 pf.cu_base = 0; 674 pf.cu_base = 0;
741 if (pp->function) 675 if (pp->function)
742 find_probe_point_by_func(&pf); 676 find_probe_point_by_func(&pf);
677 else if (pp->lazy_line)
678 find_probe_point_lazy(NULL, &pf);
743 else { 679 else {
744 pf.lno = pp->line; 680 pf.lno = pp->line;
745 find_probe_point_by_line(&pf); 681 find_probe_point_by_line(&pf);
746 } 682 }
747 } 683 }
748 dwarf_dealloc(__dw_debug, pf.cu_die, DW_DLA_DIE); 684 off = noff;
749 } 685 }
750 ret = dwarf_finish(__dw_debug, &__dw_error); 686 line_list__free(&pf.lcache);
751 DIE_IF(ret != DW_DLV_OK); 687 dwarf_end(dbg);
752 688
753 return pp->found; 689 return pp->found;
754} 690}
755 691
756
757static void line_range_add_line(struct line_range *lr, unsigned int line)
758{
759 struct line_node *ln;
760 struct list_head *p;
761
762 /* Reverse search, because new line will be the last one */
763 list_for_each_entry_reverse(ln, &lr->line_list, list) {
764 if (ln->line < line) {
765 p = &ln->list;
766 goto found;
767 } else if (ln->line == line) /* Already exist */
768 return ;
769 }
770 /* List is empty, or the smallest entry */
771 p = &lr->line_list;
772found:
773 pr_debug("Debug: add a line %u\n", line);
774 ln = zalloc(sizeof(struct line_node));
775 DIE_IF(ln == NULL);
776 ln->line = line;
777 INIT_LIST_HEAD(&ln->list);
778 list_add(&ln->list, p);
779}
780
781/* Find line range from its line number */ 692/* Find line range from its line number */
782static void find_line_range_by_line(struct line_finder *lf) 693static void find_line_range_by_line(Dwarf_Die *sp_die, struct line_finder *lf)
783{ 694{
784 Dwarf_Signed cnt, i; 695 Dwarf_Lines *lines;
785 Dwarf_Line *lines; 696 Dwarf_Line *line;
786 Dwarf_Unsigned lineno = 0; 697 size_t nlines, i;
787 Dwarf_Unsigned fno;
788 Dwarf_Addr addr; 698 Dwarf_Addr addr;
699 int lineno;
789 int ret; 700 int ret;
701 const char *src;
702 Dwarf_Die die_mem;
790 703
791 ret = dwarf_srclines(lf->cu_die, &lines, &cnt, &__dw_error); 704 line_list__init(&lf->lr->line_list);
792 DIE_IF(ret != DW_DLV_OK); 705 ret = dwarf_getsrclines(&lf->cu_die, &lines, &nlines);
706 DIE_IF(ret != 0);
793 707
794 for (i = 0; i < cnt; i++) { 708 for (i = 0; i < nlines; i++) {
795 ret = dwarf_line_srcfileno(lines[i], &fno, &__dw_error); 709 line = dwarf_onesrcline(lines, i);
796 DIE_IF(ret != DW_DLV_OK); 710 ret = dwarf_lineno(line, &lineno);
797 if (fno != lf->fno) 711 DIE_IF(ret != 0);
798 continue;
799
800 ret = dwarf_lineno(lines[i], &lineno, &__dw_error);
801 DIE_IF(ret != DW_DLV_OK);
802 if (lf->lno_s > lineno || lf->lno_e < lineno) 712 if (lf->lno_s > lineno || lf->lno_e < lineno)
803 continue; 713 continue;
804 714
805 /* Filter line in the function address range */ 715 if (sp_die) {
806 if (lf->addr_s && lf->addr_e) { 716 /* Address filtering 1: does sp_die include addr? */
807 ret = dwarf_lineaddr(lines[i], &addr, &__dw_error); 717 ret = dwarf_lineaddr(line, &addr);
808 DIE_IF(ret != DW_DLV_OK); 718 DIE_IF(ret != 0);
809 if (lf->addr_s > addr || lf->addr_e <= addr) 719 if (!dwarf_haspc(sp_die, addr))
720 continue;
721
722 /* Address filtering 2: No child include addr? */
723 if (die_get_inlinefunc(sp_die, addr, &die_mem))
810 continue; 724 continue;
811 } 725 }
812 line_range_add_line(lf->lr, (unsigned int)lineno); 726
727 /* TODO: Get fileno from line, but how? */
728 src = dwarf_linesrc(line, NULL, NULL);
729 if (strtailcmp(src, lf->fname) != 0)
730 continue;
731
732 /* Copy real path */
733 if (!lf->lr->path)
734 lf->lr->path = strdup(src);
735 line_list__add_line(&lf->lr->line_list, (unsigned int)lineno);
813 } 736 }
814 dwarf_srclines_dealloc(__dw_debug, lines, cnt); 737 /* Update status */
815 if (!list_empty(&lf->lr->line_list)) 738 if (!list_empty(&lf->lr->line_list))
816 lf->found = 1; 739 lf->found = 1;
740 else {
741 free(lf->lr->path);
742 lf->lr->path = NULL;
743 }
744}
745
746static int line_range_inline_cb(Dwarf_Die *in_die, void *data)
747{
748 find_line_range_by_line(in_die, (struct line_finder *)data);
749 return DWARF_CB_ABORT; /* No need to find other instances */
817} 750}
818 751
819/* Search function from function name */ 752/* Search function from function name */
820static int linefunc_callback(struct die_link *dlink, void *data) 753static int line_range_search_cb(Dwarf_Die *sp_die, void *data)
821{ 754{
822 struct line_finder *lf = (struct line_finder *)data; 755 struct line_finder *lf = (struct line_finder *)data;
823 struct line_range *lr = lf->lr; 756 struct line_range *lr = lf->lr;
824 Dwarf_Half tag;
825 int ret;
826 757
827 ret = dwarf_tag(dlink->die, &tag, &__dw_error); 758 if (dwarf_tag(sp_die) == DW_TAG_subprogram &&
828 DIE_IF(ret == DW_DLV_ERROR); 759 die_compare_name(sp_die, lr->function) == 0) {
829 if (tag == DW_TAG_subprogram && 760 lf->fname = dwarf_decl_file(sp_die);
830 die_compare_name(dlink->die, lr->function) == 0) { 761 dwarf_decl_line(sp_die, &lr->offset);
831 /* Get the address range of this function */ 762 pr_debug("fname: %s, lineno:%d\n", lf->fname, lr->offset);
832 ret = dwarf_highpc(dlink->die, &lf->addr_e, &__dw_error);
833 if (ret == DW_DLV_OK)
834 ret = dwarf_lowpc(dlink->die, &lf->addr_s, &__dw_error);
835 DIE_IF(ret == DW_DLV_ERROR);
836 if (ret == DW_DLV_NO_ENTRY) {
837 lf->addr_s = 0;
838 lf->addr_e = 0;
839 }
840
841 lf->fno = die_get_decl_file(dlink->die);
842 lr->offset = die_get_decl_line(dlink->die);;
843 lf->lno_s = lr->offset + lr->start; 763 lf->lno_s = lr->offset + lr->start;
844 if (!lr->end) 764 if (!lr->end)
845 lf->lno_e = (Dwarf_Unsigned)-1; 765 lf->lno_e = INT_MAX;
846 else 766 else
847 lf->lno_e = lr->offset + lr->end; 767 lf->lno_e = lr->offset + lr->end;
848 lr->start = lf->lno_s; 768 lr->start = lf->lno_s;
849 lr->end = lf->lno_e; 769 lr->end = lf->lno_e;
850 find_line_range_by_line(lf); 770 if (dwarf_func_inline(sp_die))
851 /* If we find a target function, this should be end. */ 771 dwarf_func_inline_instances(sp_die,
852 lf->found = 1; 772 line_range_inline_cb, lf);
773 else
774 find_line_range_by_line(sp_die, lf);
853 return 1; 775 return 1;
854 } 776 }
855 return 0; 777 return 0;
@@ -857,55 +779,55 @@ static int linefunc_callback(struct die_link *dlink, void *data)
857 779
858static void find_line_range_by_func(struct line_finder *lf) 780static void find_line_range_by_func(struct line_finder *lf)
859{ 781{
860 search_die_from_children(lf->cu_die, linefunc_callback, lf); 782 dwarf_getfuncs(&lf->cu_die, line_range_search_cb, lf, 0);
861} 783}
862 784
863int find_line_range(int fd, struct line_range *lr) 785int find_line_range(int fd, struct line_range *lr)
864{ 786{
865 Dwarf_Half addr_size = 0; 787 struct line_finder lf = {.lr = lr, .found = 0};
866 Dwarf_Unsigned next_cuh = 0;
867 int ret; 788 int ret;
868 struct line_finder lf = {.lr = lr}; 789 Dwarf_Off off = 0, noff;
790 size_t cuhl;
791 Dwarf_Die *diep;
792 Dwarf *dbg;
869 793
870 ret = dwarf_init(fd, DW_DLC_READ, 0, 0, &__dw_debug, &__dw_error); 794 dbg = dwarf_begin(fd, DWARF_C_READ);
871 if (ret != DW_DLV_OK) 795 if (!dbg)
872 return -ENOENT; 796 return -ENOENT;
873 797
798 /* Loop on CUs (Compilation Unit) */
874 while (!lf.found) { 799 while (!lf.found) {
875 /* Search CU (Compilation Unit) */ 800 ret = dwarf_nextcu(dbg, off, &noff, &cuhl, NULL, NULL, NULL);
876 ret = dwarf_next_cu_header(__dw_debug, NULL, NULL, NULL, 801 if (ret != 0)
877 &addr_size, &next_cuh, &__dw_error);
878 DIE_IF(ret == DW_DLV_ERROR);
879 if (ret == DW_DLV_NO_ENTRY)
880 break; 802 break;
881 803
882 /* Get the DIE(Debugging Information Entry) of this CU */ 804 /* Get the DIE(Debugging Information Entry) of this CU */
883 ret = dwarf_siblingof(__dw_debug, 0, &lf.cu_die, &__dw_error); 805 diep = dwarf_offdie(dbg, off + cuhl, &lf.cu_die);
884 DIE_IF(ret != DW_DLV_OK); 806 if (!diep)
807 continue;
885 808
886 /* Check if target file is included. */ 809 /* Check if target file is included. */
887 if (lr->file) 810 if (lr->file)
888 lf.fno = cu_find_fileno(lf.cu_die, lr->file); 811 lf.fname = cu_find_realpath(&lf.cu_die, lr->file);
812 else
813 lf.fname = 0;
889 814
890 if (!lr->file || lf.fno) { 815 if (!lr->file || lf.fname) {
891 if (lr->function) 816 if (lr->function)
892 find_line_range_by_func(&lf); 817 find_line_range_by_func(&lf);
893 else { 818 else {
894 lf.lno_s = lr->start; 819 lf.lno_s = lr->start;
895 if (!lr->end) 820 if (!lr->end)
896 lf.lno_e = (Dwarf_Unsigned)-1; 821 lf.lno_e = INT_MAX;
897 else 822 else
898 lf.lno_e = lr->end; 823 lf.lno_e = lr->end;
899 find_line_range_by_line(&lf); 824 find_line_range_by_line(NULL, &lf);
900 } 825 }
901 /* Get the real file path */
902 if (lf.found)
903 cu_get_filename(lf.cu_die, lf.fno, &lr->path);
904 } 826 }
905 dwarf_dealloc(__dw_debug, lf.cu_die, DW_DLA_DIE); 827 off = noff;
906 } 828 }
907 ret = dwarf_finish(__dw_debug, &__dw_error); 829 pr_debug("path: %lx\n", (unsigned long)lr->path);
908 DIE_IF(ret != DW_DLV_OK); 830 dwarf_end(dbg);
909 return lf.found; 831 return lf.found;
910} 832}
911 833
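Both find_probe_point() and find_line_range() now share the same libdw compilation-unit walk: dwarf_nextcu() steps from CU header to CU header, and dwarf_offdie() materializes the CU DIE at off + cuhl. Reduced to its skeleton (stand-alone sketch, elfutils assumed):

#include <elfutils/libdw.h>
#include <stdio.h>

/* Print the name of every compilation unit in an open object file. */
static void list_cus(int fd)
{
	Dwarf *dbg = dwarf_begin(fd, DWARF_C_READ);
	Dwarf_Off off = 0, noff;
	size_t cuhl;
	Dwarf_Die cu_die;
	const char *name;

	if (!dbg)
		return;
	while (dwarf_nextcu(dbg, off, &noff, &cuhl, NULL, NULL, NULL) == 0) {
		if (dwarf_offdie(dbg, off + cuhl, &cu_die)) {
			name = dwarf_diename(&cu_die);
			printf("CU: %s\n", name ? name : "<unnamed>");
		}
		off = noff;
	}
	dwarf_end(dbg);
}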
diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h
index 972b386116f1..d1a651793ba6 100644
--- a/tools/perf/util/probe-finder.h
+++ b/tools/perf/util/probe-finder.h
@@ -1,6 +1,7 @@
1#ifndef _PROBE_FINDER_H 1#ifndef _PROBE_FINDER_H
2#define _PROBE_FINDER_H 2#define _PROBE_FINDER_H
3 3
4#include <stdbool.h>
4#include "util.h" 5#include "util.h"
5 6
6#define MAX_PATH_LEN 256 7#define MAX_PATH_LEN 256
@@ -20,6 +21,7 @@ struct probe_point {
20 /* Inputs */ 21 /* Inputs */
21 char *file; /* File name */ 22 char *file; /* File name */
22 int line; /* Line number */ 23 int line; /* Line number */
24 char *lazy_line; /* Lazy line pattern */
23 25
24 char *function; /* Function name */ 26 char *function; /* Function name */
25 int offset; /* Offset bytes */ 27 int offset; /* Offset bytes */
@@ -46,53 +48,46 @@ struct line_range {
46 char *function; /* Function name */ 48 char *function; /* Function name */
47 unsigned int start; /* Start line number */ 49 unsigned int start; /* Start line number */
48 unsigned int end; /* End line number */ 50 unsigned int end; /* End line number */
49 unsigned int offset; /* Start line offset */ 51 int offset; /* Start line offset */
50 char *path; /* Real path name */ 52 char *path; /* Real path name */
51 struct list_head line_list; /* Visible lines */ 53 struct list_head line_list; /* Visible lines */
52}; 54};
53 55
54#ifndef NO_LIBDWARF 56#ifndef NO_DWARF_SUPPORT
55extern int find_probepoint(int fd, struct probe_point *pp); 57extern int find_probe_point(int fd, struct probe_point *pp);
56extern int find_line_range(int fd, struct line_range *lr); 58extern int find_line_range(int fd, struct line_range *lr);
57 59
58/* Workaround for undefined _MIPS_SZLONG bug in libdwarf.h: */
59#ifndef _MIPS_SZLONG
60# define _MIPS_SZLONG 0
61#endif
62
63#include <dwarf.h> 60#include <dwarf.h>
64#include <libdwarf.h> 61#include <libdw.h>
65 62
66struct probe_finder { 63struct probe_finder {
67 struct probe_point *pp; /* Target probe point */ 64 struct probe_point *pp; /* Target probe point */
68 65
69 /* For function searching */ 66 /* For function searching */
70 Dwarf_Addr addr; /* Address */ 67 Dwarf_Addr addr; /* Address */
71 Dwarf_Unsigned fno; /* File number */ 68 const char *fname; /* File name */
72 Dwarf_Unsigned lno; /* Line number */ 69 int lno; /* Line number */
73 Dwarf_Off inl_offs; /* Inline offset */ 70 Dwarf_Die cu_die; /* Current CU */
74 Dwarf_Die cu_die; /* Current CU */
75 71
76 /* For variable searching */ 72 /* For variable searching */
77 Dwarf_Addr cu_base; /* Current CU base address */ 73 Dwarf_Op *fb_ops; /* Frame base attribute */
78 Dwarf_Locdesc fbloc; /* Location of Current Frame Base */ 74 Dwarf_Addr cu_base; /* Current CU base address */
79 const char *var; /* Current variable name */ 75 const char *var; /* Current variable name */
80 char *buf; /* Current output buffer */ 76 char *buf; /* Current output buffer */
81 int len; /* Length of output buffer */ 77 int len; /* Length of output buffer */
78 struct list_head lcache; /* Line cache for lazy match */
82}; 79};
83 80
84struct line_finder { 81struct line_finder {
85 struct line_range *lr; /* Target line range */ 82 struct line_range *lr; /* Target line range */
86 83
87 Dwarf_Unsigned fno; /* File number */ 84 const char *fname; /* File name */
88 Dwarf_Unsigned lno_s; /* Start line number */ 85 int lno_s; /* Start line number */
89 Dwarf_Unsigned lno_e; /* End line number */ 86 int lno_e; /* End line number */
90 Dwarf_Addr addr_s; /* Start address */ 87 Dwarf_Die cu_die; /* Current CU */
91 Dwarf_Addr addr_e; /* End address */
92 Dwarf_Die cu_die; /* Current CU */
93 int found; 88 int found;
94}; 89};
95 90
96#endif /* NO_LIBDWARF */ 91#endif /* NO_DWARF_SUPPORT */
97 92
98#endif /*_PROBE_FINDER_H */ 93#endif /*_PROBE_FINDER_H */
diff --git a/tools/perf/util/string.c b/tools/perf/util/string.c
index c397d4f6f748..a175949ed216 100644
--- a/tools/perf/util/string.c
+++ b/tools/perf/util/string.c
@@ -265,21 +265,21 @@ error:
265 return false; 265 return false;
266} 266}
267 267
268/** 268/* Glob/lazy pattern matching */
269 * strglobmatch - glob expression pattern matching 269static bool __match_glob(const char *str, const char *pat, bool ignore_space)
270 * @str: the target string to match
271 * @pat: the pattern string to match
272 *
273 * This returns true if the @str matches @pat. @pat can includes wildcards
274 * ('*','?') and character classes ([CHARS], complementation and ranges are
275 * also supported). Also, this supports escape character ('\') to use special
276 * characters as normal character.
277 *
278 * Note: if @pat syntax is broken, this always returns false.
279 */
280bool strglobmatch(const char *str, const char *pat)
281{ 270{
282 while (*str && *pat && *pat != '*') { 271 while (*str && *pat && *pat != '*') {
272 if (ignore_space) {
273 /* Ignore spaces for lazy matching */
274 if (isspace(*str)) {
275 str++;
276 continue;
277 }
278 if (isspace(*pat)) {
279 pat++;
280 continue;
281 }
282 }
283 if (*pat == '?') { /* Matches any single character */ 283 if (*pat == '?') { /* Matches any single character */
284 str++; 284 str++;
285 pat++; 285 pat++;
@@ -308,3 +308,32 @@ bool strglobmatch(const char *str, const char *pat)
308 return !*str && !*pat; 308 return !*str && !*pat;
309} 309}
310 310
311/**
312 * strglobmatch - glob expression pattern matching
313 * @str: the target string to match
314 * @pat: the pattern string to match
315 *
316 * This returns true if the @str matches @pat. @pat can include wildcards
317 * ('*','?') and character classes ([CHARS], complementation and ranges are
318 * also supported). Also, this supports the escape character ('\') to use
319 * special characters as normal characters.
320 *
321 * Note: if @pat syntax is broken, this always returns false.
322 */
323bool strglobmatch(const char *str, const char *pat)
324{
325 return __match_glob(str, pat, false);
326}
327
328/**
329 * strlazymatch - matching pattern strings lazily with glob pattern
330 * @str: the target string to match
331 * @pat: the pattern string to match
332 *
333 * This is similar to strglobmatch, except this ignores spaces in
334 * the target string.
335 */
336bool strlazymatch(const char *str, const char *pat)
337{
338 return __match_glob(str, pat, true);
339}
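strlazymatch() only differs from strglobmatch() in the ignore_space branch of __match_glob() above. A toy recursive matcher showing just that skip rule on top of '?' and '*' (simplified: no character classes or '\' escapes, so it is not a drop-in for the perf version):

#include <ctype.h>
#include <stdbool.h>

/* Simplified glob: '?' = any char, '*' = any run; spaces are skipped. */
static bool lazy_match(const char *str, const char *pat)
{
	while (*str && isspace((unsigned char)*str))
		str++;
	while (*pat && isspace((unsigned char)*pat))
		pat++;
	if (*pat == '\0')
		return *str == '\0';
	if (*pat == '*')
		return lazy_match(str, pat + 1) ||
		       (*str && lazy_match(str + 1, pat));
	if (*str && (*pat == '?' || *pat == *str))
		return lazy_match(str + 1, pat + 1);
	return false;
}

For example, lazy_match("iov_base = iov->iov_base;", "iov_base=*") is true even though the pattern omits the spaces, which is exactly the property --line lazy patterns rely on.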
diff --git a/tools/perf/util/string.h b/tools/perf/util/string.h
index 02ede58c54b4..542e44de3719 100644
--- a/tools/perf/util/string.h
+++ b/tools/perf/util/string.h
@@ -10,6 +10,7 @@ s64 perf_atoll(const char *str);
10char **argv_split(const char *str, int *argcp); 10char **argv_split(const char *str, int *argcp);
11void argv_free(char **argv); 11void argv_free(char **argv);
12bool strglobmatch(const char *str, const char *pat); 12bool strglobmatch(const char *str, const char *pat);
13bool strlazymatch(const char *str, const char *pat);
13 14
14#define _STR(x) #x 15#define _STR(x) #x
15#define STR(x) _STR(x) 16#define STR(x) _STR(x)