author    Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp>  2006-03-26 04:38:17 -0500
committer Linus Torvalds <torvalds@g5.osdl.org>          2006-03-26 11:57:04 -0500
commit    311ac88fd2d4194a95e9e38d2fe08917be98723c (patch)
tree      321602aa5da2c18a285f31dc2dab14a7464c0483 /arch/i386/kernel/kprobes.c
parent    b50ea74c7bc3ebe3d88a357893f0b96ae9092f13 (diff)
[PATCH] x86: kprobes-booster
Current kprobes copies the original instruction at the probe point and replaces it with a breakpoint instruction (int3). When the kernel hits the probe point, the kprobe handler is invoked and the copied instruction is single-step executed in the copy buffer (not at the original address) by kprobes. After that, kprobes checks the registers and modifies them (if needed) as if the instruction had been executed at the original address.

My proposal is based on the fact that there are many instructions which do NOT require any register modification after the single-step execution. When the copied instruction is one of them, kprobes just jumps back to the next instruction after the single-step execution. If so, why don't we execute those instructions directly?

With the kprobe-booster patch, kprobes executes the copied instruction directly and (if needed) jumps back to the original code. This direct execution is used when the kprobe has neither a post_handler nor a break_handler, and the copied instruction can be executed directly.

I sorted out which instructions can be executed directly and which cannot:

- Call instructions are NG (cannot be executed directly). We must correct the return address pushed onto the top of the stack.
- Indirect instructions except absolute indirect jumps are NG. Those instructions change EIP unpredictably; we must check EIP and correct it.
- Instructions that change EIP beyond the range of the instruction buffer are NG.
- Instructions that change EIP into the tail 5 bytes of the instruction buffer (the size of a jump instruction) are NG, because we must write a jump instruction there that goes back to the original kernel code.
- The breakpoint instruction is NG. We should not touch EIP and should pass control to the other handlers.
- Absolute direct/indirect jumps are OK.
- Conditional jumps are NG.
- Halt and software interrupts are NG, because they would stay inside kprobes' instruction buffer.
- Prefixes are NG.
- Unknown/reserved opcodes are NG.
- Other 1-byte instructions are OK, but they need jump-back code.
- 2-byte instructions are mapped sparsely, so this patch does not boost them in this release.

From Intel's IA-32 opcode map, described in the IA-32 Intel Architecture Software Developer's Manual Vol. 2B, I determined that the following opcodes are not boostable:

- 0FH (2-byte escape)
- 70H-7FH (jump on condition)
- 9AH (call) and 9CH (pushf)
- C0H-C1H (Grp2: includes reserved opcodes)
- C6H-C7H (Grp11: includes reserved opcodes)
- CCH-CEH (software interrupts)
- D0H-D3H (Grp2: includes reserved opcodes)
- D6H (reserved)
- D8H-DFH (coprocessor)
- E0H-E3H (loop/conditional jump)
- E8H (call)
- F0H-F3H (prefixes and reserved)
- F4H (halt)
- F6H-F7H (Grp3: includes reserved opcodes)
- FEH-FFH (Grp4,5: includes reserved opcodes)

Kprobe-booster checks whether the target instruction can be boosted (executed directly) in arch_prepare_kprobe(). If the target instruction can be boosted, it clears the "boostable" flag (sets it to 0); if not, it sets the "boostable" flag to -1, the disabled state. In resume_execution(), if the "boostable" flag is 0, kprobe-booster measures the size of the target instruction and sets the "boostable" flag to 1. In kprobe_handler(), kprobes checks the "boostable" flag; if the flag is 1, it resets the current kprobe and executes the instruction buffer directly instead of single stepping.

When unregistering a boosted kprobe, synchronize_sched() is called after the "int3" is removed, so after synchronize_sched() returns we can ensure the following:

- interrupt handlers have finished on all CPUs;
- the instruction buffer is not being executed on any CPU.

Then we can release the boosted kprobe safely. In addition, on a preemptible kernel the booster is not enabled at probe points where kernel preemption is enabled, so no preempted thread can be left inside the instruction buffer.

The description of kretprobe-booster:
====================================

In normal operation, kretprobe makes a target function return to the trampoline code, and a kprobe (called trampoline_probe) is inserted at that trampoline code. When the kernel hits this kprobe, it calls kretprobe's handler and then returns to the original return address.

The kretprobe-booster patch removes the trampoline_probe. It lets the trampoline code call kretprobe's handler directly instead of invoking a kprobe, and the trampoline code then returns to the original return address. This new trampoline code stores and restores registers, so the kretprobe handler is still able to access those registers.

Current kprobes has about 1.3 usec/probe(*) of overhead, and the kprobe-booster patch reduces it to 0.6 usec/probe(*). Current kretprobes has about 2.0 usec/probe(*) of overhead; the kprobe-booster patch reduces it to 1.3 usec/probe(*), and the combination of the kprobe-booster and kretprobe-booster patches reduces it to 0.9 usec/probe(*). I expect the combination of both patches to cut the probing overhead roughly in half.

(*) Performance numbers strongly depend on the processor model.

Andrew Morton wrote:

> These preempt tricks look rather nasty. Can you please describe what the
> problem is, precisely? And how this code avoids it? Perhaps we can find
> something cleaner.

The problem is how to remove the copied instructions of the kprobe *safely* on a preemptible kernel (CONFIG_PREEMPT=y). Kprobes basically performs the following actions:

(1) int3
(2) preempt_disable()
(3) kprobe_prehandler()
(4) copied instruction (single step)
(5) kprobe_posthandler()
(6) preempt_enable()
(7) return to the original code

During the execution of the copied instruction, preemption is disabled (from step (2) to step (6)). When unregistering the probe, kprobes waits for an RCU quiescent state by using synchronize_sched() after removing the int3 instruction; thus we can ensure the copied instruction is no longer being executed.

On the other hand, kprobe-booster performs the following actions:

(1) int3
(2) preempt_disable()
(3) kprobe_prehandler()
(4) preempt_enable()  <-- this one is added by my patch
(5) copied instruction (direct execution)
(6) jmp back to the original code

The problem is that we have no way to prevent preemption at step (5) or (6). We cannot call preempt_disable() after step (6), because there is no room to do that. Thus, another process may be preempted at step (5) or (6) on a preemptible kernel, and I could not find an easy way to ensure that other processes' stacks do *not* contain those addresses. (I thought of some ways to do that, but they are all very costly.) So currently, I simply boost the kprobe only when preemption is already disabled at the probe point.

> Also, the patch adds a preempt_enable() but I don't see a corresponding
> preempt_disable(). Am I missing something?

It corresponds to the preempt_disable() at the top of kprobe_handler().
I copied the code of kprobe_handler() here:

	static int __kprobes kprobe_handler(struct pt_regs *regs)
	{
		struct kprobe *p;
		int ret = 0;
		kprobe_opcode_t *addr = NULL;
		unsigned long *lp;
		struct kprobe_ctlblk *kcb;

		/*
		 * We don't want to be preempted for the entire
		 * duration of kprobe processing
		 */
		preempt_disable();		<-- HERE
		kcb = get_kprobe_ctlblk();

Signed-off-by: Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
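For context, here is a minimal sketch (not part of this patch) of a probe registration that would qualify for boosting: it sets only a pre_handler and leaves post_handler and break_handler NULL, so the direct-execution path described above can be taken whenever can_boost() accepts the probed opcode. The probe address, module boilerplate, and printk format are illustrative assumptions, not part of the patch:

	#include <linux/module.h>
	#include <linux/kernel.h>
	#include <linux/kprobes.h>

	static struct kprobe kp;

	/* Observe only -- no fixups that would require a post_handler. */
	static int boosted_pre(struct kprobe *p, struct pt_regs *regs)
	{
		printk(KERN_INFO "kprobe hit at %p, eip=%lx\n", p->addr, regs->eip);
		return 0;
	}

	static int __init boosted_probe_init(void)
	{
		/* Placeholder probe address (e.g. looked up in System.map);
		 * any instruction that can_boost() accepts takes the fast path. */
		kp.addr = (kprobe_opcode_t *)0xc0123456;
		kp.pre_handler = boosted_pre;
		/* post_handler and break_handler stay NULL, a booster requirement. */
		return register_kprobe(&kp);
	}

	static void __exit boosted_probe_exit(void)
	{
		/* Per the description above, synchronize_sched() runs after the
		 * int3 is removed, so the instruction buffer can be freed safely. */
		unregister_kprobe(&kp);
	}

	module_init(boosted_probe_init);
	module_exit(boosted_probe_exit);
	MODULE_LICENSE("GPL");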
Diffstat (limited to 'arch/i386/kernel/kprobes.c')
-rw-r--r--  arch/i386/kernel/kprobes.c  92
1 files changed, 90 insertions, 2 deletions
diff --git a/arch/i386/kernel/kprobes.c b/arch/i386/kernel/kprobes.c
index b40614f5afe2..137bf612141b 100644
--- a/arch/i386/kernel/kprobes.c
+++ b/arch/i386/kernel/kprobes.c
@@ -41,6 +41,49 @@ void jprobe_return_end(void);
 DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
 DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
 
+/* insert a jmp code */
+static inline void set_jmp_op(void *from, void *to)
+{
+	struct __arch_jmp_op {
+		char op;
+		long raddr;
+	} __attribute__((packed)) *jop;
+	jop = (struct __arch_jmp_op *)from;
+	jop->raddr = (long)(to) - ((long)(from) + 5);
+	jop->op = RELATIVEJUMP_INSTRUCTION;
+}
+
+/*
+ * returns non-zero if opcodes can be boosted.
+ */
+static inline int can_boost(kprobe_opcode_t opcode)
+{
+	switch (opcode & 0xf0 ) {
+	case 0x70:
+		return 0; /* can't boost conditional jump */
+	case 0x90:
+		/* can't boost call and pushf */
+		return opcode != 0x9a && opcode != 0x9c;
+	case 0xc0:
+		/* can't boost undefined opcodes and soft-interruptions */
+		return (0xc1 < opcode && opcode < 0xc6) ||
+			(0xc7 < opcode && opcode < 0xcc) || opcode == 0xcf;
+	case 0xd0:
+		/* can boost AA* and XLAT */
+		return (opcode == 0xd4 || opcode == 0xd5 || opcode == 0xd7);
+	case 0xe0:
+		/* can boost in/out and (may be) jmps */
+		return (0xe3 < opcode && opcode != 0xe8);
+	case 0xf0:
+		/* clear and set flags can be boost */
+		return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
+	default:
+		/* currently, can't boost 2 bytes opcodes */
+		return opcode != 0x0f;
+	}
+}
+
+
 /*
  * returns non-zero if opcode modifies the interrupt flag.
  */
@@ -65,6 +108,11 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 
 	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
 	p->opcode = *p->addr;
+	if (can_boost(p->opcode)) {
+		p->ainsn.boostable = 0;
+	} else {
+		p->ainsn.boostable = -1;
+	}
 	return 0;
 }
 
@@ -158,6 +206,9 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
 	kprobe_opcode_t *addr = NULL;
 	unsigned long *lp;
 	struct kprobe_ctlblk *kcb;
+#ifdef CONFIG_PREEMPT
+	unsigned pre_preempt_count = preempt_count();
+#endif /* CONFIG_PREEMPT */
 
 	/*
 	 * We don't want to be preempted for the entire
@@ -252,6 +303,21 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
 		/* handler has already set things up, so skip ss setup */
 		return 1;
 
+	if (p->ainsn.boostable == 1 &&
+#ifdef CONFIG_PREEMPT
+	    !(pre_preempt_count) && /*
+				       * This enables booster when the direct
+				       * execution path aren't preempted.
+				       */
+#endif /* CONFIG_PREEMPT */
+	    !p->post_handler && !p->break_handler ) {
+		/* Boost up -- we can execute copied instructions directly */
+		reset_current_kprobe();
+		regs->eip = (unsigned long)p->ainsn.insn;
+		preempt_enable_no_resched();
+		return 1;
+	}
+
 ss_probe:
 	prepare_singlestep(p, regs);
 	kcb->kprobe_status = KPROBE_HIT_SS;
@@ -357,6 +423,8 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
  * 2) If the single-stepped instruction was a call, the return address
  * that is atop the stack is the address following the copied instruction.
  * We need to make it the address following the original instruction.
+ *
+ * This function also checks instruction size for preparing direct execution.
  */
 static void __kprobes resume_execution(struct kprobe *p,
 		struct pt_regs *regs, struct kprobe_ctlblk *kcb)
@@ -377,6 +445,7 @@ static void __kprobes resume_execution(struct kprobe *p,
 	case 0xca:
 	case 0xea:	/* jmp absolute -- eip is correct */
 		/* eip is already adjusted, no more changes required */
+		p->ainsn.boostable = 1;
 		goto no_change;
 	case 0xe8:	/* call relative - Fix return addr */
 		*tos = orig_eip + (*tos - copy_eip);
@@ -384,18 +453,37 @@ static void __kprobes resume_execution(struct kprobe *p,
 	case 0xff:
 		if ((p->ainsn.insn[1] & 0x30) == 0x10) {
 			/* call absolute, indirect */
-			/* Fix return addr; eip is correct. */
+			/*
+			 * Fix return addr; eip is correct.
+			 * But this is not boostable
+			 */
 			*tos = orig_eip + (*tos - copy_eip);
 			goto no_change;
 		} else if (((p->ainsn.insn[1] & 0x31) == 0x20) ||	/* jmp near, absolute indirect */
 			   ((p->ainsn.insn[1] & 0x31) == 0x21)) {	/* jmp far, absolute indirect */
-			/* eip is correct. */
+			/* eip is correct. And this is boostable */
+			p->ainsn.boostable = 1;
 			goto no_change;
 		}
 	default:
 		break;
 	}
 
+	if (p->ainsn.boostable == 0) {
+		if ((regs->eip > copy_eip) &&
+		    (regs->eip - copy_eip) + 5 < MAX_INSN_SIZE) {
+			/*
+			 * These instructions can be executed directly if it
+			 * jumps back to correct address.
+			 */
+			set_jmp_op((void *)regs->eip,
+				   (void *)orig_eip + (regs->eip - copy_eip));
+			p->ainsn.boostable = 1;
+		} else {
+			p->ainsn.boostable = -1;
+		}
+	}
+
 	regs->eip = orig_eip + (regs->eip - copy_eip);
 
 no_change:
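As a standalone illustration (not part of the patch), the jump that set_jmp_op() writes back to the original code is a plain 5-byte "jmp rel32"; the displacement is computed relative to the address just past the jump. The addresses below are made-up placeholders, the printf is only for demonstration, and the assumption that RELATIVEJUMP_INSTRUCTION expands to 0xe9 (the jmp rel32 opcode) is mine, since the header defining it is outside this diffstat-limited view:

	#include <stdio.h>

	/* Assumed value: 0xe9 is the one-byte opcode of jmp rel32. */
	#define RELATIVEJUMP_INSTRUCTION 0xe9

	int main(void)
	{
		/* Placeholder addresses: where the jmp is written (tail of the
		 * copied-instruction buffer) and the instruction following the
		 * original probe point. */
		unsigned long from = 0xd0875100UL;
		unsigned long to   = 0xc0112345UL;

		/* Same arithmetic as set_jmp_op(): rel32 is relative to
		 * from + 5, the byte right after the 5-byte jump. */
		long raddr = (long)to - ((long)from + 5);

		printf("opcode 0x%02x, rel32 = 0x%08lx\n",
		       RELATIVEJUMP_INSTRUCTION, (unsigned long)raddr);
		return 0;
	}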