From f79b348856fbaf77e4a0c5cb08a808e5879967a9 Mon Sep 17 00:00:00 2001
From: Dean Nelson
Date: Tue, 1 Nov 2005 10:21:51 -0600
Subject: [IA64] restrict CONFIG_SGI_SN_XP to IA64_GENERIC or IA64_SGI_SN2

Restrict CONFIG_SGI_SN_XP to IA64_GENERIC or IA64_SGI_SN2 kernels.

Signed-off-by: Dean Nelson
Signed-off-by: Tony Luck
---
 arch/ia64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

(limited to 'arch/ia64')

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 3b4248cff9a7..eb784046130d 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -191,6 +191,7 @@ config IOSAPIC

 config IA64_SGI_SN_XP
 	tristate "Support communication between SGI SSIs"
+	depends on IA64_GENERIC || IA64_SGI_SN2
 	select IA64_UNCACHED_ALLOCATOR
 	help
 	  An SGI machine can be divided into multiple Single System
--
cgit v1.2.2

From e1531b4218a7ccfc1b2234b87105201e5ebe1bbf Mon Sep 17 00:00:00 2001
From: "John W. Linville"
Date: Mon, 7 Nov 2005 00:57:54 -0800
Subject: [PATCH] ia64: re-implement dma_get_cache_alignment to avoid EXPORT_SYMBOL

The current ia64 implementation of dma_get_cache_alignment does not work
for modules because it relies on a symbol which is not exported. Direct
access to a global is a little ugly anyway, so this patch re-implements
dma_get_cache_alignment in a manner similar to what is currently used for
x86_64.

Signed-off-by: John W. Linville
Cc: "Luck, Tony"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/ia64/kernel/setup.c | 7 +++++++
 1 file changed, 7 insertions(+)

(limited to 'arch/ia64')

diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
index fc56ca2da358..3af6de36a482 100644
--- a/arch/ia64/kernel/setup.c
+++ b/arch/ia64/kernel/setup.c
@@ -92,6 +92,13 @@ extern void efi_initialize_iomem_resources(struct resource *,
 extern char _text[], _end[], _etext[];

 unsigned long ia64_max_cacheline_size;
+
+int dma_get_cache_alignment(void)
+{
+	return ia64_max_cacheline_size;
+}
+EXPORT_SYMBOL(dma_get_cache_alignment);
+
 unsigned long ia64_iobase;	/* virtual address for I/O accesses */
 EXPORT_SYMBOL(ia64_iobase);
 struct io_space io_space[MAX_IO_SPACES];
--
cgit v1.2.2

From cd6b0762a04978baf48412456a687842de97e381 Mon Sep 17 00:00:00 2001
From: Prasanna S Panchamukhi
Date: Mon, 7 Nov 2005 00:59:14 -0800
Subject: [PATCH] Move Kprobes and Oprofile to "Instrumentation Support" menu

Andrew Morton suggested moving kprobes out of the kernel hacking menu,
since the kernel hacking menu is inappropriate for Kprobes. This patch
moves Kprobes and Oprofile under the instrumentation menu.

(akpm: it's not a natural fit, but things like djprobes and the s390 guys'
statistics library need a home)

Signed-off-by: Prasanna S Panchamukhi
Cc: Philippe Elie
Cc: John Levon
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/ia64/Kconfig          | 13 +++++++++++++
 arch/ia64/Kconfig.debug    | 11 -----------
 arch/ia64/oprofile/Kconfig |  6 ------
 3 files changed, 13 insertions(+), 17 deletions(-)

(limited to 'arch/ia64')

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 3b4248cff9a7..9f2093c1f44b 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -426,8 +426,21 @@ config GENERIC_PENDING_IRQ

 source "arch/ia64/hp/sim/Kconfig"

+menu "Instrumentation Support"
+	depends on EXPERIMENTAL
+
 source "arch/ia64/oprofile/Kconfig"

+config KPROBES
+	bool "Kprobes (EXPERIMENTAL)"
+	help
+	  Kprobes allows you to trap at almost any kernel address and
+	  execute a callback function. register_kprobe() establishes
+	  a probepoint and specifies the callback. Kprobes is useful
+	  for kernel debugging, non-intrusive instrumentation and testing.
+	  If in doubt, say "N".
+endmenu
+
 source "arch/ia64/Kconfig.debug"

 source "security/Kconfig"

diff --git a/arch/ia64/Kconfig.debug b/arch/ia64/Kconfig.debug
index fda67ac993d7..de9d507ba0fd 100644
--- a/arch/ia64/Kconfig.debug
+++ b/arch/ia64/Kconfig.debug
@@ -2,17 +2,6 @@ menu "Kernel hacking"

 source "lib/Kconfig.debug"

-config KPROBES
-	bool "Kprobes"
-	depends on DEBUG_KERNEL
-	help
-	  Kprobes allows you to trap at almost any kernel address and
-	  execute a callback function. register_kprobe() establishes
-	  a probepoint and specifies the callback. Kprobes is useful
-	  for kernel debugging, non-intrusive instrumentation and testing.
-	  If in doubt, say "N".
-
-
 choice
 	prompt "Physical memory granularity"
 	default IA64_GRANULE_64MB

diff --git a/arch/ia64/oprofile/Kconfig b/arch/ia64/oprofile/Kconfig
index 56e6f614b04a..97271ab484dc 100644
--- a/arch/ia64/oprofile/Kconfig
+++ b/arch/ia64/oprofile/Kconfig
@@ -1,7 +1,3 @@
-
-menu "Profiling support"
-	depends on EXPERIMENTAL
-
 config PROFILING
 	bool "Profiling support (EXPERIMENTAL)"
 	help
@@ -22,5 +18,3 @@ config OPROFILE

 	  If unsure, say N.

-endmenu
-
--
cgit v1.2.2

From 66ff2d0691e00e1e7bfdf398a970310c9a0fe671 Mon Sep 17 00:00:00 2001
From: Ananth N Mavinakayanahalli
Date: Mon, 7 Nov 2005 01:00:07 -0800
Subject: [PATCH] Kprobes: rearrange preempt_disable/enable() calls

The following set of patches are aimed at improving kprobes scalability.
We currently serialize kprobe registration, unregistration and handler
execution using a single spinlock - kprobe_lock. With these changes,
kprobe handlers can run without any locks held. It also allows for
simultaneous kprobe handler executions on different processors as we now
track kprobe execution on a per processor basis. It is now necessary that
the handlers be re-entrant since handlers can run concurrently on multiple
processors.

All changes have been tested on i386, ia64, ppc64 and x86_64, while
sparc64 has been compile tested only.

The patches can be viewed as 3 logical chunks:

patch 1: Reorder preempt_(dis/en)able calls
patches 2-7: Introduce per_cpu data areas to track kprobe execution
patches 8-9: Use RCU to synchronize kprobe (un)registration and handler
execution.

Thanks to Maneesh Soni, James Keniston and Anil Keshavamurthy for their
review and suggestions.

Thanks again to Anil, Hien Nguyen and Kevin Stafford for testing the
patches.

This patch:

Reorder preempt_disable/enable() calls in arch kprobes files in
preparation to introduce locking changes. No functional changes
introduced by this patch.

Signed-off-by: Ananth N Mavinakayanahalli
Signed-off-by: Anil S Keshavamurthy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/ia64/kernel/kprobes.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

(limited to 'arch/ia64')

diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c
index 471086b808a4..1e80ec80dd21 100644
--- a/arch/ia64/kernel/kprobes.c
+++ b/arch/ia64/kernel/kprobes.c
@@ -395,7 +395,7 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 	/*
 	 * By returning a non-zero value, we are telling
 	 * kprobe_handler() that we have handled unlocking
-	 * and re-enabling preemption.
+	 * and re-enabling preemption
 	 */
 	return 1;
 }
@@ -607,8 +607,6 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 	struct pt_regs *regs = args->regs;
 	kprobe_opcode_t *addr = (kprobe_opcode_t *)instruction_pointer(regs);

-	preempt_disable();
-
 	/* Handle recursion cases */
 	if (kprobe_running()) {
 		p = get_kprobe(addr);
@@ -665,6 +663,11 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 		goto no_kprobe;
 	}

+	/*
+	 * This preempt_disable() matches the preempt_enable_no_resched()
+	 * in post_kprobes_handler()
+	 */
+	preempt_disable();
 	kprobe_status = KPROBE_HIT_ACTIVE;
 	set_current_kprobe(p);

@@ -682,7 +685,6 @@ ss_probe:
 	return 1;

 no_kprobe:
-	preempt_enable_no_resched();
 	return ret;
 }

@@ -733,22 +735,26 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
 		unsigned long val, void *data)
 {
 	struct die_args *args = (struct die_args *)data;
+	int ret = NOTIFY_DONE;
+
+	preempt_disable();
 	switch(val) {
 	case DIE_BREAK:
 		if (pre_kprobes_handler(args))
-			return NOTIFY_STOP;
+			ret = NOTIFY_STOP;
 		break;
 	case DIE_SS:
 		if (post_kprobes_handler(args->regs))
-			return NOTIFY_STOP;
+			ret = NOTIFY_STOP;
 		break;
 	case DIE_PAGE_FAULT:
 		if (kprobes_fault_handler(args->regs, args->trapnr))
-			return NOTIFY_STOP;
+			ret = NOTIFY_STOP;
 	default:
 		break;
 	}
-	return NOTIFY_DONE;
+	preempt_enable();
+	return ret;
 }

 int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
--
cgit v1.2.2

From 8a5c4dc5e5d72b7802f5647082ccf3861a94f013 Mon Sep 17 00:00:00 2001
From: Ananth N Mavinakayanahalli
Date: Mon, 7 Nov 2005 01:00:09 -0800
Subject: [PATCH] Kprobes: Track kprobe on a per_cpu basis - ia64 changes

IA64 changes to track kprobe execution on a per-cpu basis. We now track
the kprobe state machine independently on each cpu using an arch specific
kprobe control block.

Signed-off-by: Ananth N Mavinakayanahalli
Signed-off-by: Anil S Keshavamurthy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/ia64/kernel/kprobes.c | 83 +++++++++++++++++++++++++---------------------
 1 file changed, 45 insertions(+), 38 deletions(-)

(limited to 'arch/ia64')

diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c
index 1e80ec80dd21..17e70b1b8d79 100644
--- a/arch/ia64/kernel/kprobes.c
+++ b/arch/ia64/kernel/kprobes.c
@@ -38,13 +38,8 @@ extern void jprobe_inst_return(void);

-/* kprobe_status settings */
-#define KPROBE_HIT_ACTIVE	0x00000001
-#define KPROBE_HIT_SS		0x00000002
-
-static struct kprobe *current_kprobe, *kprobe_prev;
-static unsigned long kprobe_status, kprobe_status_prev;
-static struct pt_regs jprobe_saved_regs;
+DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
+DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);

 enum instruction_type {A, I, M, F, B, L, X, u};
 static enum instruction_type bundle_encoding[32][3] = {
@@ -313,21 +308,22 @@ static int __kprobes valid_kprobe_addr(int template, int slot,
 	return 0;
 }

-static inline void save_previous_kprobe(void)
+static inline void save_previous_kprobe(struct kprobe_ctlblk *kcb)
 {
-	kprobe_prev = current_kprobe;
-	kprobe_status_prev = kprobe_status;
+	kcb->prev_kprobe.kp = kprobe_running();
+	kcb->prev_kprobe.status = kcb->kprobe_status;
 }

-static inline void restore_previous_kprobe(void)
+static inline void restore_previous_kprobe(struct kprobe_ctlblk *kcb)
 {
-	current_kprobe = kprobe_prev;
-	kprobe_status = kprobe_status_prev;
+	__get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp;
+	kcb->kprobe_status = kcb->prev_kprobe.status;
 }

-static inline void set_current_kprobe(struct kprobe *p)
+static inline void set_current_kprobe(struct kprobe *p,
+			struct kprobe_ctlblk *kcb)
 {
-	current_kprobe = p;
+	__get_cpu_var(current_kprobe) = p;
 }

 static void kretprobe_trampoline(void)
@@ -389,6 +385,7 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 	BUG_ON(!orig_ret_address || (orig_ret_address == trampoline_address));

 	regs->cr_iip = orig_ret_address;
+	reset_current_kprobe();
 	unlock_kprobes();
 	preempt_enable_no_resched();

@@ -606,12 +603,13 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 	int ret = 0;
 	struct pt_regs *regs = args->regs;
 	kprobe_opcode_t *addr = (kprobe_opcode_t *)instruction_pointer(regs);
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

 	/* Handle recursion cases */
 	if (kprobe_running()) {
 		p = get_kprobe(addr);
 		if (p) {
-			if ( (kprobe_status == KPROBE_HIT_SS) &&
+			if ((kcb->kprobe_status == KPROBE_HIT_SS) &&
 				(p->ainsn.inst_flag == INST_FLAG_BREAK_INST)) {
 				ia64_psr(regs)->ss = 0;
 				unlock_kprobes();
@@ -623,17 +621,17 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 			 * just single step on the instruction of the new probe
 			 * without calling any user handlers.
 			 */
-			save_previous_kprobe();
-			set_current_kprobe(p);
+			save_previous_kprobe(kcb);
+			set_current_kprobe(p, kcb);
 			p->nmissed++;
 			prepare_ss(p, regs);
-			kprobe_status = KPROBE_REENTER;
+			kcb->kprobe_status = KPROBE_REENTER;
 			return 1;
 		} else if (args->err == __IA64_BREAK_JPROBE) {
 			/*
 			 * jprobe instrumented function just completed
 			 */
-			p = current_kprobe;
+			p = __get_cpu_var(current_kprobe);
 			if (p->break_handler && p->break_handler(p, regs)) {
 				goto ss_probe;
 			}
@@ -668,8 +666,8 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 	 * in post_kprobes_handler()
 	 */
 	preempt_disable();
-	kprobe_status = KPROBE_HIT_ACTIVE;
-	set_current_kprobe(p);
+	set_current_kprobe(p, kcb);
+	kcb->kprobe_status = KPROBE_HIT_ACTIVE;

 	if (p->pre_handler && p->pre_handler(p, regs))
 		/*
@@ -681,7 +679,7 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)

 ss_probe:
 	prepare_ss(p, regs);
-	kprobe_status = KPROBE_HIT_SS;
+	kcb->kprobe_status = KPROBE_HIT_SS;
 	return 1;

 no_kprobe:
@@ -690,22 +688,25 @@ no_kprobe:

 static int __kprobes post_kprobes_handler(struct pt_regs *regs)
 {
-	if (!kprobe_running())
+	struct kprobe *cur = kprobe_running();
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	if (!cur)
 		return 0;

-	if ((kprobe_status != KPROBE_REENTER) && current_kprobe->post_handler) {
-		kprobe_status = KPROBE_HIT_SSDONE;
-		current_kprobe->post_handler(current_kprobe, regs, 0);
+	if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) {
+		kcb->kprobe_status = KPROBE_HIT_SSDONE;
+		cur->post_handler(cur, regs, 0);
 	}

-	resume_execution(current_kprobe, regs);
+	resume_execution(cur, regs);

 	/*Restore back the original saved kprobes variables and continue. */
-	if (kprobe_status == KPROBE_REENTER) {
-		restore_previous_kprobe();
+	if (kcb->kprobe_status == KPROBE_REENTER) {
+		restore_previous_kprobe(kcb);
 		goto out;
 	}
-
+	reset_current_kprobe();
 	unlock_kprobes();

 out:
@@ -715,15 +716,18 @@ out:

 static int __kprobes kprobes_fault_handler(struct pt_regs *regs, int trapnr)
 {
-	if (!kprobe_running())
+	struct kprobe *cur = kprobe_running();
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	if (!cur)
 		return 0;

-	if (current_kprobe->fault_handler &&
-		current_kprobe->fault_handler(current_kprobe, regs, trapnr))
+	if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
 		return 1;

-	if (kprobe_status & KPROBE_HIT_SS) {
-		resume_execution(current_kprobe, regs);
+	if (kcb->kprobe_status & KPROBE_HIT_SS) {
+		resume_execution(cur, regs);
+		reset_current_kprobe();
 		unlock_kprobes();
 		preempt_enable_no_resched();
 	}
@@ -761,9 +765,10 @@ int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 {
 	struct jprobe *jp = container_of(p, struct jprobe, kp);
 	unsigned long addr = ((struct fnptr *)(jp->entry))->ip;
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

 	/* save architectural state */
-	jprobe_saved_regs = *regs;
+	kcb->jprobe_saved_regs = *regs;

 	/* after rfi, execute the jprobe instrumented function */
 	regs->cr_iip = addr & ~0xFULL;
@@ -781,7 +786,9 @@ int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)

 int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 {
-	*regs = jprobe_saved_regs;
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	*regs = kcb->jprobe_saved_regs;
 	return 1;
 }
--
cgit v1.2.2

From 991a51d83a3d9bebfafdd1e692cf310899d60791 Mon Sep 17 00:00:00 2001
From: Ananth N Mavinakayanahalli
Date: Mon, 7 Nov 2005 01:00:14 -0800
Subject: [PATCH] Kprobes: Use RCU for (un)register synchronization - arch changes

Changes to the arch
kprobes infrastructure to take advantage of the locking changes introduced
by usage of RCU for synchronization. All handlers are now run without any
locks held, so they have to be re-entrant or provide their own
synchronization.

Signed-off-by: Ananth N Mavinakayanahalli
Signed-off-by: Anil S Keshavamurthy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/ia64/kernel/kprobes.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

(limited to 'arch/ia64')

diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c
index 17e70b1b8d79..fddbac32d44a 100644
--- a/arch/ia64/kernel/kprobes.c
+++ b/arch/ia64/kernel/kprobes.c
@@ -26,7 +26,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -343,10 +342,11 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 	struct kretprobe_instance *ri = NULL;
 	struct hlist_head *head;
 	struct hlist_node *node, *tmp;
-	unsigned long orig_ret_address = 0;
+	unsigned long flags, orig_ret_address = 0;
 	unsigned long trampoline_address =
 		((struct fnptr *)kretprobe_trampoline)->ip;

+	spin_lock_irqsave(&kretprobe_lock, flags);
 	head = kretprobe_inst_table_head(current);

 	/*
@@ -386,7 +386,7 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 	regs->cr_iip = orig_ret_address;

 	reset_current_kprobe();
-	unlock_kprobes();
+	spin_unlock_irqrestore(&kretprobe_lock, flags);
 	preempt_enable_no_resched();

 	/*
@@ -397,6 +397,7 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 	return 1;
 }

+/* Called with kretprobe_lock held */
 void __kprobes arch_prepare_kretprobe(struct kretprobe *rp,
 	struct pt_regs *regs)
 {
@@ -612,7 +613,6 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 			if ((kcb->kprobe_status == KPROBE_HIT_SS) &&
 				(p->ainsn.inst_flag == INST_FLAG_BREAK_INST)) {
 				ia64_psr(regs)->ss = 0;
-				unlock_kprobes();
 				goto no_kprobe;
 			}
 			/* We have reentered the pre_kprobe_handler(), since
@@ -641,10 +641,8 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 		}
 	}

-	lock_kprobes();
 	p = get_kprobe(addr);
 	if (!p) {
-		unlock_kprobes();
 		if (!is_ia64_break_inst(regs)) {
 			/*
 			 * The breakpoint instruction was removed right
@@ -707,7 +705,6 @@ static int __kprobes post_kprobes_handler(struct pt_regs *regs)
 		goto out;
 	}
 	reset_current_kprobe();
-	unlock_kprobes();

 out:
 	preempt_enable_no_resched();
@@ -728,7 +725,6 @@ static int __kprobes kprobes_fault_handler(struct pt_regs *regs, int trapnr)
 	if (kcb->kprobe_status & KPROBE_HIT_SS) {
 		resume_execution(cur, regs);
 		reset_current_kprobe();
-		unlock_kprobes();
 		preempt_enable_no_resched();
 	}

@@ -741,7 +737,7 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
 	struct die_args *args = (struct die_args *)data;
 	int ret = NOTIFY_DONE;

-	preempt_disable();
+	rcu_read_lock();
 	switch(val) {
 	case DIE_BREAK:
 		if (pre_kprobes_handler(args))
@@ -757,7 +753,7 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
 	default:
 		break;
 	}
-	preempt_enable();
+	rcu_read_unlock();
 	return ret;
 }
--
cgit v1.2.2

From d217d5450f11d8c907c0458d175b0dc999b4d06d Mon Sep 17 00:00:00 2001
From: Ananth N Mavinakayanahalli
Date: Mon, 7 Nov 2005 01:00:14 -0800
Subject: [PATCH] Kprobes: preempt_disable/enable() simplification

Reorganize the preempt_disable/enable calls to eliminate the extra preempt
depth. Changes based on Paul McKenney's review suggestions for the kprobes
RCU changeset.

Signed-off-by: Ananth N Mavinakayanahalli
Signed-off-by: Anil S Keshavamurthy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/ia64/kernel/kprobes.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

(limited to 'arch/ia64')

diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c
index fddbac32d44a..96736a119c91 100644
--- a/arch/ia64/kernel/kprobes.c
+++ b/arch/ia64/kernel/kprobes.c
@@ -389,11 +389,11 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 	spin_unlock_irqrestore(&kretprobe_lock, flags);
 	preempt_enable_no_resched();

-	/*
-	 * By returning a non-zero value, we are telling
-	 * kprobe_handler() that we have handled unlocking
-	 * and re-enabling preemption
-	 */
+	/*
+	 * By returning a non-zero value, we are telling
+	 * kprobe_handler() that we don't want the post_handler
+	 * to run (and have re-enabled preemption)
+	 */
 	return 1;
 }

@@ -604,7 +604,14 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 	int ret = 0;
 	struct pt_regs *regs = args->regs;
 	kprobe_opcode_t *addr = (kprobe_opcode_t *)instruction_pointer(regs);
-	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	struct kprobe_ctlblk *kcb;
+
+	/*
+	 * We don't want to be preempted for the entire
+	 * duration of kprobe processing
+	 */
+	preempt_disable();
+	kcb = get_kprobe_ctlblk();

 	/* Handle recursion cases */
 	if (kprobe_running()) {
@@ -659,11 +666,6 @@ static int __kprobes pre_kprobes_handler(struct die_args *args)
 		goto no_kprobe;
 	}

-	/*
-	 * This preempt_disable() matches the preempt_enable_no_resched()
-	 * in post_kprobes_handler()
-	 */
-	preempt_disable();
 	set_current_kprobe(p, kcb);
 	kcb->kprobe_status = KPROBE_HIT_ACTIVE;

@@ -681,6 +683,7 @@ ss_probe:
 	return 1;

 no_kprobe:
+	preempt_enable_no_resched();
 	return ret;
 }

@@ -716,9 +719,6 @@ static int __kprobes kprobes_fault_handler(struct pt_regs *regs, int trapnr)
 	struct kprobe *cur = kprobe_running();
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

-	if (!cur)
-		return 0;
-
 	if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
 		return 1;

@@ -737,7 +737,6 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
 	struct die_args *args = (struct die_args *)data;
 	int ret = NOTIFY_DONE;

-	rcu_read_lock();
 	switch(val) {
 	case DIE_BREAK:
 		if (pre_kprobes_handler(args))
@@ -748,12 +747,15 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
 			ret = NOTIFY_STOP;
 		break;
 	case DIE_PAGE_FAULT:
-		if (kprobes_fault_handler(args->regs, args->trapnr))
+		/* kprobe_running() needs smp_processor_id() */
+		preempt_disable();
+		if (kprobe_running() &&
+			kprobes_fault_handler(args->regs, args->trapnr))
 			ret = NOTIFY_STOP;
+		preempt_enable();
 	default:
 		break;
 	}
-	rcu_read_unlock();
 	return ret;
 }

@@ -785,6 +787,9 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

 	*regs = kcb->jprobe_saved_regs;
+	preempt_enable_no_resched();
 	return 1;
 }
--
cgit v1.2.2

From 9e173c031a7542b1f66b6da853772e5de1804399 Mon Sep 17 00:00:00 2001
From: Nishanth Aravamudan
Date: Mon, 7 Nov 2005 01:01:11 -0800
Subject: [PATCH] ia64: fix-up schedule_timeout() usage

Use schedule_timeout_interruptible() instead of
set_current_state()/schedule_timeout() to reduce kernel size.

Signed-off-by: Nishanth Aravamudan
Cc: "Luck, Tony"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/ia64/hp/sim/simserial.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

(limited to 'arch/ia64')

diff --git a/arch/ia64/hp/sim/simserial.c b/arch/ia64/hp/sim/simserial.c
index b42ec37be51c..19ee635eeb70 100644
--- a/arch/ia64/hp/sim/simserial.c
+++ b/arch/ia64/hp/sim/simserial.c
@@ -642,10 +642,8 @@ static void rs_close(struct tty_struct *tty, struct file * filp)
 	info->event = 0;
 	info->tty = 0;
 	if (info->blocked_open) {
-		if (info->close_delay) {
-			current->state = TASK_INTERRUPTIBLE;
-			schedule_timeout(info->close_delay);
-		}
+		if (info->close_delay)
+			schedule_timeout_interruptible(info->close_delay);
 		wake_up_interruptible(&info->open_wait);
 	}
 	info->flags &= ~(ASYNC_NORMAL_ACTIVE|ASYNC_CLOSING);
--
cgit v1.2.2

From b2325fe1b7e5654fac9e9419423aa2c58a3dbd83 Mon Sep 17 00:00:00 2001
From: Jesper Juhl
Date: Mon, 7 Nov 2005 01:01:35 -0800
Subject: [PATCH] kfree cleanup: arch

This is the arch/ part of the big kfree cleanup patch.

Remove pointless checks for NULL prior to calling kfree() in arch/.

Signed-off-by: Jesper Juhl
Acked-by: Grant Grundler
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/ia64/kernel/perfmon.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'arch/ia64')

diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index f7dfc107cb7b..410d4804fa6e 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -4940,7 +4940,7 @@ abort_locked:
 	if (call_made && PFM_CMD_RW_ARG(cmd) && copy_to_user(arg, args_k, base_sz*count)) ret = -EFAULT;

 error_args:
-	if (args_k) kfree(args_k);
+	kfree(args_k);

 	DPRINT(("cmd=%s ret=%ld\n", PFM_CMD_NAME(cmd), ret));
--
cgit v1.2.2
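
The kfree cleanup in the last patch works because kfree(NULL) is defined to be a
no-op, so guarding the call with a NULL check adds nothing. Below is a minimal,
hypothetical sketch of the pattern in kernel C; the helper name and error path are
illustrative only and are not taken from perfmon.c:

#include <linux/slab.h>
#include <linux/string.h>

/* Hypothetical helper, for illustration only. */
static int example_copy(const void *src, size_t len)
{
	void *buf = NULL;

	if (len) {
		buf = kmalloc(len, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;
		memcpy(buf, src, len);
		/* ... use buf ... */
	}

	/* kfree(NULL) is a no-op, so no "if (buf)" guard is needed. */
	kfree(buf);
	return 0;
}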