* perf counters: add support for group counters (Ingo Molnar, 2008-12-11)

    Impact: add group counters

    This patch adds the "counter groups" abstraction.

    Groups of counters behave much like normal 'single' counters, with a few semantic and behavioral extensions on top of that.

    A counter group is created by creating a new counter with the open() syscall's group-leader group_fd file descriptor parameter pointing to another, already existing counter.

    Groups of counters are scheduled in and out in one atomic group, and they are also round-robin-scheduled atomically.

    Counters that are members of a group can also record events with an (atomic) extended timestamp that extends to all members of the group, if the record type is set to PERF_RECORD_GROUP.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
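    As a hedged illustration of the group_fd semantics: the syscall number, the perf_counter_hw_event layout, and the event-type values below are all assumptions; only the group_fd behavior comes from the commit text.

        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/syscall.h>

        #define __NR_perf_counter_open 333      /* placeholder syscall number */

        struct perf_counter_hw_event {          /* assumed minimal layout */
                unsigned long long type;
                unsigned long long irq_period;
                unsigned int record_type;
        };

        static int perf_counter_open(struct perf_counter_hw_event *ev,
                                     pid_t pid, int cpu, int group_fd)
        {
                return syscall(__NR_perf_counter_open, ev, pid, cpu, group_fd);
        }

        int main(void)
        {
                struct perf_counter_hw_event cycles = { .type = 0 };  /* assumed: cycles */
                struct perf_counter_hw_event insns  = { .type = 1 };  /* assumed: instructions */

                /* group_fd == -1: this counter becomes a group leader */
                int leader = perf_counter_open(&cycles, 0, -1, -1);

                /* group_fd == leader: join the leader's group; the whole
                 * group is then scheduled in and out atomically */
                int member = perf_counter_open(&insns, 0, -1, leader);

                return (leader < 0 || member < 0);
        }
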
* perf counters: restructure the API (Ingo Molnar, 2008-12-11)

    Impact: clean up new API

    Thorough cleanup of the new perf counters API, we now get clean separation of the various concepts:

    - introduce perf_counter_hw_event to separate out the event source details

    - move special type flags into separate attributes: PERF_COUNT_NMI, PERF_COUNT_RAW

    - extend the type to u64 and reserve it fully to the architecture in the raw type case.

    And make use of all these changes in the core and x86 perfcounters code.

    Also change the syscall signature to:

        asmlinkage int sys_perf_counter_open(

                struct perf_counter_hw_event    *hw_event_uptr          __user,
                pid_t                           pid,
                int                             cpu,
                int                             group_fd);

    ( Note that group_fd is unused for now - it's reserved for the counter groups abstraction. )

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perf counters: expand use of counter->event (Thomas Gleixner, 2008-12-11)

    Impact: change syscall, cleanup

    Make use of the new perf_counters event type.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perf counters: clean up 'raw' type API (Thomas Gleixner, 2008-12-11)

    Impact: cleanup

    Introduce a separate hw_event type.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perf counters: protect them against CSTATE transitions (Thomas Gleixner, 2008-12-11)

    Impact: fix rare lost events problem

    There are CPUs whose performance counters misbehave on CSTATE transitions, so provide a way to just disable/enable them around deep idle methods.

    (hw_perf_enable_all() is cheap on x86.)

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
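    A sketch of the intended use around a deep idle method; hw_perf_enable_all() is named in the commit, while the disable-side helper and the idle routine below are assumed names:

        /* Sketch: bracket a deep C-state entry with a disable/enable pair. */
        static void deep_idle(void)
        {
                hw_perf_disable_all();  /* assumed counterpart to hw_perf_enable_all() */
                enter_deep_cstate();    /* hypothetical deep idle method */
                hw_perf_enable_all();   /* cheap on x86, per the commit */
        }
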
* perfcounters: consolidate global-disable codepaths (Ingo Molnar, 2008-12-09)

    Impact: cleanup

    Simplify global disable handling.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perfcounters, x86: clean up debug code (Ingo Molnar, 2008-12-09)

    Impact: cleanup

    Get rid of unused debug code.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perfcounters, x86: simplify disable/enable of counters (Ingo Molnar, 2008-12-09)

    Impact: fix spurious missed counter wakeups

    In the case of NMI events, close a race window that can occur if an NMI hits counter code that temporarily disables+enables a counter, and the NMI leaks into the disabled section.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perfcounters: select ANON_INODES (Ingo Molnar, 2008-12-08)

    The perfcounters subsystem depends on the CONFIG_ANON_INODES facility, so make sure it's selected.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, perfcounters: read out MSR_CORE_PERF_GLOBAL_STATUS with counters disabled (Ingo Molnar, 2008-12-08)

    Impact: make perfcounter NMI and IRQ sequence more robust

    Make __smp_perf_counter_interrupt() a bit more conservative: first disable all counters, then read out the status. Most invocations are because there are real events, so there's no performance impact.

    Code flow gets a bit simpler as well this way.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
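    The ordering might look roughly like the sketch below; the helper and the parameter are illustrative, but MSR_CORE_PERF_GLOBAL_CTRL/STATUS are the real architectural MSRs and wrmsrl()/rdmsrl() the usual kernel accessors:

        /* Sketch only: disable first so the status snapshot is stable. */
        static void __smp_perf_counter_interrupt(u64 enabled_mask)
        {
                u64 status;

                wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);            /* disable all counters */
                rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status);     /* then read the status */

                handle_counter_overflows(status);                /* hypothetical helper */

                wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, enabled_mask); /* re-enable */
        }
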
* performance counters: x86 support (Ingo Molnar, 2008-12-08)

    Implement performance counters for x86 Intel CPUs.

    It's simplified right now: the PERFMON CPU feature is assumed, which is available in Core2 and later Intel CPUs.

    The design is flexible enough to be extended to more CPU types as well.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* performance counters: documentation (Ingo Molnar, 2008-12-08)

    Add more documentation about performance counters.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* performance counters: core code (Thomas Gleixner, 2008-12-08)

    Implement the core kernel bits of Performance Counters subsystem.

    The Linux Performance Counter subsystem provides an abstraction of performance counter hardware capabilities. It provides per task and per CPU counters, and it provides event capabilities on top of those.

    Performance counters are accessed via special file descriptors. There's one file descriptor per virtual counter used.

    The special file descriptor is opened via the perf_counter_open() system call:

        int perf_counter_open(u32 hw_event_type,
                              u32 hw_event_period,
                              u32 record_type,
                              pid_t pid,
                              int cpu);

    The syscall returns the new fd. The fd can be used via the normal VFS system calls: read() can be used to read the counter, fcntl() can be used to set the blocking mode, etc.

    Multiple counters can be kept open at a time, and the counters can be poll()ed.

    See more details in Documentation/perf-counters.txt.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
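    A minimal user-space consumer of this interface under the signature quoted above might look like the following sketch (the syscall number and the event-type value are assumptions):

        #include <poll.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        #define __NR_perf_counter_open 333      /* placeholder syscall number */

        int main(void)
        {
                /* hw_event_type 0 is assumed to mean CPU cycles here */
                int fd = syscall(__NR_perf_counter_open, 0 /* type */,
                                 0 /* period */, 0 /* record_type */,
                                 0 /* pid: self */, -1 /* cpu: any */);
                unsigned long long count;

                if (fd < 0)
                        return 1;

                /* read() returns the current counter value */
                if (read(fd, &count, sizeof(count)) == sizeof(count))
                        printf("count: %llu\n", count);

                /* counters can also be poll()ed for new events */
                struct pollfd pfd = { .fd = fd, .events = POLLIN };
                poll(&pfd, 1, 0);

                close(fd);
                return 0;
        }
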
*-. Merge branches 'x86/signal' and 'x86/irq' into perfcounters/core (Ingo Molnar, 2008-12-08)
|\ \
    Merge these pending x86 tree changes into the perfcounters tree to avoid conflicts.
| | * x86: ret_from_fork - get rid of jump back (Ingo Molnar, 2008-11-28)

    Impact: remove dead code

    If we take a closer look at the rff_trace/rff_action ret_from_fork code, we have to realize that it does all the wrong things: for example it checks the TIF flag - while later on jumping back to the ret-from-syscall path - duplicating the check needlessly.

    But checking for _TIF_SYSCALL_TRACE is completely unnecessary here because we clear that flag for every freshly forked task. So the whole "tracing" code here, for which there is an out-of-line jump optimization that makes it even harder to read, is in reality completely dead code ...

    Reported-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Tested-by: Cyrill Gorcunov <gorcunov@gmail.com>
| | * Merge branch 'x86/debug' into x86/irq (Ingo Molnar, 2008-11-28)
| | |\
    We merge this branch because x86/debug touches code that we started cleaning up in x86/irq. The two branches started out independent, but as an unexpected amount of activity went into x86/irq, they became dependent. Resolve that by this cross-merge.
| | | * x86, debug: remove the confusing entry in call trace (jia zhang, 2008-11-23)

    Impact: improve backtrace quality

    Avoid confusion in the call trace caused by the lack of padding at the tail of a function.

    When do_exit gets called, the return address behind the call instruction is pushed onto the stack. If something goes wrong in do_exit, then for x86_64 the entry "kernel_execve +0x00/0xXX" rather than "child_rip +0xYY/0xZZ" appears in the call trace.

    That looks confusing, so add a ud2 to make the return address still part of the original call site. (This also catches any instances of us returning from that function somehow.)

    Signed-off-by: jia zhang <jia.zhang2008@gmail.com>
    Acked-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * x86: clean up stack overflow debug check (Ingo Molnar, 2008-11-23)

    Impact: cleanup

    Simplify the irq-sampled stack overflow debug check:

    - eliminate an #ifdef

    - use WARN_ONCE() to emit a single warning (all bets are off after the first such warning anyway)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * x86_64: fix the check in stack_overflow_check (jia zhang, 2008-11-23)

    Impact: make stack overflow debug check and printout narrower

    stack_overflow_check() should consider the stack usage of pt_regs, and thus it could warn us in advance. Additionally, it looks better for the warning time to start at INITIAL_JIFFIES.

    Assuming that rsp gets close to the check point before an interrupt arrives: when the interrupt really happens, the thread_info will be partly overwritten.

    Signed-off-by: jia zhang <jia.zhang2008@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
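    Taken together, this and the WARN_ONCE() cleanup above suggest a check along the following lines; this is a hedged reconstruction, not the exact patched code, and the slack value is illustrative:

        /* Sketch: warn once when the irq-time stack pointer gets within
         * a small margin of the thread_info at the stack base, leaving
         * room for a pt_regs frame. */
        static void stack_overflow_check(struct pt_regs *regs)
        {
                u64 curbase = (u64)task_stack_page(current);

                WARN_ONCE(regs->sp >= curbase &&
                          regs->sp <  curbase + sizeof(struct thread_info) +
                                      sizeof(struct pt_regs) + 128,
                          "do_IRQ: stack pointer close to stack base\n");
        }
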
| | | * Merge commit 'v2.6.28-rc6' into x86/debug (Ingo Molnar, 2008-11-23)
| | | |\
| | | * | x86: warn of incorrect cpu_khz on AMD systems (Prarit Bhargava, 2008-11-12)

    Impact: add debug check

    If none of the perfctrs are free when calculating cpu_khz, we default to using ctr 3 (i.e., we just choose 3). This may lead to an incorrect tsc freq value, which can cause the system to be unstable.

    To aid in future debugging, WARN the user of a potential problem.

    Signed-off-by: Prarit Bhargava <prarit@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * | x86 debug: mark early_printk.o as notrace (Ingo Molnar, 2008-11-04)

    Impact: do not do function-tracing in the early-printk code

    This is useful when earlyprintk=vga,keep is used to debug tracer plugins.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * | x86: add two missing unwind annotations (Jan Beulich, 2008-10-30)

    Impact: improve debuginfo

    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: entry_64.S - trivial: space, comments fixup (Cyrill Gorcunov, 2008-11-28)

    Impact: cleanup

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: uv bau interrupt -- use proper interrupt number (Cyrill Gorcunov, 2008-11-28)

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Acked-by: Cliff Wickman <cpw@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: entry_64.S - use ENTRY to define child_rip (Cyrill Gorcunov, 2008-11-27)

    child_rip is not called by its name but rather indirectly, so make it global and aligned.

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Acked-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
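    For reference, ENTRY is the <linux/linkage.h> preprocessor helper that provides exactly those two properties; roughly (a sketch, not the verbatim kernel definition):

        /* Sketch of ENTRY(): export the symbol and align it, which is
         * what an indirectly called stub like child_rip needs.
         * ALIGN is linkage.h's companion alignment macro. */
        #define ENTRY(name)     \
                .globl name;    \
                ALIGN;          \
                name:
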
| | * | | x86: entry_64.S - use X86_EFLAGS_IF instead of hardcoded number (gorcunov@gmail.com, 2008-11-27)

    Impact: cleanup

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Acked-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | i386: get rid of the use of KPROBE_ENTRY / KPROBE_END (Alexander van Heukelum, 2008-11-27)

    entry_32.S is now the only user of KPROBE_ENTRY / KPROBE_END, treewide.

    This patch reorders entry_64.S and explicitly generates a separate section for functions that need the protection. The generated code before and after the patch is equal.

    The KPROBE_ENTRY and KPROBE_END macros are removed too.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86_64: get rid of the use of KPROBE_ENTRY / KPROBE_END (Alexander van Heukelum, 2008-11-27)

    Impact: clean up assembly macros and annotations - with some object impact

    entry_64.S is the only user of KPROBE_ENTRY / KPROBE_END on x86_64. This patch reorders entry_64.S and explicitly generates a separate section for functions that need the protection. The generated code before and after the patch is equal.

    Implicitly changing sections in assembly files makes it more difficult to follow why the assembler is doing certain things. For example,

        .p2align 5
        KPROBE_ENTRY(...)

    was not doing what you would expect. Other section changes (__ex_table, .fixup, .init.rodata) are done explicitly already.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Acked-by: Jan Beulich <jbeulich@novell.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: introduce ENTRY(KPROBE_ENTRY)_X86 assembly helpers to catch unbalanced declaration v3 (Cyrill Gorcunov, 2008-11-23)

    Impact: make ENTRY()/END() macros more capable

    It's useful to catch unbalanced, messed-up, or mixed declarations of ENTRY and KPROBES. These macros would help a bit.

    For example the following code would compile without problems:

        ENTRY_X86(mcount)
                retq
        END_X86(mcount)

    But if you forget and mix the forms, as in:

        ENTRY_X86(mcount)
                retq
        END(mcount)

        ENTRY_X86(ftrace_caller)

    the assembler will issue the following message:

        Error: ENTRY_X86/KPROBE_X86 unbalanced,missed,mixed

    Actually the checking is performed at every _X86 macro, so maybe it's a good idea to put ENTRY_KPROBE_FINAL_X86 at the end of a .S file to be sure you didn't miss anything.

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Cc: Alexander van Heukelum <heukelum@mailshack.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: KPROBE_ENTRY should be paired with KPROBE_END (Alexander van Heukelum, 2008-11-23)

    Impact: move some code out of .kprobes.text

    KPROBE_ENTRY switches code generation to .kprobes.text, and KPROBE_END uses .popsection to get back to the previous section (.text, normally).

    Also replace ENDPROC by END, for consistency.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
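    The pair in question were C-preprocessor helpers used from assembly; their definitions were roughly as in this sketch (reconstructed from the description above, not copied from linkage.h):

        /* Sketch: KPROBE_ENTRY pushes .kprobes.text, KPROBE_END pops
         * back to whatever section was active before. */
        #define KPROBE_ENTRY(name)                      \
                .pushsection .kprobes.text, "ax";       \
                ENTRY(name)

        #define KPROBE_END(name)                        \
                END(name);                              \
                .popsection
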
| | * | | x86: include ENTRY/END in entry handlers in entry_64.S (Alexander van Heukelum, 2008-11-23)

    Impact: cleanup of entry_64.S

    Except for the order and the place of the functions, this patch should not change the generated code.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: move dwarf2 related macro to dwarf2.h (Cyrill Gorcunov, 2008-11-23)

    Impact: cleanup

    Move the recently introduced dwarf2 macros to the dwarf2.h file. This allows us to not duplicate them in assembly files.

    Active usage of the _cfi macros doesn't make assembly files more obvious to understand, but we already have a lot of macros there which require searching for their definitions *anyway*. At least this makes every cfi usage one line shorter.

    Also some code alignment is done.

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: split out some macros and move common code to paranoid_exit, fix (Alexander van Heukelum, 2008-11-22)

    Impact: fix bootup crash

    Even though it tested fine for me, there was still a bug in the first patch: I had overlooked a call to ptregscall_common. This patch fixes that, I think, but the code is never executed for me while running a debian install... (I tested this by putting an "1:jmp 1b" in there.)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: entry_64.S: split out some macros and move common code to paranoid_exit (Alexander van Heukelum, 2008-11-21)

    Impact: cleanup

    DISABLE_INTERRUPTS(CLBR_NONE)/TRACE_IRQS_OFF is now always executed just before paranoid_exit. Move it there.

    Split out paranoidzeroentry, paranoiderrorentry, and paranoidzeroentry_ist to get more readable macros.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: entry_64.S: factor out save_paranoid and paranoid_exit (Alexander van Heukelum, 2008-11-21)

    Impact: cleanup, shrink kernel image size

    Also expand the paranoid_exit0 macro into nmi_exit inside the nmi stub in the case of enabled irq-tracing. This gives a few hundred bytes of code size reduction.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: introduce save_rest and restructure the PTREGSCALL macro in entry_64.S (Alexander van Heukelum, 2008-11-21)

    Impact: cleanup

    The save_rest function completes a partial stack frame for use by the PTREGSCALL macro. This also avoids the indirect call in PTREGSCALLs.

    This adds the macro movq_cfi_restore to hide the CFI_RESTORE annotation when restoring a register from the stack frame.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: entry_64.S: rename (Ingo Molnar, 2008-11-21)

    Impact: cleanup

    Rename:

        CFI_PUSHQ  =>  pushq_cfi
        CFI_POPQ   =>  popq_cfi
        CFI_MOVQ   =>  movq_cfi

    To make it blend better into regular assembly code.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: clean up after: move entry_64.S register saving out of the macros, fix (Ingo Molnar, 2008-11-21)

    Impact: build fix

    The previous patch breaks builds with older binutils (2.16.1):

        arch/x86/kernel/entry_64.S: Assembler messages:
        arch/x86/kernel/entry_64.S:282: Error: too many positional arguments
        arch/x86/kernel/entry_64.S:283: Error: too many positional arguments
        arch/x86/kernel/entry_64.S:284: Error: too many positional arguments
        arch/x86/kernel/entry_64.S:285: Error: too many positional arguments
        arch/x86/kernel/entry_64.S:286: Error: too many positional arguments
        arch/x86/kernel/entry_64.S:287: Error: too many positional arguments
        arch/x86/kernel/entry_64.S:288: Error: too many positional arguments
        arch/x86/kernel/entry_64.S:289: Error: too many positional arguments
        arch/x86/kernel/entry_64.S:290: Error: too many positional arguments

    Took some time to figure out the detail that GAS chokes on: it's negative offsets. Rearrange the calculations to make sure we never go negative.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: clean up after: move entry_64.S register saving out of the macros (Alexander van Heukelum, 2008-11-20)

    This add-on patch to "x86: move entry_64.S register saving out of the macros" visually cleans up the appearance of the code by introducing some basic helper macros. It also adds some cfi annotations which were missing.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | x86: move entry_64.S register saving out of the macros (Alexander van Heukelum, 2008-11-20)

    Here is a combined patch that moves "save_args" out-of-line for the interrupt macro and moves "error_entry" mostly out-of-line for the zeroentry and errorentry macros.

    The save_args function becomes really straightforward and easy to understand, with the possible exception of the stack switch code, which now needs to copy the return address of the calling function. Normal interrupts arrive with ((~vector)-0x80) on the stack, which gets adjusted in common_interrupt:

        <common_interrupt>:
        (5)  addq   $0xffffffffffffff80,(%rsp)        /* -> ~(vector) */
        (4)  sub    $0x50,%rsp                        /* space for registers */
        (5)  callq  ffffffff80211290 <save_args>
        (5)  callq  ffffffff80214290 <do_IRQ>
        <ret_from_intr>:
        ...

    An apic interrupt stub now looks like this:

        <thermal_interrupt>:
        (5)  pushq  $0xffffffffffffff05               /* ~(vector) */
        (4)  sub    $0x50,%rsp                        /* space for registers */
        (5)  callq  ffffffff80211290 <save_args>
        (5)  callq  ffffffff80212b8f <smp_thermal_interrupt>
        (5)  jmpq   ffffffff80211f93 <ret_from_intr>

    Similarly the exception handler register saving function becomes simpler, without the need of any parameter shuffling. The stub for an exception without errorcode looks like this:

        <overflow>:
        (6)  callq  *0x1cad12(%rip)         # ffffffff803dd448 <pv_irq_ops+0x38>
        (2)  pushq  $0xffffffffffffffff     /* no syscall */
        (4)  sub    $0x78,%rsp              /* space for registers */
        (5)  callq  ffffffff8030e3b0 <error_entry>
        (3)  mov    %rsp,%rdi               /* pt_regs pointer */
        (2)  xor    %esi,%esi               /* no error code */
        (5)  callq  ffffffff80213446 <do_overflow>
        (5)  jmpq   ffffffff8030e460 <error_exit>

    And one for an exception with errorcode like this:

        <segment_not_present>:
        (6)  callq  *0x1cab92(%rip)         # ffffffff803dd448 <pv_irq_ops+0x38>
        (4)  sub    $0x78,%rsp              /* space for registers */
        (5)  callq  ffffffff8030e3b0 <error_entry>
        (3)  mov    %rsp,%rdi               /* pt_regs pointer */
        (5)  mov    0x78(%rsp),%rsi         /* load error code */
        (9)  movq   $0xffffffffffffffff,0x78(%rsp)    /* no syscall */
        (5)  callq  ffffffff80213209 <do_segment_not_present>
        (5)  jmpq   ffffffff8030e460 <error_exit>

    Unfortunately, this last type is more than 32 bytes. But the total space savings due to this patch is about 2500 bytes on an smp-configuration, and I think the code is clearer than it was before. The tested kernels were non-paravirt ones (i.e., without the indirect call at the top of the exception handlers).

    Anyhow, I tested this patch on top of a recent -tip. The machine was a 2x4-core Xeon at 2333MHz. Measured were the delays between (almost-)adjacent rdtsc instructions. The graphs show how much time is spent outside of the program as a function of the measured delay. The area under the graph represents the total time spent outside the program. Eight instances of the rdtsctest were started, each pinned to a single cpu. The histograms are added.

    For each kernel two measurements were done: one in mostly idle condition, the other while running "bonnie++ -f", bound to cpu 0. Each measurement took 40 minutes runtime. See the attached graphs for the results. The graphs overlap almost everywhere, but there are small differences.

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * | | Merge branch 'x86/cleanups' into x86/irq (Ingo Molnar, 2008-11-20)
| | |\ \ \
    [ merged x86/cleanups into x86/irq to enable a wider IRQ entry code patch to be applied, which depends on a cleanup patch in x86/cleanups. ]
| | | * \ \ Merge branch 'x86/urgent' into x86/cleanups (Ingo Molnar, 2008-11-18)
| | | |\ \ \
| | | * | | | x86: entry_64.S: remove whitespace at end of lines (Alexander van Heukelum, 2008-11-17)

    Impact: cleanup

    All blame goes to: color white,red "[^[:graph:]]+$" in .nanorc ;).

    Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * | | | Merge commit 'v2.6.28-rc5' into x86/cleanups (Ingo Molnar, 2008-11-17)
| | | |\ \ \ \
| | | * | | | | x86: x86_32 has its own irq_regs definition (Harvey Harrison, 2008-11-10)

    Impact: cleanup

    Arches that have their own irq_regs definition are expected to define ARCH_HAS_OWN_IRQ_REGS, or else a generic (unused) set will also be defined in lib/irq_regs.c.

    Sparse noticed that the unused generic one had no prototype:

        lib/irq_regs.c:15:1: warning: symbol 'per_cpu____irq_regs' was not declared. Should it be static?

    Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
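    The generic fallback referred to here follows this pattern (a sketch of lib/irq_regs.c; arches that provide their own definition set ARCH_HAS_OWN_IRQ_REGS so this copy is not built):

        /* Sketch: generic per-cpu irq_regs pointer, compiled out when
         * the architecture supplies its own definition. */
        #ifndef ARCH_HAS_OWN_IRQ_REGS
        DEFINE_PER_CPU(struct pt_regs *, __irq_regs);
        EXPORT_PER_CPU_SYMBOL(__irq_regs);
        #endif
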
| | | * | | | | Merge commit 'v2.6.28-rc4' into x86/cleanups (Ingo Molnar, 2008-11-10)
| | | |\ \ \ \ \
| | | * | | | | | x86: clean up vget_cycles() (Ingo Molnar, 2008-11-09)

    Impact: remove unused variable

    I forgot to remove the now unused "cycles_t cycles" parameter from vget_cycles() - which triggers build warnings as tsc.h is included in a number of files.

    Remove it.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * | | | | | x86: clean up rdtsc_barrier() use (Ingo Molnar, 2008-11-08)

    Impact: cleanup

    Move rdtsc_barrier() use to vsyscall_64.c where it's relied on, and point out its role in the context of its use.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | | * | | | | | Merge branch 'linus' into x86/cleanups (Ingo Molnar, 2008-11-08)
| | | |\ \ \ \ \ \