path: root/arch/powerpc/kernel/time.c
Commit message  (Author, Date)
...
| * | timekeeping: Increase granularity of read_persistent_clock(), build fix  (Martin Schwidefsky, 2009-08-23)

    Fix the following build problem on powerpc:

        arch/powerpc/kernel/time.c: In function 'read_persistent_clock':
        arch/powerpc/kernel/time.c:788: error: 'return' with a value, in function returning void
        arch/powerpc/kernel/time.c:791: error: 'return' with a value, in function returning void

    Reported-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: dwalker@fifo99.com
    Cc: johnstul@us.ibm.com
    LKML-Reference: <20090822222313.74b9619c@skybase>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | timekeeping: Increase granularity of read_persistent_clock()  (Martin Schwidefsky, 2009-08-15)

    The persistent clock of some architectures (e.g. s390) has a better
    granularity than seconds. To reduce the delta between the host clock and
    the guest clock in a virtualized system, change the read_persistent_clock
    function to return a struct timespec.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Acked-by: John Stultz <johnstul@us.ibm.com>
    Cc: Daniel Walker <dwalker@fifo99.com>
    LKML-Reference: <20090814134811.013873340@de.ibm.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
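
    A minimal sketch of the reworked interface on powerpc; the helper used to
    obtain the RTC seconds is a placeholder, not code from this commit:

        #include <linux/time.h>

        /* Fill *ts instead of returning a seconds value; tv_nsec can stay 0
         * on platforms whose persistent clock only counts whole seconds. */
        void read_persistent_clock(struct timespec *ts)
        {
                ts->tv_sec  = read_rtc_seconds();   /* placeholder helper */
                ts->tv_nsec = 0;
        }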
* | powerpc: Properly start decrementer on BookE secondary CPUs  (Benjamin Herrenschmidt, 2009-08-28)

    This moves the code to start the decrementer on 40x and BookE into a
    separate function which is now called from time_init() and
    secondary_time_init(), before the respective clock sources are registered.
    We also remove the 85xx specific code for doing it from the platform code.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* | powerpc: Use DIV_ROUND_CLOSEST in time init code  (Julia Lawall, 2009-08-19)

    The kernel.h macro DIV_ROUND_CLOSEST performs the computation (x + d/2)/d
    but is perhaps more readable.

    The semantic patch that makes this change is as follows:
    (http://www.emn.fr/x-info/coccinelle/)

        // <smpl>
        @haskernel@
        @@

        #include <linux/kernel.h>

        @depends on haskernel@
        expression x,__divisor;
        @@

        - (((x) + ((__divisor) / 2)) / (__divisor))
        + DIV_ROUND_CLOSEST(x,__divisor)
        // </smpl>

    Signed-off-by: Julia Lawall <julia@diku.dk>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
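
    A small usage example of the substitution described by the semantic patch
    above (values made up for illustration):

        #include <linux/kernel.h>

        /* (7 + 2/2) / 2 == 4: rounded to nearest, not truncated. */
        unsigned int a = (7 + 2 / 2) / 2;            /* old open-coded form  */
        unsigned int b = DIV_ROUND_CLOSEST(7, 2);    /* new form, also 4     */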
* perf_counter: powerpc: Enable use of software counters on 32-bit powerpc  (Paul Mackerras, 2009-06-18)

    This enables the perf_counter subsystem on 32-bit powerpc. Since we don't
    have any support for hardware counters on 32-bit powerpc yet, only
    software counters can be used.

    Besides selecting HAVE_PERF_COUNTERS for 32-bit powerpc as well as 64-bit,
    the main thing this does is add an implementation of
    set_perf_counter_pending(). This needs to arrange for
    perf_counter_do_pending() to be called when interrupts are enabled. Rather
    than add code to local_irq_restore as 64-bit does, the 32-bit
    set_perf_counter_pending() generates an interrupt by setting the
    decrementer to 1 so that a decrementer interrupt will become pending in 1
    or 2 timebase ticks (if a decrementer interrupt isn't already pending).
    When interrupts are enabled, timer_interrupt() will be called, and some
    new code in there calls perf_counter_do_pending(). We use a per-cpu array
    of flags to indicate whether we need to call perf_counter_do_pending() or
    not.

    This introduces a couple of new Kconfig symbols: PPC_HAVE_PMU_SUPPORT,
    which is selected by processor families for which we have hardware PMU
    support (currently only PPC64), and PPC_PERF_CTRS, which enables the
    powerpc-specific perf_counter back-end.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: linuxppc-dev@ozlabs.org
    Cc: benh@kernel.crashing.org
    LKML-Reference: <19000.55404.103840.393470@cargo.ozlabs.ibm.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
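
    A rough sketch of the mechanism described above (a per-cpu flag plus
    forcing the decrementer to fire); everything except set_dec() and
    perf_counter_do_pending() is a simplified assumption:

        #include <linux/percpu.h>
        #include <linux/perf_counter.h>
        #include <asm/time.h>

        static DEFINE_PER_CPU(int, perf_counter_pending_flag);

        /* Ask for perf work to be done from the next timer_interrupt(). */
        static inline void set_perf_counter_pending(void)
        {
                __get_cpu_var(perf_counter_pending_flag) = 1;
                set_dec(1);     /* interrupt pending in 1-2 timebase ticks */
        }

        /* Called from timer_interrupt() once interrupts are enabled again. */
        static inline void check_perf_counter_pending(void)
        {
                if (__get_cpu_var(perf_counter_pending_flag)) {
                        __get_cpu_var(perf_counter_pending_flag) = 0;
                        perf_counter_do_pending();
                }
        }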
* powerpc: Don't do generic calibrate_delay()  (Benjamin Herrenschmidt, 2009-06-14)

    Currently we are wasting time calling the generic calibrate_delay()
    function. We don't need it since our implementation of __delay() is based
    on the CPU timebase. So instead, we use our own small implementation that
    initializes loops_per_jiffy to something sensible to make the few users
    like spinlock debug happy.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
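
    A minimal sketch of the powerpc-specific replacement, assuming the
    timebase-based __delay() mentioned above (the exact message printed is an
    assumption):

        #include <linux/delay.h>
        #include <linux/jiffies.h>
        #include <linux/kernel.h>
        #include <asm/time.h>

        /* __delay() counts timebase ticks, so no measurement loop is needed;
         * just give loops_per_jiffy a sensible, timebase-derived value for
         * the few users (e.g. spinlock debugging) that look at it. */
        void calibrate_delay(void)
        {
                loops_per_jiffy = tb_ticks_per_jiffy;
                printk(KERN_INFO "Calibrating delay loop... %lu.%02lu BogoMIPS\n",
                       loops_per_jiffy / (500000 / HZ),
                       loops_per_jiffy / (5000 / HZ) % 100);
        }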
* powerpc: Improve decrementer accuracy  (Anton Blanchard, 2009-05-21)

    I have been looking at sources of OS jitter and noticed that after a long
    NO_HZ idle period we wake up too early:

        relative time (us)      event
                                timer irq exit
        999946.405              timer irq entry
             4.835              timer irq exit
            21.685              timer irq entry
             3.540              timer (tick_sched_timer) entry

    Here we slept for just under a second, then took a timer interrupt that
    did nothing. 21.685 us later we wake up again and do the work.

    We set a rather low shift value of 16 for the decrementer clockevent,
    which I think is causing this issue. On this box we have a 207MHz
    decrementer and see:

        clockevent: decrementer mult[3501] shift[16] cpu[0]

    For calculations of large intervals this mult/shift combination could be
    off by a significant amount.

    I notice the sparc code has a loop that iterates to find a mult/shift
    combination that maximises the shift value while keeping mult under
    32 bits. With the patch below we get:

        clockevent: decrementer mult[35015c20] shift[32] cpu[15]

    And we no longer see the spurious wakeups.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
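
    A sketch of the shift-maximising calculation referred to above; this shows
    the general idea (largest shift such that mult still fits in 32 bits), not
    necessarily the exact loop that was merged:

        #include <linux/types.h>
        #include <asm/div64.h>

        /* mult = freq * 2^shift / NSEC_PER_SEC; a larger shift means less
         * rounding error when converting long nanosecond intervals into
         * decrementer ticks. */
        static void calc_mult_shift(u32 *mult, u32 *shift, unsigned long freq)
        {
                int s;

                for (s = 32; s > 0; s--) {
                        u64 m = ((u64)freq << s) + 500000000;

                        do_div(m, 1000000000);
                        if (m <= 0xffffffffULL) {
                                *mult  = m;
                                *shift = s;
                                return;
                        }
                }
        }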
* clocksource: pass clocksource to read() callback  (Magnus Damm, 2009-04-21)

    Pass clocksource pointer to the read() callback for clocksources. This
    allows us to share the callback between multiple instances.

    [hugh@veritas.com: fix powerpc build of clocksource pass clocksource mods]
    [akpm@linux-foundation.org: cleanup]
    Signed-off-by: Magnus Damm <damm@igel.co.jp>
    Acked-by: John Stultz <johnstul@us.ibm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Hugh Dickins <hugh@veritas.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
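
    For time.c this means the timebase read callback gains a clocksource
    argument (unused here); roughly:

        #include <linux/clocksource.h>
        #include <asm/time.h>

        /* Old form: static cycle_t timebase_read(void)
         * New form: the core passes the clocksource instance in, so a single
         * callback could serve several clocksource structures. */
        static cycle_t timebase_read(struct clocksource *cs)
        {
                return (cycle_t)get_tb();
        }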
* powerpc: Hook up rtc-generic, and kill rtc-ppc  (Geert Uytterhoeven, 2009-04-01)

    PowerPC has been a long time user of the generic RTC abstraction, so hook
    up rtc-generic:
      - Create the "rtc-generic" platform device if ppc_md.get_rtc_time is
        set,
      - Kill rtc-ppc, as rtc-generic offers the same functionality in a more
        generic way, and supports autoloading through udev.

    Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
    Acked-by: David Woodhouse <David.Woodhouse@intel.com>
    Acked-by: Alessandro Zummo <a.zummo@towertech.it>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
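
    The hookup amounts to registering a platform device at boot when the
    machine provides an RTC read hook; a minimal sketch (the initcall level
    is an assumption):

        #include <linux/err.h>
        #include <linux/errno.h>
        #include <linux/init.h>
        #include <linux/platform_device.h>
        #include <asm/machdep.h>

        static int __init rtc_generic_init(void)
        {
                struct platform_device *pdev;

                if (!ppc_md.get_rtc_time)
                        return -ENODEV;

                /* rtc-generic binds by name and then calls back into the
                 * arch's get/set RTC hooks. */
                pdev = platform_device_register_simple("rtc-generic", -1,
                                                       NULL, 0);
                return IS_ERR(pdev) ? PTR_ERR(pdev) : 0;
        }
        device_initcall(rtc_generic_init);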
* Merge branch 'cputime' of git://git390.osdl.marist.edu/pub/scm/linux-2.6  (Linus Torvalds, 2009-01-03)

    * 'cputime' of git://git390.osdl.marist.edu/pub/scm/linux-2.6:
      [PATCH] fast vdso implementation for CLOCK_THREAD_CPUTIME_ID
      [PATCH] improve idle cputime accounting
      [PATCH] improve precision of idle time detection.
      [PATCH] improve precision of process accounting.
      [PATCH] idle cputime accounting
      [PATCH] fix scaled & unscaled cputime accounting
| * [PATCH] idle cputime accounting  (Martin Schwidefsky, 2008-12-31)

    The cpu time spent by the idle process actually doing something is
    currently accounted as idle time. This is plain wrong; the architectures
    that support VIRT_CPU_ACCOUNTING=y can do better: distinguish between the
    time spent doing nothing and the time spent by idle doing work. The first
    is accounted with account_idle_time and the second with
    account_system_time. The architectures that use the account_xxx_time
    interface directly and not the account_xxx_ticks interface now need to do
    the check for the idle process in their arch code. In particular, to
    improve the system vs true idle time accounting, the arch code needs to
    measure the true idle time instead of just testing for the idle process.
    To improve the tick based accounting as well, we would need an
    architecture primitive that can tell us if the pt_regs of the interrupted
    context points to the magic instruction that halts the cpu.

    In addition, idle time is no longer added to the stime of the idle
    process. This field now contains the system time of the idle process as
    it should be. On systems without VIRT_CPU_ACCOUNTING this will always be
    zero as every tick that occurs while idle is running will be accounted as
    idle time.

    This patch contains the necessary common code changes to be able to
    distinguish idle system time and true idle time. The architectures with
    support for VIRT_CPU_ACCOUNTING need some changes to exploit this.

    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
| * [PATCH] fix scaled & unscaled cputime accounting  (Martin Schwidefsky, 2008-12-31)

    The utimescaled / stimescaled fields in the task structure and the global
    cpustat should be set on all architectures. On s390 the calls to
    account_user_time_scaled and account_system_time_scaled have never been
    added. In addition, system time that is accounted as guest time to the
    user time of a process is accounted to the scaled system time instead of
    the scaled user time.

    To fix the bugs and to prevent future forgetfulness, this patch merges
    account_system_time_scaled into account_system_time and
    account_user_time_scaled into account_user_time.

    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
    Cc: Chris Wright <chrisw@sous-sol.org>
    Cc: Michael Neuling <mikey@neuling.org>
    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* | Merge branch 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds, 2009-01-02)

    * 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (66 commits)
      x86: export vector_used_by_percpu_irq
      x86: use logical apicid in x2apic_cluster's x2apic_cpu_mask_to_apicid_and()
      sched: nominate preferred wakeup cpu, fix
      x86: fix lguest used_vectors breakage, -v2
      x86: fix warning in arch/x86/kernel/io_apic.c
      sched: fix warning in kernel/sched.c
      sched: move test_sd_parent() to an SMP section of sched.h
      sched: add SD_BALANCE_NEWIDLE at MC and CPU level for sched_mc>0
      sched: activate active load balancing in new idle cpus
      sched: bias task wakeups to preferred semi-idle packages
      sched: nominate preferred wakeup cpu
      sched: favour lower logical cpu number for sched_mc balance
      sched: framework for sched_mc/smt_power_savings=N
      sched: convert BALANCE_FOR_xx_POWER to inline functions
      x86: use possible_cpus=NUM to extend the possible cpus allowed
      x86: fix cpu_mask_to_apicid_and to include cpu_online_mask
      x86: update io_apic.c to the new cpumask code
      x86: Introduce topology_core_cpumask()/topology_thread_cpumask()
      x86: xen: use smp_call_function_many()
      x86: use work_on_cpu in x86/kernel/cpu/mcheck/mce_amd_64.c
      ...

    Fixed up trivial conflict in kernel/time/tick-sched.c manually
| * cpumask: convert struct clock_event_device to cpumask pointers.  (Rusty Russell, 2008-12-13)

    Impact: change calling convention of existing clock_event APIs

    struct clock_event_device's cpumask field gets changed to take a pointer,
    as does the ->broadcast function.

    Another single-patch change. For safety, we BUG_ON() in
    clockevents_register_device() if it's not set.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Ingo Molnar <mingo@elte.hu>
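
    On the powerpc side the visible change is how the decrementer event
    device's cpumask is set before registration; a rough sketch with an
    illustrative per-cpu variable name:

        #include <linux/clockchips.h>
        #include <linux/cpumask.h>
        #include <linux/percpu.h>

        static DEFINE_PER_CPU(struct clock_event_device, decrementer_events);

        static void register_decrementer_clockevent(int cpu)
        {
                struct clock_event_device *dec = &per_cpu(decrementer_events, cpu);

                /* Was a struct copy: dec->cpumask = cpumask_of_cpu(cpu);
                 * Now a pointer to a constant per-cpu mask: */
                dec->cpumask = cpumask_of(cpu);

                clockevents_register_device(dec);
        }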
* | powerpc: Eliminate unused do_gtod variable  (Paul Mackerras, 2008-11-05)

    Since we started using the generic timekeeping code, we haven't had a
    powerpc-specific version of do_gettimeofday, and hence there is now
    nothing that reads the do_gtod variable in arch/powerpc/kernel/time.c.
    This therefore removes it and the code that sets it.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
* | powerpc: Improve resolution of VDSO clock_gettime  (Paul Mackerras, 2008-11-05)

    Currently the clock_gettime implementation in the VDSO produces a result
    with microsecond resolution for the cases that are handled without a
    system call, i.e. CLOCK_REALTIME and CLOCK_MONOTONIC. The nanoseconds
    field of the result is obtained by computing a microseconds value and
    multiplying by 1000.

    This changes the code in the VDSO to do the computation for clock_gettime
    with nanosecond resolution. That means that the resolution of the result
    will ultimately depend on the timebase frequency.

    Because the timestamp in the VDSO datapage (stamp_xsec, the real time
    corresponding to the timebase count in tb_orig_stamp) is in units of 2^-20
    seconds, it doesn't have sufficient resolution for computing a result with
    nanosecond resolution. Therefore this adds a copy of xtime to the VDSO
    datapage and updates it in update_gtod() along with the other time-related
    fields.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
* Merge commit 'origin/master'  (Benjamin Herrenschmidt, 2008-07-15)

    Manual merge of:
        arch/powerpc/Kconfig
        arch/powerpc/kernel/stacktrace.c
        arch/powerpc/mm/slice.c
        arch/ppc/kernel/smp.c
| * on_each_cpu(): kill unused 'retry' parameter  (Jens Axboe, 2008-06-26)

    It's not even passed on to smp_call_function() anymore, since that was
    removed. So kill it.

    Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
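
    For callers this is purely a signature change; a small before/after sketch
    with a placeholder callback:

        #include <linux/smp.h>

        static void snapshot_cpu_state(void *info)      /* placeholder */
        {
                /* runs once on every online CPU */
        }

        static void snapshot_all_cpus(void)
        {
                /* Old: on_each_cpu(snapshot_cpu_state, NULL, 0, 1);
                 *                                        ^^^ unused 'retry'
                 * New: only the 'wait' flag remains. */
                on_each_cpu(snapshot_cpu_state, NULL, 1);
        }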
* | powerpc/booke: don't reinitialize time base  (Kumar Gala, 2008-07-14)

    For some reason long ago I decided that we should zero out the time base
    when we calibrate the decrementer. The problem is that this can be harmful
    in SMP systems where the firmware has already synchronized the time bases
    on the various cores.

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* | [POWERPC] Fix sparse warnings in arch/powerpc/kernel  (Michael Ellerman, 2008-05-14)

    Make a few things static in lparcfg.c
    Make init and exit routines static in rtas_flash.c
    Make things static in rtas_pci.c
    Make some functions static in rtas.c
    Make fops static in rtas-proc.c
    Remove unneeded extern for do_gtod in smp.c
    Make clocksource_init() static in time.c
    Make last_tick_len and ticklen_to_xs static in time.c
    Move the declaration of the pvr per-cpu into smp.h
    Make kexec_smp_down() and kexec_stack static in machine_kexec_64.c
    Don't return void in arch_teardown_msi_irqs() in msi.c
    Move declaration of GregorianDay() into asm/time.h

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* ntp: rename TICK_LENGTH_SHIFT to NTP_SCALE_SHIFT  (Roman Zippel, 2008-05-01)

    As TICK_LENGTH_SHIFT is used for more than just the tick length, the name
    isn't quite appropriate anymore, so this renames it to NTP_SCALE_SHIFT.

    Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
    Cc: john stultz <johnstul@us.ibm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* ntp: increase time_freq resolution  (Roman Zippel, 2008-05-01)

    This changes time_freq to a 64bit value and makes it static (the only
    outside user had no real need to modify it). Intermediate values were
    already 64bit, so the change isn't that big, but it saves a little in
    shifts by replacing SHIFT_NSEC with TICK_LENGTH_SHIFT.

    PPM_SCALE is then used to convert between user space and kernel space
    representation.

    Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
    Cc: john stultz <johnstul@us.ibm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* taskstats scaled time cleanup  (Michael Neuling, 2008-02-06)

    This moves the ability to scale cputime into generic code. This allows us
    to fix the issue in kernel/timer.c (noticed by Balbir) where we could only
    add an unscaled value to the scaled utime/stime.

    This adds a cputime_to_scaled function. As before, the POWERPC version
    does the scaling based on the last SPURR/PURR ratio calculated. The
    generic and s390 (only other arch to implement asm/cputime.h) versions are
    both NOPs.

    Also moves the SPURR and PURR snapshots closer.

    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Cc: Jay Lan <jlan@engr.sgi.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [POWERPC] Implement arch disable/enable irq hooks.  (Scott Wood, 2007-12-21)

    These hooks ensure that a decrementer interrupt is not pending when
    suspending; otherwise, problems may occur on 6xx/7xx/7xxx-based systems
    (except for powermacs, which use a separate suspend path). For example,
    with deep sleep on the 831x, a pending decrementer will cause a system
    freeze because the SoC thinks the decrementer interrupt would have woken
    the system, but the core must have interrupts disabled due to the setup
    required for deep sleep.

    Changed via-pmu.c to use the new ppc_md hooks, and made the arch_*
    functions call the generic_* functions unconditionally. -- paulus

    Signed-off-by: Scott Wood <scottwood@freescale.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Optimize account_system_vtime  (Milton Miller, 2007-12-20)

    We have multiple calls to has_feature being inlined, but gcc can't be sure
    that the store via get_paca() doesn't alias the path to
    cur_cpu_spec->feature. Reorder to put the calls to read_purr and
    read_spurr adjacent to each other.

    To add a sense of consistency, reorder the remaining lines to perform
    parallel steps on purr and scaled purr of each line instead of calculating
    and then using one value before going on to the next.

    In addition, we can tell gcc that no SPURR means no PURR. The test is
    completely hidden in the PURR case, and in the !PURR case the second test
    is eliminated resulting in the simple register copy in the out-of-line
    branch.

    Further, gcc sees get_paca()->system_time referenced several times and
    allocates a register to address it (shadowing r13) instead of caching its
    value. Reading into a local variable saves the shadow of r13 and removes a
    potentially duplicate load (between the nested if and its parent).

    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Depend on ->initialized in calc_steal_time  (Milton Miller, 2007-12-20)

    If CPU_FTR_PURR is not set, we will never set cpu_purr_data->initialized.

    Checking via __get_cpu_var on 64 bit avoids one dependent load compared to
    cpu_has_feature in the not-present case, and is always required when it is
    present. The code is under CONFIG_VIRT_CPU_ACCOUNTING so 32 bit will not
    be affected.

    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Timer interrupt: use a struct for two per_cpu variables  (Milton Miller, 2007-12-20)

    timer_interrupt() was calculating per_cpu_offset several times, having to
    start from the toc because of potential aliasing issues.

    Placing both decrementer per_cpu variables in a struct and calculating the
    address once with __get_cpu_var results in better code on both 32 and 64
    bit.

    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
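
    A sketch of the data-structure change described above (names are close to
    but not guaranteed to match the merged code):

        #include <linux/clockchips.h>
        #include <linux/percpu.h>
        #include <linux/types.h>

        /* Grouping the clockevent and its "next event" timebase value means a
         * single __get_cpu_var() address computation serves both fields. */
        struct decrementer_clock {
                struct clock_event_device event;
                u64                       next_tb;
        };

        static DEFINE_PER_CPU(struct decrementer_clock, decrementers);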
* [POWERPC] Use __get_cpu_var in time.c  (Milton Miller, 2007-12-20)

    Use __get_cpu_var(x) instead of per_cpu(x, smp_processor_id()), as it is
    optimized on ppc64 to access the current cpu's per-cpu offset directly;
    it's local_paca.offset instead of
    TOC->paca[local_paca->processor_id].offset.

    This is the trivial portion, two functions with one use each.

    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
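
    The substitution itself, shown on a made-up per-cpu variable:

        #include <linux/percpu.h>
        #include <linux/smp.h>
        #include <asm/time.h>

        static DEFINE_PER_CPU(u64, example_snapshot);   /* illustrative only */

        static void touch_this_cpu(void)
        {
                /* Before: per_cpu(example_snapshot, smp_processor_id()) = get_tb();
                 * After: the current CPU's slot is addressed directly. */
                __get_cpu_var(example_snapshot) = get_tb();
        }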
* [POWERPC] init_decrementer_clockevent can be static __init  (Milton Miller, 2007-12-20)

    ... as it's only called from time_init, which is __init. Also remove the
    unneeded forward declaration.

    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix possible division by zero in scaled time accounting  (Michael Neuling, 2007-11-20)

    If we get no user time and no system time allocated since the last
    account_system_vtime, the system to user time ratio estimate can end up
    dividing by zero.

    This was causing a problem noticed by Balbir Singh.

    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Demote clockevent printk to KERN_DEBUG  (Tony Breeds, 2007-11-13)

    These don't need to be seen by everyone on every boot.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched  (Linus Torvalds, 2007-11-09)

    * git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched:
      sched: proper prototype for kernel/sched.c:migration_init()
      sched: avoid large irq-latencies in smp-balancing
      sched: fix copy_namespace() <-> sched_fork() dependency in do_fork
      sched: clean up the wakeup preempt check, #2
      sched: clean up the wakeup preempt check
      sched: wakeup preemption fix
      sched: remove PREEMPT_RESTRICT
      sched: turn off PREEMPT_RESTRICT
      KVM: fix !SMP build error
      x86: make nmi_cpu_busy() always defined
      x86: make ipi_handler() always defined
      sched: cleanup, use NSEC_PER_MSEC and NSEC_PER_SEC
      sched: reintroduce SMP tunings again
      sched: restore deterministic CPU accounting on powerpc
      sched: fix delay accounting regression
      sched: reintroduce the sched_min_granularity tunable
      sched: documentation: place_entity() comments
      sched: fix vslice
| * sched: restore deterministic CPU accounting on powerpc  (Paul Mackerras, 2007-11-09)

    Since powerpc started using CONFIG_GENERIC_CLOCKEVENTS, the deterministic
    CPU accounting (CONFIG_VIRT_CPU_ACCOUNTING) has been broken on powerpc,
    because we end up counting user time twice: once in timer_interrupt() and
    once in update_process_times().

    This fixes the problem by pulling the code in update_process_times that
    updates utime and stime into a separate function called
    account_process_tick. If CONFIG_VIRT_CPU_ACCOUNTING is not defined, there
    is a version of account_process_tick in kernel/timer.c that simply
    accounts a whole tick to either utime or stime as before. If
    CONFIG_VIRT_CPU_ACCOUNTING is defined, then arch code gets to implement
    account_process_tick.

    This also lets us simplify the s390 code a bit; it means that the s390
    timer interrupt can now call update_process_times even when
    CONFIG_VIRT_CPU_ACCOUNTING is turned on, and can just implement a suitable
    account_process_tick().

    account_process_tick() now takes the task_struct * as an argument. Tested
    both with and without CONFIG_VIRT_CPU_ACCOUNTING.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | [POWERPC] Fix off-by-one error in setting decrementer on Book E/4xx (v2)  (Paul Mackerras, 2007-11-07)

    The decrementer in Book E and 4xx processors interrupts on the transition
    from 1 to 0, rather than on the 0 to -1 transition as on 64-bit server and
    32-bit "classic" (6xx/7xx/7xxx) processors. At the moment we subtract 1
    from the count of how many decrementer ticks are required before the next
    interrupt before putting it into the decrementer, which is correct for
    server/classic processors, but could possibly cause the interrupt to
    happen too early on Book E and 4xx if the timebase/decrementer frequency
    is low.

    This fixes the problem by making set_dec subtract 1 from the count for
    server and classic processors, instead of having the callers subtract 1.
    Since set_dec already had a bunch of ifdefs to handle different processor
    types, there is no net increase in ugliness. :)

    Note that calling set_dec(0) may not generate an interrupt on some
    processors. To make sure that decrementer_set_next_event always calls
    set_dec with an interval of at least 1 tick, we set min_delta_ns of the
    decrementer_clockevent to correspond to 2 ticks (2 rather than 1 to
    compensate for truncations in the conversions between ticks and ns).

    This also removes a redundant call to set the decrementer to 0x7fffffff -
    it was already set to that earlier in timer_interrupt.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
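
    A simplified sketch of the resulting convention (the real asm/time.h has
    more cases, e.g. for 8xx, and 40x programs the PIT rather than the
    decrementer):

        #include <asm/reg.h>

        static inline void set_dec(int val)
        {
        #if defined(CONFIG_40x)
                mtspr(SPRN_PIT, val);           /* 40x reloads via the PIT */
        #else
        #ifndef CONFIG_BOOKE
                /* Server/classic parts interrupt on the 0 -> -1 transition,
                 * so subtract one here rather than in every caller. */
                --val;
        #endif
                mtspr(SPRN_DEC, val);
        #endif
        }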
* powerpc: add scaled time accounting  (Michael Neuling, 2007-10-18)

    This adds POWERPC specific hooks for scaled time accounting.

    POWER6 includes a SPURR register. The SPURR is based off the PURR register
    but is scaled based on CPU frequency and issue rates. This gives a more
    accurate account of the instructions used per task. The PURR and timebase
    will be constant relative to the wall clock, irrespective of the CPU
    frequency.

    This implementation reads the SPURR register in account_system_vtime,
    which is only called on context switch and hard and soft irq entry and
    exit. The percentage of user and system time is then estimated using the
    ratio of these accounted by the PURR. If the SPURR is not present, the
    PURR is read.

    An earlier implementation of this patch read the SPURR whenever the PURR
    was read, which included the system call entry and exit path.
    Unfortunately this showed a performance regression on lmbench runs, so it
    was re-implemented.

    I've included the lmbench results here when run bare metal on POWER6. The
    1st column is the unpatched results, the 2nd column is the results using
    the patch below, and the 3rd is the % diff of these results from the base.
    The 4th and 5th columns are the results and % difference from the base
    using the older patch (SPURR read in the syscall entry/exit path).

                                     Base      Scaled-Acct          SPURR-in-syscall
                                     Result    Result    % diff     Result    % diff
    Simple syscall:                  0.3086    0.3086    0.0000     0.3452    11.8600
    Simple read:                     0.4591    0.4671    1.7425     0.5044    9.86713
    Simple write:                    0.4364    0.4366    0.0458     0.4731    8.40971
    Simple stat:                     2.0055    2.0295    1.1967     2.0669    3.06158
    Simple fstat:                    0.5962    0.5876    -1.442     0.6368    6.80979
    Simple open/close:               3.1283    3.1009    -0.875     3.2088    2.57328
    Select on 10 fd's:               0.8554    0.8457    -1.133     0.8667    1.32101
    Select on 100 fd's:              3.5292    3.6329    2.9383     3.6664    3.88756
    Select on 250 fd's:              7.9097    8.1881    3.5197     8.2242    3.97613
    Select on 500 fd's:              15.2659   15.836    3.7357     15.873    3.97814
    Select on 10 tcp fd's:           0.9576    0.9416    -1.670     0.9752    1.83792
    Select on 100 tcp fd's:          7.248     7.2254    -0.311     7.2685    0.28283
    Select on 250 tcp fd's:          17.7742   17.707    -0.375     17.749    -0.1406
    Select on 500 tcp fd's:          35.4258   35.25     -0.496     35.286    -0.3929
    Signal handler installation:     0.6131    0.6075    -0.913     0.647     5.52927
    Signal handler overhead:         2.0919    2.1078    0.7600     2.1831    4.35967
    Protection fault:                0.7345    0.7478    1.8107     0.8031    9.33968
    Pipe latency:                    33.006    16.398    -50.31     33.475    1.42368
    AF_UNIX sock stream latency:     14.5093   30.910    113.03     30.715    111.692
    Process fork+exit:               219.8     222.8     1.3648     229.37    4.35623
    Process fork+execve:             876.14    873.28    -0.32      868.66    -0.8533
    Process fork+/bin/sh -c:         2830      2876.5    1.6431     2958      4.52296
    File /var/tmp/XXX write bw:      1193497   1195536   0.1708     118657    -0.5799
    Pagefaults on /var/tmp/XXX:      3.1272    3.2117    2.7020     3.2521    3.99398

    Also, kernel compile times show no difference with this patch applied.

    [pbadari@us.ibm.com: Avoid unnecessary PURR reading]
    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Cc: Balbir Singh <balbir@in.ibm.com>
    Cc: Jay Lan <jlan@engr.sgi.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [POWERPC] Quieten clockevent printk  (Anton Blanchard, 2007-10-17)

    The clockevent bootup message only needs to be KERN_INFO.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Make clockevents work on PPC601 processors  (Paul Mackerras, 2007-10-11)

    In testing the new clocksource and clockevent code on a PPC601 processor,
    I discovered that the clockevent multiplier value for the decrementer
    clockevent was overflowing. Because the RTCL register in the 601
    effectively counts at 1GHz (it doesn't actually, but it increases by 128
    every 128ns), and the shift value was 32, that meant the multiplier value
    had to be 2^32, which won't fit in an unsigned long on 32-bit. The same
    problem would arise on any platform where the timebase frequency was 1GHz
    or more (not that we actually have any such machines today).

    This fixes it by reducing the shift value to 16. Doing the calculations
    with a resolution of 2^-16 nanoseconds (15 femtoseconds) should be quite
    adequate. :)

    Signed-off-by: Paul Mackerras <paulus@samba.org>
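
    The arithmetic behind the overflow, spelled out using the usual clockevent
    scaling mult = freq * 2^shift / 10^9:

        freq = 10^9 ticks/s, shift = 32:  mult = 10^9 * 2^32 / 10^9 = 2^32
                                          -> one past what a 32-bit unsigned
                                             long can hold
        freq = 10^9 ticks/s, shift = 16:  mult = 10^9 * 2^16 / 10^9 = 65536
                                          -> fits easily; resolution is
                                             2^-16 ns per step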
* [POWERPC] Prevent decrementer clockevents from firing early  (Paul Mackerras, 2007-10-11)

    On old powermacs, we sometimes set the decrementer to 1 in order to
    trigger a decrementer interrupt, which we use to handle an interrupt that
    was pending at the time when it was re-enabled. This was causing the
    decrementer clock event device to call the event function for the next
    event early, which was causing problems when high-res timers were not
    enabled.

    This fixes the problem by recording the timebase value at which the next
    event should occur, and checking the current timebase against the recorded
    value in timer_interrupt. If it isn't time for the next event, it just
    reprograms the decrementer and returns.

    This also subtracts 1 from the value stored into the decrementer, which is
    appropriate because the decrementer interrupts on the transition from 0 to
    -1, not when the decrementer reaches 0.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
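
    The guard described above, in sketch form (the per-cpu variable name is an
    assumption, and the real interrupt handler does considerably more):

        #include <linux/percpu.h>
        #include <asm/ptrace.h>
        #include <asm/time.h>

        static DEFINE_PER_CPU(u64, decrementer_next_tb);

        void timer_interrupt(struct pt_regs *regs)
        {
                u64 now  = get_tb();
                u64 next = __get_cpu_var(decrementer_next_tb);

                if (now < next) {
                        /* Fired early (e.g. decrementer forced to 1): rearm
                         * for the remaining ticks and do nothing else. */
                        set_dec(next - now);
                        return;
                }

                /* ... deliver the clockevent as usual ... */
        }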
* [POWERPC] Implement clockevents driver for powerpc  (Tony Breeds, 2007-10-03)

    This registers a clock event structure for the decrementer and turns on
    CONFIG_GENERIC_CLOCKEVENTS, which means that we now don't need most of
    timer_interrupt(), since the work is done in generic code.

    For secondary CPUs, their decrementer clockevent is registered when the
    CPU comes up (the generic code automatically removes the clockevent when
    the CPU goes down).

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
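
    The shape of such a registration, as a sketch; callback bodies, rating and
    flags here are illustrative rather than the merged values:

        #include <linux/clockchips.h>
        #include <asm/time.h>

        static int decrementer_set_next_event(unsigned long evt,
                                              struct clock_event_device *dev)
        {
                set_dec(evt);           /* ticks until the next interrupt */
                return 0;
        }

        static struct clock_event_device decrementer_clockevent = {
                .name           = "decrementer",
                .rating         = 200,
                .features       = CLOCK_EVT_FEAT_ONESHOT,
                .set_next_event = decrementer_set_next_event,
        };

        /* Each CPU registers its own copy after mult, shift and cpumask have
         * been filled in:   clockevents_register_device(&dec_copy);        */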
* [POWERPC] Implement generic time of day clocksource for powerpc  (Tony Breeds, 2007-10-02)

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
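
    This commit has no body, but the core of a timebase clocksource looks
    roughly like the sketch below (the read callback uses the pre-2009
    prototype without a clocksource argument; see the 2009-04-21 entry above):

        #include <linux/clocksource.h>
        #include <asm/time.h>

        static cycle_t clocksource_read_tb(void)
        {
                return (cycle_t)get_tb();
        }

        static struct clocksource clocksource_timebase = {
                .name   = "timebase",
                .rating = 400,
                .flags  = CLOCK_SOURCE_IS_CONTINUOUS,
                .mask   = CLOCKSOURCE_MASK(64),
                .read   = clocksource_read_tb,
        };

        /* Registered from time_init() once mult/shift have been derived from
         * the timebase frequency:
         *      clocksource_register(&clocksource_timebase);                */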
* [POWERPC] Implement {read,update}_persistent_clock  (Tony Breeds, 2007-10-02)

    With these functions implemented we cooperate better with the generic
    timekeeping code. This obsoletes the need for the timer sysdev as a bonus.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* Merge branch 'linux-2.6'  (Paul Mackerras, 2007-09-19)
| * [POWERPC] Fix timekeeping on PowerPC 601  (Benjamin Herrenschmidt, 2007-09-19)

    Recent changes to the timekeeping code broke support for the PowerPC 601
    processor which doesn't have the usual timebase facility but a slightly
    different thing called (yuck) the RTC.

    This fixes it, boot tested on an old 601 based PowerMac 7200.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* | [POWERPC] 40x decrementer fixes  (Josh Boyer, 2007-08-20)

    Allow generic_calibrate_decr to work for 40x platforms. Given that the
    hardware behavior is identical, this also changes the set_dec function to
    reload the PIT on 40x to match the behavior 44x currently has.

    Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
* | [POWERPC] Clean out a bunch of duplicate includes  (Jesper Juhl, 2007-08-16)

    This removes several duplicate includes from arch/powerpc/.

    Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [CELL] oprofile: add support to OProfile for profiling CELL BE SPUs  (Bob Nelson, 2007-07-20)

    From: Maynard Johnson <mpjohn@us.ibm.com>

    This patch updates the existing arch/powerpc/oprofile/op_model_cell.c to
    add in the SPU profiling capabilities. In addition, a 'cell' subdirectory
    was added to arch/powerpc/oprofile to hold Cell-specific SPU profiling
    code. Exports spu_set_profile_private_kref and
    spu_get_profile_private_kref which are used by OProfile to store private
    profile information in spufs data structures.

    Also incorporated several fixes from other patches (rrn). Check pointer
    returned from kzalloc. Eliminated unnecessary cast. Better error handling
    and cleanup in the related area. 64-bit unsigned long parameter was being
    demoted to 32-bit unsigned int and eventually promoted back to unsigned
    long.

    Signed-off-by: Carl Love <carll@us.ibm.com>
    Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
    Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
    Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
    Acked-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Modify sched_clock() to make CONFIG_PRINTK_TIME more sane  (Tony Breeds, 2007-07-10)

    When booting a current kernel with CONFIG_PRINTK_TIME enabled you'll see
    messages like:

        [    0.000000] time_init: decrementer frequency = 188.044000 MHz
        [    0.000000] time_init: processor frequency = 1504.352000 MHz
        [3712914.436297] Console: colour dummy device 80x25

    This is caused by the initialisation of tb_to_ns_scale in time_init();
    suddenly the multiplication in sched_clock() now does something :).

    This patch modifies sched_clock() to report the offset since the machine
    booted, so the same printk's now look like:

        [    0.000000] time_init: decrementer frequency = 188.044000 MHz
        [    0.000000] time_init: processor frequency = 1504.352000 MHz
        [    0.000135] Console: colour dummy device 80x25

    Effectively including the uptime in printk()s.

    This patch makes tb_to_ns_scale and tb_to_ns_shift static and read_mostly
    for good measure.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
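
    A simplified sketch of the offset-based sched_clock(); the real code uses
    a high-word multiply helper and a different fixed-point convention, and
    treats the 601's RTC separately:

        #include <linux/cache.h>
        #include <linux/types.h>
        #include <asm/time.h>

        /* boot_tb is snapshotted in time_init(); scale/shift convert timebase
         * ticks to nanoseconds in fixed point (scale = 1e9 * 2^shift / freq). */
        static u64 boot_tb __read_mostly;
        static u64 tb_to_ns_scale __read_mostly;
        static unsigned int tb_to_ns_shift __read_mostly;

        unsigned long long sched_clock(void)
        {
                return ((get_tb() - boot_tb) * tb_to_ns_scale) >> tb_to_ns_shift;
        }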
* [POWERPC] Move iSeries_tb_recal into its own late_initcall.  (Tony Breeds, 2007-06-28)

    Currently iSeries will recalibrate the cputime_factors in the first
    settimeofday() call. It seems the reason for doing this is to ensure a
    reasonable time delta after time_init(). On current kernels (with udev),
    this call is made 40-60 seconds into the boot process; by moving it to a
    late initcall, it is called approximately 5 seconds after time_init().
    This is sufficient to recalibrate the timebase.

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    CC: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix stolen time for SMT without LPAR  (Michael Neuling, 2007-06-25)

    For POWERPC, stolen time accounts for cycles lost to the hypervisor or
    PURR cycles attributed to the other SMT thread. Hence, when a PURR is
    available, we should still calculate stolen time, irrespective of being
    virtualised.

    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Remove spinlock from struct cpu_purr_data  (Nathan Lynch, 2007-06-25)

    cpu_purr_data is a per-cpu array used to account for stolen time on
    partitioned systems. It used to be the case that cpus accessed each
    others' cpu_purr_data, so each entry was protected by a spinlock. However,
    the code was reworked ("Simplify stolen time calculation") with the result
    that each cpu accesses its own cpu_purr_data and not those of other cpus.
    This means we can get rid of the spinlock as long as we're careful to
    disable interrupts when accessing cpu_purr_data in process context.

    Signed-off-by: Nathan Lynch <ntl@pobox.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
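
    The access pattern that replaces the lock, sketched with illustrative
    struct contents:

        #include <linux/irqflags.h>
        #include <linux/percpu.h>
        #include <asm/reg.h>
        #include <asm/time.h>

        struct cpu_purr_data {
                int initialized;
                u64 tb;
                u64 purr;
        };

        static DEFINE_PER_CPU(struct cpu_purr_data, cpu_purr_data);

        static void snapshot_purr(void)
        {
                unsigned long flags;

                /* No spinlock needed: only this CPU touches its own entry, so
                 * disabling interrupts is enough to keep the timer interrupt
                 * from racing with a process-context update. */
                local_irq_save(flags);
                __get_cpu_var(cpu_purr_data).purr = mfspr(SPRN_PURR);
                __get_cpu_var(cpu_purr_data).tb   = get_tb();
                local_irq_restore(flags);
        }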