path: root/arch/powerpc/kernel/smp.c
* powerpc: Move cpu hotplug driver lock from pseries to powerpc (Nathan Fontenot, 2010-01-14)

    Move the definition and lock helper routines for the cpu hotplug
    driver lock from pseries to powerpc code to avoid build breaks for
    platforms other than pseries that use cpu hotplug.

    Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
    Acked-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

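    Arch-level helpers like these are typically a mutex plus trivial
    wrappers; a minimal sketch of the shape described above (illustrative,
    the mutex name is a placeholder, not the verbatim patch):

        #include <linux/mutex.h>

        /* one lock shared by all powerpc platforms, so non-pseries
         * cpu-hotplug users link against the same symbols */
        static DEFINE_MUTEX(powerpc_cpu_hotplug_driver_mutex);

        void cpu_hotplug_driver_lock(void)
        {
                mutex_lock(&powerpc_cpu_hotplug_driver_mutex);
        }

        void cpu_hotplug_driver_unlock(void)
        {
                mutex_unlock(&powerpc_cpu_hotplug_driver_mutex);
        }
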
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu (Linus Torvalds, 2009-12-14)

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
      m68k: rename global variable vmalloc_end to m68k_vmalloc_end
      percpu: add missing per_cpu_ptr_to_phys() definition for UP
      percpu: Fix kdump failure if booted with percpu_alloc=page
      percpu: make misc percpu symbols unique
      percpu: make percpu symbols in ia64 unique
      percpu: make percpu symbols in powerpc unique
      percpu: make percpu symbols in x86 unique
      percpu: make percpu symbols in xen unique
      percpu: make percpu symbols in cpufreq unique
      percpu: make percpu symbols in oprofile unique
      percpu: make percpu symbols in tracer unique
      percpu: make percpu symbols under kernel/ and mm/ unique
      percpu: remove some sparse warnings
      percpu: make alloc_percpu() handle array types
      vmalloc: fix use of non-existent percpu variable in put_cpu_var()
      this_cpu: Use this_cpu_xx in trace_functions_graph.c
      this_cpu: Use this_cpu_xx for ftrace
      this_cpu: Use this_cpu_xx in nmi handling
      this_cpu: Use this_cpu operations in RCU
      this_cpu: Use this_cpu ops for VM statistics
      ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c and mm/slab.c.

* percpu: make percpu symbols in powerpc unique (Tejun Heo, 2009-10-29)

    This patch updates percpu-related symbols in powerpc so that percpu
    symbols are unique and don't clash with local symbols. This serves
    two purposes: decreasing the possibility of global percpu symbol
    collisions, and allowing the per_cpu__ prefix to be dropped from
    percpu symbols.

    * arch/powerpc/kernel/perf_callchain.c: s/callchain/cpu_perf_callchain/
    * arch/powerpc/kernel/setup-common.c: s/pvr/cpu_pvr/
    * arch/powerpc/platforms/pseries/dtl.c: s/dtl/cpu_dtl/
    * arch/powerpc/platforms/cell/interrupt.c: s/iic/cpu_iic/

    Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
    which cause name clashes" patch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: linuxppc-dev@ozlabs.org

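    As an illustration of the kind of rename involved, using the pvr
    example from the list above (a sketch; the variable's type is assumed):

        #include <linux/percpu.h>

        /* before: once the per_cpu__ prefix goes away, this symbol
         * clashes with any local variable named "pvr" */
        DEFINE_PER_CPU(unsigned int, pvr);

        /* after: globally unique percpu symbol */
        DEFINE_PER_CPU(unsigned int, cpu_pvr);
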
* powerpc: stop_this_cpu: remove the cpu from the online map (Valentine Barshak, 2009-12-09)

    Remove the CPU from the online map to prevent smp_call_function from
    sending messages to a stopped CPU.

    Signed-off-by: Valentine Barshak <vbarshak@ru.mvista.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* cpumask: Use accessors for cpu_*_mask: powerpc (Rusty Russell, 2009-09-23)

    Use the accessors rather than frobbing bits directly (the new
    versions are const).

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>

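    The shape of the change, assuming the accessor API this cpumask
    series introduced (a sketch):

        /* before: direct bit-frobbing on the globally visible map */
        cpu_set(smp_processor_id(), cpu_online_map);

        /* after: accessor call; works even when the underlying
         * cpumask is exported const */
        set_cpu_online(smp_processor_id(), true);
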
* cpumask: arch_send_call_function_ipi_mask: powerpc (Rusty Russell, 2009-09-23)

    We're weaning the core code off handing cpumasks around on-stack.
    This introduces arch_send_call_function_ipi_mask(); by defining it,
    the old arch_send_call_function_ipi is defined by the core code.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

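    A sketch of the new hook, assuming powerpc's per-cpu message_pass
    primitive and PPC_MSG_CALL_FUNCTION message (simplified, not the
    exact patch):

        void arch_send_call_function_ipi_mask(const struct cpumask *mask)
        {
                unsigned int cpu;

                /* takes a pointer to the mask rather than a cpumask_t
                 * by value, so no NR_CPUS-sized copy lands on the stack */
                for_each_cpu(cpu, mask)
                        smp_ops->message_pass(cpu, PPC_MSG_CALL_FUNCTION);
        }
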
* powerpc/85xx: Fix SMP compile error and allow NULL for smp_ops (Kumar Gala, 2009-09-10)

    The following commit introduced a compile error since it removed the
    implementation of smp_85xx_basic_setup:

        commit 77c0a700c1c292edafa11c1e52821ce4636f81b0
        Author: Benjamin Herrenschmidt <benh@kernel.crashing.org>
        Date:   Fri Aug 28 14:25:04 2009 +1000

            powerpc: Properly start decrementer on BookE secondary CPUs

    Make it so that smp_ops probe() and setup_cpu() can be set to NULL.

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc/pseries: Reduce the polling interval in __cpu_up() (Gautham R Shenoy, 2009-08-26)

    The time taken for a single cpu online operation on a pseries machine
    is as follows:

        Dedicated LPAR (POWER6): ~220ms
        Shared LPAR (POWER5):    ~240ms

    Of this time, approximately 200ms is taken up by __cpu_up(). This is
    because we poll every 200ms to check if the new cpu has notified its
    presence through the cpu_callin_map. We repeat this operation until
    the new cpu sets the value in cpu_callin_map or 5 seconds elapse,
    whichever comes earlier.

    Using completion structs instead of polling loops, the time taken by
    the new processor to indicate its presence has been found to be less
    than 1ms on pseries. That method, however, may not work on all
    powerpc platforms due to the time-base synchronization code. Keeping
    this in mind, we can reduce the msleep polling interval from 200ms to
    1ms while retaining the 5 second timeout. With this, the time taken
    for a cpu online operation changes as follows:

        Dedicated LPAR (POWER6): 20-25ms
        Shared LPAR (POWER5):    60-80ms

    In both cases, it was found that the code polls through the loop only
    once, indicating that 1ms is a reasonable value, at least on pseries.
    The code needs testing on other powerpc platforms.

    Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
    Acked-by: Joel Schopp <jschopp@austin.ibm.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

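    The polling pattern being tuned looks roughly like this (a simplified
    sketch following the description above; the loop constant and error
    handling are illustrative):

        /* wait up to 5 seconds for the new cpu to check in, sleeping
         * 1ms between polls (previously 200ms) */
        int c;

        for (c = 5000; c && !cpu_callin_map[cpu]; c--)
                msleep(1);

        if (!cpu_callin_map[cpu]) {
                printk(KERN_ERR "Processor %u is stuck.\n", cpu);
                return -ENOENT;
        }
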
* powerpc/pmac: Fix issues with PowerMac "PowerSurge" SMP (Benjamin Herrenschmidt, 2009-06-26)

    The old PowerSurge SMP (ie, dual or quad 604 machines) code has
    numerous issues in the modern world. One is that cpu_possible_map is
    set too late (the device-tree is bogus), so we fail to allocate the
    interrupt stacks and crash. Another problem is the fact that the
    timebase is frozen by the bringup of the second CPU, so the delays in
    the generic code will hang; we need to move some of the calling
    procedure to inside the powermac code.

    This makes it boot again for me.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* Merge branch 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2009-01-02)

    * 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (66 commits)
      x86: export vector_used_by_percpu_irq
      x86: use logical apicid in x2apic_cluster's x2apic_cpu_mask_to_apicid_and()
      sched: nominate preferred wakeup cpu, fix
      x86: fix lguest used_vectors breakage, -v2
      x86: fix warning in arch/x86/kernel/io_apic.c
      sched: fix warning in kernel/sched.c
      sched: move test_sd_parent() to an SMP section of sched.h
      sched: add SD_BALANCE_NEWIDLE at MC and CPU level for sched_mc>0
      sched: activate active load balancing in new idle cpus
      sched: bias task wakeups to preferred semi-idle packages
      sched: nominate preferred wakeup cpu
      sched: favour lower logical cpu number for sched_mc balance
      sched: framework for sched_mc/smt_power_savings=N
      sched: convert BALANCE_FOR_xx_POWER to inline functions
      x86: use possible_cpus=NUM to extend the possible cpus allowed
      x86: fix cpu_mask_to_apicid_and to include cpu_online_mask
      x86: update io_apic.c to the new cpumask code
      x86: Introduce topology_core_cpumask()/topology_thread_cpumask()
      x86: xen: use smp_call_function_many()
      x86: use work_on_cpu in x86/kernel/cpu/mcheck/mce_amd_64.c
      ...

    Fixed up trivial conflict in kernel/time/tick-sched.c manually.

* cpumask: centralize cpu_online_map and cpu_possible_map (Rusty Russell, 2008-12-13)

    Impact: cleanup

    Each SMP arch defines these themselves. Move them to a central
    location.

    Twists:
    1) Some archs (m32, parisc, s390) set possible_map to all 1, so we
       add a CONFIG_INIT_ALL_POSSIBLE for this rather than break them.
    2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'.
       Those archs simply have phys_cpu_present_map replaced everywhere.
    3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky
       so I just manipulate them both in sync.
    4) IA64, cris and m32r have gratuitous 'extern cpumask_t
       cpu_possible_map' declarations.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
    Tested-by: Tony Luck <tony.luck@intel.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Cc: Mike Travis <travis@sgi.com>
    Cc: ink@jurassic.park.msu.ru
    Cc: rmk@arm.linux.org.uk
    Cc: starvik@axis.com
    Cc: tony.luck@intel.com
    Cc: takata@linux-m32r.org
    Cc: ralf@linux-mips.org
    Cc: grundler@parisc-linux.org
    Cc: paulus@samba.org
    Cc: schwidefsky@de.ibm.com
    Cc: lethal@linux-sh.org
    Cc: wli@holomorphy.com
    Cc: davem@davemloft.net
    Cc: jdike@addtoit.com
    Cc: mingo@redhat.com

* powerpc: Convert cpu_to_l2cache() to of_find_next_cache_node() (Nathan Lynch, 2008-12-20)

    The smp code uses cache information to populate cpu_core_map; change
    it to use common code for cache lookup.

    Signed-off-by: Nathan Lynch <ntl@pobox.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* powerpc: Move smp_hw_index to 32-bit code (Nathan Lynch, 2008-12-15)

    smp_hw_index isn't used on 64-bit, so move it from smp.c to
    setup_32.c.

    Signed-off-by: Nathan Lynch <ntl@pobox.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* powerpc: Provide a separate handler for each IPI action (Milton Miller, 2008-11-19)

    With the new generic smp call function helpers, I noticed the code in
    smp_message_recv was a single function call in many cases. While
    getting the message number from the ipi data is easy, we can reduce
    the path length by a function call and a data-dependent switch by
    registering separate IPI actions for these simple calls.

    Originally I left the ipi action array exposed, but then I realized
    the registration code should be common too. The three users each had
    their own name array, so I made a fourth to convert all users to a
    common one.

    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* Merge commit 'origin' (Benjamin Herrenschmidt, 2008-10-14)

    Manual fixup of conflicts on:
        arch/powerpc/include/asm/dcr-regs.h
        drivers/net/ibm_newemac/core.h

* kernel/cpu.c: create a CPU_STARTING cpu_chain notifier (Manfred Spraul, 2008-09-08)

    Right now, there is no notifier that is called on a new cpu before
    the new cpu begins processing interrupts/softirqs. Various kernel
    functions need that notification: e.g. kvm works around its absence
    by calling smp_call_function_single(), and rcu polls cpu_online_map.

    The patch adds a CPU_STARTING notification. It also adds a helper
    function that sends the message to all cpu_chain handlers.

    Tested on x86-64. All other archs are untested. Especially on sparc,
    I'm not sure if I got it right.

    Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

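    A sketch of how a subsystem might consume the new event (the callback
    and its init helper are hypothetical; CPU_STARTING fires on the
    incoming cpu before it takes interrupts):

        #include <linux/cpu.h>
        #include <linux/notifier.h>

        static int my_cpu_callback(struct notifier_block *nb,
                                   unsigned long action, void *hcpu)
        {
                unsigned int cpu = (unsigned long)hcpu;

                switch (action) {
                case CPU_STARTING:
                        /* per-cpu state must be ready before the first
                         * interrupt/softirq on this cpu */
                        init_my_percpu_state(cpu);      /* hypothetical */
                        break;
                }
                return NOTIFY_OK;
        }

        static struct notifier_block my_cpu_notifier = {
                .notifier_call = my_cpu_callback,
        };
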
* powerpc/smp: No need to set_need_resched when getting a resched IPI (Milton Miller, 2008-10-13)

    The comment in the code was asking "Do we have to do this?", and
    according to x86 and s390 the answer is no: the scheduler will do it
    before calling the arch hook.

    Signed-off-by: Milton Miller <miltonm@bga.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Make core id information available to userspace (Nathan Lynch, 2008-07-28)

    Existing Open Firmware practice is to report each processor core as a
    separate node in the device tree. Report the value of the "reg" OF
    property corresponding to a logical CPU's device node as the core_id
    attribute in /sys/devices/system/cpu/cpu*/topology/core_id.

    Signed-off-by: Nathan Lynch <ntl@pobox.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Make core sibling information available to userspace (Nathan Lynch, 2008-07-28)

    Implement the notion of "core siblings" for powerpc. This makes
    /sys/devices/system/cpu/cpu*/topology/core_siblings report sensible
    values, indicating online CPUs which share an L2 cache.

    BenH: Made cpu_to_l2cache() use of_find_node_by_phandle() instead of
    an IBM-specific open-coded search.

    Signed-off-by: Nathan Lynch <ntl@pobox.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* powerpc: Update cpu_sibling_maps dynamically (Nathan Lynch, 2008-07-28)

    Rather than doing one initialization pass over all the per-cpu
    cpu_sibling_maps at boot, update the maps at cpu online/offline time.

    This is a behavior change -- the thread_siblings attribute now
    reflects only online siblings, whereas it would display offline
    siblings before. The new behavior matches that of x86, and is
    arguably more useful.

    Signed-off-by: Nathan Lynch <ntl@pobox.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

* Merge commit 'origin/master' (Benjamin Herrenschmidt, 2008-07-15)

    Manual merge of:
        arch/powerpc/Kconfig
        arch/powerpc/kernel/stacktrace.c
        arch/powerpc/mm/slice.c
        arch/ppc/kernel/smp.c

* smp_call_function: get rid of the unused nonatomic/retry argument (Jens Axboe, 2008-06-26)

    It's never used, and the comments refer to nonatomic and retry
    interchangeably. So get rid of it.

    Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

* powerpc: convert to generic helpers for IPI function calls (Jens Axboe, 2008-06-26)

    This converts ppc to use the new helpers for smp_call_function() and
    friends, and adds support for smp_call_function_single(). ppc loses
    the timeout functionality of smp_call_function_mask() with this
    change, as the generic code does not provide that.

    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

* [POWERPC] Fix sparse warnings in arch/powerpc/kernel (Michael Ellerman, 2008-05-14)

    Make a few things static in lparcfg.c
    Make init and exit routines static in rtas_flash.c
    Make things static in rtas_pci.c
    Make some functions static in rtas.c
    Make fops static in rtas-proc.c
    Remove unneeded extern for do_gtod in smp.c
    Make clocksource_init() static in time.c
    Make last_tick_len and ticklen_to_xs static in time.c
    Move the declaration of the pvr per-cpu into smp.h
    Make kexec_smp_down() and kexec_stack static in machine_kexec_64.c
    Don't return void in arch_teardown_msi_irqs() in msi.c
    Move the declaration of GregorianDay() into asm/time.h

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Bolt in SLB entry for kernel stack on secondary cpus (Paul Mackerras, 2008-05-02)

    This fixes a regression reported by Kamalesh Bulabel where a POWER4
    machine would crash because of an SLB miss at a point where the SLB
    miss exception was unrecoverable. This regression is tracked at:

        http://bugzilla.kernel.org/show_bug.cgi?id=10082

    SLB misses at such points shouldn't happen because the kernel stack
    is the only memory accessed other than things in the first segment of
    the linear mapping (which is mapped at all times by entry 0 of the
    SLB). The context switch code ensures that SLB entry 2 covers the
    kernel stack, if it is not already covered by entry 0. None of
    entries 0 to 2 are ever replaced by the SLB miss handler.

    Where this went wrong is that the context switch code assumes it
    doesn't have to write to SLB entry 2 if the new kernel stack is in
    the same segment as the old kernel stack, since entry 2 should
    already be correct. However, when we start up a secondary cpu, it
    calls slb_initialize, which doesn't set up entry 2. This is correct
    for the boot cpu, where we will be using a stack in the kernel BSS
    at this point (i.e. init_thread_union), but not necessarily for
    secondary cpus, whose initial stack can be allocated anywhere.

    This doesn't cause any immediate problem since the SLB miss handler
    will just create an SLB entry somewhere else to cover the initial
    stack. In fact it's possible for the cpu to go quite a long time
    without SLB entry 2 being valid. Eventually, though, the entry
    created by the SLB miss handler will get overwritten by some other
    entry, and if the next access to the stack is at an unrecoverable
    point, we get the crash.

    This fixes the problem by making slb_initialize create a suitable
    entry for the kernel stack, if we are on a secondary cpu and the
    stack isn't covered by SLB entry 0. This requires initializing the
    get_paca()->kstack field earlier, so I do that in smp_create_idle
    where the current field is initialized. This also abstracts a bit of
    the computation that mk_esid_data in slb.c does so that it can be
    used in slb_initialize.

    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Make smp_send_stop() handle panic and xmon reboot (Olof Johansson, 2008-01-25)

    smp_send_stop() will send an IPI to all other cpus to shut them down.
    However, for the case of xmon-based reboots (as well as potentially
    some panics), the other cpus are (or might be) spinning with
    interrupts off, and won't take the IPI.

    Current code will drop us into the debugger when the IPI fails, which
    means we're in an infinite loop that we can't get out of without an
    external reset of some sort.

    Instead, make the smp_send_stop() IPI call path just print the
    warning about being unable to send IPIs, but make it return so the
    rest of the shutdown sequence can continue. It's not perfect, but
    it's the lesser of two evils.

    Also move the call_lock handling outside of smp_call_function_map so
    we can avoid deadlocks in smp_send_stop().

    Signed-off-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Make smp_call_function_map static (Olof Johansson, 2008-01-25)

    smp_call_function_map should be static, and for consistency, prepend
    it with __ like other local helper functions in the same file.

    Signed-off-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* Convert cpu_sibling_map to be a per cpu variable (Mike Travis, 2007-10-16)

    Convert cpu_sibling_map from a static array sized by NR_CPUS to a
    per_cpu variable. This saves sizeof(cpumask_t) * NR unused cpus.
    Access is mostly from startup and CPU HOTPLUG functions.

    Signed-off-by: Mike Travis <travis@sgi.com>
    Cc: Andi Kleen <ak@suse.de>
    Cc: Christoph Lameter <clameter@sgi.com>
    Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

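    The shape of the conversion (a before/after sketch; declaration and
    access sites follow the description above):

        #include <linux/cpumask.h>
        #include <linux/percpu.h>

        /* before: NR_CPUS entries exist whether or not the cpus do */
        cpumask_t cpu_sibling_map[NR_CPUS];

        /* after: storage is allocated per possible cpu */
        DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);

        /* accesses change from cpu_sibling_map[cpu] to: */
        cpumask_t *map = &per_cpu(cpu_sibling_map, cpu);
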
* [POWERPC] Implement clockevents driver for powerpc (Tony Breeds, 2007-10-03)

    This registers a clock event structure for the decrementer and turns
    on CONFIG_GENERIC_CLOCKEVENTS, which means that we now don't need
    most of timer_interrupt(), since the work is done in generic code.

    For secondary CPUs, their decrementer clockevent is registered when
    the CPU comes up (the generic code automatically removes the
    clockevent when the CPU goes down).

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Avoid pointless WARN_ON(irqs_disabled()) from panic codepath (Satyam Sharma, 2007-09-22)

    > ------------[ cut here ]------------
    > Badness at arch/powerpc/kernel/smp.c:202

    comes when smp_call_function_map() has been called with irqs
    disabled, which is illegal. However, there is a special case, the
    panic() codepath, when we do not want to warn about this -- warning
    at that time is pointless anyway, and only serves to scroll away the
    *real* cause of the panic and distracts from the real bug.

    * So let's extract the WARN_ON() from smp_call_function_map() into
      all its callers -- smp_call_function() and
      smp_call_function_single()

    * Also, introduce another caller of smp_call_function_map(), namely
      __smp_call_function() (and make smp_call_function() a wrapper over
      this), which does *not* warn about disabled irqs

    * Use this __smp_call_function() from the panic codepath's
      smp_send_stop()

    We also end up having to move the code of smp_send_stop() below the
    definition of __smp_call_function().

    Signed-off-by: Satyam Sharma <satyam@infradead.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

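    The resulting call structure, roughly (a sketch with argument lists
    simplified for illustration):

        static int __smp_call_function(void (*func)(void *), void *info,
                                       int wait)
        {
                /* no WARN_ON here -- safe to reach from panic */
                return smp_call_function_map(func, info, wait,
                                             cpu_online_map);
        }

        int smp_call_function(void (*func)(void *), void *info, int wait)
        {
                /* the warning now lives in the normal entry point */
                WARN_ON(irqs_disabled());
                return __smp_call_function(func, info, wait);
        }

        void smp_send_stop(void)
        {
                /* panic path: deliberately bypasses the WARN_ON */
                __smp_call_function(stop_this_cpu, NULL, 0);
        }
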
* [POWERPC] Fix num_cpus calculation in smp_call_function_map() (Kevin Corry, 2007-08-03)

    In smp_call_function_map(), num_cpus is set to the number of online
    CPUs minus one. However, if the CPU mask does not include all CPUs
    (except the one we're running on), the routine will hang in the first
    while() loop until the 8 second timeout occurs.

    num_cpus should instead be set to the number of CPUs specified in the
    mask passed into the routine, after we've made any modifications to
    the mask. With this change, we can also get rid of the call to
    cpus_empty() and avoid adding another pass through the bitmask.

    Signed-off-by: Kevin Corry <kevcorry@us.ibm.com>
    Signed-off-by: Carl Love <carll@us.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

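    The fix boils down to deriving the wait count from the mask itself
    (a sketch; "map" stands for the already-trimmed cpumask):

        /* count only the cpus actually targeted, not online - 1 */
        num_cpus = cpus_weight(map);
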
* [POWERPC] Allow smp_call_function_single() to current cpu (Avi Kivity, 2007-07-22)

    This removes the requirement for callers to get_cpu() to check in
    simple cases. i386 and x86_64 already received a similar treatment.

    Signed-off-by: Avi Kivity <avi@qumranet.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Fix smp_call_function to be preempt-safe (Hugh Dickins, 2007-05-22)

    smp_call_function_map() was not safe against preemption to another
    cpu: its test for removing self from the map was outside the
    spinlock. Rearrange it a little to fix that.

    smp_call_function_single() was also wrong: now get_cpu() before
    excluding self, as other architectures do.

    Signed-off-by: Hugh Dickins <hugh@veritas.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

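    The preempt-safe pattern for the single-cpu case looks roughly like
    this (a sketch with simplified arguments):

        int me = get_cpu();     /* pins us to this cpu */

        if (cpu == me) {
                local_irq_disable();
                func(info);     /* run locally, no IPI needed */
                local_irq_enable();
        } else {
                smp_call_function_map(func, info, wait,
                                      cpumask_of_cpu(cpu));
        }
        put_cpu();              /* re-enable preemption */
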
* [POWERPC] Add smp_call_function_map and smp_call_function_single (will schmidt, 2007-05-07)

    Add a new function named smp_call_function_single(). This matches a
    generic prototype from include/linux/smp.h.

    Add a function smp_call_function_map(). This is, for the most part, a
    rename of smp_call_function, with some added cpumask support.
    smp_call_function and smp_call_function_single call into
    smp_call_function_map.

    Lightly tested on 970mp (blade), power4 and power5.

    Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
    Cc: Anton Blanchard <anton@samba.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Make tlb flush batch use lazy MMU mode (Benjamin Herrenschmidt, 2007-04-12)

    The current tlb flush code on powerpc 64 bits has a subtle race since
    we lost the page table lock: new PTEs can be faulted in after a
    previous one has been removed but before the corresponding hash entry
    has been evicted, which can lead to all sorts of fatal problems.

    This patch reworks the batch code completely. It doesn't use the
    mmu_gather stuff anymore. Instead, we use the lazy mmu hooks that
    were added by the paravirt code. They have the nice property that the
    enter/leave lazy mmu mode pair is always fully contained by the PTE
    lock for a given range of PTEs. Thus we can guarantee that all
    batches are flushed on a given CPU before it drops that lock.

    We also generalize batching for any PTE update that requires a flush.

    Batching is now enabled on a CPU by arch_enter_lazy_mmu_mode() and
    disabled by arch_leave_lazy_mmu_mode(). The code expects that this is
    always contained within a PTE lock section, so no preemption can
    happen and no PTE insertion in that range can occur from another CPU.
    When batching is enabled on a CPU, every PTE update that needs a hash
    flush will use the batch for that flush.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

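    The invariant described above -- enter/leave fully inside the PTE
    lock -- looks like this in use (a sketch; the surrounding function is
    hypothetical):

        static void update_ptes_locked(spinlock_t *ptl)
        {
                spin_lock(ptl);
                arch_enter_lazy_mmu_mode();     /* batching on, this cpu */

                /* ...PTE updates needing a hash flush accumulate in
                 * the per-cpu batch here... */

                arch_leave_lazy_mmu_mode();     /* batch flushed here,
                                                 * before the lock drops */
                spin_unlock(ptl);
        }
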
* [POWERPC] Move MPIC smp routines into mpic.c (Michael Ellerman, 2007-02-13)

    Move a couple of MPIC smp routines into mpic.c. They're inside an SMP
    block in mpic.c, so they're still only built for SMP.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [PATCH] Change cpu_up and co from __devinit to __cpuinit (Gautham R Shenoy, 2007-01-11)

    Compiling the kernel with CONFIG_HOTPLUG=y and CONFIG_HOTPLUG_CPU=n
    with CONFIG_RELOCATABLE=y generates the following modpost warnings:

        WARNING: vmlinux - Section mismatch: reference to .init.data: from .text between '_cpu_up' (at offset 0xc0141b7d) and 'cpu_up'
        WARNING: vmlinux - Section mismatch: reference to .init.data: from .text between '_cpu_up' (at offset 0xc0141b9c) and 'cpu_up'
        WARNING: vmlinux - Section mismatch: reference to .init.text:__cpu_up from .text between '_cpu_up' (at offset 0xc0141bd8) and 'cpu_up'
        WARNING: vmlinux - Section mismatch: reference to .init.data: from .text between '_cpu_up' (at offset 0xc0141c05) and 'cpu_up'
        WARNING: vmlinux - Section mismatch: reference to .init.data: from .text between '_cpu_up' (at offset 0xc0141c26) and 'cpu_up'
        WARNING: vmlinux - Section mismatch: reference to .init.data: from .text between '_cpu_up' (at offset 0xc0141c37) and 'cpu_up'

    This is because cpu_up, _cpu_up and __cpu_up (in some architectures)
    are defined as __devinit, AND __cpu_up calls some __cpuinit
    functions. Since __cpuinit would map to __init with this kind of
    configuration, we get a .text-referring-to-.init.data warning.

    This patch solves the problem by converting all of __cpu_up, _cpu_up
    and cpu_up from __devinit to __cpuinit. The approach is justified
    since the callers of cpu_up are either dependent on
    CONFIG_HOTPLUG_CPU or are of __init type. Thus when
    CONFIG_HOTPLUG_CPU=y, all these cpu-up functions land in the .text
    section, and when CONFIG_HOTPLUG_CPU=n, they land in the .init
    section.

    Tested on an i386 SMP machine running linux-2.6.20-rc3-mm1.

    Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
    Cc: Vivek Goyal <vgoyal@in.ibm.com>
    Cc: Mikael Starvik <starvik@axis.com>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Kyle McMartin <kyle@mcmartin.ca>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: "David S. Miller" <davem@davemloft.net>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [POWERPC] cell: add cpufreq driver for Cell BE processor (Christian Krafft, 2006-10-25)

    This patch adds a cpufreq backend driver to enable frequency scaling
    on cell.

    Signed-off-by: Christian Krafft <krafft@de.ibm.com>
    Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* IRQ: Maintain regs pointer globally rather than passing to IRQ handlers (David Howells, 2006-10-05)

    Maintain a per-CPU global "struct pt_regs *" variable which can be
    used instead of passing regs around manually through all ~1800
    interrupt handlers in the Linux kernel.

    The regs pointer is used in few places, but it potentially costs both
    stack space and code to pass it around. On the FRV arch, removing the
    regs parameter from all the genirq functions results in a 20% speed
    up of the IRQ exit path (ie: from leaving timer_interrupt() to
    leaving do_IRQ()).

    Where appropriate, an arch may override the generic storage facility
    and do something different with the variable. On FRV, for instance,
    the address is maintained in GR28 at all times inside the kernel as
    part of general exception handling.

    Having looked over the code, it appears that the parameter may be
    handed down through up to twenty or so layers of functions. Consider
    a USB character device attached to a USB hub, attached to a USB
    controller that posts its interrupts through a cascaded auxiliary
    interrupt controller. A character device driver may want to pass regs
    to the sysrq handler through the input layer, which adds another few
    layers of parameter passing.

    I've built this code with allyesconfig for x86_64 and i386. I've
    runtested the main part of the code on FRV and i386, though I can't
    test most of the drivers. I've also done partial conversion for
    powerpc and MIPS -- these at least compile with minimal
    configurations.

    This will affect all archs. Mostly the changes should be relatively
    easy. Take do_IRQ(), store the regs pointer at the beginning, saving
    the old one:

        struct pt_regs *old_regs = set_irq_regs(regs);

    And put the old one back at the end:

        set_irq_regs(old_regs);

    Don't pass regs through to generic_handle_irq() or __do_IRQ().

    In timer_interrupt(), this sort of change will be necessary:

        -       update_process_times(user_mode(regs));
        -       profile_tick(CPU_PROFILING, regs);
        +       update_process_times(user_mode(get_irq_regs()));
        +       profile_tick(CPU_PROFILING);

    I'd like to move update_process_times()'s use of get_irq_regs() into
    itself, except that i386, alone of the archs, uses something other
    than user_mode().

    Some notes on the interrupt handling in the drivers:

     (*) input_dev() is now gone entirely. The regs pointer is no longer
         stored in the input_dev struct.

     (*) finish_unlinks() in drivers/usb/host/ohci-q.c needs checking. It
         does something different depending on whether it's been supplied
         with a regs pointer or not.

     (*) Various IRQ handler function pointers have been moved to type
         irq_handler_t.

    Signed-Off-By: David Howells <dhowells@redhat.com>
    (cherry picked from 1b16e7ac850969f38b375e511e3fa2f474a33867 commit)

* [POWERPC] Fix non-MPIC CHRPs with CONFIG_SMP set (Benjamin Herrenschmidt, 2006-07-25)

    Pseudo-CHRP machines like Pegasos without an MPIC would crash at boot
    if CONFIG_SMP was set because the "smp_ops" pointer was set to MPIC
    related ops unconditionally. This patch makes it NULL on machines
    that don't support SMP and provides proper default behaviour in the
    callers when smp_ops is NULL.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30)

    Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
    Signed-off-by: Adrian Bunk <bunk@stusta.de>

* [POWERPC] Add starting of secondary 86xx CPUs (Jon Loeliger, 2006-06-21)

    Clear the high BATs during load_up_mmu if FTR_HAS_HIGH_BATS. Allow
    just a bit more time for secondary CPUs to phone home.

    Signed-off-by: Wei Zhang <Wei.Zhang@freescale.com>
    Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
    Signed-off-by: Jon Loeliger <jdl@freescale.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [PATCH] for_each_possible_cpu: powerpc (KAMEZAWA Hiroyuki, 2006-03-28)

    for_each_cpu() actually iterates across all possible CPUs. We've had
    mistakes in the past where people were using for_each_cpu() where
    they should have been iterating across only online or present CPUs.
    This is inefficient and possibly buggy.

    We're renaming for_each_cpu() to for_each_possible_cpu() to avoid
    this in the future.

    This patch replaces for_each_cpu with for_each_possible_cpu.

    Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

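    The rename is mechanical; a sketch of the pattern (the percpu
    variable and the reset function are hypothetical stand-ins):

        #include <linux/cpumask.h>
        #include <linux/percpu.h>

        DEFINE_PER_CPU(int, some_state);        /* hypothetical */

        static void reset_all_cpu_state(void)
        {
                int cpu;

                /* every cpu that could ever come online, not just those
                 * online right now -- the old spelling, for_each_cpu(cpu),
                 * was easy to misread as "online cpus only" */
                for_each_possible_cpu(cpu)
                        per_cpu(some_state, cpu) = 0;
        }
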
* powerpc: Implement accurate task and CPU time accounting (Paul Mackerras, 2006-02-23)

    This implements accurate task and cpu time accounting for 64-bit
    powerpc kernels. Instead of accounting a whole jiffy of time to a
    task on a timer interrupt because that task happened to be running at
    the time, we now account time in units of timebase ticks according to
    the actual time spent by the task in user mode and kernel mode. We
    also count the time spent processing hardware and software interrupts
    accurately. This is conditional on CONFIG_VIRT_CPU_ACCOUNTING. If
    that is not set, we do tick-based approximate accounting as before.

    To get this accurate information, we read either the PURR (processor
    utilization of resources register) on POWER5 machines, or the
    timebase on other machines, on:

    * each entry to the kernel from usermode
    * each exit to usermode
    * transitions between process context, hard irq context and soft irq
      context in kernel mode
    * context switches

    On POWER5 systems with shared-processor logical partitioning we also
    read both the PURR and the timebase at each timer interrupt and
    context switch in order to determine how much time has been taken by
    the hypervisor to run other partitions ("steal" time).
    Unfortunately, since we need values of the PURR on both threads at
    the same time to accurately calculate the steal time, and since we
    can only calculate steal time on a per-core basis, the apportioning
    of the steal time between idle time (time which we ceded to the
    hypervisor in the idle loop) and actual stolen time is somewhat
    approximate at the moment.

    This is all based quite heavily on what s390 does, and it uses the
    generic interfaces that were added by the s390 developers, i.e.
    account_system_time(), account_user_time(), etc.

    This patch doesn't add any new interfaces between the kernel and
    userspace, and doesn't change the units in which time is reported to
    userspace by things such as /proc/stat, /proc/<pid>/stat,
    getrusage(), times(), etc. Internally the various task and cpu times
    are stored in timebase units, but they are converted to USER_HZ units
    (1/100th of a second) when reported to userspace. Some precision is
    therefore lost, but there should not be any accumulating error, since
    the internal accumulation is at full precision.

    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [PATCH] powerpc: avoid timer interrupt replay effect when onlining cpu (Nathan Lynch, 2006-02-07)

    When a cpu is hotplug-onlined, if we don't set per_cpu(last_jiffy) to
    something sane, timer_interrupt will execute its while loop for every
    tick missed since the cpu was last online (or since the system was
    booted, if we're adding a new cpu). This can cause weird hangs, ssh
    sessions dropping, and we can even go into xmon if we take a global
    IPI at the wrong time.

    Signed-off-by: Nathan Lynch <ntl@pobox.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [PATCH] powerpc: task_thread_info() (Al Viro, 2006-01-12)

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>

* [PATCH] ppc64: Add NUMA cpu summary at boot (Anton Blanchard, 2006-01-08)

    We used to print a NUMA cpu summary at boot before the hotplug cpu
    code was added. This has been useful in the past for catching machine
    configuration problems as well as firmware bugs. This patch restores
    that functionality. An example of the output is:

        Node 0 CPUs: 0-7
        Node 1 CPUs: 8-15

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [PATCH] powerpc: Add arch dependent basic infrastructure for Kdump (Michael Ellerman, 2006-01-08)

    Implement machine_crash_shutdown, which will be called by crash_kexec
    (called in case of a panic, sysrq, etc.): disable the interrupts,
    shoot down cpus using a debugger IPI, and collect regs for all CPUs.

    elfcorehdr= specifies the location of the elf core header stored by
    the crashed kernel. This command line option will be passed by
    kexec-tools to the capture kernel.

    savemaxmem= specifies the actual memory size that the first kernel
    has; this value will be used for dumping in the capture kernel. This
    command line option will be passed by kexec-tools to the capture
    kernel.

    Signed-off-by: Haren Myneni <haren@us.ibm.com>
    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [PATCH] powerpc: Remove some unneeded fields from the paca (David Gibson, 2006-01-08)

    This patch removes several unnecessary fields from the paca:

    - next_jiffy_update_tb was simply unused. Remove trivially.
    - The exdsi exception save area was not used. There were plans to use
      it, but they never seem to have gone anywhere. If they ever do, we
      can put it back. Remove from the paca, and from asm-offsets.c.
    - The default_decr field was used from asm, but was only ever
      assigned the value of tb_ticks_per_jiffy. Just access
      tb_ticks_per_jiffy from asm directly instead.

    Built and booted on POWER5 LPAR and iSeries RS64.

    Signed-off-by: David Gibson <dwg@au1.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [PATCH] powerpc: More debugging fixups (Michael Ellerman, 2005-11-15)

    Add a few more missing includes of udbg.h.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>