path: root/arch/powerpc/platforms/cell
Commit message (Author, Age)
* [CELL] allow linux to map Cell regs on legacy SLOF tree. (Jean-Christophe DUBOIS, 2007-07-20)
  The platforms missing the "cpus" property in the "be" node are mono-Cell platforms such as CAB or Getaway. Therefore it is possible to assume that if there is no "cpus" property under the "be" node, we can safely return the device node without further checking. This is a bit hacky but ... it allows it to work on these platforms.
  Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
  Acked-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
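  A minimal sketch of the fallback described above, assuming the lookup lives in a helper like cbe_get_be_node(); the function name and surrounding details are assumptions based on the description, not taken from the patch itself:

      /* Sketch only: return the "be" node for a cpu, falling back to the
       * first "be" node when the legacy SLOF tree has no "cpus" property. */
      static struct device_node *cbe_get_be_node(int cpu_id)
      {
          struct device_node *np;

          for_each_node_by_type(np, "be") {
              const phandle *cpu_handle;
              int len;

              cpu_handle = of_get_property(np, "cpus", &len);
              if (!cpu_handle)        /* legacy SLOF: mono-Cell platform */
                  return np;

              /* otherwise match cpu_id against the referenced cpu nodes ... */
          }
          return NULL;
      }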
* [CELL] fix cbe_thermal for legacy SLOF tree. (Jean-Christophe DUBOIS, 2007-07-20)
  This is an update of the previous patch, changed based on Christian Krafft's comments. On some legacy SLOF trees the generic code is unable to ioremap some Cell BE registers, so the "generic" functions return a NULL pointer, triggering a crash on such platforms. Let's handle this more gracefully.
  Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
  Acked-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
* [CELL] fix cbe_cpufreq for legacy SLOF tree. (Jean-Christophe DUBOIS, 2007-07-20)
  This is an update of the previous patch, changed based on Christian Krafft's comments. On some legacy SLOF trees the generic code is unable to ioremap some Cell BE registers, so the "generic" functions return a NULL pointer, triggering a crash on such platforms. Let's handle this more gracefully.
  Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
  Acked-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
* [CELL] cbe_cpufreq: reorganize code (Christian Krafft, 2007-07-20)
  This patch reorganizes the code of the driver into three files. Two of them, cbe_cpufreq_pmi.c and cbe_cpufreq_pervasive.c, deal with the hardware; cbe_cpufreq.c contains the logic. There is no change in behaviour, except that the PMI related functions are now located in a separate module, cbe_cpufreq_pmi, which will be required by cbe_cpufreq if CONFIG_CBE_CPUFREQ_PMI is set.
  Signed-off-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
* [CELL] cbe_cpufreq: fix minor issues (Christian Krafft, 2007-07-20)
  Minor issues have been fixed:
    * added a missing call to of_node_put()
    * fixed the signedness of a function parameter
    * added some line breaks
    * changed the global pmi_frequency_limit into a per-node pmi_slow_mode_limit array
  Signed-off-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
* [CELL] cbe_cpufreq: fix initialization (Christian Krafft, 2007-07-20)
  This patch fixes the initialization of the cbe_cpufreq driver. The code that initializes the PMI related functions (registering the cpufreq notifier block and registering a pmi handler) was called once per cpu, which resulted in the notifier block being called in an endless loop. This patch moves the initialization code to the module init path, so it only gets called once.
  Signed-off-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
* [CELL] cbe_cpufreq: fix latency measurement (Christian Krafft, 2007-07-20)
  This patch fixes the debug code that calculates the transition time when changing the slow modes on a Cell BE cpu.
  Signed-off-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
* [CELL] pmi: remove support for multiple devices. (Christian Krafft, 2007-07-20)
  The pmi driver has been simplified by removing support for multiple devices. As there is never more than one pmi device per machine, there is no need to specify the device when listening for and sending messages; this way the caller (cbe_cpufreq) doesn't need to scan the device tree. When registering a handler on a board without a pmi interface, pmi.c will just return -ENODEV. The patch that fixed the breakage of cell_defconfig has been broken out of the earlier version of this patch, so this is the version that applies cleanly on top of it.
  Signed-off-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
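  A rough sketch of how a caller might look after this simplification. The pmi details here (the pmi_handler fields, a PMI_TYPE_FREQ_CHANGE message type, and pmi_register_handler() taking no device argument) are assumptions drawn from the description above, not copied from the patch:

      #include <asm/pmi.h>

      /* hypothetical cbe_cpufreq-style listener: no device lookup needed */
      static void cbe_pmi_handle_message(pmi_message_t msg)
      {
          /* react to a slow-mode change requested by firmware ... */
      }

      static struct pmi_handler cbe_pmi_handler = {
          .type = PMI_TYPE_FREQ_CHANGE,
          .handle_pmi_message = cbe_pmi_handle_message,
      };

      static int __init cbe_pmi_init(void)
      {
          /* returns -ENODEV when the board has no pmi interface */
          return pmi_register_handler(&cbe_pmi_handler);
      }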
* mm: Remove slab destructors from kmem_cache_create(). (Paul Mundt, 2007-07-19)
  Slab destructors were no longer supported after Christoph's c59def9f222d44bb7e2f0a559f2906191a0862d7 change. They've been BUGs for both slab and slub, and slob never supported them either. This rips out support for the dtor pointer from kmem_cache_create() completely and fixes up every single callsite in the kernel (there were about 224, not including the slab allocator definitions themselves, or the documentation references).
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
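  For the cell code this is a mechanical callsite change; the sketch below shows the general before/after shape of such a call (the cache name and constructor are illustrative, not the actual spufs ones):

      /* before: trailing destructor argument, always NULL in practice */
      cache = kmem_cache_create("example_cache", sizeof(struct example),
                                0, SLAB_HWCACHE_ALIGN, example_ctor, NULL);

      /* after: the dtor parameter is gone from kmem_cache_create() */
      cache = kmem_cache_create("example_cache", sizeof(struct example),
                                0, SLAB_HWCACHE_ALIGN, example_ctor);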
* fix spufs build after ->fault changes (Christoph Hellwig, 2007-07-19)
  83c54070ee1a2d05c89793884bea1a03f2851ed4 broke spufs by incorrectly updating the code, this patch gets it to compile again. It's probably still broken due to the scheduler changes, but this at least makes sure cell kernels can still be built.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Cc: Arnd Bergmann <arnd@arndb.de>
  Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Nick Piggin <nickpiggin@yahoo.com.au>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm: fault feedback #2 (Nick Piggin, 2007-07-19)
  This patch completes Linus's wish that the fault return codes be made into bit flags, which I agree makes everything nicer. This requires all handle_mm_fault callers to be modified (possibly the modifications should go further and do things like fault accounting in handle_mm_fault -- however that would be for another patch).
  [akpm@linux-foundation.org: fix alpha build]
  [akpm@linux-foundation.org: fix s390 build]
  [akpm@linux-foundation.org: fix sparc build]
  [akpm@linux-foundation.org: fix sparc64 build]
  [akpm@linux-foundation.org: fix ia64 build]
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Cc: Richard Henderson <rth@twiddle.net>
  Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
  Cc: Russell King <rmk@arm.linux.org.uk>
  Cc: Ian Molton <spyro@f2s.com>
  Cc: Bryan Wu <bryan.wu@analog.com>
  Cc: Mikael Starvik <starvik@axis.com>
  Cc: David Howells <dhowells@redhat.com>
  Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
  Cc: "Luck, Tony" <tony.luck@intel.com>
  Cc: Hirokazu Takata <takata@linux-m32r.org>
  Cc: Geert Uytterhoeven <geert@linux-m68k.org>
  Cc: Roman Zippel <zippel@linux-m68k.org>
  Cc: Greg Ungerer <gerg@uclinux.org>
  Cc: Matthew Wilcox <willy@debian.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
  Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
  Cc: Richard Curnow <rc@rc0.org.uk>
  Cc: William Lee Irwin III <wli@holomorphy.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Jeff Dike <jdike@addtoit.com>
  Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
  Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
  Cc: Chris Zankel <chris@zankel.net>
  Acked-by: Kyle McMartin <kyle@mcmartin.ca>
  Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
  Acked-by: Ralf Baechle <ralf@linux-mips.org>
  Acked-by: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  [ Still apparently needs some ARM and PPC loving - Linus ]
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
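  The typical caller pattern after this change looks roughly like the following; this is an illustrative sketch of an arch fault handler under the new bit-flag scheme, not the actual powerpc or spufs code:

      int fault;

      fault = handle_mm_fault(mm, vma, address, is_write);
      if (unlikely(fault & VM_FAULT_ERROR)) {
          if (fault & VM_FAULT_OOM)
              goto out_of_memory;
          else if (fault & VM_FAULT_SIGBUS)
              goto do_sigbus;
          BUG();
      }
      /* the return value is now a bit mask, so accounting tests bits */
      if (fault & VM_FAULT_MAJOR)
          current->maj_flt++;
      else
          current->min_flt++;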
* Cell: Draw SPE helper penguin logos (Geert Uytterhoeven, 2007-07-17)
  Let spu_management_ops.enumerate_spus() return the number of found SPEs and use that information to draw some little helper penguin logos.
  Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
  Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
  Acked-by: Arnd Bergmann <arnd@arndb.de>
  Acked-by: James Simmons <jsimmons@infradead.org>
  Cc: "Antonino A. Daplas" <adaplas@pol.net>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'for-2.6.23' into merge (Paul Mackerras, 2007-07-10)
* [POWERPC] spufs: Save dma_tagstatus_R in CSA (Kazunori Asayama, 2007-07-03)
  The function backing_ops->read_mfc_tagstatus() doesn't return a correct value because the dma_tagstatus_R register isn't saved in CSA. This fixes the problem.
  Signed-off-by: Kazunori Asayama <asayama@sm.sony.co.jp>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Fix lost events in poll/epoll on mfc (Kazunori Asayama, 2007-07-03)
  When waiting for I/O events on mfc in an SPU context by using poll/epoll syscalls, some of the events can be lost because of the wrong order of poll_wait and MFC status checks in the spufs_mfc_poll function and the non-atomic update of tagwait. This fixes the problem.
  Signed-off-by: Kazunori Asayama <asayama@sm.sony.co.jp>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
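  The ordering issue is the classic poll pattern: poll_wait() has to register on the waitqueue before the status is sampled, otherwise a wakeup arriving between the check and the caller's sleep can be missed. A generic, hypothetical sketch of the correct shape (not the actual spufs_mfc_poll code):

      static unsigned int foo_poll(struct file *file, poll_table *wait)
      {
          struct foo_dev *dev = file->private_data;   /* hypothetical device */
          unsigned int mask = 0;

          /* 1. register on the waitqueue first ... */
          poll_wait(file, &dev->waitq, wait);

          /* 2. ... and only then sample the status */
          if (foo_read_ready(dev))
              mask |= POLLIN | POLLRDNORM;
          if (foo_write_ready(dev))
              mask |= POLLOUT | POLLWRNORM;

          return mask;
      }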
* [POWERPC] spufs: Add spu stats in sysfs (Christoph Hellwig, 2007-07-03)
  Export spu statistics in sysfs.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: Fix runqueue corruption (Christoph Hellwig, 2007-07-03)
  spu_activate can be called from multiple threads at the same time on behalf of the same spu context. We need to make sure to only add it once to avoid runqueue corruption.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
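  The essence of the fix is an "add only if not already queued" check under the runqueue lock. A simplified sketch; the spu_prio/runq field names follow the spufs scheduler's style but are assumptions here, not a verbatim excerpt:

      spin_lock(&spu_prio->runq_lock);
      if (list_empty(&ctx->rq)) {
          /* only queue the context if it is not on the runqueue yet */
          list_add_tail(&ctx->rq, &spu_prio->runq[ctx->prio]);
          set_bit(ctx->prio, spu_prio->bitmap);
      }
      spin_unlock(&spu_prio->runq_lock);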
* [POWERPC] spusched: Disable tick when not needed (Christoph Hellwig, 2007-07-03)
  Only enable the scheduler tick if we have any context waiting to be scheduled.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Fix libassist accounting (Jeremy Kerr, 2007-07-03)
  We're currently too permissive with counting libassist calls - fix the check on the SPE stop-and-signal status.
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Add stat file to spufs (Christoph Hellwig, 2007-07-03)
  Export per-context statistics in spufs.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Implement /proc/spu_loadavg (Christoph Hellwig, 2007-07-03)
  Provide load average information for spu contexts. The format is identical to /proc/loadavg, which is also where a lot of the code and concepts are borrowed from.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
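  The /proc/loadavg machinery it borrows is the kernel's fixed-point exponential average; the constants below are the ones used for the CPU load average, shown here as a reminder of the technique rather than as an excerpt from the spufs patch (the spu_avenrun name is an assumption):

      /* 11-bit fixed point; EXP_* are 1/exp(5sec/period) in that format */
      #define FSHIFT   11
      #define FIXED_1  (1 << FSHIFT)
      #define EXP_1    1884    /* 1 minute   */
      #define EXP_5    2014    /* 5 minutes  */
      #define EXP_15   2037    /* 15 minutes */

      #define CALC_LOAD(load, exp, n)        \
          load *= exp;                       \
          load += (n) * (FIXED_1 - exp);     \
          load >>= FSHIFT;

      /* called periodically with the number of runnable spu contexts */
      CALC_LOAD(spu_avenrun[0], EXP_1, nr_active);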
* [POWERPC] spufs: Add tid file (Christoph Hellwig, 2007-07-03)
  The new tid file contains the ID of the thread currently running the context, if any. This is used so that the new spu-top and spu-ps tools can find the thread in /proc.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Trivial whitespace fixes (Jeremy Kerr, 2007-07-03)
  Remove redundant whitespace in arch/powerpc/platforms/cell/spufs/
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Remove spufs_dir_inode_operations (Jeremy Kerr, 2007-07-03)
  spufs_dir_inode_operations is exactly the same as simple_dir_inode_operations. Use that instead.
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: No preemption for nosched contexts (Christoph Hellwig, 2007-07-03)
  And last but not least we need to make sure the scheduler tick never preempts a nosched context.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: Catch nosched contexts in spu_deactivate (Christoph Hellwig, 2007-07-03)
  spu_deactivate should never be called for nosched contexts. Put in a check so we can print a stacktrace and exit early in case it happens erroneously.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: fix cpu/node binding (Christoph Hellwig, 2007-07-03)
  Add a cpus_allowed field to struct spu_context so that we always use the cpu mask of the owning thread instead of that of whatever thread happens to call into the scheduler. Also use this information in grab_runnable_context to avoid spurious wakeups.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: Update scheduling parameters on every spu_run (Christoph Hellwig, 2007-07-03)
  Update scheduling information on every spu_run to allow for setting threads to realtime priority just before running them. This requires some slightly ugly code in spufs_run_spu because we can just update the information unlocked if the spu is not runnable, but we need to acquire the active_mutex when it is runnable to protect against find_victim. This locking scheme requires open-coding spu_acquire_runnable in spufs_run_spu, which actually is a nice cleanup all by itself.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: Print out scheduling tunables with DEBUG (Jeremy Kerr, 2007-07-03)
  Print out a few scheduler tuning parameters when we've compiled with DEBUG defined.
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: Fix timeslice calculations (Jeremy Kerr, 2007-07-03)
  The current timeslice code mixes 'jiffies' up with 'spesched ticks'. This change correctly defines the number of time slices each SPE context is given, and clarifies the comment. This brings the default timeslice for SPE contexts into a reasonable range.
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: Dynamic timeslicing for SCHED_OTHER (Christoph Hellwig, 2007-07-03)
  Enable preemptive scheduling for non-RT contexts. We use the same algorithms as the CPU scheduler to calculate the time slice length, and for now we also use the same timeslice length as the CPU scheduler. This might not be enough for good performance and can be changed after some benchmarking. Note that currently we do not boost the priority for contexts waiting on the runqueue for a long time, so contexts with a higher nice value could starve ones with less priority. This could easily be fixed once the rework of the spu lists that Luke and I discussed is done.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spusched: Switch from workqueues to kthread + timer tick (Christoph Hellwig, 2007-07-03)
  Replace the scheduler workqueues, which complicated things a lot, with a dedicated spu scheduler thread that gets woken by a traditional scheduler tick. By default this scheduler tick runs once every 10 cpu ticks (i.e. at HZ / 10). Currently the tick is not disabled when we have fewer contexts than available spus, but I will implement this later.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
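  A minimal sketch of the kthread + tick structure being described, with assumed names (spusched_thread, SPUSCHED_TICK); the real implementation of course does more than sleep in a loop:

      #include <linux/kthread.h>
      #include <linux/sched.h>

      #define SPUSCHED_TICK   10      /* one spu scheduler tick per 10 cpu ticks */

      static struct task_struct *spusched_task;

      static int spusched_thread(void *unused)
      {
          while (!kthread_should_stop()) {
              set_current_state(TASK_INTERRUPTIBLE);
              schedule_timeout(SPUSCHED_TICK);
              /* walk the running contexts and preempt those whose
               * timeslice has expired ... */
          }
          return 0;
      }

      /* started from the spufs scheduler init code */
      spusched_task = kthread_run(spusched_thread, NULL, "spusched");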
* [POWERPC] spufs: Add bit definition (Sebastian Siewior, 2007-07-03)
  Add a bit define from the book, and replace one hex number with a symbol, for clarity.
  Signed-off-by: Sebastian Siewior <bigeasy@linux.vnet.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: fix building spufs/spu_save_dump.h (Sebastian Siewior, 2007-07-03)
  Currently it fails with gcc from sdk 2.1 because of a spec change [1]. Maybe we should start using the definitions from spu_mfcio.h.
  [1] http://gcc.gnu.org/ml/gcc-patches/2006-11/msg01598.html
  Signed-off-by: Sebastian Siewior <bigeasy@linux.vnet.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] cell: Add spu shutdown method (Geoff Levand, 2007-06-28)
  Add a shutdown method to spu_sysdev_class to allow proper spu resource cleanup on system shutdown. This is needed to support kexec on the PS3 platform.
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Add a "capabilities" file to spu contexts (Benjamin Herrenschmidt, 2007-06-14)
  This adds a "capabilities" file to spu contexts consisting of a list of linefeed-separated capability names. The currently exposed capabilities are "sched" (the context is schedulable) and "step" (the context supports single stepping).
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Acked-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Add support for SPU single stepping (Benjamin Herrenschmidt, 2007-06-14)
  This patch adds support for SPU single stepping. The single step bit is set in the SPU when the current process is being single-stepped via ptrace. The spu then stops and returns with a specific flag set and the syscall exit code will generate the SIGTRAP.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Acked-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Rewrite IO allocation & mapping on powerpc64 (Benjamin Herrenschmidt, 2007-06-14)
  This rewrites pretty much from scratch the handling of MMIO and PIO space allocations on powerpc64. The main goals are:
    - Get rid of imalloc and use more common code where possible
    - Simplify the current mess so that PIO space is allocated and mapped in a single place for PCI bridges
    - Handle allocation constraints of PIO for all bridges including hot plugged ones within the 2GB space reserved for IO ports, so that devices on hotplugged busses will now work with drivers that assume IO ports fit in an int.
    - Cleanup and separate tracking of the ISA space in the reserved low 64K of IO space. No ISA -> Nothing mapped there.
  I booted a cell blade with IDE on PIO and MMIO and a dual G5 so far, that's it :-)
  With this patch, all allocations are done using the code in mm/vmalloc.c, though we use the low level __get_vm_area with explicit start/stop constraints in order to manage separate areas for vmalloc/vmap, ioremap, and PCI IOs. This greatly simplifies a lot of things, as you can see in the diffstat of that patch :-)
  A new pair of functions pcibios_map/unmap_io_space() now replace all of the previous code that used to manipulate PCI IOs space. The allocation is done at mapping time, which is now called from scan_phb's, just before the devices are probed (instead of after, which is by itself a bug fix). The only other caller is the PCI hotplug code for hot adding PCI-PCI bridges (slots).
  imalloc is gone, as is the "sub-allocation" thing, but I do believe that hotplug should still work in the sense that the space allocation is always done by the PHB, but if you unmap a child bus of this PHB (which seems to be possible), then the code should properly tear down all the HPTE mappings for that area of the PHB allocated IO space.
  I now always reserve the first 64K of IO space for the bridge with the ISA bus on it. I have moved the code for tracking ISA in a separate file which should also make it smarter if we ever are capable of hot unplugging or re-plugging an ISA bridge.
  This should have a side effect on platforms like powermac where VGA IOs will no longer work. This is done on purpose though as they would have worked semi-randomly before. The idea at this point is to isolate drivers that might need to access those and fix them by providing a proper function to obtain an offset to the legacy IOs of a given bus.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
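  The key mechanism is allocating the PIO windows out of a dedicated virtual range with the low level __get_vm_area() instead of imalloc. A simplified sketch of that idea; the range constants and the exact hose bookkeeping are assumptions, and the real pcibios_map_io_space() does considerably more:

      /* carve this bridge's IO window out of the reserved PHB IO range */
      struct vm_struct *area;

      area = __get_vm_area(size, VM_IOREMAP, PHB_IO_BASE, PHB_IO_END);
      if (area == NULL)
          return -ENOMEM;

      /* map the bus's IO aperture at area->addr with ioremap page
       * protections and remember it as the hose's virtual IO base */
      hose->io_base_virt = area->addr;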
* [POWERPC] Fix PMI breakage in cbe_cpufreq driver (Christian Krafft, 2007-07-01)
  The recent change to cell_defconfig to enable cpufreq on Cell exposed the fact that the cbe_cpufreq driver currently needs the PMI interface code to compile, but Kconfig doesn't make sure that the PMI interface code gets built if cbe_cpufreq is enabled. In fact cbe_cpufreq can work without PMI, so this ifdefs out the code that deals with PMI. This is a minimal solution for 2.6.22; a more comprehensive solution will be merged for 2.6.23.
  Signed-off-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Fix error handling in spufs_fill_dir() (Sebastian Siewior, 2007-06-06)
  The error path in spufs_fill_dir() is broken. If d_alloc_name() or spufs_new_file() fails, spufs_prune_dir() is getting called. At this time dir->inode is not set and a NULL pointer is dereferenced by mutex_lock(). This bugfix replaces spufs_prune_dir() with a shorter version that does not touch dir->inode but simply removes all children.
  Signed-off-by: Sebastian Siewior <bigeasy@linux.vnet.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Don't yield nosched context (Christoph Hellwig, 2007-06-06)
  Nosched contexts should never be scheduled out, thus we must never deactivate them in spu_yield.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] cbe_cpufreq: Limit frequency via cpufreq notifier chain (Thomas Renninger, 2007-06-06)
  ... and get rid of the cpufreq_set_policy call that caused a build failure due to interfering commits.
  Signed-off-by: Thomas Renninger <trenn@suse.de>
  Signed-off-by: Christian Krafft <krafft@de.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs scheduler: Fix wakeup races (Christoph Hellwig, 2007-06-06)
  Fix the race between checking for contexts on the runqueue and actually waking them in spu_deactivate and spu_yield. The guts of spu_reschedule are split into a new helper called grab_runnable_context, which checks whether there is a runnable thread below a specified priority and, if so, removes it from the runqueue and uses it. This function is used by the new __spu_deactivate helper shared by preemption and spu_yield to grab a new context before deactivating the old one.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
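  A simplified sketch of the grab_runnable_context idea: find and dequeue the best runnable context below a given priority in one step, under the runqueue lock, so nothing can race between the check and the removal. The field and helper names (spu_prio, runq_lock, __spu_del_from_rq) follow the spufs scheduler's style but are assumptions here:

      static struct spu_context *grab_runnable_context(int prio)
      {
          struct spu_context *ctx = NULL;
          int best;

          spin_lock(&spu_prio->runq_lock);
          best = sched_find_first_bit(spu_prio->bitmap);
          if (best < prio) {
              /* check and dequeue atomically under the lock */
              ctx = list_entry(spu_prio->runq[best].next,
                               struct spu_context, rq);
              __spu_del_from_rq(ctx, best);
          }
          spin_unlock(&spu_prio->runq_lock);
          return ctx;
      }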
* [POWERPC] spufs: Synchronize pte invalidation vs ps close (Christoph Hellwig, 2007-06-06)
  Make sure the mapping_lock also protects access to the various address_space pointers used for tearing down the ptes on a spu context switch. Because unmap_mapping_range can sleep we need to turn mapping_lock from a spinlock into a sleeping mutex.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Free mm if spufs_fill_dir() failed (Sebastian Siewior, 2007-06-06)
  In case spufs_fill_dir() fails only put_spu_context() gets called for cleanup and the acquired mm_struct never gets freed.
  Signed-off-by: Sebastian Siewior <bigeasy@linux.vnet.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Fix gang destroy leaks (Jeremy Kerr, 2007-06-06)
  Previously, closing a SPE gang that still has contexts would trigger a WARN_ON, and leak the allocated gang. This change fixes the problem by using the gang's reference counts to destroy the gang instead. The gangs will persist until their last reference (be it context or open file handle) is gone. Also, avoid using statements with side-effects in a WARN_ON().
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Hook up spufs_release_mem (Christoph Hellwig, 2007-06-06)
  Currently the mem file doesn't have spufs_mem_release hooked up as its release method, leading to leaks every time it is used.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] spufs: Refuse to load the module when not running on cell (Arnd Bergmann, 2007-06-06)
  As noticed by David Woodhouse, it's currently possible to mount spufs on any machine, which means that it actually will get mounted by fedora. This refuses to load the module on platforms that have no support for SPUs.
  Cc: David Woodhouse <dwmw2@infradead.org>
  Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
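  A minimal sketch of the idea, assuming the module init function bails out when the platform registered no SPU management ops (the exact condition and function names are assumptions):

      static int __init spufs_init(void)
      {
          /* no SPU support on this platform: refuse to load */
          if (!spu_management_ops)
              return -ENODEV;

          /* ... normal cache and filesystem registration follows ... */
          return register_filesystem(&spufs_type);
      }
      module_init(spufs_init);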
* Remove SLAB_CTOR_CONSTRUCTOR (Christoph Lameter, 2007-05-17)
  SLAB_CTOR_CONSTRUCTOR is always specified. No point in checking it.
  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Cc: David Howells <dhowells@redhat.com>
  Cc: Jens Axboe <jens.axboe@oracle.com>
  Cc: Steven French <sfrench@us.ibm.com>
  Cc: Michael Halcrow <mhalcrow@us.ibm.com>
  Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
  Cc: Miklos Szeredi <miklos@szeredi.hu>
  Cc: Steven Whitehouse <swhiteho@redhat.com>
  Cc: Roman Zippel <zippel@linux-m68k.org>
  Cc: David Woodhouse <dwmw2@infradead.org>
  Cc: Dave Kleikamp <shaggy@austin.ibm.com>
  Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
  Cc: "J. Bruce Fields" <bfields@fieldses.org>
  Cc: Anton Altaparmakov <aia21@cantab.net>
  Cc: Mark Fasheh <mark.fasheh@oracle.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Christoph Hellwig <hch@lst.de>
  Cc: Jan Kara <jack@ucw.cz>
  Cc: David Chinner <dgc@sgi.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
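  For spufs this means the constructor of its inode cache loses one conditional. An illustrative before/after; the constructor name and struct are modelled on spufs's inode cache but not copied from the patch:

      /* before: every slab constructor tested the flag */
      static void spufs_init_once(void *p, struct kmem_cache *cachep,
                                  unsigned long flags)
      {
          struct spufs_inode_info *ei = p;

          if (flags & SLAB_CTOR_CONSTRUCTOR)
              inode_init_once(&ei->vfs_inode);
      }

      /* after: the flag is always set, so the check is simply dropped */
      static void spufs_init_once(void *p, struct kmem_cache *cachep,
                                  unsigned long flags)
      {
          struct spufs_inode_info *ei = p;

          inode_init_once(&ei->vfs_inode);
      }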
* powerpc: fixup hard_irq_disable semantics (Benjamin Herrenschmidt, 2007-05-11)
  This patch renames the raw hard_irq_{enable,disable} into __hard_irq_{enable,disable} and introduces a higher level hard_irq_disable() function that can be used by any code to enforce that IRQs are fully disabled, not only lazily disabled. The difference from the __ versions is that it also updates some per-processor fields so that the kernel keeps track and properly re-enables the interrupts in the next local_irq_enable(). This prepares powerpc for my next patch that introduces hard_irq_disable() generically.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Rusty Russell <rusty@rustcorp.com.au>
  Cc: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
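  On powerpc, with its lazy interrupt disabling, the new helper has to both mask interrupts in hardware and update the lazy-disable bookkeeping. Roughly, as a sketch of the described semantics rather than the exact implementation:

      static inline void hard_irq_disable(void)
      {
          __hard_irq_disable();           /* really mask in the hardware */
          /* record the state so local_irq_enable() knows it must
           * re-enable interrupts at the hardware level as well */
          get_paca()->soft_enabled = 0;
          get_paca()->hard_enabled = 0;
      }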