path: root/arch/powerpc/kernel/iommu.c
Commit message (Author, Date)
* powerpc: Remove IOMMU_VMERGE config option (FUJITA Tomonori, 2010-03-19)
  The description says:

      Cause IO segments sent to a device for DMA to be merged virtually by the IOMMU when they happen to have been allocated contiguously. This doesn't add pressure to the IOMMU allocator. However, some drivers don't support getting large merged segments coming back from *_map_sg(). Most drivers don't have this problem; it is safe to say Y here.

  It's out of date. Long ago, drivers didn't have a way to tell IOMMUs about their segment length limit (that is, the maximum segment length that they can handle). So IOMMUs merged as many segments as possible and gave overly large segments to drivers.

  dma_get_max_seg_size() was introduced to solve the above problem. Device drivers can use the API to tell the IOMMU about the maximum segment length that they can handle. In addition, the default limit (64K) should be safe for everyone.

  So this config option seems to be unnecessary.

  Note that this config option just enables users to disable the virtual merging by default. Users can still disable the virtual merging via the boot parameter.

  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
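  As an illustration of the mechanism this relies on, a minimal sketch (assumed driver code, not taken from the commit) of how a device driver advertises its segment-length limit so the IOMMU's virtual merging never exceeds it:

      #include <linux/dma-mapping.h>
      #include <linux/pci.h>

      /* Hypothetical probe helper: the PCI core supplies dev.dma_parms, so this
       * simply records the hardware's per-segment limit, which the IOMMU map_sg
       * path later reads back via dma_get_max_seg_size(). */
      static void example_set_seg_limit(struct pci_dev *pdev)
      {
              dma_set_max_seg_size(&pdev->dev, 65536);   /* 64K per merged segment */
      }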
* iommu-helper: use bitmap library (Akinobu Mita, 2009-12-16)
  Use the bitmap library and kill some unused iommu helper functions.

  1. s/iommu_area_free/bitmap_clear/
  2. s/iommu_area_reserve/bitmap_set/
  3. Use bitmap_find_next_zero_area instead of find_next_zero_area. This cannot be a simple substitution because find_next_zero_area doesn't check the last bit of the limit in the bitmap.
  4. Remove iommu_area_free, iommu_area_reserve, and find_next_zero_area.

  Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: "H. Peter Anvin" <hpa@zytor.com>
  Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
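  A schematic of the substitutions above (argument names illustrative, not copied from iommu.c); the replacements are the generic lib/bitmap.c API:

      #include <linux/bitmap.h>

      /* 1. was iommu_area_free(map, start, npages)    */
      bitmap_clear(tbl->it_map, free_entry, npages);

      /* 2. was iommu_area_reserve(map, start, npages) */
      bitmap_set(tbl->it_map, entry, npages);

      /* 3. was find_next_zero_area(...)               */
      n = bitmap_find_next_zero_area(tbl->it_map, limit, start, npages, align_mask);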
* powerpc: Change u64/s64 to a long long integer type (Ingo Molnar, 2009-01-12)
  Convert arch/powerpc/ over to long long based u64:

      -#ifdef __powerpc64__
      -# include <asm-generic/int-l64.h>
      -#else
      -# include <asm-generic/int-ll64.h>
      -#endif
      +#include <asm-generic/int-ll64.h>

  This will avoid recurring spurious warnings in core kernel code that come up when people test on their own hardware (i.e. x86 in ~98% of the cases). This is what x86 uses and it generally helps keep 64-bit code 32-bit clean too.

  [Adjusted to not impact user mode (from paulus) - sfr]

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Update remaining dma_mapping_ops to use map/unmap_page (Mark Nelson, 2008-10-31)
  After the merge of the 32 and 64bit DMA code, dma_direct_ops lost their map/unmap_single() functions but gained map/unmap_page(). This caused a problem for Cell because Cell's dma_iommu_fixed_ops called the dma_direct_ops if the fixed linear mapping was to be used, or the iommu ops if the dynamic window was to be used.

  So in order to fix this problem we need to update the 64bit DMA code to use map/unmap_page.

  First, we update the generic IOMMU code so that iommu_map_single() becomes iommu_map_page() and iommu_unmap_single() becomes iommu_unmap_page(). Then we propagate these changes up through all the callers of these two functions and in the process update all the dma_mapping_ops so that they have map/unmap_page rather than map/unmap_single. We can do this because on 64bit there is no HIGHMEM memory, so map/unmap_page ends up performing exactly the same function as map/unmap_single, just taking different arguments.

  This has no effect on drivers because dma_map_single_attrs() just ends up calling the map_page() function of the appropriate dma_mapping_ops, and similarly dma_unmap_single_attrs() calls unmap_page().

  This fixes an oops on Cell blades, which would otherwise oops on boot because they call dma_direct_ops.map_single, which is NULL.

  Signed-off-by: Mark Nelson <markn@au1.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
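  A rough sketch (assumed, not verbatim from the patch) of the equivalence the commit relies on: with no highmem on 64-bit, a kernel virtual address decomposes into page plus offset, so a map_single-style call can be expressed entirely in terms of map_page:

      /* get_dma_ops(), virt_to_page() and offset_in_page() as in the powerpc
       * headers of that era; attrs may be NULL. */
      static dma_addr_t example_map_single(struct device *dev, void *ptr, size_t size,
                                           enum dma_data_direction dir,
                                           struct dma_attrs *attrs)
      {
              return get_dma_ops(dev)->map_page(dev, virt_to_page(ptr),
                                                offset_in_page(ptr), size, dir, attrs);
      }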
* powerpc: Use is_kdump_kernel() (Milton Miller, 2008-10-31)
  linux/crash_dump.h defines is_kdump_kernel() to be used by code that needs to know if the previous kernel crashed instead of a (clean) boot or reboot. This updates the just-added powerpc code to use it.

  This is needed for the next commit, which will remove __kdump_flag.

  Signed-off-by: Milton Miller <miltonm@bga.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc: Support for relocatable kdump kernel (Mohan Kumar M, 2008-10-22)
  This adds relocatable kernel support for kdump. With this one can use the same regular kernel to capture the kdump.

  A signature (0xfeed1234) is passed in r6 from panic code to the next kernel through kexec_sequence and purgatory code. The signature is used to differentiate between kdump and non-kdump kernels. The purgatory code compares the signature and sets the __kdump_flag in head_64.S. During boot up, kernel code checks __kdump_flag and if it is set, the kernel will behave as a relocatable kdump kernel. This kernel will boot at the address where it was loaded by kexec-tools, i.e. at the address reserved through the crashkernel boot parameter.

  CONFIG_CRASH_DUMP depends on the CONFIG_RELOCATABLE option to build the kdump kernel as relocatable. So the same kernel can be used as production and kdump kernel.

  This patch incorporates the changes suggested by Paul Mackerras to avoid GOT use and to avoid two copies of the code.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: use iommu_num_pages function in IOMMU code (Joerg Roedel, 2008-10-16)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Muli Ben-Yehuda <muli@il.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* powerpc: rename iommu_num_pages function to iommu_nr_pages (Joerg Roedel, 2008-10-16)
  This is a preparation patch for introducing a generic iommu_num_pages function.

  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Muli Ben-Yehuda <muli@il.ibm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
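  For context, the generic helper this rename prepares for boils down to a page-count calculation over an arbitrary IO page size; a sketch of that shape (modelled on the later lib/iommu-helper code, shown here only as an illustration):

      /* Number of io_page_size pages needed to cover [addr, addr + len). */
      static unsigned long example_iommu_num_pages(unsigned long addr, unsigned long len,
                                                   unsigned long io_page_size)
      {
              unsigned long size = (addr & (io_page_size - 1)) + len;

              return DIV_ROUND_UP(size, io_page_size);
      }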
* powerpc/pseries: iommu enablement for CMO (Robert Jennings, 2008-07-25)
  To support Cooperative Memory Overcommitment (CMO), we need to check for failure from some of the tce hcalls.

  These changes for the pseries platform affect the powerpc architecture; patches for the other affected platforms are included in this patch.

  pSeries platform IOMMU code changes:
  * platform TCE functions must handle H_NOT_ENOUGH_RESOURCES errors and return an error.

  Architecture IOMMU code changes:
  * Calls to ppc_md.tce_build need to check return values and return DMA_MAPPING_ERROR for transient errors.

  Architecture changes:
  * struct machdep_calls for the tce_build*_pSeriesLP functions need to change to indicate failure.
  * all other platforms will need updates to iommu functions to match the new calling semantics; they will return 0 on success. The other platforms' default configs have been built, but no further testing was performed.

  Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
  Acked-by: Olof Johansson <olof@lixom.net>
  Acked-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/dma: Use the struct dma_attrs in iommu code (Mark Nelson, 2008-07-21)
  Update iommu_alloc() to take the struct dma_attrs and pass them on to tce_build(). This change propagates down to the tce_build functions of all the platforms.

  Signed-off-by: Mark Nelson <markn@au1.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/dma: implement new dma_*map*_attrs() interfaces (Mark Nelson, 2008-07-09)
  Update powerpc to use the new dma_*map*_attrs() interfaces. In doing so update struct dma_mapping_ops to accept a struct dma_attrs and propagate these changes through to all users of the code (generic IOMMU and the 64bit DMA code, and the iseries and ps3 platform code).

  The old dma_*map_*() interfaces are reimplemented as calls to the corresponding new interfaces.

  Signed-off-by: Mark Nelson <markn@au1.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
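  The wrapper pattern described in the last sentence looks roughly like this (a sketch of the general idea, not the exact powerpc header of the time):

      /* Old entry point becomes a thin wrapper: same semantics, NULL attributes. */
      static inline dma_addr_t dma_map_single(struct device *dev, void *ptr,
                                              size_t size, enum dma_data_direction dir)
      {
              return dma_map_single_attrs(dev, ptr, size, dir, NULL);
      }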
* powerpc/dma: Add struct iommu_table argument to iommu_map_sg() (Mark Nelson, 2008-07-09)
  Make iommu_map_sg take a struct iommu_table. It did so before commit 740c3ce66700640a6e6136ff679b067e92125794 (iommu sg merging: ppc: make iommu respect the segment size limits).

  This stops the function from looking in archdata.dma_data for the iommu table, because in the future it will be called with a device that has no table there.

  This also has the nice side effect of making iommu_map_sg() match the other map functions.

  Signed-off-by: Mark Nelson <markn@au1.ibm.com>
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* [POWERPC] Replace remaining __FUNCTION__ occurrences (Harvey Harrison, 2008-04-01)
  __FUNCTION__ is gcc-specific, use __func__.

  Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* iommu sg: powerpc: remove DMA 4GB boundary protection (FUJITA Tomonori, 2008-02-05)
  Previously, during initialization of the IOMMU tables, the last entry at each 4GB boundary was marked as used, since there are many adapters which cannot handle DMAing across any 4GB boundary.

  The IOMMU doesn't allocate a memory area spanning an LLD's segment boundary anymore. The segment boundary of devices is set to 4GB by default. So we can remove the 4GB boundary protection now.

  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Jeff Garzik <jeff@garzik.org>
  Cc: James Bottomley <James.Bottomley@steeleye.com>
  Cc: Jens Axboe <jens.axboe@oracle.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* iommu sg: powerpc: convert iommu to use the IOMMU helper (FUJITA Tomonori, 2008-02-05)
  This patch converts PPC's IOMMU to use the IOMMU helper functions. The IOMMU doesn't allocate a memory area spanning an LLD's segment boundary anymore.

  iseries_hv_alloc and iseries_hv_map don't have a proper device struct, so a 4GB boundary is used for them.

  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Jeff Garzik <jeff@garzik.org>
  Cc: James Bottomley <James.Bottomley@steeleye.com>
  Cc: Jens Axboe <jens.axboe@oracle.com>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* iommu sg merging: ppc: make iommu respect the segment size limits (FUJITA Tomonori, 2008-02-05)
  This patch makes iommu respect segment size limits when merging sg lists.

  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Jeff Garzik <jeff@garzik.org>
  Cc: James Bottomley <James.Bottomley@steeleye.com>
  Acked-by: Jens Axboe <jens.axboe@oracle.com>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
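  A schematic of the merge test this implies inside iommu_map_sg() (names simplified, not the exact kernel code): an entry is only folded into the previous DMA segment if the addresses are contiguous and the combined length stays within the device's advertised limit:

      /* prev_* describe the segment built so far; slen is the next entry's length. */
      if (!novmerge &&
          prev_dma_addr + prev_dma_len == dma_addr &&
          prev_dma_len + slen <= dma_get_max_seg_size(dev)) {
              prev_dma_len += slen;           /* virtual merge */
      } else {
              /* start a new scatterlist output segment */
      }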
* Merge branch 'linux-2.6' (Paul Mackerras, 2008-01-23)
|\
| * [POWERPC] Workaround for iommu page alignment (Benjamin Herrenschmidt, 2008-01-14)
  Commit 5d2efba64b231a1733c4048d1708d77e07f26426 changed our iommu code so that it always uses an iommu page size of 4kB. That means that with our current code, drivers may do a dma_map_sg() of a 64kB page and obtain a dma_addr_t that is only 4k aligned.

  This works fine in most cases except for some infiniband HW, it seems, where they tell the HW about the page size and it ignores the low bits of the DMA address.

  This works around it by making our IOMMU code enforce a PAGE_SIZE alignment for mappings of objects that are page aligned in the first place and whose size is larger than or equal to a page.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
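  The enforced alignment can be sketched as follows (illustrative, close to but not necessarily identical to the actual check): when a 64K-page kernel maps a buffer that starts on a page boundary and is at least a page long, the allocator is asked for an alignment order that preserves the low bits of the address:

      unsigned int align = 0;

      /* IOMMU pages stay 4K even when PAGE_SIZE is 64K. */
      if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && size >= PAGE_SIZE &&
          ((unsigned long)vaddr & ~PAGE_MASK) == 0)
              align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;    /* e.g. 16 - 12 = 4 */

      /* 'align' is then passed to the IOMMU range allocator as an alignment order. */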
* | [POWERPC] iommu_free_table doesn't need the device_node (Stephen Rothwell, 2007-12-10)
|/
  It only needs the iommu_table address. It also makes use of the node name to print error messages. So just pass it the things it needs.

  This reduces the places that know about the pci_dn by one.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* Update arch/ to use sg helpers (Jens Axboe, 2007-10-22)
  Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* PPC: sg chaining support (Jens Axboe, 2007-10-16)
  This updates the ppc iommu/pci dma mappers to sg chaining. Includes further fixes from FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>.

  Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* [POWERPC] Clean out a bunch of duplicate includes (Jesper Juhl, 2007-08-16)
  This removes several duplicate includes from arch/powerpc/.

  Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
  Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* Revert "[POWERPC] DMA 4GB boundary protection"Paul Mackerras2007-04-26
| | | | | This reverts commit 618d3adc351a24c4c48437c767befb88ca2d199d, because it is superseded by 569975591c5530fdc9c7a3c45122e5e46f075a74.
* [POWERPC] DMA 4GB boundary protection (Jake Moilanen, 2007-04-12)
  There are many adapters which cannot handle DMAing across any 4 GB boundary, for instance the latest Emulex adapters.

  This normally is not an issue, as firmware gives dma-windows under 4 GB. However, some of the new System-P boxes have dma-windows above 4 GB, and this presents a problem.

  During initialization of the IOMMU tables, the last entry at each 4GB boundary is marked as used. Thus no mappings can cross the boundary. If a table ends at a 4GB boundary, the entry is not marked as used.

  A boot option to remove this 4GB protection is given with protect4gb=off. This exposes the potential issue for driver and hardware development purposes.

  Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
  Acked-by: Olof Johansson <olof@lixom.net>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
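  The reservation can be sketched like this (an illustration of the idea, not the literal patch): walk the table's allocation bitmap and pre-set the bit for the last IOMMU page before each 4GB boundary, so the allocator never hands out a range that crosses one:

      /* Entries per 4GB of DMA space at the IOMMU page size. */
      unsigned long entries_per_4g = 0x100000000UL >> IOMMU_PAGE_SHIFT;
      unsigned long index;

      /* Stop before the last entry so a table ending exactly on a boundary
       * does not lose its final page. */
      for (index = entries_per_4g - 1; index < tbl->it_size - 1; index += entries_per_4g)
              __set_bit(index, tbl->it_map);   /* never given to a device */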
* [POWERPC] DMA 4GB boundary protection (Jake Moilanen, 2007-03-08)
  There are many adapters which cannot handle DMAing across any 4 GB boundary, for instance the latest Emulex adapters.

  This normally is not an issue, as firmware gives us dma-windows under 4 GB. However, some of the new System-P boxes have dma-windows above 4 GB, and this presents a problem.

  I propose fixing it in the IOMMU allocation instead of making each driver protect against it, as it is more efficient and won't require changing every driver which has not considered this issue.

  This patch checks to see if the mapping spans a 4 gig boundary, and if it does, retries the allocation. It tries the next allocation at the start of the crossed 4 gig boundary.

  Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Refactor 64 bits DMA operations (Benjamin Herrenschmidt, 2006-12-04)
  This patch completely refactors DMA operations for 64-bit powerpc. 32-bit is untouched for now.

  We use the new dev_archdata structure to add the dma operations pointer and associated data to struct device. While at it, we also add the OF node pointer and NUMA node. In the future, we might want to look into merging that with pci_dn as well.

  The old vio, pci-iommu and pci-direct DMA ops are gone. They are now replaced by a set of generic iommu and direct DMA ops (not PCI specific) that can be used by bus types. The toplevel implementation is now inline.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
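  The per-device hook described here amounts to an arch-specific blob hung off struct device; roughly (a sketch of the shape, field layout approximate rather than copied from the commit):

      /* powerpc view of the per-device data the DMA ops consult. */
      struct dev_archdata {
              struct device_node      *of_node;   /* OF node, if any */
              int                     numa_node;  /* for node-local allocations */
              struct dma_mapping_ops  *dma_ops;   /* iommu or direct ops */
              void                    *dma_data;  /* e.g. the iommu_table */
      };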
* [POWERPC] Use 4kB iommu pages even on 64kB-page systems (Linas Vepstas, 2006-10-31)
  The 10 Gigabit ethernet device drivers appear to be able to chew up all 256MB of TCE mappings on pSeries systems, as evidenced by numerous error messages:

      iommu_alloc failed, tbl c0000000010d5c48 vaddr c0000000d875eff0 npages 1

  Some experimentation indicates that this is essentially because one 1500 byte ethernet MTU gets mapped as a 64K DMA region when the large 64K pages are enabled. Thus, it doesn't take much to exhaust all of the available DMA mappings for a high-speed card.

  This patch changes the iommu allocator to work with its own unique, distinct page size. Although the patch is long, it's actually quite simple: it just #defines a distinct IOMMU_PAGE_SIZE and then uses this in all the places that matter.

  As a side effect, it also dramatically improves network performance on platforms with H-calls on iommu translation inserts/removes (since we no longer call it 16 times for a 1500 byte packet when the iommu HW is still 4k).

  In the future, we might want to make the IOMMU_PAGE_SIZE a variable in the iommu_table instance, thus allowing support for different HW page sizes in the iommu itself.

  Signed-off-by: Linas Vepstas <linas@austin.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Acked-by: Olof Johansson <olof@lixom.net>
  Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
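  The distinct page size the commit introduces is a small set of constants along the following lines (an approximation of the header additions, not a verbatim copy):

      #define IOMMU_PAGE_SHIFT        12
      #define IOMMU_PAGE_SIZE         (1UL << IOMMU_PAGE_SHIFT)          /* 4kB */
      #define IOMMU_PAGE_MASK         (~(IOMMU_PAGE_SIZE - 1))
      #define IOMMU_PAGE_ALIGN(addr)  ALIGN(addr, IOMMU_PAGE_SIZE)

      /* The allocator, the tce_build hooks and the DMA address arithmetic then
       * use these instead of PAGE_SHIFT/PAGE_SIZE, which may be 64kB here. */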
* [POWERPC] Fix harmless typo (Nick Piggin, 2006-10-06)
  Fix a typo. Noticed by the unlikely profiler.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30)
  Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [POWERPC] kdump: Reserve the existing TCE mappings left by the first kernel (Haren Myneni, 2006-06-27)
  During kdump boot, we noticed that some machines checkstop on a DMA protection fault caused by ongoing DMA left over from the first kernel.

  Instead of initializing TCE entries in iommu_init() for the kdump boot, this patch fixes the issue by walking through each TCE table and checking whether the entries are in use by the first kernel. If so, those entries are reserved by setting the corresponding bit in tbl->it_map, so that they will not be available to the kdump boot.

  However, it is possible that all TCE entries could be used up due to a driver bug that does continuous mapping. My observation is that around 1700 TCE entries are used on some systems (e.g. P4) at some point during kdump boot and saving the dump (either writing it to disk or sending it to a remote machine). Hence, this patch makes sure that a minimum of 2048 entries will be available so that the kdump boot can succeed in such cases.

  Signed-off-by: Haren Myneni <haren@us.ibm.com>
  Acked-by: Olof Johansson <olof@lixom.net>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
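  The reservation walk can be sketched as follows (illustrative only; hook and constant names are approximations): read each TCE back through the platform hook and mark live entries as allocated, then guarantee a minimum free pool for the kdump kernel's own mappings:

      unsigned long index, used = 0;

      for (index = 0; index < tbl->it_size; index++) {
              /* non-zero means the first kernel still had this page mapped */
              if (ppc_md.tce_get(tbl, index + tbl->it_offset)) {
                      __set_bit(index, tbl->it_map);
                      used++;
              }
      }

      if (tbl->it_size - used < 2048 /* minimum free entries for kdump */) {
              /* release enough reserved entries (and their TCEs) to reach the minimum */
      }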
* [POWERPC] node local IOMMU tables (Anton Blanchard, 2006-06-15)
  Allocate IOMMU tables local to the relevant node.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Acked-by: Olof Johansson <olof@lixom.net>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc: Fix bug in iommu_alloc_coherent causing hang during boot (Paul Mackerras, 2006-06-10)
  In commit 8eb6c6e3b9c8bfed3d75536ab142d7694627c2e5, Christoph Hellwig made iommu_alloc_coherent able to do node-local allocations, but unfortunately got the order of the arguments to alloc_pages_node wrong. This fixes it.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
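  For reference, alloc_pages_node() takes the node id first, so the fix amounts to swapping the first two arguments (sketch; variable names are illustrative):

      /* wrong: page = alloc_pages_node(flag, node, order);  */
      page = alloc_pages_node(node, flag, order);   /* nid, gfp_mask, order */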
* [PATCH] powerpc: node-aware dma allocations (Christoph Hellwig, 2006-06-09)
  Make sure dma_alloc_coherent allocates memory from the local node. This is important on Cell where we avoid going through the slow cpu interconnect.

  Note: I could only test this patch on Cell, it should be verified on some pseries machine by those that have the hardware.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] powerpc: IOMMU support for honoring dma_mask (Olof Johansson, 2006-04-21)
  Some devices don't support the full 32-bit DMA address space, which we currently assume. Add the required mask-passing to the IOMMU allocators.

  Signed-off-by: Olof Johansson <olof@lixom.net>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
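  In effect the allocator's search range gets clamped by the device mask; a simplified sketch of that clamping (schematic, not the exact code added by the patch):

      /* Highest IOMMU page number whose DMA address still fits under dma_mask. */
      unsigned long mask_pages = (dma_mask >> IOMMU_PAGE_SHIFT) + 1;
      unsigned long limit = tbl->it_size;          /* default: whole table */

      if (tbl->it_offset + limit > mask_pages)
              limit = mask_pages > tbl->it_offset ? mask_pages - tbl->it_offset : 0;

      /* the free-area search then runs only over [0, limit) of tbl->it_map */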
* [PATCH] powerpc: trivial: modify comments to refer to new location of files (Jon Mason, 2006-02-10)
  This patch removes all self references and fixes references to files in the now-defunct arch/ppc64 tree. I think this accomplishes everything wanted, though there might be a few references I missed.

  Signed-off-by: Jon Mason <jdmason@us.ibm.com>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] powerpc: IOMMU SG paranoia (Jake Moilanen, 2006-02-07)
  This addresses two items which are unlikely to be hit if we trust drivers.

  The first is moving a memory barrier below where the vmerged SG count is passed back, but before the list is set to end. If those instructions were reordered, there could be an issue in iommu_unmap_sg().

  The second is making sure we terminate the list in the failure case of iommu_map_sg(). If a driver does not look at the failure return code, it could pass an ill-formed SG list to iommu_unmap_sg().

  Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
  Acked-by: Olof Johansson <olof@lixom.net>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
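  Both guards can be sketched roughly as follows (schematic, simplified from what the end of iommu_map_sg() looks like; the exact error-marker details are approximations):

      /* Guard 1: publish the merged segment lengths before writing the
       * end-of-list marker, so iommu_unmap_sg() never walks past a terminator
       * that was stored ahead of the length updates. */
      mb();

      /* Guard 2: always terminate the output list, including on the failure
       * path, so an unchecked return code still yields a well-formed list. */
      if (outcount < nelems) {
              outs++;
              outs->dma_length = 0;     /* marks the end of the mapped list */
      }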
* powerpc: Move most remaining ppc64 files over to arch/powerpc (Paul Mackerras, 2005-11-14)
  Also deletes files in arch/ppc64 that are no longer used now that we don't compile with ARCH=ppc64 any more.

  Signed-off-by: Paul Mackerras <paulus@samba.org>