path: root/drivers/pci
* Merge git://git.infradead.org/iommu-2.6 (Linus Torvalds, 2009-07-02)

      * git://git.infradead.org/iommu-2.6: (38 commits)
        intel-iommu: Don't keep freeing page zero in dma_pte_free_pagetable()
        intel-iommu: Introduce first_pte_in_page() to simplify PTE-setting loops
        intel-iommu: Use cmpxchg64_local() for setting PTEs
        intel-iommu: Warn about unmatched unmap requests
        intel-iommu: Kill superfluous mapping_lock
        intel-iommu: Ensure that PTE writes are 64-bit atomic, even on i386
        intel-iommu: Make iommu=pt work on i386 too
        intel-iommu: Performance improvement for dma_pte_free_pagetable()
        intel-iommu: Don't free too much in dma_pte_free_pagetable()
        intel-iommu: dump mappings but don't die on pte already set
        intel-iommu: Combine domain_pfn_mapping() and domain_sg_mapping()
        intel-iommu: Introduce domain_sg_mapping() to speed up intel_map_sg()
        intel-iommu: Simplify __intel_alloc_iova()
        intel-iommu: Performance improvement for domain_pfn_mapping()
        intel-iommu: Performance improvement for dma_pte_clear_range()
        intel-iommu: Clean up iommu_domain_identity_map()
        intel-iommu: Remove last use of PHYSICAL_PAGE_MASK, for reserving PCI BARs
        intel-iommu: Make iommu_flush_iotlb_psi() take pfn as argument
        intel-iommu: Change aligned_size() to aligned_nrpages()
        intel-iommu: Clean up intel_map_sg(), remove domain_page_mapping()
        ...
| * intel-iommu: Don't keep freeing page zero in dma_pte_free_pagetable() (David Woodhouse, 2009-07-02)

      Check dma_pte_present() and only free the page if there _is_ one. Kind
      of surprising that there was no warning about this.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
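      A minimal sketch of the fix's shape (dma_pte_present(), dma_pte_addr()
      and free_pgtable_page() as used elsewhere in intel-iommu.c; the real
      loop context differs):

          /* Only descend and free when this PTE actually points at a
           * child page table; a clear PTE would otherwise hand page
           * zero back to the allocator. */
          if (dma_pte_present(pte)) {
                  free_pgtable_page(phys_to_virt(dma_pte_addr(pte)));
                  dma_clear_pte(pte);
          }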
| * intel-iommu: Introduce first_pte_in_page() to simplify PTE-setting loops (David Woodhouse, 2009-07-02)

      On Wed, 2009-07-01 at 16:59 -0700, Linus Torvalds wrote:
      > I also _really_ hate how you do
      >
      >    (unsigned long)pte >> VTD_PAGE_SHIFT ==
      >    (unsigned long)first_pte >> VTD_PAGE_SHIFT

      Kill this, in favour of just looking to see if the incremented pte
      pointer has 'wrapped' onto the next page -- which means we have to
      check it _after_ incrementing it, not before.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
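      The helper itself is tiny; a sketch of it and of the post-increment
      check it enables (VTD_PAGE_MASK and struct dma_pte as in intel-iommu.c;
      'range_left' is a stand-in for the loop's own bound check):

          static inline int first_pte_in_page(struct dma_pte *pte)
          {
                  /* Page-aligned means the pointer has just wrapped
                   * onto a fresh page of PTEs. */
                  return !((unsigned long)pte & ~VTD_PAGE_MASK);
          }

          do {
                  dma_clear_pte(pte);
                  pte++;
          } while (--range_left && !first_pte_in_page(pte));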
| * intel-iommu: Use cmpxchg64_local() for setting PTEs (David Woodhouse, 2009-07-01)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Warn about unmatched unmap requests (David Woodhouse, 2009-07-01)

      This would have found the bug in i386 pci_unmap_addr() a long time ago.
      We shouldn't just silently return without doing anything.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Kill superfluous mapping_lock (David Woodhouse, 2009-07-01)

      Since we're using cmpxchg64() anyway (because that's the only way to
      do an atomic 64-bit store on i386), we might as well ditch the extra
      locking and just use cmpxchg64() to ensure that we don't add the page
      twice.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
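      A sketch of the lock-free insert (DMA_PTE_READ/WRITE as in the VT-d
      headers; the zero 'empty' value is what makes the race detectable):

          u64 pteval = virt_to_phys(new_page) | DMA_PTE_READ | DMA_PTE_WRITE;

          /* cmpxchg64() returns the previous value: if it was no longer
           * zero, another CPU installed a page table first, so drop ours
           * rather than adding the page twice. */
          if (cmpxchg64(&pte->val, 0ULL, pteval))
                  free_pgtable_page(new_page);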
| * intel-iommu: Ensure that PTE writes are 64-bit atomic, even on i386 (David Woodhouse, 2009-07-01)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Performance improvement for dma_pte_free_pagetable() (David Woodhouse, 2009-06-29)

      As with other functions, batch the CPU data cache flushes and don't
      keep recalculating PTE addresses.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Don't free too much in dma_pte_free_pagetable() (David Woodhouse, 2009-06-29)

      The loop condition was wrong -- we should free a PMD only if its
      _entire_ range is within the range we're intending to clear. The
      early-termination condition was right, but not the loop.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
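      Sketched, the corrected test (level_pfn and level_size are
      illustrative names for the span covered by one page-table page):

          /* Free a page-table page only if the whole span it maps,
           * [level_pfn, level_pfn + level_size - 1], lies inside the
           * range being cleared. */
          if (level_pfn >= start_pfn &&
              level_pfn + level_size - 1 <= last_pfn) {
                  void *child = phys_to_virt(dma_pte_addr(pte));

                  dma_clear_pte(pte);
                  free_pgtable_page(child);
          }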
| * intel-iommu: dump mappings but don't die on pte already set (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Combine domain_pfn_mapping() and domain_sg_mapping() (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Introduce domain_sg_mapping() to speed up intel_map_sg() (David Woodhouse, 2009-06-29)

      Instead of calling domain_pfn_mapping() repeatedly with single or
      small numbers of pages, just pass the sglist in. It can optimise the
      number of cache flushes like domain_pfn_mapping() does, and gives a
      huge speedup for large scatterlists.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
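      The call pattern, roughly (aligned_nrpages() per the commit below;
      the actual PTE writing is elided):

          struct scatterlist *sg;
          unsigned long pfn = iov_pfn;
          int i;

          for_each_sg(sglist, sg, nelems, i) {
                  unsigned long nr = aligned_nrpages(sg->offset, sg->length);

                  sg->dma_address = ((dma_addr_t)pfn << VTD_PAGE_SHIFT)
                                          + sg->offset;
                  sg->dma_length = sg->length;
                  /* PTEs for all nr pages are written in one pass, so
                   * cache flushes batch per page-table page. */
                  pfn += nr;
          }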
| * intel-iommu: Simplify __intel_alloc_iova() (David Woodhouse, 2009-06-29)

      There's no need for the separate iommu_alloc_iova() function, and
      certainly not for it to be global. Remove the underscores while we're
      at it.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Performance improvement for domain_pfn_mapping() (David Woodhouse, 2009-06-29)

      As with dma_pte_clear_range(), don't keep flushing a single PTE at a
      time. And also micro-optimise the setting of PTE values rather than
      using the helper functions to do all the masking.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Performance improvement for dma_pte_clear_range() (David Woodhouse, 2009-06-29)

      It's a bit silly to repeatedly call domain_flush_cache() for each PTE
      individually, as we clear it. Instead, batch them up and flush a whole
      range at a time. We might as well refrain from recalculating the PTE
      address from scratch each time round the loop too.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
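      The reworked loop, approximately (pfn_to_dma_pte() and
      first_pte_in_page() from the neighbouring commits in this series):

          while (start_pfn <= last_pfn) {
                  first_pte = pte = pfn_to_dma_pte(domain, start_pfn);
                  do {
                          dma_clear_pte(pte);
                          start_pfn++;
                          pte++;
                  } while (start_pfn <= last_pfn && !first_pte_in_page(pte));

                  /* One flush covers the whole run of PTEs just cleared. */
                  domain_flush_cache(domain, first_pte,
                                     (void *)pte - (void *)first_pte);
          }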
| * intel-iommu: Clean up iommu_domain_identity_map() (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Remove last use of PHYSICAL_PAGE_MASK, for reserving PCI BARs (David Woodhouse, 2009-06-29)

      This is fairly broken anyway -- it doesn't take hotplug into account.
      We should probably be checking page_is_ram() instead.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Make iommu_flush_iotlb_psi() take pfn as argument (David Woodhouse, 2009-06-29)

      Most of its callers are having to shift for themselves anyway, so we
      might as well do it in iommu_flush_iotlb_psi().

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Change aligned_size() to aligned_nrpages() (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Clean up intel_map_sg(), remove domain_page_mapping() (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Use domain_pfn_mapping() in intel_iommu_map_range() (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Use domain_pfn_mapping() in __intel_map_single() (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Introduce domain_pfn_mapping() (David Woodhouse, 2009-06-29)

      ... and use it in the trivial cases; the other callers want individual
      (and bisectable) attention, since I screwed them up the first time...

      Make the BUG_ON() happen on too-large virtual address rather than
      physical address, too. That's the one we care about.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Clean up address handling in domain_page_mapping() (David Woodhouse, 2009-06-29)

      No more masking and alignment; just use pfns.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Change addr_to_dma_pte() to pfn_to_dma_pte() (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Clean up intel_iommu_unmap_range() (David Woodhouse, 2009-06-29)

      Use unaligned address for domain->max_addr. That algorithm isn't ideal
      anyway -- we should probably just look at the last iova in the tree.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Make dma_pte_free_pagetable() take pfns as argument (David Woodhouse, 2009-06-29)

      With some cleanup of intel_unmap_page(), intel_unmap_sg() and
      vm_domain_exit() to no longer play with 64-bit addresses.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Make dma_pte_free_pagetable() use pfns (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Make dma_pte_clear_range() take pfns as argument (David Woodhouse, 2009-06-29)

      Noting that this is now an _inclusive_ range.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
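      So a caller clearing the bytes [iova, iova + size) now passes the
      first and last pfn, along these lines:

          dma_pte_clear_range(domain, iova >> VTD_PAGE_SHIFT,
                              (iova + size - 1) >> VTD_PAGE_SHIFT);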
| * intel-iommu: Make dma_pte_clear_range() use pfns (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Don't just mask out too-big physical addresses; BUG() instead (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Make dma_pte_clear_one() take pfn not address (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Change dma_addr_level_pte() to dma_pfn_level_pte() (David Woodhouse, 2009-06-29)

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Change address_level_offset() to pfn_level_offset() (David Woodhouse, 2009-06-29)

      We're shifting the inputs for now, but that'll change...

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Change dma_set_pte_addr() to dma_set_pte_pfn() (David Woodhouse, 2009-06-29)

      Add some helpers for converting between VT-d and normal system pfns,
      since system pages can be larger than VT-d pages.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
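      The helpers are simple scaling, along these lines (VT-d pages are
      always 4KiB, so VTD_PAGE_SHIFT <= PAGE_SHIFT):

          static inline unsigned long mm_to_dma_pfn(unsigned long mm_pfn)
          {
                  return mm_pfn << (PAGE_SHIFT - VTD_PAGE_SHIFT);
          }

          static inline unsigned long page_to_dma_pfn(struct page *pg)
          {
                  return mm_to_dma_pfn(page_to_pfn(pg));
          }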
| * intel-iommu: Clean up identity mapping code, remove CONFIG_DMAR_GFX_WA (David Woodhouse, 2009-06-29)

      There's no need for the GFX workaround now we have 'iommu=pt' for the
      cases where people really care about performance. There's no need to
      have a special case for just one type of device.

      This also speeds up the iommu=pt path and reduces memory usage by
      setting up the si_domain _once_ and then using it for all devices,
      rather than giving each device its own private page tables.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Create new iommu_domain_identity_map() function (David Woodhouse, 2009-06-29)

      We'll want to do this to a _domain_ (the si_domain) rather than a PCI
      device.

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * intel-iommu: Only avoid flushing device IOTLB for domain ID 0 in caching mode (Yu Zhao, 2009-06-29)

      In caching mode, domain ID 0 is reserved for non-present to present
      mapping flushes, and the device IOTLB doesn't need to be flushed in
      that case. Previously we were avoiding the flush for domain zero even
      if the IOMMU wasn't in caching mode and domain zero wasn't special.

      Signed-off-by: Yu Zhao <yu.zhao@intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
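      In sketch form, the corrected condition (did is the domain id; the
      surrounding flush code is elided):

          /* Skip the device-IOTLB flush only when the IOMMU is in
           * caching mode AND this is the reserved domain 0 flush;
           * in every other case, flush it too. */
          if (!cap_caching_mode(iommu->cap) || did != 0)
                  iommu_flush_dev_iotlb(domain, addr, mask);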
* | Fix iommu address space allocation (David Woodhouse, 2009-07-01)

      This fixes kernel.org bug #13584. The IOVA code attempted to optimise
      the insertion of new ranges into the rbtree, with the unfortunate
      result that some ranges just didn't get inserted into the tree at all.
      Then those ranges would be handed out more than once, and things kind
      of go downhill from there.

      Introduced after 2.6.25 by ddf02886cbe665d67ca750750196ea5bf524b10b
      ("PCI: iova RB tree setup tweak").

      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Cc: mark gross <mgross@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
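      The safe replacement is the canonical rbtree insertion, always walking
      down from the root rather than from a cached node; roughly (struct
      iova with its pfn_lo/node fields as in linux/iova.h):

          static void iova_insert_rbtree(struct rb_root *root, struct iova *new)
          {
                  struct rb_node **link = &root->rb_node, *parent = NULL;

                  while (*link) {
                          struct iova *this = rb_entry(*link, struct iova, node);

                          parent = *link;
                          if (new->pfn_lo < this->pfn_lo)
                                  link = &(*link)->rb_left;
                          else
                                  link = &(*link)->rb_right;
                  }
                  rb_link_node(&new->node, parent, link);
                  rb_insert_color(&new->node, root);
          }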
* intel-iommu: fix Identity Mapping to be arch independent (Chris Wright, 2009-06-26)

      Drop the e820 scanning and use the existing function for finding valid
      RAM regions to add to the 1:1 mapping.

      Signed-off-by: Chris Wright <chrisw@redhat.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6 (Linus Torvalds, 2009-06-24)

      * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (72 commits)
        asus-laptop: remove EXPERIMENTAL dependency
        asus-laptop: use pr_fmt and pr_<level>
        eeepc-laptop: cpufv updates
        eeepc-laptop: sync eeepc-laptop with asus_acpi
        asus_acpi: Deprecate in favor of asus-laptop
        acpi4asus: update MAINTAINER and KConfig links
        asus-laptop: platform dev as parent for led and backlight
        eeepc-laptop: enable camera by default
        ACPI: Rename ACPI processor device bus ID
        acerhdf: Acer Aspire One fan control
        ACPI: video: DMI workaround broken Acer 7720 BIOS enabling display brightness
        ACPI: run ACPI device hot removal in kacpi_hotplug_wq
        ACPI: Add the reference count to avoid unloading ACPI video bus twice
        ACPI: DMI to disable Vista compatibility on some Sony laptops
        ACPI: fix a deadlock in hotplug case
        Show the physical device node of backlight class device.
        ACPI: pdc init related memory leak with physical CPU hotplug
        ACPI: pci_root: remove unused dev/fn information
        ACPI: pci_root: simplify list traversals
        ACPI: pci_root: use driver data rather than list lookup
        ...
| *---. Merge branches 'acerhdf', 'acpi-pci-bind', 'bjorn-pci-root', 'bugzilla-12904', 'bugzilla-13121', 'bugzilla-13396', 'bugzilla-13533', 'bugzilla-13612', 'c3_lock', 'hid-cleanups', 'misc-2.6.31', 'pdc-leak-fix', 'pnpacpi', 'power_nocheck', 'thinkpad_acpi', 'video' and 'wmi' into release (Len Brown, 2009-06-24)
| | | * | PCI Hotplug: acpiphp: convert to acpi_get_pci_dev (Alexander Chiang, 2009-06-17)

      Now that acpi_get_pci_dev is available, let's use it instead of
      acpi_get_pci_id.

      Signed-off-by: Alex Chiang <achiang@hp.com>
      Acked-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Len Brown <len.brown@intel.com>
| | | * | ACPI: Introduce acpi_is_root_bridge() (Alexander Chiang, 2009-06-17)

      Returns whether an ACPI CA node is a PCI root bridge or not. This API
      is generically useful, and shouldn't just be a hotplug function. The
      implementation becomes much simpler as well.

      Signed-off-by: Alex Chiang <achiang@hp.com>
      Acked-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
* | | | Intel-IOMMU, intr-remap: source-id checking (Weidong Han, 2009-06-23)

      To support domain-isolation usages, the platform hardware must be
      capable of uniquely identifying the requestor (source-id) for each
      interrupt message. Without source-id checking for interrupt remapping,
      a rogue guest/VM with assigned devices can launch interrupt attacks to
      bring down another guest/VM or the VMM itself.

      This patch adds source-id checking for interrupt remapping, and
      thereby really isolates interrupts for guests/VMs with assigned
      devices.

      Because the PCI subsystem is not yet initialized when IOAPIC entries
      are set up, use read_pci_config_byte() to access PCI config space
      directly.

      Signed-off-by: Weidong Han <weidong.han@intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
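      The source-id check reduces to programming three IRTE fields; a
      sketch (the svt/sq/sid bitfields as in struct irte; bus and devfn
      are the requester to validate against):

          static void set_irte_sid(struct irte *irte, unsigned int svt,
                                   unsigned int sq, unsigned int sid)
          {
                  irte->svt = svt;        /* source validation type */
                  irte->sq = sq;          /* source-id qualifier */
                  irte->sid = sid;        /* requester id to match */
          }

          set_irte_sid(irte, SVT_VERIFY_SID_SQ, SQ_ALL_16,
                       (bus << 8) | devfn);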
* | | | Intel-IOMMU, intr-remap: set the whole 128bits of irte when modify/free it (Weidong Han, 2009-06-23)

      An interrupt remapping table entry (irte) is 128 bits. Currently only
      the low 64 bits are set in modify_irte() and free_irte(); whatever is
      written to the high 64 bits -- including the source-id -- is ignored.

      This patch sets the whole 128 bits of the irte when modifying or
      freeing it. The following source-id checking patch depends on this.

      Signed-off-by: Weidong Han <weidong.han@intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
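      Sketch of the full-width update (set_64bit() is the x86 atomic 64-bit
      store; qi_flush_iec() invalidates the interrupt entry cache):

          /* Write both 64-bit halves so the source-id in the high half
           * is no longer silently dropped, then invalidate the IEC. */
          set_64bit((unsigned long *)&irte->low, irte_modified->low);
          set_64bit((unsigned long *)&irte->high, irte_modified->high);
          qi_flush_iec(iommu, index, 0);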
* | | | IOMMU Identity Mapping Support (drivers/pci/intel_iommu.c) (Fenghua Yu, 2009-06-23)

      Identity mapping for the IOMMU defines a single domain in which all
      PCI devices are mapped 1:1 to all usable memory. This reduces
      map/unmap overhead in the DMA APIs and improves IOMMU performance. On
      10Gb network cards, netperf shows no performance degradation compared
      to non-IOMMU performance. The method may lose some of the benefits of
      DMA remapping, such as isolation.

      The patch sets up identity mapping for all PCI devices to all usable
      memory. In the DMA API there is then no overhead to maintain page
      tables, invalidate the iotlb, flush caches, etc. 32-bit DMA devices
      don't use the identity-mapping domain, so that they can still reach
      memory beyond 4GiB through remapping.

      With the kernel option iommu=pt, pass-through is tried first. If
      pass-through is not supported in hardware, or fails for whatever
      reason, the IOMMU falls back to identity mapping.

      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
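      On the map fast path, the effect is that eligible devices skip
      page-table work entirely; schematically (iommu_no_mapping() standing
      in for the identity/pass-through eligibility check):

          /* Devices in the static identity domain get a DMA address
           * equal to the physical address; 32-bit-limited devices keep
           * a real DMA domain so memory above 4GiB can still be mapped
           * into their reachable DMA space. */
          if (iommu_no_mapping(pdev))
                  return paddr;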
* | | Merge git://git.infradead.org/~dwmw2/iommu-2.6.31 (Linus Torvalds, 2009-06-23)

      * git://git.infradead.org/~dwmw2/iommu-2.6.31:
        intel-iommu: Fix one last ia64 build problem in Pass Through Support
        VT-d: support the device IOTLB
        VT-d: cleanup iommu_flush_iotlb_psi and flush_unmaps
        VT-d: add device IOTLB invalidation support
        VT-d: parse ATSR in DMA Remapping Reporting Structure
        PCI: handle Virtual Function ATS enabling
        PCI: support the ATS capability
        intel-iommu: dmar_set_interrupt return error value
        intel-iommu: Tidy up iommu->gcmd handling
        intel-iommu: Fix tiny theoretical race in write-buffer flush.
        intel-iommu: Clean up handling of "caching mode" vs. IOTLB flushing.
        intel-iommu: Clean up handling of "caching mode" vs. context flushing.
        VT-d: fix invalid domain id for KVM context flush
        Fix !CONFIG_DMAR build failure introduced by Intel IOMMU Pass Through Support
        Intel IOMMU Pass Through Support

      Fix up trivial conflicts in drivers/pci/{intel-iommu.c,intr_remapping.c}
| * | | VT-d: support the device IOTLB (Yu Zhao, 2009-05-18)

      Enable the device IOTLB (i.e. ATS) for both the bare-metal and KVM
      environments.

      Signed-off-by: Yu Zhao <yu.zhao@intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
| * | | VT-d: cleanup iommu_flush_iotlb_psi and flush_unmaps (Yu Zhao, 2009-05-18)

      Make iommu_flush_iotlb_psi() and flush_unmaps() more readable.

      Signed-off-by: Yu Zhao <yu.zhao@intel.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>