author    Kevin VanMaren <kvanmaren@lnxi.com>  2006-02-03 15:51:32 -0500
committer Linus Torvalds <torvalds@g5.osdl.org>  2006-02-04 19:43:14 -0500
commit    a1002a48e1af5ff8d02bfe79536e6fce3a0ec369 (patch)
tree      0e6988d51b37185dac2e9b92d86091fb224c75ff /arch
parent    1de6bf33bc4601d856c286ad5c7d515468e24bbb (diff)
[PATCH] x86_64: When allocation of merged SG lists fails in the IOMMU don't merge
[ AK: I redid Kevin's fix to be simpler, but the idea and original analysis of the problem is from Kevin ]

This avoids allocation failures on some SATA systems like Nvidia CK8 when the IOMMU gets fragmented. Modern SATA devices have quite large queues (128 entries) and the FS with ext2/3 is good enough now that it often passes whole 128 page sg lists down to the driver. These require 512K of contiguous free space in the IOMMU aperture to map when merged. When the IOMMU is fragmented this could lead to spurious IO errors due to failing mappings.

The short term fix is to just try to map the SG list again unmerged, page by page - this way fragmentation doesn't matter anymore. The code for that was already there, but it just wasn't enabled for the merge case.

According to Kevin, at least the Nvidia device doesn't seem to benefit from merging much anyway, so the only slowdown is the unnecessary merge attempt.

Kevin plans to implement better fragmentation avoidance in the future, but that wouldn't be 2.6.16 material.

TBD: should add some statistic counters to count how often this really happens.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
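For context, a minimal standalone sketch (illustration only, not kernel code; every name in it is invented) of why a fragmented aperture can reject a single merged 512K request - 128 sg entries at 4K per page - while still mapping the same 128 pages one at a time, which is exactly the fallback this patch enables:

/*
 * Illustration only (not kernel code, all names invented): a fragmented
 * aperture with plenty of free pages can still fail one request for
 * 128 contiguous pages (128 * 4K = 512K), while mapping the same
 * 128 pages individually succeeds.
 */
#include <stdio.h>
#include <stdbool.h>

#define APERTURE_PAGES 256		/* toy aperture of 4K pages */
static bool used[APERTURE_PAGES];

/* Find n contiguous free pages, mark them used, return start or -1. */
static int alloc_contig(int n)
{
	for (int start = 0; start + n <= APERTURE_PAGES; start++) {
		int i;
		for (i = 0; i < n && !used[start + i]; i++)
			;
		if (i == n) {
			while (i--)
				used[start + i] = true;
			return start;
		}
	}
	return -1;
}

int main(void)
{
	/* Fragment the aperture: every other page is already in use. */
	for (int i = 0; i < APERTURE_PAGES; i += 2)
		used[i] = true;

	/* Merged attempt: one 128-page (512K) contiguous allocation. */
	printf("merged 128-page alloc: %d\n", alloc_contig(128));

	/* Fallback: map the same 128 pages one at a time. */
	int mapped = 0;
	for (int i = 0; i < 128; i++)
		if (alloc_contig(1) >= 0)
			mapped++;
	printf("page-by-page fallback: mapped %d of 128\n", mapped);
	return 0;
}

Run as-is, this prints -1 for the merged attempt and 128 of 128 for the page-by-page fallback, which is the situation the error path in the diff below now handles.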
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86_64/kernel/pci-gart.c  9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/x86_64/kernel/pci-gart.c b/arch/x86_64/kernel/pci-gart.c
index c37fc7726ba6..9188b25fad2a 100644
--- a/arch/x86_64/kernel/pci-gart.c
+++ b/arch/x86_64/kernel/pci-gart.c
@@ -457,9 +457,12 @@ int gart_map_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
 error:
 	flush_gart(NULL);
 	gart_unmap_sg(dev, sg, nents, dir);
-	/* When it was forced try again unforced */
-	if (force_iommu)
-		return dma_map_sg_nonforce(dev, sg, nents, dir);
+	/* When it was forced or merged try again in a dumb way */
+	if (force_iommu || iommu_merge) {
+		out = dma_map_sg_nonforce(dev, sg, nents, dir);
+		if (out > 0)
+			return out;
+	}
 	if (panic_on_overflow)
 		panic("dma_map_sg: overflow on %lu pages\n", pages);
 	iommu_full(dev, pages << PAGE_SHIFT, dir);