author		Bjorn Helgaas <bjorn.helgaas@hp.com>	2006-05-05 19:19:50 -0400
committer	Tony Luck <tony.luck@intel.com>	2006-05-08 19:32:05 -0400
commit		32e62c636a728cb39c0b3bd191286f2ca65d4028 (patch)
tree		656454a01e720819103c172daae15b5f2fd85d68
parent		6810b548b25114607e0814612d84125abccc0a4f (diff)
[IA64] rework memory attribute aliasing
This closes a couple holes in our attribute aliasing avoidance scheme:

  - The current kernel fails mmaps of some /dev/mem MMIO regions because
    they don't appear in the EFI memory map.  This keeps X from working
    on the Intel Tiger box.

  - The current kernel allows UC mmap of the 0-1MB region of
    /sys/.../legacy_mem even when the chipset doesn't support UC access.
    This causes an MCA when starting X on HP rx7620 and rx8620 boxes in
    the default configuration.

There's more detail in the Documentation/ia64/aliasing.txt file this adds,
but the general idea is that if a region might be covered by a granule-sized
kernel identity mapping, any access via /dev/mem or mmap must use the same
attribute as the identity mapping.

Otherwise, we fall back to using an attribute that is supported according to
the EFI memory map, or to using UC if the EFI memory map doesn't mention the
region.

Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-rw-r--r--	Documentation/ia64/aliasing.txt	| 208
-rw-r--r--	arch/ia64/kernel/efi.c	| 156
-rw-r--r--	arch/ia64/mm/ioremap.c	|  27
-rw-r--r--	arch/ia64/pci/pci.c	|  17
-rw-r--r--	include/asm-ia64/io.h	|   1
-rw-r--r--	include/asm-ia64/pgtable.h	|  22
-rw-r--r--	include/linux/efi.h	|   1
7 files changed, 359 insertions(+), 73 deletions(-)
diff --git a/Documentation/ia64/aliasing.txt b/Documentation/ia64/aliasing.txt
new file mode 100644
index 00000000000..38f9a52d182
--- /dev/null
+++ b/Documentation/ia64/aliasing.txt
@@ -0,0 +1,208 @@
		MEMORY ATTRIBUTE ALIASING ON IA-64

			Bjorn Helgaas
			<bjorn.helgaas@hp.com>
			May 4, 2006


MEMORY ATTRIBUTES

    Itanium supports several attributes for virtual memory references.
    The attribute is part of the virtual translation, i.e., it is
    contained in the TLB entry. The ones of most interest to the Linux
    kernel are:

	WB	Write-back (cacheable)
	UC	Uncacheable
	WC	Write-coalescing

    System memory typically uses the WB attribute. The UC attribute is
    used for memory-mapped I/O devices. The WC attribute is uncacheable
    like UC is, but writes may be delayed and combined to increase
    performance for things like frame buffers.

    The Itanium architecture requires that we avoid accessing the same
    page with both a cacheable mapping and an uncacheable mapping[1].

    The design of the chipset determines which attributes are supported
    on which regions of the address space. For example, some chipsets
    support either WB or UC access to main memory, while others support
    only WB access.
MEMORY MAP

    Platform firmware describes the physical memory map and the
    supported attributes for each region. At boot-time, the kernel uses
    the EFI GetMemoryMap() interface. ACPI can also describe memory
    devices and the attributes they support, but Linux/ia64 currently
    doesn't use this information.

    The kernel uses the efi_memmap table returned from GetMemoryMap() to
    learn the attributes supported by each region of physical address
    space. Unfortunately, this table does not completely describe the
    address space because some machines omit some or all of the MMIO
    regions from the map.

    The kernel maintains another table, kern_memmap, which describes the
    memory Linux is actually using and the attribute for each region.
    This contains only system memory; it does not contain MMIO space.

    The kern_memmap table typically contains only a subset of the system
    memory described by the efi_memmap. Linux/ia64 can't use all memory
    in the system because of constraints imposed by the identity mapping
    scheme.

    The efi_memmap table is preserved unmodified because the original
    boot-time information is required for kexec.
KERNEL IDENTITY MAPPINGS

    Linux/ia64 identity mappings are done with large pages, currently
    either 16MB or 64MB, referred to as "granules." Cacheable mappings
    are speculative[2], so the processor can read any location in the
    page at any time, independent of the programmer's intentions. This
    means that to avoid attribute aliasing, Linux can create a cacheable
    identity mapping only when the entire granule supports cacheable
    access.

    Therefore, kern_memmap contains only full granule-sized regions that
    can be referenced safely by an identity mapping.

    Uncacheable mappings are not speculative, so the processor will
    generate UC accesses only to locations explicitly referenced by
    software. This allows UC identity mappings to cover granules that
    are only partially populated, or populated with a combination of UC
    and WB regions.

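    As a rough, self-contained illustration of the granule constraint
    (the 16MB granule size and the example addresses below are
    placeholders, not values taken from a real memory map), the program
    below rounds a region out to the full granule that an identity
    mapping would have to cover:

	#include <stdio.h>

	#define GRANULE_SHIFT	24			/* assume 16MB granules */
	#define GRANULE_SIZE	(1UL << GRANULE_SHIFT)
	#define GRANULE_MASK	(~(GRANULE_SIZE - 1))

	int main(void)
	{
		unsigned long addr = 0xA0000;	/* e.g. a VGA frame buffer */
		unsigned long size = 0x20000;

		/* An identity mapping covers whole granules, so every byte
		 * in the rounded-out range must support the attribute. */
		unsigned long gran_base = addr & GRANULE_MASK;
		unsigned long gran_end  = (addr + size + GRANULE_SIZE - 1) & GRANULE_MASK;

		printf("region  0x%lx-0x%lx\n", addr, addr + size);
		printf("granule 0x%lx-0x%lx\n", gran_base, gran_end);
		return 0;
	}
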
USER MAPPINGS

    User mappings are typically done with 16K or 64K pages. The smaller
    page size allows more flexibility because only 16K or 64K has to be
    homogeneous with respect to memory attributes.

POTENTIAL ATTRIBUTE ALIASING CASES

    There are several ways the kernel creates new mappings:

    mmap of /dev/mem

	This uses remap_pfn_range(), which creates user mappings. These
	mappings may be either WB or UC. If the region being mapped
	happens to be in kern_memmap, meaning that it may also be mapped
	by a kernel identity mapping, the user mapping must use the same
	attribute as the kernel mapping.

	If the region is not in kern_memmap, the user mapping should use
	an attribute reported as being supported in the EFI memory map.

	Since the EFI memory map does not describe MMIO on some
	machines, this should use an uncacheable mapping as a fallback.

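	For concreteness, a minimal user-space sketch of such an mmap is
	shown below. The physical address is only a placeholder; it must
	be page-aligned and meaningful on the machine at hand, and the
	program needs access to /dev/mem (normally root):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		off_t phys = 0x80000000UL;	/* placeholder physical address */
		size_t len = 4096;

		int fd = open("/dev/mem", O_RDWR);
		if (fd < 0) {
			perror("open /dev/mem");
			return 1;
		}

		/* The kernel chooses WB or UC for this mapping as
		 * described above. */
		volatile unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
						 MAP_SHARED, fd, phys);
		if (p == MAP_FAILED) {
			perror("mmap");
			close(fd);
			return 1;
		}

		printf("first byte: 0x%02x\n", p[0]);
		munmap((void *)p, len);
		close(fd);
		return 0;
	}
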
    mmap of /sys/class/pci_bus/.../legacy_mem

	This is very similar to mmap of /dev/mem, except that legacy_mem
	only allows mmap of the one megabyte "legacy MMIO" area for a
	specific PCI bus. Typically this is the first megabyte of
	physical address space, but it may be different on machines with
	several VGA devices.

	"X" uses this to access VGA frame buffers. Using legacy_mem
	rather than /dev/mem allows multiple instances of X to talk to
	different VGA cards.

	The /dev/mem mmap constraints apply.

	However, since this is for mapping legacy MMIO space, WB access
	does not make sense. This matters on machines without legacy
	VGA support: these machines may have WB memory for the entire
	first megabyte (or even the entire first granule).

	On these machines, we could mmap legacy_mem as WB, which would
	be safe in terms of attribute aliasing, but X has no way of
	knowing that it is accessing regular memory, not a frame buffer,
	so the kernel should fail the mmap rather than doing it with WB.

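	A minimal user-space sketch follows. The "0000:00" bus name is an
	assumption; it should be the bus the VGA device actually sits on:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		const char *path = "/sys/class/pci_bus/0000:00/legacy_mem";
		size_t vga_off = 0xA0000;	/* VGA frame buffer offset */
		size_t len = 0x20000;		/* 0xA0000-0xBFFFF */

		int fd = open(path, O_RDWR);
		if (fd < 0) {
			perror("open legacy_mem");
			return 1;
		}

		/* With this change, the kernel refuses the mmap (-EINVAL)
		 * when the region cannot be mapped uncacheable. */
		void *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
				fd, vga_off);
		if (fb == MAP_FAILED)
			perror("mmap legacy_mem");
		else
			munmap(fb, len);

		close(fd);
		return 0;
	}
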
    read/write of /dev/mem

	This uses copy_from_user(), which implicitly uses a kernel
	identity mapping. This is obviously safe for things in
	kern_memmap.

	There may be corner cases of things that are not in kern_memmap,
	but could be accessed this way. For example, registers in MMIO
	space are not in kern_memmap, but could be accessed with a UC
	mapping. This would not cause attribute aliasing. But
	registers typically can be accessed only with four-byte or
	eight-byte accesses, and the copy_from_user() path doesn't allow
	any control over the access size, so this would be dangerous.

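	As a user-space illustration, the read below is serviced through
	the kernel's identity mapping with copy_to_user(); the caller has
	no say in the access size the kernel uses. The physical address is
	a placeholder and must lie in ordinary system memory:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		unsigned char buf[8];
		off_t phys = 0x1000;		/* placeholder physical address */

		int fd = open("/dev/mem", O_RDONLY);
		if (fd < 0) {
			perror("open /dev/mem");
			return 1;
		}

		/* The kernel copies through its own mapping; regions not
		 * in kern_memmap are rejected. */
		if (pread(fd, buf, sizeof(buf), phys) != sizeof(buf))
			perror("pread");

		close(fd);
		return 0;
	}
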
    ioremap()

	This returns a kernel identity mapping for use inside the
	kernel.

	If the region is in kern_memmap, we should use the attribute
	specified there. Otherwise, if the EFI memory map reports that
	the entire granule supports WB, we should use that (granules
	that are partially reserved or occupied by firmware do not appear
	in kern_memmap). Otherwise, we should use a UC mapping.

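	Condensed to its decision order, the ioremap() logic this patch
	adds in arch/ia64/mm/ioremap.c reads roughly as follows (a sketch,
	not the complete file):

	void __iomem *
	ioremap (unsigned long offset, unsigned long size)
	{
		u64 attr;
		unsigned long gran_base, gran_size;

		/* 1. Anything in kern_memmap must use the kernel's attribute. */
		attr = kern_mem_attribute(offset, size);
		if (attr & EFI_MEMORY_WB)
			return phys_to_virt(offset);
		else if (attr & EFI_MEMORY_UC)
			return __ioremap(offset, size);

		/* 2. Otherwise prefer WB if the EFI map allows it for the whole
		 *    granule (some chipsets can't do UC to memory). */
		gran_base = GRANULEROUNDDOWN(offset);
		gran_size = GRANULEROUNDUP(offset + size) - gran_base;
		if (efi_mem_attribute(gran_base, gran_size) & EFI_MEMORY_WB)
			return phys_to_virt(offset);

		/* 3. Fall back to an uncacheable mapping. */
		return __ioremap(offset, size);
	}
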
PAST PROBLEM CASES

    mmap of various MMIO regions from /dev/mem by "X" on Intel platforms

	The EFI memory map may not report these MMIO regions.

	These must be allowed so that X will work. This means that
	when the EFI memory map is incomplete, every /dev/mem mmap must
	succeed. It may create either WB or UC user mappings, depending
	on whether the region is in kern_memmap or the EFI memory map.

    mmap of 0x0-0xA0000 /dev/mem by "hwinfo" on HP sx1000 with VGA enabled

	See https://bugzilla.novell.com/show_bug.cgi?id=140858.

	The EFI memory map reports the following attributes:
	    0x00000-0x9FFFF	WB only
	    0xA0000-0xBFFFF	UC only (VGA frame buffer)
	    0xC0000-0xFFFFF	WB only

	This mmap is done with user pages, not kernel identity mappings,
	so it is safe to use WB mappings.

	The kernel VGA driver may ioremap the VGA frame buffer at 0xA0000,
	which will use a granule-sized UC mapping. This granule covers
	some WB-only memory, but since UC is non-speculative, the
	processor will never generate an uncacheable reference to the
	WB-only areas unless the driver explicitly touches them.

    mmap of 0x0-0xFFFFF legacy_mem by "X"

	If the EFI memory map reports this entire range as WB, there
	is no VGA MMIO hole, and the mmap should fail or be done with
	a WB mapping.

	There's no easy way for X to determine whether the 0xA0000-0xBFFFF
	region is a frame buffer or just memory, so I think it's best to
	just fail this mmap request rather than using a WB mapping. As
	far as I know, there's no need to map legacy_mem with WB
	mappings.

	Otherwise, a UC mapping of the entire region is probably safe.
	The VGA hole means the region will not be in kern_memmap. The
	HP sx1000 chipset doesn't support UC access to the memory
	surrounding the VGA hole, but X doesn't need that area anyway and
	should not reference it.

    mmap of 0xA0000-0xBFFFF legacy_mem by "X" on HP sx1000 with VGA disabled

	The EFI memory map reports the following attributes:
	    0x00000-0xFFFFF	WB only (no VGA MMIO hole)

	This is a special case of the previous case, and the mmap should
	fail for the same reason as above.

NOTES

    [1] SDM rev 2.2, vol 2, sec 4.4.1.
    [2] SDM rev 2.2, vol 2, sec 4.4.6.
diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
index 12cfedce73b..c33d0ba7e30 100644
--- a/arch/ia64/kernel/efi.c
+++ b/arch/ia64/kernel/efi.c
@@ -8,6 +8,8 @@
  * Copyright (C) 1999-2003 Hewlett-Packard Co.
  *	David Mosberger-Tang <davidm@hpl.hp.com>
  *	Stephane Eranian <eranian@hpl.hp.com>
 + * (c) Copyright 2006 Hewlett-Packard Development Company, L.P.
 + *	Bjorn Helgaas <bjorn.helgaas@hp.com>
  *
  * All EFI Runtime Services are not implemented yet as EFI only
  * supports physical mode addressing on SoftSDV. This is to be fixed
@@ -622,28 +624,20 @@ efi_get_iobase (void)
 	return 0;
 }
 
-static efi_memory_desc_t *
-efi_memory_descriptor (unsigned long phys_addr)
+static struct kern_memdesc *
+kern_memory_descriptor (unsigned long phys_addr)
 {
-	void *efi_map_start, *efi_map_end, *p;
-	efi_memory_desc_t *md;
-	u64 efi_desc_size;
-
-	efi_map_start = __va(ia64_boot_param->efi_memmap);
-	efi_map_end = efi_map_start + ia64_boot_param->efi_memmap_size;
-	efi_desc_size = ia64_boot_param->efi_memdesc_size;
+	struct kern_memdesc *md;
 
-	for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
-		md = p;
-
-		if (phys_addr - md->phys_addr < (md->num_pages << EFI_PAGE_SHIFT))
+	for (md = kern_memmap; md->start != ~0UL; md++) {
+		if (phys_addr - md->start < (md->num_pages << EFI_PAGE_SHIFT))
 			 return md;
 	}
 	return 0;
 }
 
-static int
-efi_memmap_has_mmio (void)
+static efi_memory_desc_t *
+efi_memory_descriptor (unsigned long phys_addr)
 {
 	void *efi_map_start, *efi_map_end, *p;
 	efi_memory_desc_t *md;
@@ -656,8 +650,8 @@ efi_memmap_has_mmio (void)
 	for (p = efi_map_start; p < efi_map_end; p += efi_desc_size) {
 		md = p;
 
-		if (md->type == EFI_MEMORY_MAPPED_IO)
-			return 1;
+		if (phys_addr - md->phys_addr < (md->num_pages << EFI_PAGE_SHIFT))
+			 return md;
 	}
 	return 0;
 }
@@ -683,71 +677,125 @@ efi_mem_attributes (unsigned long phys_addr)
 }
 EXPORT_SYMBOL(efi_mem_attributes);
 
-/*
- * Determines whether the memory at phys_addr supports the desired
- * attribute (WB, UC, etc). If this returns 1, the caller can safely
- * access size bytes at phys_addr with the specified attribute.
- */
-int
-efi_mem_attribute_range (unsigned long phys_addr, unsigned long size, u64 attr)
+u64
+efi_mem_attribute (unsigned long phys_addr, unsigned long size)
 {
 	unsigned long end = phys_addr + size;
 	efi_memory_desc_t *md = efi_memory_descriptor(phys_addr);
+	u64 attr;
+
+	if (!md)
+		return 0;
+
+	/*
+	 * EFI_MEMORY_RUNTIME is not a memory attribute; it just tells
+	 * the kernel that firmware needs this region mapped.
+	 */
+	attr = md->attribute & ~EFI_MEMORY_RUNTIME;
+	do {
+		unsigned long md_end = efi_md_end(md);
+
+		if (end <= md_end)
+			return attr;
+
+		md = efi_memory_descriptor(md_end);
+		if (!md || (md->attribute & ~EFI_MEMORY_RUNTIME) != attr)
+			return 0;
+	} while (md);
+	return 0;
+}
+
+u64
+kern_mem_attribute (unsigned long phys_addr, unsigned long size)
+{
+	unsigned long end = phys_addr + size;
+	struct kern_memdesc *md;
+	u64 attr;
 
 	/*
-	 * Some firmware doesn't report MMIO regions in the EFI memory
-	 * map. The Intel BigSur (a.k.a. HP i2000) has this problem.
-	 * On those platforms, we have to assume UC is valid everywhere.
+	 * This is a hack for ioremap calls before we set up kern_memmap.
+	 * Maybe we should do efi_memmap_init() earlier instead.
 	 */
-	if (!md || (md->attribute & attr) != attr) {
-		if (attr == EFI_MEMORY_UC && !efi_memmap_has_mmio())
-			return 1;
+	if (!kern_memmap) {
+		attr = efi_mem_attribute(phys_addr, size);
+		if (attr & EFI_MEMORY_WB)
+			return EFI_MEMORY_WB;
 		return 0;
 	}
 
+	md = kern_memory_descriptor(phys_addr);
+	if (!md)
+		return 0;
+
+	attr = md->attribute;
 	do {
-		unsigned long md_end = efi_md_end(md);
+		unsigned long md_end = kmd_end(md);
 
 		if (end <= md_end)
-			return 1;
+			return attr;
 
-		md = efi_memory_descriptor(md_end);
-		if (!md || (md->attribute & attr) != attr)
+		md = kern_memory_descriptor(md_end);
+		if (!md || md->attribute != attr)
 			return 0;
 	} while (md);
 	return 0;
 }
+EXPORT_SYMBOL(kern_mem_attribute);
 
-/*
- * For /dev/mem, we only allow read & write system calls to access
- * write-back memory, because read & write don't allow the user to
- * control access size.
- */
 int
 valid_phys_addr_range (unsigned long phys_addr, unsigned long size)
 {
-	return efi_mem_attribute_range(phys_addr, size, EFI_MEMORY_WB);
+	u64 attr;
+
+	/*
+	 * /dev/mem reads and writes use copy_to_user(), which implicitly
+	 * uses a granule-sized kernel identity mapping. It's really
+	 * only safe to do this for regions in kern_memmap. For more
+	 * details, see Documentation/ia64/aliasing.txt.
+	 */
+	attr = kern_mem_attribute(phys_addr, size);
+	if (attr & EFI_MEMORY_WB || attr & EFI_MEMORY_UC)
+		return 1;
+	return 0;
 }
 
-/*
- * We allow mmap of anything in the EFI memory map that supports
- * either write-back or uncacheable access. For uncacheable regions,
- * the supported access sizes are system-dependent, and the user is
- * responsible for using the correct size.
- *
- * Note that this doesn't currently allow access to hot-added memory,
- * because that doesn't appear in the boot-time EFI memory map.
- */
 int
 valid_mmap_phys_addr_range (unsigned long phys_addr, unsigned long size)
 {
-	if (efi_mem_attribute_range(phys_addr, size, EFI_MEMORY_WB))
-		return 1;
+	/*
+	 * MMIO regions are often missing from the EFI memory map.
+	 * We must allow mmap of them for programs like X, so we
+	 * currently can't do any useful validation.
+	 */
+	return 1;
+}
 
-	if (efi_mem_attribute_range(phys_addr, size, EFI_MEMORY_UC))
-		return 1;
+pgprot_t
+phys_mem_access_prot(struct file *file, unsigned long pfn, unsigned long size,
+		     pgprot_t vma_prot)
+{
+	unsigned long phys_addr = pfn << PAGE_SHIFT;
+	u64 attr;
 
-	return 0;
+	/*
+	 * For /dev/mem mmap, we use user mappings, but if the region is
+	 * in kern_memmap (and hence may be covered by a kernel mapping),
+	 * we must use the same attribute as the kernel mapping.
+	 */
+	attr = kern_mem_attribute(phys_addr, size);
+	if (attr & EFI_MEMORY_WB)
+		return pgprot_cacheable(vma_prot);
+	else if (attr & EFI_MEMORY_UC)
+		return pgprot_noncached(vma_prot);
+
+	/*
+	 * Some chipsets don't support UC access to memory. If
+	 * WB is supported, we prefer that.
+	 */
+	if (efi_mem_attribute(phys_addr, size) & EFI_MEMORY_WB)
+		return pgprot_cacheable(vma_prot);
+
+	return pgprot_noncached(vma_prot);
 }
 
 int __init
diff --git a/arch/ia64/mm/ioremap.c b/arch/ia64/mm/ioremap.c
index 643ccc6960c..07bd02b6c37 100644
--- a/arch/ia64/mm/ioremap.c
+++ b/arch/ia64/mm/ioremap.c
@@ -11,6 +11,7 @@
 #include <linux/module.h>
 #include <linux/efi.h>
 #include <asm/io.h>
+#include <asm/meminit.h>
 
 static inline void __iomem *
 __ioremap (unsigned long offset, unsigned long size)
@@ -21,16 +22,29 @@ __ioremap (unsigned long offset, unsigned long size)
 void __iomem *
 ioremap (unsigned long offset, unsigned long size)
 {
-	if (efi_mem_attribute_range(offset, size, EFI_MEMORY_WB))
-		return phys_to_virt(offset);
+	u64 attr;
+	unsigned long gran_base, gran_size;
 
-	if (efi_mem_attribute_range(offset, size, EFI_MEMORY_UC))
+	/*
+	 * For things in kern_memmap, we must use the same attribute
+	 * as the rest of the kernel. For more details, see
+	 * Documentation/ia64/aliasing.txt.
+	 */
+	attr = kern_mem_attribute(offset, size);
+	if (attr & EFI_MEMORY_WB)
+		return phys_to_virt(offset);
+	else if (attr & EFI_MEMORY_UC)
 		return __ioremap(offset, size);
 
 	/*
-	 * Someday this should check ACPI resources so we
-	 * can do the right thing for hot-plugged regions.
+	 * Some chipsets don't support UC access to memory. If
+	 * WB is supported for the whole granule, we prefer that.
 	 */
+	gran_base = GRANULEROUNDDOWN(offset);
+	gran_size = GRANULEROUNDUP(offset + size) - gran_base;
+	if (efi_mem_attribute(gran_base, gran_size) & EFI_MEMORY_WB)
+		return phys_to_virt(offset);
+
 	return __ioremap(offset, size);
 }
 EXPORT_SYMBOL(ioremap);
@@ -38,6 +52,9 @@ EXPORT_SYMBOL(ioremap);
 void __iomem *
 ioremap_nocache (unsigned long offset, unsigned long size)
 {
+	if (kern_mem_attribute(offset, size) & EFI_MEMORY_WB)
+		return 0;
+
 	return __ioremap(offset, size);
 }
 EXPORT_SYMBOL(ioremap_nocache);
diff --git a/arch/ia64/pci/pci.c b/arch/ia64/pci/pci.c
index ab829a22f8a..30d148f3404 100644
--- a/arch/ia64/pci/pci.c
+++ b/arch/ia64/pci/pci.c
@@ -645,18 +645,31 @@ char *ia64_pci_get_legacy_mem(struct pci_bus *bus)
 int
 pci_mmap_legacy_page_range(struct pci_bus *bus, struct vm_area_struct *vma)
 {
+	unsigned long size = vma->vm_end - vma->vm_start;
+	pgprot_t prot;
 	char *addr;
 
+	/*
+	 * Avoid attribute aliasing. See Documentation/ia64/aliasing.txt
+	 * for more details.
+	 */
+	if (!valid_mmap_phys_addr_range(vma->vm_pgoff << PAGE_SHIFT, size))
+		return -EINVAL;
+	prot = phys_mem_access_prot(NULL, vma->vm_pgoff, size,
+				    vma->vm_page_prot);
+	if (pgprot_val(prot) != pgprot_val(pgprot_noncached(vma->vm_page_prot)))
+		return -EINVAL;
+
 	addr = pci_get_legacy_mem(bus);
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
 
 	vma->vm_pgoff += (unsigned long)addr >> PAGE_SHIFT;
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+	vma->vm_page_prot = prot;
 	vma->vm_flags |= (VM_SHM | VM_RESERVED | VM_IO);
 
 	if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
-			    vma->vm_end - vma->vm_start, vma->vm_page_prot))
+			    size, vma->vm_page_prot))
 		return -EAGAIN;
 
 	return 0;
diff --git a/include/asm-ia64/io.h b/include/asm-ia64/io.h
index c2e3742108b..781ee2c7e8c 100644
--- a/include/asm-ia64/io.h
+++ b/include/asm-ia64/io.h
@@ -88,6 +88,7 @@ phys_to_virt (unsigned long address)
 }
 
 #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
+extern u64 kern_mem_attribute (unsigned long phys_addr, unsigned long size);
 extern int valid_phys_addr_range (unsigned long addr, size_t count); /* efi.c */
 extern int valid_mmap_phys_addr_range (unsigned long addr, size_t count);
 
diff --git a/include/asm-ia64/pgtable.h b/include/asm-ia64/pgtable.h
index c0f8144f234..90f3a232923 100644
--- a/include/asm-ia64/pgtable.h
+++ b/include/asm-ia64/pgtable.h
@@ -317,22 +317,20 @@ ia64_phys_addr_valid (unsigned long addr)
 #define pte_mkhuge(pte)	(__pte(pte_val(pte)))
 
 /*
- * Macro to a page protection value as "uncacheable". Note that "protection" is really a
- * misnomer here as the protection value contains the memory attribute bits, dirty bits,
- * and various other bits as well.
+ * Make page protection values cacheable, uncacheable, or write-
+ * combining. Note that "protection" is really a misnomer here as the
+ * protection value contains the memory attribute bits, dirty bits, and
+ * various other bits as well.
  */
+#define pgprot_cacheable(prot)		__pgprot((pgprot_val(prot) & ~_PAGE_MA_MASK) | _PAGE_MA_WB)
 #define pgprot_noncached(prot)		__pgprot((pgprot_val(prot) & ~_PAGE_MA_MASK) | _PAGE_MA_UC)
-
-/*
- * Macro to make mark a page protection value as "write-combining".
- * Note that "protection" is really a misnomer here as the protection
- * value contains the memory attribute bits, dirty bits, and various
- * other bits as well. Accesses through a write-combining translation
- * works bypasses the caches, but does allow for consecutive writes to
- * be combined into single (but larger) write transactions.
- */
 #define pgprot_writecombine(prot)	__pgprot((pgprot_val(prot) & ~_PAGE_MA_MASK) | _PAGE_MA_WC)
 
+struct file;
+extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
+				     unsigned long size, pgprot_t vma_prot);
+#define __HAVE_PHYS_MEM_ACCESS_PROT
+
 static inline unsigned long
 pgd_index (unsigned long address)
 {
diff --git a/include/linux/efi.h b/include/linux/efi.h
index e203613d3ae..66d621dbcb6 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -294,6 +294,7 @@ extern void efi_enter_virtual_mode (void); /* switch EFI to virtual mode, if pos
 extern u64 efi_get_iobase (void);
 extern u32 efi_mem_type (unsigned long phys_addr);
 extern u64 efi_mem_attributes (unsigned long phys_addr);
+extern u64 efi_mem_attribute (unsigned long phys_addr, unsigned long size);
 extern int efi_mem_attribute_range (unsigned long phys_addr, unsigned long size,
 				    u64 attr);
 extern int __init efi_uart_console_only (void);