author     Dave Hansen <dave.hansen@linux.intel.com>    2016-02-12 16:02:10 -0500
committer  Ingo Molnar <mingo@kernel.org>               2016-02-18 03:31:51 -0500
commit     8f62c883222c9e3c06d60b5e55e307a3d1f18257 (patch)
tree       7b9a6263f7232ebc49bfeae15668b198d1a0b032 /include/linux/mm.h
parent     63c17fb8e5a46a16e10e82005748837fd11a2024 (diff)
x86/mm/pkeys: Add arch-specific VMA protection bits
Lots of things seem to do:
vma->vm_page_prot = vm_get_page_prot(flags);
and the ptes get created right from things we pull out
of ->vm_page_prot. So it is very convenient if we can
store the protection key in flags and vm_page_prot, just
like the existing permission bits (_PAGE_RW/PRESENT). It
greatly reduces the amount of plumbing and arch-specific
hacking we have to do in generic code.
This also takes the new PROT_PKEY{0,1,2,3} flags and
turns *those* into VM_ flags for vma->vm_flags.
The protection key values are stored in 4 places:
1. "prot" argument to system calls
2. vma->vm_flags, filled from the mmap "prot"
3. vma->vm_page_prot, filled from vma->vm_flags
4. the PTE itself.
The pseudocode for these four steps is as follows:
mmap(PROT_PKEY*)
vma->vm_flags = ... | arch_calc_vm_prot_bits(mmap_prot);
vma->vm_page_prot = ... | arch_vm_get_page_prot(vma->vm_flags);
pte = pfn | vma->vm_page_prot
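As a purely illustrative example, the first translation step could look
roughly like the sketch below; the PROT_PKEY{0,1,2,3} names come from this
series, but the exact helper shape is an assumption, not code added by this
patch:

    /*
     * Hypothetical sketch only: fold the PROT_PKEY{0,1,2,3} mmap/mprotect
     * bits into the new VM_PKEY_BIT* vma->vm_flags bits.
     */
    static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot)
    {
            unsigned long vm_flags = 0;

            if (prot & PROT_PKEY0)
                    vm_flags |= VM_PKEY_BIT0;
            if (prot & PROT_PKEY1)
                    vm_flags |= VM_PKEY_BIT1;
            if (prot & PROT_PKEY2)
                    vm_flags |= VM_PKEY_BIT2;
            if (prot & PROT_PKEY3)
                    vm_flags |= VM_PKEY_BIT3;

            return vm_flags;
    }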
Note that this provides a new definition for x86:
arch_vm_get_page_prot()
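For reference, the x86 side of that hook could be a bit-for-bit translation
of the VM_PKEY_BIT* flags into protection-key PTE bits, along the lines of
the sketch below (the _PAGE_PKEY_BIT* names are assumed from the rest of the
pkeys series and are not defined by this patch):

    /* Sketch only, assuming _PAGE_PKEY_BIT0..3 page-table bit definitions. */
    #define arch_vm_get_page_prot(vm_flags) __pgprot(                       \
                    ((vm_flags) & VM_PKEY_BIT0 ? _PAGE_PKEY_BIT0 : 0) |     \
                    ((vm_flags) & VM_PKEY_BIT1 ? _PAGE_PKEY_BIT1 : 0) |     \
                    ((vm_flags) & VM_PKEY_BIT2 ? _PAGE_PKEY_BIT2 : 0) |     \
                    ((vm_flags) & VM_PKEY_BIT3 ? _PAGE_PKEY_BIT3 : 0))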
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20160212210210.FE483A42@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/linux/mm.h')
-rw-r--r--   include/linux/mm.h   7
1 file changed, 7 insertions, 0 deletions
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 54d173bcc327..3056369bab1d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -183,6 +183,13 @@ extern unsigned int kobjsize(const void *objp);
 
 #if defined(CONFIG_X86)
 # define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
+#if defined (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)
+# define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
+# define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* A protection key is a 4-bit value */
+# define VM_PKEY_BIT1	VM_HIGH_ARCH_1
+# define VM_PKEY_BIT2	VM_HIGH_ARCH_2
+# define VM_PKEY_BIT3	VM_HIGH_ARCH_3
+#endif
 #elif defined(CONFIG_PPC)
 # define VM_SAO		VM_ARCH_1	/* Strong Access Ordering (powerpc) */
 #elif defined(CONFIG_PARISC)
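For context, the intended consumer of the VM_PKEY_* definitions above is code
that pulls the 4-bit key back out of vma->vm_flags. A hedged sketch of that
pattern follows; the helper name is illustrative, and the real accessor is
added by later patches in the series:

    /* Illustrative only: recover the protection-key value from vm_flags. */
    static inline int vma_pkey_sketch(struct vm_area_struct *vma)
    {
            unsigned long mask = VM_PKEY_BIT0 | VM_PKEY_BIT1 |
                                 VM_PKEY_BIT2 | VM_PKEY_BIT3;

            return (vma->vm_flags & mask) >> VM_PKEY_SHIFT;
    }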