authorDavid S. Miller <davem@davemloft.net>2018-03-18 10:38:59 -0400
committerDavid S. Miller <davem@davemloft.net>2018-03-18 10:38:59 -0400
commit88fe35293446d19c4870e581b8b78d4714fc63d2 (patch)
tree3c24aa4630b4a19f347a4237ff419de4c77a3004
parent8f5fd927c3a7576d57248a2d7a0861c3f2795973 (diff)
parentb9fa03656b049d2db61c60233d5cde272ade0ac8 (diff)
Merge branch 'sparc64-ADI'
Khalid Aziz says:

====================
Application Data Integrity feature introduced by SPARC M7

V12 changes: This series is the same as v10 and v11 and was simply
rebased on the 4.16-rc2 kernel; patch 11 was added to update the signal
delivery code to use the new helper functions added by Eric Biederman.
Can mm maintainers please review patches 2, 7, 8 and 9, which are arch
independent, along with the include/linux/mm.h and mm/ksm.c changes in
patch 10, and ack these if everything looks good?

The SPARC M7 processor adds additional metadata for the memory address
space that can be used to secure access to regions of memory. This
metadata is implemented as a 4-bit tag attached to each cacheline-size
block of memory. A task can set a tag on any number of such blocks.
Access to a block is granted only if the virtual address used to access
it has the tag encoded in the uppermost 4 bits of the VA. Since the
SPARC processor does not implement all 64 bits of the VA, the top 4 bits
are available for ADI tags. Any mismatch between the tag encoded in the
VA and the tag set on the memory block results in a trap. Tags are
verified in the VA presented to the MMU and are associated with the
physical page the VA maps onto. If a memory page is swapped out and the
page frame gets reused for another task, the tags are lost and hence
must be saved when swapping or migrating the page.

A userspace task enables ADI through mprotect(). This patch series adds
a page protection bit PROT_ADI and a corresponding VMA flag
VM_SPARC_ADI. VM_SPARC_ADI is used to trigger setting the TTE.mcd bit in
the sparc pte, which enables ADI checking on the corresponding page. The
MMU validates the tag embedded in the VA for every page that has the
TTE.mcd bit set in its pte. After enabling ADI on a memory range, the
userspace task can set ADI version tags using the stxa instruction with
the ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY ASI.

Once a userspace task calls mprotect() with PROT_ADI, the kernel takes
the following overall steps:

1. Find the VMAs covering the address range passed to mprotect() and
   set the VM_SPARC_ADI flag. If the address range covers a subset of a
   VMA, the VMA will be split.

2. When a page is allocated for a VA and the VMA covering this VA has
   the VM_SPARC_ADI flag set, set the TTE.mcd bit so the MMU will check
   the version tag.

3. Userspace can now set version tags on the memory it has enabled ADI
   on. Userspace accesses ADI-enabled memory using a virtual address
   that has the version tag embedded in the high bits. The MMU
   validates this version tag against the actual tag set on the memory.
   If the tag matches, the MMU performs the VA->PA translation and
   access is granted. If there is a mismatch, the hypervisor sends a
   data access exception or a precise memory corruption detected
   exception, depending upon whether precise exceptions are enabled
   (controlled by the MCDPERR register). The kernel sends SIGSEGV to
   the task with the appropriate si_code.

4. If a page is being swapped out or migrated, the kernel must save any
   ADI tags set on the page. The kernel maintains a page worth of tag
   storage descriptors. Each descriptor points to a tag storage space
   and the address range it covers. If the page being swapped out or
   migrated has ADI enabled on it, the kernel finds a tag storage
   descriptor that covers the address range for the page, or allocates
   a new descriptor if none of the existing descriptors cover the
   range. The kernel saves the tags from the page into the tag storage
   space the descriptor points to.

5. When the page is swapped back in or reinstantiated after migration,
   the kernel restores the version tags on the new physical page by
   retrieving the original tags from the tag storage pointed to by a
   tag storage descriptor for the virtual address range of the new
   page.

A user task can disable ADI by calling mprotect() again on the memory
range with the PROT_ADI bit unset. The kernel clears the VM_SPARC_ADI
flag in the VMAs, merges adjacent VMAs if necessary, and clears the
TTE.mcd bit in the corresponding ptes.

The IOMMU does not support ADI checking. Any version tags embedded in
the top bits of a VA meant for the IOMMU are cleared and replaced with a
sign extension of the first non-version-tag bit (bit 59 for SPARC M7)
for IOMMU addresses.

This patch series adds support for this feature in 11 patches:

Patch 1/11: A tag mismatch on access by a task results in a trap from
the hypervisor as a data access exception or a precise memory
corruption detected exception. As part of handling these exceptions,
the kernel sends a SIGSEGV to the user process with a special si_code
to indicate which fault occurred. This patch adds three new si_codes to
differentiate between the various mismatch errors.

Patch 2/11: When a page is swapped or migrated, metadata associated
with the page must be saved so it can be restored later. This patch
adds a new function that saves/restores this metadata when updating the
pte upon a swap/migration.

Patch 3/11: The SPARC M7 processor adds new fields to control registers
to support the ADI feature. It also adds a new exception for precise
traps on tag mismatch. This patch adds definitions for the new control
register fields, new ASIs for ADI, and an exception handler for the
precise trap on tag mismatch.

Patch 4/11: New hypervisor fault types were added by the SPARC M7
processor to support the ADI feature. This patch adds code to handle
these fault types in the data access exception handler.

Patch 5/11: When ADI is in use for a page and a tag mismatch occurs,
the processor raises a "Memory corruption Detected" trap. This patch
adds a handler for this trap.

Patch 6/11: ADI usage is governed by ADI properties on a platform.
These properties are provided to the kernel by firmware. This patch
adds new auxiliary vectors that provide these values to userspace.

Patch 7/11: arch_validate_prot() is used to validate the new protection
bits asked for by the userspace app. Validating protection bits may
need the context of the address space the bits are being applied to.
One such example is the PROT_ADI bit on the sparc processor, which
enables ADI protection on an address range. ADI protection applies only
to addresses covered by physical RAM and not other PFN-mapped addresses
or device addresses. This patch adds "address" to the parameters passed
to arch_validate_prot() to provide that context.

Patch 8/11: When protection bits are changed on a page, the kernel
carries forward all protection bits except for read/write/exec.
Additional code was added to allow the kernel to clear PKEY bits on
x86, but this requirement to clear other bits is not unique to x86.
This patch extends the existing code to allow other architectures to
clear any other protection bits as well on a protection bit change.

Patch 9/11: When a processor supports additional metadata on memory
pages, that metadata needs to be copied to new memory pages when those
pages are moved. This patch allows architecture-specific code to
replace the default copy_highpage() routine with an arch-specific
version that copies the metadata as well as the data on the page.

Patch 10/11: This patch adds support for a userspace task to enable ADI
and enable tag checking for subsets of its address space. As part of
enabling this feature, it adds support for manipulating the precise
exception for memory corruption detection, code to save and restore
tags on page swap and migration, and code to handle ADI-tagged
addresses for DMA.

Patch 11/11: Update the signal delivery code in
arch/sparc/kernel/traps_64.c to use the new helper function
force_sig_fault() added by commit f8ec66014ffd ("signal: Add
send_sig_fault and force_sig_fault").
Changelog v12:
	- Rebased to 4.16-rc2
	- Added patch 11 to update signal delivery functions

Changelog v11:
	- Rebased to 4.15

Changelog v10:
	- Patch 1/10: Updated si_code definitions for SEGV to match 4.14
	- Patch 2/10: No changes
	- Patch 3/10: Updated copyright
	- Patch 4/10: No changes
	- Patch 5/10: No changes
	- Patch 6/10: Updated copyright
	- Patch 7/10: No changes
	- Patch 8/10: No changes
	- Patch 9/10: No changes
	- Patch 10/10: Added code to return from kernel path to set
	  PSTATE.mcde if kernel continues execution in another thread
	  (Suggested by Anthony)

Changelog v9:
	- Patches 1/10 through 8/10: No changes
	- Patch 9/10: New patch
	- Patch 10/10: Patch 9 from v8. Added code to copy ADI tags when
	  pages are migrated. Updated code to detect overflow and
	  underflow of addresses when allocating tag storage.

Changelog v8:
	- Patch 1/9: No changes
	- Patch 2/9: Fixed an erroneous "}"
	- Patch 3/9: Minor print formatting change
	- Patch 4/9: No changes
	- Patch 5/9: No changes
	- Patch 6/9: Added AT_ADI_UEONADI back
	- Patch 7/9: Added addr parameter to powerpc arch_validate_prot()
	- Patch 8/9: No changes
	- Patch 9/9:
	  - Documentation updates
	  - Added an IPI on mprotect(...PROT_ADI...) call and restore of
	    TSTATE.MCDE on context switch
	  - Removed restriction on enabling ADI on read-only memory
	  - Changed kzalloc() for tag storage to use GFP_NOWAIT
	  - Added code to handle overflow and underflow when allocating
	    tag storage
	  - Replaced sun_m7_patch_1insn_range() with
	    sun4v_patch_1insn_range()
	  - Added membar after restoring ADI tags in copy_user_highpage()

Changelog v7:
	- Patch 1/9: No changes
	- Patch 2/9: Updated parameters to arch specific swap in/out
	  handlers
	- Patch 3/9: No changes
	- Patch 4/9: New patch split off from patch 4/4 in v6
	- Patch 5/9: New patch split off from patch 4/4 in v6
	- Patch 6/9: New patch split off from patch 4/4 in v6
	- Patch 7/9: New patch
	- Patch 8/9: New patch
	- Patch 9/9:
	  - Enhanced arch_validate_prot() to enable ADI only on writable
	    addresses backed by physical RAM
	  - Added support for saving/restoring ADI tags for each ADI
	    block size address range on a page on swap in/out
	  - Copy ADI tags on COW
	  - Updated values for auxiliary vectors to not conflict with
	    values on other architectures and avoid conflict in glibc
	  - Disabled same page merging on ADI enabled pages
	  - Enabled ADI only on writable addresses backed by physical RAM
	  - Split parts of the patch off into separate patches

Changelog v6:
	- Patch 1/4: No changes
	- Patch 2/4: No changes
	- Patch 3/4: Added missing nop in the delay slot in
	  sun4v_mcd_detect_precise
	- Patch 4/4: Eliminated instructions to read and write PSTATE as
	  well as MCDPER and PMCDPER on every access to userspace
	  addresses by setting PSTATE and PMCDPER correctly upon entry
	  into the kernel

Changelog v5:
	- Patch 1/4: No changes
	- Patch 2/4: Replaced set_swp_pte_at() with new architecture
	  functions arch_do_swap_page() and arch_unmap_one() that support
	  architecture specific actions to be taken on page swap and
	  migration
	- Patch 3/4: Fixed indentation issues in assembly code
	- Patch 4/4:
	  - Fixed indentation issues and instructions in assembly code
	  - Removed CONFIG_SPARC64 from mdesc.c
	  - Changed to maintain the state of the MCDPER register in
	    thread info flags as opposed to in mm context. MCDPER is a
	    per-thread state and belongs in thread info flags, not in the
	    mm context which is shared across threads. Added comments to
	    clarify that this is a lazily maintained state which must be
	    updated on context switch and copy_process()
	  - Updated code to use the new arch_do_swap_page() and
	    arch_unmap_one() functions

Testing:
	- All functionality was tested with 8K normal pages as well as
	  hugepages using malloc, mmap and shm.
	- Multiple long duration stress tests were run using hugepages
	  over 2+ months. Normal pages were tested with shorter duration
	  stress tests.
	- Tested swapping with malloc and shm by reducing max memory and
	  allocating three times the available system memory by active
	  processes using ADI on allocated memory. Ran through multiple
	  hours long runs of this test.
	- Tested page migration with malloc and shm by migrating data
	  pages of an active ADI test process using migratepages, back
	  and forth between two nodes every few seconds, over an hour
	  long run. Verified page migration through /proc/<pid>/numa_maps.
	- Tested COW support using a test that forks children that read
	  from ADI enabled pages shared with the parent and other
	  children and write to them as well, forcing COW.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-rw-r--r--  Documentation/sparc/adi.txt              | 278
-rw-r--r--  arch/powerpc/include/asm/mman.h          |   4
-rw-r--r--  arch/powerpc/kernel/syscalls.c           |   2
-rw-r--r--  arch/sparc/include/asm/adi.h             |   6
-rw-r--r--  arch/sparc/include/asm/adi_64.h          |  47
-rw-r--r--  arch/sparc/include/asm/elf_64.h          |   5
-rw-r--r--  arch/sparc/include/asm/hypervisor.h      |   2
-rw-r--r--  arch/sparc/include/asm/mman.h            |  84
-rw-r--r--  arch/sparc/include/asm/mmu_64.h          |  17
-rw-r--r--  arch/sparc/include/asm/mmu_context_64.h  |  51
-rw-r--r--  arch/sparc/include/asm/page_64.h         |   6
-rw-r--r--  arch/sparc/include/asm/pgtable_64.h      |  48
-rw-r--r--  arch/sparc/include/asm/thread_info_64.h  |   2
-rw-r--r--  arch/sparc/include/asm/trap_block.h      |   2
-rw-r--r--  arch/sparc/include/asm/ttable.h          |  10
-rw-r--r--  arch/sparc/include/uapi/asm/asi.h        |   5
-rw-r--r--  arch/sparc/include/uapi/asm/auxvec.h     |  11
-rw-r--r--  arch/sparc/include/uapi/asm/mman.h       |   2
-rw-r--r--  arch/sparc/include/uapi/asm/pstate.h     |  10
-rw-r--r--  arch/sparc/kernel/Makefile               |   1
-rw-r--r--  arch/sparc/kernel/adi_64.c               | 397
-rw-r--r--  arch/sparc/kernel/entry.h                |   3
-rw-r--r--  arch/sparc/kernel/etrap_64.S             |  27
-rw-r--r--  arch/sparc/kernel/head_64.S              |   1
-rw-r--r--  arch/sparc/kernel/mdesc.c                |   2
-rw-r--r--  arch/sparc/kernel/process_64.c           |  25
-rw-r--r--  arch/sparc/kernel/rtrap_64.S             |  33
-rw-r--r--  arch/sparc/kernel/setup_64.c             |   2
-rw-r--r--  arch/sparc/kernel/sun4v_mcd.S            |  18
-rw-r--r--  arch/sparc/kernel/traps_64.c             | 130
-rw-r--r--  arch/sparc/kernel/ttable_64.S            |   6
-rw-r--r--  arch/sparc/kernel/urtt_fill.S            |   7
-rw-r--r--  arch/sparc/kernel/vmlinux.lds.S          |   5
-rw-r--r--  arch/sparc/mm/gup.c                      |  37
-rw-r--r--  arch/sparc/mm/hugetlbpage.c              |  14
-rw-r--r--  arch/sparc/mm/init_64.c                  |  69
-rw-r--r--  arch/sparc/mm/tsb.c                      |  21
-rw-r--r--  arch/x86/kernel/signal_compat.c          |   2
-rw-r--r--  include/asm-generic/pgtable.h            |  36
-rw-r--r--  include/linux/highmem.h                  |   4
-rw-r--r--  include/linux/mm.h                       |   9
-rw-r--r--  include/linux/mman.h                     |   2
-rw-r--r--  include/uapi/asm-generic/siginfo.h       |   5
-rw-r--r--  mm/ksm.c                                 |   4
-rw-r--r--  mm/memory.c                              |   1
-rw-r--r--  mm/mprotect.c                            |   4
-rw-r--r--  mm/rmap.c                                |  14
47 files changed, 1446 insertions, 25 deletions
diff --git a/Documentation/sparc/adi.txt b/Documentation/sparc/adi.txt
new file mode 100644
index 000000000000..e1aed155fb89
--- /dev/null
+++ b/Documentation/sparc/adi.txt
@@ -0,0 +1,278 @@
Application Data Integrity (ADI)
================================

SPARC M7 processor adds the Application Data Integrity (ADI) feature.
ADI allows a task to set version tags on any subset of its address
space. Once ADI is enabled and version tags are set for ranges of
address space of a task, the processor will compare the tag in pointers
to memory in these ranges to the version set by the application
previously. Access to memory is granted only if the tag in a given
pointer matches the tag set by the application. In case of mismatch, the
processor raises an exception.

The following steps must be taken by a task to enable ADI fully:

1. Set the user mode PSTATE.mcde bit. This acts as the master switch for
   the task's entire address space to enable/disable ADI for the task.

2. Set the TTE.mcd bit on any TLB entries that correspond to the range
   of addresses ADI is being enabled on. The MMU checks the version tag
   only on the pages that have the TTE.mcd bit set.

3. Set the version tag for virtual addresses using the stxa instruction
   and one of the MCD specific ASIs. Each stxa instruction sets the
   given tag for one ADI block size of bytes. This step must be
   repeated for the entire page to set tags for the entire page.

The ADI block size for the platform is provided by the hypervisor to the
kernel in machine description tables. The hypervisor also provides the
number of top bits in the virtual address that specify the version tag.
Once a version tag has been set for a memory location, the tag is stored
in the physical memory and the same tag must be present in the ADI
version tag bits of the virtual address being presented to the MMU. For
example, on the SPARC M7 processor, the MMU uses bits 63-60 for version
tags and the ADI block size is the same as the cacheline size, which is
64 bytes. A task that sets the ADI version to, say 10, on a range of
memory must access that memory using virtual addresses that contain 0xa
in bits 63-60.

ADI is enabled on a set of pages using mprotect() with the PROT_ADI
flag. When ADI is enabled on a set of pages by a task for the first
time, the kernel sets the PSTATE.mcde bit for the task. Version tags for
memory addresses are set with an stxa instruction on the addresses using
ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. The ADI block size is
provided by the hypervisor to the kernel. The kernel returns the value
of the ADI block size to userspace using the auxiliary vector, along
with other ADI info. The following auxiliary vectors are provided by the
kernel:

	AT_ADI_BLKSZ	ADI block size. This is the granularity and
			alignment, in bytes, of ADI versioning.
	AT_ADI_NBITS	Number of ADI version bits in the VA

IMPORTANT NOTES:

- Version tag values of 0x0 and 0xf are reserved. These values match any
  tag in a virtual address and never generate a mismatch exception.

- Version tags are set on virtual addresses from userspace even though
  tags are stored in physical memory. Tags are set on a physical page
  after it has been allocated to a task and a pte has been created for
  it.

- When a task frees a memory page it had set version tags on, the page
  goes back to the free page pool. When this page is re-allocated to a
  task, the kernel clears the page using the block initialization ASI,
  which clears the version tags for the page as well. If a page
  allocated to a task is freed and allocated back to the same task, old
  version tags set by the task on that page will no longer be present.

- ADI tag mismatches are not detected for non-faulting loads.

- The kernel does not set any tags for user pages; it is entirely the
  task's responsibility to set any version tags. The kernel does ensure
  the version tags are preserved if a page is swapped out to the disk
  and swapped back in. It also preserves the version tags if a page is
  migrated.

- ADI works for any size pages. A userspace task need not be aware of
  page size when using ADI. It can simply select a virtual address
  range, enable ADI on the range using mprotect() and set version tags
  for the entire range. mprotect() ensures the range is aligned to page
  size and is a multiple of page size.

- ADI tags can only be set on writable memory. For example, ADI tags can
  not be set on read-only mappings.


ADI related traps
-----------------

With ADI enabled, the following new traps may occur:

Disrupting memory corruption

	When a store accesses a memory location that has TTE.mcd=1,
	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
	tag in the address used (bits 63:60) does not match the tag set on
	the corresponding cacheline, a memory corruption trap occurs. By
	default, it is a disrupting trap and is sent to the hypervisor
	first. The hypervisor creates a sun4v error report and sends a
	resumable error (TT=0x7e) trap to the kernel. The kernel sends
	a SIGSEGV to the task that resulted in this trap with the following
	info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ADIDERR;
		siginfo.si_addr = addr;	/* PC where first mismatch occurred */
		siginfo.si_trapno = 0;


Precise memory corruption

	When a store accesses a memory location that has TTE.mcd=1,
	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
	tag in the address used (bits 63:60) does not match the tag set on
	the corresponding cacheline, a memory corruption trap occurs. If
	the MCD precise exception is enabled (MCDPERR=1), a precise
	exception is sent to the kernel with TT=0x1a. The kernel sends
	a SIGSEGV to the task that resulted in this trap with the following
	info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ADIPERR;
		siginfo.si_addr = addr;	/* address that caused trap */
		siginfo.si_trapno = 0;

	NOTE: An ADI tag mismatch on a load always results in a precise
	trap.


MCD disabled

	When a task has not enabled ADI and attempts to set an ADI version
	on a memory address, the processor sends an MCD disabled trap. This
	trap is handled by the hypervisor first and the hypervisor vectors
	this trap through to the kernel as a Data Access Exception trap with
	the fault type set to 0xa (invalid ASI). When this occurs, the kernel
	sends the task a SIGSEGV signal with the following info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ACCADI;
		siginfo.si_addr = addr;	/* address that caused trap */
		siginfo.si_trapno = 0;

Sample program to use ADI
-------------------------

The following sample program is meant to illustrate how to use the ADI
functionality.

#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <elf.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/mman.h>
#include <asm/asi.h>

#ifndef AT_ADI_BLKSZ
#define AT_ADI_BLKSZ	48
#endif
#ifndef AT_ADI_NBITS
#define AT_ADI_NBITS	49
#endif

#ifndef PROT_ADI
#define PROT_ADI	0x10
#endif

#define BUFFER_SIZE	32*1024*1024UL

int main(int argc, char *argv[], char *envp[])
{
	unsigned long i, mcde, adi_blksz, adi_nbits;
	char *shmaddr, *tmp_addr, *end, *veraddr, *clraddr;
	int shmid, version;
	Elf64_auxv_t *auxv;

	adi_blksz = 0;

	while (*envp++ != NULL);
	for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
		switch (auxv->a_type) {
		case AT_ADI_BLKSZ:
			adi_blksz = auxv->a_un.a_val;
			break;
		case AT_ADI_NBITS:
			adi_nbits = auxv->a_un.a_val;
			break;
		}
	}
	if (adi_blksz == 0) {
		fprintf(stderr, "Oops! ADI is not supported\n");
		exit(1);
	}

	printf("ADI capabilities:\n");
	printf("\tBlock size = %ld\n", adi_blksz);
	printf("\tNumber of bits = %ld\n", adi_nbits);

	if ((shmid = shmget(2, BUFFER_SIZE,
				IPC_CREAT | SHM_R | SHM_W)) < 0) {
		perror("shmget failed");
		exit(1);
	}

	shmaddr = shmat(shmid, NULL, 0);
	if (shmaddr == (char *)-1) {
		perror("shm attach failed");
		shmctl(shmid, IPC_RMID, NULL);
		exit(1);
	}

	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
		perror("mprotect failed");
		goto err_out;
	}

	/* Set the ADI version tag on the shm segment
	 */
	version = 10;
	tmp_addr = shmaddr;
	end = shmaddr + BUFFER_SIZE;
	while (tmp_addr < end) {
		asm volatile(
			"stxa %1, [%0]0x90\n\t"
			:
			: "r" (tmp_addr), "r" (version));
		tmp_addr += adi_blksz;
	}
	asm volatile("membar #Sync\n\t");

	/* Create a versioned address from the normal address by placing
	 * version tag in the upper adi_nbits bits
	 */
	tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
	tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
	veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
			| (unsigned long)tmp_addr);

	printf("Starting the writes:\n");
	for (i = 0; i < BUFFER_SIZE; i++) {
		veraddr[i] = (char)(i);
		if (!(i % (1024 * 1024)))
			printf(".");
	}
	printf("\n");

	printf("Verifying data...");
	fflush(stdout);
	for (i = 0; i < BUFFER_SIZE; i++)
		if (veraddr[i] != (char)i)
			printf("\nIndex %lu mismatched\n", i);
	printf("Done.\n");

	/* Disable ADI and clean up
	 */
	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
		perror("mprotect failed");
		goto err_out;
	}

	if (shmdt((const void *)shmaddr) != 0)
		perror("Detach failure");
	shmctl(shmid, IPC_RMID, NULL);

	exit(0);

err_out:
	if (shmdt((const void *)shmaddr) != 0)
		perror("Detach failure");
	shmctl(shmid, IPC_RMID, NULL);
	exit(1);
}
diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 07e3f54de9e3..e3f1b5ba5d5c 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -43,7 +43,7 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
 }
 #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
 
-static inline bool arch_validate_prot(unsigned long prot)
+static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
 {
 	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
 		return false;
@@ -51,7 +51,7 @@ static inline bool arch_validate_prot(unsigned long prot)
 		return false;
 	return true;
 }
-#define arch_validate_prot(prot) arch_validate_prot(prot)
+#define arch_validate_prot arch_validate_prot
 
 #endif /* CONFIG_PPC64 */
 #endif /* _ASM_POWERPC_MMAN_H */
diff --git a/arch/powerpc/kernel/syscalls.c b/arch/powerpc/kernel/syscalls.c
index a877bf8269fe..6d90ddbd2d11 100644
--- a/arch/powerpc/kernel/syscalls.c
+++ b/arch/powerpc/kernel/syscalls.c
@@ -48,7 +48,7 @@ static inline long do_mmap2(unsigned long addr, size_t len,
 {
 	long ret = -EINVAL;
 
-	if (!arch_validate_prot(prot))
+	if (!arch_validate_prot(prot, addr))
 		goto out;
 
 	if (shift) {
diff --git a/arch/sparc/include/asm/adi.h b/arch/sparc/include/asm/adi.h
new file mode 100644
index 000000000000..acad0d04e4c6
--- /dev/null
+++ b/arch/sparc/include/asm/adi.h
@@ -0,0 +1,6 @@
#ifndef ___ASM_SPARC_ADI_H
#define ___ASM_SPARC_ADI_H
#if defined(__sparc__) && defined(__arch64__)
#include <asm/adi_64.h>
#endif
#endif
diff --git a/arch/sparc/include/asm/adi_64.h b/arch/sparc/include/asm/adi_64.h
new file mode 100644
index 000000000000..85f7a763af85
--- /dev/null
+++ b/arch/sparc/include/asm/adi_64.h
@@ -0,0 +1,47 @@
/* adi_64.h: ADI related data structures
 *
 * Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
 * Author: Khalid Aziz (khalid.aziz@oracle.com)
 *
 * This work is licensed under the terms of the GNU GPL, version 2.
 */
#ifndef __ASM_SPARC64_ADI_H
#define __ASM_SPARC64_ADI_H

#include <linux/types.h>

#ifndef __ASSEMBLY__

struct adi_caps {
	__u64 blksz;
	__u64 nbits;
	__u64 ue_on_adi;
};

struct adi_config {
	bool enabled;
	struct adi_caps caps;
};

extern struct adi_config adi_state;

extern void mdesc_adi_init(void);

static inline bool adi_capable(void)
{
	return adi_state.enabled;
}

static inline unsigned long adi_blksize(void)
{
	return adi_state.caps.blksz;
}

static inline unsigned long adi_nbits(void)
{
	return adi_state.caps.nbits;
}

#endif /* __ASSEMBLY__ */

#endif /* !(__ASM_SPARC64_ADI_H) */
diff --git a/arch/sparc/include/asm/elf_64.h b/arch/sparc/include/asm/elf_64.h
index 25340df3570c..7e078bc73ef5 100644
--- a/arch/sparc/include/asm/elf_64.h
+++ b/arch/sparc/include/asm/elf_64.h
@@ -10,6 +10,7 @@
 #include <asm/processor.h>
 #include <asm/extable_64.h>
 #include <asm/spitfire.h>
+#include <asm/adi.h>
 
 /*
  * Sparc section types
@@ -215,9 +216,13 @@ extern unsigned int vdso_enabled;
 
 #define ARCH_DLINFO							\
 do {									\
+	extern struct adi_config adi_state;				\
 	if (vdso_enabled)						\
 		NEW_AUX_ENT(AT_SYSINFO_EHDR,				\
 			    (unsigned long)current->mm->context.vdso);	\
+	NEW_AUX_ENT(AT_ADI_BLKSZ, adi_state.caps.blksz);		\
+	NEW_AUX_ENT(AT_ADI_NBITS, adi_state.caps.nbits);		\
+	NEW_AUX_ENT(AT_ADI_UEONADI, adi_state.caps.ue_on_adi);		\
 } while (0)
 
 struct linux_binprm;
diff --git a/arch/sparc/include/asm/hypervisor.h b/arch/sparc/include/asm/hypervisor.h
index ab9c6b027b75..08650d503cc2 100644
--- a/arch/sparc/include/asm/hypervisor.h
+++ b/arch/sparc/include/asm/hypervisor.h
@@ -570,6 +570,8 @@ struct hv_fault_status {
 #define HV_FAULT_TYPE_RESV1	13
 #define HV_FAULT_TYPE_UNALIGNED	14
 #define HV_FAULT_TYPE_INV_PGSZ	15
+#define HV_FAULT_TYPE_MCD	17
+#define HV_FAULT_TYPE_MCD_DIS	18
 /* Values 16 --> -2 are reserved.  */
 #define HV_FAULT_TYPE_MULTIPLE	-1
 
diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
index 7e9472143f9b..f94532f25db1 100644
--- a/arch/sparc/include/asm/mman.h
+++ b/arch/sparc/include/asm/mman.h
@@ -7,5 +7,87 @@
7#ifndef __ASSEMBLY__ 7#ifndef __ASSEMBLY__
8#define arch_mmap_check(addr,len,flags) sparc_mmap_check(addr,len) 8#define arch_mmap_check(addr,len,flags) sparc_mmap_check(addr,len)
9int sparc_mmap_check(unsigned long addr, unsigned long len); 9int sparc_mmap_check(unsigned long addr, unsigned long len);
10#endif 10
11#ifdef CONFIG_SPARC64
12#include <asm/adi_64.h>
13
14static inline void ipi_set_tstate_mcde(void *arg)
15{
16 struct mm_struct *mm = arg;
17
18 /* Set TSTATE_MCDE for the task using address map that ADI has been
19 * enabled on if the task is running. If not, it will be set
20 * automatically at the next context switch
21 */
22 if (current->mm == mm) {
23 struct pt_regs *regs;
24
25 regs = task_pt_regs(current);
26 regs->tstate |= TSTATE_MCDE;
27 }
28}
29
30#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
31static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
32{
33 if (adi_capable() && (prot & PROT_ADI)) {
34 struct pt_regs *regs;
35
36 if (!current->mm->context.adi) {
37 regs = task_pt_regs(current);
38 regs->tstate |= TSTATE_MCDE;
39 current->mm->context.adi = true;
40 on_each_cpu_mask(mm_cpumask(current->mm),
41 ipi_set_tstate_mcde, current->mm, 0);
42 }
43 return VM_SPARC_ADI;
44 } else {
45 return 0;
46 }
47}
48
49#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
50static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
51{
52 return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
53}
54
55#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
56static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
57{
58 if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
59 return 0;
60 if (prot & PROT_ADI) {
61 if (!adi_capable())
62 return 0;
63
64 if (addr) {
65 struct vm_area_struct *vma;
66
67 vma = find_vma(current->mm, addr);
68 if (vma) {
69 /* ADI can not be enabled on PFN
70 * mapped pages
71 */
72 if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
73 return 0;
74
75 /* Mergeable pages can become unmergeable
76 * once ADI is enabled on them, even if
77 * they hold identical data. This is
78 * because ADI-enabled pages with identical
79 * data may still not have identical ADI
80 * tags on them. Disallow ADI on mergeable
81 * pages.
82 */
83 if (vma->vm_flags & VM_MERGEABLE)
84 return 0;
85 }
86 }
87 }
88 return 1;
89}
90#endif /* CONFIG_SPARC64 */
91
92#endif /* __ASSEMBLY__ */
93#endif /* __SPARC_MMAN_H__ */
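The sparc_validate_prot() hunk above is a pure predicate over the protection bits plus two VMA conditions. A minimal userspace C model of that decision logic — PROT_ADI's value is taken from this series, while the VM_* bit values and the helper name are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

#define PROT_READ  0x1
#define PROT_WRITE 0x2
#define PROT_EXEC  0x4
#define PROT_SEM   0x8
#define PROT_ADI   0x10	/* value added by this series */

/* illustrative flag values; the kernel's VM_* constants differ */
#define VM_PFNMAP    0x1UL
#define VM_MIXEDMAP  0x2UL
#define VM_MERGEABLE 0x4UL

/* model of sparc_validate_prot(): returns 1 if valid, 0 if rejected */
static int model_validate_prot(unsigned long prot, bool adi_capable,
			       unsigned long vm_flags)
{
	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
		return 0;
	if (prot & PROT_ADI) {
		if (!adi_capable)
			return 0;
		/* no ADI on PFN-mapped or KSM-mergeable mappings */
		if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP | VM_MERGEABLE))
			return 0;
	}
	return 1;
}
```

Note that unknown protection bits are rejected outright, so PROT_ADI on a non-ADI kernel fails loudly instead of being silently ignored.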
diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
index ad4fb93508ba..7e2704c770e9 100644
--- a/arch/sparc/include/asm/mmu_64.h
+++ b/arch/sparc/include/asm/mmu_64.h
@@ -90,6 +90,20 @@ struct tsb_config {
90#define MM_NUM_TSBS 1
91#endif
92
93/* ADI tags are saved when a page is swapped out, and the storage for
94 * tags is allocated dynamically. There is a tag storage descriptor
95 * associated with each set of tag storage pages. Tag storage descriptors
96 * are allocated dynamically too. Since the kernel allocates a full page
97 * to hold tag storage descriptors, we can store up to
98 * PAGE_SIZE/sizeof(tag storage descriptor) descriptors on that page.
99 */
100typedef struct {
101 unsigned long start; /* Start address for this tag storage */
102 unsigned long end; /* Last address for tag storage */
103 unsigned char *tags; /* Where the tags are */
104 unsigned long tag_users; /* number of references to descriptor */
105} tag_storage_desc_t;
106
107typedef struct {
108 spinlock_t lock;
109 unsigned long sparc64_ctx_val;
@@ -98,6 +112,9 @@ typedef struct {
112 struct tsb_config tsb_block[MM_NUM_TSBS];
113 struct hv_tsb_descr tsb_descr[MM_NUM_TSBS];
114 void *vdso;
115 bool adi;
116 tag_storage_desc_t *tag_store;
117 spinlock_t tag_lock;
118} mm_context_t;
119
120#endif /* !__ASSEMBLY__ */
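As the comment in the hunk above notes, descriptors live in one full page, so the count is PAGE_SIZE/sizeof(tag_storage_desc_t). A quick check of that arithmetic, assuming sparc64's 8 KB base page and an LP64 ABI (both assumptions, not stated in the hunk itself):

```c
#include <assert.h>
#include <stddef.h>

/* mirror of tag_storage_desc_t from the hunk above */
typedef struct {
	unsigned long start;		/* start address for this tag storage */
	unsigned long end;		/* last address for tag storage */
	unsigned char *tags;		/* where the tags are */
	unsigned long tag_users;	/* number of references */
} tag_storage_desc_t;

#define SPARC64_PAGE_SIZE (8UL * 1024)	/* assumed 8 KB base page */

/* descriptors that fit in one descriptor page */
static size_t max_descriptors(void)
{
	return SPARC64_PAGE_SIZE / sizeof(tag_storage_desc_t);
}
```

On an LP64 build the descriptor is 32 bytes, so one 8 KB page holds 256 descriptors, i.e. 256 disjoint tag-storage ranges per address space.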
diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index b361702ef52a..312fcee8df2b 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -9,8 +9,10 @@
9#include <linux/spinlock.h>
10#include <linux/mm_types.h>
11#include <linux/smp.h>
12#include <linux/sched.h>
13
14#include <asm/spitfire.h>
15#include <asm/adi_64.h>
16#include <asm-generic/mm_hooks.h>
17#include <asm/percpu.h>
18
@@ -136,6 +138,55 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
138
139#define deactivate_mm(tsk,mm) do { } while (0)
140#define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
141
142#define __HAVE_ARCH_START_CONTEXT_SWITCH
143static inline void arch_start_context_switch(struct task_struct *prev)
144{
145 /* Save the current state of MCDPER register for the process
146 * we are switching from
147 */
148 if (adi_capable()) {
149 register unsigned long tmp_mcdper;
150
151 __asm__ __volatile__(
152 ".word 0x83438000\n\t" /* rd %mcdper, %g1 */
153 "mov %%g1, %0\n\t"
154 : "=r" (tmp_mcdper)
155 :
156 : "g1");
157 if (tmp_mcdper)
158 set_tsk_thread_flag(prev, TIF_MCDPER);
159 else
160 clear_tsk_thread_flag(prev, TIF_MCDPER);
161 }
162}
163
164#define finish_arch_post_lock_switch finish_arch_post_lock_switch
165static inline void finish_arch_post_lock_switch(void)
166{
167 /* Restore the state of MCDPER register for the new process
168 * just switched to.
169 */
170 if (adi_capable()) {
171 register unsigned long tmp_mcdper;
172
173 tmp_mcdper = test_thread_flag(TIF_MCDPER);
174 __asm__ __volatile__(
175 "mov %0, %%g1\n\t"
176 ".word 0x9d800001\n\t" /* wr %g0, %g1, %mcdper */
177 ".word 0xaf902001\n\t" /* wrpr %g0, 1, %pmcdper */
178 :
179 : "ir" (tmp_mcdper)
180 : "g1");
181 if (current && current->mm && current->mm->context.adi) {
182 struct pt_regs *regs;
183
184 regs = task_pt_regs(current);
185 regs->tstate |= TSTATE_MCDE;
186 }
187 }
188}
189
190#endif /* !(__ASSEMBLY__) */
191
192#endif /* !(__SPARC64_MMU_CONTEXT_H) */
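The two context-switch hooks above implement a save/restore protocol: arch_start_context_switch() snapshots the outgoing task's %mcdper register into its TIF_MCDPER flag, and finish_arch_post_lock_switch() replays the incoming task's flag back into the register. A toy C model of that protocol (all names and types are ours; the real register access is the inline asm in the hunk):

```c
#include <assert.h>
#include <stdbool.h>

static bool cpu_mcdper;			/* stands in for the %mcdper register */

struct task { bool tif_mcdper; };	/* stands in for TIF_MCDPER */

/* arch_start_context_switch(): snapshot %mcdper into prev's flag */
static void model_start_switch(struct task *prev)
{
	prev->tif_mcdper = cpu_mcdper;
}

/* finish_arch_post_lock_switch(): replay next's flag into %mcdper */
static void model_finish_switch(const struct task *next)
{
	cpu_mcdper = next->tif_mcdper;
}
```

The flag is per-task while the register is per-CPU, which is exactly why both halves of the switch are needed.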
diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
index c28379b1b0fc..e80f2d5bf62f 100644
--- a/arch/sparc/include/asm/page_64.h
+++ b/arch/sparc/include/asm/page_64.h
@@ -48,6 +48,12 @@ struct page;
48void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
49#define copy_page(X,Y) memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
50void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
51#define __HAVE_ARCH_COPY_USER_HIGHPAGE
52struct vm_area_struct;
53void copy_user_highpage(struct page *to, struct page *from,
54 unsigned long vaddr, struct vm_area_struct *vma);
55#define __HAVE_ARCH_COPY_HIGHPAGE
56void copy_highpage(struct page *to, struct page *from);
57
58/* Unlike sparc32, sparc64's parameter passing API is more
59 * sane in that structures which as small enough are passed
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 339920fdf9ed..44d6ac47e035 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -19,6 +19,7 @@
19#include <asm/types.h>
20#include <asm/spitfire.h>
21#include <asm/asi.h>
22#include <asm/adi.h>
23#include <asm/page.h>
24#include <asm/processor.h>
25
@@ -164,6 +165,8 @@ bool kern_addr_valid(unsigned long addr);
165#define _PAGE_E_4V _AC(0x0000000000000800,UL) /* side-Effect */
166#define _PAGE_CP_4V _AC(0x0000000000000400,UL) /* Cacheable in P-Cache */
167#define _PAGE_CV_4V _AC(0x0000000000000200,UL) /* Cacheable in V-Cache */
168/* On M7, bit 9 is used to enable MCD corruption detection instead */
169#define _PAGE_MCD_4V _AC(0x0000000000000200,UL) /* Memory Corruption */
170#define _PAGE_P_4V _AC(0x0000000000000100,UL) /* Privileged Page */
171#define _PAGE_EXEC_4V _AC(0x0000000000000080,UL) /* Executable Page */
172#define _PAGE_W_4V _AC(0x0000000000000040,UL) /* Writable */
@@ -604,6 +607,18 @@ static inline pte_t pte_mkspecial(pte_t pte)
607 return pte;
608}
609
610static inline pte_t pte_mkmcd(pte_t pte)
611{
612 pte_val(pte) |= _PAGE_MCD_4V;
613 return pte;
614}
615
616static inline pte_t pte_mknotmcd(pte_t pte)
617{
618 pte_val(pte) &= ~_PAGE_MCD_4V;
619 return pte;
620}
621
622static inline unsigned long pte_young(pte_t pte)
623{
624 unsigned long mask;
@@ -1046,6 +1061,39 @@ int page_in_phys_avail(unsigned long paddr);
1061int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
1062 unsigned long, pgprot_t);
1063
1064void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
1065 unsigned long addr, pte_t pte);
1066
1067int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
1068 unsigned long addr, pte_t oldpte);
1069
1070#define __HAVE_ARCH_DO_SWAP_PAGE
1071static inline void arch_do_swap_page(struct mm_struct *mm,
1072 struct vm_area_struct *vma,
1073 unsigned long addr,
1074 pte_t pte, pte_t oldpte)
1075{
1076 /* If this is a new page being mapped in, there can be no
1077 * ADI tags stored away for this page. Skip looking for
1078 * stored tags
1079 */
1080 if (pte_none(oldpte))
1081 return;
1082
1083 if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V))
1084 adi_restore_tags(mm, vma, addr, pte);
1085}
1086
1087#define __HAVE_ARCH_UNMAP_ONE
1088static inline int arch_unmap_one(struct mm_struct *mm,
1089 struct vm_area_struct *vma,
1090 unsigned long addr, pte_t oldpte)
1091{
1092 if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
1093 return adi_save_tags(mm, vma, addr, oldpte);
1094 return 0;
1095}
1096
1097static inline int io_remap_pfn_range(struct vm_area_struct *vma,
1098 unsigned long from, unsigned long pfn,
1099 unsigned long size, pgprot_t prot)
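The arch_unmap_one()/arch_do_swap_page() pair above forms a symmetric save/restore path keyed on the pte's _PAGE_MCD_4V bit: tags are stashed when an MCD page is unmapped for swap-out and replayed on swap-in. A toy model of that flow (the swap-side store and helper names are ours; the real code keys tags by address via tag_storage_desc_t):

```c
#include <assert.h>

#define PAGE_MCD 0x200UL	/* stands in for _PAGE_MCD_4V */
#define NPAGES   4

static unsigned char saved_tags[NPAGES];	/* toy swap-side tag store */

/* arch_unmap_one() model: stash the live tag if the old pte had MCD set */
static int model_unmap_one(unsigned page, unsigned long oldpte,
			   unsigned char live_tag)
{
	if (oldpte & PAGE_MCD)
		saved_tags[page] = live_tag;
	return 0;
}

/* arch_do_swap_page() model: replay the stashed tag on swap-in */
static unsigned char model_swap_in(unsigned page, unsigned long pte)
{
	return (pte & PAGE_MCD) ? saved_tags[page] : 0;
}
```

Pages whose ptes never had the MCD bit set pass through both hooks untouched, which keeps the common non-ADI path free of overhead.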
diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
index f7e7b0baec9f..7fb676360928 100644
--- a/arch/sparc/include/asm/thread_info_64.h
+++ b/arch/sparc/include/asm/thread_info_64.h
@@ -188,7 +188,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
188 * in using in assembly, else we can't use the mask as
189 * an immediate value in instructions such as andcc.
190 */
191#define TIF_MCDPER 12 /* Precise MCD exception */
192#define TIF_MEMDIE 13 /* is terminating due to OOM killer */
193#define TIF_POLLING_NRFLAG 14
194
diff --git a/arch/sparc/include/asm/trap_block.h b/arch/sparc/include/asm/trap_block.h
index 6a4c8652ad67..0f6d0c4f6683 100644
--- a/arch/sparc/include/asm/trap_block.h
+++ b/arch/sparc/include/asm/trap_block.h
@@ -76,6 +76,8 @@ extern struct sun4v_1insn_patch_entry __sun4v_1insn_patch,
76 __sun4v_1insn_patch_end;
77extern struct sun4v_1insn_patch_entry __fast_win_ctrl_1insn_patch,
78 __fast_win_ctrl_1insn_patch_end;
79extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch,
80 __sun_m7_1insn_patch_end;
79 81
82struct sun4v_2insn_patch_entry {
83 unsigned int addr;
diff --git a/arch/sparc/include/asm/ttable.h b/arch/sparc/include/asm/ttable.h
index ede2b66cf4a0..8f6469408019 100644
--- a/arch/sparc/include/asm/ttable.h
+++ b/arch/sparc/include/asm/ttable.h
@@ -219,6 +219,16 @@
219 nop; \
220 nop;
221
222#define SUN4V_MCD_PRECISE \
223 ldxa [%g0] ASI_SCRATCHPAD, %g2; \
224 ldx [%g2 + HV_FAULT_D_ADDR_OFFSET], %g4; \
225 ldx [%g2 + HV_FAULT_D_CTX_OFFSET], %g5; \
226 ba,pt %xcc, etrap; \
227 rd %pc, %g7; \
228 ba,pt %xcc, sun4v_mcd_detect_precise; \
229 nop; \
230 nop;
231
232/* Before touching these macros, you owe it to yourself to go and
233 * see how arch/sparc64/kernel/winfixup.S works... -DaveM
234 *
diff --git a/arch/sparc/include/uapi/asm/asi.h b/arch/sparc/include/uapi/asm/asi.h
index d371b269571a..fbb30a5b082f 100644
--- a/arch/sparc/include/uapi/asm/asi.h
+++ b/arch/sparc/include/uapi/asm/asi.h
@@ -145,6 +145,8 @@
145 * ASIs, "(4V)" designates SUN4V specific ASIs. "(NG4)" designates SPARC-T4
146 * and later ASIs.
147 */
148#define ASI_MCD_PRIV_PRIMARY 0x02 /* (NG7) Privileged MCD version VA */
149#define ASI_MCD_REAL 0x05 /* (NG7) Privileged MCD version PA */
150#define ASI_PHYS_USE_EC 0x14 /* PADDR, E-cachable */
151#define ASI_PHYS_BYPASS_EC_E 0x15 /* PADDR, E-bit */
152#define ASI_BLK_AIUP_4V 0x16 /* (4V) Prim, user, block ld/st */
@@ -245,6 +247,9 @@
247#define ASI_UDBL_CONTROL_R 0x7f /* External UDB control regs rd low*/
248#define ASI_INTR_R 0x7f /* IRQ vector dispatch read */
249#define ASI_INTR_DATAN_R 0x7f /* (III) In irq vector data reg N */
250#define ASI_MCD_PRIMARY 0x90 /* (NG7) MCD version load/store */
251#define ASI_MCD_ST_BLKINIT_PRIMARY \
252 0x92 /* (NG7) MCD store BLKINIT primary */
253#define ASI_PIC 0xb0 /* (NG4) PIC registers */
254#define ASI_PST8_P 0xc0 /* Primary, 8 8-bit, partial */
255#define ASI_PST8_S 0xc1 /* Secondary, 8 8-bit, partial */
diff --git a/arch/sparc/include/uapi/asm/auxvec.h b/arch/sparc/include/uapi/asm/auxvec.h
index 5f80a70cc901..f9937ccfcd99 100644
--- a/arch/sparc/include/uapi/asm/auxvec.h
+++ b/arch/sparc/include/uapi/asm/auxvec.h
@@ -3,6 +3,17 @@
3
4#define AT_SYSINFO_EHDR 33
5
6#ifdef CONFIG_SPARC64
7/* Avoid overlap with other AT_* values since they are consolidated in
8 * glibc and any overlaps can cause problems
9 */
10#define AT_ADI_BLKSZ 48
11#define AT_ADI_NBITS 49
12#define AT_ADI_UEONADI 50
13
14#define AT_VECTOR_SIZE_ARCH 4
15#else
16#define AT_VECTOR_SIZE_ARCH 1
17#endif
18
19#endif /* !(__ASMSPARC_AUXVEC_H) */
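The three new auxv tags above are how userspace discovers the platform's ADI geometry without a syscall. A hedged sketch of reading them with glibc's getauxval() — the tag values are from this patch, but they are not in libc headers, so a program must define them itself:

```c
#include <assert.h>
#include <sys/auxv.h>

/* auxv tags from this series; not in libc headers, so define them here */
#define AT_ADI_BLKSZ   48
#define AT_ADI_NBITS   49
#define AT_ADI_UEONADI 50

/* returns 0 when the kernel supplied no entry, e.g. on non-ADI hardware */
static unsigned long adi_blksz(void)
{
	return getauxval(AT_ADI_BLKSZ);
}
```

A program would check adi_blksz() for a nonzero value before attempting mprotect(..., PROT_ADI), since getauxval() simply returns 0 for tags the kernel did not pass.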
diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h
index 715a2c927e79..f6f99ec65bb3 100644
--- a/arch/sparc/include/uapi/asm/mman.h
+++ b/arch/sparc/include/uapi/asm/mman.h
@@ -6,6 +6,8 @@
6
7/* SunOS'ified... */
8
9#define PROT_ADI 0x10 /* ADI enabled */
10
11#define MAP_RENAME MAP_ANONYMOUS /* In SunOS terminology */
12#define MAP_NORESERVE 0x40 /* don't reserve swap pages */
13#define MAP_INHERIT 0x80 /* SunOS doesn't do this, but... */
diff --git a/arch/sparc/include/uapi/asm/pstate.h b/arch/sparc/include/uapi/asm/pstate.h
index b6999c9e7e86..ceca96e685c2 100644
--- a/arch/sparc/include/uapi/asm/pstate.h
+++ b/arch/sparc/include/uapi/asm/pstate.h
@@ -11,7 +11,12 @@
11 * -----------------------------------------------------------------------
12 * 63 12 11 10 9 8 7 6 5 4 3 2 1 0
13 */
14/* IG on V9 conflicts with MCDE on M7. PSTATE_MCDE will only be used on
15 * processors that support ADI which do not use IG, hence there is no
16 * functional conflict
17 */
18#define PSTATE_IG _AC(0x0000000000000800,UL) /* Interrupt Globals. */
19#define PSTATE_MCDE _AC(0x0000000000000800,UL) /* MCD Enable */
20#define PSTATE_MG _AC(0x0000000000000400,UL) /* MMU Globals. */
21#define PSTATE_CLE _AC(0x0000000000000200,UL) /* Current Little Endian.*/
22#define PSTATE_TLE _AC(0x0000000000000100,UL) /* Trap Little Endian. */
@@ -48,7 +53,12 @@
53#define TSTATE_ASI _AC(0x00000000ff000000,UL) /* AddrSpace ID. */
54#define TSTATE_PIL _AC(0x0000000000f00000,UL) /* %pil (Linux traps)*/
55#define TSTATE_PSTATE _AC(0x00000000000fff00,UL) /* PSTATE. */
56/* IG on V9 conflicts with MCDE on M7. TSTATE_MCDE will only be used on
57 * processors that support ADI, which do not use IG, hence there is
58 * no functional conflict
59 */
60#define TSTATE_IG _AC(0x0000000000080000,UL) /* Interrupt Globals.*/
61#define TSTATE_MCDE _AC(0x0000000000080000,UL) /* MCD enable. */
62#define TSTATE_MG _AC(0x0000000000040000,UL) /* MMU Globals. */
63#define TSTATE_CLE _AC(0x0000000000020000,UL) /* CurrLittleEndian. */
64#define TSTATE_TLE _AC(0x0000000000010000,UL) /* TrapLittleEndian. */
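The comments above explain why PSTATE_MCDE can safely alias PSTATE_IG's bit. Since TSTATE holds a copy of PSTATE at bit 8 (TSTATE_PSTATE is 0xfff00), each TSTATE alias must equal the PSTATE value shifted left by 8; the constants in this hunk can be sanity-checked mechanically:

```c
#include <assert.h>

/* values as defined by this patch */
#define PSTATE_IG   0x0000000000000800UL
#define PSTATE_MCDE 0x0000000000000800UL	/* same bit as PSTATE_IG */
#define TSTATE_IG   0x0000000000080000UL
#define TSTATE_MCDE 0x0000000000080000UL	/* same bit as TSTATE_IG */

/* TSTATE stores PSTATE at bit 8, so the TSTATE alias must be the
 * PSTATE value shifted left by 8 */
static int mcde_bits_consistent(void)
{
	return PSTATE_MCDE == PSTATE_IG &&
	       TSTATE_MCDE == TSTATE_IG &&
	       TSTATE_MCDE == (PSTATE_MCDE << 8);
}
```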
diff --git a/arch/sparc/kernel/Makefile b/arch/sparc/kernel/Makefile
index cc97545737f0..76cb57750dda 100644
--- a/arch/sparc/kernel/Makefile
+++ b/arch/sparc/kernel/Makefile
@@ -69,6 +69,7 @@ obj-$(CONFIG_SPARC64) += visemul.o
69obj-$(CONFIG_SPARC64) += hvapi.o
70obj-$(CONFIG_SPARC64) += sstate.o
71obj-$(CONFIG_SPARC64) += mdesc.o
72obj-$(CONFIG_SPARC64) += adi_64.o
73obj-$(CONFIG_SPARC64) += pcr.o
74obj-$(CONFIG_SPARC64) += nmi.o
75obj-$(CONFIG_SPARC64_SMP) += cpumap.o
diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
new file mode 100644
index 000000000000..d0a2ac975b42
--- /dev/null
+++ b/arch/sparc/kernel/adi_64.c
@@ -0,0 +1,397 @@
1/* adi_64.c: support for ADI (Application Data Integrity) feature on
2 * sparc m7 and newer processors. This feature is also known as
3 * SSM (Silicon Secured Memory).
4 *
5 * Copyright (C) 2016 Oracle and/or its affiliates. All rights reserved.
6 * Author: Khalid Aziz (khalid.aziz@oracle.com)
7 *
8 * This work is licensed under the terms of the GNU GPL, version 2.
9 */
10#include <linux/init.h>
11#include <linux/slab.h>
12#include <linux/mm_types.h>
13#include <asm/mdesc.h>
14#include <asm/adi_64.h>
15#include <asm/mmu_64.h>
16#include <asm/pgtable_64.h>
17
18/* Each page of storage for ADI tags can accommodate tags for 128
19 * pages. When ADI-enabled pages are being swapped out, it is
20 * prudent to allocate at least enough tag storage space to accommodate
21 * SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to
22 * store tags for four SWAPFILE_CLUSTERs' worth of pages to reduce the
23 * need for further allocations for the same vma.
24 */
25#define TAG_STORAGE_PAGES 8
26
27struct adi_config adi_state;
28EXPORT_SYMBOL(adi_state);
29
30/* mdesc_adi_init() : Parse machine description provided by the
31 * hypervisor to detect ADI capabilities
32 *
33 * The hypervisor reports the platform's ADI capabilities in the
34 * "hwcap-list" property of the "cpu" node. If the platform supports
35 * ADI, "hwcap-list" contains the keyword "adp", and the "platform"
36 * node carries "adp-blksz", "adp-nbits" and "ue-on-adp" properties
37 * describing the ADI capabilities.
38 */
39void __init mdesc_adi_init(void)
40{
41 struct mdesc_handle *hp = mdesc_grab();
42 const char *prop;
43 u64 pn, *val;
44 int len;
45
46 if (!hp)
47 goto adi_not_found;
48
49 pn = mdesc_node_by_name(hp, MDESC_NODE_NULL, "cpu");
50 if (pn == MDESC_NODE_NULL)
51 goto adi_not_found;
52
53 prop = mdesc_get_property(hp, pn, "hwcap-list", &len);
54 if (!prop)
55 goto adi_not_found;
56
57 /*
58 * Look for "adp" keyword in hwcap-list which would indicate
59 * ADI support
60 */
61 adi_state.enabled = false;
62 while (len) {
63 int plen;
64
65 if (!strcmp(prop, "adp")) {
66 adi_state.enabled = true;
67 break;
68 }
69
70 plen = strlen(prop) + 1;
71 prop += plen;
72 len -= plen;
73 }
74
75 if (!adi_state.enabled)
76 goto adi_not_found;
77
78 /* Find the ADI properties in the "platform" node. If any ADI
79 * property is missing, ADI support is incomplete, so do not
80 * enable ADI in the kernel.
81 */
82 pn = mdesc_node_by_name(hp, MDESC_NODE_NULL, "platform");
83 if (pn == MDESC_NODE_NULL)
84 goto adi_not_found;
85
86 val = (u64 *) mdesc_get_property(hp, pn, "adp-blksz", &len);
87 if (!val)
88 goto adi_not_found;
89 adi_state.caps.blksz = *val;
90
91 val = (u64 *) mdesc_get_property(hp, pn, "adp-nbits", &len);
92 if (!val)
93 goto adi_not_found;
94 adi_state.caps.nbits = *val;
95
96 val = (u64 *) mdesc_get_property(hp, pn, "ue-on-adp", &len);
97 if (!val)
98 goto adi_not_found;
99 adi_state.caps.ue_on_adi = *val;
100
101 /* Some of the code to support swapping ADI tags is written
102 * with the assumption that two ADI tags fit in one byte. If
103 * this assumption is broken by a future architecture change,
104 * that code will have to be revisited. Until then, disable
105 * ADI support on such platforms so we do not get unpredictable
106 * results with programs trying to use ADI while their pages
107 * get swapped out.
108 */
109 if (adi_state.caps.nbits > 4) {
110 pr_warn("WARNING: ADI tag size >4 on this platform. Disabling ADI support\n");
111 adi_state.enabled = false;
112 }
113
114 mdesc_release(hp);
115 return;
116
117adi_not_found:
118 adi_state.enabled = false;
119 adi_state.caps.blksz = 0;
120 adi_state.caps.nbits = 0;
121 if (hp)
122 mdesc_release(hp);
123}
124
125tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
126 struct vm_area_struct *vma,
127 unsigned long addr)
128{
129 tag_storage_desc_t *tag_desc = NULL;
130 unsigned long i, max_desc, flags;
131
132 /* Check if this vma already has a tag storage descriptor
133 * allocated for it.
134 */
135 max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
136 if (mm->context.tag_store) {
137 tag_desc = mm->context.tag_store;
138 spin_lock_irqsave(&mm->context.tag_lock, flags);
139 for (i = 0; i < max_desc; i++) {
140 if ((addr >= tag_desc->start) &&
141 ((addr + PAGE_SIZE - 1) <= tag_desc->end))
142 break;
143 tag_desc++;
144 }
145 spin_unlock_irqrestore(&mm->context.tag_lock, flags);
146
147 /* If no matching entries were found, this must be a
148 * freshly allocated page
149 */
150 if (i >= max_desc)
151 tag_desc = NULL;
152 }
153
154 return tag_desc;
155}
156
157tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
158 struct vm_area_struct *vma,
159 unsigned long addr)
160{
161 unsigned char *tags;
162 unsigned long i, size, max_desc, flags;
163 tag_storage_desc_t *tag_desc, *open_desc;
164 unsigned long end_addr, hole_start, hole_end;
165
166 max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
167 open_desc = NULL;
168 hole_start = 0;
169 hole_end = ULONG_MAX;
170 end_addr = addr + PAGE_SIZE - 1;
171
172 /* Check if this vma already has a tag storage descriptor
173 * allocated for it.
174 */
175 spin_lock_irqsave(&mm->context.tag_lock, flags);
176 if (mm->context.tag_store) {
177 tag_desc = mm->context.tag_store;
178
179 /* Look for a matching entry for this address. While doing
180 * that, look for the first open slot as well and find
181 * the hole in already allocated range where this request
182 * will fit in.
183 */
184 for (i = 0; i < max_desc; i++) {
185 if (tag_desc->tag_users == 0) {
186 if (open_desc == NULL)
187 open_desc = tag_desc;
188 } else {
189 if ((addr >= tag_desc->start) &&
190 (tag_desc->end >= (addr + PAGE_SIZE - 1))) {
191 tag_desc->tag_users++;
192 goto out;
193 }
194 }
195 if ((tag_desc->start > end_addr) &&
196 (tag_desc->start < hole_end))
197 hole_end = tag_desc->start;
198 if ((tag_desc->end < addr) &&
199 (tag_desc->end > hole_start))
200 hole_start = tag_desc->end;
201 tag_desc++;
202 }
203
204 } else {
205 size = sizeof(tag_storage_desc_t)*max_desc;
206 mm->context.tag_store = kzalloc(size, GFP_NOWAIT|__GFP_NOWARN);
207 if (mm->context.tag_store == NULL) {
208 tag_desc = NULL;
209 goto out;
210 }
211 tag_desc = mm->context.tag_store;
212 for (i = 0; i < max_desc; i++, tag_desc++)
213 tag_desc->tag_users = 0;
214 open_desc = mm->context.tag_store;
215 i = 0;
216 }
217
218 /* Check if we ran out of tag storage descriptors */
219 if (open_desc == NULL) {
220 tag_desc = NULL;
221 goto out;
222 }
223
224 /* Mark this tag descriptor slot in use and then initialize it */
225 tag_desc = open_desc;
226 tag_desc->tag_users = 1;
227
228 /* Tag storage has not been allocated for this vma and space
229 * is available in tag storage descriptor. Since this page is
230 * being swapped out, there is a high probability that subsequent pages
231 * in the VMA will be swapped out as well. Allocate pages to
232 * store tags for as many pages in this vma as possible but not
233 * more than TAG_STORAGE_PAGES. Each byte in tag space holds
234 * two ADI tags since each ADI tag is 4 bits. Each ADI tag
235 * covers adi_blksize() worth of addresses. Check if the hole is
236 * big enough to accommodate full address range for using
237 * TAG_STORAGE_PAGES number of tag pages.
238 */
239 size = TAG_STORAGE_PAGES * PAGE_SIZE;
240 end_addr = addr + (size*2*adi_blksize()) - 1;
241 /* Check for overflow. If overflow occurs, allocate only one page */
242 if (end_addr < addr) {
243 size = PAGE_SIZE;
244 end_addr = addr + (size*2*adi_blksize()) - 1;
245 /* If overflow happens with the minimum tag storage
246 * allocation as well, adjust ending address for this
247 * tag storage.
248 */
249 if (end_addr < addr)
250 end_addr = ULONG_MAX;
251 }
252 if (hole_end < end_addr) {
253 /* Available hole is too small on the upper end of the
254 * address range. Can we expand the range towards lower
255 * addresses and maximize use of this slot?
256 */
257 unsigned long tmp_addr;
258
259 end_addr = hole_end - 1;
260 tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
261 /* Check for underflow. If underflow occurs, allocate
262 * only one page for storing ADI tags
263 */
264 if (tmp_addr > addr) {
265 size = PAGE_SIZE;
266 tmp_addr = end_addr - (size*2*adi_blksize()) - 1;
267 /* If underflow happens with the minimum tag storage
268 * allocation as well, adjust starting address for
269 * this tag storage.
270 */
271 if (tmp_addr > addr)
272 tmp_addr = 0;
273 }
274 if (tmp_addr < hole_start) {
275 /* Available hole is restricted on lower address
276 * end as well
277 */
278 tmp_addr = hole_start + 1;
279 }
280 addr = tmp_addr;
281 size = (end_addr + 1 - addr)/(2*adi_blksize());
282 size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
283 size = size * PAGE_SIZE;
284 }
285 tags = kzalloc(size, GFP_NOWAIT|__GFP_NOWARN);
286 if (tags == NULL) {
287 tag_desc->tag_users = 0;
288 tag_desc = NULL;
289 goto out;
290 }
291 tag_desc->start = addr;
292 tag_desc->tags = tags;
293 tag_desc->end = end_addr;
294
295out:
296 spin_unlock_irqrestore(&mm->context.tag_lock, flags);
297 return tag_desc;
298}
299
300void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
301{
302 unsigned long flags;
303 unsigned char *tags = NULL;
304
305 spin_lock_irqsave(&mm->context.tag_lock, flags);
306 tag_desc->tag_users--;
307 if (tag_desc->tag_users == 0) {
308 tag_desc->start = tag_desc->end = 0;
309 /* Do not free up the tag storage space allocated
310 * by the first descriptor. This is persistent
311 * emergency tag storage space for the task.
312 */
313 if (tag_desc != mm->context.tag_store) {
314 tags = tag_desc->tags;
315 tag_desc->tags = NULL;
316 }
317 }
318 spin_unlock_irqrestore(&mm->context.tag_lock, flags);
319 kfree(tags);
320}
321
322#define tag_start(addr, tag_desc) \
323 ((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize())))
324
325/* Retrieve any saved ADI tags for the page being swapped back in and
326 * restore these tags to the newly allocated physical page.
327 */
328void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
329 unsigned long addr, pte_t pte)
330{
331 unsigned char *tag;
332 tag_storage_desc_t *tag_desc;
333 unsigned long paddr, tmp, version1, version2;
334
335 /* Check if the swapped out page has an ADI version
336 * saved. If yes, restore version tag to the newly
337 * allocated page.
338 */
339 tag_desc = find_tag_store(mm, vma, addr);
340 if (tag_desc == NULL)
341 return;
342
343 tag = tag_start(addr, tag_desc);
344 paddr = pte_val(pte) & _PAGE_PADDR_4V;
345 for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
346 version1 = (*tag) >> 4;
347 version2 = (*tag) & 0x0f;
348 *tag++ = 0;
349 asm volatile("stxa %0, [%1] %2\n\t"
350 :
351 : "r" (version1), "r" (tmp),
352 "i" (ASI_MCD_REAL));
353 tmp += adi_blksize();
354 asm volatile("stxa %0, [%1] %2\n\t"
355 :
356 : "r" (version2), "r" (tmp),
357 "i" (ASI_MCD_REAL));
358 }
359 asm volatile("membar #Sync\n\t");
360
361 /* Check and mark this tag space for release later if
362 * the swapped in page was the last user of tag space
363 */
364 del_tag_store(tag_desc, mm);
365}
366
367/* A page is about to be swapped out. Save any ADI tags associated with
368 * this physical page so they can be restored later when the page is swapped
369 * back in.
370 */
371int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
372 unsigned long addr, pte_t oldpte)
373{
374 unsigned char *tag;
375 tag_storage_desc_t *tag_desc;
376 unsigned long version1, version2, paddr, tmp;
377
378 tag_desc = alloc_tag_store(mm, vma, addr);
379 if (tag_desc == NULL)
380 return -1;
381
382 tag = tag_start(addr, tag_desc);
383 paddr = pte_val(oldpte) & _PAGE_PADDR_4V;
384 for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
385 asm volatile("ldxa [%1] %2, %0\n\t"
386 : "=r" (version1)
387 : "r" (tmp), "i" (ASI_MCD_REAL));
388 tmp += adi_blksize();
389 asm volatile("ldxa [%1] %2, %0\n\t"
390 : "=r" (version2)
391 : "r" (tmp), "i" (ASI_MCD_REAL));
392 *tag = (version1 << 4) | version2;
393 tag++;
394 }
395
396 return 0;
397}
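adi_save_tags() and adi_restore_tags() above pack two 4-bit version tags per byte of tag storage, and tag_start() maps an address to its tag byte by dividing by 2*adi_blksize(). The sizing arithmetic can be checked with a small model, assuming a 64-byte ADI block and an 8 KB page (values not stated in this file):

```c
#include <assert.h>

#define ADI_BLKSIZE 64UL		/* assumed adi_blksize() value */
#define PAGE_SZ     (8UL * 1024)	/* assumed sparc64 base page size */

/* two 4-bit version tags are packed per byte, as in adi_save_tags() */
static unsigned char pack_tags(unsigned v1, unsigned v2)
{
	return (unsigned char)((v1 << 4) | (v2 & 0x0f));
}

static unsigned unpack_hi(unsigned char t) { return t >> 4; }
static unsigned unpack_lo(unsigned char t) { return t & 0x0f; }

/* bytes of tag storage needed to cover one data page */
static unsigned long tag_bytes_per_page(void)
{
	return PAGE_SZ / ADI_BLKSIZE / 2;
}

/* data pages whose tags fit in one page of tag storage */
static unsigned long pages_per_tag_page(void)
{
	return PAGE_SZ / tag_bytes_per_page();
}
```

With these assumed sizes one page of tag storage covers 128 data pages, which is exactly the figure quoted in the comment at the top of adi_64.c.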
diff --git a/arch/sparc/kernel/entry.h b/arch/sparc/kernel/entry.h
index 7378567b601f..c746c0fd5d6b 100644
--- a/arch/sparc/kernel/entry.h
+++ b/arch/sparc/kernel/entry.h
@@ -160,6 +160,9 @@ void sun4v_resum_overflow(struct pt_regs *regs);
160void sun4v_nonresum_error(struct pt_regs *regs,
161 unsigned long offset);
162void sun4v_nonresum_overflow(struct pt_regs *regs);
163void sun4v_mem_corrupt_detect_precise(struct pt_regs *regs,
164 unsigned long addr,
165 unsigned long context);
166
167extern unsigned long sun4v_err_itlb_vaddr;
168extern unsigned long sun4v_err_itlb_ctx;
diff --git a/arch/sparc/kernel/etrap_64.S b/arch/sparc/kernel/etrap_64.S
index 5c77a2e0e991..08cc41f64725 100644
--- a/arch/sparc/kernel/etrap_64.S
+++ b/arch/sparc/kernel/etrap_64.S
@@ -151,7 +151,32 @@ etrap_save: save %g2, -STACK_BIAS, %sp
151 stx %g6, [%sp + PTREGS_OFF + PT_V9_G6]
152 stx %g7, [%sp + PTREGS_OFF + PT_V9_G7]
153 or %l7, %l0, %l7
154661: sethi %hi(TSTATE_TSO | TSTATE_PEF), %l0
155 /* If userspace is using ADI, it could potentially pass
156 * a pointer with version tag embedded in it. To maintain
157 * the ADI security, we must enable PSTATE.mcde. Userspace
158 * would have already set TTE.mcd in an earlier call to the
159 * kernel and set the version tag for the address being
160 * dereferenced. Setting PSTATE.mcde would ensure any
161 * access to userspace data through a system call honors
162 * ADI and does not allow a rogue app to bypass ADI by
163 * using system calls. Setting PSTATE.mcde only affects
164 * accesses to virtual addresses that have TTE.mcd set.
165 * Set PMCDPER to ensure any exceptions caused by ADI
166 * version tag mismatch are exposed before system call
167 * returns to userspace. Setting PMCDPER affects only
168 * writes to virtual addresses that have TTE.mcd set and
169 * have a version tag set as well.
170 */
171 .section .sun_m7_1insn_patch, "ax"
172 .word 661b
173 sethi %hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
174 .previous
175661: nop
176 .section .sun_m7_1insn_patch, "ax"
177 .word 661b
178 .word 0xaf902001 /* wrpr %g0, 1, %pmcdper */
179 .previous
155 or %l7, %l0, %l7 180 or %l7, %l0, %l7
156 wrpr %l2, %tnpc 181 wrpr %l2, %tnpc
157 wrpr %l7, (TSTATE_PRIV | TSTATE_IE), %tstate 182 wrpr %l7, (TSTATE_PRIV | TSTATE_IE), %tstate
diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
index a41e6e16eb36..540bfc98472c 100644
--- a/arch/sparc/kernel/head_64.S
+++ b/arch/sparc/kernel/head_64.S
@@ -897,6 +897,7 @@ sparc64_boot_end:
897#include "syscalls.S" 897#include "syscalls.S"
898#include "helpers.S" 898#include "helpers.S"
899#include "sun4v_tlb_miss.S" 899#include "sun4v_tlb_miss.S"
900#include "sun4v_mcd.S"
900#include "sun4v_ivec.S" 901#include "sun4v_ivec.S"
901#include "ktlb.S" 902#include "ktlb.S"
902#include "tsb.S" 903#include "tsb.S"
diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c
index 418592a09b41..39a2503fa3e1 100644
--- a/arch/sparc/kernel/mdesc.c
+++ b/arch/sparc/kernel/mdesc.c
@@ -22,6 +22,7 @@
22#include <linux/uaccess.h> 22#include <linux/uaccess.h>
23#include <asm/oplib.h> 23#include <asm/oplib.h>
24#include <asm/smp.h> 24#include <asm/smp.h>
25#include <asm/adi.h>
25 26
26/* Unlike the OBP device tree, the machine description is a full-on 27/* Unlike the OBP device tree, the machine description is a full-on
27 * DAG. An arbitrary number of ARCs are possible from one 28 * DAG. An arbitrary number of ARCs are possible from one
@@ -1345,5 +1346,6 @@ void __init sun4v_mdesc_init(void)
1345 1346
1346 cur_mdesc = hp; 1347 cur_mdesc = hp;
1347 1348
1349 mdesc_adi_init();
1348 report_platform_properties(); 1350 report_platform_properties();
1349} 1351}
diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
index 318efd784a0b..454a8af28f13 100644
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -670,6 +670,31 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
670 return 0; 670 return 0;
671} 671}
672 672
673/* TIF_MCDPER in thread info flags for current task is updated lazily upon
674 * a context switch. Update this flag in current task's thread flags
675 * before dup so the dup'd task will inherit the current TIF_MCDPER flag.
676 */
677int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
678{
679 if (adi_capable()) {
680 register unsigned long tmp_mcdper;
681
682 __asm__ __volatile__(
683 ".word 0x83438000\n\t" /* rd %mcdper, %g1 */
684 "mov %%g1, %0\n\t"
685 : "=r" (tmp_mcdper)
686 :
687 : "g1");
688 if (tmp_mcdper)
689 set_thread_flag(TIF_MCDPER);
690 else
691 clear_thread_flag(TIF_MCDPER);
692 }
693
694 *dst = *src;
695 return 0;
696}
697
673typedef struct { 698typedef struct {
674 union { 699 union {
675 unsigned int pr_regs[32]; 700 unsigned int pr_regs[32];
diff --git a/arch/sparc/kernel/rtrap_64.S b/arch/sparc/kernel/rtrap_64.S
index 0b21042ab181..f6528884a2c8 100644
--- a/arch/sparc/kernel/rtrap_64.S
+++ b/arch/sparc/kernel/rtrap_64.S
@@ -25,13 +25,31 @@
25 .align 32 25 .align 32
26__handle_preemption: 26__handle_preemption:
27 call SCHEDULE_USER 27 call SCHEDULE_USER
28 wrpr %g0, RTRAP_PSTATE, %pstate 28661: wrpr %g0, RTRAP_PSTATE, %pstate
29 /* If userspace is using ADI, it could potentially pass
30 * a pointer with version tag embedded in it. To maintain
31 * the ADI security, we must re-enable PSTATE.mcde before
32 * we continue execution in the kernel for another thread.
33 */
34 .section .sun_m7_1insn_patch, "ax"
35 .word 661b
36 wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate
37 .previous
29 ba,pt %xcc, __handle_preemption_continue 38 ba,pt %xcc, __handle_preemption_continue
30 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 39 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
31 40
32__handle_user_windows: 41__handle_user_windows:
33 call fault_in_user_windows 42 call fault_in_user_windows
34 wrpr %g0, RTRAP_PSTATE, %pstate 43661: wrpr %g0, RTRAP_PSTATE, %pstate
44 /* If userspace is using ADI, it could potentially pass
45 * a pointer with version tag embedded in it. To maintain
46 * the ADI security, we must re-enable PSTATE.mcde before
47 * we continue execution in the kernel for another thread.
48 */
49 .section .sun_m7_1insn_patch, "ax"
50 .word 661b
51 wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate
52 .previous
35 ba,pt %xcc, __handle_preemption_continue 53 ba,pt %xcc, __handle_preemption_continue
36 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 54 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
37 55
@@ -48,7 +66,16 @@ __handle_signal:
48 add %sp, PTREGS_OFF, %o0 66 add %sp, PTREGS_OFF, %o0
49 mov %l0, %o2 67 mov %l0, %o2
50 call do_notify_resume 68 call do_notify_resume
51 wrpr %g0, RTRAP_PSTATE, %pstate 69661: wrpr %g0, RTRAP_PSTATE, %pstate
70 /* If userspace is using ADI, it could potentially pass
71 * a pointer with version tag embedded in it. To maintain
72 * the ADI security, we must re-enable PSTATE.mcde before
73 * we continue execution in the kernel for another thread.
74 */
75 .section .sun_m7_1insn_patch, "ax"
76 .word 661b
77 wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate
78 .previous
52 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 79 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
53 80
54 /* Signal delivery can modify pt_regs tstate, so we must 81 /* Signal delivery can modify pt_regs tstate, so we must
diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
index 34f7a533a74f..7944b3ca216a 100644
--- a/arch/sparc/kernel/setup_64.c
+++ b/arch/sparc/kernel/setup_64.c
@@ -294,6 +294,8 @@ static void __init sun4v_patch(void)
294 case SUN4V_CHIP_SPARC_M7: 294 case SUN4V_CHIP_SPARC_M7:
295 case SUN4V_CHIP_SPARC_M8: 295 case SUN4V_CHIP_SPARC_M8:
296 case SUN4V_CHIP_SPARC_SN: 296 case SUN4V_CHIP_SPARC_SN:
297 sun4v_patch_1insn_range(&__sun_m7_1insn_patch,
298 &__sun_m7_1insn_patch_end);
297 sun_m7_patch_2insn_range(&__sun_m7_2insn_patch, 299 sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
298 &__sun_m7_2insn_patch_end); 300 &__sun_m7_2insn_patch_end);
299 break; 301 break;
diff --git a/arch/sparc/kernel/sun4v_mcd.S b/arch/sparc/kernel/sun4v_mcd.S
new file mode 100644
index 000000000000..d6c69ebca110
--- /dev/null
+++ b/arch/sparc/kernel/sun4v_mcd.S
@@ -0,0 +1,18 @@
1/* sun4v_mcd.S: Sun4v memory corruption detected precise exception handler
2 *
3 * Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
4 * Authors: Bob Picco <bob.picco@oracle.com>,
5 * Khalid Aziz <khalid.aziz@oracle.com>
6 *
7 * This work is licensed under the terms of the GNU GPL, version 2.
8 */
9 .text
10 .align 32
11
12sun4v_mcd_detect_precise:
13 mov %l4, %o1
14 mov %l5, %o2
15 call sun4v_mem_corrupt_detect_precise
16 add %sp, PTREGS_OFF, %o0
17 ba,a,pt %xcc, rtrap
18 nop
diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
index 0a56dc257cb9..462a21abd105 100644
--- a/arch/sparc/kernel/traps_64.c
+++ b/arch/sparc/kernel/traps_64.c
@@ -362,7 +362,6 @@ void sun4v_data_access_exception(struct pt_regs *regs, unsigned long addr, unsig
362{ 362{
363 unsigned short type = (type_ctx >> 16); 363 unsigned short type = (type_ctx >> 16);
364 unsigned short ctx = (type_ctx & 0xffff); 364 unsigned short ctx = (type_ctx & 0xffff);
365 siginfo_t info;
366 365
367 if (notify_die(DIE_TRAP, "data access exception", regs, 366 if (notify_die(DIE_TRAP, "data access exception", regs,
368 0, 0x8, SIGTRAP) == NOTIFY_STOP) 367 0, 0x8, SIGTRAP) == NOTIFY_STOP)
@@ -397,12 +396,29 @@ void sun4v_data_access_exception(struct pt_regs *regs, unsigned long addr, unsig
397 if (is_no_fault_exception(regs)) 396 if (is_no_fault_exception(regs))
398 return; 397 return;
399 398
400 info.si_signo = SIGSEGV; 399 /* MCD (Memory Corruption Detection) disabled trap (TT=0x19) in HV
401 info.si_errno = 0; 400 * is vectored through data access exception trap with fault type
402 info.si_code = SEGV_MAPERR; 401 * set to HV_FAULT_TYPE_MCD_DIS. Check for MCD disabled trap.
403 info.si_addr = (void __user *) addr; 402 * Accessing an address with invalid ASI for the address, for
404 info.si_trapno = 0; 403 * example setting an ADI tag on an address with ASI_MCD_PRIMARY
405 force_sig_info(SIGSEGV, &info, current); 404 * when TTE.mcd is not set for the VA, is also vectored into
 405 * kernel by HV as data access exception with fault type set to
406 * HV_FAULT_TYPE_INV_ASI.
407 */
408 switch (type) {
409 case HV_FAULT_TYPE_INV_ASI:
410 force_sig_fault(SIGILL, ILL_ILLADR, (void __user *)addr, 0,
411 current);
412 break;
413 case HV_FAULT_TYPE_MCD_DIS:
414 force_sig_fault(SIGSEGV, SEGV_ACCADI, (void __user *)addr, 0,
415 current);
416 break;
417 default:
418 force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)addr, 0,
419 current);
420 break;
421 }
406} 422}
407 423
408void sun4v_data_access_exception_tl1(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx) 424void sun4v_data_access_exception_tl1(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx)
@@ -1847,6 +1863,7 @@ struct sun4v_error_entry {
1847#define SUN4V_ERR_ATTRS_ASI 0x00000080 1863#define SUN4V_ERR_ATTRS_ASI 0x00000080
1848#define SUN4V_ERR_ATTRS_PRIV_REG 0x00000100 1864#define SUN4V_ERR_ATTRS_PRIV_REG 0x00000100
1849#define SUN4V_ERR_ATTRS_SPSTATE_MSK 0x00000600 1865#define SUN4V_ERR_ATTRS_SPSTATE_MSK 0x00000600
1866#define SUN4V_ERR_ATTRS_MCD 0x00000800
1850#define SUN4V_ERR_ATTRS_SPSTATE_SHFT 9 1867#define SUN4V_ERR_ATTRS_SPSTATE_SHFT 9
1851#define SUN4V_ERR_ATTRS_MODE_MSK 0x03000000 1868#define SUN4V_ERR_ATTRS_MODE_MSK 0x03000000
1852#define SUN4V_ERR_ATTRS_MODE_SHFT 24 1869#define SUN4V_ERR_ATTRS_MODE_SHFT 24
@@ -2044,6 +2061,50 @@ static void sun4v_log_error(struct pt_regs *regs, struct sun4v_error_entry *ent,
2044 } 2061 }
2045} 2062}
2046 2063
2064/* Handle memory corruption detected error which is vectored in
2065 * through resumable error trap.
2066 */
2067void do_mcd_err(struct pt_regs *regs, struct sun4v_error_entry ent)
2068{
2069 if (notify_die(DIE_TRAP, "MCD error", regs, 0, 0x34,
2070 SIGSEGV) == NOTIFY_STOP)
2071 return;
2072
2073 if (regs->tstate & TSTATE_PRIV) {
2074 /* MCD exception could happen because the task was
2075 * running a system call with MCD enabled and passed a
2076 * non-versioned pointer or pointer with bad version
2077 * tag to the system call. In such cases, hypervisor
2078 * places the address of offending instruction in the
2079 * resumable error report. This is a deferred error,
2080 * so the read/write that caused the trap was potentially
 2081 * retired long ago and we may have no choice
2082 * but to send SIGSEGV to the process.
2083 */
2084 const struct exception_table_entry *entry;
2085
2086 entry = search_exception_tables(regs->tpc);
2087 if (entry) {
2088 /* Looks like a bad syscall parameter */
2089#ifdef DEBUG_EXCEPTIONS
2090 pr_emerg("Exception: PC<%016lx> faddr<UNKNOWN>\n",
2091 regs->tpc);
2092 pr_emerg("EX_TABLE: insn<%016lx> fixup<%016lx>\n",
2093 ent.err_raddr, entry->fixup);
2094#endif
2095 regs->tpc = entry->fixup;
2096 regs->tnpc = regs->tpc + 4;
2097 return;
2098 }
2099 }
2100
2101 /* Send SIGSEGV to the userspace process with the right signal
2102 * code
2103 */
2104 force_sig_fault(SIGSEGV, SEGV_ADIDERR, (void __user *)ent.err_raddr,
2105 0, current);
2106}
2107
2047/* We run with %pil set to PIL_NORMAL_MAX and PSTATE_IE enabled in %pstate. 2108/* We run with %pil set to PIL_NORMAL_MAX and PSTATE_IE enabled in %pstate.
2048 * Log the event and clear the first word of the entry. 2109 * Log the event and clear the first word of the entry.
2049 */ 2110 */
@@ -2081,6 +2142,14 @@ void sun4v_resum_error(struct pt_regs *regs, unsigned long offset)
2081 goto out; 2142 goto out;
2082 } 2143 }
2083 2144
2145 /* If this is a memory corruption detected error vectored in
2146 * by HV through resumable error trap, call the handler
2147 */
2148 if (local_copy.err_attrs & SUN4V_ERR_ATTRS_MCD) {
2149 do_mcd_err(regs, local_copy);
2150 return;
2151 }
2152
2084 sun4v_log_error(regs, &local_copy, cpu, 2153 sun4v_log_error(regs, &local_copy, cpu,
2085 KERN_ERR "RESUMABLE ERROR", 2154 KERN_ERR "RESUMABLE ERROR",
2086 &sun4v_resum_oflow_cnt); 2155 &sun4v_resum_oflow_cnt);
@@ -2656,6 +2725,53 @@ void sun4v_do_mna(struct pt_regs *regs, unsigned long addr, unsigned long type_c
2656 force_sig_info(SIGBUS, &info, current); 2725 force_sig_info(SIGBUS, &info, current);
2657} 2726}
2658 2727
2728/* sun4v_mem_corrupt_detect_precise() - Handle precise exception on an ADI
2729 * tag mismatch.
2730 *
2731 * ADI version tag mismatch on a load from memory always results in a
2732 * precise exception. Tag mismatch on a store to memory will result in
2733 * precise exception if MCDPER or PMCDPER is set to 1.
2734 */
2735void sun4v_mem_corrupt_detect_precise(struct pt_regs *regs, unsigned long addr,
2736 unsigned long context)
2737{
2738 if (notify_die(DIE_TRAP, "memory corruption precise exception", regs,
2739 0, 0x8, SIGSEGV) == NOTIFY_STOP)
2740 return;
2741
2742 if (regs->tstate & TSTATE_PRIV) {
2743 /* MCD exception could happen because the task was running
2744 * a system call with MCD enabled and passed a non-versioned
2745 * pointer or pointer with bad version tag to the system
2746 * call.
2747 */
2748 const struct exception_table_entry *entry;
2749
2750 entry = search_exception_tables(regs->tpc);
2751 if (entry) {
2752 /* Looks like a bad syscall parameter */
2753#ifdef DEBUG_EXCEPTIONS
2754 pr_emerg("Exception: PC<%016lx> faddr<UNKNOWN>\n",
2755 regs->tpc);
2756 pr_emerg("EX_TABLE: insn<%016lx> fixup<%016lx>\n",
2757 regs->tpc, entry->fixup);
2758#endif
2759 regs->tpc = entry->fixup;
2760 regs->tnpc = regs->tpc + 4;
2761 return;
2762 }
2763 pr_emerg("%s: ADDR[%016lx] CTX[%lx], going.\n",
2764 __func__, addr, context);
2765 die_if_kernel("MCD precise", regs);
2766 }
2767
2768 if (test_thread_flag(TIF_32BIT)) {
2769 regs->tpc &= 0xffffffff;
2770 regs->tnpc &= 0xffffffff;
2771 }
2772 force_sig_fault(SIGSEGV, SEGV_ADIPERR, (void __user *)addr, 0, current);
2773}
2774
2659void do_privop(struct pt_regs *regs) 2775void do_privop(struct pt_regs *regs)
2660{ 2776{
2661 enum ctx_state prev_state = exception_enter(); 2777 enum ctx_state prev_state = exception_enter();
diff --git a/arch/sparc/kernel/ttable_64.S b/arch/sparc/kernel/ttable_64.S
index 18685fe69b91..86e737e59c7e 100644
--- a/arch/sparc/kernel/ttable_64.S
+++ b/arch/sparc/kernel/ttable_64.S
@@ -26,8 +26,10 @@ tl0_ill: membar #Sync
26 TRAP_7INSNS(do_illegal_instruction) 26 TRAP_7INSNS(do_illegal_instruction)
27tl0_privop: TRAP(do_privop) 27tl0_privop: TRAP(do_privop)
28tl0_resv012: BTRAP(0x12) BTRAP(0x13) BTRAP(0x14) BTRAP(0x15) BTRAP(0x16) BTRAP(0x17) 28tl0_resv012: BTRAP(0x12) BTRAP(0x13) BTRAP(0x14) BTRAP(0x15) BTRAP(0x16) BTRAP(0x17)
29tl0_resv018: BTRAP(0x18) BTRAP(0x19) BTRAP(0x1a) BTRAP(0x1b) BTRAP(0x1c) BTRAP(0x1d) 29tl0_resv018: BTRAP(0x18) BTRAP(0x19)
30tl0_resv01e: BTRAP(0x1e) BTRAP(0x1f) 30tl0_mcd: SUN4V_MCD_PRECISE
31tl0_resv01b: BTRAP(0x1b)
32tl0_resv01c: BTRAP(0x1c) BTRAP(0x1d) BTRAP(0x1e) BTRAP(0x1f)
31tl0_fpdis: TRAP_NOSAVE(do_fpdis) 33tl0_fpdis: TRAP_NOSAVE(do_fpdis)
32tl0_fpieee: TRAP_SAVEFPU(do_fpieee) 34tl0_fpieee: TRAP_SAVEFPU(do_fpieee)
33tl0_fpother: TRAP_NOSAVE(do_fpother_check_fitos) 35tl0_fpother: TRAP_NOSAVE(do_fpother_check_fitos)
diff --git a/arch/sparc/kernel/urtt_fill.S b/arch/sparc/kernel/urtt_fill.S
index 44183aa59168..e4cee7be5cd0 100644
--- a/arch/sparc/kernel/urtt_fill.S
+++ b/arch/sparc/kernel/urtt_fill.S
@@ -50,7 +50,12 @@ user_rtt_fill_fixup_common:
50 SET_GL(0) 50 SET_GL(0)
51 .previous 51 .previous
52 52
53 wrpr %g0, RTRAP_PSTATE, %pstate 53661: wrpr %g0, RTRAP_PSTATE, %pstate
54 .section .sun_m7_1insn_patch, "ax"
55 .word 661b
56 /* Re-enable PSTATE.mcde to maintain ADI security */
57 wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate
58 .previous
54 59
55 mov %l1, %g6 60 mov %l1, %g6
56 ldx [%g6 + TI_TASK], %g4 61 ldx [%g6 + TI_TASK], %g4
diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
index 5a2344574f39..61afd787bd0c 100644
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -145,6 +145,11 @@ SECTIONS
145 *(.pause_3insn_patch) 145 *(.pause_3insn_patch)
146 __pause_3insn_patch_end = .; 146 __pause_3insn_patch_end = .;
147 } 147 }
148 .sun_m7_1insn_patch : {
149 __sun_m7_1insn_patch = .;
150 *(.sun_m7_1insn_patch)
151 __sun_m7_1insn_patch_end = .;
152 }
148 .sun_m7_2insn_patch : { 153 .sun_m7_2insn_patch : {
149 __sun_m7_2insn_patch = .; 154 __sun_m7_2insn_patch = .;
150 *(.sun_m7_2insn_patch) 155 *(.sun_m7_2insn_patch)
diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
index 5335ba3c850e..357b6047653a 100644
--- a/arch/sparc/mm/gup.c
+++ b/arch/sparc/mm/gup.c
@@ -12,6 +12,7 @@
12#include <linux/pagemap.h> 12#include <linux/pagemap.h>
13#include <linux/rwsem.h> 13#include <linux/rwsem.h>
14#include <asm/pgtable.h> 14#include <asm/pgtable.h>
15#include <asm/adi.h>
15 16
16/* 17/*
17 * The performance critical leaf functions are made noinline otherwise gcc 18 * The performance critical leaf functions are made noinline otherwise gcc
@@ -201,6 +202,24 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
201 pgd_t *pgdp; 202 pgd_t *pgdp;
202 int nr = 0; 203 int nr = 0;
203 204
205#ifdef CONFIG_SPARC64
206 if (adi_capable()) {
207 long addr = start;
208
209 /* If userspace has passed a versioned address, kernel
210 * will not find it in the VMAs since it does not store
211 * the version tags in the list of VMAs. Storing version
212 * tags in list of VMAs is impractical since they can be
213 * changed any time from userspace without dropping into
214 * kernel. Any address search in VMAs will be done with
215 * non-versioned addresses. Ensure the ADI version bits
216 * are dropped here by sign extending the last bit before
217 * ADI bits. IOMMU does not implement version tags.
218 */
219 addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
220 start = addr;
221 }
222#endif
204 start &= PAGE_MASK; 223 start &= PAGE_MASK;
205 addr = start; 224 addr = start;
206 len = (unsigned long) nr_pages << PAGE_SHIFT; 225 len = (unsigned long) nr_pages << PAGE_SHIFT;
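The comment in the hunk above describes dropping ADI version bits by "sign extending the last bit before ADI bits": shifting the address left by `adi_nbits()` and arithmetically shifting it back propagates the highest non-tag bit through the top nibble, which is exactly how sparc64 canonicalizes VAs. A standalone sketch, assuming 4 tag bits (the kernel reads the real width from the machine description via `adi_nbits()`):

```c
#include <stdint.h>

#define ADI_NBITS 4	/* assumption: M7 uses the top 4 VA bits for tags */

/* Strip an ADI version tag from a 64-bit virtual address, as the
 * (addr << adi_nbits()) >> adi_nbits() expression above does: shift
 * the tag out, then arithmetic-right-shift to sign-extend from the
 * highest remaining address bit.
 */
uint64_t adi_untag_addr(uint64_t tagged)
{
	return (uint64_t)(((int64_t)(tagged << ADI_NBITS)) >> ADI_NBITS);
}
```

A tagged low-half address comes back with a zeroed top nibble, while a tagged high-half address comes back with the nibble set to 0xF, matching the two canonical VA ranges.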
@@ -231,6 +250,24 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
231 pgd_t *pgdp; 250 pgd_t *pgdp;
232 int nr = 0; 251 int nr = 0;
233 252
253#ifdef CONFIG_SPARC64
254 if (adi_capable()) {
255 long addr = start;
256
257 /* If userspace has passed a versioned address, kernel
258 * will not find it in the VMAs since it does not store
259 * the version tags in the list of VMAs. Storing version
260 * tags in list of VMAs is impractical since they can be
261 * changed any time from userspace without dropping into
262 * kernel. Any address search in VMAs will be done with
263 * non-versioned addresses. Ensure the ADI version bits
264 * are dropped here by sign extending the last bit before
 265 * ADI bits. IOMMU does not implement version tags.
266 */
267 addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
268 start = addr;
269 }
270#endif
234 start &= PAGE_MASK; 271 start &= PAGE_MASK;
235 addr = start; 272 addr = start;
236 len = (unsigned long) nr_pages << PAGE_SHIFT; 273 len = (unsigned long) nr_pages << PAGE_SHIFT;
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 0112d6942288..f78793a06bbd 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -182,8 +182,20 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
182 struct page *page, int writeable) 182 struct page *page, int writeable)
183{ 183{
184 unsigned int shift = huge_page_shift(hstate_vma(vma)); 184 unsigned int shift = huge_page_shift(hstate_vma(vma));
185 pte_t pte;
185 186
186 return hugepage_shift_to_tte(entry, shift); 187 pte = hugepage_shift_to_tte(entry, shift);
188
189#ifdef CONFIG_SPARC64
190 /* If this vma has ADI enabled on it, turn on TTE.mcd
191 */
192 if (vma->vm_flags & VM_SPARC_ADI)
193 return pte_mkmcd(pte);
194 else
195 return pte_mknotmcd(pte);
196#else
197 return pte;
198#endif
187} 199}
188 200
189static unsigned int sun4v_huge_tte_to_shift(pte_t entry) 201static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 995f9490334d..cb9ebac6663f 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -3160,3 +3160,72 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
3160 do_flush_tlb_kernel_range(start, end); 3160 do_flush_tlb_kernel_range(start, end);
3161 } 3161 }
3162} 3162}
3163
3164void copy_user_highpage(struct page *to, struct page *from,
3165 unsigned long vaddr, struct vm_area_struct *vma)
3166{
3167 char *vfrom, *vto;
3168
3169 vfrom = kmap_atomic(from);
3170 vto = kmap_atomic(to);
3171 copy_user_page(vto, vfrom, vaddr, to);
3172 kunmap_atomic(vto);
3173 kunmap_atomic(vfrom);
3174
3175 /* If this page has ADI enabled, copy over any ADI tags
3176 * as well
3177 */
3178 if (vma->vm_flags & VM_SPARC_ADI) {
3179 unsigned long pfrom, pto, i, adi_tag;
3180
3181 pfrom = page_to_phys(from);
3182 pto = page_to_phys(to);
3183
3184 for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
3185 asm volatile("ldxa [%1] %2, %0\n\t"
3186 : "=r" (adi_tag)
3187 : "r" (i), "i" (ASI_MCD_REAL));
3188 asm volatile("stxa %0, [%1] %2\n\t"
3189 :
3190 : "r" (adi_tag), "r" (pto),
3191 "i" (ASI_MCD_REAL));
3192 pto += adi_blksize();
3193 }
3194 asm volatile("membar #Sync\n\t");
3195 }
3196}
3197EXPORT_SYMBOL(copy_user_highpage);
3198
3199void copy_highpage(struct page *to, struct page *from)
3200{
3201 char *vfrom, *vto;
3202
3203 vfrom = kmap_atomic(from);
3204 vto = kmap_atomic(to);
3205 copy_page(vto, vfrom);
3206 kunmap_atomic(vto);
3207 kunmap_atomic(vfrom);
3208
3209 /* If this platform is ADI enabled, copy any ADI tags
3210 * as well
3211 */
3212 if (adi_capable()) {
3213 unsigned long pfrom, pto, i, adi_tag;
3214
3215 pfrom = page_to_phys(from);
3216 pto = page_to_phys(to);
3217
3218 for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
3219 asm volatile("ldxa [%1] %2, %0\n\t"
3220 : "=r" (adi_tag)
3221 : "r" (i), "i" (ASI_MCD_REAL));
3222 asm volatile("stxa %0, [%1] %2\n\t"
3223 :
3224 : "r" (adi_tag), "r" (pto),
3225 "i" (ASI_MCD_REAL));
3226 pto += adi_blksize();
3227 }
3228 asm volatile("membar #Sync\n\t");
3229 }
3230}
3231EXPORT_SYMBOL(copy_highpage);
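Both page-copy overrides above follow the same shape: copy the page data, then walk the page in ADI-block-size steps copying the per-block version tag. A userspace model of that shape, with tags kept in a shadow array instead of the MCD ASI the kernel uses (sizes are assumptions for illustration):

```c
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096
#define ADI_BLKSZ 64	/* assumed ADI block (cacheline) size */

/* Model of copy_user_highpage()/copy_highpage() with ADI: page data
 * plus one version tag per ADI-block-sized chunk of the page.
 */
struct tagged_page {
	uint8_t data[PAGE_SIZE];
	uint8_t tag[PAGE_SIZE / ADI_BLKSZ];
};

void copy_tagged_page(struct tagged_page *to, const struct tagged_page *from)
{
	size_t i;

	/* data copy corresponds to copy_user_page()/copy_page() */
	memcpy(to->data, from->data, PAGE_SIZE);
	/* tag copy corresponds to the ldxa/stxa ASI_MCD_REAL loop */
	for (i = 0; i < PAGE_SIZE / ADI_BLKSZ; i++)
		to->tag[i] = from->tag[i];
}
```

The kernel additionally ends the tag loop with `membar #Sync` so the tag stores are globally visible before the new page is mapped.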
diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index 75a04c1a2383..f5edc28aa3a5 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -546,6 +546,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
546 546
547 mm->context.sparc64_ctx_val = 0UL; 547 mm->context.sparc64_ctx_val = 0UL;
548 548
549 mm->context.tag_store = NULL;
550 spin_lock_init(&mm->context.tag_lock);
551
549#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE) 552#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
550 /* We reset them to zero because the fork() page copying 553 /* We reset them to zero because the fork() page copying
551 * will re-increment the counters as the parent PTEs are 554 * will re-increment the counters as the parent PTEs are
@@ -611,4 +614,22 @@ void destroy_context(struct mm_struct *mm)
611 } 614 }
612 615
613 spin_unlock_irqrestore(&ctx_alloc_lock, flags); 616 spin_unlock_irqrestore(&ctx_alloc_lock, flags);
617
618 /* If ADI tag storage was allocated for this task, free it */
619 if (mm->context.tag_store) {
620 tag_storage_desc_t *tag_desc;
621 unsigned long max_desc;
622 unsigned char *tags;
623
624 tag_desc = mm->context.tag_store;
625 max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
626 for (i = 0; i < max_desc; i++) {
627 tags = tag_desc->tags;
628 tag_desc->tags = NULL;
629 kfree(tags);
630 tag_desc++;
631 }
632 kfree(mm->context.tag_store);
633 mm->context.tag_store = NULL;
634 }
614} 635}
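The `destroy_context()` teardown above treats the tag store as a page-sized array of descriptors, each optionally owning a tag buffer: free every buffer, then the array itself. A hedged userspace sketch of that ownership pattern (types and the descriptor count are illustrative, not the kernel's exact layout):

```c
#include <stdlib.h>

typedef struct {
	unsigned char *tags;	/* per-range ADI tag buffer, or NULL */
} tag_storage_desc_t;

/* kernel uses PAGE_SIZE / sizeof(tag_storage_desc_t) descriptors */
#define MAX_DESC 8

void free_tag_store(tag_storage_desc_t **store)
{
	tag_storage_desc_t *desc = *store;
	int i;

	if (!desc)
		return;
	for (i = 0; i < MAX_DESC; i++) {
		free(desc[i].tags);	/* free(NULL) is a no-op */
		desc[i].tags = NULL;
	}
	free(desc);
	*store = NULL;
}
```

Clearing `tags` before freeing the array mirrors the kernel code's ordering, which leaves no dangling descriptor pointers behind.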
diff --git a/arch/x86/kernel/signal_compat.c b/arch/x86/kernel/signal_compat.c
index 0d930d8987cc..838a4dc90a6d 100644
--- a/arch/x86/kernel/signal_compat.c
+++ b/arch/x86/kernel/signal_compat.c
@@ -27,7 +27,7 @@ static inline void signal_compat_build_tests(void)
27 */ 27 */
28 BUILD_BUG_ON(NSIGILL != 11); 28 BUILD_BUG_ON(NSIGILL != 11);
29 BUILD_BUG_ON(NSIGFPE != 13); 29 BUILD_BUG_ON(NSIGFPE != 13);
30 BUILD_BUG_ON(NSIGSEGV != 4); 30 BUILD_BUG_ON(NSIGSEGV != 7);
31 BUILD_BUG_ON(NSIGBUS != 5); 31 BUILD_BUG_ON(NSIGBUS != 5);
32 BUILD_BUG_ON(NSIGTRAP != 4); 32 BUILD_BUG_ON(NSIGTRAP != 4);
33 BUILD_BUG_ON(NSIGCHLD != 6); 33 BUILD_BUG_ON(NSIGCHLD != 6);
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 2cfa3075d148..6fbbc0b6c05e 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -400,6 +400,42 @@ static inline int pud_same(pud_t pud_a, pud_t pud_b)
400#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 400#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
401#endif 401#endif
402 402
403#ifndef __HAVE_ARCH_DO_SWAP_PAGE
404/*
405 * Some architectures support metadata associated with a page. When a
406 * page is being swapped out, this metadata must be saved so it can be
407 * restored when the page is swapped back in. SPARC M7 and newer
408 * processors support an ADI (Application Data Integrity) tag for the
409 * page as metadata for the page. arch_do_swap_page() can restore this
410 * metadata when a page is swapped back in.
411 */
412static inline void arch_do_swap_page(struct mm_struct *mm,
413 struct vm_area_struct *vma,
414 unsigned long addr,
415 pte_t pte, pte_t oldpte)
416{
417
418}
419#endif
420
421#ifndef __HAVE_ARCH_UNMAP_ONE
422/*
423 * Some architectures support metadata associated with a page. When a
424 * page is being swapped out, this metadata must be saved so it can be
425 * restored when the page is swapped back in. SPARC M7 and newer
426 * processors support an ADI (Application Data Integrity) tag for the
427 * page as metadata for the page. arch_unmap_one() can save this
428 * metadata on a swap-out of a page.
429 */
430static inline int arch_unmap_one(struct mm_struct *mm,
431 struct vm_area_struct *vma,
432 unsigned long addr,
433 pte_t orig_pte)
434{
435 return 0;
436}
437#endif
438
403#ifndef __HAVE_ARCH_PGD_OFFSET_GATE 439#ifndef __HAVE_ARCH_PGD_OFFSET_GATE
404#define pgd_offset_gate(mm, addr) pgd_offset(mm, addr) 440#define pgd_offset_gate(mm, addr) pgd_offset(mm, addr)
405#endif 441#endif
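The new `arch_do_swap_page()`/`arch_unmap_one()` hooks above use the common kernel pattern of a no-op generic default that an architecture replaces by defining a `__HAVE_ARCH_*` guard before the generic header is included. A standalone illustration of the pattern (`__HAVE_ARCH_SAVE_TAGS` and `arch_save_tags` are hypothetical names, not part of this series):

```c
/* Generic default: kernels without per-page metadata have nothing
 * to save, so the hook compiles away to a trivial success.
 */
#ifndef __HAVE_ARCH_SAVE_TAGS
static inline int arch_save_tags(unsigned long addr)
{
	(void)addr;
	return 0;
}
#endif
```

An architecture that needs the hook defines the guard and its own version in its `asm/pgtable.h`, exactly as sparc64 does for the two hooks in this hunk.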
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 776f90f3a1cd..0690679832d4 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -237,6 +237,8 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
237 237
238#endif 238#endif
239 239
240#ifndef __HAVE_ARCH_COPY_HIGHPAGE
241
240static inline void copy_highpage(struct page *to, struct page *from) 242static inline void copy_highpage(struct page *to, struct page *from)
241{ 243{
242 char *vfrom, *vto; 244 char *vfrom, *vto;
@@ -248,4 +250,6 @@ static inline void copy_highpage(struct page *to, struct page *from)
248 kunmap_atomic(vfrom); 250 kunmap_atomic(vfrom);
249} 251}
250 252
253#endif
254
251#endif /* _LINUX_HIGHMEM_H */ 255#endif /* _LINUX_HIGHMEM_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ad06d42adb1a..32fe6919a11b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -245,6 +245,9 @@ extern unsigned int kobjsize(const void *objp);
245# define VM_GROWSUP VM_ARCH_1 245# define VM_GROWSUP VM_ARCH_1
246#elif defined(CONFIG_IA64) 246#elif defined(CONFIG_IA64)
247# define VM_GROWSUP VM_ARCH_1 247# define VM_GROWSUP VM_ARCH_1
248#elif defined(CONFIG_SPARC64)
249# define VM_SPARC_ADI VM_ARCH_1 /* Uses ADI tag for access control */
250# define VM_ARCH_CLEAR VM_SPARC_ADI
248#elif !defined(CONFIG_MMU) 251#elif !defined(CONFIG_MMU)
249# define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */ 252# define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */
250#endif 253#endif
@@ -287,6 +290,12 @@ extern unsigned int kobjsize(const void *objp);
287/* This mask is used to clear all the VMA flags used by mlock */ 290/* This mask is used to clear all the VMA flags used by mlock */
288#define VM_LOCKED_CLEAR_MASK (~(VM_LOCKED | VM_LOCKONFAULT)) 291#define VM_LOCKED_CLEAR_MASK (~(VM_LOCKED | VM_LOCKONFAULT))
289 292
293/* Arch-specific flags to clear when updating VM flags on protection change */
294#ifndef VM_ARCH_CLEAR
295# define VM_ARCH_CLEAR VM_NONE
296#endif
297#define VM_FLAGS_CLEAR (ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)
298
290/* 299/*
291 * mapping from the currently active vm_flags protection bits (the 300 * mapping from the currently active vm_flags protection bits (the
292 * low four bits) to a page protection mask.. 301 * low four bits) to a page protection mask..
diff --git a/include/linux/mman.h b/include/linux/mman.h
index 6a4d1caaff5c..4b08e9c9c538 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -92,7 +92,7 @@ static inline void vm_unacct_memory(long pages)
92 * 92 *
93 * Returns true if the prot flags are valid 93 * Returns true if the prot flags are valid
94 */ 94 */
95static inline bool arch_validate_prot(unsigned long prot) 95static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
96{ 96{
97 return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0; 97 return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0;
98} 98}
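The hunk above adds an `addr` argument to `arch_validate_prot()` so an architecture can make a per-address decision; the generic version ignores it and only checks the protection bits. A userspace model of the generic check (`PROT_*` values below match the usual Linux definitions but are assumptions for this sketch):

```c
#include <stdbool.h>

#define PROT_READ  0x1
#define PROT_WRITE 0x2
#define PROT_EXEC  0x4
#define PROT_SEM   0x8

/* Generic arch_validate_prot(): addr is unused; only known PROT_*
 * bits are allowed. sparc64's override (added later in this series)
 * additionally accepts PROT_ADI and can consult the address range.
 */
bool validate_prot_generic(unsigned long prot, unsigned long addr)
{
	(void)addr;
	return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0;
}
```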
diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h
index 99c902e460c2..a01fd0a92b1b 100644
--- a/include/uapi/asm-generic/siginfo.h
+++ b/include/uapi/asm-generic/siginfo.h
@@ -246,7 +246,10 @@ typedef struct siginfo {
246#else 246#else
247# define SEGV_PKUERR 4 /* failed protection key checks */ 247# define SEGV_PKUERR 4 /* failed protection key checks */
248#endif 248#endif
249#define NSIGSEGV 4 249#define SEGV_ACCADI 5 /* ADI not enabled for mapped object */
250#define SEGV_ADIDERR 6 /* Disrupting MCD error */
251#define SEGV_ADIPERR 7 /* Precise MCD exception */
252#define NSIGSEGV 7
250 253
251/* 254/*
252 * SIGBUS si_codes 255 * SIGBUS si_codes
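The three new `SEGV` `si_code` values above let a userspace SIGSEGV handler distinguish the ADI fault classes this series delivers. A sketch of how a handler might switch on `si_code` (the constants repeat the uapi additions above; the helper is illustrative):

```c
#define SEGV_ACCADI  5	/* ADI not enabled for mapped object */
#define SEGV_ADIDERR 6	/* disrupting MCD error */
#define SEGV_ADIPERR 7	/* precise MCD exception */

/* Map an ADI-related si_code, as found in siginfo_t.si_code inside
 * a SIGSEGV handler, to a human-readable cause.
 */
const char *segv_adi_reason(int si_code)
{
	switch (si_code) {
	case SEGV_ACCADI:  return "ADI not enabled on mapping";
	case SEGV_ADIDERR: return "disrupting ADI tag mismatch";
	case SEGV_ADIPERR: return "precise ADI tag mismatch";
	default:           return "not an ADI fault";
	}
}
```

This is also why `NSIGSEGV` grows to 7 and the x86 `signal_compat` build check is updated in the same series.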
diff --git a/mm/ksm.c b/mm/ksm.c
index 293721f5da70..adb5f991da8e 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2369,6 +2369,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
2369 if (*vm_flags & VM_SAO) 2369 if (*vm_flags & VM_SAO)
2370 return 0; 2370 return 0;
2371#endif 2371#endif
2372#ifdef VM_SPARC_ADI
2373 if (*vm_flags & VM_SPARC_ADI)
2374 return 0;
2375#endif
2372 2376
2373 if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) { 2377 if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
2374 err = __ksm_enter(mm); 2378 err = __ksm_enter(mm);
diff --git a/mm/memory.c b/mm/memory.c
index 5fcfc24904d1..aed37325d94e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3053,6 +3053,7 @@ int do_swap_page(struct vm_fault *vmf)
3053 if (pte_swp_soft_dirty(vmf->orig_pte)) 3053 if (pte_swp_soft_dirty(vmf->orig_pte))
3054 pte = pte_mksoft_dirty(pte); 3054 pte = pte_mksoft_dirty(pte);
3055 set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); 3055 set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
3056 arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
3056 vmf->orig_pte = pte; 3057 vmf->orig_pte = pte;
3057 3058
3058 /* ksm created a completely new copy */ 3059 /* ksm created a completely new copy */
diff --git a/mm/mprotect.c b/mm/mprotect.c
index e3309fcf586b..c1d6af7455da 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -417,7 +417,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	end = start + len;
 	if (end <= start)
 		return -ENOMEM;
-	if (!arch_validate_prot(prot))
+	if (!arch_validate_prot(prot, start))
 		return -EINVAL;
 
 	reqprot = prot;
@@ -475,7 +475,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		 * cleared from the VMA.
 		 */
 		mask_off_old_flags = VM_READ | VM_WRITE | VM_EXEC |
-					ARCH_VM_PKEY_FLAGS;
+					VM_FLAGS_CLEAR;
 
 		new_vma_pkey = arch_override_mprotect_pkey(vma, prot, pkey);
 		newflags = calc_vm_prot_bits(prot, new_vma_pkey);
diff --git a/mm/rmap.c b/mm/rmap.c
index 47db27f8049e..144c66e688a9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1497,6 +1497,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		    (flags & (TTU_MIGRATION|TTU_SPLIT_FREEZE))) {
 			swp_entry_t entry;
 			pte_t swp_pte;
+
+			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+				set_pte_at(mm, address, pvmw.pte, pteval);
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
+
 			/*
 			 * Store the pfn of the page in a special migration
 			 * pte. do_swap_page() will wait until the migration
@@ -1556,6 +1564,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
+			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+				set_pte_at(mm, address, pvmw.pte, pteval);
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
 				if (list_empty(&mm->mmlist))