 Documentation/vm/hmm.rst | 141
 include/linux/hmm.h      |   7
 2 files changed, 78 insertions(+), 70 deletions(-)
diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 7cdf7282e022..7b6eeda5a7c0 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -10,7 +10,7 @@ of this being specialized struct page for such memory (see sections 5 to 7 of
 this document).
 
 HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
-allowing a device to transparently access program address coherently with
+allowing a device to transparently access program addresses coherently with
 the CPU meaning that any valid pointer on the CPU is also a valid pointer
 for the device. This is becoming mandatory to simplify the use of advanced
 heterogeneous computing where GPU, DSP, or FPGA are used to perform various
@@ -22,8 +22,8 @@ expose the hardware limitations that are inherent to many platforms. The third
 section gives an overview of the HMM design. The fourth section explains how
 CPU page-table mirroring works and the purpose of HMM in this context. The
 fifth section deals with how device memory is represented inside the kernel.
-Finally, the last section presents a new migration helper that allows lever-
-aging the device DMA engine.
+Finally, the last section presents a new migration helper that allows
+leveraging the device DMA engine.
 
 .. contents:: :local:
 
@@ -39,20 +39,20 @@ address space. I use shared address space to refer to the opposite situation:
 i.e., one in which any application memory region can be used by a device
 transparently.
 
-Split address space happens because device can only access memory allocated
-through device specific API. This implies that all memory objects in a program
+Split address space happens because devices can only access memory allocated
+through a device specific API. This implies that all memory objects in a program
 are not equal from the device point of view which complicates large programs
 that rely on a wide set of libraries.
 
-Concretely this means that code that wants to leverage devices like GPUs needs
-to copy object between generically allocated memory (malloc, mmap private, mmap
+Concretely, this means that code that wants to leverage devices like GPUs needs
+to copy objects between generically allocated memory (malloc, mmap private, mmap
 share) and memory allocated through the device driver API (this still ends up
 with an mmap but of the device file).
 
 For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
-complex data sets (list, tree, ...) are hard to get right. Duplicating a
+for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
 complex data set needs to re-map all the pointer relations between each of its
-elements. This is error prone and program gets harder to debug because of the
+elements. This is error prone and programs get harder to debug because of the
 duplicate data set and addresses.
 
 Split address space also means that libraries cannot transparently use data
@@ -77,12 +77,12 @@ I/O bus, device memory characteristics
 
 I/O buses cripple shared address spaces due to a few limitations. Most I/O
 buses only allow basic memory access from device to main memory; even cache
-coherency is often optional. Access to device memory from CPU is even more
+coherency is often optional. Access to device memory from a CPU is even more
 limited. More often than not, it is not cache coherent.
 
 If we only consider the PCIE bus, then a device can access main memory (often
 through an IOMMU) and be cache coherent with the CPUs. However, it only allows
-a limited set of atomic operations from device on main memory. This is worse
+a limited set of atomic operations from the device on main memory. This is worse
 in the other direction: the CPU can only access a limited range of the device
 memory and cannot perform atomic operations on it. Thus device memory cannot
 be considered the same as regular memory from the kernel point of view.
@@ -93,20 +93,20 @@ The final limitation is latency. Access to main memory from the device has an
 order of magnitude higher latency than when the device accesses its own memory.
 
 Some platforms are developing new I/O buses or additions/modifications to PCIE
-to address some of these limitations (OpenCAPI, CCIX). They mainly allow two-
-way cache coherency between CPU and device and allow all atomic operations the
+to address some of these limitations (OpenCAPI, CCIX). They mainly allow
+two-way cache coherency between CPU and device and allow all atomic operations the
 architecture supports. Sadly, not all platforms are following this trend and
 some major architectures are left without hardware solutions to these problems.
 
 So for shared address space to make sense, not only must we allow devices to
 access any memory but we must also permit any memory to be migrated to device
-memory while device is using it (blocking CPU access while it happens).
+memory while the device is using it (blocking CPU access while it happens).
 
 
 Shared address space and migration
 ==================================
 
-HMM intends to provide two main features. First one is to share the address
+HMM intends to provide two main features. The first one is to share the address
 space by duplicating the CPU page table in the device page table so the same
 address points to the same physical memory for any valid main memory address in
 the process address space.
@@ -121,14 +121,14 @@ why HMM provides helpers to factor out everything that can be while leaving the
 hardware specific details to the device driver.
 
 The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
-allows allocating a struct page for each page of the device memory. Those pages
+allows allocating a struct page for each page of device memory. Those pages
 are special because the CPU cannot map them. However, they allow migrating
 main memory to device memory using existing migration mechanisms and everything
-looks like a page is swapped out to disk from the CPU point of view. Using a
-struct page gives the easiest and cleanest integration with existing mm mech-
-anisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE
+looks like a page that is swapped out to disk from the CPU point of view. Using a
+struct page gives the easiest and cleanest integration with existing mm
+mechanisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE
 memory for the device memory and second to perform migration. Policy decisions
-of what and when to migrate things is left to the device driver.
+of what and when to migrate are left to the device driver.
 
 Note that any CPU access to a device page triggers a page fault and a migration
 back to main memory. For example, when a page backing a given CPU address A is
@@ -136,8 +136,8 @@ migrated from a main memory page to a device page, then any CPU access to
 address A triggers a page fault and initiates a migration back to main memory.
 
 With these two features, HMM not only allows a device to mirror process address
-space and keeping both CPU and device page table synchronized, but also lever-
-ages device memory by migrating the part of the data set that is actively being
+space and keeps both CPU and device page tables synchronized, but also
+leverages device memory by migrating the part of the data set that is actively being
 used by the device.
 
 
@@ -151,21 +151,28 @@ registration of an hmm_mirror struct::
 
     int hmm_mirror_register(struct hmm_mirror *mirror,
                             struct mm_struct *mm);
-    int hmm_mirror_register_locked(struct hmm_mirror *mirror,
-                                   struct mm_struct *mm);
 
-
-The locked variant is to be used when the driver is already holding mmap_sem
-of the mm in write mode. The mirror struct has a set of callbacks that are used
+The mirror struct has a set of callbacks that are used
 to propagate CPU page tables::
 
     struct hmm_mirror_ops {
+        /* release() - release hmm_mirror
+         *
+         * @mirror: pointer to struct hmm_mirror
+         *
+         * This is called when the mm_struct is being released. The callback
+         * must ensure that all access to any pages obtained from this mirror
+         * is halted before the callback returns. All future access should
+         * fault.
+         */
+        void (*release)(struct hmm_mirror *mirror);
+
         /* sync_cpu_device_pagetables() - synchronize page tables
          *
          * @mirror: pointer to struct hmm_mirror
-         * @update_type: type of update that occurred to the CPU page table
-         * @start: virtual start address of the range to update
-         * @end: virtual end address of the range to update
+         * @update: update information (see struct mmu_notifier_range)
+         * Return: -EAGAIN if update.blockable is false and the callback
+         *         needs to block, 0 otherwise.
          *
          * This callback ultimately originates from mmu_notifiers when the CPU
          * page table is updated. The device driver must update its page table
@@ -176,14 +183,12 @@ to propagate CPU page tables::
          * page tables are completely updated (TLBs flushed, etc); this is a
          * synchronous call.
          */
-        void (*update)(struct hmm_mirror *mirror,
-                       enum hmm_update action,
-                       unsigned long start,
-                       unsigned long end);
+        int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
+                                          const struct hmm_update *update);
     };
 
 The device driver must perform the update action to the range (mark range
-read only, or fully unmap, ...). The device must be done with the update before
+read only, or fully unmap, etc.). The device must complete the update before
 the driver callback returns.
 
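To make the callback contract concrete, here is a minimal driver-side sketch.
Only hmm_mirror_register() and the ops table are HMM API; the dummy_device
struct, its helpers, and the lock_t/take_lock()/release_lock() pseudo-locking
(borrowed from the usage pattern later in this document) are illustrative
assumptions, not part of this patch::

    struct dummy_device {
        struct hmm_mirror mirror;
        lock_t update;          /* the driver->update lock of the text below */
        ...
    };

    static int dummy_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
                                        const struct hmm_update *update)
    {
        struct dummy_device *ddev;

        ddev = container_of(mirror, struct dummy_device, mirror);
        if (!update->blockable)
            return -EAGAIN;     /* we would have to sleep to take the lock */

        take_lock(ddev->update);
        /* Unmap [update->start, update->end) from the device page table and
         * flush device TLBs; nothing on the device may still use the range
         * once this callback returns. */
        dummy_device_unmap(ddev, update->start, update->end);
        release_lock(ddev->update);
        return 0;
    }

    static const struct hmm_mirror_ops dummy_mirror_ops = {
        .release                    = dummy_release, /* see hmm.h hunk below */
        .sync_cpu_device_pagetables = dummy_sync_cpu_device_pagetables,
    };

    ...
    ddev->mirror.ops = &dummy_mirror_ops;
    ret = hmm_mirror_register(&ddev->mirror, current->mm);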
 When the device driver wants to populate a range of virtual addresses, it can
@@ -194,17 +199,18 @@ use either::
 
 The first one (hmm_range_snapshot()) will only fetch present CPU page table
 entries and will not trigger a page fault on missing or non-present entries.
-The second one does trigger a page fault on missing or read-only entry if the
-write parameter is true. Page faults use the generic mm page fault code path
-just like a CPU page fault.
+The second one does trigger a page fault on missing or read-only entries if
+write access is requested (see below). Page faults use the generic mm page
+fault code path just like a CPU page fault.
 
 Both functions copy CPU page table entries into their pfns array argument. Each
 entry in that array corresponds to an address in the virtual range. HMM
 provides a set of flags to help the driver identify special CPU page table
 entries.
 
-Locking with the update() callback is the most important aspect the driver must
-respect in order to keep things properly synchronized. The usage pattern is::
+Locking within the sync_cpu_device_pagetables() callback is the most important
+aspect the driver must respect in order to keep things properly synchronized.
+The usage pattern is::
 
     int driver_populate_range(...)
     {
@@ -239,11 +245,11 @@ respect in order to keep things properly synchronized. The usage pattern is::
                 hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
                 goto again;
             }
-            hmm_mirror_unregister(&range);
+            hmm_range_unregister(&range);
             return ret;
         }
         take_lock(driver->update);
-        if (!range.valid) {
+        if (!hmm_range_valid(&range)) {
             release_lock(driver->update);
             up_read(&mm->mmap_sem);
             goto again;
@@ -251,15 +257,15 @@ respect in order to keep things properly synchronized. The usage pattern is::
 
         // Use pfns array content to update device page table
 
-        hmm_mirror_unregister(&range);
+        hmm_range_unregister(&range);
         release_lock(driver->update);
         up_read(&mm->mmap_sem);
         return 0;
     }
 
 The driver->update lock is the same lock that the driver takes inside its
-update() callback. That lock must be held before checking the range.valid
-field to avoid any race with a concurrent CPU page table update.
+sync_cpu_device_pagetables() callback. That lock must be held before calling
+hmm_range_valid() to avoid any race with a concurrent CPU page table update.
 
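As for what the "// Use pfns array content to update device page table" step
might expand to, here is a sketch: dummy_device_map() is a hypothetical driver
helper, and hmm_pfn_to_page() is assumed available to translate snapshot
entries into struct page pointers::

    unsigned long i, npages = (range.end - range.start) >> PAGE_SHIFT;

    for (i = 0; i < npages; ++i) {
        struct page *page;

        if (!(range.pfns[i] & range.flags[HMM_PFN_VALID]))
            continue;   /* hole or special entry: leave it unmapped */
        page = hmm_pfn_to_page(&range, range.pfns[i]);
        /* Grant device write access only if the snapshot says the CPU
         * pte is writable. */
        dummy_device_map(ddev, range.start + (i << PAGE_SHIFT), page,
                         range.pfns[i] & range.flags[HMM_PFN_WRITE]);
    }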
 HMM implements all this on top of the mmu_notifier API because we wanted a
 simpler API and also to be able to perform optimizations later on like doing
@@ -279,46 +285,47 @@ concurrently).
 Leverage default_flags and pfn_flags_mask
 =========================================
 
-The hmm_range struct has 2 fields default_flags and pfn_flags_mask that allows
-to set fault or snapshot policy for a whole range instead of having to set them
-for each entries in the range.
+The hmm_range struct has 2 fields, default_flags and pfn_flags_mask, that specify
+fault or snapshot policy for the whole range instead of having to set them
+for each entry in the pfns array.
+
+For instance, if the device flags for range.flags are::
 
-For instance if the device flags for device entries are:
-    VALID (1 << 63)
-    WRITE (1 << 62)
+    range.flags[HMM_PFN_VALID] = (1 << 63);
+    range.flags[HMM_PFN_WRITE] = (1 << 62);
 
-Now let say that device driver wants to fault with at least read a range then
-it does set::
+and the device driver wants pages for a range with at least read permission,
+it sets::
 
     range->default_flags = (1 << 63);
     range->pfn_flags_mask = 0;
 
-and calls hmm_range_fault() as described above. This will fill fault all page
-in the range with at least read permission.
+and calls hmm_range_fault() as described above. This will fault all pages
+in the range with at least read permission.
 
-Now let say driver wants to do the same except for one page in the range for
-which its want to have write. Now driver set::
+Now let's say the driver wants to do the same except for one page in the range
+for which it wants write permission. The driver sets::
 
     range->default_flags = (1 << 63);
     range->pfn_flags_mask = (1 << 62);
     range->pfns[index_of_write] = (1 << 62);
 
-With this HMM will fault in all page with at least read (ie valid) and for the
+With this, HMM will fault in all pages with at least read (i.e., valid) and for the
 address == range->start + (index_of_write << PAGE_SHIFT) it will fault with
-write permission ie if the CPU pte does not have write permission set then HMM
+write permission, i.e., if the CPU pte does not have write permission set then HMM
 will call handle_mm_fault().
 
-Note that HMM will populate the pfns array with write permission for any entry
-that have write permission within the CPU pte no matter what are the values set
+Note that HMM will populate the pfns array with write permission for any page
+that is mapped with CPU write permission no matter what values are set
 in default_flags or pfn_flags_mask.
 
 
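The way the two fields combine can be summarized as a per-entry computation; a
sketch of the effective flags HMM derives for entry i follows (paraphrasing
the logic, not quoting the implementation)::

    /* Driver-set bits in pfns[i] survive only where pfn_flags_mask allows
     * them; default_flags is then OR-ed in for the whole range. */
    uint64_t flags = range->default_flags |
                     (range->pfns[i] & range->pfn_flags_mask);

    bool fault       = flags & range->flags[HMM_PFN_VALID];
    bool write_fault = flags & range->flags[HMM_PFN_WRITE];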
 Represent and manage device memory from core kernel point of view
 =================================================================
 
-Several different designs were tried to support device memory. First one used
-a device specific data structure to keep information about migrated memory and
-HMM hooked itself in various places of mm code to handle any access to
+Several different designs were tried to support device memory. The first one
+used a device specific data structure to keep information about migrated memory
+and HMM hooked itself in various places of mm code to handle any access to
 addresses that were backed by device memory. It turns out that this ended up
 replicating most of the fields of struct page and also needed many kernel code
 paths to be updated to understand this new kind of memory.
@@ -341,7 +348,7 @@ The hmm_devmem_ops is where most of the important things are::
 
     struct hmm_devmem_ops {
         void (*free)(struct hmm_devmem *devmem, struct page *page);
-        int (*fault)(struct hmm_devmem *devmem,
+        vm_fault_t (*fault)(struct hmm_devmem *devmem,
                      struct vm_area_struct *vma,
                      unsigned long addr,
                      struct page *page,
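The hunk ends mid-signature. For orientation, here is a sketch of what a fault
handler typically does when the CPU touches a device page; the trailing
parameters and the dummy_migrate_back() helper are assumptions, not part of
this patch::

    static vm_fault_t dummy_devmem_fault(struct hmm_devmem *devmem,
                                         struct vm_area_struct *vma,
                                         unsigned long addr,
                                         struct page *page,
                                         unsigned int flags,
                                         pmd_t *pmdp)
    {
        /* The CPU cannot map device pages, so the only option is to
         * migrate the page backing addr back to main memory and let the
         * faulting access retry. */
        if (dummy_migrate_back(devmem, vma, addr, page))
            return VM_FAULT_SIGBUS;
        return 0;
    }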
@@ -417,9 +424,9 @@ willing to pay to keep all the code simpler.
 Memory cgroup (memcg) and rss accounting
 ========================================
 
-For now device memory is accounted as any regular page in rss counters (either
+For now, device memory is accounted as any regular page in rss counters (either
 anonymous if device page is used for anonymous, file if device page is used for
-file backed page or shmem if device page is used for shared memory). This is a
+file backed page, or shmem if device page is used for shared memory). This is a
 deliberate choice to keep existing applications, that might start using device
 memory without knowing about it, running unimpacted.
 
@@ -439,6 +446,6 @@ get more experience in how device memory is used and its impact on memory
 resource control.
 
 
-Note that device memory can never be pinned by device driver nor through GUP
+Note that device memory can never be pinned by a device driver nor through GUP
 and thus such memory is always freed upon process exit, or when the last
 reference is dropped in the case of shared memory or file backed memory.
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 044a36d7c3f8..740bb00853f5 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -418,9 +418,10 @@ struct hmm_mirror_ops {
          *
          * @mirror: pointer to struct hmm_mirror
          *
-         * This is called when the mm_struct is being released.
-         * The callback should make sure no references to the mirror occur
-         * after the callback returns.
+         * This is called when the mm_struct is being released. The callback
+         * must ensure that all access to any pages obtained from this mirror
+         * is halted before the callback returns. All future access should
+         * fault.
          */
         void (*release)(struct hmm_mirror *mirror);
 
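A sketch of a release() implementation meeting this strengthened contract; the
dummy_device type and its helpers are hypothetical, matching the mirror sketch
earlier in this patch::

    static void dummy_release(struct hmm_mirror *mirror)
    {
        struct dummy_device *ddev;

        ddev = container_of(mirror, struct dummy_device, mirror);
        /* The mm is going away: tear down every device mapping created
         * through this mirror so in-flight access stops now and any later
         * device access faults instead of touching freed pages. */
        dummy_device_unmap_all(ddev);
        dummy_device_flush_tlb(ddev);
    }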