Diffstat (limited to 'Documentation/vm/hugetlbpage.txt')
 -rw-r--r--  Documentation/vm/hugetlbpage.txt | 405
 1 file changed, 166 insertions(+), 239 deletions(-)
diff --git a/Documentation/vm/hugetlbpage.txt b/Documentation/vm/hugetlbpage.txt
index 82a7bd1800b2..457634c1e03e 100644
--- a/Documentation/vm/hugetlbpage.txt
+++ b/Documentation/vm/hugetlbpage.txt
@@ -11,23 +11,21 @@ This optimization is more critical now as bigger and bigger physical memories
 (several GBs) are more readily available.
 
 Users can use the huge page support in Linux kernel by either using the mmap
-system call or standard SYSv shared memory system calls (shmget, shmat).
+system call or standard SYSV shared memory system calls (shmget, shmat).
 
 First the Linux kernel needs to be built with the CONFIG_HUGETLBFS
 (present under "File systems") and CONFIG_HUGETLB_PAGE (selected
 automatically when CONFIG_HUGETLBFS is selected) configuration
 options.
 
-The kernel built with huge page support should show the number of configured
-huge pages in the system by running the "cat /proc/meminfo" command.
+The /proc/meminfo file provides information about the total number of
+persistent hugetlb pages in the kernel's huge page pool.  It also displays
+information about the number of free, reserved and surplus huge pages and the
+default huge page size.  The huge page size is needed for generating the
+proper alignment and size of the arguments to system calls that map huge page
+regions.
 
-/proc/meminfo also provides information about the total number of hugetlb
-pages configured in the kernel. It also displays information about the
-number of free hugetlb pages at any time. It also displays information about
-the configured huge page size - this is needed for generating the proper
-alignment and size of the arguments to the above system calls.
-
-The output of "cat /proc/meminfo" will have lines like:
+The output of "cat /proc/meminfo" will include lines like:
 
 .....
 HugePages_Total: vvv
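
The meminfo fields described in the new text above can also be read
programmatically.  A minimal C sketch follows; it is illustrative only, not
part of the patch, and assumes just the HugePages_*/Hugepagesize field names
shown above:

/* Illustrative only: print the HugePages_* and Hugepagesize lines
 * from /proc/meminfo, i.e. the fields this document describes. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "HugePages_", 10) ||
		    !strncmp(line, "Hugepagesize", 12))
			fputs(line, stdout);
	fclose(f);
	return 0;
}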
@@ -53,59 +51,63 @@ HugePages_Surp is short for "surplus," and is the number of huge pages in
 /proc/filesystems should also show a filesystem of type "hugetlbfs" configured
 in the kernel.
 
-/proc/sys/vm/nr_hugepages indicates the current number of configured hugetlb
-pages in the kernel. Super user can dynamically request more (or free some
-pre-configured) huge pages.
-The allocation (or deallocation) of hugetlb pages is possible only if there are
-enough physically contiguous free pages in system (freeing of huge pages is
-possible only if there are enough hugetlb pages free that can be transferred
-back to regular memory pool).
+/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge
+pages in the kernel's huge page pool.  "Persistent" huge pages will be
+returned to the huge page pool when freed by a task.  A user with root
+privileges can dynamically allocate more or free some persistent huge pages
+by increasing or decreasing the value of 'nr_hugepages'.
 
-Pages that are used as hugetlb pages are reserved inside the kernel and cannot
-be used for other purposes.
+Pages that are used as huge pages are reserved inside the kernel and cannot
+be used for other purposes.  Huge pages cannot be swapped out under
+memory pressure.
 
-Once the kernel with Hugetlb page support is built and running, a user can
-use either the mmap system call or shared memory system calls to start using
-the huge pages. It is required that the system administrator preallocate
-enough memory for huge page purposes.
+Once a number of huge pages have been pre-allocated to the kernel huge page
+pool, a user with appropriate privilege can use either the mmap system call
+or shared memory system calls to use the huge pages.  See the discussion of
+Using Huge Pages, below.
 
-The administrator can preallocate huge pages on the kernel boot command line by
-specifying the "hugepages=N" parameter, where 'N' = the number of huge pages
-requested. This is the most reliable method for preallocating huge pages as
-memory has not yet become fragmented.
+The administrator can allocate persistent huge pages on the kernel boot
+command line by specifying the "hugepages=N" parameter, where 'N' = the
+number of huge pages requested.  This is the most reliable method of
+allocating huge pages as memory has not yet become fragmented.
 
-Some platforms support multiple huge page sizes. To preallocate huge pages
+Some platforms support multiple huge page sizes.  To allocate huge pages
 of a specific size, one must precede the huge pages boot command parameters
 with a huge page size selection parameter "hugepagesz=<size>".  <size> must
 be specified in bytes with optional scale suffix [kKmMgG].  The default huge
 page size may be selected with the "default_hugepagesz=<size>" boot parameter.
 
-/proc/sys/vm/nr_hugepages indicates the current number of configured [default
-size] hugetlb pages in the kernel. Super user can dynamically request more
-(or free some pre-configured) huge pages.
-
-Use the following command to dynamically allocate/deallocate default sized
-huge pages:
+When multiple huge page sizes are supported, /proc/sys/vm/nr_hugepages
+indicates the current number of pre-allocated huge pages of the default size.
+Thus, one can use the following command to dynamically allocate/deallocate
+default sized persistent huge pages:
 
 	echo 20 > /proc/sys/vm/nr_hugepages
 
-This command will try to configure 20 default sized huge pages in the system.
+This command will try to adjust the number of default sized huge pages in the
+huge page pool to 20, allocating or freeing huge pages, as required.
+
 On a NUMA platform, the kernel will attempt to distribute the huge page pool
-over the all on-line nodes. These huge pages, allocated when nr_hugepages
-is increased, are called "persistent huge pages".
+over the set of allowed nodes specified by the NUMA memory policy of the
+task that modifies nr_hugepages.  The default for the allowed nodes--when the
+task has default memory policy--is all on-line nodes with memory.  Allowed
+nodes with insufficient available, contiguous memory for a huge page will be
+silently skipped when allocating persistent huge pages.  See the discussion
+below of the interaction of task memory policy, cpusets and per node attributes
+with the allocation and freeing of persistent huge pages.
 
 The success or failure of huge page allocation depends on the amount of
-physically contiguous memory that is preset in system at the time of the
+physically contiguous memory that is present in system at the time of the
 allocation attempt.  If the kernel is unable to allocate huge pages from
 some nodes in a NUMA system, it will attempt to make up the difference by
 allocating extra pages on other nodes with sufficient available contiguous
 memory, if any.
 
-System administrators may want to put this command in one of the local rc init
-files. This will enable the kernel to request huge pages early in the boot
-process when the possibility of getting physical contiguous pages is still
-very high. Administrators can verify the number of huge pages actually
-allocated by checking the sysctl or meminfo. To check the per node
+System administrators may want to put this command in one of the local rc
+init files.  This will enable the kernel to allocate huge pages early in
+the boot process when the possibility of getting physically contiguous pages
+is still very high.  Administrators can verify the number of huge pages
+actually allocated by checking the sysctl or meminfo.  To check the per node
 distribution of huge pages in a NUMA system, use:
 
 	cat /sys/devices/system/node/node*/meminfo | fgrep Huge
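
The same adjustment can be made from a program by writing to the sysctl file.
A minimal C sketch, equivalent to the "echo 20" command above; illustrative
only, and it needs root privileges:

/* Illustrative only: request 20 default sized persistent huge pages,
 * equivalent to "echo 20 > /proc/sys/vm/nr_hugepages". */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/nr_hugepages", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "%d\n", 20);
	fclose(f);
	/* Read the file back afterwards to see how many pages were actually
	 * allocated; the kernel may provide fewer if contiguous memory is
	 * scarce. */
	return 0;
}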
@@ -113,45 +115,47 @@ distribution of huge pages in a NUMA system, use:
 /proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of
 huge pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
 requested by applications.  Writing any non-zero value into this file
-indicates that the hugetlb subsystem is allowed to try to obtain "surplus"
-huge pages from the buddy allocator, when the normal pool is exhausted. As
-these surplus huge pages go out of use, they are freed back to the buddy
-allocator.
+indicates that the hugetlb subsystem is allowed to try to obtain that
+number of "surplus" huge pages from the kernel's normal page pool, when the
+persistent huge page pool is exhausted.  As these surplus huge pages become
+unused, they are freed back to the kernel's normal page pool.
 
-When increasing the huge page pool size via nr_hugepages, any surplus
+When increasing the huge page pool size via nr_hugepages, any existing surplus
 pages will first be promoted to persistent huge pages.  Then, additional
 huge pages will be allocated, if necessary and if possible, to fulfill
-the new huge page pool size.
+the new persistent huge page pool size.
 
-The administrator may shrink the pool of preallocated huge pages for
+The administrator may shrink the pool of persistent huge pages for
 the default huge page size by setting the nr_hugepages sysctl to a
 smaller value.  The kernel will attempt to balance the freeing of huge pages
-across all on-line nodes. Any free huge pages on the selected nodes will
-be freed back to the buddy allocator.
-
-Caveat: Shrinking the pool via nr_hugepages such that it becomes less
-than the number of huge pages in use will convert the balance to surplus
-huge pages even if it would exceed the overcommit value. As long as
-this condition holds, however, no more surplus huge pages will be
-allowed on the system until one of the two sysctls are increased
-sufficiently, or the surplus huge pages go out of use and are freed.
+across all nodes in the memory policy of the task modifying nr_hugepages.
+Any free huge pages on the selected nodes will be freed back to the kernel's
+normal page pool.
+
+Caveat: Shrinking the persistent huge page pool via nr_hugepages such that
+it becomes less than the number of huge pages in use will convert the balance
+of the in-use huge pages to surplus huge pages.  This will occur even if
+the number of surplus pages would exceed the overcommit value.  As long as
+this condition holds--that is, until nr_hugepages+nr_overcommit_hugepages is
+increased sufficiently, or the surplus huge pages go out of use and are freed--
+no more surplus huge pages will be allowed to be allocated.
 
 With support for multiple huge page pools at run-time available, much of
-the huge page userspace interface has been duplicated in sysfs. The above
-information applies to the default huge page size which will be
-controlled by the /proc interfaces for backwards compatibility. The root
-huge page control directory in sysfs is:
+the huge page userspace interface in /proc/sys/vm has been duplicated in sysfs.
+The /proc interfaces discussed above have been retained for backwards
+compatibility.  The root huge page control directory in sysfs is:
 
 	/sys/kernel/mm/hugepages
 
 For each huge page size supported by the running kernel, a subdirectory
-will exist, of the form
+will exist, of the form:
 
 	hugepages-${size}kB
 
 Inside each of these directories, the same set of files will exist:
 
 	nr_hugepages
+	nr_hugepages_mempolicy
 	nr_overcommit_hugepages
 	free_hugepages
 	resv_hugepages
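
These per-size directories can also be walked programmatically.  A minimal C
sketch that lists each hugepages-<size>kB subdirectory and its nr_hugepages
value; illustrative only, assuming just the sysfs layout described above:

/* Illustrative only: list each hugepages-<size>kB directory under
 * /sys/kernel/mm/hugepages and print its nr_hugepages value. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void)
{
	DIR *d = opendir("/sys/kernel/mm/hugepages");
	struct dirent *ent;
	char path[512], buf[64];
	FILE *f;

	if (!d) {
		perror("opendir");
		return 1;
	}
	while ((ent = readdir(d)) != NULL) {
		if (strncmp(ent->d_name, "hugepages-", 10))
			continue;
		snprintf(path, sizeof(path),
			 "/sys/kernel/mm/hugepages/%s/nr_hugepages",
			 ent->d_name);
		f = fopen(path, "r");
		if (f && fgets(buf, sizeof(buf), f))
			printf("%s: %s", ent->d_name, buf);
		if (f)
			fclose(f);
	}
	closedir(d);
	return 0;
}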
@@ -159,6 +163,102 @@ Inside each of these directories, the same set of files will exist:
 
 which function as described above for the default huge page-sized case.
 
+
+Interaction of Task Memory Policy with Huge Page Allocation/Freeing
+
+Whether huge pages are allocated and freed via the /proc interface or
+the /sysfs interface using the nr_hugepages_mempolicy attribute, the NUMA
+nodes from which huge pages are allocated or freed are controlled by the
+NUMA memory policy of the task that modifies the nr_hugepages_mempolicy
+sysctl or attribute.  When the nr_hugepages attribute is used, mempolicy
+is ignored.
+
+The recommended method to allocate or free huge pages to/from the kernel
+huge page pool, using the nr_hugepages example above, is:
+
+	numactl --interleave <node-list> echo 20 \
+		>/proc/sys/vm/nr_hugepages_mempolicy
+
+or, more succinctly:
+
+	numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy
+
+This will allocate or free abs(20 - nr_hugepages) huge pages to or from the
+nodes specified in <node-list>, depending on whether the number of persistent
+huge pages is initially less than or greater than 20, respectively.  No huge
+pages will be allocated nor freed on any node not in the specified <node-list>.
+
+When adjusting the persistent hugepage count via nr_hugepages_mempolicy, any
+memory policy mode--bind, preferred, local or interleave--may be used.  The
+resulting effect on persistent huge page allocation is as follows:
+
+1) Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.txt],
+   persistent huge pages will be distributed across the node or nodes
+   specified in the mempolicy as if "interleave" had been specified.
+   However, if a node in the policy does not contain sufficient contiguous
+   memory for a huge page, the allocation will not "fallback" to the nearest
+   neighbor node with sufficient contiguous memory.  To do this would cause
+   undesirable imbalance in the distribution of the huge page pool, or
+   possibly, allocation of persistent huge pages on nodes not allowed by
+   the task's memory policy.
+
+2) One or more nodes may be specified with the bind or interleave policy.
+   If more than one node is specified with the preferred policy, only the
+   lowest numeric id will be used.  Local policy will select the node where
+   the task is running at the time the nodes_allowed mask is constructed.
+   For local policy to be deterministic, the task must be bound to a cpu or
+   cpus in a single node.  Otherwise, the task could be migrated to some
+   other node at any time after launch and the resulting node will be
+   indeterminate.  Thus, local policy is not very useful for this purpose.
+   Any of the other mempolicy modes may be used to specify a single node.
+
+3) The nodes allowed mask will be derived from any non-default task mempolicy,
+   whether this policy was set explicitly by the task itself or one of its
+   ancestors, such as numactl.  This means that if the task is invoked from a
+   shell with non-default policy, that policy will be used.  One can specify a
+   node list of "all" with numactl --interleave or --membind [-m] to achieve
+   interleaving over all nodes in the system or cpuset.
+
+4) Any task mempolicy specified--e.g., using numactl--will be constrained by
+   the resource limits of any cpuset in which the task runs.  Thus, there will
+   be no way for a task with non-default policy running in a cpuset with a
+   subset of the system nodes to allocate huge pages outside the cpuset
+   without first moving to a cpuset that contains all of the desired nodes.
+
+5) Boot-time huge page allocation attempts to distribute the requested number
+   of huge pages over all on-line nodes with memory.
+
+Per Node Hugepages Attributes
+
+A subset of the contents of the root huge page control directory in sysfs,
+described above, will be replicated under the system device of each
+NUMA node with memory in:
+
+	/sys/devices/system/node/node[0-9]*/hugepages/
+
+Under this directory, the subdirectory for each supported huge page size
+contains the following attribute files:
+
+	nr_hugepages
+	free_hugepages
+	surplus_hugepages
+
+The 'free_' and 'surplus_' attribute files are read-only.  They return the
+number of free and surplus [overcommitted] huge pages, respectively, on the
+parent node.
+
+The nr_hugepages attribute returns the total number of huge pages on the
+specified node.  When this attribute is written, the number of persistent huge
+pages on the parent node will be adjusted to the specified value, if sufficient
+resources exist, regardless of the task's mempolicy or cpuset constraints.
+
+Note that the numbers of overcommit and reserve pages remain global quantities,
+as we don't know until fault time, when the faulting task's mempolicy is
+applied, from which node the huge page allocation will be attempted.
+
+
+Using Huge Pages
+
 If the user applications are going to request huge pages using mmap system
 call, then it is required that system administrator mount a file system of
 type hugetlbfs:
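
Mounting is normally done with "mount -t hugetlbfs" from a boot script; for
illustration, a sketch of the equivalent mount(2) call from C follows.  The
/mnt mount point is only an example, and the call requires root privileges:

/* Illustrative only: mount a hugetlbfs instance on /mnt, the C
 * equivalent of "mount -t hugetlbfs none /mnt". */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	if (mount("none", "/mnt", "hugetlbfs", 0, NULL) != 0) {
		perror("mount");
		return 1;
	}
	printf("hugetlbfs mounted on /mnt\n");
	return 0;
}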
@@ -199,184 +299,11 @@ map_hugetlb.c.
 *******************************************************************
 
 /*
- * Example of using huge page memory in a user application using Sys V shared
- * memory system calls.  In this example the app is requesting 256MB of
- * memory that is backed by huge pages.  The application uses the flag
- * SHM_HUGETLB in the shmget system call to inform the kernel that it is
- * requesting huge pages.
- *
- * For the ia64 architecture, the Linux kernel reserves Region number 4 for
- * huge pages.  That means the addresses starting with 0x800000... will need
- * to be specified.  Specifying a fixed address is not required on ppc64,
- * i386 or x86_64.
- *
- * Note: The default shared memory limit is quite low on many kernels,
- * you may need to increase it via:
- *
- * echo 268435456 > /proc/sys/kernel/shmmax
- *
- * This will increase the maximum size per shared memory segment to 256MB.
- * The other limit that you will hit eventually is shmall which is the
- * total amount of shared memory in pages. To set it to 16GB on a system
- * with a 4kB pagesize do:
- *
- * echo 4194304 > /proc/sys/kernel/shmall
+ * hugepage-shm:  see Documentation/vm/hugepage-shm.c
  */
-#include <stdlib.h>
-#include <stdio.h>
-#include <sys/types.h>
-#include <sys/ipc.h>
-#include <sys/shm.h>
-#include <sys/mman.h>
-
-#ifndef SHM_HUGETLB
-#define SHM_HUGETLB 04000
-#endif
-
-#define LENGTH (256UL*1024*1024)
-
-#define dprintf(x)  printf(x)
-
-/* Only ia64 requires this */
-#ifdef __ia64__
-#define ADDR (void *)(0x8000000000000000UL)
-#define SHMAT_FLAGS (SHM_RND)
-#else
-#define ADDR (void *)(0x0UL)
-#define SHMAT_FLAGS (0)
-#endif
-
-int main(void)
-{
-	int shmid;
-	unsigned long i;
-	char *shmaddr;
-
-	if ((shmid = shmget(2, LENGTH,
-			    SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W)) < 0) {
-		perror("shmget");
-		exit(1);
-	}
-	printf("shmid: 0x%x\n", shmid);
-
-	shmaddr = shmat(shmid, ADDR, SHMAT_FLAGS);
-	if (shmaddr == (char *)-1) {
-		perror("Shared memory attach failure");
-		shmctl(shmid, IPC_RMID, NULL);
-		exit(2);
-	}
-	printf("shmaddr: %p\n", shmaddr);
-
-	dprintf("Starting the writes:\n");
-	for (i = 0; i < LENGTH; i++) {
-		shmaddr[i] = (char)(i);
-		if (!(i % (1024 * 1024)))
-			dprintf(".");
-	}
-	dprintf("\n");
-
-	dprintf("Starting the Check...");
-	for (i = 0; i < LENGTH; i++)
-		if (shmaddr[i] != (char)i)
-			printf("\nIndex %lu mismatched\n", i);
-	dprintf("Done.\n");
-
-	if (shmdt((const void *)shmaddr) != 0) {
-		perror("Detach failure");
-		shmctl(shmid, IPC_RMID, NULL);
-		exit(3);
-	}
-
-	shmctl(shmid, IPC_RMID, NULL);
-
-	return 0;
-}
 
 *******************************************************************
 
 /*
- * Example of using huge page memory in a user application using the mmap
- * system call.  Before running this application, make sure that the
- * administrator has mounted the hugetlbfs filesystem (on some directory
- * like /mnt) using the command mount -t hugetlbfs nodev /mnt. In this
- * example, the app is requesting memory of size 256MB that is backed by
- * huge pages.
- *
- * For ia64 architecture, Linux kernel reserves Region number 4 for huge pages.
- * That means the addresses starting with 0x800000... will need to be
- * specified.  Specifying a fixed address is not required on ppc64, i386
- * or x86_64.
+ * hugepage-mmap:  see Documentation/vm/hugepage-mmap.c
  */
-#include <stdlib.h>
-#include <stdio.h>
-#include <unistd.h>
-#include <sys/mman.h>
-#include <fcntl.h>
-
-#define FILE_NAME "/mnt/hugepagefile"
-#define LENGTH (256UL*1024*1024)
-#define PROTECTION (PROT_READ | PROT_WRITE)
-
-/* Only ia64 requires this */
-#ifdef __ia64__
-#define ADDR (void *)(0x8000000000000000UL)
-#define FLAGS (MAP_SHARED | MAP_FIXED)
-#else
-#define ADDR (void *)(0x0UL)
-#define FLAGS (MAP_SHARED)
-#endif
-
-void check_bytes(char *addr)
-{
-	printf("First hex is %x\n", *((unsigned int *)addr));
-}
-
-void write_bytes(char *addr)
-{
-	unsigned long i;
-
-	for (i = 0; i < LENGTH; i++)
-		*(addr + i) = (char)i;
-}
-
-void read_bytes(char *addr)
-{
-	unsigned long i;
-
-	check_bytes(addr);
-	for (i = 0; i < LENGTH; i++)
-		if (*(addr + i) != (char)i) {
-			printf("Mismatch at %lu\n", i);
-			break;
-		}
-}
-
-int main(void)
-{
-	void *addr;
-	int fd;
-
-	fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755);
-	if (fd < 0) {
-		perror("Open failed");
-		exit(1);
-	}
-
-	addr = mmap(ADDR, LENGTH, PROTECTION, FLAGS, fd, 0);
-	if (addr == MAP_FAILED) {
-		perror("mmap");
-		unlink(FILE_NAME);
-		exit(1);
-	}
-
-	printf("Returned address is %p\n", addr);
-	check_bytes(addr);
-	write_bytes(addr);
-	read_bytes(addr);
-
-	munmap(addr, LENGTH);
-	close(fd);
-	unlink(FILE_NAME);
-
-	return 0;
-}
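
For orientation, a heavily condensed sketch of the mmap example removed above
is kept here; the file name /mnt/hugepagefile and the 256MB length follow the
removed code, and the ia64 fixed-address handling is omitted.  The mapping
succeeds only if the huge page pool holds enough free huge pages:

/* Illustrative only: map 256MB backed by huge pages from a file on a
 * mounted hugetlbfs; condensed from the example removed above. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#define FILE_NAME "/mnt/hugepagefile"
#define LENGTH (256UL*1024*1024)

int main(void)
{
	int fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755);
	void *addr;

	if (fd < 0) {
		perror("open");
		exit(1);
	}
	addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		unlink(FILE_NAME);
		exit(1);
	}
	printf("Returned address is %p\n", addr);
	munmap(addr, LENGTH);
	close(fd);
	unlink(FILE_NAME);
	return 0;
}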