author    Linus Torvalds <torvalds@ppc970.osdl.org>  2005-04-16 18:20:36 -0400
committer Linus Torvalds <torvalds@ppc970.osdl.org>  2005-04-16 18:20:36 -0400
commit    1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 (patch)
tree      0bba044c4ce775e45a88a51686b5d9f90697ea9d /Documentation/vm
tags      Linux-2.6.12-rc2, v2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!
Diffstat (limited to 'Documentation/vm')
-rw-r--r--  Documentation/vm/balance                  93
-rw-r--r--  Documentation/vm/hugetlbpage.txt         284
-rw-r--r--  Documentation/vm/locking                 131
-rw-r--r--  Documentation/vm/numa                     41
-rw-r--r--  Documentation/vm/overcommit-accounting    73

5 files changed, 622 insertions(+), 0 deletions(-)
diff --git a/Documentation/vm/balance b/Documentation/vm/balance
new file mode 100644
index 000000000000..bd3d31bc4915
--- /dev/null
+++ b/Documentation/vm/balance
@@ -0,0 +1,93 @@
Started Jan 2000 by Kanoj Sarcar <kanoj@sgi.com>

Memory balancing is needed for non __GFP_WAIT as well as for non
__GFP_IO allocations.

There are two reasons to request a non __GFP_WAIT allocation: the
caller cannot sleep (typically because it is in interrupt context), or
it does not want to incur the cost of page stealing and possible swap
I/O.

__GFP_IO allocation requests are made to prevent file system deadlocks.

In the absence of non-sleepable allocation requests, it seems
detrimental to be doing balancing. Page reclamation can be kicked off
lazily, that is, only when needed (i.e., when a zone's free memory
drops to 0), instead of making it a proactive process.

That being said, the kernel should try to fulfill requests for direct
mapped pages from the direct mapped pool, instead of falling back on
the dma pool, so as to keep the dma pool filled for dma requests (atomic
or not). A similar argument applies to highmem and direct mapped pages.
OTOH, if there are a lot of free dma pages, it is preferable to satisfy
regular memory requests by allocating one from the dma pool, instead
of incurring the overhead of regular zone balancing.

In 2.2, memory balancing/page reclamation would kick off only when the
_total_ number of free pages fell below 1/64th of total memory. With the
right ratio of dma and regular memory, it is quite possible that balancing
would not be done even when the dma zone was completely empty. 2.2 has
been running production machines of varying memory sizes, and seems to be
doing fine even in the presence of this problem. In 2.3, due to
HIGHMEM, this problem is aggravated.

In 2.3, zone balancing can be done in one of two ways: depending on the
zone size (and possibly the size of lower class zones), we can decide
at init time how many free pages we should aim for while balancing any
zone. The good part is that, while balancing, we do not need to look at
the sizes of lower class zones; the bad part is that we might balance
too frequently because we ignore possibly lower usage in the lower
class zones. Also, with a slight change in the allocation routine, it
is possible to reduce the memclass() macro to a simple equality check.

Another possible solution is that we balance only when the free memory
of a zone _and_ all its lower class zones falls below 1/64th of the
total memory in the zone and its lower class zones. This fixes the 2.2
balancing problem, and stays as close to 2.2 behavior as possible. Also,
the balancing algorithm works the same way on the various architectures,
which have different numbers and types of zones. If we wanted to get
fancy, we could assign different weights to free pages in different
zones in the future.

Note that if the size of the regular zone is huge compared to the dma
zone, it becomes less significant to consider the free dma pages while
deciding whether to balance the regular zone. The first solution
becomes more attractive then.

The appended patch implements the second solution. It also "fixes" two
problems: first, kswapd is woken up as in 2.2 on low memory conditions
for non-sleepable allocations. Second, the HIGHMEM zone is also balanced,
so as to give a fighting chance for replace_with_highmem() to get a
HIGHMEM page, as well as to ensure that HIGHMEM allocations do not
fall back into the regular zone. This also makes sure that HIGHMEM
pages are not leaked (for example, in situations where a HIGHMEM page
is in the swapcache but is not being used by anyone).

kswapd also needs to know about the zones it should balance. kswapd is
primarily needed in a situation where balancing cannot be done,
probably because all allocation requests are coming from intr context
and all process contexts are sleeping. For 2.3, kswapd does not really
need to balance the highmem zone, since intr context does not request
highmem pages. kswapd looks at the zone_wake_kswapd field in the zone
structure to decide whether a zone needs balancing.

Page stealing from process memory and shm is done if stealing the page would
alleviate memory pressure on any zone in the page's node that has fallen below
its watermark.

pages_min/pages_low/pages_high/low_on_memory/zone_wake_kswapd: These are
per-zone fields, used to determine when a zone needs to be balanced. When
the number of free pages falls below pages_min, the hysteresis field
low_on_memory gets set. This stays set until the number of free pages
reaches pages_high. While low_on_memory is set, page allocation requests
will try to free some pages in the zone (provided __GFP_WAIT is set in
the request). Orthogonal to this is the decision to poke kswapd to free
some zone pages. That decision is not hysteresis-based, and is made when
the number of free pages is below pages_low, in which case
zone_wake_kswapd is also set.
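
As a rough sketch (not the actual kernel code; it merely restates the
rules above using the field names from this document), the per-zone
decision could look like:

	/* Sketch only: the hysteresis-based reclaim decision plus the
	 * non-hysteresis kswapd wakeup described above. */
	static int zone_low_on_memory(zone_t *zone)
	{
		if (zone->free_pages < zone->pages_min)
			zone->low_on_memory = 1;    /* enter low-memory state */
		else if (zone->free_pages >= zone->pages_high)
			zone->low_on_memory = 0;    /* leave it only at pages_high */

		if (zone->free_pages < zone->pages_low)
			zone->zone_wake_kswapd = 1; /* poke kswapd, no hysteresis */

		return zone->low_on_memory;
	}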
(Good) Ideas that I have heard:
1. Dynamic experience should influence balancing: number of failed requests
   for a zone can be tracked and fed into the balancing scheme (jalvo@mbay.net)
2. Implement a replace_with_highmem()-like replace_with_regular() to preserve
   dma pages. (lkd@tantalophile.demon.co.uk)
diff --git a/Documentation/vm/hugetlbpage.txt b/Documentation/vm/hugetlbpage.txt
new file mode 100644
index 000000000000..1b9bcd1fe98b
--- /dev/null
+++ b/Documentation/vm/hugetlbpage.txt
@@ -0,0 +1,284 @@

The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel. This support is built on top of the multiple page size
support provided by most modern architectures. For example, the i386
architecture supports 4K and 4M (2M in PAE mode) page sizes, ia64 supports
multiple page sizes (4K, 8K, 64K, 256K, 1M, 4M, 16M and 256M), and ppc64
supports 4K and 16M. A TLB is a cache of virtual-to-physical translations,
and is typically a very scarce resource on a processor. Operating systems
try to make the best use of the limited number of TLB entries. This
optimization is more critical now that bigger and bigger physical memories
(several GBs) are readily available.

Users can take advantage of huge page support in the Linux kernel either
via the mmap system call or via the standard SysV shared memory system
calls (shmget, shmat).

First, the Linux kernel needs to be built with the CONFIG_HUGETLB_PAGE
(present under "Processor types and features") and CONFIG_HUGETLBFS
(present under "File systems") config options.

A kernel built with hugepage support makes the hugepage state visible
through /proc/meminfo: the total number of hugetlb pages configured in
the kernel, the number of hugetlb pages currently free, and the
configured hugepage size. The hugepage size is needed for generating the
properly aligned and sized arguments to the system calls mentioned above.

The output of "cat /proc/meminfo" will include lines like:

.....
HugePages_Total: xxx
HugePages_Free:  yyy
Hugepagesize:    zzz KB

/proc/filesystems should also show a filesystem of type "hugetlbfs"
configured in the kernel.

/proc/sys/vm/nr_hugepages indicates the current number of configured
hugetlb pages in the kernel. The superuser can dynamically request more
(or free some pre-configured) hugepages.
Allocation (or deallocation) of hugetlb pages is possible only if there
are enough physically contiguous free pages in the system (freeing of
hugepages is possible only if there are enough free hugetlb pages that
can be transferred back to the regular memory pool).

Pages that are used as hugetlb pages are reserved inside the kernel and
cannot be used for other purposes.

Once a kernel with hugetlb page support is built and running, a user can
use either the mmap system call or the shared memory system calls to
start using the huge pages. It is required that the system administrator
preallocate enough memory for huge page purposes.

Use the following command to dynamically allocate/deallocate hugepages:

echo 20 > /proc/sys/vm/nr_hugepages

This command will try to configure 20 hugepages in the system. The success
or failure of the allocation depends on the amount of physically contiguous
memory that is present in the system at that time. System administrators
may want to put this command in one of the local rc init files, so that
the kernel requests huge pages early in the boot process (when the
possibility of getting physically contiguous pages is still very high).

If user applications are going to request hugepages using the mmap system
call, then it is required that the system administrator mount a filesystem
of type hugetlbfs:

mount -t hugetlbfs none /mnt/huge -o uid=<value>,gid=<value>,mode=<value>,size=<value>,nr_inodes=<value>

This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
/mnt/huge. Any files created on /mnt/huge use hugepages. The uid and gid
options set the owner and group of the root of the filesystem. By default
the uid and gid of the current process are taken. The mode option sets the
mode of the root of the filesystem to value & 0777. This value is given in
octal. By default the value 0755 is picked. The size option sets the maximum
amount of memory (huge pages) allowed for that filesystem (/mnt/huge). The
size is rounded down to HPAGE_SIZE. The option nr_inodes sets the maximum
number of inodes that /mnt/huge can use. If the size or nr_inodes options
are not provided on the command line then no limits are set. For the size
and nr_inodes options, you can use [G|g]/[M|m]/[K|k] to represent
giga/mega/kilo. For example, size=2K has the same meaning as size=2048.
An example is given at the end of this document.
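
As a concrete illustration (the uid, gid and limits here are hypothetical
values, not recommendations):

mount -t hugetlbfs none /mnt/huge -o uid=1001,gid=100,mode=0700,size=256M,nr_inodes=16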

read and write system calls are not supported on files that reside on
hugetlb filesystems.

The regular chown, chgrp and chmod commands (with the right permissions)
can be used to change the file attributes on hugetlbfs.

Also, it is important to note that no such mount command is required if
applications are going to use only the shmat/shmget system calls. Users
who wish to use hugetlb pages via shared memory segments should be
members of a supplementary group, and the system admin needs to configure
that gid into /proc/sys/vm/hugetlb_shm_group. It is possible for the same
or different applications to use any combination of mmap and shm* calls,
though mounting the filesystem is required for using mmap.
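
For example, if the administrator has created a supplementary group for
hugepage users with gid 101 (a hypothetical value):

echo 101 > /proc/sys/vm/hugetlb_shm_group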

*******************************************************************

/*
 * Example of using hugepage memory in a user application using Sys V shared
 * memory system calls. In this example the app is requesting 256MB of
 * memory that is backed by huge pages. The application uses the flag
 * SHM_HUGETLB in the shmget system call to inform the kernel that it is
 * requesting hugepages.
 *
 * For the ia64 architecture, the Linux kernel reserves Region number 4 for
 * hugepages. That means the addresses starting with 0x800000... will need
 * to be specified. Specifying a fixed address is not required on ppc64,
 * i386 or x86_64.
 *
 * Note: The default shared memory limit is quite low on many kernels,
 * you may need to increase it via:
 *
 * echo 268435456 > /proc/sys/kernel/shmmax
 *
 * This will increase the maximum size per shared memory segment to 256MB.
 * The other limit that you will hit eventually is shmall which is the
 * total amount of shared memory in pages. To set it to 16GB on a system
 * with a 4kB pagesize do:
 *
 * echo 4194304 > /proc/sys/kernel/shmall
 */
#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/mman.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000
#endif

#define LENGTH (256UL*1024*1024)

#define dprintf(x)  printf(x)

/* Only ia64 requires this */
#ifdef __ia64__
#define ADDR (void *)(0x8000000000000000UL)
#define SHMAT_FLAGS (SHM_RND)
#else
#define ADDR (void *)(0x0UL)
#define SHMAT_FLAGS (0)
#endif

int main(void)
{
	int shmid;
	unsigned long i;
	char *shmaddr;

	if ((shmid = shmget(2, LENGTH,
			    SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W)) < 0) {
		perror("shmget");
		exit(1);
	}
	printf("shmid: 0x%x\n", shmid);

	shmaddr = shmat(shmid, ADDR, SHMAT_FLAGS);
	if (shmaddr == (char *)-1) {
		perror("Shared memory attach failure");
		shmctl(shmid, IPC_RMID, NULL);
		exit(2);
	}
	printf("shmaddr: %p\n", shmaddr);

	dprintf("Starting the writes:\n");
	for (i = 0; i < LENGTH; i++) {
		shmaddr[i] = (char)(i);
		if (!(i % (1024 * 1024)))
			dprintf(".");
	}
	dprintf("\n");

	dprintf("Starting the Check...");
	for (i = 0; i < LENGTH; i++)
		if (shmaddr[i] != (char)i)
			printf("\nIndex %lu mismatched\n", i);
	dprintf("Done.\n");

	if (shmdt((const void *)shmaddr) != 0) {
		perror("Detach failure");
		shmctl(shmid, IPC_RMID, NULL);
		exit(3);
	}

	shmctl(shmid, IPC_RMID, NULL);

	return 0;
}

*******************************************************************

/*
 * Example of using hugepage memory in a user application using the mmap
 * system call. Before running this application, make sure that the
 * administrator has mounted the hugetlbfs filesystem (on some directory
 * like /mnt) using the command "mount -t hugetlbfs nodev /mnt". In this
 * example, the app is requesting memory of size 256MB that is backed by
 * huge pages.
 *
 * For the ia64 architecture, the Linux kernel reserves Region number 4
 * for hugepages. That means the addresses starting with 0x800000... will
 * need to be specified. Specifying a fixed address is not required on
 * ppc64, i386 or x86_64.
 */
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <fcntl.h>

#define FILE_NAME "/mnt/hugepagefile"
#define LENGTH (256UL*1024*1024)
#define PROTECTION (PROT_READ | PROT_WRITE)

/* Only ia64 requires this */
#ifdef __ia64__
#define ADDR (void *)(0x8000000000000000UL)
#define FLAGS (MAP_SHARED | MAP_FIXED)
#else
#define ADDR (void *)(0x0UL)
#define FLAGS (MAP_SHARED)
#endif

void check_bytes(char *addr)
{
	printf("First hex is %x\n", *((unsigned int *)addr));
}

void write_bytes(char *addr)
{
	unsigned long i;

	for (i = 0; i < LENGTH; i++)
		*(addr + i) = (char)i;
}

void read_bytes(char *addr)
{
	unsigned long i;

	check_bytes(addr);
	for (i = 0; i < LENGTH; i++)
		if (*(addr + i) != (char)i) {
			printf("Mismatch at %lu\n", i);
			break;
		}
}

int main(void)
{
	void *addr;
	int fd;

	fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755);
	if (fd < 0) {
		perror("Open failed");
		exit(1);
	}

	addr = mmap(ADDR, LENGTH, PROTECTION, FLAGS, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		unlink(FILE_NAME);
		exit(1);
	}

	printf("Returned address is %p\n", addr);
	check_bytes(addr);
	write_bytes(addr);
	read_bytes(addr);

	munmap(addr, LENGTH);
	close(fd);
	unlink(FILE_NAME);

	return 0;
}
diff --git a/Documentation/vm/locking b/Documentation/vm/locking
new file mode 100644
index 000000000000..c3ef09ae3bb1
--- /dev/null
+++ b/Documentation/vm/locking
@@ -0,0 +1,131 @@
Started Oct 1999 by Kanoj Sarcar <kanojsarcar@yahoo.com>

The intent of this file is to have an up-to-date, running commentary
from different people about how locking and synchronization are done
in the Linux vm code.

page_table_lock & mmap_sem
--------------------------

Page stealers pick processes out of the process pool and scan for
the best process to steal pages from. To guarantee the existence
of the victim mm, a mm_count inc and a mmdrop are done in swap_out().
Page stealers hold kernel_lock to protect against a bunch of races.
The vma list of the victim mm is also scanned by the stealer,
and the page_table_lock is used to preserve list sanity against the
process adding to or deleting from the list. This also guarantees
existence of the vma. Vma existence is not guaranteed once
try_to_swap_out() drops the page_table_lock. To guarantee the existence
of the underlying file structure, a get_file is done before the
swapout() method is invoked. The page passed into swapout() is
guaranteed not to be reused for a different purpose, because the page
reference count due to its presence in the user's pte is not released
until after swapout() returns.

Any code that modifies the vmlist, or the vm_start/vm_end/
vm_flags:VM_LOCKED/vm_next of any vma *in the list* must prevent
kswapd from looking at the chain.

The rules are (rules 1 and 2 are sketched in code after the list):
1. To scan the vmlist (look but don't touch) you must hold the
   mmap_sem with read bias, i.e. down_read(&mm->mmap_sem)
2. To modify the vmlist you need to hold the mmap_sem with
   read&write bias, i.e. down_write(&mm->mmap_sem) *AND*
   you need to take the page_table_lock.
3. The swapper takes _just_ the page_table_lock; this is done
   because the mmap_sem can be an extremely long lived lock
   and the swapper just cannot sleep on that.
4. The exception to this rule is expand_stack, which just
   takes the read lock and the page_table_lock; this is ok
   because it doesn't really modify fields anybody relies on.
5. You must be able to guarantee that while holding the mmap_sem
   or the page_table_lock of mm A, you will not try to get either
   lock for mm B.
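
A minimal sketch of rules 1 and 2 (illustrative only; inspect() and
new_vma are hypothetical, the locking calls are the real ones):

	/* Rule 1: scan the vmlist -- look but don't touch. */
	down_read(&mm->mmap_sem);
	for (vma = mm->mmap; vma; vma = vma->vm_next)
		inspect(vma);
	up_read(&mm->mmap_sem);

	/* Rule 2: modify the vmlist -- write bias *AND* page_table_lock. */
	down_write(&mm->mmap_sem);
	spin_lock(&mm->page_table_lock);
	insert_vm_struct(mm, new_vma);
	spin_unlock(&mm->page_table_lock);
	up_write(&mm->mmap_sem);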

The caveats are:
1. find_vma() makes use of, and updates, the mmap_cache pointer hint.
   The update of mmap_cache is racy (a page stealer can race with other
   code that invokes find_vma with mmap_sem held), but that is okay,
   since it is a hint. This can be fixed, if desired, by having find_vma
   grab the page_table_lock.


The code paths that add/delete elements from the vmlist chain are:
1. callers of insert_vm_struct
2. callers of merge_segments
3. callers of avl_remove

The code paths that change vm_start/vm_end/vm_flags:VM_LOCKED of vmas
on the list are:
1. expand_stack
2. mprotect
3. mlock
4. mremap

It is advisable that changes to vm_start/vm_end be protected, although
in some cases it is not really needed. E.g., vm_start is modified by
expand_stack(), and it is hard to come up with a destructive scenario
in this case even without the vmlist protection.

The page_table_lock nests with the inode i_mmap_lock and the kmem cache
c_spinlock spinlocks. This is okay, since the kmem code asks for pages after
dropping c_spinlock. The page_table_lock also nests with pagecache_lock and
pagemap_lru_lock spinlocks, and no code asks for memory with these locks
held.

The page_table_lock is grabbed while holding the kernel_lock spinning monitor.

The page_table_lock is a spin lock.

Note: PTL can also be used to guarantee that no new clones using the
mm start up ... this is a loose form of stability on mm_users. For
example, it is used in copy_mm to protect against a racing tlb_gather_mmu
single address space optimization, so that the zap_page_range (from
vmtruncate) does not miss sending IPIs to cloned threads that might
be spawned underneath it and go to user mode to drag ptes into TLBs.

swap_list_lock/swap_device_lock
-------------------------------
The swap devices are chained in priority order from the "swap_list" header.
The "swap_list" is used for the round-robin swaphandle allocation strategy.
The number of free swaphandles is maintained in "nr_swap_pages". These two
together are protected by the swap_list_lock.

The swap_device_lock, which is per swap device, protects the reference
counts on the corresponding swaphandles, maintained in the "swap_map"
array, and the "highest_bit" and "lowest_bit" fields.

Both of these are spinlocks, and are never acquired from intr level. The
locking hierarchy is swap_list_lock -> swap_device_lock.
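
In code, the hierarchy looks like this (sketch; p is the swap device,
and the lock/unlock helpers are assumed to match the names above):

	swap_list_lock();	/* outer: protects swap_list, nr_swap_pages */
	swap_device_lock(p);	/* inner: protects p's swap_map, highest_bit,
				 * lowest_bit */
	/* ... allocate or free a swaphandle ... */
	swap_device_unlock(p);
	swap_list_unlock();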

There is a race between swap space deletion or async readahead swapins
deciding whether a swap handle is being used (i.e. worthy of being read
in from disk) and an unmap -> swap_free making the handle unused. To
prevent it, the swap delete and readahead code grab a temporary
reference on the swaphandle, which prevents warning messages from
swap_duplicate <- read_swap_cache_async.

Swap cache locking
------------------
Pages are added into the swap cache with kernel_lock held, to make sure
that multiple pages are not being added (and hence lost) by associating
all of them with the same swaphandle.

Pages are guaranteed not to be removed from the scache if the page is
"shared": i.e., other processes hold a reference on the page or the
associated swap handle. The only code that does not follow this rule is
shrink_mmap, which deletes pages from the swap cache if no process has
a reference on the page (multiple processes might have references on the
corresponding swap handle, though). lookup_swap_cache() races with
shrink_mmap when establishing a reference on a scache page, so it must
check whether the page it located is still in the swapcache, or whether
shrink_mmap deleted it. (This race exists because shrink_mmap looks at
the page ref count with pagecache_lock held, but then drops
pagecache_lock before deleting the page from the scache.)

do_wp_page and do_swap_page have MP races in them while trying to figure
out whether a page is "shared" by looking at the page_count + swap_count.
To preserve the sum of the counts, the page lock _must_ be acquired before
calling is_page_shared (else processes might switch their swap_count refs
to page count refs after the page count ref has been snapshotted).

Swap device deletion code currently breaks all the scache assumptions,
since it grabs neither mmap_sem nor page_table_lock.
diff --git a/Documentation/vm/numa b/Documentation/vm/numa
new file mode 100644
index 000000000000..4b8db1bd3b78
--- /dev/null
+++ b/Documentation/vm/numa
@@ -0,0 +1,41 @@
Started Nov 1999 by Kanoj Sarcar <kanoj@sgi.com>

The intent of this file is to have an up-to-date, running commentary
from different people about NUMA specific code in the Linux vm.

What is NUMA? It is an architecture where the memory access times
for different regions of memory from a given processor vary
according to the "distance" of the memory region from the processor.
Each region of memory to which access times are the same from any
cpu is called a node. On such architectures, it is beneficial if
the kernel tries to minimize inter-node communication. Schemes
for this range from replicating kernel text and read-only data
across nodes, to trying to house the data structures that key
components of the kernel need in memory on the node they run on.

Currently, all the numa support exists to provide efficient handling
of widely discontiguous physical memory, so architectures which
are not NUMA but can have huge holes in the physical address space
can use the same code. All this code is bracketed by CONFIG_DISCONTIGMEM.

The initial port includes NUMAizing the bootmem allocator code by
encapsulating all the pieces of information into a bootmem_data_t
structure. Node-specific calls have been added to the allocator.
In theory, any platform which uses the bootmem allocator should
be able to put the bootmem and mem_map data structures anywhere
it deems best.

Each node's page allocation data structures have also been encapsulated
into a pg_data_t. The bootmem_data_t is just one part of this. To
make the code look uniform between NUMA and regular UMA platforms,
UMA platforms have a statically allocated pg_data_t too (contig_page_data).
For the sake of uniformity, the function num_online_nodes() is also defined
for all platforms. As we run benchmarks, we might decide to NUMAize
more variables like low_on_memory, nr_free_pages etc into the pg_data_t.

The NUMA aware page allocation code currently tries to allocate pages
from different nodes in a round robin manner. This will be changed to
a concentric-circle search, starting from the current node, once the
NUMA port achieves more maturity. The call alloc_pages_node has been
added so that drivers can make the call and not worry about whether
they are running on a NUMA or UMA platform.
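
For instance, a driver might do the following (sketch; nid is the
target node id, and error handling is elided):

	struct page *page = alloc_pages_node(nid, GFP_KERNEL, 0);
	if (page) {
		/* use the node-local page, then release it */
		__free_pages(page, 0);
	}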
diff --git a/Documentation/vm/overcommit-accounting b/Documentation/vm/overcommit-accounting
new file mode 100644
index 000000000000..21c7b1f8f32b
--- /dev/null
+++ b/Documentation/vm/overcommit-accounting
@@ -0,0 +1,73 @@
The Linux kernel supports the following overcommit handling modes:

0	-	Heuristic overcommit handling. Obvious overcommits of
		address space are refused. Used for a typical system. It
		ensures a seriously wild allocation fails while allowing
		overcommit to reduce swap usage. root is allowed to
		allocate slightly more memory in this mode. This is the
		default.

1	-	Always overcommit. Appropriate for some scientific
		applications.

2	-	Don't overcommit. The total address space commit
		for the system is not permitted to exceed swap + a
		configurable percentage (default is 50) of physical RAM.
		Depending on the percentage you use, in most situations
		this means a process will not be killed while accessing
		pages but will receive errors on memory allocation as
		appropriate.

The overcommit policy is set via the sysctl `vm.overcommit_memory'.

The overcommit percentage is set via `vm.overcommit_ratio'.
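
For example, to select strict accounting (mode 2) with the default
50% ratio:

echo 2 > /proc/sys/vm/overcommit_memory
echo 50 > /proc/sys/vm/overcommit_ratio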

The current overcommit limit and amount committed are viewable in
/proc/meminfo as CommitLimit and Committed_AS respectively.
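
As a worked example: with 2GB of swap, 4GB of physical RAM and the
default overcommit_ratio of 50, mode 2 yields

	CommitLimit = 2GB + (50% of 4GB) = 4GB

so the system refuses new allocations once Committed_AS would exceed 4GB.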

Gotchas
-------

The C language stack growth does an implicit mremap. If you want absolute
guarantees and run close to the edge you MUST mmap your stack for the
largest size you think you will need. For typical stack usage this does
not matter much, but it's a corner case if you really, really care.

In mode 2 the MAP_NORESERVE flag is ignored.


How It Works
------------

The overcommit is based on the following rules:

For a file backed map
	SHARED or READ-only	-	0 cost (the file is the map, not swap)
	PRIVATE WRITABLE	-	size of mapping per instance

For an anonymous or /dev/zero map
	SHARED			-	size of mapping
	PRIVATE READ-only	-	0 cost (but of little use)
	PRIVATE WRITABLE	-	size of mapping per instance

Additional accounting
	Pages made writable copies by mmap
	shmfs memory drawn from the same pool

Status
------

o	We account mmap memory mappings
o	We account mprotect changes in commit
o	We account mremap changes in size
o	We account brk
o	We account munmap
o	We report the commit status in /proc
o	Account and check on fork
o	Review stack handling/building on exec
o	SHMfs accounting
o	Implement actual limit enforcement

To Do
-----
o	Account ptrace pages (this is hard)