author    Dave Hansen <dave.hansen@intel.com>  2014-01-23 18:52:47 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>  2014-01-23 19:36:50 -0500
commit    57ea8171d2bc245e22e760d2e4292da8e5a15175 (patch)
tree      1be4c08906126c6490b369300f51e5283cd9b472 /Documentation/vm
parent    372c7209d6a05130b9d867f7ba350dec19e54030 (diff)
mm: documentation: remove hopelessly out-of-date locking doc
Documentation/vm/locking is a blast from the past.  In the entire git
history, it has had precisely three modifications.  Two of those look to
be pure renames, and the third was from 2005.

The doc contains such gems as:

> The page_table_lock is grabbed while holding the
> kernel_lock spinning monitor.

> Page stealers hold kernel_lock to protect against a bunch of
> races.

Or this, which talks about mmap_sem:

> 4. The exception to this rule is expand_stack, which just
>    takes the read lock and the page_table_lock, this is ok
>    because it doesn't really modify fields anybody relies on.

expand_stack() doesn't take any locks directly any more, and the
mmap_sem acquisition was long ago moved up into the page fault code
itself.  It could be argued that we need to rewrite this, but it is
dangerous to leave it as-is.  It will confuse more people than it helps.

Signed-off-by: Dave Hansen <dave.hansen@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
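[Editor's illustration of the point above, a minimal hedged sketch rather than literal kernel source: in 3.x-era kernels the architecture page-fault handler (e.g. arch/x86/mm/fault.c) takes mmap_sem itself, so expand_stack() already runs under its caller's lock.  The function name fault_path_sketch is made up; down_read()/up_read(), find_vma(), expand_stack() and handle_mm_fault() are real interfaces of that era.]

/*
 * Hedged sketch, not literal kernel source: roughly how a 3.x-era
 * arch fault handler acquires mmap_sem before resolving the fault.
 */
#include <linux/mm.h>
#include <linux/sched.h>

static int fault_path_sketch(struct mm_struct *mm, unsigned long address,
			     unsigned int flags)
{
	struct vm_area_struct *vma;
	int fault = VM_FAULT_SIGBUS;	/* simplified error path */

	down_read(&mm->mmap_sem);	/* acquired here, not inside expand_stack() */
	vma = find_vma(mm, address);
	if (!vma)
		goto out;
	if (vma->vm_start > address) {
		/* Stack growth: expand_stack() is called under mmap_sem. */
		if (!(vma->vm_flags & VM_GROWSDOWN) || expand_stack(vma, address))
			goto out;
	}
	fault = handle_mm_fault(mm, vma, address, flags);
out:
	up_read(&mm->mmap_sem);
	return fault;
}

In other words, the read lock that the removed text attributes to expand_stack() is, in current kernels, the fault handler's responsibility.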
Diffstat (limited to 'Documentation/vm')
-rw-r--r--  Documentation/vm/locking | 130
1 file changed, 0 insertions(+), 130 deletions(-)
diff --git a/Documentation/vm/locking b/Documentation/vm/locking
deleted file mode 100644
index f61228bd6395..000000000000
--- a/Documentation/vm/locking
+++ /dev/null
@@ -1,130 +0,0 @@
Started Oct 1999 by Kanoj Sarcar <kanojsarcar@yahoo.com>

The intent of this file is to have an uptodate, running commentary
from different people about how locking and synchronization is done
in the Linux vm code.

page_table_lock & mmap_sem
--------------------------------------

Page stealers pick processes out of the process pool and scan for
the best process to steal pages from. To guarantee the existence
of the victim mm, a mm_count inc and a mmdrop are done in swap_out().
Page stealers hold kernel_lock to protect against a bunch of races.
The vma list of the victim mm is also scanned by the stealer,
and the page_table_lock is used to preserve list sanity against the
process adding/deleting to the list. This also guarantees existence
of the vma. Vma existence is not guaranteed once try_to_swap_out()
drops the page_table_lock. To guarantee the existence of the underlying
file structure, a get_file is done before the swapout() method is
invoked. The page passed into swapout() is guaranteed not to be reused
for a different purpose because the page reference count due to being
present in the user's pte is not released till after swapout() returns.

Any code that modifies the vmlist, or the vm_start/vm_end/
vm_flags:VM_LOCKED/vm_next of any vma *in the list* must prevent
kswapd from looking at the chain.

The rules are:
1. To scan the vmlist (look but don't touch) you must hold the
   mmap_sem with read bias, i.e. down_read(&mm->mmap_sem)
2. To modify the vmlist you need to hold the mmap_sem with
   read&write bias, i.e. down_write(&mm->mmap_sem) *AND*
   you need to take the page_table_lock.
3. The swapper takes _just_ the page_table_lock, this is done
   because the mmap_sem can be an extremely long lived lock
   and the swapper just cannot sleep on that.
4. The exception to this rule is expand_stack, which just
   takes the read lock and the page_table_lock, this is ok
   because it doesn't really modify fields anybody relies on.
5. You must be able to guarantee that while holding page_table_lock
   or page_table_lock of mm A, you will not try to get either lock
   for mm B.

The caveats are:
1. find_vma() makes use of, and updates, the mmap_cache pointer hint.
The update of mmap_cache is racy (page stealer can race with other code
that invokes find_vma with mmap_sem held), but that is okay, since it
is a hint. This can be fixed, if desired, by having find_vma grab the
page_table_lock.


Code that add/delete elements from the vmlist chain are
1. callers of insert_vm_struct
2. callers of merge_segments
3. callers of avl_remove

Code that changes vm_start/vm_end/vm_flags:VM_LOCKED of vma's on
the list:
1. expand_stack
2. mprotect
3. mlock
4. mremap

It is advisable that changes to vm_start/vm_end be protected, although
in some cases it is not really needed. Eg, vm_start is modified by
expand_stack(), it is hard to come up with a destructive scenario without
having the vmlist protection in this case.

The page_table_lock nests with the inode i_mmap_mutex and the kmem cache
c_spinlock spinlocks. This is okay, since the kmem code asks for pages after
dropping c_spinlock. The page_table_lock also nests with pagecache_lock and
pagemap_lru_lock spinlocks, and no code asks for memory with these locks
held.

The page_table_lock is grabbed while holding the kernel_lock spinning monitor.

The page_table_lock is a spin lock.

Note: PTL can also be used to guarantee that no new clones using the
mm start up ... this is a loose form of stability on mm_users. For
example, it is used in copy_mm to protect against a racing tlb_gather_mmu
single address space optimization, so that the zap_page_range (from
truncate) does not lose sending ipi's to cloned threads that might
be spawned underneath it and go to user mode to drag in pte's into tlbs.

swap_lock
--------------
The swap devices are chained in priority order from the "swap_list" header.
The "swap_list" is used for the round-robin swaphandle allocation strategy.
The #free swaphandles is maintained in "nr_swap_pages". These two together
are protected by the swap_lock.

The swap_lock also protects all the device reference counts on the
corresponding swaphandles, maintained in the "swap_map" array, and the
"highest_bit" and "lowest_bit" fields.

The swap_lock is a spinlock, and is never acquired from intr level.

To prevent races between swap space deletion or async readahead swapins
deciding whether a swap handle is being used, ie worthy of being read in
from disk, and an unmap -> swap_free making the handle unused, the swap
delete and readahead code grabs a temp reference on the swaphandle to
prevent warning messages from swap_duplicate <- read_swap_cache_async.

Swap cache locking
------------------
Pages are added into the swap cache with kernel_lock held, to make sure
that multiple pages are not being added (and hence lost) by associating
all of them with the same swaphandle.

Pages are guaranteed not to be removed from the scache if the page is
"shared": ie, other processes hold reference on the page or the associated
swap handle. The only code that does not follow this rule is shrink_mmap,
which deletes pages from the swap cache if no process has a reference on
the page (multiple processes might have references on the corresponding
swap handle though). lookup_swap_cache() races with shrink_mmap, when
establishing a reference on a scache page, so, it must check whether the
page it located is still in the swapcache, or shrink_mmap deleted it.
(This race is due to the fact that shrink_mmap looks at the page ref
count with pagecache_lock, but then drops pagecache_lock before deleting
the page from the scache).

do_wp_page and do_swap_page have MP races in them while trying to figure
out whether a page is "shared", by looking at the page_count + swap_count.
To preserve the sum of the counts, the page lock _must_ be acquired before
calling is_page_shared (else processes might switch their swap_count refs
to the page count refs, after the page count ref has been snapshotted).

Swap device deletion code currently breaks all the scache assumptions,
since it grabs neither mmap_sem nor page_table_lock.
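
[Editor's note on rules 1 and 2 of the removed text: the mmap_sem read/write discipline they describe is the one part still recognizable in 2014-era kernels (the kernel_lock and much of the page_table_lock discussion is what rotted).  A minimal hedged sketch, assuming 3.x-era mm interfaces; the helper names walk_vmas_example and insert_vma_example are hypothetical, while down_read()/down_write(), mm->mmap, vma->vm_next and insert_vm_struct() are real interfaces of that era.]

/*
 * Hedged sketch, not literal kernel source: the down_read()/down_write()
 * discipline from rules 1-2 of the removed document.
 */
#include <linux/mm.h>
#include <linux/rwsem.h>

static void walk_vmas_example(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	down_read(&mm->mmap_sem);	/* rule 1: look but don't touch */
	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		/* read-only inspection of vma->vm_start, vm_end, vm_flags */
	}
	up_read(&mm->mmap_sem);
}

static int insert_vma_example(struct mm_struct *mm, struct vm_area_struct *new)
{
	int ret;

	down_write(&mm->mmap_sem);	/* rule 2: exclusive access to change the list */
	ret = insert_vm_struct(mm, new);	/* links 'new' into mm's VMA list */
	up_write(&mm->mmap_sem);
	return ret;
}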