author     Jonathan Herman <hermanjl@cs.unc.edu>   2013-01-17 16:15:55 -0500
committer  Jonathan Herman <hermanjl@cs.unc.edu>   2013-01-17 16:15:55 -0500
commit     8dea78da5cee153b8af9c07a2745f6c55057fe12 (patch)
tree       a8f4d49d63b1ecc92f2fddceba0655b2472c5bd9 /Documentation/vm/unevictable-lru.txt
parent     406089d01562f1e2bf9f089fd7637009ebaad589 (diff)

Patched in Tegra support.

Diffstat (limited to 'Documentation/vm/unevictable-lru.txt')

 Documentation/vm/unevictable-lru.txt | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/Documentation/vm/unevictable-lru.txt b/Documentation/vm/unevictable-lru.txt
index a68db7692ee..97bae3c576c 100644
--- a/Documentation/vm/unevictable-lru.txt
+++ b/Documentation/vm/unevictable-lru.txt
@@ -197,8 +197,12 @@ the pages are also "rescued" from the unevictable list in the process of
 freeing them.
 
 page_evictable() also checks for mlocked pages by testing an additional page
-flag, PG_mlocked (as wrapped by PageMlocked()), which is set when a page is
-faulted into a VM_LOCKED vma, or found in a vma being VM_LOCKED.
+flag, PG_mlocked (as wrapped by PageMlocked()).  If the page is NOT mlocked,
+and a non-NULL VMA is supplied, page_evictable() will check whether the VMA is
+VM_LOCKED via is_mlocked_vma().  is_mlocked_vma() will SetPageMlocked() and
+update the appropriate statistics if the vma is VM_LOCKED.  This method allows
+efficient "culling" of pages in the fault path that are being faulted in to
+VM_LOCKED VMAs.
 
 
 VMSCAN'S HANDLING OF UNEVICTABLE PAGES
@@ -367,8 +371,8 @@ mlock_fixup() filters several classes of "special" VMAs:
 mlock_fixup() will call make_pages_present() in the hugetlbfs VMA range to
 allocate the huge pages and populate the ptes.
 
-3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
-   such as the VDSO page, relay channel pages, etc. These pages
+3) VMAs with VM_DONTEXPAND or VM_RESERVED are generally userspace mappings of
+   kernel pages, such as the VDSO page, relay channel pages, etc.  These pages
    are inherently unevictable and are not managed on the LRU lists.
    mlock_fixup() treats these VMAs the same as hugetlbfs VMAs.  It calls
    make_pages_present() to populate the ptes.
@@ -534,7 +538,7 @@ different reverse map mechanisms.
 process because mlocked pages are migratable.  However, for reclaim, if
 the page is mapped into a VM_LOCKED VMA, the scan stops.
 
-try_to_unmap_anon() attempts to acquire in read mode the mmap semaphore of
+try_to_unmap_anon() attempts to acquire in read mode the mmap semphore of
 the mm_struct to which the VMA belongs.  If this is successful, it will
 mlock the page via mlock_vma_page() - we wouldn't have gotten to
 try_to_unmap_anon() if the page were already mlocked - and will return
@@ -615,11 +619,11 @@ all PTEs from the page.  For this purpose, the unevictable/mlock infrastructure
 introduced a variant of try_to_unmap() called try_to_munlock().
 
 try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
-mapped file pages with an additional argument specifying unlock versus unmap
+mapped file pages with an additional argument specifing unlock versus unmap
 processing.  Again, these functions walk the respective reverse maps looking
 for VM_LOCKED VMAs.  When such a VMA is found for anonymous pages and file
 pages mapped in linear VMAs, as in the try_to_unmap() case, the functions
-attempt to acquire the associated mmap semaphore, mlock the page via
+attempt to acquire the associated mmap semphore, mlock the page via
 mlock_vma_page() and return SWAP_MLOCK.  This effectively undoes the
 pre-clearing of the page's PG_mlocked done by munlock_vma_page.
@@ -637,7 +641,7 @@ with it - the usual fallback position.
 Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
 reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
 However, the scan can terminate when it encounters a VM_LOCKED VMA and can
-successfully acquire the VMA's mmap semaphore for read and mlock the page.
+successfully acquire the VMA's mmap semphore for read and mlock the page.
 Although try_to_munlock() might be called a great many times when munlocking a
 large region or tearing down a large address space that has been mlocked via
 mlockall(), overall this is a fairly rare event.
@@ -647,7 +651,7 @@ PAGE RECLAIM IN shrink_*_list()
 -------------------------------
 
 shrink_active_list() culls any obviously unevictable pages - i.e.
-!page_evictable(page) - diverting these to the unevictable list.
+!page_evictable(page, NULL) - diverting these to the unevictable list.
 However, shrink_active_list() only sees unevictable pages that made it onto the
 active/inactive lru lists.  Note that these pages do not have PageUnevictable
 set - otherwise they would be on the unevictable list and shrink_active_list