author	Shakeel Butt <shakeelb@google.com>	2017-11-15 20:38:26 -0500
committer	Linus Torvalds <torvalds@linux-foundation.org>	2017-11-15 21:21:07 -0500
commit	72b03fcd5d515441d4aefcad01c1c4392c8099c9 (patch)
tree	d96a20e332ed3f147ef4431c5e410205fc376e9a	/mm/mlock.c
parent	4518085e127dff97e74f74a8780d7564e273bec8 (diff)
mm: mlock: remove lru_add_drain_all()
lru_add_drain_all() is not required by mlock() and it will drain everything
that has been cached at the time mlock is called.  And that is not really
related to the memory which will be faulted in (and cached) and mlocked by
the syscall itself.

If anything lru_add_drain_all() should be called _after_ pages have been
mlocked and faulted in but even that is not strictly needed because those
pages would get to the appropriate LRUs lazily during the reclaim path.
Moreover follow_page_pte (gup) will drain the local pcp LRU cache.

On larger machines the overhead of lru_add_drain_all() in mlock() can be
significant when mlocking data already in memory.  We have observed high
latency in mlock() due to lru_add_drain_all() when the users were mlocking
in memory tmpfs files.

[mhocko@suse.com: changelog fix]
Link: http://lkml.kernel.org/r/20171019222507.2894-1-shakeelb@google.com
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Yisheng Xie <xieyisheng1@huawei.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/mlock.c')
-rw-r--r--	mm/mlock.c	5
1 file changed, 0 insertions(+), 5 deletions(-)
diff --git a/mm/mlock.c b/mm/mlock.c
index ed37cb208d19..30472d438794 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -670,8 +670,6 @@ static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t fla
 	if (!can_do_mlock())
 		return -EPERM;
 
-	lru_add_drain_all();	/* flush pagevec */
-
 	len = PAGE_ALIGN(len + (offset_in_page(start)));
 	start &= PAGE_MASK;
 
@@ -798,9 +796,6 @@ SYSCALL_DEFINE1(mlockall, int, flags)
 	if (!can_do_mlock())
 		return -EPERM;
 
-	if (flags & MCL_CURRENT)
-		lru_add_drain_all();	/* flush pagevec */
-
 	lock_limit = rlimit(RLIMIT_MEMLOCK);
 	lock_limit >>= PAGE_SHIFT;
 