path: root/mm
author	Joonsoo Kim <iamjoonsoo.kim@lge.com>	2013-10-16 16:46:48 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-10-17 00:35:52 -0400
commit	16c794b4f3ff38e97a71ce2472c7792f859d1022 (patch)
tree	ca123b6bd95a344c302bf77338333aa251d48f2b /mm
parent	ae39332162a837c3791bb21172d22382a90a6fd1 (diff)
mm/hugetlb.c: correct missing private flag clearing
We should clear the page's private flag when returning the page to the hugepage pool. Otherwise, a marked hugepage can be allocated to a user who asks for a non-reserved hugepage. If that user fails to map the hugepage and returns it to the hugepage pool, the stale private flag causes resv_huge_pages to be mistakenly increased. This patch fixes that situation.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
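The flag lifecycle the fix relies on can be illustrated with a minimal, self-contained userspace model (plain C, not kernel code; the struct, the helper names alloc_from_reserve/free_page_fixed/free_page_buggy, and the bare resv_huge_pages counter are simplifications assumed for this sketch, standing in for PagePrivate()/ClearPagePrivate() and the per-hstate reserve count):

#include <stdbool.h>
#include <stdio.h>

/* Toy model of a hugetlb page: "private_flag" marks a page handed out
 * from the reserved pool. */
struct hpage {
	bool private_flag;
};

static long resv_huge_pages;	/* reserved-but-unallocated hugepages */

/* Consume a reservation and mark the page as coming from the reserve. */
static void alloc_from_reserve(struct hpage *page)
{
	resv_huge_pages--;
	page->private_flag = true;
}

/* Free path with the fix: read the flag, then clear it before the page
 * goes back to the pool, mirroring the added ClearPagePrivate() call. */
static void free_page_fixed(struct hpage *page)
{
	bool restore_reserve = page->private_flag;

	page->private_flag = false;	/* the one-line fix */
	if (restore_reserve)
		resv_huge_pages++;	/* give the reservation back */
}

/* Buggy free path: the flag is read but never cleared, so it goes back
 * into the pool along with the page. */
static void free_page_buggy(struct hpage *page)
{
	bool restore_reserve = page->private_flag;

	if (restore_reserve)
		resv_huge_pages++;
}

int main(void)
{
	struct hpage page = { .private_flag = false };

	/* Buggy sequence: a reserved allocation is returned unmapped, then
	 * the same page (stale flag still set) is handed to a non-reserving
	 * user who also returns it -- the count is bumped twice. */
	resv_huge_pages = 1;
	alloc_from_reserve(&page);
	free_page_buggy(&page);		/* correct: 0 -> 1 */
	free_page_buggy(&page);		/* bogus:   1 -> 2 */
	printf("buggy: resv_huge_pages = %ld (expected 1)\n", resv_huge_pages);

	/* Fixed sequence: the flag is cleared on the first free, so the
	 * second free leaves the reservation count alone. */
	resv_huge_pages = 1;
	page.private_flag = false;
	alloc_from_reserve(&page);
	free_page_fixed(&page);		/* 0 -> 1, flag cleared */
	free_page_fixed(&page);		/* no change */
	printf("fixed: resv_huge_pages = %ld\n", resv_huge_pages);
	return 0;
}

Running the model prints an inflated count (2) for the buggy path and the expected count (1) for the fixed path, which is the resv_huge_pages drift described above.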
Diffstat (limited to 'mm')
-rw-r--r--	mm/hugetlb.c	1
1 file changed, 1 insertion, 0 deletions
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b49579c7f2a5..691f2264a6ce 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -653,6 +653,7 @@ static void free_huge_page(struct page *page)
 	BUG_ON(page_count(page));
 	BUG_ON(page_mapcount(page));
 	restore_reserve = PagePrivate(page);
+	ClearPagePrivate(page);
 
 	spin_lock(&hugetlb_lock);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),