author    Kirill A. Shutemov <kirill.shutemov@linux.intel.com>    2017-11-15 20:35:33 -0500
committer Linus Torvalds <torvalds@linux-foundation.org>    2017-11-15 21:21:04 -0500
commit    b4e98d9ac775907cc53fb08fcb6776deb7694e30 (patch)
tree      4a82caff5eab86a66f078622acfd68df5ac92235 /mm/debug.c
parent    7d6c4dfa4de96d11b9d6adaf5aa5ca8c54670258 (diff)
mm: account pud page tables
On a machine with 5-level paging support, a process can allocate a
significant amount of memory and stay unnoticed by the oom-killer and
the memory cgroup. The trick is to allocate a lot of PUD page tables:
we account PMD and PTE page tables, but not PUD.
We already addressed the same issue for PMD page tables, see commit
dc6c9a35b66b ("mm: account pmd page tables to the process").
Introduction of 5-level paging brings the same issue for PUD page
tables.
The patch expands accounting to PUD level.
[kirill.shutemov@linux.intel.com: s/pmd_t/pud_t/]
Link: http://lkml.kernel.org/r/20171004074305.x35eh5u7ybbt5kar@black.fi.intel.com
[heiko.carstens@de.ibm.com: s390/mm: fix pud table accounting]
Link: http://lkml.kernel.org/r/20171103090551.18231-1-heiko.carstens@de.ibm.com
Link: http://lkml.kernel.org/r/20171002080427.3320-1-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/debug.c')
-rw-r--r--    mm/debug.c    6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/debug.c b/mm/debug.c
index 6726bec731c9..a12d826bb774 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -105,7 +105,8 @@ void dump_mm(const struct mm_struct *mm)
 		"get_unmapped_area %p\n"
 #endif
 		"mmap_base %lu mmap_legacy_base %lu highest_vm_end %lu\n"
-		"pgd %p mm_users %d mm_count %d nr_ptes %lu nr_pmds %lu map_count %d\n"
+		"pgd %p mm_users %d mm_count %d\n"
+		"nr_ptes %lu nr_pmds %lu nr_puds %lu map_count %d\n"
 		"hiwater_rss %lx hiwater_vm %lx total_vm %lx locked_vm %lx\n"
 		"pinned_vm %lx data_vm %lx exec_vm %lx stack_vm %lx\n"
 		"start_code %lx end_code %lx start_data %lx end_data %lx\n"
@@ -136,7 +137,8 @@ void dump_mm(const struct mm_struct *mm)
 		mm->pgd, atomic_read(&mm->mm_users),
 		atomic_read(&mm->mm_count),
 		atomic_long_read((atomic_long_t *)&mm->nr_ptes),
-		mm_nr_pmds((struct mm_struct *)mm),
+		mm_nr_pmds(mm),
+		mm_nr_puds(mm),
 		mm->map_count,
 		mm->hiwater_rss, mm->hiwater_vm, mm->total_vm, mm->locked_vm,
 		mm->pinned_vm, mm->data_vm, mm->exec_vm, mm->stack_vm,