Commit message    Author    Age
* drivers/lguest/page_tables.c: rename do_set_pte()  (Andrew Morton, 2014-04-07)
"mm: introduce vm_ops->map_pages()" wants to export a do_set_pte() from core kernel.  Rename lguest's do_set_pte() to something more lguest-specific.

Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* tools/vm/page-types.c: page-cache sniffing feature  (Konstantin Khlebnikov, 2014-04-07)
After this patch 'page-types' can walk over a file's mappings and analyze populated page cache pages, mostly without disturbing their state.  It maps a chunk of the file, marks the VMA as MADV_RANDOM to turn off readahead, pokes the VMA via mincore() to determine which pages are cached, triggers page faults only for those, and finally gathers information via pagemap/kpageflags.  Before unmapping, it marks the VMA as MADV_SEQUENTIAL so that reference bits are ignored.

usage: page-types -f <path>

If <path> is a directory it will analyse all files in all subdirectories.  Neither symlinks nor mount points are followed.  Hardlinks aren't handled; they'll be dumped as many times as they are found.

The recursive walk brings all dentries into the dcache and populates the page cache of block devices, aka 'Buffers'.

It's probably worth adding an ioctl for dumping a file's page cache as an array of PFNs, as a replacement for this hackish juggling with mmap/madvise/mincore/pagemap.  The recursive walk could also be replaced with dumping cached inodes via some ioctl or debugfs interface, followed by opening them via open_by_handle_at(); this would fix hardlink handling and the unneeded population of dcache and buffers.  This interface might be used as a data source for constructing readahead plans and for background optimization of actively used files.

collateral changes:
+ fix 64-bit LFS: define _FILE_OFFSET_BITS instead of _LARGEFILE64_SOURCE
+ replace lseek + read with a single pread
+ make show_page_range() reusable after flush

usage example:

  ~/src/linux/tools/vm$ sudo ./page-types -L -f page-types
  foffset  offset  flags
  page-types   Inode: 2229277   Size: 89065 (22 pages)
  Modify: Tue Feb 25 12:00:59 2014 (162 seconds ago)
  Access: Tue Feb 25 12:01:00 2014 (161 seconds ago)
  0   3cbf3b  __RU_lA____M________________________
  1   38946a  __RU_lA____M________________________
  2   1a3cec  __RU_lA____M________________________
  3   1a8321  __RU_lA____M________________________
  4   3af7cc  __RU_lA____M________________________
  5   1ed532  __RU_lA_____________________________
  6   2e436a  __RU_lA_____________________________
  7   29a35e  ___U_lA_____________________________
  8   2de86e  ___U_lA_____________________________
  9   3bdfb4  ___U_lA_____________________________
  10  3cd8a3  ___U_lA_____________________________
  11  2afa50  ___U_lA_____________________________
  12  2534c2  ___U_lA_____________________________
  13  1b7a40  ___U_lA_____________________________
  14  17b0be  ___U_lA_____________________________
  15  392b0c  ___U_lA_____________________________
  16  3ba46a  __RU_lA_____________________________
  17  397dc8  ___U_lA_____________________________
  18  1f2a36  ___U_lA_____________________________
  19  21fd30  __RU_lA_____________________________
  20  2c35ba  __RU_l______________________________
  21  20f181  __RU_l______________________________

               flags  page-count  MB  symbolic-flags                        long-symbolic-flags
  0x000000000000002c           2   0  __RU_l______________________________  referenced,uptodate,lru
  0x0000000000000068          11   0  ___U_lA_____________________________  uptodate,lru,active
  0x000000000000006c           4   0  __RU_lA_____________________________  referenced,uptodate,lru,active
  0x000000000000086c           5   0  __RU_lA____M________________________  referenced,uptodate,lru,active,mmap
               total          22   0

  ~/src/linux/tools/vm$ sudo ./page-types -f /
               flags  page-count   MB  symbolic-flags                        long-symbolic-flags
  0x0000000000000028       21761   85  ___U_l______________________________  uptodate,lru
  0x000000000000002c      127279  497  __RU_l______________________________  referenced,uptodate,lru
  0x0000000000000068       74160  289  ___U_lA_____________________________  uptodate,lru,active
  0x000000000000006c       84469  329  __RU_lA_____________________________  referenced,uptodate,lru,active
  0x000000000000007c           1    0  __RUDlA_____________________________  referenced,uptodate,dirty,lru,active
  0x0000000000000228         370    1  ___U_l___I__________________________  uptodate,lru,reclaim
  0x0000000000000828          49    0  ___U_l_____M________________________  uptodate,lru,mmap
  0x000000000000082c         126    0  __RU_l_____M________________________  referenced,uptodate,lru,mmap
  0x0000000000000868         137    0  ___U_lA____M________________________  uptodate,lru,active,mmap
  0x000000000000086c       12890   50  __RU_lA____M________________________  referenced,uptodate,lru,active,mmap
               total      321242 1254

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

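A minimal userspace sketch of the sniffing sequence described above (error handling omitted; the function and variable names are illustrative, not the tool's actual code):

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Probe which pages of an fd-backed range are already cached, and
     * touch only those pages so their PFNs can later be read from
     * /proc/self/pagemap and matched against /proc/kpageflags. */
    static void sniff_chunk(int fd, off_t off, size_t len)
    {
            size_t page = getpagesize();
            size_t i, pages = (len + page - 1) / page;
            unsigned char *vec = calloc(pages, 1);
            char *map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);

            madvise(map, len, MADV_RANDOM);         /* turn off readahead */
            mincore(map, len, vec);                 /* which pages are cached? */
            for (i = 0; i < pages; i++)
                    if (vec[i] & 1)
                            (void)*(volatile char *)(map + i * page);
            /* ... gather data from pagemap/kpageflags here ... */
            madvise(map, len, MADV_SEQUENTIAL);     /* ignore reference bits */
            munmap(map, len);
            free(vec);
    }
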
* mm: disable split page table lock for !MMU  (Kirill A. Shutemov, 2014-04-07)
There's no reason to enable the split page table lock if we don't have page tables.  It also triggers a build error, at least on ARM, since we don't define pmd_page() for !MMU:

  In file included from arch/arm/kernel/asm-offsets.c:14:0:
  include/linux/mm.h: In function 'pte_lockptr':
  include/linux/mm.h:1392:2: error: implicit declaration of function 'pmd_page' [-Werror=implicit-function-declaration]
  include/linux/mm.h:1392:2: warning: passing argument 1 of 'ptlock_ptr' makes pointer from integer without a cast [enabled by default]
  include/linux/mm.h:1384:27: note: expected 'struct page *' but argument is of type 'int'

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

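For reference, the helper the error output points at looks roughly like this in that era's include/linux/mm.h; with !MMU there is no pmd_page(), so the call cannot compile (a paraphrase based on the error messages above, not an exact quote of the header):

    static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
    {
            /* needs pmd_page(), which is not defined for !MMU builds */
            return ptlock_ptr(pmd_page(*pmd));
    }
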
* exec: kill the unnecessary mm->def_flags setting in load_elf_binary()  (Alex Thorlton, 2014-04-07)
load_elf_binary() sets current->mm->def_flags = def_flags, and def_flags is always zero.  Not only does this look strange, it is also unnecessary, because mm_init() has already set ->def_flags = 0.

Signed-off-by: Alex Thorlton <athorlton@sgi.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm, thp: add VM_INIT_DEF_MASK and PRCTL_THP_DISABLE  (Alex Thorlton, 2014-04-07)
Add VM_INIT_DEF_MASK, which allows us to set the default flags for VMAs, and a prctl control which sets the THP-disable bit in mm->def_flags so that VMAs pick up the setting as they are created.

Signed-off-by: Alex Thorlton <athorlton@sgi.com>
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

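A hedged userspace sketch of how such a prctl would be used; the PR_SET_THP_DISABLE/PR_GET_THP_DISABLE names and values are taken from later mainline kernels and are an assumption here, not part of this commit's text:

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_THP_DISABLE
    #define PR_SET_THP_DISABLE 41   /* assumption: mainline value */
    #endif
    #ifndef PR_GET_THP_DISABLE
    #define PR_GET_THP_DISABLE 42   /* assumption: mainline value */
    #endif

    int main(void)
    {
            /* Disable THP for this mm; children created afterwards inherit it. */
            if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0))
                    perror("prctl");
            printf("thp disabled: %d\n", prctl(PR_GET_THP_DISABLE, 0, 0, 0, 0));
            return 0;
    }
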
* mm: revert "thp: make MADV_HUGEPAGE check for mm->def_flags"Alex Thorlton2014-04-07
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The main motivation behind this patch is to provide a way to disable THP for jobs where the code cannot be modified, and using a malloc hook with madvise is not an option (i.e. statically allocated data). This patch allows us to do just that, without affecting other jobs running on the system. We need to do this sort of thing for jobs where THP hurts performance, due to the possibility of increased remote memory accesses that can be created by situations such as the following: When you touch 1 byte of an untouched, contiguous 2MB chunk, a THP will be handed out, and the THP will be stuck on whatever node the chunk was originally referenced from. If many remote nodes need to do work on that same chunk, they'll be making remote accesses. With THP disabled, 4K pages can be handed out to separate nodes as they're needed, greatly reducing the amount of remote accesses to memory. This patch is based on some of my work combined with some suggestions/patches given by Oleg Nesterov. The main goal here is to add a prctl switch to allow us to disable to THP on a per mm_struct basis. Here's a bit of test data with the new patch in place... First with the flag unset: # perf stat -a ./prctl_wrapper_mmv3 0 ./thp_pthread -C 0 -m 0 -c 512 -b 256g Setting thp_disabled for this task... thp_disable: 0 Set thp_disabled state to 0 Process pid = 18027 PF/ MAX MIN TOTCPU/ TOT_PF/ TOT_PF/ WSEC/ TYPE: CPUS WALL WALL SYS USER TOTCPU CPU WALL_SEC SYS_SEC CPU NODES 512 1.120 0.060 0.000 0.110 0.110 0.000 28571428864 -9223372036854775808 55803572 23 Performance counter stats for './prctl_wrapper_mmv3_hack 0 ./thp_pthread -C 0 -m 0 -c 512 -b 256g': 273719072.841402 task-clock # 641.026 CPUs utilized [100.00%] 1,008,986 context-switches # 0.000 M/sec [100.00%] 7,717 CPU-migrations # 0.000 M/sec [100.00%] 1,698,932 page-faults # 0.000 M/sec 355,222,544,890,379 cycles # 1.298 GHz [100.00%] 536,445,412,234,588 stalled-cycles-frontend # 151.02% frontend cycles idle [100.00%] 409,110,531,310,223 stalled-cycles-backend # 115.17% backend cycles idle [100.00%] 148,286,797,266,411 instructions # 0.42 insns per cycle # 3.62 stalled cycles per insn [100.00%] 27,061,793,159,503 branches # 98.867 M/sec [100.00%] 1,188,655,196 branch-misses # 0.00% of all branches 427.001706337 seconds time elapsed Now with the flag set: # perf stat -a ./prctl_wrapper_mmv3 1 ./thp_pthread -C 0 -m 0 -c 512 -b 256g Setting thp_disabled for this task... 
thp_disable: 1 Set thp_disabled state to 1 Process pid = 144957 PF/ MAX MIN TOTCPU/ TOT_PF/ TOT_PF/ WSEC/ TYPE: CPUS WALL WALL SYS USER TOTCPU CPU WALL_SEC SYS_SEC CPU NODES 512 0.620 0.260 0.250 0.320 0.570 0.001 51612901376 128000000000 100806448 23 Performance counter stats for './prctl_wrapper_mmv3_hack 1 ./thp_pthread -C 0 -m 0 -c 512 -b 256g': 138789390.540183 task-clock # 641.959 CPUs utilized [100.00%] 534,205 context-switches # 0.000 M/sec [100.00%] 4,595 CPU-migrations # 0.000 M/sec [100.00%] 63,133,119 page-faults # 0.000 M/sec 147,977,747,269,768 cycles # 1.066 GHz [100.00%] 200,524,196,493,108 stalled-cycles-frontend # 135.51% frontend cycles idle [100.00%] 105,175,163,716,388 stalled-cycles-backend # 71.07% backend cycles idle [100.00%] 180,916,213,503,160 instructions # 1.22 insns per cycle # 1.11 stalled cycles per insn [100.00%] 26,999,511,005,868 branches # 194.536 M/sec [100.00%] 714,066,351 branch-misses # 0.00% of all branches 216.196778807 seconds time elapsed As with previous versions of the patch, We're getting about a 2x performance increase here. Here's a link to the test case I used, along with the little wrapper to activate the flag: http://oss.sgi.com/projects/memtests/thp_pthread_mmprctlv3.tar.gz This patch (of 3): Revert commit 8e72033f2a48 and add in code to fix up any issues caused by the revert. The revert is necessary because hugepage_madvise would return -EINVAL when VM_NOHUGEPAGE is set, which will break subsequent chunks of this patch set. Here's a snip of an e-mail from Gerald detailing the original purpose of this code, and providing justification for the revert: "The intent of commit 8e72033f2a48 was to guard against any future programming errors that may result in an madvice(MADV_HUGEPAGE) on guest mappings, which would crash the kernel. Martin suggested adding the bit to arch/s390/mm/pgtable.c, if 8e72033f2a48 was to be reverted, because that check will also prevent a kernel crash in the case described above, it will now send a SIGSEGV instead. This would now also allow to do the madvise on other parts, if needed, so it is a more flexible approach. One could also say that it would have been better to do it this way right from the beginning..." Signed-off-by: Alex Thorlton <athorlton@sgi.com> Suggested-by: Oleg Nesterov <oleg@redhat.com> Tested-by: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Rik van Riel <riel@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm/compaction: clean-up code on success of ballon isolation  (Joonsoo Kim, 2014-04-07)
This is just a clean-up to reduce code size and improve readability; there is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm/compaction: check pageblock suitability once per pageblock  (Joonsoo Kim, 2014-04-07)
isolation_suitable() and migrate_async_suitable() are used to be sure that a pageblock range is fine to be migrated.  They don't need to be called on every page.

The current code does well when a pageblock is not suitable, but not when it is:

1) It re-checks isolation_suitable() on each page of a pageblock that was already established as suitable.
2) It re-checks migrate_async_suitable() on each page of a pageblock that was not entered through the next_pageblock: label, because last_pageblock_nr is not otherwise updated.

This patch fixes the situation by 1) calling isolation_suitable() only once per pageblock and 2) always updating last_pageblock_nr to the pageblock that was just checked.

Additionally, move the PageBuddy() check after the pageblock unit check, since the pageblock check is the first thing we should do and this makes things simpler.

[vbabka@suse.cz: rephrase commit description]
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm/compaction: change the timing to check to drop the spinlock  (Joonsoo Kim, 2014-04-07)
It is odd to drop the spinlock when we scan the (SWAP_CLUSTER_MAX - 1)th pfn page.  This may result in the following situation while isolating migratepages:

1. Try to isolate pfn pages 0x0 ~ 0x200.
2. When low_pfn is 0x1ff, ((low_pfn + 1) % SWAP_CLUSTER_MAX) == 0, so drop the spinlock.
3. Then, to complete the isolation, retry to acquire the lock.

It is better to use the SWAP_CLUSTER_MAX'th pfn as the criterion for dropping the lock.  This does no harm for the 0x0 pfn, because at that point the 'locked' variable is still false.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

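The timing change amounts to moving the modulo check from (low_pfn + 1) to low_pfn, roughly as follows (a sketch based on the description above; the lock and variable names approximate the compaction code of that period):

    /* before: lock dropped on the (SWAP_CLUSTER_MAX - 1)th pfn */
    if (!((low_pfn + 1) % SWAP_CLUSTER_MAX) && locked) {
            spin_unlock_irqrestore(&zone->lru_lock, flags);
            locked = false;
    }

    /* after: lock dropped on the SWAP_CLUSTER_MAX'th pfn; at pfn 0
     * 'locked' is still false, so nothing is dropped there */
    if (!(low_pfn % SWAP_CLUSTER_MAX) && locked) {
            spin_unlock_irqrestore(&zone->lru_lock, flags);
            locked = false;
    }
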
* mm/compaction: do not call suitable_migration_target() on every page  (Joonsoo Kim, 2014-04-07)
suitable_migration_target() checks whether a pageblock is suitable as a migration target.  In isolate_freepages_block() it is called on every page, which is inefficient, so make it be called once per pageblock.

suitable_migration_target() also checks whether a page is high-order or not, but its criterion for high-order is the pageblock order, so calling it once within a pageblock range causes no problem.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm/compaction: disallow high-order page for migration target  (Joonsoo Kim, 2014-04-07)
The purpose of compaction is to obtain a high-order page.  Currently, if we find a high-order page while searching for migration target pages, we break it into order-0 pages and use them as migration targets.  That is contrary to the purpose of compaction, so disallow high-order pages from being used as migration targets.

Additionally, clean up the logic in suitable_migration_target() to simplify the code.  There are no functional changes from this clean-up.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

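Under these two rules the check reduces to something like the following (a sketch of the simplified suitable_migration_target(), assuming the usual pageblock helpers; not a verbatim quote of the patch):

    static bool suitable_migration_target(struct page *page)
    {
            /* A free page of pageblock order (or larger) is exactly what
             * compaction is trying to produce; don't break it up. */
            if (PageBuddy(page) && page_order(page) >= pageblock_order)
                    return false;

            /* Otherwise only movable (or CMA) pageblocks are acceptable
             * as a source of free target pages. */
            if (migrate_async_suitable(get_pageblock_migratetype(page)))
                    return true;

            return false;
    }
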
* mm: exclude memoryless nodes from zone_reclaim  (Michal Hocko, 2014-04-07)
We had a report about strange OOM killer strikes on a PPC machine, although there was plenty of free swap and tons of anonymous memory which could have been swapped out.  In the end it turned out that the OOM was a side effect of zone reclaim, which wasn't unmapping and swapping out, and so the system was pushed to OOM.

Although this sounds like a bug somewhere in the kswapd vs. zone reclaim vs. direct reclaim interaction, numactl on the said hardware suggests that zone reclaim should not have been enabled in the first place:

  node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
  node 0 size: 0 MB
  node 0 free: 0 MB
  node 2 cpus:
  node 2 size: 7168 MB
  node 2 free: 6019 MB
  node distances:
  node   0   2
    0:  10  40
    2:  40  10

So all the CPUs are associated with Node0, which doesn't have any memory, while Node2 contains all the available memory.  The node distances cause zone_reclaim_mode to be enabled automatically.

Zone reclaim is intended to keep allocations local, but this doesn't make any sense on memoryless nodes.  So let's exclude such nodes in init_zone_allows_reclaim(), which evaluates the zone reclaim behaviour and the suitable reclaim_nodes.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
Tested-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

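The exclusion boils down to walking only nodes that actually have memory when deciding whether distant nodes should flip zone_reclaim_mode on.  A sketch of that idea, assuming the N_MEMORY node-state helpers (the exact body of init_zone_allows_reclaim() in the patch may differ):

    static void __paginginit init_zone_allows_reclaim(int nid)
    {
            int i;

            /* Only nodes with memory matter here; a distant but memoryless
             * node (like Node 0 above) must not enable zone reclaim. */
            for_each_node_state(i, N_MEMORY)
                    if (node_distance(nid, i) <= RECLAIM_DISTANCE)
                            node_set(i, NODE_DATA(nid)->reclaim_nodes);
                    else
                            zone_reclaim_mode = 1;
    }
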
* mm/memory.c: update comment in unmap_single_vma()  (Davidlohr Bueso, 2014-04-07)
The described issue now occurs inside mmap_region(), and unfortunately it is still valid.

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm/vmscan: do not check compaction_ready on promoted zones  (Weijie Yang, 2014-04-07)
We abort direct reclaim if we find the zone is ready for compaction.  Sometimes the zone is just a promoted highmem zone, included to force a scan of highmem, and is not the zone the caller actually wants to allocate a page from.  In this situation, setting aborted_reclaim to indicate that the caller should turn back and retry the allocation is a waste of time and could cause a loop in __alloc_pages_slowpath().

This patch does not check compaction_ready() on promoted zones, to avoid the above situation.  aborted_reclaim is only set if the caller's intended zone is ready for compaction.

Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm/vmscan: restore sc->gfp_mask after promoting it to __GFP_HIGHMEM  (Weijie Yang, 2014-04-07)
We promote sc->gfp_mask to __GFP_HIGHMEM to forcibly scan highmem if there are too many buffer_heads pinning highmem.  See cc715d99e5 ("mm: vmscan: forcibly scan highmem if there are too many buffer_heads pinning highmem").

This patch restores sc->gfp_mask to its caller's original value after the scan job finishes, to avoid impacting other invocations from its upper callers, such as vmpressure_prio() and shrink_slab().

Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

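The fix follows a plain save/restore pattern around the highmem promotion (a sketch of the idea only; the exact placement within the vmscan call chain may differ from the actual patch):

    gfp_t orig_mask = sc->gfp_mask;

    /* Temporarily widen the mask so highmem zones are scanned ... */
    if (buffer_heads_over_limit)
            sc->gfp_mask |= __GFP_HIGHMEM;

    /* ... perform the scan ... */

    /* ... and put the caller's mask back before returning, so that
     * vmpressure_prio(), shrink_slab(), etc. see the original value. */
    sc->gfp_mask = orig_mask;
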
* mm: move mmu notifier call from change_protection to change_pmd_range  (Rik van Riel, 2014-04-07)
The NUMA scanning code can end up iterating over many gigabytes of unpopulated memory, especially in the case of a freshly started KVM guest with lots of memory.  This results in the mmu notifier code being called even when there are no mapped pages in a virtual address range.  The amount of time wasted can be enough to trigger soft lockup warnings with very large KVM guests.

This patch moves the mmu notifier call to the pmd level, which represents 1GB areas of memory on x86-64.  Furthermore, the mmu notifier code is only called from the address in the PMD where present mappings are first encountered.

The hugetlbfs code is left alone for now; hugetlb mappings are not relocatable, and as such are left alone by the NUMA code, and should never trigger this problem to begin with.

Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Xing Gang <gang.xing@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm: numa: recheck for transhuge pages under lock during protection changes  (Mel Gorman, 2014-04-07)
Sasha reported the following bug using trinity:

  kernel BUG at mm/mprotect.c:149!
  invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
  Dumping ftrace buffer:
     (ftrace buffer empty)
  Modules linked in:
  CPU: 20 PID: 26219 Comm: trinity-c216 Tainted: G        W     3.14.0-rc5-next-20140305-sasha-00011-ge06f5f3-dirty #105
  task: ffff8800b6c80000 ti: ffff880228436000 task.ti: ffff880228436000
  RIP: change_protection_range+0x3b3/0x500
  Call Trace:
    change_protection+0x25/0x30
    change_prot_numa+0x1b/0x30
    task_numa_work+0x279/0x360
    task_work_run+0xae/0xf0
    do_notify_resume+0x8e/0xe0
    retint_signal+0x4d/0x92

The VM_BUG_ON was added in -mm by the patch "mm,numa: reorganize change_pmd_range".  The race existed without the patch but was just harder to hit.

The problem is that a transhuge check is made without holding the PTL.  It's possible at the time of the check that a parallel fault clears the pmd and inserts a new one, which then triggers the VM_BUG_ON check.  This patch removes the VM_BUG_ON but fixes the race by rechecking transhuge under the PTL when marking page tables for NUMA hinting, and bailing if a race occurred.  It is not a problem for calls to mprotect() as they hold mmap_sem for write.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm,numa: reorganize change_pmd_range()  (Rik van Riel, 2014-04-07)
Reorganize the order of ifs in change_pmd_range a little, in preparation for the next patch.

[akpm@linux-foundation.org: fix indenting, per David]
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Xing Gang <gang.xing@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* mm/hugetlb.c: add NULL check of return value of huge_pte_offset  (Naoya Horiguchi, 2014-04-07)
huge_pte_offset() can return NULL, so we need a NULL check to avoid potential NULL pointer dereferences.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

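The pattern being added is simply the following (illustrative only; the actual call sites are in mm/hugetlb.c):

    pte_t *pte = huge_pte_offset(mm, address);

    /* huge_pte_offset() returns NULL when no page table is present at
     * this address; bail out instead of dereferencing it. */
    if (!pte)
            return;
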
* ntfs: logging clean-up  (Fabian Frederick, 2014-04-07)
- Convert spinlock/static array to va_format (inspired by Joe Perches' help on previous logging patches).
- Convert printk(KERN_ERR to pr_warn in __ntfs_warning.
- Convert printk(KERN_ERR to pr_err in __ntfs_error.
- Convert printk(KERN_DEBUG to pr_debug in __ntfs_debug.  (Note that __ntfs_debug is still guarded by #if DEBUG.)
- Improve !DEBUG to parse all arguments (Joe Perches).
- Sparse pr_foo() conversions in super.c.

The NTFS and NTFS-fs prefixes, as well as 'warning' and 'error', were removed: pr_foo() automatically adds the module name, and the error level is already specified.

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Cc: Anton Altaparmakov <anton@tuxera.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

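The spinlock/static-buffer scheme is replaced by the kernel's %pV/struct va_format idiom; roughly like this (a sketch of a converted __ntfs_warning(), assuming the usual va_format fields, not a verbatim quote of the patch):

    void __ntfs_warning(const char *function, const struct super_block *sb,
                    const char *fmt, ...)
    {
            struct va_format vaf;
            va_list args;

            va_start(args, fmt);
            vaf.fmt = fmt;
            vaf.va = &args;
            /* %pV prints the wrapped format/args; pr_warn() adds the
             * module name prefix, so no "NTFS-fs warning" boilerplate. */
            if (sb)
                    pr_warn("(device %s): %s(): %pV\n",
                            sb->s_id, function, &vaf);
            else
                    pr_warn("%s(): %pV\n", function, &vaf);
            va_end(args);
    }
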
* Merge tag 'nfs-for-3.15-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs  (Linus Torvalds, 2014-04-06)
|\
Pull NFS client updates from Trond Myklebust:
 "Highlights include:

  - Stable fix for a use after free issue in the NFSv4.1 open code
  - Fix the SUNRPC bi-directional RPC code to account for TCP segmentation
  - Optimise usage of readdirplus when confronted with 'ls -l' situations
  - Soft mount bugfixes
  - NFS over RDMA bugfixes
  - NFSv4 close locking fixes
  - Various NFSv4.x client state management optimisations
  - Rename/unlink code cleanups"

* tag 'nfs-for-3.15-1' of git://git.linux-nfs.org/projects/trondmy/linux-nfs: (28 commits)
  nfs: pass string length to pr_notice message about readdir loops
  NFSv4: Fix a use-after-free problem in open()
  SUNRPC: rpc_restart_call/rpc_restart_call_prepare should clear task->tk_status
  SUNRPC: Don't let rpc_delay() clobber non-timeout errors
  SUNRPC: Ensure call_connect_status() deals correctly with SOFTCONN tasks
  SUNRPC: Ensure call_status() deals correctly with SOFTCONN tasks
  NFSv4: Ensure we respect soft mount timeouts during trunking discovery
  NFSv4: Schedule recovery if nfs40_walk_client_list() is interrupted
  NFS: advertise only supported callback netids
  SUNRPC: remove KERN_INFO from dprintk() call sites
  SUNRPC: Fix large reads on NFS/RDMA
  NFS: Clean up: revert increase in READDIR RPC buffer max size
  SUNRPC: Ensure that call_bind times out correctly
  SUNRPC: Ensure that call_connect times out correctly
  nfs: emit a fsnotify_nameremove call in sillyrename codepath
  nfs: remove synchronous rename code
  nfs: convert nfs_rename to use async_rename infrastructure
  nfs: make nfs_async_rename non-static
  nfs: abstract out code needed to complete a sillyrename
  NFSv4: Clear the open state flags if the new stateid does not match
  ...

| * nfs: pass string length to pr_notice message about readdir loops  (Jeff Layton, 2014-04-05)
There is no guarantee that the strings in the nfs_cache_array will be NULL-terminated.  In the event that we end up hitting a readdir loop, we need to ensure that we pass the warning message the length of the string.

Reported-by: Lachlan McIlroy <lmcilroy@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

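Passing the length explicitly just means using a precision with %s; a sketch of the idea (the message text and variable names here are placeholders, not the actual wording in fs/nfs/dir.c):

    /* 'name' is not guaranteed to be NUL-terminated, so bound the print
     * with the stored length via the "%.*s" precision specifier. */
    pr_notice("NFS: readdir loop detected: entry %.*s has duplicate cookie %llu\n",
              name_len, name, cookie);
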
| * NFSv4: Fix a use-after-free problem in open()  (Trond Myklebust, 2014-03-28)
If we interrupt the nfs4_wait_for_completion_rpc_task() call in nfs4_run_open_task(), then we don't prevent the RPC call from completing.  So freeing up the opendata->f_attr.mdsthreshold in the error path in _nfs4_do_open() leads to a use-after-free when the XDR decoder tries to decode the mdsthreshold information from the server.

Fixes: 82be417aa37c0 (NFSv4.1 cache mdsthreshold values on OPEN)
Tested-by: Steve Dickson <SteveD@redhat.com>
Cc: stable@vger.kernel.org # 3.5+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * SUNRPC: rpc_restart_call/rpc_restart_call_prepare should clear task->tk_status  (Trond Myklebust, 2014-03-20)
When restarting an rpc call, we should not be carrying over data from the previous call.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * SUNRPC: Don't let rpc_delay() clobber non-timeout errors  (Trond Myklebust, 2014-03-20)
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * SUNRPC: Ensure call_connect_status() deals correctly with SOFTCONN tasks  (Steve Dickson, 2014-03-20)
Don't schedule an rpc_delay before checking to see if the task is a SOFTCONN, because the tk_callback from the delay (__rpc_atrun) clears the task status before rpc_exit_task can be run.

Signed-off-by: Steve Dickson <steved@redhat.com>
Fixes: 561ec1603171c (SUNRPC: call_connect_status should recheck...)
Link: http://lkml.kernel.org/r/5329CF7C.7090308@RedHat.com
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * SUNRPC: Ensure call_status() deals correctly with SOFTCONN tasks  (Trond Myklebust, 2014-03-19)
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * NFSv4: Ensure we respect soft mount timeouts during trunking discovery  (Trond Myklebust, 2014-03-19)
Tested-by: Steve Dickson <steved@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * NFSv4: Schedule recovery if nfs40_walk_client_list() is interrupted  (Trond Myklebust, 2014-03-19)
If a timeout or a signal interrupts the NFSv4 trunking discovery SETCLIENTID_CONFIRM call, then we don't know whether or not the server has changed the callback identifier on us.  Assume that it did, and schedule a 'path down' recovery...

Tested-by: Steve Dickson <steved@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * NFS: advertise only supported callback netids  (Chuck Lever, 2014-03-17)
NFSv4.0 clients use the SETCLIENTID operation to inform NFS servers how to contact a client's callback service.  If a server cannot contact a client's callback service, that server will not delegate to that client, which results in a performance loss.

Our client advertises "rdma" as the callback netid when the forward channel is "rdma".  But our client always starts only "tcp" and "tcp6" callback services.

Instead of advertising the forward channel netid, advertise "tcp" or "tcp6" as the callback netid, based on the value of the clientaddr mount option, since those are what our client currently supports.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=69171
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * SUNRPC: remove KERN_INFO from dprintk() call sites  (Chuck Lever, 2014-03-17)
The use of KERN_INFO causes garbage characters to appear when debugging is enabled.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * SUNRPC: Fix large reads on NFS/RDMA  (Chuck Lever, 2014-03-17)
After commit a11a2bf4, "SUNRPC: Optimise away unnecessary data moves in xdr_align_pages", Thu Aug 2 13:21:43 2012, READs larger than a few hundred bytes via NFS/RDMA no longer work.  This commit exposed a long-standing bug in rpcrdma_inline_fixup().

I reproduce this with an rsize=4096 mount using the cthon04 basic tests.  Test 5 fails with an EIO error.

For my reproducer, kernel log shows:

  NFS: server cheating in read reply: count 4096 > recvd 0

rpcrdma_inline_fixup() is zeroing the xdr_stream::page_len field, and xdr_align_pages() is now returning that value to the READ XDR decoder function.

That field is set up by xdr_inline_pages() by the READ XDR encoder function.  As far as I can tell, it is supposed to be left alone after that, as it describes the dimensions of the reply xdr_stream, not the contents of that stream.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=68391
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * NFS: Clean up: revert increase in READDIR RPC buffer max size  (Chuck Lever, 2014-03-17)
Security labels go with each directory entry, thus they are always stored in the page cache, not in the head buffer.  The length of the reply that goes in head[0] should not have changed to support NFSv4.2 labels.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| * Merge branch 'devel' into linux-next  (Trond Myklebust, 2014-03-17)
| |\
| | * SUNRPC: Ensure that call_bind times out correctly  (Trond Myklebust, 2014-03-17)
If the rpcbind server is unavailable, we still want the RPC client to respect the timeout.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * SUNRPC: Ensure that call_connect times out correctly  (Trond Myklebust, 2014-03-17)
When the server is unavailable due to a networking error, etc., we want the RPC client to respect the timeout delays when attempting to reconnect.

Reported-by: Neil Brown <neilb@suse.de>
Fixes: 561ec1603171 (SUNRPC: call_connect_status should recheck bind..)
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * nfs: emit a fsnotify_nameremove call in sillyrename codepath  (Jeff Layton, 2014-03-17)
If a file is sillyrenamed, then the generic vfs_unlink code will skip emitting fsnotify events for it.  This patch has the sillyrename code do that instead.

In truth this is a little bit odd, since we aren't actually removing the dentry per se, but renaming it.  Still, this is probably the right thing to do, since it's what userland apps expect to see when an unlink() occurs or some file is renamed on top of the dentry.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Tested-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * nfs: remove synchronous rename code  (Jeff Layton, 2014-03-17)
Now that nfs_rename uses the async infrastructure, we can remove this.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Tested-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * nfs: convert nfs_rename to use async_rename infrastructure  (Jeff Layton, 2014-03-17)
There isn't much sense in maintaining two separate versions of rename code.  Convert nfs_rename to use the asynchronous rename infrastructure that nfs_sillyrename uses, and emulate synchronous behavior by having the task just wait on the reply.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Tested-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * nfs: make nfs_async_rename non-static  (Jeff Layton, 2014-03-17)
...and move the prototype for nfs_sillyrename to internal.h.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Tested-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * nfs: abstract out code needed to complete a sillyrename  (Jeff Layton, 2014-03-17)
The async rename code is currently "polluted" with some parts that are really just for sillyrenames.  Add a new "complete" operation vector to the nfs_renamedata to separate out the stuff that just needs to be done for a sillyrename.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Tested-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * NFSv4: Clear the open state flags if the new stateid does not match  (Trond Myklebust, 2014-02-19)
RFC3530 and RFC5661 both prescribe that the 'opaque' field of the open stateid returned by new OPEN/OPEN_DOWNGRADE/CLOSE calls for the same file and open owner should match.  If this is not the case, assume that the open state has been lost, and that we need to recover it.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * NFSv4: Use correct locking when updating nfs4_state in nfs4_close_done  (Trond Myklebust, 2014-02-19)
The stateid and state->flags should be updated atomically under protection of the state->seqlock.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * NFSv4.1: Ensure that we free existing layout segments if we get a new layout  (Trond Myklebust, 2014-02-19)
If the server returns a completely new layout stateid in response to our LAYOUTGET, then make sure to free any existing layout segments.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * NFSv4.1: Minor optimisation in get_layout_by_fh_locked()  (Trond Myklebust, 2014-02-19)
If the filehandles match, but the igrab() fails, or the layout is freed before we can get it, then just return NULL.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * NFSv4.1: Ensure that the layout recall callback matches layout stateids  (Trond Myklebust, 2014-02-19)
It is not sufficient to compare filehandles when we receive a layout recall from the server; we also need to check that the layout stateids match.

Reported-by: shaobingqing <shaobingqing@bwstor.com.cn>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * NFSv4: Don't update the open stateid unless it is newer than the old one  (Trond Myklebust, 2014-02-19)
This patch is in preparation for the NFSv4.1 parallel open capability.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * NFSv4.1: Fix wraparound issues in pnfs_seqid_is_newer()  (Trond Myklebust, 2014-02-19)
Subtraction of signed integers does not have well defined wraparound semantics in the C99 standard.  In order to be wraparound-safe, we have to use unsigned subtraction, and then cast the result.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

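The wraparound-safe comparison reads, in essence, as follows (consistent with the description above; a sketch rather than a verbatim quote of the pnfs code):

    /* Unsigned subtraction wraps modulo 2^32; casting the difference to
     * s32 then gives a well-defined "is s1 ahead of s2" answer even
     * across the 0xffffffff -> 0 wrap. */
    static bool pnfs_seqid_is_newer(u32 s1, u32 s2)
    {
            return (s32)(s1 - s2) > 0;
    }
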
| | * NFS: Be more aggressive in using readdirplus for 'ls -l' situations  (Trond Myklebust, 2014-02-11)
Try to detect 'ls -l' by having nfs_getattr() look at whether or not there is an opendir() file descriptor for the parent directory.  If so, then assume that we want to force use of readdirplus in order to avoid the multiple GETATTR calls over the wire.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>

| | * SUNRPC: RPC callbacks may be split across several TCP segments  (Trond Myklebust, 2014-02-11)
Since TCP is a stream protocol, our callback read code needs to take into account the fact that RPC callbacks are not always confined to a single TCP segment.

This patch adds support for multiple TCP segments by ensuring that we only remove the rpc_rqst structure from the 'free backchannel requests' list once the data has been completely received.  We rely on the fact that TCP data is ordered for the duration of the connection.

Reported-by: shaobingqing <shaobingqing@bwstor.com.cn>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>