author		Johannes Weiner <hannes@cmpxchg.org>	2013-10-16 16:47:00 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-10-17 00:35:53 -0400
commit		84235de394d9775bfaa7fa9762a59d91fef0c1fc (patch)
tree		c4066a9859f12ddce7f2f4deae4efa93a2c56f52 /fs
parent		4942642080ea82d99ab5b653abb9a12b7ba31f4a (diff)
fs: buffer: move allocation failure loop into the allocator
Buffer allocation has a very crude indefinite loop around waking the
flusher threads and performing global NOFS direct reclaim because it
cannot handle allocation failures.
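For reference, the pre-patch retry loop looks roughly like the following
condensed sketch of __getblk_slow() in fs/buffer.c (simplified, not
verbatim):

	/* Condensed sketch of the pre-patch __getblk_slow() retry loop */
	for (;;) {
		struct buffer_head *bh;
		int ret;

		bh = __find_get_block(bdev, block, size);
		if (bh)
			return bh;

		ret = grow_buffers(bdev, block, size);
		if (ret < 0)
			return NULL;
		if (ret == 0)
			/* wake the flushers + global NOFS direct reclaim, then retry */
			free_more_memory();
	}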
The most immediate problem with this is that the allocation may fail due
to a memory cgroup limit, where flushers + direct reclaim might not make
any progress towards resolving the situation at all.  Unlike the global
case, a memory cgroup may have no cache at all, only anonymous pages and
no swap.  That situation leads to a reclaim livelock with insane IO from
waking the flushers and thrashing unrelated filesystem cache in a tight
loop.
Use __GFP_NOFAIL allocations for buffers for now.  This makes sure that
any looping happens in the page allocator, which knows how to
orchestrate kswapd, direct reclaim, and the flushers sensibly.  It also
allows memory cgroups to detect allocations that can't handle failure
and will allow them to ultimately bypass the limit if reclaim cannot
make progress.
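On the memcg side, the charge path can then special-case such
allocations; an illustrative sketch of the intended behaviour (not the
exact mm/memcontrol.c code):

	/*
	 * Sketch only: in a try_charge()-style path, an allocation marked
	 * __GFP_NOFAIL is let through the limit instead of being trapped
	 * in cgroup reclaim that may never make progress.
	 */
	if (gfp_mask & __GFP_NOFAIL)
		goto bypass;	/* account the page anyway and return success */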
Reported-by: azurIt <azurit@pobox.sk>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'fs')
-rw-r--r--	fs/buffer.c	14
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 4d7433534f5c..6024877335ca 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1005,9 +1005,19 @@ grow_dev_page(struct block_device *bdev, sector_t block,
 	struct buffer_head *bh;
 	sector_t end_block;
 	int ret = 0;		/* Will call free_more_memory() */
+	gfp_t gfp_mask;
 
-	page = find_or_create_page(inode->i_mapping, index,
-		(mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
+	gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
+	gfp_mask |= __GFP_MOVABLE;
+	/*
+	 * XXX: __getblk_slow() can not really deal with failure and
+	 * will endlessly loop on improvised global reclaim.  Prefer
+	 * looping in the allocator rather than here, at least that
+	 * code knows what it's doing.
+	 */
+	gfp_mask |= __GFP_NOFAIL;
+
+	page = find_or_create_page(inode->i_mapping, index, gfp_mask);
 	if (!page)
 		return ret;
 