author    Shaohua Li <shli@fb.com>               2015-06-11 19:50:48 -0400
committer David S. Miller <davem@davemloft.net>  2015-06-11 20:33:44 -0400
commit    fb05e7a89f500cfc06ae277bdc911b281928995d
tree      126cd57257dd5d24d11331ca286718c0f83afb5e /net
parent    0fae3bf018d97b210051c8797a49d66d31071847
net: don't wait for order-3 page allocation
We saw excessive direct memory compaction triggered by skb_page_frag_refill.
This causes performance issues and adds latency. Commit 5640f7685831e0
introduced the order-3 allocation. According to its changelog, the order-3
allocation isn't a must-have; it is only a performance improvement. But direct
memory compaction has high overhead, and the benefit of the order-3 allocation
can't compensate for that overhead.

This patch makes the order-3 page allocation atomic. If there is no memory
pressure and memory isn't fragmented, the allocation will still succeed, so we
don't sacrifice the order-3 benefit here. If the atomic allocation fails,
direct memory compaction will not be triggered; skb_page_frag_refill will
fall back to order-0 immediately, so the direct memory compaction overhead is
avoided. In the allocation failure case, kswapd is woken up and performs
compaction, so chances are the allocation will succeed next time.

alloc_skb_with_frags is changed the same way.

The mellanox driver does a similar thing; if this is accepted, we must fix
that driver too.
V3: fix the same issue in alloc_skb_with_frags as pointed out by Eric
V2: make the changelog clearer
Cc: Eric Dumazet <edumazet@google.com>
Cc: Chris Mason <clm@fb.com>
Cc: Debabrata Banerjee <dbavatar@gmail.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net')
 net/core/skbuff.c | 2 +-
 net/core/sock.c   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3cfff2a3d651..41ec02242ea7 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4398,7 +4398,7 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
 
 	while (order) {
 		if (npages >= 1 << order) {
-			page = alloc_pages(gfp_mask |
+			page = alloc_pages((gfp_mask & ~__GFP_WAIT) |
 					   __GFP_COMP |
 					   __GFP_NOWARN |
 					   __GFP_NORETRY,
diff --git a/net/core/sock.c b/net/core/sock.c
index 469d6039c7f5..dc30dc5bb1b8 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1880,7 +1880,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
 
 	pfrag->offset = 0;
 	if (SKB_FRAG_PAGE_ORDER) {
-		pfrag->page = alloc_pages(gfp | __GFP_COMP |
+		pfrag->page = alloc_pages((gfp & ~__GFP_WAIT) | __GFP_COMP |
 					  __GFP_NOWARN | __GFP_NORETRY,
 					  SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {