author      Eric Dumazet <dada1@cosmosbay.com>     2008-11-05 04:38:06 -0500
committer   David S. Miller <davem@davemloft.net>  2008-11-05 04:38:06 -0500
commit      270acefafeb74ce2fe93d35b75733870bf1e11e7 (patch)
tree        9368122a53b2834d2cd7894a1a316a9fde5d19ca /net
parent      d99a7bd210a14001007fc5233597c78877f0a11c (diff)
net: sk_free_datagram() should use sk_mem_reclaim_partial()
I noticed contention on udp_memory_allocated with regular UDP applications.
While tcp_memory_allocated is seldom touched, it appears each incoming UDP frame
currently updates udp_memory_allocated twice: when it is queued, and again when
it is received by the application.
One possible solution is to use sk_mem_reclaim_partial() instead of
sk_mem_reclaim(), so that we keep a small reserve (less than one page)
of memory for each UDP socket.
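To make the effect concrete, here is a small userspace model, not kernel code, of quantum-based socket accounting. QUANTUM, the thresholds and the charge/uncharge/reclaim helpers are simplified assumptions standing in for SK_MEM_QUANTUM, sk_forward_alloc and udp_memory_allocated; the sketch only illustrates why keeping a reserve across a receive/free cycle avoids hitting the shared counter.

#include <stdio.h>

#define QUANTUM 4096                    /* stand-in for SK_MEM_QUANTUM (one page) */

static long shared_pages;               /* stand-in for udp_memory_allocated      */
static long shared_touches;             /* updates to the contended counter       */

struct sock_model {
	int forward_alloc;              /* per-socket byte reserve                */
};

/* Charge one datagram: dip into the shared pool only if the reserve
 * cannot cover it. */
static void charge(struct sock_model *sk, int bytes)
{
	while (sk->forward_alloc < bytes) {
		shared_pages++;
		shared_touches++;
		sk->forward_alloc += QUANTUM;
	}
	sk->forward_alloc -= bytes;
}

/* Return the datagram's bytes to the per-socket reserve. */
static void uncharge(struct sock_model *sk, int bytes)
{
	sk->forward_alloc += bytes;
}

/* Release whole pages back to the shared pool once the reserve reaches
 * the threshold; the sub-page remainder stays with the socket. */
static void reclaim(struct sock_model *sk, int threshold)
{
	if (sk->forward_alloc >= threshold) {
		shared_pages -= sk->forward_alloc / QUANTUM;
		shared_touches++;
		sk->forward_alloc %= QUANTUM;
	}
}

static long run(int threshold, int frames, int truesize)
{
	struct sock_model sk = { 0 };

	shared_pages = 0;
	shared_touches = 0;
	for (int i = 0; i < frames; i++) {
		charge(&sk, truesize);          /* frame queued             */
		uncharge(&sk, truesize);        /* frame read and skb freed */
		reclaim(&sk, threshold);
	}
	return shared_touches;
}

int main(void)
{
	/* Full reclaim fires at one page; the partial policy only above it. */
	printf("full reclaim   : %ld shared-counter updates\n",
	       run(QUANTUM, 100000, 700));
	printf("partial reclaim: %ld shared-counter updates\n",
	       run(QUANTUM + 1, 100000, 700));
	return 0;
}

In this model the full-reclaim policy touches the shared counter twice per datagram, while the partial policy leaves roughly one page with the socket and then stops touching the shared counter altogether.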
We did something very similar on the TCP side in commit
9993e7d313e80bdc005d09c7def91903e0068f07
([TCP]: Do not purge sk_forward_alloc entirely in tcp_delack_timer())
A more complex solution would need to convert prot->memory_allocated to
use a percpu_counter with batches of 64 or 128 pages.
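For that percpu_counter direction, the sketch below models the batching idea in plain userspace C: each CPU accumulates a local delta and folds it into the shared total only when the delta reaches the batch size. NR_CPUS_MODEL, BATCH and the fold policy are illustrative assumptions, not the kernel percpu_counter API.

#include <stdio.h>

#define NR_CPUS_MODEL 4
#define BATCH         64		/* pages, as suggested above         */

struct batched_counter {
	long total;			/* the (contended) global value      */
	long local[NR_CPUS_MODEL];	/* per-CPU deltas, folded lazily     */
	long folds;			/* how often the global was touched  */
};

static void counter_add(struct batched_counter *c, int cpu, long pages)
{
	c->local[cpu] += pages;
	if (c->local[cpu] >= BATCH || c->local[cpu] <= -BATCH) {
		c->total += c->local[cpu];	/* one update covers many frames */
		c->local[cpu] = 0;
		c->folds++;
	}
}

/* Approximate read: good enough for limit checks such as udp_mem[]. */
static long counter_read(const struct batched_counter *c)
{
	return c->total;
}

int main(void)
{
	struct batched_counter c = { 0 };

	/* 100000 one-page charges spread over the modelled CPUs. */
	for (int i = 0; i < 100000; i++)
		counter_add(&c, i % NR_CPUS_MODEL, +1);
	printf("exact sum 100000, approximate read %ld, global updates %ld\n",
	       counter_read(&c), c.folds);
	return 0;
}

The price in this model is that reads become approximate by up to NR_CPUS_MODEL * BATCH pages, which is why the batch would have to stay small relative to the udp_mem limits.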
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net')
-rw-r--r--  net/core/datagram.c | 5
1 file changed, 2 insertions, 3 deletions
diff --git a/net/core/datagram.c b/net/core/datagram.c
index ee631843c2f5..5e2ac0c4b07c 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -209,7 +209,7 @@ struct sk_buff *skb_recv_datagram(struct sock *sk, unsigned flags,
 void skb_free_datagram(struct sock *sk, struct sk_buff *skb)
 {
 	kfree_skb(skb);
-	sk_mem_reclaim(sk);
+	sk_mem_reclaim_partial(sk);
 }
 
 /**
@@ -248,8 +248,7 @@ int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags)
 		spin_unlock_bh(&sk->sk_receive_queue.lock);
 	}
 
-	kfree_skb(skb);
-	sk_mem_reclaim(sk);
+	skb_free_datagram(sk, skb);
 	return err;
 }
 