| author | Eric Dumazet <eric.dumazet@gmail.com> | 2010-04-27 18:13:20 -0400 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2010-04-27 18:13:20 -0400 |
| commit | c377411f2494a931ff7facdbb3a6839b1266bcf6 | |
| tree | 6846cdcec913f50839e3916856f78f7e059ff5fb /net/ipv4 | |
| parent | 6e7676c1a76aed6e957611d8d7a9e5592e23aeba | |
net: sk_add_backlog() take rmem_alloc into account
The current socket backlog limit is not enough to really stop DDoS attacks, because the user thread spends a lot of time processing a full backlog each round, and may end up spinning madly on the socket lock.

We should add the backlog size and the receive queue size (aka rmem_alloc) together to pace writers, and let the user thread run without being slowed down too much.
Introduce a sk_rcvqueues_full() helper, to avoid taking the socket lock in stress situations.
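The helper itself is added outside net/ipv4, so it does not appear in the diff below (the diffstat is limited to that directory). What follows is a minimal sketch of the check the commit message describes, not a quote of the upstream code: the `_sketch` name and the exact expression are assumptions, while the struct sock fields (sk_backlog.len, sk_rmem_alloc, sk_rcvbuf) are real.

```c
#include <net/sock.h>

/*
 * Hedged sketch of the sk_rcvqueues_full() check described above; the
 * real helper lives outside net/ipv4 and is not part of this diff.
 * Folding in skb->truesize is an assumption inferred from the
 * two-argument call shown in the diff below.
 */
static inline bool sk_rcvqueues_full_sketch(const struct sock *sk,
					    const struct sk_buff *skb)
{
	/* Bytes already pending: unprocessed backlog + receive queue (rmem_alloc) */
	unsigned int qsize = sk->sk_backlog.len +
			     atomic_read(&sk->sk_rmem_alloc);

	/* Refuse the new skb once pending bytes would exceed the receive buffer */
	return qsize + skb->truesize > (unsigned int)sk->sk_rcvbuf;
}
```

As the diff below shows, udp_queue_rcv_skb() performs this check before bh_lock_sock(), so a flooded socket sheds load before writers and the user thread ever contend on the lock.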
Under huge stress from a multiqueue/RPS-enabled NIC, a single-flow UDP receiver can now process ~200,000 pps (instead of ~100 pps before the patch) on an 8-core machine.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/ipv4')
| -rw-r--r-- | net/ipv4/udp.c | 4 |
1 files changed, 4 insertions, 0 deletions
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index fa3d2874db4..63eb56b2d87 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1372,6 +1372,10 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 		goto drop;
 	}
 
+	if (sk_rcvqueues_full(sk, skb))
+		goto drop;
+
 	rc = 0;
 
 	bh_lock_sock(sk);