author	Willem de Bruijn <willemb@google.com>	2013-05-20 00:02:32 -0400
committer	David S. Miller <davem@davemloft.net>	2013-05-20 16:48:04 -0400
commit	99bbc70741903c063b3ccad90a3e06fc55df9245 (patch)
tree	a3377d2461242bf1134464ce3fe6d69f82c907c2 /net/Kconfig
parent	4a5bddf7ea6b6c5916eccbc2fa1950555073ff48 (diff)
rps: selective flow shedding during softnet overflow
A cpu executing the network receive path sheds packets when its input queue grows to netdev_max_backlog. A single high rate flow (such as a spoofed source DoS) can exceed a single cpu processing rate and will degrade throughput of other flows hashed onto the same cpu.

This patch adds a more fine grained hashtable. If the netdev backlog is above a threshold, IRQ cpus track the ratio of total traffic of each flow (using 4096 buckets, configurable). The ratio is measured by counting the number of packets per flow over the last 256 packets from the source cpu. Any flow that occupies a large fraction of this (set at 50%) will see packet drop while above the threshold.

Tested: Setup is a multi-threaded UDP echo server with network rx IRQ on cpu0, kernel receive (RPS) on cpu0 and application threads on cpus 2-7, each handling 20k req/s. Throughput halves when hit with a 400 kpps antagonist storm. With this patch applied, the antagonist overload is dropped and the server processes its complete load.

The patch is effective when kernel receive processing is the bottleneck. The above RPS scenario is an extreme case, but the same point is reached with RFS and sufficient kernel processing (iptables, packet socket tap, ..).

Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
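Since the message fully specifies the accounting (a 256-packet sliding window, 4096 hash buckets, drop above a 50% share), the mechanism can be modeled in a few lines of userspace C. The sketch below is illustrative only, written against the description above rather than the kernel sources: the names (struct flow_limit, flow_limit_shed), the threshold choice, and the standalone main() are hypothetical.

/*
 * Minimal userspace model of the flow shedding described above.
 * A ring buffer remembers the bucket index of the last FL_HISTORY
 * packets; a per-bucket counter therefore holds each flow's share
 * of that window. Once the backlog is past a threshold, a packet
 * is shed when its flow already owns more than half the window.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FL_HISTORY 256   /* packets in the sliding window (per the commit) */
#define FL_BUCKETS 4096  /* hash buckets, configurable (per the commit)    */

struct flow_limit {
	uint32_t buckets[FL_BUCKETS]; /* packets per bucket within the window */
	uint32_t history[FL_HISTORY]; /* ring of recent bucket indices        */
	uint32_t head;                /* next slot to overwrite in the ring   */
};

/* Returns true if this packet should be dropped. */
static bool flow_limit_shed(struct flow_limit *fl, uint32_t rxhash,
                            unsigned int qlen, unsigned int max_backlog)
{
	/* Engage only above a backlog threshold (half of max, assumed here). */
	if (qlen < max_backlog / 2)
		return false;

	uint32_t new_flow = rxhash & (FL_BUCKETS - 1);

	/* Slide the window: expire the oldest packet, record the newest. */
	uint32_t old_flow = fl->history[fl->head];
	fl->history[fl->head] = new_flow;
	fl->head = (fl->head + 1) & (FL_HISTORY - 1);
	if (fl->buckets[old_flow])
		fl->buckets[old_flow]--;

	/* Shed flows holding more than 50% of the recent window. */
	return fl->buckets[new_flow]++ > FL_HISTORY / 2;
}

int main(void)
{
	static struct flow_limit fl; /* zero-initialized */
	unsigned int drops = 0;

	/* One flow sends 400 packets into a saturated backlog: once it
	 * dominates the window, further packets are shed. */
	for (int i = 0; i < 400; i++)
		drops += flow_limit_shed(&fl, 0xabcd, 1000, 1000);
	printf("dropped %u of 400 antagonist packets\n", drops);
	return 0;
}

In the patch series itself the analogous state hangs per-CPU off softnet_data and the check runs on the IRQ cpu before a packet is enqueued to the backlog, but the window arithmetic is the same idea.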
Diffstat (limited to 'net/Kconfig')
-rw-r--r--	net/Kconfig	12
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/net/Kconfig b/net/Kconfig
index 2ddc9046868e..08de901415ee 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -259,6 +259,18 @@ config BPF_JIT
 	  packet sniffing (libpcap/tcpdump). Note : Admin should enable
 	  this feature changing /proc/sys/net/core/bpf_jit_enable
 
+config NET_FLOW_LIMIT
+	boolean
+	depends on RPS
+	default y
+	---help---
+	  The network stack has to drop packets when a receive processing CPU's
+	  backlog reaches netdev_max_backlog. If a few out of many active flows
+	  generate the vast majority of load, drop their traffic earlier to
+	  maintain capacity for the other flows. This feature provides servers
+	  with many clients some protection against DoS by a single (spoofed)
+	  flow that greatly exceeds average workload.
+
 menu "Network testing"
 
 config NET_PKTGEN
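Note that CONFIG_NET_FLOW_LIMIT only compiles the support in; companion patches in this series add the runtime controls, exposed (per Documentation/networking/scaling.txt) as the net.core.flow_limit_cpu_bitmap sysctl, which selects the CPUs that apply the limit, and net.core.flow_limit_table_len, which sets the number of hash buckets.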