author:    Eric Dumazet <eric.dumazet@gmail.com>  2011-05-23 07:02:42 -0400
committer: David S. Miller <davem@davemloft.net>  2011-05-23 17:36:00 -0400
commit:    8efa885406359af300d46910642b50ca82c0fe47 (patch)
tree:      1eecc0b8152d775b5c261a2a1749a2f711f81f13 /net/sched/sch_sfq.c
parent:    a4910b744486254cfa61995954c118fb2283c4fd (diff)
sch_sfq: avoid giving spurious NET_XMIT_CN signals
While chasing a possible net_sched bug, I found that IP fragments have
little chance to pass a congested SFQ qdisc:
- Say the SFQ qdisc is full because one flow is non-responsive.
- ip_fragment() wants to send two fragments belonging to an idle flow.
- sfq_enqueue() queues the first packet, but sees the queue limit reached:
- sfq_enqueue() drops one packet from the 'big consumer', and returns
NET_XMIT_CN.
- ip_fragment() cancels the remaining fragments.
This patch restores fairness, making sure we return NET_XMIT_CN only if
we dropped a packet from the same flow.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Patrick McHardy <kaber@trash.net>
CC: Jarek Poplawski <jarkao2@gmail.com>
CC: Jamal Hadi Salim <hadi@cyberus.ca>
CC: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/sched/sch_sfq.c')
 net/sched/sch_sfq.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index 7ef87f9eb675..b1d00f8e09f8 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -361,7 +361,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
 	struct sfq_sched_data *q = qdisc_priv(sch);
 	unsigned int hash;
-	sfq_index x;
+	sfq_index x, qlen;
 	struct sfq_slot *slot;
 	int uninitialized_var(ret);
 
@@ -405,8 +405,12 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	if (++sch->q.qlen <= q->limit)
 		return NET_XMIT_SUCCESS;
 
+	qlen = slot->qlen;
 	sfq_drop(sch);
-	return NET_XMIT_CN;
+	/* Return Congestion Notification only if we dropped a packet
+	 * from this flow.
+	 */
+	return (qlen != slot->qlen) ? NET_XMIT_CN : NET_XMIT_SUCCESS;
 }
 
 static struct sk_buff *