author     Eric Dumazet <eric.dumazet@gmail.com>   2011-01-20 00:27:16 -0500
committer  David S. Miller <davem@davemloft.net>   2011-01-20 19:59:32 -0500
commit     fd245a4adb5288eac37250875f237c40a20a1944 (patch)
tree       1c16670c53dab9d9d05b26a7e7ae8a6a8267e847 /net/sched/sch_netem.c
parent     817fb15dfd988d8dda916ee04fa506f0c466b9d6 (diff)
net_sched: move TCQ_F_THROTTLED flag
In commit 371121057607e ("net: QDISC_STATE_RUNNING dont need atomic bit
ops") I moved the QDISC_STATE_RUNNING flag into the __state container,
located in the cache line that holds the qdisc lock and other often-dirtied
fields.
I now move the TCQ_F_THROTTLED bit there too, so that the first cache line
stays read-mostly and can be shared by all CPUs. This should speed up
HTB/CBQ, for example.
Not using test_bit()/__clear_bit()/__test_and_set_bit() lets the __state
container be a plain "unsigned int", reducing Qdisc size by 8 bytes.
Introduce helpers to hide implementation details.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Patrick McHardy <kaber@trash.net>
CC: Jesper Dangaard Brouer <hawk@diku.dk>
CC: Jarek Poplawski <jarkao2@gmail.com>
CC: Jamal Hadi Salim <hadi@cyberus.ca>
CC: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/sched/sch_netem.c')
 net/sched/sch_netem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index c2bbbe60d544..c26ef3614f7e 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -266,7 +266,7 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
 	struct netem_sched_data *q = qdisc_priv(sch);
 	struct sk_buff *skb;
 
-	if (sch->flags & TCQ_F_THROTTLED)
+	if (qdisc_is_throttled(sch))
		return NULL;
 
 	skb = q->qdisc->ops->peek(q->qdisc);
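
The hunk above replaces a direct test of sch->flags & TCQ_F_THROTTLED with the
new qdisc_is_throttled() helper. Below is a minimal, self-contained user-space
sketch of the technique the commit message describes: a plain "unsigned int"
__state container manipulated with ordinary (non-atomic) operations behind
small helpers. The struct layout, flag values and main() driver here are
illustrative assumptions, not the actual include/net/sch_generic.h code.

/*
 * Sketch: keep "running"/"throttled" state in a plain unsigned int and
 * hide the representation behind helpers, instead of using atomic
 * test_bit()/__set_bit() style accessors on a long-based bitmap.
 */
#include <stdbool.h>
#include <stdio.h>

enum qdisc_state_bits {
	QDISC_STATE_RUNNING   = 1u << 0,
	QDISC_STATE_THROTTLED = 1u << 1,
};

struct qdisc {
	/* assumed to be modified only under the qdisc lock,
	 * so non-atomic read-modify-write is sufficient */
	unsigned int __state;
};

/* Helpers hide how the flag is stored, as the commit message suggests. */
static inline bool qdisc_is_throttled(const struct qdisc *q)
{
	return q->__state & QDISC_STATE_THROTTLED;
}

static inline void qdisc_throttled(struct qdisc *q)
{
	q->__state |= QDISC_STATE_THROTTLED;
}

static inline void qdisc_unthrottled(struct qdisc *q)
{
	q->__state &= ~QDISC_STATE_THROTTLED;
}

int main(void)
{
	struct qdisc q = { 0 };

	qdisc_throttled(&q);
	printf("throttled: %d\n", qdisc_is_throttled(&q));	/* prints 1 */
	qdisc_unthrottled(&q);
	printf("throttled: %d\n", qdisc_is_throttled(&q));	/* prints 0 */
	return 0;
}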