author:    jamal <hadi@cyberus.ca>                2011-06-26 04:13:54 -0400
committer: David S. Miller <davem@davemloft.net>  2011-06-27 03:14:10 -0400
commit:    d5b8aa1d246fddfe4042be6f6eb169efa5cfbb94
tree:      5c0be52e7fba52ba38c8cd606b5c15e2435e343a /net/sched/sch_generic.c
parent:    85a43a9edaf5b541381acbf4061bace1121d6ef0
net_sched: fix dequeuer fairness
Results on a dummy device can be seen in my netconf 2011
slides. The results below are for a 10GigE Intel IXGBE
NIC on another i5 machine, with very similar specs to
the one used in the netconf 2011 results.
It turns out this is a hell of a lot worse than dummy,
and so this patch is even more beneficial for 10G.
Test setup:
-----------
The system under test sends packets out; an additional
box, connected directly, drops them.
A prio qdisc was installed on the eth device, with the
default netdev tx queue length of 1000 used as-is.
The 3 prio bands were each limited to 100 packets
(this didn't factor into the results).
5 packet runs were made and the middle 3 picked.
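For reference, a rough reconstruction of that setup
(the device name eth0, and reading the band setting as
per-band pfifo limits, are assumptions; the exact
commands are not given in the changelog):

  ip link set dev eth0 txqueuelen 1000       # netdev default, used as-is
  tc qdisc add dev eth0 root handle 1: prio  # 3 bands by default
  tc qdisc add dev eth0 parent 1:1 handle 10: pfifo limit 100
  tc qdisc add dev eth0 parent 1:2 handle 20: pfifo limit 100
  tc qdisc add dev eth0 parent 1:3 handle 30: pfifo limit 100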
Results
-------
The "cpu" column indicates which cpu the sample
was taken on.
The "Pkt runX" columns carry the number of packets a cpu
dequeued when forced into the "dequeuer" role.
The "avg" row for each run is the number of packets each
cpu would dequeue if the system were perfectly fair.
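(Concretely, "avg" is just the per-run total split evenly
across the 4 cpus; e.g. for run 1 of the unpatched kernel
below: (21853354 + 431058 + 481975 + 23261406) / 4
~= 11506948.)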
3.0-rc4 (plain)
cpu Pkt run1 Pkt run2 Pkt run3
================================================
cpu0 21853354 21598183 22199900
cpu1 431058 473476 393159
cpu2 481975 477529 458466
cpu3 23261406 23412299 22894315
avg 11506948 11490372 11486460
3.0-rc4 with patch and default weight 64
cpu Pkt run1 Pkt run2 Pkt run3
================================================
cpu0 13205312 13109359 13132333
cpu1 10189914 10159127 10122270
cpu2 10213871 10124367 10168722
cpu3 13165760 13164767 13096705
avg 11693714 11639405 11630008
As you can see the system is still not perfect, but it is
a lot better than before: the worst-served cpus now get
roughly 87% of the fair share, versus under 4% without
the patch.
At the moment we use the old backlog weight, weight_p,
which is 64 packets. It seems to be reasonably fine
at that value.
The system could be made more fair by reducing weight_p
(as per my presentation), but that would also affect the
shared backlog weight used on the receive path. Unless
deemed necessary, I think the default value is fine. If
not, we could add yet another knob.
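Note that weight_p is already exposed to userspace as the
net.core.dev_weight sysctl, so experimenting with a lower
value needs no kernel change. A sketch (32 is just an
illustrative value, and remember this also changes
receive-path backlog processing, per the shared-weight
concern above):

  sysctl -w net.core.dev_weight=32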
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/sched/sch_generic.c')

 net/sched/sch_generic.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b4c680900d7a..d253c16a314c 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -189,15 +189,17 @@ static inline int qdisc_restart(struct Qdisc *q)
 
 void __qdisc_run(struct Qdisc *q)
 {
-	unsigned long start_time = jiffies;
+	int quota = weight_p;
+	int work = 0;
 
 	while (qdisc_restart(q)) {
+		work++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (work >= quota || need_resched()) {
 			__netif_schedule(q);
 			break;
 		}