author     Eric W. Biederman <ebiederm@xmission.com>   2011-10-13 18:25:23 -0400
committer  David S. Miller <davem@davemloft.net>       2011-10-19 16:59:42 -0400
commit     850a545bd8a41648445bfc5541afe36ca105b0b8
tree       c3d3da6af635780cc67bcd95951aa21d7b435442 /net/core/dev.c
parent     06a59ecb921de1d44efcf2cdf543bc689fe2e0d8
net: Move rcu_barrier from rollback_registered_many to netdev_run_todo.
This patch moves the rcu_barrier from rollback_registered_many
(inside the rtnl_lock) into netdev_run_todo (just outside the rtnl_lock).
This allows us to gain the full benefit of synchronize_net calling
synchronize_rcu_expedited when the rtnl_lock is held.
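For context, synchronize_net() in kernels of this vintage is a small
wrapper that chooses the expedited grace period when the rtnl_lock is
held; a sketch (modulo version details):

void synchronize_net(void)
{
	might_sleep();
	if (rtnl_is_locked())
		/* rtnl holders are waited on by many; hurry the grace period */
		synchronize_rcu_expedited();
	else
		synchronize_rcu();
}

Keeping the wait inside the rtnl_lock is what lets the expedited path
be used at all, which is the benefit this patch preserves.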
The rcu_barrier in rollback_registered_many was originally a synchronize_net,
but was promoted to an rcu_barrier() when it was found that people were
unnecessarily hitting the 250ms wait in netdev_wait_allrefs(). Changing
the rcu_barrier back to a synchronize_net is therefore safe.
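The distinction: synchronize_net()/synchronize_rcu() waits only for a
grace period to elapse, while rcu_barrier() also waits for all
already-queued call_rcu() callbacks to execute. A hypothetical sketch
(struct foo and its helpers are invented for illustration):

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical object freed via call_rcu(). */
struct foo {
	int data;
	struct rcu_head rcu;
};

static void foo_free_cb(struct rcu_head *head)
{
	kfree(container_of(head, struct foo, rcu));
}

static void foo_release(struct foo *f)
{
	call_rcu(&f->rcu, foo_free_cb);

	synchronize_rcu(); /* readers are done; foo_free_cb may still be pending */
	rcu_barrier();     /* all queued callbacks, foo_free_cb included, have run */
}

Only rcu_barrier() guarantees the callback has finished, which is why
it was used to keep callback-held device references from lingering
into netdev_wait_allrefs().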
Since we only care about waiting for the rcu callbacks before we get
to netdev_wait_allrefs(), it is also safe to move the wait into
netdev_run_todo.
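The 250ms figure is the polling interval in netdev_wait_allrefs(): if
an RCU callback still holds a device reference when we get there, the
loop sleeps at least once. An abridged sketch of that loop (simplified;
details vary by kernel version):

static void netdev_wait_allrefs(struct net_device *dev)
{
	unsigned long rebroadcast_time = jiffies;

	while (netdev_refcnt_read(dev) != 0) {
		if (time_after(jiffies, rebroadcast_time + 1 * HZ)) {
			rtnl_lock();
			/* Rebroadcast unregister notification */
			call_netdevice_notifiers(NETDEV_UNREGISTER, dev);
			__rtnl_unlock();
			rebroadcast_time = jiffies;
		}

		msleep(250); /* the wait this patch helps avoid */
	}
}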
This was tested by creating and destroying 1000 tap devices while
observing /proc/lock_stat, which reports that this change reduces the
hold times of the rtnl_lock by a factor of 10. There was no observable
difference in the time it takes to destroy a network device.
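A minimal reproduction of the test in C (a sketch, assuming the
standard tun/tap interface: TUNSETIFF on /dev/net/tun registers a tap
device, and closing the descriptor unregisters a non-persistent one).
With CONFIG_LOCK_STAT=y, the rtnl_mutex entry in /proc/lock_stat can
then be compared before and after the patch:

#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	for (int i = 0; i < 1000; i++) {
		struct ifreq ifr;
		int fd = open("/dev/net/tun", O_RDWR);

		if (fd < 0)
			return 1;
		memset(&ifr, 0, sizeof(ifr));
		ifr.ifr_flags = IFF_TAP | IFF_NO_PI; /* kernel picks a tap%d name */
		if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
			close(fd);
			return 1;
		}
		close(fd); /* destroys the non-persistent tap device */
	}
	return 0;
}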
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/core/dev.c')
 net/core/dev.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index cbb5918e4fc5..09aef266d4d1 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5235,7 +5235,7 @@ static void rollback_registered_many(struct list_head *head)
 	dev = list_first_entry(head, struct net_device, unreg_list);
 	call_netdevice_notifiers(NETDEV_UNREGISTER_BATCH, dev);
 
-	rcu_barrier();
+	synchronize_net();
 
 	list_for_each_entry(dev, head, unreg_list)
 		dev_put(dev);
@@ -5748,6 +5748,12 @@ void netdev_run_todo(void)
 
 	__rtnl_unlock();
 
+	/* Wait for rcu callbacks to finish before attempting to drain
+	 * the device list. This usually avoids a 250ms wait.
+	 */
+	if (!list_empty(&list))
+		rcu_barrier();
+
 	while (!list_empty(&list)) {
 		struct net_device *dev
 			= list_first_entry(&list, struct net_device, todo_list);