Commit message    Author    Age
* pkt_sched: Schedule qdiscs instead of netdev_queue.    David S. Miller    2008-07-17
| | | | | | | | | | | | | | When we have shared qdiscs, packets come out of the qdiscs for multiple transmit queues. Therefore it doesn't make any sense to schedule the transmit queue when logically we cannot know ahead of time the TX queue of the SKB that the qdisc->dequeue() will give us. Just for sanity I added a BUG check to make sure we never get into a state where the noop_qdisc is scheduled. Signed-off-by: David S. Miller <davem@davemloft.net>
* pkt_sched: Add and use qdisc_root() and qdisc_root_lock().    David S. Miller    2008-07-17
| | | | | | | | | | | | | | When code wants to lock the qdisc tree state, the logical operation it's doing is locking the top-level qdisc that sits at the root of the netdev_queue. Add qdisc_root_lock() to represent this and convert the easiest cases. In order for this to work out in all cases, we have to hook up the noop_qdisc to a dummy netdev_queue. Signed-off-by: David S. Miller <davem@davemloft.net>
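A minimal sketch of what these two helpers amount to, assuming the qdisc keeps a back pointer to its netdev_queue; the field names below are illustrative rather than copied from the tree:

    static inline struct Qdisc *qdisc_root(struct Qdisc *qdisc)
    {
            /* the root is the qdisc attached to the queue we hang off */
            return qdisc->dev_queue->qdisc;
    }

    static inline spinlock_t *qdisc_root_lock(struct Qdisc *qdisc)
    {
            /* locking the tree state means taking the root's queue lock */
            struct Qdisc *root = qdisc_root(qdisc);

            return &root->dev_queue->lock;
    }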
* pkt_sched: Make QDISC_RUNNING a qdisc state.    David S. Miller    2008-07-17
| | | | | | | Currently it is associated with a netdev_queue, but when we have qdisc sharing that no longer makes any sense. Signed-off-by: David S. Miller <davem@davemloft.net>
* pkt_sched: Move gso_skb into Qdisc.    David S. Miller    2008-07-17
| | | | | | | | | | We liberate any dangling gso_skb during qdisc destruction. It really only matters for the root qdisc. But when qdiscs can be shared by multiple netdev_queue objects, we can't have the gso_skb in the netdev_queue any more. Signed-off-by: David S. Miller <davem@davemloft.net>
* niu: Add TX multiqueue support.    David S. Miller    2008-07-17
| | | | Signed-off-by: David S. Miller <davem@davemloft.net>
* netdev: Kill plain netif_schedule()    David S. Miller    2008-07-17
| | | | | | No more users. Signed-off-by: David S. Miller <davem@davemloft.net>
* netdev: Convert all drivers away from netif_schedule().    David S. Miller    2008-07-17
| | | | | | | They logically all want to trigger a schedule for all device TX queues. Signed-off-by: David S. Miller <davem@davemloft.net>
* net: Implement simple sw TX hashing.    David S. Miller    2008-07-17
| | | | | | | | | | | It just xor hashes over IPv4/IPv6 addresses and ports of transport. The only assumption it makes is that skb_network_header() is set correctly. With bug fixes from Eric Dumazet. Signed-off-by: David S. Miller <davem@davemloft.net>
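A condensed sketch of the idea, not the literal kernel function; the helper name and the final reduction to a queue index are illustrative:

    static u16 example_simple_tx_hash(struct net_device *dev, struct sk_buff *skb)
    {
            u32 hash = 0;

            /* assumes skb_network_header() has already been set */
            if (skb->protocol == htons(ETH_P_IP)) {
                    const struct iphdr *iph = ip_hdr(skb);

                    hash = iph->saddr ^ iph->daddr;
            } else if (skb->protocol == htons(ETH_P_IPV6)) {
                    const struct ipv6hdr *ip6h = ipv6_hdr(skb);

                    hash = ip6h->saddr.s6_addr32[3] ^ ip6h->daddr.s6_addr32[3];
            }

            /* the transport ports are folded in the same way when present */

            return (u16)(hash % dev->real_num_tx_queues);
    }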
* mac80211: Reimplement WME using ->select_queue().    David S. Miller    2008-07-17
| | | | | | | | | | The only behavior change is that we do not drop packets under any circumstances. If that is absolutely needed, we could easily add it back. With cleanups and help from Johannes Berg. Signed-off-by: David S. Miller <davem@davemloft.net>
* netdev: Add netdev->select_queue() method.    David S. Miller    2008-07-17
| | | | | | | | | | | | | | Devices or device layers can set this to control the queue selection performed by dev_pick_tx(). This function runs under RCU protection, which allows overriding functions to have some way of synchronizing with things like dynamic ->real_num_tx_queues adjustments. This makes the spinlock prefetch in dev_queue_xmit() a little bit less effective, but that's the price right now for correctness. Signed-off-by: David S. Miller <davem@davemloft.net>
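Roughly how a device or layer might use the new hook; the function and setup names below are illustrative, not taken from a real driver:

    /* map the skb priority onto one of the hardware rings; the result
     * must stay below dev->real_num_tx_queues */
    static u16 my_select_queue(struct net_device *dev, struct sk_buff *skb)
    {
            return skb->priority % dev->real_num_tx_queues;
    }

    static void my_setup(struct net_device *dev)
    {
            /* dev_pick_tx() calls this under RCU when it is non-NULL */
            dev->select_queue = my_select_queue;
    }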
* netdev: netdev_priv() can now be sane again.    David S. Miller    2008-07-17
| | | | | | | | | | The private area of a netdev is now at a fixed offset once more. Unfortunately, some assumptions that netdev_priv() == netdev->priv crept back into the tree. In particular this happened in the loopback driver. Make it use netdev->ml_priv. Signed-off-by: David S. Miller <davem@davemloft.net>
* netdev: Kill struct net_device_subqueue and netdev->egress_subqueue*    David S. Miller    2008-07-17
| | | | | | No longer used. Signed-off-by: David S. Miller <davem@davemloft.net>
* net: Use queue aware tests throughout.    David S. Miller    2008-07-17
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This effectively "flips the switch" by making the core networking and multiqueue-aware drivers use the new TX multiqueue structures. Non-multiqueue drivers need no changes. The interfaces they use such as netif_stop_queue() degenerate into an operation on TX queue zero. So everything "just works" for them. Code that really wants to do "X" to all TX queues now invokes a routine that does so, such as netif_tx_wake_all_queues(), netif_tx_stop_all_queues(), etc. pktgen and netpoll required a little bit more surgery than the others. In particular the pktgen changes, whilst functional, could be largely improved. The initial check in pktgen_xmit() will sometimes check the wrong queue, which is mostly harmless. The thing to do is probably to invoke fill_packet() earlier. The bulk of the netpoll changes is to make the code operate solely on the TX queue indicated by the SKB queue mapping. Setting of the SKB queue mapping is entirely confined inside of net/core/dev.c:dev_pick_tx(). If we end up needing any kind of special semantics (drops, for example) it will be implemented here. Finally, we now have a "real_num_tx_queues" which is where the driver indicates how many TX queues are actually active. With IGB changes from Jeff Kirsher. Signed-off-by: David S. Miller <davem@davemloft.net>
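The "do X to all TX queues" helpers mentioned above boil down to a loop over the per-device queue array; a hedged sketch of the shape (the real versions live in include/linux/netdevice.h):

    static inline void example_tx_stop_all_queues(struct net_device *dev)
    {
            unsigned int i;

            for (i = 0; i < dev->num_tx_queues; i++) {
                    struct netdev_queue *txq = netdev_get_tx_queue(dev, i);

                    netif_tx_stop_queue(txq);
            }
    }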
* mac80211: Temporarily mark QoS support BROKEN.    David S. Miller    2008-07-17
| | | | | | We will undo this after a few changesets. Signed-off-by: David S. Miller <davem@davemloft.net>
* pkt_sched: Remove RR scheduler.    David S. Miller    2008-07-17
| | | | | | | | | | | | | | | This actually fixes a bug added by the RR scheduler changes. The ->bands and ->prio2band parameters were being set outside of the sch_tree_lock() and thus could result in strange behavior and inconsistencies. It might be possible, in the new design (where there will be one qdisc per device TX queue) to allow similar functionality via a TX hash algorithm for RR but I really see no reason to export this aspect of how these multiqueue cards actually implement the scheduling of the individual DMA TX rings and the single physical MAC/PHY port. Signed-off-by: David S. Miller <davem@davemloft.net>
* netdev: Kill NETIF_F_MULTI_QUEUE.    David S. Miller    2008-07-17
| | | | | | | There is no need for a feature bit for something that can be tested by simply checking the TX queue count. Signed-off-by: David S. Miller <davem@davemloft.net>
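The test the message refers to reduces to a queue-count check; a hedged one-liner with an illustrative helper name:

    static inline bool example_is_multiqueue(const struct net_device *dev)
    {
            return dev->num_tx_queues > 1;
    }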
* netdev: Allocate multiple queues for TX.    David S. Miller    2008-07-17
| | | | | | | | | | | | | | | | alloc_netdev_mq() now allocates an array of netdev_queue structures for TX, based upon the queue_count argument. Furthermore, all accesses to the TX queues are now vectored through the netdev_get_tx_queue() and netdev_for_each_tx_queue() interfaces. This makes it easy to grep the tree for all things that want to get to a TX queue of a net device. Problem spots which are not really multiqueue aware yet, and only work with one queue, can easily be spotted by grepping for all netdev_get_tx_queue() calls that pass in a zero index. Signed-off-by: David S. Miller <davem@davemloft.net>
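A hedged usage sketch of the new accessor; my_xmit_on_ring and struct my_priv are illustrative. A multiqueue driver would first allocate its queues with something like alloc_etherdev_mq(sizeof(struct my_priv), 4):

    static int my_xmit_on_ring(struct sk_buff *skb, struct net_device *dev,
                               unsigned int ring_index)
    {
            /* reach a specific TX queue only through the new accessor */
            struct netdev_queue *txq = netdev_get_tx_queue(dev, ring_index);

            if (netif_tx_queue_stopped(txq))
                    return NETDEV_TX_BUSY;

            /* ... place skb on the hardware ring backing txq ... */
            return NETDEV_TX_OK;
    }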
* igb: Kill CONFIG_NETDEVICES_MULTIQUEUE references, no longer exists.    David S. Miller    2008-07-17
| | | | Signed-off-by: David S. Miller <davem@davemloft.net>
* garp: retry sending JoinIn messages after allocation failures    Patrick McHardy    2008-07-16
| | | | | | | | Increase reliability by retrying to send JoinIn messages after memory allocation failures on each TRANSMIT_PDU event until it succeeds. Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
* core: add stat to track unresolved discards in neighbor cache    Neil Horman    2008-07-16
| | | | | | | | | | | | | | In __neigh_event_send(), if we have a neighbour entry which is in NUD_INCOMPLETE state, we enqueue any outbound frames to that neighbour on the neighbour's arp_queue, which by default is capped at a length of 3 skbs. If that queue would exceed its set length, an skb already on the queue is dropped to make room for the newly arrived skb. This results in a drop for which no statistic is incremented. This patch adds an unresolved_discards stat to /proc/net/stat/ndisc_cache to track these lost frames. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
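A sketch of the behaviour described above, not the literal __neigh_event_send() code; field names follow the commit message:

    /* arp_queue already holds queue_len skbs: drop one that is queued,
     * count the drop, then enqueue the newly arrived skb */
    if (skb_queue_len(&neigh->arp_queue) >= neigh->parms->queue_len) {
            struct sk_buff *old = __skb_dequeue(&neigh->arp_queue);

            kfree_skb(old);
            NEIGH_CACHE_STAT_INC(neigh->tbl, unresolved_discards);
    }
    __skb_queue_tail(&neigh->arp_queue, skb);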
* mib: add net to NET_ADD_STATS_USER    Pavel Emelyanov    2008-07-16
| | | | | | | | | Done with NET_XXX_STATS macros :) To be continued... Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_ADD_STATS_BH    Pavel Emelyanov    2008-07-16
| | | | | | | | | | | | This one is tricky. The thing is that this macro is only used when killing tw buckets, but since this killer is promiscuous wrt which net each particular tw belongs to, I have to use it only when NET_NS is off. When the net namespaces are on, I use INET_INC_STATS_BH for each bucket. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_INC_STATS_USER    Pavel Emelyanov    2008-07-16
| | | | | Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_INC_STATS_BH    Pavel Emelyanov    2008-07-16
| | | | | Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to NET_INC_STATS    Pavel Emelyanov    2008-07-16
| | | | | Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: replace tcp_sock argument with sock in some places    Pavel Emelyanov    2008-07-16
| | | | | | | | | These places have a tcp_sock, but we'd prefer the sock itself so we can get the net from it. Fortunately, the tcp_sk macro is just a type cast, so this replacement is really cheap. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* inet: prepare net on the stack for NET accounting macros    Pavel Emelyanov    2008-07-16
| | | | | Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* sock: add net to prot->enter_memory_pressure callback    Pavel Emelyanov    2008-07-16
| | | | | | | | | | | | tcp_enter_memory_pressure() calls NET_INC_STATS, but has nowhere to get the net from. I decided to add an sk argument, rather than the net itself, so that all the required sock_net(sk) calls are factored inside the enter_memory_pressure callback itself. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
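A hedged sketch of what the callback ends up looking like once it takes the socket; the MIB name shown is the usual memory-pressure counter and may not match this tree exactly:

    void tcp_enter_memory_pressure(struct sock *sk)
    {
            if (!tcp_memory_pressure) {
                    NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMEMORYPRESSURES);
                    tcp_memory_pressure = 1;
            }
    }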
* mib: add net to TCP_ADD_STATS_USER    Pavel Emelyanov    2008-07-16
| | | | | | | Now we're done with the TCP_XXX_STATS macros. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to TCP_DEC_STATS    Pavel Emelyanov    2008-07-16
| | | | | Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to TCP_INC_STATS_BH    Pavel Emelyanov    2008-07-16
| | | | | | | | | Same as before - the sock is always there to get the net from, but there are also some places with the net already saved on the stack. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to TCP_INC_STATS    Pavel Emelyanov    2008-07-16
| | | | | | | Fortunately (almost) all the TCP code has a sock to get the net from :) Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
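What the change looks like at a typical call site; TCP_MIB_ACTIVEOPENS is just one representative counter and sk is assumed to be in scope:

    /* before this series: the macro implied a single, global namespace */
    TCP_INC_STATS(TCP_MIB_ACTIVEOPENS);

    /* after: the caller names the namespace explicitly */
    TCP_INC_STATS(sock_net(sk), TCP_MIB_ACTIVEOPENS);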
* tcp: add net to tcp_mib_init    Pavel Emelyanov    2008-07-16
| | | | | | | | | | This one sets TCP MIBs after zeroing them, and thus requires the net. The existing single caller can use init_net (temporarily). Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: drop unused TCP_XXX_STATS macros    Pavel Emelyanov    2008-07-16
| | | | | | | TCP_INC_STATS_USER and TCP_ADD_STATS_BH are currently unused. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* inet: prepare struct net for TCP MIB accounting    Pavel Emelyanov    2008-07-16
| | | | | | | | | This is the same as the first patch in the set, but preparing the net for TCP_XXX_STATS - save the struct net on the stack where required and possible. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to IP_ADD_STATS_BH    Pavel Emelyanov    2008-07-16
| | | | | | | | Very simple - only ip_evictor (fragments) requires this. This patch wraps up the IP_XXX_STATS patching. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to IP_INC_STATS_BH    Pavel Emelyanov    2008-07-16
| | | | | Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: add net to IP_INC_STATS    Pavel Emelyanov    2008-07-16
| | | | | | | | All the callers already have either the net itself or a place to get it from. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* mib: drop unused IP_INC_STATS_USER    Pavel Emelyanov    2008-07-16
| | | | | Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* ipv4: prepare net initialization for IP accounting    Pavel Emelyanov    2008-07-16
| | | | | | | | | | | Some places that deal with IP statistics already have somewhere to get a struct net from, but use it directly without declaring a separate variable on the stack. So, save this net on the stack for the future IP_XXX_STATS macros. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
* netdrv intel: always enable VLAN filtering except in promiscuous mode    Patrick McHardy    2008-07-16
| | | | | | | | | | | | Currently VLAN filtering is enabled when the first VLAN is added. Obviously before that there's no point in receiving any VLAN packets. Now that we disable VLAN filtering in promiscuous mode, we can keep the VLAN filters enabled the rest of the time. Signed-off-by: Patrick McHardy <kaber@trash.net> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* netdrv intel: disable VLAN filtering in promiscuous mode    Patrick McHardy    2008-07-16
| | | | | | | | | | | | | | | As discussed in this thread: http://www.mail-archive.com/netdev@vger.kernel.org/msg53976.html promiscuous mode means to disable *all* filters. Currently only unicast and multicast filtering is disabled. This patch changes all Intel drivers to also disable VLAN filtering. Signed-off-by: Patrick McHardy <kaber@trash.net> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* net/ipv4/tcp.c: Fix use of PULLHUP instead of POLLHUP in comments.    Will Newton    2008-07-16
| | | | | | | | Change PULLHUP to POLLHUP in tcp_poll comments and clean up another comment for grammar and coding style. Signed-off-by: Will Newton <will.newton@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* net: make __skb_splice_bits static    Harvey Harrison    2008-07-16
| | | | | | | net/core/skbuff.c:1335:5: warning: symbol '__skb_splice_bits' was not declared. Should it be static? Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'stealer/ipvs/sync-daemon-cleanup-for-next' of git://git.stealer.net/linux-2.6    David S. Miller    2008-07-16
|\
| * ipvs: Use schedule_timeout_interruptible() instead of msleep_interruptible()    Sven Wegener    2008-07-16
| | | | | | | | | | | | | | | | So that kthread_stop() can wake up the thread and we don't have to wait one second in the worst case for the daemon to actually stop. Signed-off-by: Sven Wegener <sven.wegener@stealer.net> Acked-by: Simon Horman <horms@verge.net.au>
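A hedged sketch of the resulting loop shape; the work function is an illustrative placeholder. kthread_stop() wakes the thread, so the interruptible timeout returns immediately instead of the thread sleeping out a full second:

    while (!kthread_should_stop()) {
            process_one_sync_round();       /* illustrative placeholder */
            schedule_timeout_interruptible(HZ);
    }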
| * ipvs: Put backup thread on mcast socket wait queue    Sven Wegener    2008-07-16
| | | | | | | | | | | | | | | | | | Instead of doing an endless loop with sleeping for one second, we now put the backup thread onto the mcast socket wait queue and it gets woken up as soon as we have data to process. Signed-off-by: Sven Wegener <sven.wegener@stealer.net> Acked-by: Simon Horman <horms@verge.net.au>
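A hedged sketch of the waiting side; sk_sleep() is the present-day spelling of the socket's wait-queue head, so the original patch may reference it differently:

    wait_event_interruptible(*sk_sleep(sock->sk),
                             !skb_queue_empty(&sock->sk->sk_receive_queue) ||
                             kthread_should_stop());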
| * ipvs: Use kthread_run() instead of doing a double-fork via kernel_thread()    Sven Wegener    2008-07-16
| | | | | | | | | | | | | | This also moves the setup code out of the daemons, so that we're able to return proper error codes to user space. The current code will return success to user space when the daemon is started with an invalid mcast interface. With these changes we get an appropriate "No such device" error. We no longer need our own completion to be sure the daemons are actually running, because they no longer contain code that can fail and kthread_run() takes care of the rest. Signed-off-by: Sven Wegener <sven.wegener@stealer.net> Acked-by: Simon Horman <horms@verge.net.au>
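A hedged sketch of the kthread_run() pattern adopted here; the thread function and name string are illustrative, not necessarily the ones used in ip_vs_sync.c:

    struct task_struct *task;

    task = kthread_run(sync_thread_fn, data, "ipvs-sync");
    if (IS_ERR(task))
            return PTR_ERR(task);   /* e.g. -ENODEV for a bad mcast interface */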
| * ipvs: Use ERR_PTR for returning errors from make_receive_sock() and make_send_sock()    Sven Wegener    2008-07-16
| | | | | | | | | | | | | | | | | | | | The additional information we now return to the caller is currently not used, but will be used to return errors to user space. Signed-off-by: Sven Wegener <sven.wegener@stealer.net> Acked-by: Simon Horman <horms@verge.net.au>
| * ipvs: Initialize mcast addr at compile time    Sven Wegener    2008-07-16
| | | | | | | | | | | | | | There's no need to do it at runtime, the values are constant. Signed-off-by: Sven Wegener <sven.wegener@stealer.net> Acked-by: Simon Horman <horms@verge.net.au>