path: root/net
Commit message (Author, Date)
* [PATCH] coverity: sunrpc/xprt task null check (KAMBAROV, ZAUR, 2005-07-07)
  In __xprt_lock_write() we check to see if `task' is NULL, but in other
  places we just go and dereference it. `task' shouldn't be NULL anyway, so
  remove this test. This defect was found automatically by Coverity Prevent,
  a static analysis tool.
  Signed-off-by: Zaur Kambarov <zkambarov@coverity.com>
  Acked-by: Trond Myklebust <trond.myklebust@fys.uio.no>
  Cc: Neil Brown <neilb@cse.unsw.edu.au>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [TCP]: Never TSO defer under periods of congestion. (David S. Miller, 2005-07-05)
  Congestion window recovery after loss depends upon the fact that if we
  have a full MSS sized frame at the head of the send queue, we will send
  it. TSO deferral can defeat the ACK clocking necessary to exit cleanly
  from recovery.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [PKT_SCHED]: Blackhole queueing discipline (Thomas Graf, 2005-07-05)
  Useful in combination with classful qdiscs to drop or temporarily disable
  certain flows; e.g. one could block specific ds flows with dsmark. Unlike
  the noop qdisc it can be controlled by the user, and statistics
  accounting is done.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>
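A minimal sketch of what such a qdisc's packet path looks like, assuming the 2.6-era Qdisc callback signatures (the function bodies mirror the behaviour described above, not necessarily the literal patch):

    /* Hedged kernel-style sketch, assuming 2.6-era struct Qdisc and
     * qdisc_drop(): accept every packet, drop it, account the drop. */
    static int blackhole_enqueue(struct sk_buff *skb, struct Qdisc *sch)
    {
            qdisc_drop(skb, sch);   /* frees skb, bumps drop statistics */
            return NET_XMIT_SUCCESS;
    }

    static struct sk_buff *blackhole_dequeue(struct Qdisc *sch)
    {
            return NULL;            /* nothing ever leaves a blackhole */
    }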
* [TCP]: Move to new TSO segmenting scheme. (David S. Miller, 2005-07-05)
  Make TSO segment transmit size decisions at send time, not earlier. The
  basic scheme is that we try to build as large a TSO frame as possible
  when pulling in the user data, but the size of the TSO frame output to
  the card is determined at transmit time. This is guided by
  tp->xmit_size_goal. It is always set to a multiple of MSS and tells
  sendmsg/sendpage how large an SKB to try and build. Later,
  tcp_write_xmit() and tcp_push_one() chop up the packet if necessary and
  conditions warrant. These routines can also decide to "defer" in order
  to wait for more ACKs to arrive and thus allow larger TSO frames to be
  emitted.
  A general observation is that TSO elongates the pipe, thus requiring a
  larger congestion window and larger buffering, especially at the sender
  side. Therefore, it is important that applications 1) get a large enough
  socket send buffer (this is accomplished by our dynamic send buffer
  expansion code) and 2) do large enough writes.
  Signed-off-by: David S. Miller <davem@davemloft.net>
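For illustration, rounding a build goal down to a multiple of MSS looks like this (a standalone sketch; the 65535-byte cap is an assumed example, not a value from the patch):

    /* Standalone sketch: keep the sendmsg/sendpage build goal a
     * multiple of the current MSS, never below one MSS. */
    static unsigned int size_goal(unsigned int mss_now)
    {
            unsigned int goal = 65535;      /* assumed per-skb upper bound */

            if (goal > mss_now)
                    goal -= goal % mss_now; /* round down to multiple of MSS */
            else
                    goal = mss_now;
            return goal;
    }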
* [TCP]: Break out send buffer expansion test. (David S. Miller, 2005-07-05)
  This makes it easier to understand, and allows easier tweaking of the
  heuristic later on.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Do not call tcp_tso_acked() if no work to do. (David S. Miller, 2005-07-05)
  In tcp_clean_rtx_queue(), if the TSO packet is not even partially acked,
  do not waste time calling tcp_tso_acked().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Kill bogus comment above tcp_tso_acked(). (David S. Miller, 2005-07-05)
  Everything stated there is out of date. tcp_trim_skb() does adjust the
  available socket send buffer space and skb->truesize now.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Fix send-side cpu utilization regression. (David S. Miller, 2005-07-05)
  Only put user data purely to pages when doing TSO. The extra page
  allocations cause two problems: 1) they add the overhead of the page
  allocations themselves, and 2) they make us do small user copies when we
  get to the end of the TCP socket cache page. It is still beneficial to
  purely use pages for TSO, so we will do it for that case.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Eliminate redundant computations in tcp_write_xmit(). (David S. Miller, 2005-07-05)
  tcp_snd_test() is run for every packet output by a single call to
  tcp_write_xmit(), but this is not necessary. For one, the congestion
  window space only needs to be calculated once, then used throughout the
  duration of the loop. This cleanup also makes experimenting with
  different TSO packetization schemes much easier.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Break out tcp_snd_test() into its constituent parts. (David S. Miller, 2005-07-05)
  tcp_snd_test() does several different things; use inline functions to
  express this more clearly.
  1) It initializes the TSO count of the SKB, if necessary.
  2) It performs the Nagle test.
  3) It makes sure the congestion window is adhered to.
  4) It makes sure the SKB fits into the send window.
  This cleanup also sets things up so that things like the available
  packets in the congestion window do not need to be calculated multiple
  times by packet sending loops such as tcp_write_xmit().
  Signed-off-by: David S. Miller <davem@davemloft.net>
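A hedged sketch of how such a split can look; the helper names and simplified integer types are illustrative, not the literal kernel functions:

    /* Nagle test: a sub-MSS segment is held back only while other
     * data is still in flight (and nonagle is not forcing a push). */
    static inline int nagle_allows_send(unsigned int skb_len,
                                        unsigned int mss,
                                        unsigned int in_flight,
                                        int nonagle)
    {
            if (nonagle || skb_len >= mss)
                    return 1;
            return in_flight == 0;
    }

    /* Congestion window test: how many more packets may enter the pipe? */
    static inline unsigned int cwnd_quota(unsigned int cwnd,
                                          unsigned int in_flight)
    {
            return in_flight >= cwnd ? 0 : cwnd - in_flight;
    }

    /* Send window test: does the segment end inside the receiver's
     * announced window? (Sequence-number wraparound ignored here.) */
    static inline int fits_send_window(unsigned int end_seq,
                                       unsigned int wnd_end)
    {
            return end_seq <= wnd_end;
    }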
* [TCP]: Fix __tcp_push_pending_frames() 'nonagle' handling. (David S. Miller, 2005-07-05)
  'nonagle' should be passed to the tcp_snd_test() function as
  'TCP_NAGLE_PUSH' if we are checking an SKB not at the tail of the
  write_queue. This is because Nagle does not apply to such frames, since
  we cannot possibly tack more data onto them. However, while doing this,
  __tcp_push_pending_frames() makes all of the packets in the write_queue
  use this modified 'nonagle' value. Fix the bug and simplify this
  function by just calling tcp_write_xmit() directly if sk_send_head is
  non-NULL. As a result, we can now make tcp_data_snd_check() just call
  tcp_push_pending_frames() instead of the specialized
  __tcp_data_snd_check().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Fix redundant calculations of tcp_current_mss() (David S. Miller, 2005-07-05)
  tcp_write_xmit() uses tcp_current_mss(), but some of its callers, namely
  __tcp_push_pending_frames(), already have this value available. While
  we're here, fix the "cur_mss" argument to be "unsigned int" instead of
  plain "unsigned".
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: tcp_write_xmit() tabbing cleanup (David S. Miller, 2005-07-05)
  Put the main basic block of work at the top level of tabbing, and mark
  the TCP_CLOSE test with unlikely().
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Kill extra cwnd validate in __tcp_push_pending_frames(). (David S. Miller, 2005-07-05)
  The tcp_cwnd_validate() function should only be invoked if we actually
  send some frames, yet __tcp_push_pending_frames() will always invoke it.
  tcp_write_xmit() does the call for us, so the call here can simply be
  removed. Also, tcp_write_xmit() can be marked static.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Add missing skb_header_release() call to tcp_fragment(). (David S. Miller, 2005-07-05)
  When we add any new packet to the TCP socket write queue, we must call
  skb_header_release() on it in order for the TSO sharing checks in the
  drivers to work.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Move __tcp_data_snd_check into tcp_output.c (David S. Miller, 2005-07-05)
  It reimplements portions of tcp_snd_check(), so if we move it to
  tcp_output.c we can consolidate its logic much more easily in a later
  change.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Move send test logic out of net/tcp.h (David S. Miller, 2005-07-05)
  This just moves the code into tcp_output.c; no code logic changes are
  made by this patch. Using this as a baseline, we can begin to untangle
  the mess of comparisons for the Nagle test et al. We will also be able
  to reduce all of the redundant computation that occurs when outputting
  data packets.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Fix quick-ack decrementing with TSO. (David S. Miller, 2005-07-05)
  On each packet output, we call tcp_dec_quickack_mode() if the ACK flag
  is set. It drops tp->ack.quick until it hits zero, at which time we
  deflate the ATO value. When doing TSO, we are emitting multiple packets
  with ACK set, so we should decrement tp->ack.quick by that many
  segments.
  Note that, unlike this case, tcp_enter_cwr() should not take
  tcp_skb_pcount(skb) into consideration. That function, one time,
  readjusts tp->snd_cwnd and moves into TCP_CA_CWR state.
  Signed-off-by: David S. Miller <davem@davemloft.net>
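The per-segment decrement reduces to a saturating subtraction, roughly (a standalone sketch with illustrative names):

    /* Spend the quick-ack budget once per segment a TSO skb carries,
     * rather than once per skb; saturate at zero. */
    static void dec_quickack(unsigned int *quick, unsigned int pcount)
    {
            *quick = (*quick >= pcount) ? *quick - pcount : 0;
    }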
* [TCP]: Simplify SKB data portion allocation with NETIF_F_SG. (David S. Miller, 2005-07-05)
  The ideal and most optimal layout for an SKB when doing scatter-gather
  is to put all the headers at skb->data, and all the user data in the
  page array. This makes SKB splitting and combining extremely simple,
  especially before a packet goes onto the wire the first time. So, when
  sk_stream_alloc_pskb() is given a zero size, make sure there is no
  skb_tailroom(). This is achieved by applying SKB_DATA_ALIGN() to the
  header length used here. Next, make select_size() in TCP output
  segmentation use a length of zero when NETIF_F_SG is true on the
  outgoing interface.
  Signed-off-by: David S. Miller <davem@davemloft.net>
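The select_size() decision described here boils down to the following (a simplified sketch; the real function handles more cases):

    /* With scatter-gather hardware, request zero linear data so the
     * headers stay at skb->data and user data lands in the page array. */
    static int select_size(int dev_has_sg, int mss_now)
    {
            return dev_has_sg ? 0 : mss_now;
    }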
* [NET]: improve readability of dev_set_promiscuity() in net/core/dev.c (David Chau, 2005-07-05)
  A trivial patch to improve the readability of dev_set_promiscuity() in
  net/core/dev.c. The new code does exactly the same thing as the
  original code.
  Signed-off-by: David Chau <ddcc@mit.edu>
  Signed-off-by: Domen Puncer <domen@coderock.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV4]: More broken memory allocation fixes for fib_trie (Robert Olsson, 2005-07-05)
  Below is a patch to preallocate memory when resizing the trie
  (inflate/halve). If preallocation fails, it just skips the resize of
  this tnode for this time. The oops we got when killing bgpd (with full
  routing) is now gone. Patrick's memory patch is also used.
  Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
  Signed-off-by: David S. Miller <davem@davemloft.net>
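The preallocate-then-commit pattern described above has roughly this shape (a sketch with hypothetical helper names, not the fib_trie code):

    /* Sketch: acquire everything the resize needs before touching the
     * trie; on failure, keep the old node and retry at a later resize. */
    static struct tnode *try_inflate(struct tnode *tn)
    {
            struct tnode *bigger = tnode_alloc_double(tn); /* hypothetical */

            if (!bigger)
                    return tn;              /* ENOMEM: skip this resize */

            tnode_copy_children(bigger, tn);               /* hypothetical */
            tnode_free(tn);
            return bigger;
    }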
* [DECNET]: Fix memset overflow on 64bit archs while dumping decnet routing rules (Thomas Graf, 2005-07-05)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV4]: Bug fix in rt_check_expire() (Eric Dumazet, 2005-07-05)
  rt_check_expire() fixes: an overflow occurred if the size of the hash
  was >= 65536. Reminder of the bugfix: rt_check_expire() has a serious
  problem on machines with large route caches and a standard HZ value of
  1000. With default values, i.e. ip_rt_gc_interval = 60*HZ = 60000, the
  loop count in

      for (t = ip_rt_gc_interval << rt_hash_log; t >= 0;

  overflows (t is a 31-bit value) as soon as rt_hash_log is >= 16 (65536
  slots in the route cache hash table). In this case, rt_check_expire()
  does nothing at all.
  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
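The overflow is easy to reproduce in isolation (a standalone demonstration of the arithmetic, not the kernel fix; signed overflow is formally undefined behaviour and is shown here only to illustrate the wrap):

    #include <stdio.h>

    int main(void)
    {
            int gc_interval = 60 * 1000;    /* 60*HZ with HZ=1000 */
            int hash_log = 16;              /* 65536 hash buckets */

            /* 60000 << 16 = 3,932,160,000 exceeds INT_MAX, so the
             * 31-bit result wraps negative and "t >= 0" fails at once. */
            int t = gc_interval << hash_log;
            long long t_wide = (long long)gc_interval << hash_log;

            printf("31-bit t: %d\n", t);            /* negative after wrap */
            printf("64-bit t: %lld\n", t_wide);     /* 3932160000 */
            return 0;
    }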
* [IPV4]: Use the fancy alloc_large_system_hash() function for route hash table (Eric Dumazet, 2005-07-05)
  The rt hash table is now allocated using the alloc_large_system_hash()
  function.
  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Hashed spinlocks in net/ipv4/route.c (Eric Dumazet, 2005-07-05)
  - Locking abstraction
  - Spinlocks moved out of the rt hash table: 50% less memory used by the
    rt hash table; it's a win even on UP.
  - Sizing of the spinlock table depends on NR_CPUS
  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
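The bucket-to-lock mapping follows a standard pattern; here is a self-contained user-space sketch with an assumed table size (the patch sizes the table from NR_CPUS instead):

    #include <pthread.h>

    #define RT_LOCK_SZ 256  /* assumed power-of-two size; initialise each
                             * entry with pthread_spin_init() before use */

    static pthread_spinlock_t rt_locks[RT_LOCK_SZ];

    /* Many hash buckets share one lock: index the small lock table
     * with the low bits of the bucket number. */
    static pthread_spinlock_t *lock_for_bucket(unsigned int bucket)
    {
            return &rt_locks[bucket & (RT_LOCK_SZ - 1)];
    }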
* [IPV4]: Handle large allocations in fib_trie (Patrick McHardy, 2005-07-05)
  Inflating a node a couple of times makes it exceed the 128k kmalloc
  limit. Use __get_free_pages for allocations > PAGE_SIZE, as in fib_hash.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Acked-by: Robert Olsson <Robert.Olsson@data.slu.se>
  Signed-off-by: David S. Miller <davem@davemloft.net>
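The fib_hash-style allocator being reused looks roughly like this (a kernel-style sketch; the wrapper name is illustrative):

    /* kmalloc() has a slab-imposed ceiling (128k here), so fall back
     * to whole pages for anything larger than PAGE_SIZE. */
    static void *trie_alloc(size_t size)
    {
            if (size <= PAGE_SIZE)
                    return kmalloc(size, GFP_KERNEL);
            return (void *)__get_free_pages(GFP_KERNEL, get_order(size));
    }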
* [IPV6]: Makes IPv6 rcv registration happen last during initialisation. (Herbert Xu, 2005-07-05)
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV4]: Fix crash in ip_rcv while booting related to netconsole (Herbert Xu, 2005-07-05)
  Makes IPv4 ip_rcv registration happen last in af_inet.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [PKT_SCHED]: Report rate estimator configuration errors during qdisc allocation (Thomas Graf, 2005-07-05)
  Current behaviour is to not report an error if a rate estimator is
  created together with a qdisc and the configuration of the rate
  estimator is bogus. This leads to unexpected behaviour because the user
  is not notified. New behaviour is to report the error and let the whole
  qdisc creation operation fail, so the user is able to fix his mistake.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [PKT_SCHED]: Cleanup qdisc creation and alignment macros (Thomas Graf, 2005-07-05)
  Adds qdisc_alloc() to share code between qdisc_create() and
  qdisc_create_dflt(). Hides the qdisc alignment behind macros and makes
  use of them.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Remove unused security member in sk_buff (Thomas Graf, 2005-07-05)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: net/core/filter.c: make len cover the entire packet (Patrick McHardy, 2005-07-05)
  As suggested by Herbert Xu: since we don't require anything to be in the
  linear packet range anymore, make len cover the entire packet.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Consolidate common code in net/core/filter.c (Patrick McHardy, 2005-07-05)
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Remove redundant code in net/core/filter.c (Patrick McHardy, 2005-07-05)
  skb_header_pointer handles linear and non-linear data; no need to handle
  linear data again.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NETFILTER]: Fix connection tracking bug in 2.6.12 (Patrick McHardy, 2005-06-28)
  In 2.6.12 we started dropping the conntrack reference when a packet
  leaves the IP layer. This broke connection tracking on a bridge, because
  bridge-netfilter defers calling some NF_IP_* hooks to the bridge layer
  for locally generated packets going out a bridge, where the conntrack
  reference is no longer available. This patch keeps the reference in this
  case as a temporary solution; long term we will remove the deferred hook
  calling. No attempt is made to drop the reference in the bridge code
  when it is no longer needed, as tc actions could already have sent the
  packet anywhere.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Micro optimization in eth_header() (Denis Vlasenko, 2005-06-28)
  Signed-off-by: Denis Vlasenko <vda@ilport.com.ua>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV6]: remove more unused IPV6_AUTHHDR things. (YOSHIFUJI Hideaki, 2005-06-28)
  Remove two more unused IPV6_AUTHHDR option things, which I failed to
  remove last time, plus mark IPV6_AUTHHDR obsolete.
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPVS]: Close race conditions on ip_vs_conn_tab list modification (Neil Horman, 2005-06-28)
  In an SMP system, it is possible for a connection timer to expire,
  calling ip_vs_conn_expire while the connection table is being flushed,
  before ct_write_lock_bh is acquired. Since the list iterator loop in
  ip_vs_conn_flush releases and re-acquires the spinlock (even though it
  doesn't re-enable softirqs), it is possible for the expiration function
  to modify the connection list while it is being traversed in
  ip_vs_conn_flush. The result is that the next pointer gets set to NULL,
  and subsequently dereferenced, resulting in an oops.
  Signed-off-by: Neil Horman <nhorman@redhat.com>
  Acked-by: Julian Anastasov
  Signed-off-by: David S. Miller <davem@davemloft.net>
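The unsafe traversal pattern being fixed looks roughly like this (a self-contained user-space sketch of the race, using a mutex in place of the kernel spinlock):

    #include <pthread.h>

    struct conn { struct conn *next; /* ... payload ... */ };

    static pthread_mutex_t tab_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct conn *conn_list;

    /* Broken pattern: releasing the lock mid-walk opens a window in
     * which an expire handler can unlink entries, so c->next may be
     * freed or NULL by the time the walk resumes. */
    static void flush_conns(void)
    {
            struct conn *c;

            pthread_mutex_lock(&tab_lock);
            for (c = conn_list; c != NULL; c = c->next) {
                    pthread_mutex_unlock(&tab_lock); /* race window opens */
                    /* ... per-entry work done without the lock ... */
                    pthread_mutex_lock(&tab_lock);   /* c may be gone now */
            }
            pthread_mutex_unlock(&tab_lock);
    }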
* [IPV4]: Broken memory allocation in fib_trie (Robert Olsson, 2005-06-28)
  This should help with the insertion... but the resize is more crucial
  and complex, and needs some thinking.
  Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [SCTP] Make init & delayed sack timeouts configurable by user. (Vlad Yasevich, 2005-06-28)
  Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
  Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV4]: ipconfig.c: fix dhcp timeout behaviour (Maxime Bizon, 2005-06-28)
  I think there is a small bug in ipconfig.c in case IPCONFIG_DHCP is set
  and DHCP is used. When a DHCPOFFER is received, the IP address is kept
  until we get a DHCPACK. If no ack is received, ic_dynamic() returns
  negatively but leaves the offered IP address in ic_myaddr. This makes
  the main loop in ip_auto_config() break and use the possibly incomplete
  configuration. Not sure if it's the best way to do it, but the following
  trivial patch corrects this.
  Signed-off-by: Maxime Bizon <mbizon@freebox.fr>
  Signed-off-by: David S. Miller <davem@davemloft.net>
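The fix amounts to discarding the offered address on the failure path; as a fragment sketch (ic_myaddr and NONE follow the ipconfig code described above, got_dhcpack and the surrounding control flow are assumed):

    /* Sketch of the failure path: forget the DHCPOFFER address so
     * ip_auto_config() cannot mistake it for a complete configuration.
     * NONE is ipconfig's "no address" value; got_dhcpack is assumed. */
    if (!got_dhcpack) {
            ic_myaddr = NONE;
            return -1;
    }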
* [IPV4]: Snmpv2 Mib IP counter ipInAddrErrors support (Dietmar Eggemann, 2005-06-28)
  I followed Thomas' proposal to see every martian destination as a case
  where the ipInAddrErrors counter has to be incremented. There are two
  advantages to doing so: (1) the relation between the ipInReceives
  counter and all the other ipInXXX counters is more accurate in the case
  where the RTN_UNICAST code check fails, and (2) it makes the code in
  ip_route_input_slow easier.
  Signed-off-by: Dietmar Eggemann <dietmar.eggemann@gmx.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPV6]: Don't dump temporary addresses twice (YOSHIFUJI Hideaki, 2005-06-28)
  Each IPv6 Temporary Address (w/ CONFIG_IPV6_PRIVACY) is dumped twice to
  netlink. Because temporary addresses are listed in idev->addr_list,
  there's no need to dump idev->tempaddr separately.
  Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NETLINK]: Missing padding fields in dumped structures (Patrick McHardy, 2005-06-28)
  Plug holes with padding fields and initialize them to zero.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NETLINK]: Missing initializations in dumped data (Patrick McHardy, 2005-06-28)
  Mostly missing initialization of padding fields of 1 or 2 bytes length;
  two instances of an uninitialized nlmsgerr->msg of 16 bytes length.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
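The usual remedy for such holes is to zero the whole payload before filling individual fields; a self-contained sketch with an assumed struct layout:

    #include <string.h>

    struct example_msg {            /* assumed example layout */
            unsigned char  family;
            unsigned char  pad1;    /* padding that must not leak */
            unsigned short pad2;
            unsigned int   index;
    };

    static void fill_msg(struct example_msg *m, unsigned int index)
    {
            memset(m, 0, sizeof(*m)); /* clear padding before the struct
                                       * is copied out to userspace */
            m->family = 2;            /* e.g. AF_INET, for illustration */
            m->index  = index;
    }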
* [NETLINK]: Clear padding in netlink messages (Patrick McHardy, 2005-06-28)
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NETFILTER]: ipt_CLUSTERIP: fix ARP mangling (Harald Welte, 2005-06-28)
  This patch adds mangling of ARP requests (in addition to replies), since
  ARP caches are made from snooping both requests and replies.
  Signed-off-by: Harald Welte <laforge@netfilter.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [EBTABLES]: Fix thinkos in ebt_log.c (David S. Miller, 2005-06-28)
  When converting over to skb_header_pointer(), I converted parts of this
  module incorrectly. Kill the 'u' union in ebt_log() and all the bogus
  references to it.
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [IPVS]: Fix for overflows (pageexec, 2005-06-26)
  From: <pageexec@freemail.hu>
  $subject was fixed in 2.4 already; 2.6 needs it as well. The impact of
  the bugs is a kernel stack overflow and privilege escalation from
  CAP_NET_ADMIN via the IP_VS_SO_SET_STARTDAEMON/IP_VS_SO_GET_DAEMON
  ioctls. People running with 'root=all caps' (i.e., most users) are not
  really affected (there's nothing to escalate), but SELinux and similar
  users should take it seriously if they grant CAP_NET_ADMIN to other
  users.
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [NETLINK]: Fix two socket hashing bugs. (David S. Miller, 2005-06-26)
  1) netlink_release() should only decrement the hash entry count if the
  socket was actually hashed. This was causing hash->entries to underflow,
  which resulted in all kinds of troubles. On 64-bit systems, this would
  cause the following conditional to erroneously trigger:

      err = -ENOMEM;
      if (BITS_PER_LONG > 32 && unlikely(hash->entries >= UINT_MAX))
              goto err;

  2) netlink_autobind() needs to propagate the error return from
  netlink_insert(). Otherwise, callers will not see the error as they
  should and thus try to operate on a socket with a zero pid, which is
  very bad. However, it should not propagate -EBUSY. If two threads race
  to autobind the socket, that is fine. This is consistent with the
  autobind behavior in other protocols.
  So bug #1 above, combined with this one, resulted in hangs on
  netlink_sendmsg() calls to the rtnetlink socket. We'd try to do the user
  sendmsg() with the socket's pid set to zero; later we do a socket lookup
  using that pid (via the value we stashed away in NETLINK_CB(skb).pid),
  but that won't give us the user socket, it will give us the rtnetlink
  socket. So when we try to wake up the receive queue, we dive back into
  rtnetlink_rcv(), which tries to recursively take the rtnetlink
  semaphore.
  Thanks to Jakub Jelinek for providing backtraces. Also, thanks to
  Herbert Xu for supplying debugging patches to help track this down, and
  also for finding a mistake in an earlier version of this fix.
  Signed-off-by: David S. Miller <davem@davemloft.net>
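The second fix reduces to the following shape (an illustrative fragment, not the literal patch):

    /* Sketch of fix #2: propagate real insert failures, but tolerate
     * -EBUSY, which only means another thread won the autobind race. */
    err = netlink_insert(sk, pid);
    if (err == -EBUSY)
            err = 0;
    if (err)
            return err;  /* never continue with an unbound (pid 0) socket */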