* uml: convert network device to netdevice ops (Stephen Hemminger, 2009-03-27)
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* uml: convert network device to internal network device stats (Stephen Hemminger, 2009-03-27)
    Convert the UML network device to use internal network device stats.
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* 3c503, smc-ultra: netdev_ops bugs (Stephen Hemminger, 2009-03-27)
    A couple of drivers have leftovers from netdev ops conversion.
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* smsc911x: enforce read-after-write timing restriction on eeprom access (Steve Glendinning, 2009-03-27)
    The LAN911x datasheet specifies a minimum delay of 45 ns between a write of E2P_DATA and any read. This patch adds a single dummy read of BYTE_TEST to enforce this timing constraint.
    Signed-off-by: Steve Glendinning <steve.glendinning@smsc.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
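    A minimal sketch of the pattern described, assuming the driver's usual register accessors and register names (smsc911x_reg_write(), smsc911x_reg_read(), E2P_DATA, BYTE_TEST); the actual call site in the patch may differ:

        /* After writing E2P_DATA, one dummy read of BYTE_TEST ensures at
         * least 45 ns pass before the next EEPROM-related register read. */
        static void example_e2p_write(struct smsc911x_data *pdata, u32 val)
        {
            smsc911x_reg_write(pdata, E2P_DATA, val);
            smsc911x_reg_read(pdata, BYTE_TEST);    /* result intentionally discarded */
        }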
* gianfar: fix headroom expansion code (Stephen Hemminger, 2009-03-27)
    The code that was added to increase headroom was wrong: it doesn't handle the case where gfar_add_fcb() changes the skb. Better to do the check at the start of transmit (outside the lock), where error handling is better anyway.
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* af_rose/x25: Sanity check the maximum user frame size (Alan Cox, 2009-03-27)
    Otherwise we can wrap the sizes and end up sending garbage. Closes #10423.
    Signed-off-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
    Signed-off-by: David S. Miller <davem@davemloft.net>
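    As a hedged sketch, the shape of such a check in a sendmsg() path (the exact bound and error code used in the real patch may differ):

        /* Refuse user frames whose length could wrap the size arithmetic
         * once protocol headers are added. */
        if (len > 65535)
            return -EMSGSIZE;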
* appletalk: this warning can go I think (Alan Cox, 2009-03-27)
    It's past 2.2 ...
    Signed-off-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* benet: use do_div() for 64 bit divide (Stephen Hemminger, 2009-03-27)
    The benet driver is doing a 64-bit divide, which is not supported by the Linux kernel on 32-bit architectures. The correct way to do this is to use do_div(). Compile tested on i386 only.
    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
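    For reference, a small sketch of the do_div() idiom (the function and variable names are illustrative, not taken from the benet driver):

        #include <linux/types.h>
        #include <asm/div64.h>

        /* do_div(n, base) divides the u64 "n" in place by a 32-bit "base"
         * and returns the remainder; a plain 64-bit "/" would emit a call
         * to __udivdi3, which the kernel does not provide on 32-bit. */
        static u64 bytes_per_second(u64 total_bytes, u32 seconds)
        {
            do_div(total_bytes, seconds);   /* quotient is left in total_bytes */
            return total_bytes;
        }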
* xfrm: spin_lock() should be spin_unlock() in xfrm_state.c (Chuck Ebbert, 2009-03-27)
    spin_lock() should be spin_unlock() in xfrm_state_walk_done(). Caused by commit 12a169e7d8f4b1c95252d8b04ed0f1033ed7cfe2 ("ipsec: Put dumpers on the dump list").
    Reported-by: Marc Milgram <mmilgram@redhat.com>
    Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
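    A hedged sketch of the shape of the fix (the real xfrm_state_walk_done() also handles the walker's list entry; lock and field names follow net/xfrm/xfrm_state.c of that era but are not quoted from the patch):

        void xfrm_state_walk_done(struct xfrm_state_walk *walk)
        {
            spin_lock_bh(&xfrm_state_lock);
            list_del(&walk->all);
            spin_unlock_bh(&xfrm_state_lock);   /* was mistakenly a second lock call */
        }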
* ipv6: Plug sk_buff leak in ipv6_rcv (net/ipv6/ip6_input.c) (Jesper Nilsson, 2009-03-27)
    Commit 778d80be52699596bf70e0eb0761cf5e1e46088d ("ipv6: Add disable_ipv6 sysctl to disable IPv6 operation on specific interface") seems to have introduced a leak of sk_buffs for ipv6 traffic, at least in some configurations where idev is NULL, or when ipv6 is disabled via sysctl.
    The problem is that if the first condition of the if-statement returns non-NULL, it returns an skb with only one reference, and when the other conditions apply, execution jumps to the "out" label, which does not call kfree_skb for it.
    To plug this leak, change to use the "drop" label instead (this relies on it being ok to call kfree_skb on NULL). This also allows us to avoid calling rcu_read_unlock here, and removes the only user of the "out" label.
    Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
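    A minimal sketch of the error-path pattern being described; process_packet() is a hypothetical stand-in for the rest of the receive path:

        static int example_rcv(struct sk_buff *skb, struct inet6_dev *idev)
        {
            if (!idev || unlikely(idev->cnf.disable_ipv6))
                goto drop;              /* was "goto out", which leaked the skb */

            return process_packet(skb); /* hypothetical success path */

        drop:
            kfree_skb(skb);             /* kfree_skb(NULL) is a safe no-op */
            return NET_RX_DROP;
        }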
* net: Add support for the OpenCores 10/100 Mbps Ethernet MAC. (Thierry Reding, 2009-03-27)
    This patch adds a platform device driver that supports the OpenCores 10/100 Mbps Ethernet MAC. The driver expects three resources: one IORESOURCE_MEM resource defines the memory region for the core's memory-mapped registers while a second IORESOURCE_MEM resource defines the network packet buffer space. The third resource, of type IORESOURCE_IRQ, associates an interrupt with the driver.
    Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
    Acked-by: Florian Fainelli <florian@openwrt.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
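    A hypothetical board-file sketch of how such a device might be declared; the addresses, IRQ number and platform device name are placeholders, not values taken from the patch:

        #include <linux/platform_device.h>

        static struct resource oc_eth_resources[] = {
            /* MAC register window */
            { .start = 0x92000000, .end = 0x920007ff, .flags = IORESOURCE_MEM },
            /* packet buffer RAM */
            { .start = 0x92001000, .end = 0x92001fff, .flags = IORESOURCE_MEM },
            /* interrupt line */
            { .start = 10,         .end = 10,         .flags = IORESOURCE_IRQ },
        };

        static struct platform_device oc_eth_device = {
            .name          = "ethoc",   /* assumed driver name */
            .id            = -1,
            .resource      = oc_eth_resources,
            .num_resources = ARRAY_SIZE(oc_eth_resources),
        };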
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/kaber/nf-next-2.6 (David S. Miller, 2009-03-27)
  * ctnetlink: compute generic part of event more accurately (Holger Eitzenberger, 2009-03-26)
      On a box with most of the optional Netfilter switches turned off, some of the NLAs are never sent, e.g. secmark, mark or the conntrack byte/packet counters. As a worst case scenario this may still lead to ctnetlink skbs being reallocated in netlink_trim() later, losing all the nice effects of the previous patches. I try to solve that (at least partly) by correctly #ifdef'ing the NLAs in the computation.
      Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  * netfilter: nf_conntrack: calculate per-protocol nlattr size (Holger Eitzenberger, 2009-03-25)
      Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  * netfilter: nf_conntrack: add generic function to get len of generic policy (Holger Eitzenberger, 2009-03-25)
      Useful for all protocols which do not add additional data, such as GRE or UDPlite.
      Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  * netfilter: ctnetlink: allocate right-sized ctnetlink skb (Holger Eitzenberger, 2009-03-25)
      Try to allocate a Netlink skb roughly the size of the actual message, with help from the l3 and l4 protocol helpers. This is all to prevent a reallocation in netlink_trim() later. The overhead of allocating the right-sized skb is rather small, with ctnetlink_alloc_skb() actually being inlined away on my x86_64 box. The size of the per-proto space is determined at registration time of the protocol helper.
      Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  * netfilter: nf_conntrack: use SLAB_DESTROY_BY_RCU and get rid of call_rcu() (Eric Dumazet, 2009-03-25)
      Use the "hlist_nulls" infrastructure we added in 2.6.29 for the RCUification of UDP & TCP. This permits an easy conversion from call_rcu()-based hash lists to SLAB_DESTROY_BY_RCU ones.
      Avoiding the call_rcu() delay at nf_conn freeing time has numerous gains:
      - It doesn't fill RCU queues (up to 10000 elements per cpu). This reduces the possibility of OOM if queued elements are not taken into account, and reduces latency problems when the RCU queue size hits its high limit and triggers emergency mode.
      - It allows fast reuse of just-freed elements, permitting better use of CPU cache.
      - We delete rcu_head from "struct nf_conn", shrinking the size of this structure by 8 or 16 bytes.
      This patch only takes care of "struct nf_conn". call_rcu() is still used for less critical conntrack parts, which may be converted later if necessary.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
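      A condensed, hedged sketch of the lookup discipline SLAB_DESTROY_BY_RCU imposes (modelled on the hlist_nulls pattern mentioned above; field and helper names follow nf_conntrack but details are simplified):

          static struct nf_conntrack_tuple_hash *
          sketch_find_get(struct net *net, const struct nf_conntrack_tuple *tuple,
                          unsigned int hash)
          {
              struct nf_conntrack_tuple_hash *h;
              const struct hlist_nulls_node *n;

          begin:
              hlist_nulls_for_each_entry_rcu(h, n, &net->ct.hash[hash], hnnode) {
                  if (!nf_ct_tuple_equal(tuple, &h->tuple))
                      continue;
                  /* The object may be freed and reused at any time, so take
                   * the reference conditionally and re-check the tuple. */
                  if (!atomic_inc_not_zero(&nf_ct_tuplehash_to_ctrack(h)->ct_general.use))
                      continue;                   /* entry is being freed */
                  if (unlikely(!nf_ct_tuple_equal(tuple, &h->tuple))) {
                      nf_ct_put(nf_ct_tuplehash_to_ctrack(h));
                      goto begin;                 /* entry was recycled */
                  }
                  return h;
              }
              /* A mismatching "nulls" end marker means we were moved onto
               * another chain during the walk: restart. */
              if (get_nulls_value(n) != hash)
                  goto begin;
              return NULL;
          }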
  * netfilter: {ip,ip6,arp}_tables: fix incorrect loop detection (Patrick McHardy, 2009-03-25)
      Commit e1b4b9f ("[NETFILTER]: {ip,ip6,arp}_tables: fix exponential worst-case search for loops") introduced a regression in the loop detection algorithm, causing sporadic incorrectly detected loops.
      When a chain has already been visited during the check, it is treated as having a standard target containing a RETURN verdict directly at the beginning in order to not check it again. The real target of the first rule is then incorrectly treated as a STANDARD target and checked not to contain invalid verdicts.
      Fix by making sure the rule does actually contain a standard target.
      Based on patch by Francis Dupont <Francis_Dupont@isc.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  * netfilter: limit the length of the helper name (Holger Eitzenberger, 2009-03-25)
      This is necessary in order to have an upper bound for Netlink message calculation, which is not a problem at all, as there are no helpers with a longer name.
      Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  * netlink: add nla_policy_len() (Holger Eitzenberger, 2009-03-25)
      It calculates the max. length of a Netlink policy, which is useful for allocating Netlink buffers roughly the size of the actual message.
      Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
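      A hedged usage sketch: sizing a message buffer from a policy array (the ct_nla_policy/CTA_MAX pairing here is illustrative of how the ctnetlink patches use it, not a quote from them):

          /* nla_policy_len() returns the worst-case payload length of all
           * attributes described by a policy array, so the sender can size
           * the skb before filling it in. */
          static int example_msg_size(void)
          {
              return NLMSG_ALIGN(sizeof(struct nfgenmsg))
                     + nla_policy_len(ct_nla_policy, CTA_MAX + 1);
          }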
  * netfilter: ctnetlink: add callbacks to the per-proto nlattrs (Holger Eitzenberger, 2009-03-25)
      A single callback is added for the l3 proto helper. The two callbacks for the l4 protos are necessary because of the general structure of a ctnetlink event, which is, in short:
        CTA_TUPLE_ORIG
          <l3/l4-proto-attributes>
        CTA_TUPLE_REPLY
          <l3/l4-proto-attributes>
        CTA_ID
        ...
        CTA_PROTOINFO
          <l4-proto-attributes>
        CTA_TUPLE_MASTER
          <l3/l4-proto-attributes>
      Therefore the formula is:
        size := sizeof(generic-nlas) + 3 * sizeof(tuple_nlas) + sizeof(protoinfo_nlas)
      Some of the NLAs are optional, e.g. CTA_TUPLE_MASTER, which is only set if it's an expected connection. But the number of optional NLAs is small enough to prevent netlink_trim() from reallocating if calculated properly.
      Signed-off-by: Holger Eitzenberger <holger@eitzenberger.org>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
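      The same formula expressed as a hedged code sketch; the three helpers are illustrative stand-ins for the generic, tuple and per-protocol size callbacks the patch wires up:

          /* size := generic NLAs
           *         + 3 * tuple NLAs (CTA_TUPLE_ORIG, _REPLY, _MASTER)
           *         + per-l4-protocol CTA_PROTOINFO NLAs */
          static size_t example_ctnetlink_msg_size(const struct nf_conn *ct)
          {
              return sketch_generic_size()        /* illustrative helper */
                     + 3 * sketch_tuple_size(ct)  /* illustrative helper */
                     + sketch_protoinfo_size(ct); /* illustrative helper */
          }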
  * netfilter: factorize ifname_compare() (Eric Dumazet, 2009-03-25)
      We use the same non-trivial helper function in four places; we can factorize it.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
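      The helper in question compares interface names a word at a time under a mask; roughly (a sketch, not necessarily the exact merged version):

          /* Compare two IFNAMSIZ-sized interface names one unsigned long at
           * a time, applying the per-byte mask; a zero result means they
           * match wherever the mask is set. Assumes IFNAMSIZ is a multiple
           * of sizeof(unsigned long), as it is on Linux. */
          static unsigned long ifname_compare(const char *_a, const char *_b,
                                              const unsigned char *_mask)
          {
              const unsigned long *a = (const unsigned long *)_a;
              const unsigned long *b = (const unsigned long *)_b;
              const unsigned long *mask = (const unsigned long *)_mask;
              unsigned long ret = 0;
              int i;

              for (i = 0; i < IFNAMSIZ / sizeof(unsigned long); i++)
                  ret |= (a[i] ^ b[i]) & mask[i];

              return ret;
          }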
  * netfilter: nf_conntrack: use hlist_add_head_rcu() in nf_conntrack_set_hashsize() (Eric Dumazet, 2009-03-25)
      Using hlist_add_head() in nf_conntrack_set_hashsize() is quite dangerous: without any barrier, one CPU could see a loop while doing its lookup. It's true the new table cannot be seen by another CPU, but the previous table is still readable.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
  * netfilter: fix xt_LED build failure (Patrick McHardy, 2009-03-25)
      net/netfilter/xt_LED.c:40: error: field 'netfilter_led_trigger' has incomplete type
      net/netfilter/xt_LED.c: In function 'led_timeout_callback':
      net/netfilter/xt_LED.c:78: warning: unused variable 'ledinternal'
      net/netfilter/xt_LED.c: In function 'led_tg_check':
      net/netfilter/xt_LED.c:102: error: implicit declaration of function 'led_trigger_register'
      net/netfilter/xt_LED.c: In function 'led_tg_destroy':
      net/netfilter/xt_LED.c:135: error: implicit declaration of function 'led_trigger_unregister'
      Fix by adding a dependency on LED_TRIGGERS.
      Reported-by: Sachin Sant <sachinp@in.ibm.com>
      Tested-by: Subrata Modak <tosubrata@gmail.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
* GRO: Disable GRO on legacy netif_rx path (Herbert Xu, 2009-03-27)
    When I fixed the GRO crash in the legacy receive path I used napi_complete to replace __napi_complete. Unfortunately they're not the same when NETPOLL is enabled, which may result in us not calling __napi_complete at all.
    What's more, we really do need to keep the __napi_complete call within the IRQ-off section since in theory an IRQ can occur in between and fill up the backlog to the maximum, causing us to lock up.
    Since we can't seem to find a fix that works properly right now, this patch reverts all the GRO support from the netif_rx path.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6 (Linus Torvalds, 2009-03-26)
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
      slob: fix lockup in slob_free()
      slub: use get_track()
      slub: rename calculate_min_partial() to set_min_partial()
      slub: add min_partial sysfs tunable
      slub: move min_partial to struct kmem_cache
      SLUB: Fix default slab order for big object sizes
      SLUB: Do not pass 8k objects through to the page allocator
      SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants
      slob: clean up the code
      SLUB: Use ->objsize from struct kmem_cache_cpu in slab_free()
  * Merge branches 'topic/slob/cleanups', 'topic/slob/fixes', 'topic/slub/core', 'topic/slub/cleanups' and 'topic/slub/perf' into for-linus (Pekka Enberg, 2009-03-24)
    * SLUB: Fix default slab order for big object sizes (Zhang Yanmin, 2009-02-20)
        The default order of kmalloc-8192 on a 2*4 Stoakley box is an issue with calculate_order:
          slab_size   order   name
          -------------------------------------------------
          4096        3       sgpool-128
          8192        2       kmalloc-8192
          16384       3       kmalloc-16384
        kmalloc-8192's default order is smaller than sgpool-128's. On a 4*4 Tigerton machine, a similar issue appears on another kmem_cache.
        Function calculate_order uses 'min_objects /= 2;' to shrink. Combined with the size calculation/checking in slab_order, the above issue sometimes appears. The patch below, against 2.6.29-rc2, fixes it. I checked the default orders of all kmem_caches and they don't become smaller than before, so the patch shouldn't hurt performance.
        Signed-off-by: Zhang Yanmin <yanmin.zhang@linux.intel.com>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    * SLUB: Do not pass 8k objects through to the page allocator (Pekka Enberg, 2009-02-20)
        Increase the maximum object size in SLUB so that 8k objects are not passed through to the page allocator anymore. The network stack uses 8k objects for performance-critical operations.
        The patch is motivated by a SLAB vs. SLUB regression in the netperf benchmark. The problem is that the kfree(skb->head) call in skb_release_data() is subject to page allocator pass-through, as the size passed to __alloc_skb() is larger than 4 KB in this test.
        As explained by Yanmin Zhang: "I use the 2.6.29-rc2 kernel to run netperf UDP-U-4k CPU_NUM client/server pair loopback testing on x86-64 machines. Comparing with SLUB, SLAB's result is about 2.3 times SLUB's. After applying the reverting patch, the result difference between SLUB and SLAB becomes 1%, which we might consider as fluctuation."
        [ penberg@cs.helsinki.fi: fix oops in kmalloc() ]
        Reported-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
        Tested-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
        Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    * SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants (Christoph Lameter, 2009-02-20)
        As a preparational patch to bump up the page allocator pass-through threshold, introduce two new constants, SLUB_MAX_SIZE and SLUB_PAGE_SHIFT, and convert mm/slub.c to use them.
        Reported-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
        Tested-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
        Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
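        A hedged sketch of what the two constants and the pass-through check look like; the values shown reflect the follow-up patch above that raises the threshold to 8 KB, and the helper is illustrative:

            /* Largest size served from kmalloc caches; bigger requests go
             * straight to the page allocator. */
            #define SLUB_MAX_SIZE   (2 * PAGE_SIZE)

            /* Upper bound on the kmalloc cache array (one cache per power
             * of two up to SLUB_MAX_SIZE). */
            #define SLUB_PAGE_SHIFT (PAGE_SHIFT + 2)

            /* Illustrative helper, not part of mm/slub.c. */
            static inline bool slub_is_page_allocator_passthrough(size_t size)
            {
                return size > SLUB_MAX_SIZE;
            }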
    * slub: use get_track() (Akinobu Mita, 2009-03-23)
        Use get_track() in set_track().
        Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
        Cc: Christoph Lameter <cl@linux-foundation.org>
        Cc: Pekka Enberg <penberg@cs.helsinki.fi>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    * SLUB: Use ->objsize from struct kmem_cache_cpu in slab_free() (Pekka Enberg, 2009-01-14)
        There's no reason to use ->objsize from struct kmem_cache in slab_free() for the SLAB_DEBUG_OBJECTS case. All it does is generate extra cache pressure as we try very hard not to touch struct kmem_cache in the fast-path.
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    * slub: rename calculate_min_partial() to set_min_partial() (David Rientjes, 2009-02-25)
        As suggested by Christoph Lameter, rename calculate_min_partial() to set_min_partial() as the function doesn't really do any calculations.
        Cc: Christoph Lameter <cl@linux-foundation.org>
        Signed-off-by: David Rientjes <rientjes@google.com>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    * slub: add min_partial sysfs tunable (David Rientjes, 2009-02-23)
        Now that a cache's min_partial has been moved to struct kmem_cache, it's possible to easily tune it from userspace by adding a sysfs attribute.
        It may not be desirable to keep a large number of partial slabs around if a cache is used infrequently and memory, especially when constrained by a cgroup, is scarce. It's better to allow userspace to set the minimum policy per cache instead of relying explicitly on kmem_cache_shrink().
        The memory savings from simply moving min_partial from struct kmem_cache_node to struct kmem_cache is obviously not significant (unless maybe you're from SGI or something); at the largest it's
          # allocated caches * (MAX_NUMNODES - 1) * sizeof(unsigned long)
        The true savings occurs when userspace reduces the number of partial slabs that would otherwise be wasted, especially on machines with a large number of nodes (ia64 with CONFIG_NODES_SHIFT at 10 for default?). While the kernel estimates ideal values for n->min_partial and ensures they are within a sane range, userspace has no input other than writing to /sys/kernel/slab/cache/shrink. There simply isn't any better heuristic for calculating the partial values that works for all possible caches, and since it's currently a static value, the user really has no way of reclaiming that wasted space, which can be significant when constrained by a cgroup (either cpusets or, later, memory controller slab limits) without shrinking it entirely.
        This also allows the user to specify that increased fragmentation and more partial slabs are actually desired, to avoid the cost of allocating new slabs at runtime for specific caches.
        There's also no reason why this should be a per-struct kmem_cache_node value in the first place. You could argue that a machine would have such node size asymmetries that it should be specified on a per-node basis, but we know nobody is doing that right now since it's a purely static value at the moment and there's no convenient way to tune it via slub's sysfs interface.
        Cc: Christoph Lameter <cl@linux-foundation.org>
        Signed-off-by: David Rientjes <rientjes@google.com>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    * slub: move min_partial to struct kmem_cache (David Rientjes, 2009-02-23)
        Although it allows for better cacheline use, it is unnecessary to save a copy of the cache's min_partial value in each kmem_cache_node.
        Cc: Christoph Lameter <cl@linux-foundation.org>
        Signed-off-by: David Rientjes <rientjes@google.com>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    * slob: fix lockup in slob_free() (Nick Piggin, 2009-03-23)
        Don't hold the SLOB lock when freeing the page; this reduces the lock hold width. See the following thread for discussion of the bug: http://marc.info/?l=linux-kernel&m=123709983214143&w=2
        Reported-by: Ingo Molnar <mingo@elte.hu>
        Acked-by: Matt Mackall <mpm@selenic.com>
        Signed-off-by: Nick Piggin <npiggin@suse.de>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    * slob: clean up the code (Américo Wang, 2009-01-19)
        - Use NULL instead of plain 0;
        - Rename slob_page() to is_slob_page();
        - Define slob_page() to convert void* to struct slob_page*;
        - Rename slob_new_page() to slob_new_pages();
        - Define slob_free_pages() accordingly.
        Compile tests only.
        Signed-off-by: WANG Cong <wangcong@zeuux.org>
        Signed-off-by: Matt Mackall <mpm@selenic.com>
        Cc: Christoph Lameter <cl@linux-foundation.org>
        Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k (Linus Torvalds, 2009-03-26)
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k:
      m68k: irq_node.handler() should return irqreturn_t
      m68k: section mismatch fixes: Atari SCSI
      m68k: section mismatch fixes: DMAsound for Atari
      MAINTAINERS: Replace dead link to m68k CVS repository by link to new git repository
      m68k: mac - Add SWIM floppy support
      m68k: mac - Add a new entry in mac_model to identify the floppy controller type.
      m68k: Add install target
  * m68k: irq_node.handler() should return irqreturn_t (Geert Uytterhoeven, 2009-03-26)
      Commit b5dc7840b3ebe9c7967dd8ba73db957767009ff9 ("m68k: introduce irq controller") reverted the return type of struct irq_node.handler() from irqreturn_t to int. Change it back to irqreturn_t, else it will give a compiler warning when irqreturn_t is turned into an enum in the near future:
        | arch/m68k/kernel/ints.c:231: warning: assignment from incompatible pointer type
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  * m68k: section mismatch fixes: Atari SCSI (Michael Schmitz, 2009-03-26)
      Add __init annotations to probe routines.
      Signed-off-by: Michael Schmitz <schmitz@debian.org>
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  * m68k: section mismatch fixes: DMAsound for Atari (Michael Schmitz, 2009-03-26)
      Add __initdata to the driver presets struct.
      Signed-off-by: Michael Schmitz <schmitz@debian.org>
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  * MAINTAINERS: Replace dead link to m68k CVS repository by link to new git repository (Geert Uytterhoeven, 2009-03-26)
      CVS is dead, long live git!
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  * m68k: mac - Add SWIM floppy support (Laurent Vivier, 2009-03-26)
      It allows reading data from a floppy and ejecting it (useful on our Macs without an eject button), but not writing to it.
      Signed-off-by: Laurent Vivier <Laurent@lvivier.info>
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  * m68k: mac - Add a new entry in mac_model to identify the floppy controller type. (Laurent Vivier, 2009-03-26)
      This patch adds a field "floppy_type" which can take the following values:
        MAC_FLOPPY_IWM          for an IWM based mac
        MAC_FLOPPY_SWIM_ADDR1   for a SWIM based mac with controller at VIA1 + 0x1E000
        MAC_FLOPPY_SWIM_ADDR2   for a SWIM based mac with controller at VIA1 + 0x16000
        MAC_FLOPPY_IOP          for an IOP based mac
        MAC_FLOPPY_AV           for an AV based mac
      Signed-off-by: Laurent Vivier <Laurent@lvivier.info>
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  * m68k: Add install target (Laurent Vivier, 2009-03-26)
      This patch enables the use of "make install" on the m68k architecture to copy the kernel to /boot.
      Signed-off-by: Laurent Vivier <Laurent@lvivier.info>
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
* Merge branch 'bkl-removal' of git://git.lwn.net/linux-2.6 (Linus Torvalds, 2009-03-26)
    * 'bkl-removal' of git://git.lwn.net/linux-2.6:
      Rationalize fasync return values
      Move FASYNC bit handling to f_op->fasync()
      Use f_lock to protect f_flags
      Rename struct file->f_ep_lock
  * Rationalize fasync return values (Jonathan Corbet, 2009-03-16)
      Most fasync implementations do something like:
        return fasync_helper(...);
      But fasync_helper() will return a positive value at times - a feature used in at least one place. Thus, a number of other drivers do:
        err = fasync_helper(...);
        if (err < 0)
            return err;
        return 0;
      In the interests of consistency and more concise code, it makes sense to map positive return values onto zero where ->fasync() is called.
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Jonathan Corbet <corbet@lwn.net>
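      As a sketch, the mapping at the call site looks roughly like this (the surrounding function is illustrative, not lifted from fs/fcntl.c):

          /* Call the driver's ->fasync() and collapse fasync_helper()'s
           * "positive means the list changed" convention into plain success. */
          static int example_call_fasync(int fd, struct file *filp, int on)
          {
              int error = 0;

              if (filp->f_op && filp->f_op->fasync)
                  error = filp->f_op->fasync(fd, filp, on);

              return error < 0 ? error : 0;
          }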
  * Move FASYNC bit handling to f_op->fasync() (Jonathan Corbet, 2009-03-16)
      Removing the BKL from FASYNC handling ran into the challenge of keeping the setting of the FASYNC bit in filp->f_flags atomic with regard to calls to the underlying fasync() function. Andi Kleen suggested moving the handling of that bit into fasync(); this patch does exactly that.
      As a result, we have a couple of internal API changes: fasync() must now manage the FASYNC bit, and it will be called without the BKL held.
      As it happens, every fasync() implementation in the kernel, with one exception, calls fasync_helper(). So, if we make fasync_helper() set the FASYNC bit, we can avoid making any changes to the other fasync() functions - as long as those functions, themselves, have proper locking. Most fasync() implementations do nothing but call fasync_helper() - which has its own lock - so they are easily verified as correct. The BKL had already been pushed down into the rest.
      The networking code has its own version of fasync_helper(), so that code has been augmented with explicit FASYNC bit handling.
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Cc: David Miller <davem@davemloft.net>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jonathan Corbet <corbet@lwn.net>
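      A condensed sketch of the new responsibility (the real fasync_helper() also maintains the fasync_struct list; the locking shown uses the f_lock introduced by the patches below):

          /* fasync_helper() now keeps FASYNC in filp->f_flags in sync with
           * whether the file is on the fasync list, so callers no longer
           * need the BKL for that bit. */
          static void example_update_fasync_flag(struct file *filp, int on)
          {
              spin_lock(&filp->f_lock);
              if (on)
                  filp->f_flags |= FASYNC;
              else
                  filp->f_flags &= ~FASYNC;
              spin_unlock(&filp->f_lock);
          }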
  * Use f_lock to protect f_flags (Jonathan Corbet, 2009-03-16)
      Traditionally, changes to struct file->f_flags have been done under BKL protection, or with no protection at all. This patch causes all f_flags changes after file open/creation time to be done under protection of f_lock. This allows the removal of some BKL usage and fixes a number of longstanding (if microscopic) races.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Signed-off-by: Jonathan Corbet <corbet@lwn.net>
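      A condensed sketch of the resulting F_SETFL-style update (error handling omitted; SETFL_MASK is the usual fs/fcntl.c mask of flags changeable after open):

          static void example_setfl(struct file *filp, unsigned int arg)
          {
              spin_lock(&filp->f_lock);
              filp->f_flags = (arg & SETFL_MASK) | (filp->f_flags & ~SETFL_MASK);
              spin_unlock(&filp->f_lock);
          }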
  * Rename struct file->f_ep_lock (Jonathan Corbet, 2009-03-16)
      This lock moves out of the CONFIG_EPOLL ifdef and becomes f_lock. For now, epoll remains the only user, but a future patch will use it to protect f_flags as well.
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jonathan Corbet <corbet@lwn.net>