path: root/drivers/net/ethernet/calxeda
* net: calxedaxgmac: ip align receive buffers (Rob Herring, 2012-11-07)
  On gcc 4.7, we will get alignment traps in the ip stack if we don't align the ip headers on receive. The h/w can support this, so use ip aligned allocations.
  Cut down the unnecessary padding on the allocation. The buffer can start on any byte alignment, but the size including the beginning offset must be 8 byte aligned. So the h/w buffer size must include the NET_IP_ALIGN offset.
  Thanks to Eric Dumazet for the initial patch highlighting the padding issues.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Acked-by: Eric Dumazet <edumazet@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
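  A minimal sketch of the allocation sizing this entry describes, assuming the standard netdev_alloc_skb_ip_align() helper; the size calculation and function name are illustrative, not the driver's actual code:

    #include <linux/kernel.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Sketch: the h/w buffer size must cover the NET_IP_ALIGN offset and
     * round the total up to an 8-byte multiple; the skb itself is then
     * allocated IP-aligned so the IP header lands on a 4-byte boundary. */
    static struct sk_buff *rx_alloc_sketch(struct net_device *dev, int mtu)
    {
            unsigned int bufsz = ALIGN(mtu + ETH_HLEN + ETH_FCS_LEN +
                                       NET_IP_ALIGN, 8);

            return netdev_alloc_skb_ip_align(dev, bufsz);
    }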
* net: calxedaxgmac: rework transmit ring handling (Rob Herring, 2012-11-07)
  Only generate tx interrupts on every ring size / 4 descriptors. Move the netif_stop_queue call to the end of the xmit function rather than checking at the beginning.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
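  A hedged sketch of the two changes described above: requesting a completion interrupt only every ring_size / 4 descriptors, and stopping the queue at the end of xmit. The ring structure, field names, and flag are illustrative, not the driver's own:

    #include <linux/netdevice.h>
    #include <linux/types.h>

    struct tx_ring_sketch {
            unsigned int head, tail, size;   /* illustrative bookkeeping */
    };

    #define DESC_FLAG_IRQ_ON_COMPLETION 0x1  /* placeholder flag */

    static void tx_queue_desc_sketch(struct tx_ring_sketch *ring,
                                     u32 *desc_flags, struct net_device *dev)
    {
            unsigned int entry = ring->head % ring->size;

            /* Ask for a tx interrupt only every size/4 descriptors instead
             * of on every frame. */
            if ((entry % (ring->size / 4)) == 0)
                    *desc_flags |= DESC_FLAG_IRQ_ON_COMPLETION;

            ring->head++;

            /* Stop the queue once the descriptor is queued, rather than
             * checking for space at the start of xmit. */
            if (ring->head - ring->tail >= ring->size - 1)
                    netif_stop_queue(dev);
    }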
* net: calxedaxgmac: drop some unnecessary register writes (Rob Herring, 2012-11-07)
  The interrupts have already been cleared, so we don't need to clear them again. Also, we could miss interrupts if they are cleared again before the packet has been processed.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: calxedaxgmac: use raw i/o accessors in rx and tx paths (Rob Herring, 2012-11-07)
  The standard readl/writel accessors involve a spinlock and cache sync operation on ARM platforms with an outer cache. Only DMA triggering accesses need this, so use the raw variants instead in the critical paths. The relaxed variants would be more appropriate, but don't exist on all arches.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
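  For illustration, the distinction this commit draws between the ordered and raw MMIO accessors; the register offset below is a placeholder, not a real XGMAC register:

    #include <linux/io.h>
    #include <linux/types.h>

    #define SOME_STATUS_REG 0x0     /* placeholder offset */

    static u32 fast_path_read(void __iomem *base)
    {
            /* readl() adds ordering/outer-cache maintenance on some ARM
             * platforms; __raw_readl() skips it for accesses that do not
             * trigger DMA. */
            return __raw_readl(base + SOME_STATUS_REG);
    }

    static void dma_kick_write(void __iomem *base, unsigned long off, u32 val)
    {
            /* Accesses that start DMA keep the fully ordered accessor. */
            writel(val, base + off);
    }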
* net: calxedaxgmac: remove explicit rx dma buffer polling (Rob Herring, 2012-11-07)
  New received frames will trigger the rx DMA to poll the DMA descriptors, so there is no need to tell the h/w to poll. We also want to enable dropping frames from the fifo when there is no buffer.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: calxedaxgmac: enable operate on 2nd frame mode (Rob Herring, 2012-11-07)
  Enable the tx dma to start reading the next frame while sending the current frame.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: remove skb recycling (Eric Dumazet, 2012-10-07)
  Over time, the skb recycling infrastructure got little interest and many bugs. Generic rx path skb allocation now uses page fragments for efficient GRO / TCP coalescing, and recycling a tx skb for the rx path is not worth the pain.
  The last identified bug is that fat skbs can be recycled and can end up using high order pages after a few iterations.
  With help from Maxime Bizon, who pointed out that commit 87151b8689d (net: allow pskb_expand_head() to get maximum tailroom) introduced this regression for recycled skbs.
  Instead of fixing this bug, let's remove skb recycling. Drivers wanting really hot skbs should use build_skb() anyway, to allocate/populate the sk_buff right before netif_receive_skb().
  Signed-off-by: Eric Dumazet <edumazet@google.com>
  Cc: Maxime Bizon <mbizon@freebox.fr>
  Signed-off-by: David S. Miller <davem@davemloft.net>
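  A minimal sketch of the build_skb() approach this commit recommends for hot rx paths; the buffer handling and function name are illustrative, not any driver's actual code:

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Sketch: wrap an already-DMA-filled buffer in an sk_buff right before
     * it is handed to the stack, instead of recycling tx skbs for rx. */
    static struct sk_buff *rx_build_skb_sketch(void *buf, unsigned int buf_len,
                                               unsigned int pkt_len,
                                               struct net_device *dev)
    {
            struct sk_buff *skb = build_skb(buf, buf_len);

            if (!skb)
                    return NULL;

            skb_reserve(skb, NET_IP_ALIGN);
            skb_put(skb, pkt_len);
            skb->protocol = eth_type_trans(skb, dev);
            return skb;
    }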
* net: calxedaxgmac: enable rx cut-thru mode (Rob Herring, 2012-07-11)
  Enabling RX cut-thru mode yields better performance as received frames start getting written to memory before a whole frame is received.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: calxedaxgmac: set outstanding AXI bus transactions to 8 (Rob Herring, 2012-07-11)
  Increase the number of outstanding read and write AXI transactions from 1 to 8 for better performance.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: calxedaxgmac: fix hang on rx refill (Rob Herring, 2012-07-11)
  Fix intermittent hangs in xgmac_rx_refill. If a ring buffer entry already had an skb allocated, then xgmac_rx_refill would get stuck in a loop. This can happen on a rx error when we just leave the skb allocated to the entry.
  [ 7884.510000] INFO: rcu_preempt detected stall on CPU 0 (t=727315 jiffies)
  [ 7884.510000] [<c0010a59>] (unwind_backtrace+0x1/0x98) from [<c006fd93>] (__rcu_pending+0x11b/0x2c4)
  [ 7884.510000] [<c006fd93>] (__rcu_pending+0x11b/0x2c4) from [<c0070b95>] (rcu_check_callbacks+0xed/0x1a8)
  [ 7884.510000] [<c0070b95>] (rcu_check_callbacks+0xed/0x1a8) from [<c0036abb>] (update_process_times+0x2b/0x48)
  [ 7884.510000] [<c0036abb>] (update_process_times+0x2b/0x48) from [<c004e8fd>] (tick_sched_timer+0x51/0x94)
  [ 7884.510000] [<c004e8fd>] (tick_sched_timer+0x51/0x94) from [<c0045527>] (__run_hrtimer+0x4f/0x1e8)
  [ 7884.510000] [<c0045527>] (__run_hrtimer+0x4f/0x1e8) from [<c0046003>] (hrtimer_interrupt+0xd7/0x1e4)
  [ 7884.510000] [<c0046003>] (hrtimer_interrupt+0xd7/0x1e4) from [<c00101d3>] (twd_handler+0x17/0x24)
  [ 7884.510000] [<c00101d3>] (twd_handler+0x17/0x24) from [<c006be39>] (handle_percpu_devid_irq+0x59/0x114)
  [ 7884.510000] [<c006be39>] (handle_percpu_devid_irq+0x59/0x114) from [<c0069aab>] (generic_handle_irq+0x17/0x2c)
  [ 7884.510000] [<c0069aab>] (generic_handle_irq+0x17/0x2c) from [<c000cc8d>] (handle_IRQ+0x35/0x7c)
  [ 7884.510000] [<c000cc8d>] (handle_IRQ+0x35/0x7c) from [<c033b153>] (__irq_svc+0x33/0xb8)
  [ 7884.510000] [<c033b153>] (__irq_svc+0x33/0xb8) from [<c0244b06>] (xgmac_rx_refill+0x3a/0x140)
  [ 7884.510000] [<c0244b06>] (xgmac_rx_refill+0x3a/0x140) from [<c02458ed>] (xgmac_poll+0x265/0x3bc)
  [ 7884.510000] [<c02458ed>] (xgmac_poll+0x265/0x3bc) from [<c029fcbf>] (net_rx_action+0xc3/0x200)
  [ 7884.510000] [<c029fcbf>] (net_rx_action+0xc3/0x200) from [<c0030cab>] (__do_softirq+0xa3/0x1bc)
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
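  A hedged sketch of the refill-loop fix described above: entries that still hold an skb (for example after an rx error) are stepped over instead of being spun on forever. The ring structure, field names, and the 1536-byte buffer size are all illustrative:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    struct rx_ring_sketch {
            struct sk_buff **skbuff;         /* skb attached to each entry */
            unsigned int head, tail, size;
    };

    static void rx_refill_sketch(struct net_device *dev,
                                 struct rx_ring_sketch *ring)
    {
            while (ring->head - ring->tail < ring->size) {
                    unsigned int entry = ring->tail % ring->size;

                    /* Only allocate when the entry is empty; always advance
                     * so an already-populated entry cannot hang the loop. */
                    if (!ring->skbuff[entry]) {
                            struct sk_buff *skb =
                                    netdev_alloc_skb_ip_align(dev, 1536);
                            if (!skb)
                                    break;
                            ring->skbuff[entry] = skb;
                            /* ...dma_map the buffer and give the descriptor
                             * back to the hardware here... */
                    }
                    ring->tail++;
            }
    }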
* net: calxedaxgmac: fix net timeout recovery (Rob Herring, 2012-07-11)
  Fix net tx watchdog timeout recovery. The descriptor ring was reset, but the DMA engine was not reset to the beginning of the ring.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
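  A loosely hedged illustration of the point above: reinitialising the software ring alone is not enough, the DMA engine also has to be pointed back at the start of the ring. The register offsets and bit below are invented for the sketch, not real XGMAC definitions:

    #include <linux/io.h>
    #include <linux/kernel.h>
    #include <linux/types.h>

    #define DMA_TX_BASE_ADDR_REG 0x10        /* hypothetical offset */
    #define DMA_CONTROL_REG      0x18        /* hypothetical offset */
    #define DMA_CONTROL_START_TX 0x1         /* hypothetical bit */

    static void tx_timeout_recover_sketch(void __iomem *base,
                                          dma_addr_t ring_dma)
    {
            /* Re-point the tx DMA at the first descriptor of the
             * (re-initialised) ring, then restart transmission. */
            writel(lower_32_bits(ring_dma), base + DMA_TX_BASE_ADDR_REG);
            writel(readl(base + DMA_CONTROL_REG) | DMA_CONTROL_START_TX,
                   base + DMA_CONTROL_REG);
    }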
* net: use eth_hw_addr_random() and reset addr_assign_type (Danny Kukawka, 2012-02-15)
  Use eth_hw_addr_random() instead of calling random_ether_addr() to set addr_assign_type correctly to NET_ADDR_RANDOM. Reset the state to NET_ADDR_PERM as soon as the MAC gets changed via .ndo_set_mac_address.
  v2: adapt to renamed eth_hw_addr_random()
  Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>
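  A brief sketch of the pattern this series applies, using the real eth_hw_addr_random() helper and addr_assign_type field; the callbacks below are generic examples in the style of drivers of that era, not this driver's code:

    #include <linux/errno.h>
    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>
    #include <linux/socket.h>
    #include <linux/string.h>

    static void example_init_mac(struct net_device *dev)
    {
            /* Sets a random MAC and marks addr_assign_type NET_ADDR_RANDOM. */
            eth_hw_addr_random(dev);
    }

    static int example_set_mac_address(struct net_device *dev, void *p)
    {
            struct sockaddr *addr = p;

            if (!is_valid_ether_addr(addr->sa_data))
                    return -EADDRNOTAVAIL;

            memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN);
            /* A user-supplied address is no longer random. */
            dev->addr_assign_type = NET_ADDR_PERM;
            return 0;
    }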
* xgmac: cleanups (Stephen Hemminger, 2012-01-05)
  Make local function static, make ethtool_ops const. Compile tested only.
  Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: calxeda xgmac ethernet driver add missing HAS_IOMEM dependency (Heiko Carstens, 2011-12-27)
  Fix allyesconfig build on architectures without IOMEM:
  drivers/net/ethernet/calxeda/xgmac.c:1800:2: error: implicit declaration of function 'iounmap' [-Werror=implicit-function-declaration]
  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Cc: Rob Herring <rob.herring@calxeda.com>
  Acked-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: add calxeda xgmac ethernet driver (Rob Herring, 2011-11-29)
  Add support for the XGMAC 10Gb ethernet device in the Calxeda Highbank SOC.
  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>