author    Linus Torvalds <torvalds@linux-foundation.org>  2013-05-01 17:08:52 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2013-05-01 17:08:52 -0400
commit    73287a43cc79ca06629a88d1a199cd283f42456a (patch)
tree      acf4456e260115bea77ee31a29f10ce17f0db45c /drivers/net/ethernet/broadcom
parent    251df49db3327c64bf917bfdba94491fde2b4ee0 (diff)
parent    20074f357da4a637430aec2879c9d864c5d2c23c (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
 "Highlights (1721 non-merge commits, this has to be a record of some sort):

   1) Add 'random' mode to team driver, from Jiri Pirko and Eric Dumazet.
   2) Make it so that any driver that supports configuration of multiple
      MAC addresses can provide the forwarding database add and del calls
      by providing a default implementation and hooking that up if the
      driver doesn't have an explicit set of handlers.  From Vlad Yasevich.
   3) Support GSO segmentation over tunnels and other encapsulating
      devices such as VXLAN, from Pravin B Shelar.
   4) Support L2 GRE tunnels in the flow dissector, from Michael Dalton.
   5) Implement Tail Loss Probe (TLP) detection in TCP, from Nandita
      Dukkipati.
   6) In the PHY layer, allow supporting wake-on-lan in situations where
      the PHY registers have to be written for it to be configured.  Use
      it to support wake-on-lan in mv643xx_eth.  From Michael Stapelberg.
   7) Significantly improve firewire IPV6 support, from YOSHIFUJI Hideaki.
   8) Allow multiple packets to be sent in a single transmission using
      network coding in batman-adv, from Martin Hundebøll.
   9) Add support for T5 cxgb4 chips, from Santosh Rastapur.
  10) Generalize the VXLAN forwarding tables so that there is more
      flexibility in configuring various aspects of the endpoints.  From
      David Stevens.
  11) Support RSS and TSO in hardware over GRE tunnels in bnx2x driver,
      from Dmitry Kravkov.
  12) Zero copy support in nfnetlink_queue, from Eric Dumazet and Pablo
      Neira Ayuso.
  13) Start adding networking selftests.
  14) In situations of overload on the same AF_PACKET fanout socket, or
      per-cpu packet receive queue, minimize drop by distributing the load
      to other cpus/fanouts.  From Willem de Bruijn and Eric Dumazet.
  15) Add support for new payload offset BPF instruction, from Daniel
      Borkmann.
  16) Convert several drivers over to module_platform_driver(), from
      Sachin Kamat.
  17) Provide a minimal BPF JIT image disassembler userspace tool, from
      Daniel Borkmann.
  18) Rewrite F-RTO implementation in TCP to match the final specification
      of it in RFC4138 and RFC5682.  From Yuchung Cheng.
  19) Provide netlink socket diag of netlink sockets ("Yo dawg, I hear you
      like netlink, so I implemented netlink dumping of netlink sockets.")
      From Andrey Vagin.
  20) Remove ugly passing of rtnetlink attributes into rtnl_doit
      functions, from Thomas Graf.
  21) Allow userspace to be able to see if a configuration change occurs
      in the middle of an address or device list dump, from Nicolas
      Dichtel.
  22) Support RFC3168 ECN protection for ipv6 fragments, from Hannes
      Frederic Sowa.
  23) Increase accuracy of packet length used by packet scheduler, from
      Jason Wang.
  24) Beginning set of changes to make ipv4/ipv6 fragment handling more
      scalable and less susceptible to overload and locking contention,
      from Jesper Dangaard Brouer.
  25) Get rid of using non-type-safe NLMSG_* macros and use nlmsg_*()
      instead.  From Hong Zhiguo.
  26) Optimize route usage in IPVS by avoiding reference counting where
      possible, from Julian Anastasov.
  27) Convert IPVS schedulers to RCU, also from Julian Anastasov.
  28) Support cpu fanouts in xt_NFQUEUE netfilter target, from Holger
      Eitzenberger.
  29) Network namespace support for nf_log, ebt_log, xt_LOG, ipt_ULOG,
      nfnetlink_log, and nfnetlink_queue.  From Gao feng.
  30) Implement RFC3168 ECN protection, from Hannes Frederic Sowa.
  31) Support several new r8169 chips, from Hayes Wang.
  32) Support tokenized interface identifiers in ipv6, from Daniel
      Borkmann.
  33) Use usbnet_link_change() helper in USB net driver, from Ming Lei.
  34) Add 802.1ad vlan offload support, from Patrick McHardy.
  35) Support mmap() based netlink communication, also from Patrick
      McHardy.
  36) Support HW timestamping in mlx4 driver, from Amir Vadai.
  37) Rationalize AF_PACKET packet timestamping when transmitting, from
      Willem de Bruijn and Daniel Borkmann.
  38) Bring parity to what's provided by /proc/net/packet socket dumping
      and the info provided by netlink socket dumping of AF_PACKET
      sockets.  From Nicolas Dichtel.
  39) Fix peeking beyond zero sized SKBs in AF_UNIX, from Benjamin
      Poirier"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1722 commits)
  filter: fix va_list build error
  af_unix: fix a fatal race with bit fields
  bnx2x: Prevent memory leak when cnic is absent
  bnx2x: correct reading of speed capabilities
  net: sctp: attribute printl with __printf for gcc fmt checks
  netlink: kconfig: move mmap i/o into netlink kconfig
  netpoll: convert mutex into a semaphore
  netlink: Fix skb ref counting.
  net_sched: act_ipt forward compat with xtables
  mlx4_en: fix a build error on 32bit arches
  Revert "bnx2x: allow nvram test to run when device is down"
  bridge: avoid OOPS if root port not found
  drivers: net: cpsw: fix kernel warn on cpsw irq enable
  sh_eth: use random MAC address if no valid one supplied
  3c509.c: call SET_NETDEV_DEV for all device types (ISA/ISAPnP/EISA)
  tg3: fix to append hardware time stamping flags
  unix/stream: fix peeking with an offset larger than data in queue
  unix/dgram: fix peeking with an offset larger than data in queue
  unix/dgram: peek beyond 0-sized skbs
  openvswitch: Remove unneeded ovs_netdev_get_ifindex()
  ...
Diffstat (limited to 'drivers/net/ethernet/broadcom')
-rw-r--r--  drivers/net/ethernet/broadcom/bcm63xx_enet.c        |  73
-rw-r--r--  drivers/net/ethernet/broadcom/bgmac.c               |  84
-rw-r--r--  drivers/net/ethernet/broadcom/bgmac.h               |   1
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2.c                |  19
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x.h         |  58
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c     | 368
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h     |  47
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c | 377
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h |  91
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h     | 252
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c    | 240
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h    |  16
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c    | 349
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h     |   6
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c      |  79
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h      |  21
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c   | 351
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h   |  27
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c   |  77
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h   |   2
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c    | 126
-rw-r--r--  drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h    |   9
-rw-r--r--  drivers/net/ethernet/broadcom/cnic.c                |   4
-rw-r--r--  drivers/net/ethernet/broadcom/cnic_if.h             |   3
-rw-r--r--  drivers/net/ethernet/broadcom/sb1250-mac.c          |   5
-rw-r--r--  drivers/net/ethernet/broadcom/tg3.c                 | 912
-rw-r--r--  drivers/net/ethernet/broadcom/tg3.h                 |  30
27 files changed, 2669 insertions(+), 958 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index 7d81e059e811..0b3e23ec37f7 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -862,27 +862,25 @@ static int bcm_enet_open(struct net_device *dev)
 
 	/* allocate rx dma ring */
 	size = priv->rx_ring_size * sizeof(struct bcm_enet_desc);
-	p = dma_alloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);
+	p = dma_alloc_coherent(kdev, size, &priv->rx_desc_dma,
+			       GFP_KERNEL | __GFP_ZERO);
 	if (!p) {
-		dev_err(kdev, "cannot allocate rx ring %u\n", size);
 		ret = -ENOMEM;
 		goto out_freeirq_tx;
 	}
 
-	memset(p, 0, size);
 	priv->rx_desc_alloc_size = size;
 	priv->rx_desc_cpu = p;
 
 	/* allocate tx dma ring */
 	size = priv->tx_ring_size * sizeof(struct bcm_enet_desc);
-	p = dma_alloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);
+	p = dma_alloc_coherent(kdev, size, &priv->tx_desc_dma,
+			       GFP_KERNEL | __GFP_ZERO);
 	if (!p) {
-		dev_err(kdev, "cannot allocate tx ring\n");
 		ret = -ENOMEM;
 		goto out_free_rx_ring;
 	}
 
-	memset(p, 0, size);
 	priv->tx_desc_alloc_size = size;
 	priv->tx_desc_cpu = p;
 
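
The change above swaps an explicit memset() after dma_alloc_coherent() for the __GFP_ZERO flag, which asks the allocator for pre-zeroed memory. A minimal sketch of the idiom, with a hypothetical my_alloc_ring() helper standing in for the driver code (note that later kernels zero dma_alloc_coherent() memory unconditionally, making the flag redundant):

    /* Sketch: zeroed coherent DMA allocation; my_alloc_ring() is
     * illustrative, not part of the driver.
     */
    static void *my_alloc_ring(struct device *dev, size_t size,
                               dma_addr_t *handle)
    {
            /* the allocator clears the buffer, so no memset() is needed */
            return dma_alloc_coherent(dev, size, handle,
                                      GFP_KERNEL | __GFP_ZERO);
    }
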
@@ -1619,7 +1617,6 @@ static int bcm_enet_probe(struct platform_device *pdev)
 	struct resource *res_mem, *res_irq, *res_irq_rx, *res_irq_tx;
 	struct mii_bus *bus;
 	const char *clk_name;
-	unsigned int iomem_size;
 	int i, ret;
 
 	/* stop if shared driver failed, assume driver->probe will be
@@ -1644,17 +1641,12 @@ static int bcm_enet_probe(struct platform_device *pdev)
 	if (ret)
 		goto out;
 
-	iomem_size = resource_size(res_mem);
-	if (!request_mem_region(res_mem->start, iomem_size, "bcm63xx_enet")) {
-		ret = -EBUSY;
-		goto out;
-	}
-
-	priv->base = ioremap(res_mem->start, iomem_size);
+	priv->base = devm_request_and_ioremap(&pdev->dev, res_mem);
 	if (priv->base == NULL) {
 		ret = -ENOMEM;
-		goto out_release_mem;
+		goto out;
 	}
+
 	dev->irq = priv->irq = res_irq->start;
 	priv->irq_rx = res_irq_rx->start;
 	priv->irq_tx = res_irq_tx->start;
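
devm_request_and_ioremap() merges request_mem_region() and ioremap() into one device-managed call, which is why the out_unmap/out_release_mem unwind labels disappear later in this patch: the mapping is released automatically when the device is unbound. A sketch of the probe-side pattern (this helper returned NULL on failure, and was later superseded by devm_ioremap_resource(), which returns an ERR_PTR instead):

    /* Sketch: device-managed MMIO mapping in a platform probe. */
    static int my_probe(struct platform_device *pdev)
    {
            struct resource *res;
            void __iomem *base;

            res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            if (!res)
                    return -ENODEV;

            base = devm_request_and_ioremap(&pdev->dev, res);
            if (!base)
                    return -ENOMEM; /* no manual unwind needed */
            /* ... use base ... */
            return 0;
    }
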
@@ -1674,9 +1666,9 @@ static int bcm_enet_probe(struct platform_device *pdev)
 	priv->mac_clk = clk_get(&pdev->dev, clk_name);
 	if (IS_ERR(priv->mac_clk)) {
 		ret = PTR_ERR(priv->mac_clk);
-		goto out_unmap;
+		goto out;
 	}
-	clk_enable(priv->mac_clk);
+	clk_prepare_enable(priv->mac_clk);
 
 	/* initialize default and fetch platform data */
 	priv->rx_ring_size = BCMENET_DEF_RX_DESC;
@@ -1705,7 +1697,7 @@ static int bcm_enet_probe(struct platform_device *pdev)
 			priv->phy_clk = NULL;
 			goto out_put_clk_mac;
 		}
-		clk_enable(priv->phy_clk);
+		clk_prepare_enable(priv->phy_clk);
 	}
 
 	/* do minimal hardware init to be able to probe mii bus */
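
clk_prepare_enable() is simply clk_prepare() followed by clk_enable(); the common clock framework splits the two because prepare may sleep while enable must be callable from atomic context. The teardown side must be balanced with clk_disable_unprepare(), as the error paths further down do. A minimal sketch of the pairing (my_clk is a placeholder):

    #include <linux/clk.h>

    static int my_power_on(struct clk *my_clk)
    {
            return clk_prepare_enable(my_clk);  /* prepare + enable */
    }

    static void my_power_off(struct clk *my_clk)
    {
            clk_disable_unprepare(my_clk);      /* disable + unprepare */
    }
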
@@ -1733,7 +1725,8 @@ static int bcm_enet_probe(struct platform_device *pdev)
 	 * if a slave is not present on hw */
 	bus->phy_mask = ~(1 << priv->phy_id);
 
-	bus->irq = kmalloc(sizeof(int) * PHY_MAX_ADDR, GFP_KERNEL);
+	bus->irq = devm_kzalloc(&pdev->dev, sizeof(int) * PHY_MAX_ADDR,
+				GFP_KERNEL);
 	if (!bus->irq) {
 		ret = -ENOMEM;
 		goto out_free_mdio;
@@ -1794,10 +1787,8 @@ static int bcm_enet_probe(struct platform_device *pdev)
 	return 0;
 
 out_unregister_mdio:
-	if (priv->mii_bus) {
+	if (priv->mii_bus)
 		mdiobus_unregister(priv->mii_bus);
-		kfree(priv->mii_bus->irq);
-	}
 
 out_free_mdio:
 	if (priv->mii_bus)
@@ -1807,19 +1798,13 @@ out_uninit_hw:
 	/* turn off mdc clock */
 	enet_writel(priv, 0, ENET_MIISC_REG);
 	if (priv->phy_clk) {
-		clk_disable(priv->phy_clk);
+		clk_disable_unprepare(priv->phy_clk);
 		clk_put(priv->phy_clk);
 	}
 
 out_put_clk_mac:
-	clk_disable(priv->mac_clk);
+	clk_disable_unprepare(priv->mac_clk);
 	clk_put(priv->mac_clk);
-
-out_unmap:
-	iounmap(priv->base);
-
-out_release_mem:
-	release_mem_region(res_mem->start, iomem_size);
 out:
 	free_netdev(dev);
 	return ret;
@@ -1833,7 +1818,6 @@ static int bcm_enet_remove(struct platform_device *pdev)
 {
 	struct bcm_enet_priv *priv;
 	struct net_device *dev;
-	struct resource *res;
 
 	/* stop netdevice */
 	dev = platform_get_drvdata(pdev);
@@ -1845,7 +1829,6 @@ static int bcm_enet_remove(struct platform_device *pdev)
 
 	if (priv->has_phy) {
 		mdiobus_unregister(priv->mii_bus);
-		kfree(priv->mii_bus->irq);
 		mdiobus_free(priv->mii_bus);
 	} else {
 		struct bcm63xx_enet_platform_data *pd;
@@ -1856,17 +1839,12 @@ static int bcm_enet_remove(struct platform_device *pdev)
 					  bcm_enet_mdio_write_mii);
 	}
 
-	/* release device resources */
-	iounmap(priv->base);
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	release_mem_region(res->start, resource_size(res));
-
 	/* disable hw block clocks */
 	if (priv->phy_clk) {
-		clk_disable(priv->phy_clk);
+		clk_disable_unprepare(priv->phy_clk);
 		clk_put(priv->phy_clk);
 	}
-	clk_disable(priv->mac_clk);
+	clk_disable_unprepare(priv->mac_clk);
 	clk_put(priv->mac_clk);
 
 	platform_set_drvdata(pdev, NULL);
@@ -1889,31 +1867,20 @@ struct platform_driver bcm63xx_enet_driver = {
 static int bcm_enet_shared_probe(struct platform_device *pdev)
 {
 	struct resource *res;
-	unsigned int iomem_size;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	if (!res)
 		return -ENODEV;
 
-	iomem_size = resource_size(res);
-	if (!request_mem_region(res->start, iomem_size, "bcm63xx_enet_dma"))
-		return -EBUSY;
-
-	bcm_enet_shared_base = ioremap(res->start, iomem_size);
-	if (!bcm_enet_shared_base) {
-		release_mem_region(res->start, iomem_size);
+	bcm_enet_shared_base = devm_request_and_ioremap(&pdev->dev, res);
+	if (!bcm_enet_shared_base)
 		return -ENOMEM;
-	}
+
 	return 0;
 }
 
 static int bcm_enet_shared_remove(struct platform_device *pdev)
 {
-	struct resource *res;
-
-	iounmap(bcm_enet_shared_base);
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	release_mem_region(res->start, resource_size(res));
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
index da5f4397f87c..eec0af45b859 100644
--- a/drivers/net/ethernet/broadcom/bgmac.c
+++ b/drivers/net/ethernet/broadcom/bgmac.c
@@ -13,6 +13,7 @@
 #include <linux/delay.h>
 #include <linux/etherdevice.h>
 #include <linux/mii.h>
+#include <linux/phy.h>
 #include <linux/interrupt.h>
 #include <linux/dma-mapping.h>
 #include <bcm47xx_nvram.h>
@@ -244,10 +245,8 @@ static int bgmac_dma_rx_skb_for_slot(struct bgmac *bgmac,
 
 	/* Alloc skb */
 	slot->skb = netdev_alloc_skb(bgmac->net_dev, BGMAC_RX_BUF_SIZE);
-	if (!slot->skb) {
-		bgmac_err(bgmac, "Allocation of skb failed!\n");
+	if (!slot->skb)
 		return -ENOMEM;
-	}
 
 	/* Poison - if everything goes fine, hardware will overwrite it */
 	rx = (struct bgmac_rx_header *)slot->skb->data;
@@ -1313,6 +1312,73 @@ static const struct ethtool_ops bgmac_ethtool_ops = {
 };
 
 /**************************************************
+ * MII
+ **************************************************/
+
+static int bgmac_mii_read(struct mii_bus *bus, int mii_id, int regnum)
+{
+	return bgmac_phy_read(bus->priv, mii_id, regnum);
+}
+
+static int bgmac_mii_write(struct mii_bus *bus, int mii_id, int regnum,
+			   u16 value)
+{
+	return bgmac_phy_write(bus->priv, mii_id, regnum, value);
+}
+
+static int bgmac_mii_register(struct bgmac *bgmac)
+{
+	struct mii_bus *mii_bus;
+	int i, err = 0;
+
+	mii_bus = mdiobus_alloc();
+	if (!mii_bus)
+		return -ENOMEM;
+
+	mii_bus->name = "bgmac mii bus";
+	sprintf(mii_bus->id, "%s-%d-%d", "bgmac", bgmac->core->bus->num,
+		bgmac->core->core_unit);
+	mii_bus->priv = bgmac;
+	mii_bus->read = bgmac_mii_read;
+	mii_bus->write = bgmac_mii_write;
+	mii_bus->parent = &bgmac->core->dev;
+	mii_bus->phy_mask = ~(1 << bgmac->phyaddr);
+
+	mii_bus->irq = kmalloc_array(PHY_MAX_ADDR, sizeof(int), GFP_KERNEL);
+	if (!mii_bus->irq) {
+		err = -ENOMEM;
+		goto err_free_bus;
+	}
+	for (i = 0; i < PHY_MAX_ADDR; i++)
+		mii_bus->irq[i] = PHY_POLL;
+
+	err = mdiobus_register(mii_bus);
+	if (err) {
+		bgmac_err(bgmac, "Registration of mii bus failed\n");
+		goto err_free_irq;
+	}
+
+	bgmac->mii_bus = mii_bus;
+
+	return err;
+
+err_free_irq:
+	kfree(mii_bus->irq);
+err_free_bus:
+	mdiobus_free(mii_bus);
+	return err;
+}
+
+static void bgmac_mii_unregister(struct bgmac *bgmac)
+{
+	struct mii_bus *mii_bus = bgmac->mii_bus;
+
+	mdiobus_unregister(mii_bus);
+	kfree(mii_bus->irq);
+	mdiobus_free(mii_bus);
+}
+
+/**************************************************
  * BCMA bus ops
  **************************************************/
 
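
The new section follows the stock mdiobus lifecycle: mdiobus_alloc(), fill in the read/write callbacks and the per-address irq table (PHY_POLL when no interrupt line exists), mdiobus_register(), then tear down in reverse order. A condensed, hypothetical sketch of the same lifecycle outside this driver (my_phy_read/my_phy_write are placeholder accessors):

    static int my_phy_read(struct mii_bus *bus, int addr, int reg);
    static int my_phy_write(struct mii_bus *bus, int addr, int reg, u16 val);

    static int my_mii_register(struct device *parent, void *priv)
    {
            struct mii_bus *bus;
            int i, err;

            bus = mdiobus_alloc();
            if (!bus)
                    return -ENOMEM;

            bus->name = "example mii bus";
            snprintf(bus->id, MII_BUS_ID_SIZE, "example-0");
            bus->priv = priv;
            bus->read = my_phy_read;
            bus->write = my_phy_write;
            bus->parent = parent;

            bus->irq = kmalloc_array(PHY_MAX_ADDR, sizeof(int), GFP_KERNEL);
            if (!bus->irq) {
                    mdiobus_free(bus);
                    return -ENOMEM;
            }
            for (i = 0; i < PHY_MAX_ADDR; i++)
                    bus->irq[i] = PHY_POLL; /* no irq line: poll the PHY */

            err = mdiobus_register(bus);
            if (err) {
                    kfree(bus->irq);
                    mdiobus_free(bus);
            }
            return err;
    }
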
@@ -1404,11 +1470,18 @@ static int bgmac_probe(struct bcma_device *core)
 	if (core->bus->sprom.boardflags_lo & BGMAC_BFL_ENETADM)
 		bgmac_warn(bgmac, "Support for ADMtek ethernet switch not implemented\n");
 
+	err = bgmac_mii_register(bgmac);
+	if (err) {
+		bgmac_err(bgmac, "Cannot register MDIO\n");
+		err = -ENOTSUPP;
+		goto err_dma_free;
+	}
+
 	err = register_netdev(bgmac->net_dev);
 	if (err) {
 		bgmac_err(bgmac, "Cannot register net device\n");
 		err = -ENOTSUPP;
-		goto err_dma_free;
+		goto err_mii_unregister;
 	}
 
 	netif_carrier_off(net_dev);
@@ -1417,6 +1490,8 @@ static int bgmac_probe(struct bcma_device *core)
 
 	return 0;
 
+err_mii_unregister:
+	bgmac_mii_unregister(bgmac);
 err_dma_free:
 	bgmac_dma_free(bgmac);
 
@@ -1433,6 +1508,7 @@ static void bgmac_remove(struct bcma_device *core)
 
 	netif_napi_del(&bgmac->napi);
 	unregister_netdev(bgmac->net_dev);
+	bgmac_mii_unregister(bgmac);
 	bgmac_dma_free(bgmac);
 	bcma_set_drvdata(core, NULL);
 	free_netdev(bgmac->net_dev);
diff --git a/drivers/net/ethernet/broadcom/bgmac.h b/drivers/net/ethernet/broadcom/bgmac.h
index 4ede614c81f8..98d4b5fcc070 100644
--- a/drivers/net/ethernet/broadcom/bgmac.h
+++ b/drivers/net/ethernet/broadcom/bgmac.h
@@ -399,6 +399,7 @@ struct bgmac {
 	struct bcma_device *cmn; /* Reference to CMN core for BCM4706 */
 	struct net_device *net_dev;
 	struct napi_struct napi;
+	struct mii_bus *mii_bus;
 
 	/* DMA */
 	struct bgmac_dma_ring tx_ring[BGMAC_MAX_TX_RINGS];
diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
index 2f0ba8f2fd6c..5d204492c603 100644
--- a/drivers/net/ethernet/broadcom/bnx2.c
+++ b/drivers/net/ethernet/broadcom/bnx2.c
@@ -416,7 +416,7 @@ static int bnx2_unregister_cnic(struct net_device *dev)
 	return 0;
 }
 
-struct cnic_eth_dev *bnx2_cnic_probe(struct net_device *dev)
+static struct cnic_eth_dev *bnx2_cnic_probe(struct net_device *dev)
 {
 	struct bnx2 *bp = netdev_priv(dev);
 	struct cnic_eth_dev *cp = &bp->cnic_eth_dev;
@@ -854,12 +854,11 @@ bnx2_alloc_mem(struct bnx2 *bp)
 			sizeof(struct statistics_block);
 
 	status_blk = dma_alloc_coherent(&bp->pdev->dev, bp->status_stats_size,
-					&bp->status_blk_mapping, GFP_KERNEL);
+					&bp->status_blk_mapping,
+					GFP_KERNEL | __GFP_ZERO);
 	if (status_blk == NULL)
 		goto alloc_mem_err;
 
-	memset(status_blk, 0, bp->status_stats_size);
-
 	bnapi = &bp->bnx2_napi[0];
 	bnapi->status_blk.msi = status_blk;
 	bnapi->hw_tx_cons_ptr =
@@ -3212,7 +3211,7 @@ bnx2_rx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
 		}
 		if ((status & L2_FHDR_STATUS_L2_VLAN_TAG) &&
 		    !(bp->rx_mode & BNX2_EMAC_RX_MODE_KEEP_VLAN_TAG))
-			__vlan_hwaccel_put_tag(skb, rx_hdr->l2_fhdr_vlan_tag);
+			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), rx_hdr->l2_fhdr_vlan_tag);
 
 		skb->protocol = eth_type_trans(skb, bp->dev);
 
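
The added htons(ETH_P_8021Q) argument comes from the VLAN rework in this same merge window (item 34 in the pull message): __vlan_hwaccel_put_tag() now takes the tag protocol explicitly so 802.1ad (ETH_P_8021AD) service tags can be offloaded alongside 802.1Q customer tags. A sketch of the new call shape:

    #include <linux/if_vlan.h>

    static void my_rx_vlan(struct sk_buff *skb, u16 vlan_tci)
    {
            /* 802.1Q customer tag; htons(ETH_P_8021AD) would mark an
             * 802.1ad service tag instead */
            __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tci);
    }
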
@@ -3554,7 +3553,7 @@ bnx2_set_rx_mode(struct net_device *dev)
 	rx_mode = bp->rx_mode & ~(BNX2_EMAC_RX_MODE_PROMISCUOUS |
 				  BNX2_EMAC_RX_MODE_KEEP_VLAN_TAG);
 	sort_mode = 1 | BNX2_RPM_SORT_USER0_BC_EN;
-	if (!(dev->features & NETIF_F_HW_VLAN_RX) &&
+	if (!(dev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
 	    (bp->flags & BNX2_FLAG_CAN_KEEP_VLAN))
 		rx_mode |= BNX2_EMAC_RX_MODE_KEEP_VLAN_TAG;
 	if (dev->flags & IFF_PROMISC) {
@@ -7696,7 +7695,7 @@ bnx2_fix_features(struct net_device *dev, netdev_features_t features)
 	struct bnx2 *bp = netdev_priv(dev);
 
 	if (!(bp->flags & BNX2_FLAG_CAN_KEEP_VLAN))
-		features |= NETIF_F_HW_VLAN_RX;
+		features |= NETIF_F_HW_VLAN_CTAG_RX;
 
 	return features;
 }
@@ -7707,12 +7706,12 @@ bnx2_set_features(struct net_device *dev, netdev_features_t features)
 	struct bnx2 *bp = netdev_priv(dev);
 
 	/* TSO with VLAN tag won't work with current firmware */
-	if (features & NETIF_F_HW_VLAN_TX)
+	if (features & NETIF_F_HW_VLAN_CTAG_TX)
 		dev->vlan_features |= (dev->hw_features & NETIF_F_ALL_TSO);
 	else
 		dev->vlan_features &= ~NETIF_F_ALL_TSO;
 
-	if ((!!(features & NETIF_F_HW_VLAN_RX) !=
+	if ((!!(features & NETIF_F_HW_VLAN_CTAG_RX) !=
 	     !!(bp->rx_mode & BNX2_EMAC_RX_MODE_KEEP_VLAN_TAG)) &&
 	    netif_running(dev)) {
 		bnx2_netif_stop(bp, false);
@@ -8552,7 +8551,7 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 		dev->hw_features |= NETIF_F_IPV6_CSUM | NETIF_F_TSO6;
 
 	dev->vlan_features = dev->hw_features;
-	dev->hw_features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
+	dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX;
 	dev->features |= dev->hw_features;
 	dev->priv_flags |= IFF_UNICAST_FLT;
 
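
NETIF_F_HW_VLAN_TX/RX became NETIF_F_HW_VLAN_CTAG_TX/RX throughout this series; "CTAG" marks the 802.1Q customer tag and leaves the namespace open for the NETIF_F_HW_VLAN_STAG_* flags that cover 802.1ad service tags. A sketch of how a driver advertises the renamed offloads (my_set_vlan_features() is illustrative):

    static void my_set_vlan_features(struct net_device *dev)
    {
            dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX |
                                NETIF_F_HW_VLAN_CTAG_RX;
            dev->features |= dev->hw_features;
    }
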
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
index e4605a965084..3dba2a70a00e 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
@@ -26,8 +26,8 @@
  * (you will need to reboot afterwards) */
 /* #define BNX2X_STOP_ON_ERROR */
 
-#define DRV_MODULE_VERSION	"1.78.02-0"
-#define DRV_MODULE_RELDATE	"2013/01/14"
+#define DRV_MODULE_VERSION	"1.78.17-0"
+#define DRV_MODULE_RELDATE	"2013/04/11"
 #define BNX2X_BC_VER		0x040200
 
 #if defined(CONFIG_DCB)
@@ -492,7 +492,6 @@ enum bnx2x_tpa_mode_t {
 struct bnx2x_fastpath {
 	struct bnx2x		*bp; /* parent */
 
-#define BNX2X_NAPI_WEIGHT	128
 	struct napi_struct	napi;
 	union host_hc_status_block	status_blk;
 	/* chip independed shortcuts into sb structure */
@@ -613,9 +612,10 @@ struct bnx2x_fastpath {
  * START_BD		- describes packed
  * START_BD(splitted)	- includes unpaged data segment for GSO
  * PARSING_BD		- for TSO and CSUM data
+ * PARSING_BD2		- for encapsulation data
  * Frag BDs		- decribes pages for frags
  */
-#define BDS_PER_TX_PKT		3
+#define BDS_PER_TX_PKT		4
 #define MAX_BDS_PER_TX_PKT	(MAX_SKB_FRAGS + BDS_PER_TX_PKT)
 /* max BDs per tx packet including next pages */
 #define MAX_DESC_PER_TX_PKT	(MAX_BDS_PER_TX_PKT + \
@@ -730,18 +730,24 @@ struct bnx2x_fastpath {
 #define SKB_CS(skb)		(*(u16 *)(skb_transport_header(skb) + \
 					  skb->csum_offset))
 
-#define pbd_tcp_flags(skb)	(ntohl(tcp_flag_word(tcp_hdr(skb)))>>16 & 0xff)
+#define pbd_tcp_flags(tcp_hdr)	(ntohl(tcp_flag_word(tcp_hdr))>>16 & 0xff)
 
 #define XMIT_PLAIN		0
-#define XMIT_CSUM_V4		0x1
-#define XMIT_CSUM_V6		0x2
-#define XMIT_CSUM_TCP		0x4
-#define XMIT_GSO_V4		0x8
-#define XMIT_GSO_V6		0x10
+#define XMIT_CSUM_V4		(1 << 0)
+#define XMIT_CSUM_V6		(1 << 1)
+#define XMIT_CSUM_TCP		(1 << 2)
+#define XMIT_GSO_V4		(1 << 3)
+#define XMIT_GSO_V6		(1 << 4)
+#define XMIT_CSUM_ENC_V4	(1 << 5)
+#define XMIT_CSUM_ENC_V6	(1 << 6)
+#define XMIT_GSO_ENC_V4		(1 << 7)
+#define XMIT_GSO_ENC_V6		(1 << 8)
 
-#define XMIT_CSUM		(XMIT_CSUM_V4 | XMIT_CSUM_V6)
-#define XMIT_GSO		(XMIT_GSO_V4 | XMIT_GSO_V6)
+#define XMIT_CSUM_ENC		(XMIT_CSUM_ENC_V4 | XMIT_CSUM_ENC_V6)
+#define XMIT_GSO_ENC		(XMIT_GSO_ENC_V4 | XMIT_GSO_ENC_V6)
 
+#define XMIT_CSUM		(XMIT_CSUM_V4 | XMIT_CSUM_V6 | XMIT_CSUM_ENC)
+#define XMIT_GSO		(XMIT_GSO_V4 | XMIT_GSO_V6 | XMIT_GSO_ENC)
 
 /* stuff added to make the code fit 80Col */
 #define CQE_TYPE(cqe_fp_flags)	((cqe_fp_flags) & ETH_FAST_PATH_RX_CQE_TYPE)
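
Expressing the XMIT_* masks as (1 << n) makes room for the four new encapsulation bits, and folding the ENC masks into XMIT_CSUM/XMIT_GSO means existing `xmit_type & XMIT_CSUM` style tests transparently cover tunneled traffic. A worked example using the defines above (the scenario is illustrative):

    /* An encapsulated IPv4 TCP GSO packet might carry: */
    u32 xmit_type = XMIT_CSUM_V4 | XMIT_CSUM_ENC_V4 | XMIT_CSUM_TCP |
                    XMIT_GSO_V4 | XMIT_GSO_ENC_V4;

    /* Both aggregates still match, because they now include the ENC bits: */
    bool needs_csum = xmit_type & XMIT_CSUM;  /* true */
    bool needs_gso  = xmit_type & XMIT_GSO;   /* true */
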
@@ -844,6 +850,9 @@ struct bnx2x_common {
 #define CHIP_IS_57840_VF(bp)		(CHIP_NUM(bp) == CHIP_NUM_57840_VF)
 #define CHIP_IS_E1H(bp)			(CHIP_IS_57711(bp) || \
 					 CHIP_IS_57711E(bp))
+#define CHIP_IS_57811xx(bp)		(CHIP_IS_57811(bp) || \
+					 CHIP_IS_57811_MF(bp) || \
+					 CHIP_IS_57811_VF(bp))
 #define CHIP_IS_E2(bp)			(CHIP_IS_57712(bp) || \
 					 CHIP_IS_57712_MF(bp) || \
 					 CHIP_IS_57712_VF(bp))
@@ -853,9 +862,7 @@ struct bnx2x_common {
 					 CHIP_IS_57810(bp) || \
 					 CHIP_IS_57810_MF(bp) || \
 					 CHIP_IS_57810_VF(bp) || \
-					 CHIP_IS_57811(bp) || \
-					 CHIP_IS_57811_MF(bp) || \
-					 CHIP_IS_57811_VF(bp) || \
+					 CHIP_IS_57811xx(bp) || \
 					 CHIP_IS_57840(bp) || \
 					 CHIP_IS_57840_MF(bp) || \
 					 CHIP_IS_57840_VF(bp))
@@ -1215,14 +1222,16 @@ enum {
 	BNX2X_SP_RTNL_ENABLE_SRIOV,
 	BNX2X_SP_RTNL_VFPF_MCAST,
 	BNX2X_SP_RTNL_VFPF_STORM_RX_MODE,
+	BNX2X_SP_RTNL_HYPERVISOR_VLAN,
 };
 
 
 struct bnx2x_prev_path_list {
+	struct list_head list;
 	u8 bus;
 	u8 slot;
 	u8 path;
-	struct list_head list;
+	u8 aer;
 	u8 undi;
 };
 
1228 1237
@@ -1269,6 +1278,8 @@ struct bnx2x {
1269#define BP_FW_MB_IDX(bp) BP_FW_MB_IDX_VN(bp, BP_VN(bp)) 1278#define BP_FW_MB_IDX(bp) BP_FW_MB_IDX_VN(bp, BP_VN(bp))
1270 1279
1271#ifdef CONFIG_BNX2X_SRIOV 1280#ifdef CONFIG_BNX2X_SRIOV
1281 /* protects vf2pf mailbox from simultaneous access */
1282 struct mutex vf2pf_mutex;
1272 /* vf pf channel mailbox contains request and response buffers */ 1283 /* vf pf channel mailbox contains request and response buffers */
1273 struct bnx2x_vf_mbx_msg *vf2pf_mbox; 1284 struct bnx2x_vf_mbx_msg *vf2pf_mbox;
1274 dma_addr_t vf2pf_mbox_mapping; 1285 dma_addr_t vf2pf_mbox_mapping;
@@ -1281,6 +1292,8 @@ struct bnx2x {
 	dma_addr_t		pf2vf_bulletin_mapping;
 
 	struct pf_vf_bulletin_content	old_bulletin;
+
+	u16 requested_nr_virtfn;
 #endif /* CONFIG_BNX2X_SRIOV */
 
 	struct net_device	*dev;
@@ -1944,12 +1957,9 @@ static inline u32 reg_poll(struct bnx2x *bp, u32 reg, u32 expected, int ms,
 void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id,
 			    bool is_pf);
 
-#define BNX2X_ILT_ZALLOC(x, y, size) \
-	do { \
-		x = dma_alloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL); \
-		if (x) \
-			memset(x, 0, size); \
-	} while (0)
+#define BNX2X_ILT_ZALLOC(x, y, size)				\
+	x = dma_alloc_coherent(&bp->pdev->dev, size, y,		\
+			       GFP_KERNEL | __GFP_ZERO)
 
 #define BNX2X_ILT_FREE(x, y, size) \
 	do { \
@@ -2286,7 +2296,7 @@ static const u32 dmae_reg_go_c[] = {
 	DMAE_REG_GO_C12, DMAE_REG_GO_C13, DMAE_REG_GO_C14, DMAE_REG_GO_C15
 };
 
-void bnx2x_set_ethtool_ops(struct net_device *netdev);
+void bnx2x_set_ethtool_ops(struct bnx2x *bp, struct net_device *netdev);
 void bnx2x_notify_link_changed(struct bnx2x *bp);
 
 #define BNX2X_MF_SD_PROTOCOL(bp) \
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 57619dd4a92b..b8fbe266ab68 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -451,7 +451,8 @@ static void bnx2x_tpa_start(struct bnx2x_fastpath *fp, u16 queue,
  * Compute number of aggregated segments, and gso_type.
  */
 static void bnx2x_set_gro_params(struct sk_buff *skb, u16 parsing_flags,
-				 u16 len_on_bd, unsigned int pkt_len)
+				 u16 len_on_bd, unsigned int pkt_len,
+				 u16 num_of_coalesced_segs)
 {
 	/* TPA aggregation won't have either IP options or TCP options
 	 * other than timestamp or IPv6 extension headers.
@@ -480,8 +481,7 @@ static void bnx2x_set_gro_params(struct sk_buff *skb, u16 parsing_flags,
 	/* tcp_gro_complete() will copy NAPI_GRO_CB(skb)->count
 	 * to skb_shinfo(skb)->gso_segs
 	 */
-	NAPI_GRO_CB(skb)->count = DIV_ROUND_UP(pkt_len - hdrs_len,
-					       skb_shinfo(skb)->gso_size);
+	NAPI_GRO_CB(skb)->count = num_of_coalesced_segs;
 }
 
 static int bnx2x_alloc_rx_sge(struct bnx2x *bp,
@@ -537,7 +537,8 @@ static int bnx2x_fill_frag_skb(struct bnx2x *bp, struct bnx2x_fastpath *fp,
 	/* This is needed in order to enable forwarding support */
 	if (frag_size)
 		bnx2x_set_gro_params(skb, tpa_info->parsing_flags, len_on_bd,
-				     le16_to_cpu(cqe->pkt_len));
+				     le16_to_cpu(cqe->pkt_len),
+				     le16_to_cpu(cqe->num_of_coalesced_segs));
 
 #ifdef BNX2X_STOP_ON_ERROR
 	if (pages > min_t(u32, 8, MAX_SKB_FRAGS) * SGE_PAGES) {
@@ -641,6 +642,14 @@ static void bnx2x_gro_ipv6_csum(struct bnx2x *bp, struct sk_buff *skb)
 	th->check = ~tcp_v6_check(skb->len - skb_transport_offset(skb),
 				  &iph->saddr, &iph->daddr, 0);
 }
+
+static void bnx2x_gro_csum(struct bnx2x *bp, struct sk_buff *skb,
+			    void (*gro_func)(struct bnx2x*, struct sk_buff*))
+{
+	skb_set_network_header(skb, 0);
+	gro_func(bp, skb);
+	tcp_gro_complete(skb);
+}
 #endif
 
 static void bnx2x_gro_receive(struct bnx2x *bp, struct bnx2x_fastpath *fp,
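
bnx2x_gro_csum() hoists the shared set-network-header / checksum-fixup / tcp_gro_complete() sequence out of the protocol switch in bnx2x_gro_receive() below, taking the IPv4- or IPv6-specific fixup as a function pointer. Reduced to its essentials, the refactor has this shape (my_gro_csum() is a stand-in):

    static void my_gro_csum(struct bnx2x *bp, struct sk_buff *skb,
                            void (*fixup)(struct bnx2x *, struct sk_buff *))
    {
            skb_set_network_header(skb, 0); /* shared preamble */
            fixup(bp, skb);                 /* protocol-specific csum fix */
            tcp_gro_complete(skb);          /* shared postamble */
    }
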
@@ -648,19 +657,17 @@ static void bnx2x_gro_receive(struct bnx2x *bp, struct bnx2x_fastpath *fp,
 {
 #ifdef CONFIG_INET
 	if (skb_shinfo(skb)->gso_size) {
-		skb_set_network_header(skb, 0);
 		switch (be16_to_cpu(skb->protocol)) {
 		case ETH_P_IP:
-			bnx2x_gro_ip_csum(bp, skb);
+			bnx2x_gro_csum(bp, skb, bnx2x_gro_ip_csum);
 			break;
 		case ETH_P_IPV6:
-			bnx2x_gro_ipv6_csum(bp, skb);
+			bnx2x_gro_csum(bp, skb, bnx2x_gro_ipv6_csum);
 			break;
 		default:
-			BNX2X_ERR("FW GRO supports only IPv4/IPv6, not 0x%04x\n",
+			BNX2X_ERR("Error: FW GRO supports only IPv4/IPv6, not 0x%04x\n",
 				  be16_to_cpu(skb->protocol));
 		}
-		tcp_gro_complete(skb);
 	}
 #endif
 	napi_gro_receive(&fp->napi, skb);
@@ -718,7 +725,7 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp,
 		if (!bnx2x_fill_frag_skb(bp, fp, tpa_info, pages,
 					 skb, cqe, cqe_idx)) {
 			if (tpa_info->parsing_flags & PARSING_FLAGS_VLAN)
-				__vlan_hwaccel_put_tag(skb, tpa_info->vlan_tag);
+				__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), tpa_info->vlan_tag);
 			bnx2x_gro_receive(bp, fp, skb);
 		} else {
 			DP(NETIF_MSG_RX_STATUS,
@@ -993,7 +1000,7 @@ reuse_rx:
 
 		if (le16_to_cpu(cqe_fp->pars_flags.flags) &
 		    PARSING_FLAGS_VLAN)
-			__vlan_hwaccel_put_tag(skb,
+			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
 					       le16_to_cpu(cqe_fp->vlan_tag));
 		napi_gro_receive(&fp->napi, skb);
 
@@ -1037,6 +1044,7 @@ static irqreturn_t bnx2x_msix_fp_int(int irq, void *fp_cookie)
 	DP(NETIF_MSG_INTR,
 	   "got an MSI-X interrupt on IDX:SB [fp %d fw_sd %d igusb %d]\n",
 	   fp->index, fp->fw_sb_id, fp->igu_sb_id);
+
 	bnx2x_ack_sb(bp, fp->igu_sb_id, USTORM_ID, 0, IGU_INT_DISABLE, 0);
 
 #ifdef BNX2X_STOP_ON_ERROR
@@ -1718,7 +1726,7 @@ static int bnx2x_req_irq(struct bnx2x *bp)
 	return request_irq(irq, bnx2x_interrupt, flags, bp->dev->name, bp->dev);
 }
 
-static int bnx2x_setup_irqs(struct bnx2x *bp)
+int bnx2x_setup_irqs(struct bnx2x *bp)
 {
 	int rc = 0;
 	if (bp->flags & USING_MSIX_FLAG &&
@@ -2009,7 +2017,7 @@ static int bnx2x_init_hw(struct bnx2x *bp, u32 load_code)
  * Cleans the object that have internal lists without sending
  * ramrods. Should be run when interrutps are disabled.
  */
-static void bnx2x_squeeze_objects(struct bnx2x *bp)
+void bnx2x_squeeze_objects(struct bnx2x *bp)
 {
 	int rc;
 	unsigned long ramrod_flags = 0, vlan_mac_flags = 0;
@@ -2574,6 +2582,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 		}
 	}
 
+	bnx2x_pre_irq_nic_init(bp);
+
 	/* Connect to IRQs */
 	rc = bnx2x_setup_irqs(bp);
 	if (rc) {
@@ -2583,11 +2593,11 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 		LOAD_ERROR_EXIT(bp, load_error2);
 	}
 
-	/* Setup NIC internals and enable interrupts */
-	bnx2x_nic_init(bp, load_code);
-
 	/* Init per-function objects */
 	if (IS_PF(bp)) {
+		/* Setup NIC internals and enable interrupts */
+		bnx2x_post_irq_nic_init(bp, load_code);
+
 		bnx2x_init_bp_objs(bp);
 		bnx2x_iov_nic_init(bp);
 
@@ -2657,7 +2667,8 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
 	if (IS_PF(bp))
 		rc = bnx2x_set_eth_mac(bp, true);
 	else /* vf */
-		rc = bnx2x_vfpf_set_mac(bp);
+		rc = bnx2x_vfpf_config_mac(bp, bp->dev->dev_addr, bp->fp->index,
+					   true);
 	if (rc) {
 		BNX2X_ERR("Setting Ethernet MAC failed\n");
 		LOAD_ERROR_EXIT(bp, load_error3);
@@ -2777,7 +2788,7 @@ load_error0:
 #endif /* ! BNX2X_STOP_ON_ERROR */
 }
 
-static int bnx2x_drain_tx_queues(struct bnx2x *bp)
+int bnx2x_drain_tx_queues(struct bnx2x *bp)
 {
 	u8 rc = 0, cos, i;
 
@@ -2926,9 +2937,9 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 		bnx2x_free_fp_mem_cnic(bp);
 
 	if (IS_PF(bp)) {
-		bnx2x_free_mem(bp);
 		if (CNIC_LOADED(bp))
 			bnx2x_free_mem_cnic(bp);
+		bnx2x_free_mem(bp);
 	}
 	bp->state = BNX2X_STATE_CLOSED;
 	bp->cnic_loaded = false;
@@ -3089,11 +3100,11 @@ int bnx2x_poll(struct napi_struct *napi, int budget)
  * to ease the pain of our fellow microcode engineers
  * we use one mapping for both BDs
  */
-static noinline u16 bnx2x_tx_split(struct bnx2x *bp,
-				   struct bnx2x_fp_txdata *txdata,
-				   struct sw_tx_bd *tx_buf,
-				   struct eth_tx_start_bd **tx_bd, u16 hlen,
-				   u16 bd_prod, int nbd)
+static u16 bnx2x_tx_split(struct bnx2x *bp,
+			  struct bnx2x_fp_txdata *txdata,
+			  struct sw_tx_bd *tx_buf,
+			  struct eth_tx_start_bd **tx_bd, u16 hlen,
+			  u16 bd_prod)
 {
 	struct eth_tx_start_bd *h_tx_bd = *tx_bd;
 	struct eth_tx_bd *d_tx_bd;
@@ -3101,11 +3112,10 @@ static noinline u16 bnx2x_tx_split(struct bnx2x *bp,
 	int old_len = le16_to_cpu(h_tx_bd->nbytes);
 
 	/* first fix first BD */
-	h_tx_bd->nbd = cpu_to_le16(nbd);
 	h_tx_bd->nbytes = cpu_to_le16(hlen);
 
-	DP(NETIF_MSG_TX_QUEUED, "TSO split header size is %d (%x:%x) nbd %d\n",
-	   h_tx_bd->nbytes, h_tx_bd->addr_hi, h_tx_bd->addr_lo, h_tx_bd->nbd);
+	DP(NETIF_MSG_TX_QUEUED, "TSO split header size is %d (%x:%x)\n",
+	   h_tx_bd->nbytes, h_tx_bd->addr_hi, h_tx_bd->addr_lo);
 
 	/* now get a new data BD
 	 * (after the pbd) and fill it */
@@ -3134,7 +3144,7 @@ static noinline u16 bnx2x_tx_split(struct bnx2x *bp,
 
 #define bswab32(b32) ((__force __le32) swab32((__force __u32) (b32)))
 #define bswab16(b16) ((__force __le16) swab16((__force __u16) (b16)))
-static inline __le16 bnx2x_csum_fix(unsigned char *t_header, u16 csum, s8 fix)
+static __le16 bnx2x_csum_fix(unsigned char *t_header, u16 csum, s8 fix)
 {
 	__sum16 tsum = (__force __sum16) csum;
 
@@ -3149,30 +3159,47 @@ static inline __le16 bnx2x_csum_fix(unsigned char *t_header, u16 csum, s8 fix)
 	return bswab16(tsum);
 }
 
-static inline u32 bnx2x_xmit_type(struct bnx2x *bp, struct sk_buff *skb)
+static u32 bnx2x_xmit_type(struct bnx2x *bp, struct sk_buff *skb)
 {
 	u32 rc;
+	__u8 prot = 0;
+	__be16 protocol;
 
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
-		rc = XMIT_PLAIN;
+		return XMIT_PLAIN;
 
-	else {
-		if (vlan_get_protocol(skb) == htons(ETH_P_IPV6)) {
-			rc = XMIT_CSUM_V6;
-			if (ipv6_hdr(skb)->nexthdr == IPPROTO_TCP)
-				rc |= XMIT_CSUM_TCP;
+	protocol = vlan_get_protocol(skb);
+	if (protocol == htons(ETH_P_IPV6)) {
+		rc = XMIT_CSUM_V6;
+		prot = ipv6_hdr(skb)->nexthdr;
+	} else {
+		rc = XMIT_CSUM_V4;
+		prot = ip_hdr(skb)->protocol;
+	}
 
+	if (!CHIP_IS_E1x(bp) && skb->encapsulation) {
+		if (inner_ip_hdr(skb)->version == 6) {
+			rc |= XMIT_CSUM_ENC_V6;
+			if (inner_ipv6_hdr(skb)->nexthdr == IPPROTO_TCP)
+				rc |= XMIT_CSUM_TCP;
 		} else {
-			rc = XMIT_CSUM_V4;
-			if (ip_hdr(skb)->protocol == IPPROTO_TCP)
+			rc |= XMIT_CSUM_ENC_V4;
+			if (inner_ip_hdr(skb)->protocol == IPPROTO_TCP)
 				rc |= XMIT_CSUM_TCP;
 		}
 	}
+	if (prot == IPPROTO_TCP)
+		rc |= XMIT_CSUM_TCP;
 
-	if (skb_is_gso_v6(skb))
-		rc |= XMIT_GSO_V6 | XMIT_CSUM_TCP | XMIT_CSUM_V6;
-	else if (skb_is_gso(skb))
-		rc |= XMIT_GSO_V4 | XMIT_CSUM_V4 | XMIT_CSUM_TCP;
+	if (skb_is_gso_v6(skb)) {
+		rc |= (XMIT_GSO_V6 | XMIT_CSUM_TCP | XMIT_CSUM_V6);
+		if (rc & XMIT_CSUM_ENC)
+			rc |= XMIT_GSO_ENC_V6;
+	} else if (skb_is_gso(skb)) {
+		rc |= (XMIT_GSO_V4 | XMIT_CSUM_V4 | XMIT_CSUM_TCP);
+		if (rc & XMIT_CSUM_ENC)
+			rc |= XMIT_GSO_ENC_V4;
+	}
 
 	return rc;
 }
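
The reworked bnx2x_xmit_type() classifies the outer header first, then, on non-E1x (57712/578xx) parts with skb->encapsulation set, inspects the inner headers as well. Tracing a GSO TCP flow inside a UDP-based IPv4 tunnel through it (a worked example, assuming non-E1x hardware):

    /* outer IPv4          -> rc  = XMIT_CSUM_V4, prot = IPPROTO_UDP
     * inner IPv4, TCP     -> rc |= XMIT_CSUM_ENC_V4 | XMIT_CSUM_TCP
     * prot != IPPROTO_TCP -> no additional bit from the outer header
     * skb_is_gso()        -> rc |= XMIT_GSO_V4 | XMIT_CSUM_V4 | XMIT_CSUM_TCP
     * rc & XMIT_CSUM_ENC  -> rc |= XMIT_GSO_ENC_V4
     */
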
@@ -3257,14 +3284,23 @@ exit_lbl:
 }
 #endif
 
-static inline void bnx2x_set_pbd_gso_e2(struct sk_buff *skb, u32 *parsing_data,
-					u32 xmit_type)
+static void bnx2x_set_pbd_gso_e2(struct sk_buff *skb, u32 *parsing_data,
+				 u32 xmit_type)
 {
+	struct ipv6hdr *ipv6;
+
 	*parsing_data |= (skb_shinfo(skb)->gso_size <<
 			  ETH_TX_PARSE_BD_E2_LSO_MSS_SHIFT) &
 			  ETH_TX_PARSE_BD_E2_LSO_MSS;
-	if ((xmit_type & XMIT_GSO_V6) &&
-	    (ipv6_hdr(skb)->nexthdr == NEXTHDR_IPV6))
+
+	if (xmit_type & XMIT_GSO_ENC_V6)
+		ipv6 = inner_ipv6_hdr(skb);
+	else if (xmit_type & XMIT_GSO_V6)
+		ipv6 = ipv6_hdr(skb);
+	else
+		ipv6 = NULL;
+
+	if (ipv6 && ipv6->nexthdr == NEXTHDR_IPV6)
 		*parsing_data |= ETH_TX_PARSE_BD_E2_IPV6_WITH_EXT_HDR;
 }
 
@@ -3275,13 +3311,13 @@ static inline void bnx2x_set_pbd_gso_e2(struct sk_buff *skb, u32 *parsing_data,
  * @pbd:	parse BD
  * @xmit_type:	xmit flags
  */
-static inline void bnx2x_set_pbd_gso(struct sk_buff *skb,
-				     struct eth_tx_parse_bd_e1x *pbd,
-				     u32 xmit_type)
+static void bnx2x_set_pbd_gso(struct sk_buff *skb,
+			      struct eth_tx_parse_bd_e1x *pbd,
+			      u32 xmit_type)
 {
 	pbd->lso_mss = cpu_to_le16(skb_shinfo(skb)->gso_size);
 	pbd->tcp_send_seq = bswab32(tcp_hdr(skb)->seq);
-	pbd->tcp_flags = pbd_tcp_flags(skb);
+	pbd->tcp_flags = pbd_tcp_flags(tcp_hdr(skb));
 
 	if (xmit_type & XMIT_GSO_V4) {
 		pbd->ip_id = bswab16(ip_hdr(skb)->id);
@@ -3301,6 +3337,40 @@ static inline void bnx2x_set_pbd_gso(struct sk_buff *skb,
 }
 
 /**
+ * bnx2x_set_pbd_csum_enc - update PBD with checksum and return header length
+ *
+ * @bp:			driver handle
+ * @skb:		packet skb
+ * @parsing_data:	data to be updated
+ * @xmit_type:		xmit flags
+ *
+ * 57712/578xx related, when skb has encapsulation
+ */
+static u8 bnx2x_set_pbd_csum_enc(struct bnx2x *bp, struct sk_buff *skb,
+				 u32 *parsing_data, u32 xmit_type)
+{
+	*parsing_data |=
+		((((u8 *)skb_inner_transport_header(skb) - skb->data) >> 1) <<
+		ETH_TX_PARSE_BD_E2_L4_HDR_START_OFFSET_W_SHIFT) &
+		ETH_TX_PARSE_BD_E2_L4_HDR_START_OFFSET_W;
+
+	if (xmit_type & XMIT_CSUM_TCP) {
+		*parsing_data |= ((inner_tcp_hdrlen(skb) / 4) <<
+			ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW_SHIFT) &
+			ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW;
+
+		return skb_inner_transport_header(skb) +
+			inner_tcp_hdrlen(skb) - skb->data;
+	}
+
+	/* We support checksum offload for TCP and UDP only.
+	 * No need to pass the UDP header length - it's a constant.
+	 */
+	return skb_inner_transport_header(skb) +
+		sizeof(struct udphdr) - skb->data;
+}
+
+/**
  * bnx2x_set_pbd_csum_e2 - update PBD with checksum and return header length
  *
  * @bp:		driver handle
@@ -3308,15 +3378,15 @@ static inline void bnx2x_set_pbd_gso(struct sk_buff *skb,
  * @parsing_data: data to be updated
  * @xmit_type:	xmit flags
  *
- * 57712 related
+ * 57712/578xx related
  */
-static inline u8 bnx2x_set_pbd_csum_e2(struct bnx2x *bp, struct sk_buff *skb,
-				       u32 *parsing_data, u32 xmit_type)
+static u8 bnx2x_set_pbd_csum_e2(struct bnx2x *bp, struct sk_buff *skb,
+				u32 *parsing_data, u32 xmit_type)
 {
 	*parsing_data |=
 		((((u8 *)skb_transport_header(skb) - skb->data) >> 1) <<
-		ETH_TX_PARSE_BD_E2_TCP_HDR_START_OFFSET_W_SHIFT) &
-		ETH_TX_PARSE_BD_E2_TCP_HDR_START_OFFSET_W;
+		ETH_TX_PARSE_BD_E2_L4_HDR_START_OFFSET_W_SHIFT) &
+		ETH_TX_PARSE_BD_E2_L4_HDR_START_OFFSET_W;
 
 	if (xmit_type & XMIT_CSUM_TCP) {
 		*parsing_data |= ((tcp_hdrlen(skb) / 4) <<
@@ -3331,17 +3401,15 @@ static inline u8 bnx2x_set_pbd_csum_e2(struct bnx2x *bp, struct sk_buff *skb,
 	return skb_transport_header(skb) + sizeof(struct udphdr) - skb->data;
 }
 
-static inline void bnx2x_set_sbd_csum(struct bnx2x *bp, struct sk_buff *skb,
-	struct eth_tx_start_bd *tx_start_bd, u32 xmit_type)
+/* set FW indication according to inner or outer protocols if tunneled */
+static void bnx2x_set_sbd_csum(struct bnx2x *bp, struct sk_buff *skb,
+			       struct eth_tx_start_bd *tx_start_bd,
+			       u32 xmit_type)
 {
 	tx_start_bd->bd_flags.as_bitfield |= ETH_TX_BD_FLAGS_L4_CSUM;
 
-	if (xmit_type & XMIT_CSUM_V4)
-		tx_start_bd->bd_flags.as_bitfield |=
-					ETH_TX_BD_FLAGS_IP_CSUM;
-	else
-		tx_start_bd->bd_flags.as_bitfield |=
-					ETH_TX_BD_FLAGS_IPV6;
+	if (xmit_type & (XMIT_CSUM_ENC_V6 | XMIT_CSUM_V6))
+		tx_start_bd->bd_flags.as_bitfield |= ETH_TX_BD_FLAGS_IPV6;
 
 	if (!(xmit_type & XMIT_CSUM_TCP))
 		tx_start_bd->bd_flags.as_bitfield |= ETH_TX_BD_FLAGS_IS_UDP;
@@ -3355,9 +3423,9 @@ static inline void bnx2x_set_sbd_csum(struct bnx2x *bp, struct sk_buff *skb,
  * @pbd:	parse BD to be updated
  * @xmit_type:	xmit flags
  */
-static inline u8 bnx2x_set_pbd_csum(struct bnx2x *bp, struct sk_buff *skb,
-				    struct eth_tx_parse_bd_e1x *pbd,
-				    u32 xmit_type)
+static u8 bnx2x_set_pbd_csum(struct bnx2x *bp, struct sk_buff *skb,
+			     struct eth_tx_parse_bd_e1x *pbd,
+			     u32 xmit_type)
 {
 	u8 hlen = (skb_network_header(skb) - skb->data) >> 1;
 
@@ -3403,6 +3471,75 @@ static inline u8 bnx2x_set_pbd_csum(struct bnx2x *bp, struct sk_buff *skb,
 	return hlen;
 }
 
+static void bnx2x_update_pbds_gso_enc(struct sk_buff *skb,
+				      struct eth_tx_parse_bd_e2 *pbd_e2,
+				      struct eth_tx_parse_2nd_bd *pbd2,
+				      u16 *global_data,
+				      u32 xmit_type)
+{
+	u16 hlen_w = 0;
+	u8 outerip_off, outerip_len = 0;
+	/* from outer IP to transport */
+	hlen_w = (skb_inner_transport_header(skb) -
+		  skb_network_header(skb)) >> 1;
+
+	/* transport len */
+	if (xmit_type & XMIT_CSUM_TCP)
+		hlen_w += inner_tcp_hdrlen(skb) >> 1;
+	else
+		hlen_w += sizeof(struct udphdr) >> 1;
+
+	pbd2->fw_ip_hdr_to_payload_w = hlen_w;
+
+	if (xmit_type & XMIT_CSUM_ENC_V4) {
+		struct iphdr *iph = ip_hdr(skb);
+		pbd2->fw_ip_csum_wo_len_flags_frag =
+			bswab16(csum_fold((~iph->check) -
+					  iph->tot_len - iph->frag_off));
+	} else {
+		pbd2->fw_ip_hdr_to_payload_w =
+			hlen_w - ((sizeof(struct ipv6hdr)) >> 1);
+	}
+
+	pbd2->tcp_send_seq = bswab32(inner_tcp_hdr(skb)->seq);
+
+	pbd2->tcp_flags = pbd_tcp_flags(inner_tcp_hdr(skb));
+
+	if (xmit_type & XMIT_GSO_V4) {
+		pbd2->hw_ip_id = bswab16(inner_ip_hdr(skb)->id);
+
+		pbd_e2->data.tunnel_data.pseudo_csum =
+			bswab16(~csum_tcpudp_magic(
+					inner_ip_hdr(skb)->saddr,
+					inner_ip_hdr(skb)->daddr,
+					0, IPPROTO_TCP, 0));
+
+		outerip_len = ip_hdr(skb)->ihl << 1;
+	} else {
+		pbd_e2->data.tunnel_data.pseudo_csum =
+			bswab16(~csum_ipv6_magic(
+					&inner_ipv6_hdr(skb)->saddr,
+					&inner_ipv6_hdr(skb)->daddr,
+					0, IPPROTO_TCP, 0));
+	}
+
+	outerip_off = (skb_network_header(skb) - skb->data) >> 1;
+
+	*global_data |=
+		outerip_off |
+		(!!(xmit_type & XMIT_CSUM_V6) <<
+			ETH_TX_PARSE_2ND_BD_IP_HDR_TYPE_OUTER_SHIFT) |
+		(outerip_len <<
+			ETH_TX_PARSE_2ND_BD_IP_HDR_LEN_OUTER_W_SHIFT) |
+		((skb->protocol == cpu_to_be16(ETH_P_8021Q)) <<
+			ETH_TX_PARSE_2ND_BD_LLC_SNAP_EN_SHIFT);
+
+	if (ip_hdr(skb)->protocol == IPPROTO_UDP) {
+		SET_FLAG(*global_data, ETH_TX_PARSE_2ND_BD_TUNNEL_UDP_EXIST, 1);
+		pbd2->tunnel_udp_hdr_start_w = skb_transport_offset(skb) >> 1;
+	}
+}
+
 /* called with netif_tx_lock
  * bnx2x_tx_int() runs without netif_tx_lock unless it needs to call
  * netif_wake_queue()
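
bnx2x_update_pbds_gso_enc() seeds the tunnel parsing BD with a zero-length pseudo-header checksum over the inner addresses; the hardware folds in the real segment length per generated packet. A small sketch of that seeding step for the inner-IPv4 case (bswab16() is the driver-local byte-swap macro defined earlier in this file; my_inner_pseudo_csum() is illustrative):

    static __le16 my_inner_pseudo_csum(struct sk_buff *skb)
    {
            const struct iphdr *iph = inner_ip_hdr(skb);

            /* length 0: hardware completes the sum per segment */
            return bswab16(~csum_tcpudp_magic(iph->saddr, iph->daddr,
                                              0, IPPROTO_TCP, 0));
    }
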
@@ -3418,6 +3555,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct eth_tx_bd *tx_data_bd, *total_pkt_bd = NULL;
 	struct eth_tx_parse_bd_e1x *pbd_e1x = NULL;
 	struct eth_tx_parse_bd_e2 *pbd_e2 = NULL;
+	struct eth_tx_parse_2nd_bd *pbd2 = NULL;
 	u32 pbd_e2_parsing_data = 0;
 	u16 pkt_prod, bd_prod;
 	int nbd, txq_index;
@@ -3485,7 +3623,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		mac_type = MULTICAST_ADDRESS;
 	}
 
-#if (MAX_SKB_FRAGS >= MAX_FETCH_BD - 3)
+#if (MAX_SKB_FRAGS >= MAX_FETCH_BD - BDS_PER_TX_PKT)
 	/* First, check if we need to linearize the skb (due to FW
 	   restrictions). No need to check fragmentation if page size > 8K
 	   (there will be no violation to FW restrictions) */
@@ -3533,12 +3671,9 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	first_bd = tx_start_bd;
 
 	tx_start_bd->bd_flags.as_bitfield = ETH_TX_BD_FLAGS_START_BD;
-	SET_FLAG(tx_start_bd->general_data,
-		 ETH_TX_START_BD_PARSE_NBDS,
-		 0);
 
-	/* header nbd */
-	SET_FLAG(tx_start_bd->general_data, ETH_TX_START_BD_HDR_NBDS, 1);
+	/* header nbd: indirectly zero other flags! */
+	tx_start_bd->general_data = 1 << ETH_TX_START_BD_HDR_NBDS_SHIFT;
 
 	/* remember the first BD of the packet */
 	tx_buf->first_bd = txdata->tx_bd_prod;
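
The rewritten line relies on a plain assignment to clear every other field in general_data in a single store, where the old code cleared PARSE_NBDS explicitly and then OR-ed in HDR_NBDS. A hedged sketch of the two idioms (the mask value is an assumption; SET_FLAG is paraphrased, not copied from the driver):

/* Sketch: read-modify-write of one bitfield vs. whole-byte assignment
 * that implicitly zeroes the remaining fields.
 */
#include <stdint.h>

#define HDR_NBDS_MASK	0x0f	/* assumed (0xF << 0) */
#define HDR_NBDS_SHIFT	0

static inline void set_field(uint8_t *reg, uint8_t mask, uint8_t shift,
			     uint8_t val)
{
	*reg = (*reg & ~mask) | ((val << shift) & mask);
}

void example(uint8_t *general_data)
{
	/* old style: preserve other bits, update one field */
	set_field(general_data, HDR_NBDS_MASK, HDR_NBDS_SHIFT, 1);

	/* new style: one store that also zeroes every other field */
	*general_data = 1 << HDR_NBDS_SHIFT;
}
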
@@ -3558,19 +3693,16 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		/* when transmitting in a vf, start bd must hold the ethertype
 		 * for fw to enforce it
 		 */
-#ifndef BNX2X_STOP_ON_ERROR
-		if (IS_VF(bp)) {
-#endif
+		if (IS_VF(bp))
 			tx_start_bd->vlan_or_ethertype =
 				cpu_to_le16(ntohs(eth->h_proto));
-#ifndef BNX2X_STOP_ON_ERROR
-		} else {
+		else
 			/* used by FW for packet accounting */
 			tx_start_bd->vlan_or_ethertype = cpu_to_le16(pkt_prod);
-		}
-#endif
 	}
 
+	nbd = 2; /* start_bd + pbd + frags (updated when pages are mapped) */
+
 	/* turn on parsing and get a BD */
 	bd_prod = TX_BD(NEXT_TX_IDX(bd_prod));
 
@@ -3580,23 +3712,58 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (!CHIP_IS_E1x(bp)) {
 		pbd_e2 = &txdata->tx_desc_ring[bd_prod].parse_bd_e2;
 		memset(pbd_e2, 0, sizeof(struct eth_tx_parse_bd_e2));
-		/* Set PBD in checksum offload case */
-		if (xmit_type & XMIT_CSUM)
+
+		if (xmit_type & XMIT_CSUM_ENC) {
+			u16 global_data = 0;
+
+			/* Set PBD in enc checksum offload case */
+			hlen = bnx2x_set_pbd_csum_enc(bp, skb,
+						      &pbd_e2_parsing_data,
+						      xmit_type);
+
+			/* turn on 2nd parsing and get a BD */
+			bd_prod = TX_BD(NEXT_TX_IDX(bd_prod));
+
+			pbd2 = &txdata->tx_desc_ring[bd_prod].parse_2nd_bd;
+
+			memset(pbd2, 0, sizeof(*pbd2));
+
+			pbd_e2->data.tunnel_data.ip_hdr_start_inner_w =
+				(skb_inner_network_header(skb) -
+				 skb->data) >> 1;
+
+			if (xmit_type & XMIT_GSO_ENC)
+				bnx2x_update_pbds_gso_enc(skb, pbd_e2, pbd2,
+							  &global_data,
+							  xmit_type);
+
+			pbd2->global_data = cpu_to_le16(global_data);
+
+			/* add addition parse BD indication to start BD */
+			SET_FLAG(tx_start_bd->general_data,
+				 ETH_TX_START_BD_PARSE_NBDS, 1);
+			/* set encapsulation flag in start BD */
+			SET_FLAG(tx_start_bd->general_data,
+				 ETH_TX_START_BD_TUNNEL_EXIST, 1);
+			nbd++;
+		} else if (xmit_type & XMIT_CSUM) {
+			/* Set PBD in checksum offload case w/o encapsulation */
 			hlen = bnx2x_set_pbd_csum_e2(bp, skb,
 						     &pbd_e2_parsing_data,
 						     xmit_type);
+		}
 
-		if (IS_MF_SI(bp) || IS_VF(bp)) {
-			/* fill in the MAC addresses in the PBD - for local
-			 * switching
-			 */
-			bnx2x_set_fw_mac_addr(&pbd_e2->src_mac_addr_hi,
-					      &pbd_e2->src_mac_addr_mid,
-					      &pbd_e2->src_mac_addr_lo,
+		/* Add the macs to the parsing BD this is a vf */
+		if (IS_VF(bp)) {
+			/* override GRE parameters in BD */
+			bnx2x_set_fw_mac_addr(&pbd_e2->data.mac_addr.src_hi,
+					      &pbd_e2->data.mac_addr.src_mid,
+					      &pbd_e2->data.mac_addr.src_lo,
 					      eth->h_source);
-			bnx2x_set_fw_mac_addr(&pbd_e2->dst_mac_addr_hi,
-					      &pbd_e2->dst_mac_addr_mid,
-					      &pbd_e2->dst_mac_addr_lo,
+
+			bnx2x_set_fw_mac_addr(&pbd_e2->data.mac_addr.dst_hi,
+					      &pbd_e2->data.mac_addr.dst_mid,
+					      &pbd_e2->data.mac_addr.dst_lo,
 					      eth->h_dest);
 		}
 
@@ -3618,14 +3785,13 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* Setup the data pointer of the first BD of the packet */
 	tx_start_bd->addr_hi = cpu_to_le32(U64_HI(mapping));
 	tx_start_bd->addr_lo = cpu_to_le32(U64_LO(mapping));
-	nbd = 2; /* start_bd + pbd + frags (updated when pages are mapped) */
 	tx_start_bd->nbytes = cpu_to_le16(skb_headlen(skb));
 	pkt_size = tx_start_bd->nbytes;
 
 	DP(NETIF_MSG_TX_QUEUED,
-	   "first bd @%p addr (%x:%x) nbd %d nbytes %d flags %x vlan %x\n",
+	   "first bd @%p addr (%x:%x) nbytes %d flags %x vlan %x\n",
 	   tx_start_bd, tx_start_bd->addr_hi, tx_start_bd->addr_lo,
-	   le16_to_cpu(tx_start_bd->nbd), le16_to_cpu(tx_start_bd->nbytes),
+	   le16_to_cpu(tx_start_bd->nbytes),
 	   tx_start_bd->bd_flags.as_bitfield,
 	   le16_to_cpu(tx_start_bd->vlan_or_ethertype));
 
@@ -3638,10 +3804,12 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 		tx_start_bd->bd_flags.as_bitfield |= ETH_TX_BD_FLAGS_SW_LSO;
 
-		if (unlikely(skb_headlen(skb) > hlen))
+		if (unlikely(skb_headlen(skb) > hlen)) {
+			nbd++;
 			bd_prod = bnx2x_tx_split(bp, txdata, tx_buf,
 						 &tx_start_bd, hlen,
-						 bd_prod, ++nbd);
+						 bd_prod);
+		}
 		if (!CHIP_IS_E1x(bp))
 			bnx2x_set_pbd_gso_e2(skb, &pbd_e2_parsing_data,
 					     xmit_type);
@@ -3731,9 +3899,13 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (pbd_e2)
 		DP(NETIF_MSG_TX_QUEUED,
 		   "PBD (E2) @%p dst %x %x %x src %x %x %x parsing_data %x\n",
-		   pbd_e2, pbd_e2->dst_mac_addr_hi, pbd_e2->dst_mac_addr_mid,
-		   pbd_e2->dst_mac_addr_lo, pbd_e2->src_mac_addr_hi,
-		   pbd_e2->src_mac_addr_mid, pbd_e2->src_mac_addr_lo,
+		   pbd_e2,
+		   pbd_e2->data.mac_addr.dst_hi,
+		   pbd_e2->data.mac_addr.dst_mid,
+		   pbd_e2->data.mac_addr.dst_lo,
+		   pbd_e2->data.mac_addr.src_hi,
+		   pbd_e2->data.mac_addr.src_mid,
+		   pbd_e2->data.mac_addr.src_lo,
 		   pbd_e2->parsing_data);
 	DP(NETIF_MSG_TX_QUEUED, "doorbell: nbd %d bd %u\n", nbd, bd_prod);
 
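
Across these hunks the nbd bookkeeping starts at 2 (start BD plus parsing BD), grows by one when a second parsing BD is attached for encapsulation, and by one more when the header is split off for LSO. A sketch of the resulting count, under the assumption that each page fragment maps to exactly one data BD:

/* Sketch: BD count for one transmitted skb, mirroring the nbd updates
 * in bnx2x_start_xmit() above (assumption: one BD per fragment).
 */
static int tx_bd_count(int nr_frags, int encapsulated, int split_header)
{
	int nbd = 2;			/* start BD + parsing BD */

	if (encapsulated)
		nbd++;			/* second parsing BD */
	if (split_header)
		nbd++;			/* header BD created by tx-split */

	return nbd + nr_frags;		/* one data BD per page fragment */
}
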
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
index aee7671ff4c1..151675d66b0d 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
@@ -50,13 +50,13 @@ extern int int_mode;
 	} \
 } while (0)
 
 #define BNX2X_PCI_ALLOC(x, y, size) \
-	do { \
-		x = dma_alloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL); \
-		if (x == NULL) \
-			goto alloc_mem_err; \
-		memset((void *)x, 0, size); \
-	} while (0)
+do { \
+	x = dma_alloc_coherent(&bp->pdev->dev, size, y, \
+			       GFP_KERNEL | __GFP_ZERO); \
+	if (x == NULL) \
+		goto alloc_mem_err; \
+} while (0)
 
 #define BNX2X_ALLOC(x, size) \
 	do { \
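
The macro change folds the explicit memset() into the allocation by passing __GFP_ZERO, so the DMA buffer arrives pre-zeroed. A hedged userspace analogue of the same refactor:

/* Sketch: replacing alloc-then-memset with a zeroing allocator; calloc
 * stands in for dma_alloc_coherent(..., GFP_KERNEL | __GFP_ZERO).
 */
#include <stdlib.h>
#include <string.h>

void *alloc_old(size_t size)
{
	void *p = malloc(size);

	if (p)
		memset(p, 0, size);	/* old style: zero after allocating */
	return p;
}

void *alloc_new(size_t size)
{
	return calloc(1, size);		/* new style: allocator zeroes */
}
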
@@ -295,16 +295,29 @@ void bnx2x_int_disable_sync(struct bnx2x *bp, int disable_hw);
 void bnx2x_nic_init_cnic(struct bnx2x *bp);
 
 /**
- * bnx2x_nic_init - init driver internals.
+ * bnx2x_preirq_nic_init - init driver internals.
  *
  * @bp:		driver handle
  *
 * Initializes:
- *  - rings
+ *  - fastpath object
+ *  - fastpath rings
+ *  etc.
+ */
+void bnx2x_pre_irq_nic_init(struct bnx2x *bp);
+
+/**
+ * bnx2x_postirq_nic_init - init driver internals.
+ *
+ * @bp:		driver handle
+ * @load_code:	COMMON, PORT or FUNCTION
+ *
+ * Initializes:
 *  - status blocks
+ *  - slowpath rings
 *  - etc.
 */
-void bnx2x_nic_init(struct bnx2x *bp, u32 load_code);
+void bnx2x_post_irq_nic_init(struct bnx2x *bp, u32 load_code);
 /**
  * bnx2x_alloc_mem_cnic - allocate driver's memory for cnic.
  *
@@ -496,7 +509,10 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev);
 /* setup_tc callback */
 int bnx2x_setup_tc(struct net_device *dev, u8 num_tc);
 
+int bnx2x_get_vf_config(struct net_device *dev, int vf,
+			struct ifla_vf_info *ivi);
 int bnx2x_set_vf_mac(struct net_device *dev, int queue, u8 *mac);
+int bnx2x_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos);
 
 /* select_queue callback */
 u16 bnx2x_select_queue(struct net_device *dev, struct sk_buff *skb);
@@ -834,7 +850,7 @@ static inline void bnx2x_add_all_napi_cnic(struct bnx2x *bp)
 	/* Add NAPI objects */
 	for_each_rx_queue_cnic(bp, i)
 		netif_napi_add(bp->dev, &bnx2x_fp(bp, i, napi),
-			       bnx2x_poll, BNX2X_NAPI_WEIGHT);
+			       bnx2x_poll, NAPI_POLL_WEIGHT);
 }
 
 static inline void bnx2x_add_all_napi(struct bnx2x *bp)
@@ -844,7 +860,7 @@ static inline void bnx2x_add_all_napi(struct bnx2x *bp)
 	/* Add NAPI objects */
 	for_each_eth_queue(bp, i)
 		netif_napi_add(bp->dev, &bnx2x_fp(bp, i, napi),
-			       bnx2x_poll, BNX2X_NAPI_WEIGHT);
+			       bnx2x_poll, NAPI_POLL_WEIGHT);
 }
 
 static inline void bnx2x_del_all_napi_cnic(struct bnx2x *bp)
@@ -970,6 +986,9 @@ static inline int bnx2x_func_start(struct bnx2x *bp)
 	else /* CHIP_IS_E1X */
 		start_params->network_cos_mode = FW_WRR;
 
+	start_params->gre_tunnel_mode = IPGRE_TUNNEL;
+	start_params->gre_tunnel_rss = GRE_INNER_HEADERS_RSS;
+
 	return bnx2x_func_state_change(bp, &func_params);
 }
 
@@ -1396,4 +1415,8 @@ static inline bool bnx2x_is_valid_ether_addr(struct bnx2x *bp, u8 *addr)
  *
 */
 void bnx2x_fill_fw_str(struct bnx2x *bp, char *buf, size_t buf_len);
+
+int bnx2x_drain_tx_queues(struct bnx2x *bp);
+void bnx2x_squeeze_objects(struct bnx2x *bp);
+
 #endif /* BNX2X_CMN_H */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
index edfa67adf2f9..ce1a91618677 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
@@ -1364,11 +1364,27 @@ static int bnx2x_nvram_read(struct bnx2x *bp, u32 offset, u8 *ret_buf,
 	return rc;
 }
 
+static int bnx2x_nvram_read32(struct bnx2x *bp, u32 offset, u32 *buf,
+			      int buf_size)
+{
+	int rc;
+
+	rc = bnx2x_nvram_read(bp, offset, (u8 *)buf, buf_size);
+
+	if (!rc) {
+		__be32 *be = (__be32 *)buf;
+
+		while ((buf_size -= 4) >= 0)
+			*buf++ = be32_to_cpu(*be++);
+	}
+
+	return rc;
+}
+
 static int bnx2x_get_eeprom(struct net_device *dev,
 			    struct ethtool_eeprom *eeprom, u8 *eebuf)
 {
 	struct bnx2x *bp = netdev_priv(dev);
-	int rc;
 
 	if (!netif_running(dev)) {
 		DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
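
bnx2x_nvram_read32() reads raw big-endian flash words and converts them to host order in place. A standalone rendering of that loop, assuming a buffer whose size is a multiple of four bytes, as the callers guarantee:

/* Sketch: in-place big-endian -> host conversion over a word buffer,
 * the same loop shape as bnx2x_nvram_read32() above.
 */
#include <stdint.h>

static uint32_t be32_to_host(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | p[3];
}

void fix_endianness(uint32_t *buf, int buf_size)
{
	uint8_t *raw = (uint8_t *)buf;

	while ((buf_size -= 4) >= 0) {
		*buf++ = be32_to_host(raw);
		raw += 4;
	}
}
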
@@ -1383,9 +1399,7 @@ static int bnx2x_get_eeprom(struct net_device *dev,
 
 	/* parameters already validated in ethtool_get_eeprom */
 
-	rc = bnx2x_nvram_read(bp, eeprom->offset, eebuf, eeprom->len);
-
-	return rc;
+	return bnx2x_nvram_read(bp, eeprom->offset, eebuf, eeprom->len);
 }
 
 static int bnx2x_get_module_eeprom(struct net_device *dev,
@@ -1393,10 +1407,9 @@ static int bnx2x_get_module_eeprom(struct net_device *dev,
 				   u8 *data)
 {
 	struct bnx2x *bp = netdev_priv(dev);
-	int rc = 0, phy_idx;
+	int rc = -EINVAL, phy_idx;
 	u8 *user_data = data;
-	int remaining_len = ee->len, xfer_size;
-	unsigned int page_off = ee->offset;
+	unsigned int start_addr = ee->offset, xfer_size = 0;
 
 	if (!netif_running(dev)) {
 		DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
@@ -1405,21 +1418,52 @@ static int bnx2x_get_module_eeprom(struct net_device *dev,
 	}
 
 	phy_idx = bnx2x_get_cur_phy_idx(bp);
-	bnx2x_acquire_phy_lock(bp);
-	while (!rc && remaining_len > 0) {
-		xfer_size = (remaining_len > SFP_EEPROM_PAGE_SIZE) ?
-			SFP_EEPROM_PAGE_SIZE : remaining_len;
+
+	/* Read A0 section */
+	if (start_addr < ETH_MODULE_SFF_8079_LEN) {
+		/* Limit transfer size to the A0 section boundary */
+		if (start_addr + ee->len > ETH_MODULE_SFF_8079_LEN)
+			xfer_size = ETH_MODULE_SFF_8079_LEN - start_addr;
+		else
+			xfer_size = ee->len;
+		bnx2x_acquire_phy_lock(bp);
 		rc = bnx2x_read_sfp_module_eeprom(&bp->link_params.phy[phy_idx],
 						  &bp->link_params,
-						  page_off,
+						  I2C_DEV_ADDR_A0,
+						  start_addr,
 						  xfer_size,
 						  user_data);
-		remaining_len -= xfer_size;
+		bnx2x_release_phy_lock(bp);
+		if (rc) {
+			DP(BNX2X_MSG_ETHTOOL, "Failed reading A0 section\n");
+
+			return -EINVAL;
+		}
 		user_data += xfer_size;
-		page_off += xfer_size;
+		start_addr += xfer_size;
 	}
 
-	bnx2x_release_phy_lock(bp);
+	/* Read A2 section */
+	if ((start_addr >= ETH_MODULE_SFF_8079_LEN) &&
+	    (start_addr < ETH_MODULE_SFF_8472_LEN)) {
+		xfer_size = ee->len - xfer_size;
+		/* Limit transfer size to the A2 section boundary */
+		if (start_addr + xfer_size > ETH_MODULE_SFF_8472_LEN)
+			xfer_size = ETH_MODULE_SFF_8472_LEN - start_addr;
+		start_addr -= ETH_MODULE_SFF_8079_LEN;
+		bnx2x_acquire_phy_lock(bp);
+		rc = bnx2x_read_sfp_module_eeprom(&bp->link_params.phy[phy_idx],
+						  &bp->link_params,
+						  I2C_DEV_ADDR_A2,
+						  start_addr,
+						  xfer_size,
+						  user_data);
+		bnx2x_release_phy_lock(bp);
+		if (rc) {
+			DP(BNX2X_MSG_ETHTOOL, "Failed reading A2 section\n");
+			return -EINVAL;
+		}
+	}
 	return rc;
 }
 
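
The rewritten bnx2x_get_module_eeprom() treats the flat ethtool offset space the way SFF-8472 lays it out: bytes below ETH_MODULE_SFF_8079_LEN come from the module's A0 page, and bytes up to ETH_MODULE_SFF_8472_LEN come from the A2 diagnostics page, re-based to zero. A sketch of just the offset arithmetic (256 and 512 are the standard values behind those two ethtool constants):

/* Sketch: map a flat SFF-8472 offset onto (page, in-page address),
 * mirroring the A0/A2 split in the hunk above.
 */
struct eeprom_loc {
	int page;		/* 0 = A0, 1 = A2 */
	unsigned int addr;	/* address within that page */
};

static struct eeprom_loc sff8472_locate(unsigned int offset)
{
	struct eeprom_loc loc;

	if (offset < 256) {		/* ETH_MODULE_SFF_8079_LEN */
		loc.page = 0;
		loc.addr = offset;
	} else {			/* up to ETH_MODULE_SFF_8472_LEN */
		loc.page = 1;
		loc.addr = offset - 256;	/* re-base into A2 */
	}
	return loc;
}
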
@@ -1427,24 +1471,50 @@ static int bnx2x_get_module_info(struct net_device *dev,
 				 struct ethtool_modinfo *modinfo)
 {
 	struct bnx2x *bp = netdev_priv(dev);
-	int phy_idx;
+	int phy_idx, rc;
+	u8 sff8472_comp, diag_type;
+
 	if (!netif_running(dev)) {
 		DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
 		   "cannot access eeprom when the interface is down\n");
 		return -EAGAIN;
 	}
-
 	phy_idx = bnx2x_get_cur_phy_idx(bp);
-	switch (bp->link_params.phy[phy_idx].media_type) {
-	case ETH_PHY_SFPP_10G_FIBER:
-	case ETH_PHY_SFP_1G_FIBER:
-	case ETH_PHY_DA_TWINAX:
+	bnx2x_acquire_phy_lock(bp);
+	rc = bnx2x_read_sfp_module_eeprom(&bp->link_params.phy[phy_idx],
+					  &bp->link_params,
+					  I2C_DEV_ADDR_A0,
+					  SFP_EEPROM_SFF_8472_COMP_ADDR,
+					  SFP_EEPROM_SFF_8472_COMP_SIZE,
+					  &sff8472_comp);
+	bnx2x_release_phy_lock(bp);
+	if (rc) {
+		DP(BNX2X_MSG_ETHTOOL, "Failed reading SFF-8472 comp field\n");
+		return -EINVAL;
+	}
+
+	bnx2x_acquire_phy_lock(bp);
+	rc = bnx2x_read_sfp_module_eeprom(&bp->link_params.phy[phy_idx],
+					  &bp->link_params,
+					  I2C_DEV_ADDR_A0,
+					  SFP_EEPROM_DIAG_TYPE_ADDR,
+					  SFP_EEPROM_DIAG_TYPE_SIZE,
+					  &diag_type);
+	bnx2x_release_phy_lock(bp);
+	if (rc) {
+		DP(BNX2X_MSG_ETHTOOL, "Failed reading Diag Type field\n");
+		return -EINVAL;
+	}
+
+	if (!sff8472_comp ||
+	    (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ)) {
 		modinfo->type = ETH_MODULE_SFF_8079;
 		modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
-		return 0;
-	default:
-		return -EOPNOTSUPP;
+	} else {
+		modinfo->type = ETH_MODULE_SFF_8472;
+		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
 	}
+	return 0;
 }
 
 static int bnx2x_nvram_write_dword(struct bnx2x *bp, u32 offset, u32 val,
@@ -1496,9 +1566,8 @@ static int bnx2x_nvram_write1(struct bnx2x *bp, u32 offset, u8 *data_buf,
 			    int buf_size)
 {
 	int rc;
-	u32 cmd_flags;
-	u32 align_offset;
-	__be32 val;
+	u32 cmd_flags, align_offset, val;
+	__be32 val_be;
 
 	if (offset + buf_size > bp->common.flash_size) {
 		DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
@@ -1517,16 +1586,16 @@
 
 	cmd_flags = (MCPR_NVM_COMMAND_FIRST | MCPR_NVM_COMMAND_LAST);
 	align_offset = (offset & ~0x03);
-	rc = bnx2x_nvram_read_dword(bp, align_offset, &val, cmd_flags);
+	rc = bnx2x_nvram_read_dword(bp, align_offset, &val_be, cmd_flags);
 
 	if (rc == 0) {
-		val &= ~(0xff << BYTE_OFFSET(offset));
-		val |= (*data_buf << BYTE_OFFSET(offset));
-
 		/* nvram data is returned as an array of bytes
 		 * convert it back to cpu order
 		 */
-		val = be32_to_cpu(val);
+		val = be32_to_cpu(val_be);
+
+		val &= ~le32_to_cpu(0xff << BYTE_OFFSET(offset));
+		val |= le32_to_cpu(*data_buf << BYTE_OFFSET(offset));
 
 		rc = bnx2x_nvram_write_dword(bp, align_offset, val,
 					     cmd_flags);
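
bnx2x_nvram_write1() does a read-modify-write of a single byte inside the aligned 32-bit flash word; the reordering above converts the word to CPU order first and only then splices the byte in. The merge step itself, as a standalone sketch (BYTE_OFFSET() is assumed here to be 8 * (offset & 3)):

/* Sketch: replace byte (offset & 3) inside an aligned 32-bit word
 * before writing the word back to NVRAM.
 */
#include <stdint.h>

static uint32_t splice_byte(uint32_t word, uint32_t offset, uint8_t data)
{
	unsigned int shift = 8 * (offset & 3);	/* assumed BYTE_OFFSET() */

	word &= ~(0xffu << shift);		/* clear the target byte */
	word |= (uint32_t)data << shift;	/* insert the new byte */
	return word;
}
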
@@ -1820,12 +1889,15 @@ static int bnx2x_set_pauseparam(struct net_device *dev,
 			bp->link_params.req_flow_ctrl[cfg_idx] =
 						BNX2X_FLOW_CTRL_AUTO;
 		}
-		bp->link_params.req_fc_auto_adv = BNX2X_FLOW_CTRL_NONE;
+		bp->link_params.req_fc_auto_adv = 0;
 		if (epause->rx_pause)
 			bp->link_params.req_fc_auto_adv |= BNX2X_FLOW_CTRL_RX;
 
 		if (epause->tx_pause)
 			bp->link_params.req_fc_auto_adv |= BNX2X_FLOW_CTRL_TX;
+
+		if (!bp->link_params.req_fc_auto_adv)
+			bp->link_params.req_fc_auto_adv |= BNX2X_FLOW_CTRL_NONE;
 	}
 
 	DP(BNX2X_MSG_ETHTOOL,
@@ -2526,14 +2598,168 @@ static int bnx2x_test_ext_loopback(struct bnx2x *bp)
 	return rc;
 }
 
+struct code_entry {
+	u32 sram_start_addr;
+	u32 code_attribute;
+#define CODE_IMAGE_TYPE_MASK		0xf0800003
+#define CODE_IMAGE_VNTAG_PROFILES_DATA	0xd0000003
+#define CODE_IMAGE_LENGTH_MASK		0x007ffffc
+#define CODE_IMAGE_TYPE_EXTENDED_DIR	0xe0000000
+	u32 nvm_start_addr;
+};
+
+#define CODE_ENTRY_MAX			16
+#define CODE_ENTRY_EXTENDED_DIR_IDX	15
+#define MAX_IMAGES_IN_EXTENDED_DIR	64
+#define NVRAM_DIR_OFFSET		0x14
+
+#define EXTENDED_DIR_EXISTS(code)					  \
+	((code & CODE_IMAGE_TYPE_MASK) == CODE_IMAGE_TYPE_EXTENDED_DIR && \
+	 (code & CODE_IMAGE_LENGTH_MASK) != 0)
+
 #define CRC32_RESIDUAL			0xdebb20e3
+#define CRC_BUFF_SIZE			256
+
+static int bnx2x_nvram_crc(struct bnx2x *bp,
+			   int offset,
+			   int size,
+			   u8 *buff)
+{
+	u32 crc = ~0;
+	int rc = 0, done = 0;
+
+	DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
+	   "NVRAM CRC from 0x%08x to 0x%08x\n", offset, offset + size);
+
+	while (done < size) {
+		int count = min_t(int, size - done, CRC_BUFF_SIZE);
+
+		rc = bnx2x_nvram_read(bp, offset + done, buff, count);
+
+		if (rc)
+			return rc;
+
+		crc = crc32_le(crc, buff, count);
+		done += count;
+	}
+
+	if (crc != CRC32_RESIDUAL)
+		rc = -EINVAL;
+
+	return rc;
+}
+
+static int bnx2x_test_nvram_dir(struct bnx2x *bp,
+				struct code_entry *entry,
+				u8 *buff)
+{
+	size_t size = entry->code_attribute & CODE_IMAGE_LENGTH_MASK;
+	u32 type = entry->code_attribute & CODE_IMAGE_TYPE_MASK;
+	int rc;
+
+	/* Zero-length images and AFEX profiles do not have CRC */
+	if (size == 0 || type == CODE_IMAGE_VNTAG_PROFILES_DATA)
+		return 0;
+
+	rc = bnx2x_nvram_crc(bp, entry->nvm_start_addr, size, buff);
+	if (rc)
+		DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
+		   "image %x has failed crc test (rc %d)\n", type, rc);
+
+	return rc;
+}
+
+static int bnx2x_test_dir_entry(struct bnx2x *bp, u32 addr, u8 *buff)
+{
+	int rc;
+	struct code_entry entry;
+
+	rc = bnx2x_nvram_read32(bp, addr, (u32 *)&entry, sizeof(entry));
+	if (rc)
+		return rc;
+
+	return bnx2x_test_nvram_dir(bp, &entry, buff);
+}
+
+static int bnx2x_test_nvram_ext_dirs(struct bnx2x *bp, u8 *buff)
+{
+	u32 rc, cnt, dir_offset = NVRAM_DIR_OFFSET;
+	struct code_entry entry;
+	int i;
+
+	rc = bnx2x_nvram_read32(bp,
+				dir_offset +
+				sizeof(entry) * CODE_ENTRY_EXTENDED_DIR_IDX,
+				(u32 *)&entry, sizeof(entry));
+	if (rc)
+		return rc;
+
+	if (!EXTENDED_DIR_EXISTS(entry.code_attribute))
+		return 0;
+
+	rc = bnx2x_nvram_read32(bp, entry.nvm_start_addr,
+				&cnt, sizeof(u32));
+	if (rc)
+		return rc;
+
+	dir_offset = entry.nvm_start_addr + 8;
+
+	for (i = 0; i < cnt && i < MAX_IMAGES_IN_EXTENDED_DIR; i++) {
+		rc = bnx2x_test_dir_entry(bp, dir_offset +
+					      sizeof(struct code_entry) * i,
+					  buff);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
+static int bnx2x_test_nvram_dirs(struct bnx2x *bp, u8 *buff)
+{
+	u32 rc, dir_offset = NVRAM_DIR_OFFSET;
+	int i;
+
+	DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM, "NVRAM DIRS CRC test-set\n");
+
+	for (i = 0; i < CODE_ENTRY_EXTENDED_DIR_IDX; i++) {
+		rc = bnx2x_test_dir_entry(bp, dir_offset +
+					      sizeof(struct code_entry) * i,
+					  buff);
+		if (rc)
+			return rc;
+	}
+
+	return bnx2x_test_nvram_ext_dirs(bp, buff);
+}
+
+struct crc_pair {
+	int offset;
+	int size;
+};
+
+static int bnx2x_test_nvram_tbl(struct bnx2x *bp,
+				const struct crc_pair *nvram_tbl, u8 *buf)
+{
+	int i;
+
+	for (i = 0; nvram_tbl[i].size; i++) {
+		int rc = bnx2x_nvram_crc(bp, nvram_tbl[i].offset,
+					 nvram_tbl[i].size, buf);
+		if (rc) {
+			DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
+			   "nvram_tbl[%d] has failed crc test (rc %d)\n",
+			   i, rc);
+			return rc;
+		}
+	}
+
+	return 0;
+}
 
 static int bnx2x_test_nvram(struct bnx2x *bp)
 {
-	static const struct {
-		int offset;
-		int size;
-	} nvram_tbl[] = {
+	const struct crc_pair nvram_tbl[] = {
 		{     0,  0x14 }, /* bootstrap */
 		{  0x14,  0xec }, /* dir */
 		{ 0x100, 0x350 }, /* manuf_info */
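
bnx2x_nvram_crc() never extracts the stored CRC: each NVRAM image ends with its own CRC-32, so running crc32_le over the image including its trailer must land on the fixed residual 0xdebb20e3. A self-contained sketch of that check, using a bitwise CRC-32 (reflected polynomial 0xEDB88320, seed ~0, no final inversion, matching the comparison above):

/* Sketch: CRC-32 residual test in the style of bnx2x_nvram_crc(). A
 * block carrying a valid trailing CRC yields 0xdebb20e3 when the CRC
 * is run over the whole block, trailer included.
 */
#include <stdint.h>
#include <stddef.h>

#define CRC32_RESIDUAL 0xdebb20e3u

static uint32_t crc32_le_update(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--) {
		int i;

		crc ^= *p++;
		for (i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0xedb88320u & -(crc & 1));
	}
	return crc;
}

int block_crc_ok(const uint8_t *block, size_t len_with_trailer)
{
	return crc32_le_update(~0u, block, len_with_trailer) == CRC32_RESIDUAL;
}
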
@@ -2542,30 +2768,33 @@ static int bnx2x_test_nvram(struct bnx2x *bp)
 		{ 0x708,  0x70 }, /* manuf_key_info */
 		{     0,     0 }
 	};
-	__be32 *buf;
-	u8 *data;
-	int i, rc;
-	u32 magic, crc;
+	const struct crc_pair nvram_tbl2[] = {
+		{ 0x7e8, 0x350 }, /* manuf_info2 */
+		{ 0xb38,  0xf0 }, /* feature_info */
+		{     0,     0 }
+	};
+
+	u8 *buf;
+	int rc;
+	u32 magic;
 
 	if (BP_NOMCP(bp))
 		return 0;
 
-	buf = kmalloc(0x350, GFP_KERNEL);
+	buf = kmalloc(CRC_BUFF_SIZE, GFP_KERNEL);
 	if (!buf) {
 		DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM, "kmalloc failed\n");
 		rc = -ENOMEM;
 		goto test_nvram_exit;
 	}
-	data = (u8 *)buf;
 
-	rc = bnx2x_nvram_read(bp, 0, data, 4);
+	rc = bnx2x_nvram_read32(bp, 0, &magic, sizeof(magic));
 	if (rc) {
 		DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
 		   "magic value read (rc %d)\n", rc);
 		goto test_nvram_exit;
 	}
 
-	magic = be32_to_cpu(buf[0]);
 	if (magic != 0x669955aa) {
 		DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
 		   "wrong magic value (0x%08x)\n", magic);
@@ -2573,25 +2802,26 @@ static int bnx2x_test_nvram(struct bnx2x *bp)
 		goto test_nvram_exit;
 	}
 
-	for (i = 0; nvram_tbl[i].size; i++) {
+	DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM, "Port 0 CRC test-set\n");
+	rc = bnx2x_test_nvram_tbl(bp, nvram_tbl, buf);
+	if (rc)
+		goto test_nvram_exit;
 
-		rc = bnx2x_nvram_read(bp, nvram_tbl[i].offset, data,
-				      nvram_tbl[i].size);
-		if (rc) {
-			DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
-			   "nvram_tbl[%d] read data (rc %d)\n", i, rc);
-			goto test_nvram_exit;
-		}
+	if (!CHIP_IS_E1x(bp) && !CHIP_IS_57811xx(bp)) {
+		u32 hide = SHMEM_RD(bp, dev_info.shared_hw_config.config2) &
+			   SHARED_HW_CFG_HIDE_PORT1;
 
-		crc = ether_crc_le(nvram_tbl[i].size, data);
-		if (crc != CRC32_RESIDUAL) {
+		if (!hide) {
 			DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
-			   "nvram_tbl[%d] wrong crc value (0x%08x)\n", i, crc);
-			rc = -ENODEV;
-			goto test_nvram_exit;
+			   "Port 1 CRC test-set\n");
+			rc = bnx2x_test_nvram_tbl(bp, nvram_tbl2, buf);
+			if (rc)
+				goto test_nvram_exit;
 		}
 	}
 
+	rc = bnx2x_test_nvram_dirs(bp, buf);
+
 test_nvram_exit:
 	kfree(buf);
 	return rc;
@@ -3232,7 +3462,32 @@ static const struct ethtool_ops bnx2x_ethtool_ops = {
 	.get_ts_info		= ethtool_op_get_ts_info,
 };
 
-void bnx2x_set_ethtool_ops(struct net_device *netdev)
+static const struct ethtool_ops bnx2x_vf_ethtool_ops = {
+	.get_settings		= bnx2x_get_settings,
+	.set_settings		= bnx2x_set_settings,
+	.get_drvinfo		= bnx2x_get_drvinfo,
+	.get_msglevel		= bnx2x_get_msglevel,
+	.set_msglevel		= bnx2x_set_msglevel,
+	.get_link		= bnx2x_get_link,
+	.get_coalesce		= bnx2x_get_coalesce,
+	.get_ringparam		= bnx2x_get_ringparam,
+	.set_ringparam		= bnx2x_set_ringparam,
+	.get_sset_count		= bnx2x_get_sset_count,
+	.get_strings		= bnx2x_get_strings,
+	.get_ethtool_stats	= bnx2x_get_ethtool_stats,
+	.get_rxnfc		= bnx2x_get_rxnfc,
+	.set_rxnfc		= bnx2x_set_rxnfc,
+	.get_rxfh_indir_size	= bnx2x_get_rxfh_indir_size,
+	.get_rxfh_indir		= bnx2x_get_rxfh_indir,
+	.set_rxfh_indir		= bnx2x_set_rxfh_indir,
+	.get_channels		= bnx2x_get_channels,
+	.set_channels		= bnx2x_set_channels,
+};
+
+void bnx2x_set_ethtool_ops(struct bnx2x *bp, struct net_device *netdev)
 {
-	SET_ETHTOOL_OPS(netdev, &bnx2x_ethtool_ops);
+	if (IS_PF(bp))
+		SET_ETHTOOL_OPS(netdev, &bnx2x_ethtool_ops);
+	else /* vf */
+		SET_ETHTOOL_OPS(netdev, &bnx2x_vf_ethtool_ops);
 }
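
A VF has no PHY or NVRAM access, so it gets the trimmed ops table; the pattern is simply selecting between two function-pointer tables at registration time. A reduced sketch (types and stubs invented for illustration):

/* Sketch: choosing between a full and a reduced ops table, as
 * bnx2x_set_ethtool_ops() now does for PF vs. VF.
 */
struct ops {
	int (*get_link)(void *dev);
	int (*get_eeprom)(void *dev);	/* PF only; left NULL for a VF */
};

static int stub_link(void *dev)   { (void)dev; return 1; }
static int stub_eeprom(void *dev) { (void)dev; return 0; }

static const struct ops pf_ops = { stub_link, stub_eeprom };
static const struct ops vf_ops = { stub_link, 0 };

static const struct ops *select_ops(int is_pf)
{
	return is_pf ? &pf_ops : &vf_ops;
}
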
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
index e5f808377c91..84aecdf06f7a 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_fw_defs.h
@@ -30,31 +30,31 @@
  * IRO[138].m2) + ((sbId) * IRO[138].m3))
 #define CSTORM_IGU_MODE_OFFSET (IRO[157].base)
 #define CSTORM_ISCSI_CQ_SIZE_OFFSET(pfId) \
-	(IRO[316].base + ((pfId) * IRO[316].m1))
-#define CSTORM_ISCSI_CQ_SQN_SIZE_OFFSET(pfId) \
 	(IRO[317].base + ((pfId) * IRO[317].m1))
+#define CSTORM_ISCSI_CQ_SQN_SIZE_OFFSET(pfId) \
+	(IRO[318].base + ((pfId) * IRO[318].m1))
 #define CSTORM_ISCSI_EQ_CONS_OFFSET(pfId, iscsiEqId) \
-	(IRO[309].base + ((pfId) * IRO[309].m1) + ((iscsiEqId) * IRO[309].m2))
+	(IRO[310].base + ((pfId) * IRO[310].m1) + ((iscsiEqId) * IRO[310].m2))
 #define CSTORM_ISCSI_EQ_NEXT_EQE_ADDR_OFFSET(pfId, iscsiEqId) \
-	(IRO[311].base + ((pfId) * IRO[311].m1) + ((iscsiEqId) * IRO[311].m2))
+	(IRO[312].base + ((pfId) * IRO[312].m1) + ((iscsiEqId) * IRO[312].m2))
 #define CSTORM_ISCSI_EQ_NEXT_PAGE_ADDR_OFFSET(pfId, iscsiEqId) \
-	(IRO[310].base + ((pfId) * IRO[310].m1) + ((iscsiEqId) * IRO[310].m2))
+	(IRO[311].base + ((pfId) * IRO[311].m1) + ((iscsiEqId) * IRO[311].m2))
 #define CSTORM_ISCSI_EQ_NEXT_PAGE_ADDR_VALID_OFFSET(pfId, iscsiEqId) \
-	(IRO[312].base + ((pfId) * IRO[312].m1) + ((iscsiEqId) * IRO[312].m2))
+	(IRO[313].base + ((pfId) * IRO[313].m1) + ((iscsiEqId) * IRO[313].m2))
 #define CSTORM_ISCSI_EQ_PROD_OFFSET(pfId, iscsiEqId) \
-	(IRO[308].base + ((pfId) * IRO[308].m1) + ((iscsiEqId) * IRO[308].m2))
+	(IRO[309].base + ((pfId) * IRO[309].m1) + ((iscsiEqId) * IRO[309].m2))
 #define CSTORM_ISCSI_EQ_SB_INDEX_OFFSET(pfId, iscsiEqId) \
-	(IRO[314].base + ((pfId) * IRO[314].m1) + ((iscsiEqId) * IRO[314].m2))
+	(IRO[315].base + ((pfId) * IRO[315].m1) + ((iscsiEqId) * IRO[315].m2))
 #define CSTORM_ISCSI_EQ_SB_NUM_OFFSET(pfId, iscsiEqId) \
-	(IRO[313].base + ((pfId) * IRO[313].m1) + ((iscsiEqId) * IRO[313].m2))
+	(IRO[314].base + ((pfId) * IRO[314].m1) + ((iscsiEqId) * IRO[314].m2))
 #define CSTORM_ISCSI_HQ_SIZE_OFFSET(pfId) \
-	(IRO[315].base + ((pfId) * IRO[315].m1))
+	(IRO[316].base + ((pfId) * IRO[316].m1))
 #define CSTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfId) \
-	(IRO[307].base + ((pfId) * IRO[307].m1))
+	(IRO[308].base + ((pfId) * IRO[308].m1))
 #define CSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfId) \
-	(IRO[306].base + ((pfId) * IRO[306].m1))
+	(IRO[307].base + ((pfId) * IRO[307].m1))
 #define CSTORM_ISCSI_PAGE_SIZE_OFFSET(pfId) \
-	(IRO[305].base + ((pfId) * IRO[305].m1))
+	(IRO[306].base + ((pfId) * IRO[306].m1))
 #define CSTORM_RECORD_SLOW_PATH_OFFSET(funcId) \
 	(IRO[151].base + ((funcId) * IRO[151].m1))
 #define CSTORM_SP_STATUS_BLOCK_DATA_OFFSET(pfId) \
@@ -114,7 +114,7 @@
 #define TSTORM_ISCSI_RQ_SIZE_OFFSET(pfId) \
 	(IRO[268].base + ((pfId) * IRO[268].m1))
 #define TSTORM_ISCSI_TCP_LOCAL_ADV_WND_OFFSET(pfId) \
-	(IRO[277].base + ((pfId) * IRO[277].m1))
+	(IRO[278].base + ((pfId) * IRO[278].m1))
 #define TSTORM_ISCSI_TCP_VARS_FLAGS_OFFSET(pfId) \
 	(IRO[264].base + ((pfId) * IRO[264].m1))
 #define TSTORM_ISCSI_TCP_VARS_LSB_LOCAL_MAC_ADDR_OFFSET(pfId) \
@@ -136,35 +136,32 @@
 #define USTORM_ASSERT_LIST_INDEX_OFFSET	(IRO[177].base)
 #define USTORM_ASSERT_LIST_OFFSET(assertListEntry) \
 	(IRO[176].base + ((assertListEntry) * IRO[176].m1))
-#define USTORM_CQE_PAGE_NEXT_OFFSET(portId, clientId) \
-	(IRO[205].base + ((portId) * IRO[205].m1) + ((clientId) * \
-	IRO[205].m2))
 #define USTORM_ETH_PAUSE_ENABLED_OFFSET(portId) \
 	(IRO[183].base + ((portId) * IRO[183].m1))
 #define USTORM_FCOE_EQ_PROD_OFFSET(pfId) \
-	(IRO[318].base + ((pfId) * IRO[318].m1))
+	(IRO[319].base + ((pfId) * IRO[319].m1))
 #define USTORM_FUNC_EN_OFFSET(funcId) \
 	(IRO[178].base + ((funcId) * IRO[178].m1))
 #define USTORM_ISCSI_CQ_SIZE_OFFSET(pfId) \
-	(IRO[282].base + ((pfId) * IRO[282].m1))
-#define USTORM_ISCSI_CQ_SQN_SIZE_OFFSET(pfId) \
 	(IRO[283].base + ((pfId) * IRO[283].m1))
+#define USTORM_ISCSI_CQ_SQN_SIZE_OFFSET(pfId) \
+	(IRO[284].base + ((pfId) * IRO[284].m1))
 #define USTORM_ISCSI_ERROR_BITMAP_OFFSET(pfId) \
-	(IRO[287].base + ((pfId) * IRO[287].m1))
+	(IRO[288].base + ((pfId) * IRO[288].m1))
 #define USTORM_ISCSI_GLOBAL_BUF_PHYS_ADDR_OFFSET(pfId) \
-	(IRO[284].base + ((pfId) * IRO[284].m1))
+	(IRO[285].base + ((pfId) * IRO[285].m1))
 #define USTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfId) \
-	(IRO[280].base + ((pfId) * IRO[280].m1))
+	(IRO[281].base + ((pfId) * IRO[281].m1))
 #define USTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfId) \
-	(IRO[279].base + ((pfId) * IRO[279].m1))
+	(IRO[280].base + ((pfId) * IRO[280].m1))
 #define USTORM_ISCSI_PAGE_SIZE_OFFSET(pfId) \
-	(IRO[278].base + ((pfId) * IRO[278].m1))
+	(IRO[279].base + ((pfId) * IRO[279].m1))
 #define USTORM_ISCSI_R2TQ_SIZE_OFFSET(pfId) \
-	(IRO[281].base + ((pfId) * IRO[281].m1))
+	(IRO[282].base + ((pfId) * IRO[282].m1))
 #define USTORM_ISCSI_RQ_BUFFER_SIZE_OFFSET(pfId) \
-	(IRO[285].base + ((pfId) * IRO[285].m1))
-#define USTORM_ISCSI_RQ_SIZE_OFFSET(pfId) \
 	(IRO[286].base + ((pfId) * IRO[286].m1))
+#define USTORM_ISCSI_RQ_SIZE_OFFSET(pfId) \
+	(IRO[287].base + ((pfId) * IRO[287].m1))
 #define USTORM_MEM_WORKAROUND_ADDRESS_OFFSET(pfId) \
 	(IRO[182].base + ((pfId) * IRO[182].m1))
 #define USTORM_RECORD_SLOW_PATH_OFFSET(funcId) \
@@ -190,39 +187,39 @@
 #define XSTORM_FUNC_EN_OFFSET(funcId) \
 	(IRO[47].base + ((funcId) * IRO[47].m1))
 #define XSTORM_ISCSI_HQ_SIZE_OFFSET(pfId) \
-	(IRO[295].base + ((pfId) * IRO[295].m1))
+	(IRO[296].base + ((pfId) * IRO[296].m1))
 #define XSTORM_ISCSI_LOCAL_MAC_ADDR0_OFFSET(pfId) \
-	(IRO[298].base + ((pfId) * IRO[298].m1))
-#define XSTORM_ISCSI_LOCAL_MAC_ADDR1_OFFSET(pfId) \
 	(IRO[299].base + ((pfId) * IRO[299].m1))
-#define XSTORM_ISCSI_LOCAL_MAC_ADDR2_OFFSET(pfId) \
+#define XSTORM_ISCSI_LOCAL_MAC_ADDR1_OFFSET(pfId) \
 	(IRO[300].base + ((pfId) * IRO[300].m1))
-#define XSTORM_ISCSI_LOCAL_MAC_ADDR3_OFFSET(pfId) \
+#define XSTORM_ISCSI_LOCAL_MAC_ADDR2_OFFSET(pfId) \
 	(IRO[301].base + ((pfId) * IRO[301].m1))
-#define XSTORM_ISCSI_LOCAL_MAC_ADDR4_OFFSET(pfId) \
+#define XSTORM_ISCSI_LOCAL_MAC_ADDR3_OFFSET(pfId) \
 	(IRO[302].base + ((pfId) * IRO[302].m1))
-#define XSTORM_ISCSI_LOCAL_MAC_ADDR5_OFFSET(pfId) \
+#define XSTORM_ISCSI_LOCAL_MAC_ADDR4_OFFSET(pfId) \
 	(IRO[303].base + ((pfId) * IRO[303].m1))
-#define XSTORM_ISCSI_LOCAL_VLAN_OFFSET(pfId) \
+#define XSTORM_ISCSI_LOCAL_MAC_ADDR5_OFFSET(pfId) \
 	(IRO[304].base + ((pfId) * IRO[304].m1))
+#define XSTORM_ISCSI_LOCAL_VLAN_OFFSET(pfId) \
+	(IRO[305].base + ((pfId) * IRO[305].m1))
 #define XSTORM_ISCSI_NUM_OF_TASKS_OFFSET(pfId) \
-	(IRO[294].base + ((pfId) * IRO[294].m1))
+	(IRO[295].base + ((pfId) * IRO[295].m1))
 #define XSTORM_ISCSI_PAGE_SIZE_LOG_OFFSET(pfId) \
-	(IRO[293].base + ((pfId) * IRO[293].m1))
+	(IRO[294].base + ((pfId) * IRO[294].m1))
 #define XSTORM_ISCSI_PAGE_SIZE_OFFSET(pfId) \
-	(IRO[292].base + ((pfId) * IRO[292].m1))
+	(IRO[293].base + ((pfId) * IRO[293].m1))
 #define XSTORM_ISCSI_R2TQ_SIZE_OFFSET(pfId) \
-	(IRO[297].base + ((pfId) * IRO[297].m1))
+	(IRO[298].base + ((pfId) * IRO[298].m1))
 #define XSTORM_ISCSI_SQ_SIZE_OFFSET(pfId) \
-	(IRO[296].base + ((pfId) * IRO[296].m1))
+	(IRO[297].base + ((pfId) * IRO[297].m1))
 #define XSTORM_ISCSI_TCP_VARS_ADV_WND_SCL_OFFSET(pfId) \
-	(IRO[291].base + ((pfId) * IRO[291].m1))
+	(IRO[292].base + ((pfId) * IRO[292].m1))
 #define XSTORM_ISCSI_TCP_VARS_FLAGS_OFFSET(pfId) \
-	(IRO[290].base + ((pfId) * IRO[290].m1))
+	(IRO[291].base + ((pfId) * IRO[291].m1))
 #define XSTORM_ISCSI_TCP_VARS_TOS_OFFSET(pfId) \
-	(IRO[289].base + ((pfId) * IRO[289].m1))
+	(IRO[290].base + ((pfId) * IRO[290].m1))
 #define XSTORM_ISCSI_TCP_VARS_TTL_OFFSET(pfId) \
-	(IRO[288].base + ((pfId) * IRO[288].m1))
+	(IRO[289].base + ((pfId) * IRO[289].m1))
 #define XSTORM_RATE_SHAPING_PER_VN_VARS_OFFSET(pfId) \
 	(IRO[44].base + ((pfId) * IRO[44].m1))
 #define XSTORM_RECORD_SLOW_PATH_OFFSET(funcId) \
@@ -389,4 +386,8 @@
 
 #define UNDEF_IRO 0x80000000
 
+/* used for defining the amount of FCoE tasks supported for PF */
+#define MAX_FCOE_FUNCS_PER_ENGINE	2
+#define MAX_NUM_FCOE_TASKS_PER_ENGINE	4096
+
 #endif /* BNX2X_FW_DEFS_H */
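
Every one of these offsets is really a table lookup: IRO[n] carries a base plus per-function/per-queue strides, and a firmware update that inserts a table entry shifts every later index, which is all the preceding hunks do. A sketch of the addressing scheme (field names are assumptions read off the macro shapes):

/* Sketch: IRO-style indirect offset table. Macros such as
 * CSTORM_ISCSI_HQ_SIZE_OFFSET(pfId) expand to base + pfId * m1,
 * with further m2/m3 terms where a second or third index is used.
 */
struct iro {
	unsigned int base;
	unsigned int m1, m2, m3;
};

static unsigned int iro_offset(const struct iro *iro, int idx,
			       unsigned int pf_id, unsigned int sub_id)
{
	return iro[idx].base + pf_id * iro[idx].m1 + sub_id * iro[idx].m2;
}
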
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
index 037860ecc343..12f00a40cdf0 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_hsi.h
@@ -114,6 +114,10 @@ struct license_key {
 #define EPIO_CFG_EPIO30				0x0000001f
 #define EPIO_CFG_EPIO31				0x00000020
 
+struct mac_addr {
+	u32 upper;
+	u32 lower;
+};
 
 struct shared_hw_cfg {			 /* NVRAM Offset */
 	/* Up to 16 bytes of NULL-terminated string */
@@ -508,7 +512,22 @@ struct port_hw_cfg {		    /* port 0: 0x12c  port 1: 0x2bc */
 	#define PORT_HW_CFG_PAUSE_ON_HOST_RING_DISABLED	    0x00000000
 	#define PORT_HW_CFG_PAUSE_ON_HOST_RING_ENABLED	    0x00000001
 
-	u32 reserved0[6];				    /* 0x178 */
+	/* SFP+ Tx Equalization: NIC recommended and tested value is 0xBEB2
+	 * LOM recommended and tested value is 0xBEB2. Using a different
+	 * value means using a value not tested by BRCM
+	 */
+	u32 sfi_tap_values;				    /* 0x178 */
+	#define PORT_HW_CFG_TX_EQUALIZATION_MASK	    0x0000FFFF
+	#define PORT_HW_CFG_TX_EQUALIZATION_SHIFT	    0
+
+	/* SFP+ Tx driver broadcast IDRIVER: NIC recommended and tested
+	 * value is 0x2. LOM recommended and tested value is 0x2. Using a
+	 * different value means using a value not tested by BRCM
+	 */
+	#define PORT_HW_CFG_TX_DRV_BROADCAST_MASK	    0x000F0000
+	#define PORT_HW_CFG_TX_DRV_BROADCAST_SHIFT	    16
+
+	u32 reserved0[5];				    /* 0x17c */
 
 	u32 aeu_int_mask;				    /* 0x190 */
 
@@ -2821,8 +2840,8 @@ struct afex_stats {
 
 #define BCM_5710_FW_MAJOR_VERSION			7
 #define BCM_5710_FW_MINOR_VERSION			8
-#define BCM_5710_FW_REVISION_VERSION		2
+#define BCM_5710_FW_REVISION_VERSION		17
 #define BCM_5710_FW_ENGINEERING_VERSION		0
 #define BCM_5710_FW_COMPILE_FLAGS			1
 
 
@@ -3513,11 +3532,14 @@ struct client_init_tx_data {
 #define CLIENT_INIT_TX_DATA_BCAST_ACCEPT_ALL_SHIFT 2
 #define CLIENT_INIT_TX_DATA_ACCEPT_ANY_VLAN (0x1<<3)
 #define CLIENT_INIT_TX_DATA_ACCEPT_ANY_VLAN_SHIFT 3
-#define CLIENT_INIT_TX_DATA_RESERVED1 (0xFFF<<4)
-#define CLIENT_INIT_TX_DATA_RESERVED1_SHIFT 4
+#define CLIENT_INIT_TX_DATA_RESERVED0 (0xFFF<<4)
+#define CLIENT_INIT_TX_DATA_RESERVED0_SHIFT 4
 	u8 default_vlan_flg;
 	u8 force_default_pri_flg;
-	__le32 reserved3;
+	u8 tunnel_lso_inc_ip_id;
+	u8 refuse_outband_vlan_flg;
+	u8 tunnel_non_lso_pcsum_location;
+	u8 reserved1;
 };
 
 /*
@@ -3551,6 +3573,11 @@ struct client_update_ramrod_data {
 	__le16 silent_vlan_mask;
 	u8 silent_vlan_removal_flg;
 	u8 silent_vlan_change_flg;
+	u8 refuse_outband_vlan_flg;
+	u8 refuse_outband_vlan_change_flg;
+	u8 tx_switching_flg;
+	u8 tx_switching_change_flg;
+	__le32 reserved1;
 	__le32 echo;
 };
 
@@ -3620,7 +3647,8 @@ struct eth_classify_header {
  */
 struct eth_classify_mac_cmd {
 	struct eth_classify_cmd_header header;
-	__le32 reserved0;
+	__le16 reserved0;
+	__le16 inner_mac;
 	__le16 mac_lsb;
 	__le16 mac_mid;
 	__le16 mac_msb;
@@ -3633,7 +3661,8 @@ struct eth_classify_mac_cmd {
  */
 struct eth_classify_pair_cmd {
 	struct eth_classify_cmd_header header;
-	__le32 reserved0;
+	__le16 reserved0;
+	__le16 inner_mac;
 	__le16 mac_lsb;
 	__le16 mac_mid;
 	__le16 mac_msb;
@@ -3855,8 +3884,68 @@ struct eth_halt_ramrod_data {
 
 
 /*
- * Command for setting multicast classification for a client
+ * destination and source mac address.
+ */
+struct eth_mac_addresses {
+#if defined(__BIG_ENDIAN)
+	__le16 dst_mid;
+	__le16 dst_lo;
+#elif defined(__LITTLE_ENDIAN)
+	__le16 dst_lo;
+	__le16 dst_mid;
+#endif
+#if defined(__BIG_ENDIAN)
+	__le16 src_lo;
+	__le16 dst_hi;
+#elif defined(__LITTLE_ENDIAN)
+	__le16 dst_hi;
+	__le16 src_lo;
+#endif
+#if defined(__BIG_ENDIAN)
+	__le16 src_hi;
+	__le16 src_mid;
+#elif defined(__LITTLE_ENDIAN)
+	__le16 src_mid;
+	__le16 src_hi;
+#endif
+};
+
+/* tunneling related data */
+struct eth_tunnel_data {
+#if defined(__BIG_ENDIAN)
+	__le16 dst_mid;
+	__le16 dst_lo;
+#elif defined(__LITTLE_ENDIAN)
+	__le16 dst_lo;
+	__le16 dst_mid;
+#endif
+#if defined(__BIG_ENDIAN)
+	__le16 reserved0;
+	__le16 dst_hi;
+#elif defined(__LITTLE_ENDIAN)
+	__le16 dst_hi;
+	__le16 reserved0;
+#endif
+#if defined(__BIG_ENDIAN)
+	u8 reserved1;
+	u8 ip_hdr_start_inner_w;
+	__le16 pseudo_csum;
+#elif defined(__LITTLE_ENDIAN)
+	__le16 pseudo_csum;
+	u8 ip_hdr_start_inner_w;
+	u8 reserved1;
+#endif
+};
+
+/* union for mac addresses and for tunneling data.
+ * considered as tunneling data only if (tunnel_exist == 1).
 */
+union eth_mac_addr_or_tunnel_data {
+	struct eth_mac_addresses mac_addr;
+	struct eth_tunnel_data tunnel_data;
+};
+
+/*Command for setting multicast classification for a client */
 struct eth_multicast_rules_cmd {
 	u8 cmd_general_data;
 #define ETH_MULTICAST_RULES_CMD_RX_CMD (0x1<<0)
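
The #if defined(__BIG_ENDIAN) pairs above declare the two halves of each 32-bit word in opposite order per host endianness, so that the byte-addressed layout the device DMA sees stays the same on both kinds of host. A sketch of the idea on a reduced structure (hedged illustration, not the HSI definition):

/* Sketch: endian-mirrored field layout, as in eth_mac_addresses and
 * eth_tunnel_data above.
 */
#include <stdint.h>

struct le_word_pair {
#if defined(__BIG_ENDIAN)
	uint16_t hi;	/* upper bytes of the 32-bit word */
	uint16_t lo;
#else			/* little-endian host */
	uint16_t lo;
	uint16_t hi;
#endif
};
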
@@ -3874,7 +3963,6 @@ struct eth_multicast_rules_cmd {
 	struct regpair reserved3;
 };
 
-
 /*
  * parameters for multicast classification ramrod
 */
@@ -3883,7 +3971,6 @@ struct eth_multicast_rules_ramrod_data {
 	struct eth_multicast_rules_cmd rules[MULTICAST_RULES_COUNT];
 };
 
-
 /*
  * Place holder for ramrods protocol specific data
 */
@@ -3947,11 +4034,14 @@ struct eth_rss_update_ramrod_data {
 #define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_TCP_CAPABILITY_SHIFT 4
 #define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY (0x1<<5)
 #define ETH_RSS_UPDATE_RAMROD_DATA_IPV6_UDP_CAPABILITY_SHIFT 5
+#define ETH_RSS_UPDATE_RAMROD_DATA_EN_5_TUPLE_CAPABILITY (0x1<<6)
+#define ETH_RSS_UPDATE_RAMROD_DATA_EN_5_TUPLE_CAPABILITY_SHIFT 6
 #define ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY (0x1<<7)
 #define ETH_RSS_UPDATE_RAMROD_DATA_UPDATE_RSS_KEY_SHIFT 7
 	u8 rss_result_mask;
 	u8 rss_mode;
-	__le32 __reserved2;
+	__le16 udp_4tuple_dst_port_mask;
+	__le16 udp_4tuple_dst_port_value;
 	u8 indirection_table[T_ETH_INDIRECTION_TABLE_SIZE];
 	__le32 rss_key[T_ETH_RSS_KEY];
 	__le32 echo;
@@ -4115,6 +4205,23 @@ enum eth_tpa_update_command {
 	MAX_ETH_TPA_UPDATE_COMMAND
 };
 
+/* In case of LSO over IPv4 tunnel, whether to increment
 * IP ID on external IP header or internal IP header
+ */
+enum eth_tunnel_lso_inc_ip_id {
+	EXT_HEADER,
+	INT_HEADER,
+	MAX_ETH_TUNNEL_LSO_INC_IP_ID
+};
+
+/* In case tunnel exist and L4 checksum offload,
+ * the pseudo checksum location, on packet or on BD.
+ */
+enum eth_tunnel_non_lso_pcsum_location {
+	PCSUM_ON_PKT,
+	PCSUM_ON_BD,
+	MAX_ETH_TUNNEL_NON_LSO_PCSUM_LOCATION
+};
 
 /*
  * Tx regular BD structure
@@ -4166,8 +4273,8 @@ struct eth_tx_start_bd {
 #define ETH_TX_START_BD_FORCE_VLAN_MODE_SHIFT 4
 #define ETH_TX_START_BD_PARSE_NBDS (0x3<<5)
 #define ETH_TX_START_BD_PARSE_NBDS_SHIFT 5
-#define ETH_TX_START_BD_RESREVED (0x1<<7)
-#define ETH_TX_START_BD_RESREVED_SHIFT 7
+#define ETH_TX_START_BD_TUNNEL_EXIST (0x1<<7)
+#define ETH_TX_START_BD_TUNNEL_EXIST_SHIFT 7
 };
 
 /*
@@ -4216,15 +4323,10 @@ struct eth_tx_parse_bd_e1x {
  * Tx parsing BD structure for ETH E2
 */
 struct eth_tx_parse_bd_e2 {
-	__le16 dst_mac_addr_lo;
-	__le16 dst_mac_addr_mid;
-	__le16 dst_mac_addr_hi;
-	__le16 src_mac_addr_lo;
-	__le16 src_mac_addr_mid;
-	__le16 src_mac_addr_hi;
+	union eth_mac_addr_or_tunnel_data data;
 	__le32 parsing_data;
-#define ETH_TX_PARSE_BD_E2_TCP_HDR_START_OFFSET_W (0x7FF<<0)
-#define ETH_TX_PARSE_BD_E2_TCP_HDR_START_OFFSET_W_SHIFT 0
+#define ETH_TX_PARSE_BD_E2_L4_HDR_START_OFFSET_W (0x7FF<<0)
+#define ETH_TX_PARSE_BD_E2_L4_HDR_START_OFFSET_W_SHIFT 0
 #define ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW (0xF<<11)
 #define ETH_TX_PARSE_BD_E2_TCP_HDR_LENGTH_DW_SHIFT 11
 #define ETH_TX_PARSE_BD_E2_IPV6_WITH_EXT_HDR (0x1<<15)
@@ -4236,8 +4338,51 @@ struct eth_tx_parse_bd_e2 {
 };
 
 /*
- * The last BD in the BD memory will hold a pointer to the next BD memory
+ * Tx 2nd parsing BD structure for ETH packet
 */
+struct eth_tx_parse_2nd_bd {
+	__le16 global_data;
+#define ETH_TX_PARSE_2ND_BD_IP_HDR_START_OUTER_W (0xF<<0)
+#define ETH_TX_PARSE_2ND_BD_IP_HDR_START_OUTER_W_SHIFT 0
+#define ETH_TX_PARSE_2ND_BD_IP_HDR_TYPE_OUTER (0x1<<4)
+#define ETH_TX_PARSE_2ND_BD_IP_HDR_TYPE_OUTER_SHIFT 4
+#define ETH_TX_PARSE_2ND_BD_LLC_SNAP_EN (0x1<<5)
+#define ETH_TX_PARSE_2ND_BD_LLC_SNAP_EN_SHIFT 5
+#define ETH_TX_PARSE_2ND_BD_NS_FLG (0x1<<6)
+#define ETH_TX_PARSE_2ND_BD_NS_FLG_SHIFT 6
+#define ETH_TX_PARSE_2ND_BD_TUNNEL_UDP_EXIST (0x1<<7)
+#define ETH_TX_PARSE_2ND_BD_TUNNEL_UDP_EXIST_SHIFT 7
+#define ETH_TX_PARSE_2ND_BD_IP_HDR_LEN_OUTER_W (0x1F<<8)
+#define ETH_TX_PARSE_2ND_BD_IP_HDR_LEN_OUTER_W_SHIFT 8
+#define ETH_TX_PARSE_2ND_BD_RESERVED0 (0x7<<13)
+#define ETH_TX_PARSE_2ND_BD_RESERVED0_SHIFT 13
+	__le16 reserved1;
+	u8 tcp_flags;
+#define ETH_TX_PARSE_2ND_BD_FIN_FLG (0x1<<0)
+#define ETH_TX_PARSE_2ND_BD_FIN_FLG_SHIFT 0
+#define ETH_TX_PARSE_2ND_BD_SYN_FLG (0x1<<1)
+#define ETH_TX_PARSE_2ND_BD_SYN_FLG_SHIFT 1
+#define ETH_TX_PARSE_2ND_BD_RST_FLG (0x1<<2)
+#define ETH_TX_PARSE_2ND_BD_RST_FLG_SHIFT 2
+#define ETH_TX_PARSE_2ND_BD_PSH_FLG (0x1<<3)
+#define ETH_TX_PARSE_2ND_BD_PSH_FLG_SHIFT 3
+#define ETH_TX_PARSE_2ND_BD_ACK_FLG (0x1<<4)
+#define ETH_TX_PARSE_2ND_BD_ACK_FLG_SHIFT 4
+#define ETH_TX_PARSE_2ND_BD_URG_FLG (0x1<<5)
+#define ETH_TX_PARSE_2ND_BD_URG_FLG_SHIFT 5
+#define ETH_TX_PARSE_2ND_BD_ECE_FLG (0x1<<6)
+#define ETH_TX_PARSE_2ND_BD_ECE_FLG_SHIFT 6
+#define ETH_TX_PARSE_2ND_BD_CWR_FLG (0x1<<7)
+#define ETH_TX_PARSE_2ND_BD_CWR_FLG_SHIFT 7
+	u8 reserved2;
+	u8 tunnel_udp_hdr_start_w;
+	u8 fw_ip_hdr_to_payload_w;
+	__le16 fw_ip_csum_wo_len_flags_frag;
+	__le16 hw_ip_id;
+	__le32 tcp_send_seq;
+};
+
+/* The last BD in the BD memory will hold a pointer to the next BD memory */
 struct eth_tx_next_bd {
 	__le32 addr_lo;
 	__le32 addr_hi;
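
global_data in the second parsing BD is a packed 16-bit register: outer IP header start (in 16-bit words) in bits 0-3, outer header type at bit 4, LLC/SNAP at bit 5, tunnel-UDP presence at bit 7 and outer IP header length in bits 8-12, which is exactly what bnx2x_update_pbds_gso_enc() OR-s together earlier in this patch. A sketch of the packing using those shifts:

/* Sketch: packing the 2nd parse BD's global_data. Shift values match
 * the ETH_TX_PARSE_2ND_BD_* defines above.
 */
#include <stdint.h>

static uint16_t pack_global_data(unsigned int outerip_off_w, int outer_is_v6,
				 int llc_snap, int tunnel_udp,
				 unsigned int outerip_len_w)
{
	uint16_t gd = 0;

	gd |= outerip_off_w & 0xf;		/* IP_HDR_START_OUTER_W */
	gd |= (!!outer_is_v6) << 4;		/* IP_HDR_TYPE_OUTER */
	gd |= (!!llc_snap) << 5;		/* LLC_SNAP_EN */
	gd |= (!!tunnel_udp) << 7;		/* TUNNEL_UDP_EXIST */
	gd |= (outerip_len_w & 0x1f) << 8;	/* IP_HDR_LEN_OUTER_W */

	return gd;
}
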
@@ -4252,6 +4397,7 @@ union eth_tx_bd_types {
 	struct eth_tx_bd reg_bd;
 	struct eth_tx_parse_bd_e1x parse_bd_e1x;
 	struct eth_tx_parse_bd_e2 parse_bd_e2;
+	struct eth_tx_parse_2nd_bd parse_2nd_bd;
 	struct eth_tx_next_bd next_bd;
 };
 
@@ -4663,10 +4809,10 @@ enum common_spqe_cmd_id {
4663 RAMROD_CMD_ID_COMMON_STOP_TRAFFIC, 4809 RAMROD_CMD_ID_COMMON_STOP_TRAFFIC,
4664 RAMROD_CMD_ID_COMMON_START_TRAFFIC, 4810 RAMROD_CMD_ID_COMMON_START_TRAFFIC,
4665 RAMROD_CMD_ID_COMMON_AFEX_VIF_LISTS, 4811 RAMROD_CMD_ID_COMMON_AFEX_VIF_LISTS,
4812 RAMROD_CMD_ID_COMMON_SET_TIMESYNC,
4666 MAX_COMMON_SPQE_CMD_ID 4813 MAX_COMMON_SPQE_CMD_ID
4667}; 4814};
4668 4815
4669
4670/* 4816/*
4671 * Per-protocol connection types 4817 * Per-protocol connection types
4672 */ 4818 */
@@ -4863,7 +5009,7 @@ struct vf_flr_event_data {
4863 */ 5009 */
4864struct malicious_vf_event_data { 5010struct malicious_vf_event_data {
4865 u8 vf_id; 5011 u8 vf_id;
4866 u8 reserved0; 5012 u8 err_id;
4867 u16 reserved1; 5013 u16 reserved1;
4868 u32 reserved2; 5014 u32 reserved2;
4869 u32 reserved3; 5015 u32 reserved3;
@@ -4969,10 +5115,10 @@ enum event_ring_opcode {
4969 EVENT_RING_OPCODE_CLASSIFICATION_RULES, 5115 EVENT_RING_OPCODE_CLASSIFICATION_RULES,
4970 EVENT_RING_OPCODE_FILTERS_RULES, 5116 EVENT_RING_OPCODE_FILTERS_RULES,
4971 EVENT_RING_OPCODE_MULTICAST_RULES, 5117 EVENT_RING_OPCODE_MULTICAST_RULES,
5118 EVENT_RING_OPCODE_SET_TIMESYNC,
4972 MAX_EVENT_RING_OPCODE 5119 MAX_EVENT_RING_OPCODE
4973}; 5120};
4974 5121
4975
4976/* 5122/*
4977 * Modes for fairness algorithm 5123 * Modes for fairness algorithm
4978 */ 5124 */
@@ -5010,14 +5156,18 @@ struct flow_control_configuration {
5010 */ 5156 */
5011struct function_start_data { 5157struct function_start_data {
5012 u8 function_mode; 5158 u8 function_mode;
5013 u8 reserved; 5159 u8 allow_npar_tx_switching;
5014 __le16 sd_vlan_tag; 5160 __le16 sd_vlan_tag;
5015 __le16 vif_id; 5161 __le16 vif_id;
5016 u8 path_id; 5162 u8 path_id;
5017 u8 network_cos_mode; 5163 u8 network_cos_mode;
5164 u8 dmae_cmd_id;
5165 u8 gre_tunnel_mode;
5166 u8 gre_tunnel_rss;
5167 u8 nvgre_clss_en;
5168 __le16 reserved1[2];
5018}; 5169};
5019 5170
5020
5021struct function_update_data { 5171struct function_update_data {
5022 u8 vif_id_change_flg; 5172 u8 vif_id_change_flg;
5023 u8 afex_default_vlan_change_flg; 5173 u8 afex_default_vlan_change_flg;
@@ -5027,14 +5177,19 @@ struct function_update_data {
5027 __le16 afex_default_vlan; 5177 __le16 afex_default_vlan;
5028 u8 allowed_priorities; 5178 u8 allowed_priorities;
5029 u8 network_cos_mode; 5179 u8 network_cos_mode;
5180 u8 lb_mode_en_change_flg;
5030 u8 lb_mode_en; 5181 u8 lb_mode_en;
5031 u8 tx_switch_suspend_change_flg; 5182 u8 tx_switch_suspend_change_flg;
5032 u8 tx_switch_suspend; 5183 u8 tx_switch_suspend;
5033 u8 echo; 5184 u8 echo;
5034 __le16 reserved1; 5185 u8 reserved1;
5186 u8 update_gre_cfg_flg;
5187 u8 gre_tunnel_mode;
5188 u8 gre_tunnel_rss;
5189 u8 nvgre_clss_en;
5190 u32 reserved3;
5035}; 5191};
5036 5192
5037
5038/* 5193/*
5039 * FW version stored in the Xstorm RAM 5194 * FW version stored in the Xstorm RAM
5040 */ 5195 */
@@ -5061,6 +5216,22 @@ struct fw_version {
5061#define __FW_VERSION_RESERVED_SHIFT 4 5216#define __FW_VERSION_RESERVED_SHIFT 4
5062}; 5217};
5063 5218
5219/* GRE RSS Mode */
5220enum gre_rss_mode {
5221 GRE_OUTER_HEADERS_RSS,
5222 GRE_INNER_HEADERS_RSS,
5223 NVGRE_KEY_ENTROPY_RSS,
5224 MAX_GRE_RSS_MODE
5225};
5226
5227/* GRE Tunnel Mode */
5228enum gre_tunnel_type {
5229 NO_GRE_TUNNEL,
5230 NVGRE_TUNNEL,
5231 L2GRE_TUNNEL,
5232 IPGRE_TUNNEL,
5233 MAX_GRE_TUNNEL_TYPE
5234};
5064 5235
5065/* 5236/*
5066 * Dynamic Host-Coalescing - Driver(host) counters 5237 * Dynamic Host-Coalescing - Driver(host) counters
@@ -5224,6 +5395,26 @@ enum ip_ver {
5224 MAX_IP_VER 5395 MAX_IP_VER
5225}; 5396};
5226 5397
5398/*
5399 * Malicious VF error ID
5400 */
5401enum malicious_vf_error_id {
5402 VF_PF_CHANNEL_NOT_READY,
5403 ETH_ILLEGAL_BD_LENGTHS,
5404 ETH_PACKET_TOO_SHORT,
5405 ETH_PAYLOAD_TOO_BIG,
5406 ETH_ILLEGAL_ETH_TYPE,
5407 ETH_ILLEGAL_LSO_HDR_LEN,
5408 ETH_TOO_MANY_BDS,
5409 ETH_ZERO_HDR_NBDS,
5410 ETH_START_BD_NOT_SET,
5411 ETH_ILLEGAL_PARSE_NBDS,
5412 ETH_IPV6_AND_CHECKSUM,
5413 ETH_VLAN_FLG_INCORRECT,
5414 ETH_ILLEGAL_LSO_MSS,
5415 ETH_TUNNEL_NOT_SUPPORTED,
5416 MAX_MALICIOUS_VF_ERROR_ID
5417};
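
The err_id byte added to struct malicious_vf_event_data above carries one of these values; a sketch of how a PF-side handler might classify them (the helper and the grouping are hypothetical, not driver code):

	static bool vf_error_is_tx_shape(u8 err_id)
	{
		switch ((enum malicious_vf_error_id)err_id) {
		case ETH_ILLEGAL_BD_LENGTHS:
		case ETH_TOO_MANY_BDS:
		case ETH_ZERO_HDR_NBDS:
		case ETH_START_BD_NOT_SET:
		case ETH_ILLEGAL_PARSE_NBDS:
			return true;	/* malformed BD chain from the VF */
		default:
			return false;
		}
	}
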
5227 5418
5228/* 5419/*
5229 * Multi-function modes 5420 * Multi-function modes
@@ -5368,7 +5559,6 @@ struct protocol_common_spe {
5368 union protocol_common_specific_data data; 5559 union protocol_common_specific_data data;
5369}; 5560};
5370 5561
5371
5372/* 5562/*
5373 * The send queue element 5563 * The send queue element
5374 */ 5564 */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
index 0283f343b0d1..9d64b988ab34 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
@@ -27,6 +27,10 @@
27#include "bnx2x.h" 27#include "bnx2x.h"
28#include "bnx2x_cmn.h" 28#include "bnx2x_cmn.h"
29 29
30typedef int (*read_sfp_module_eeprom_func_p)(struct bnx2x_phy *phy,
31 struct link_params *params,
32 u8 dev_addr, u16 addr, u8 byte_cnt,
33 u8 *o_buf, u8);
30/********************************************************/ 34/********************************************************/
31#define ETH_HLEN 14 35#define ETH_HLEN 14
32/* L2 header size + 2*VLANs (8 bytes) + LLC SNAP (8 bytes) */ 36/* L2 header size + 2*VLANs (8 bytes) + LLC SNAP (8 bytes) */
@@ -152,6 +156,7 @@
152#define SFP_EEPROM_CON_TYPE_ADDR 0x2 156#define SFP_EEPROM_CON_TYPE_ADDR 0x2
153 #define SFP_EEPROM_CON_TYPE_VAL_LC 0x7 157 #define SFP_EEPROM_CON_TYPE_VAL_LC 0x7
154 #define SFP_EEPROM_CON_TYPE_VAL_COPPER 0x21 158 #define SFP_EEPROM_CON_TYPE_VAL_COPPER 0x21
159 #define SFP_EEPROM_CON_TYPE_VAL_RJ45 0x22
155 160
156 161
157#define SFP_EEPROM_COMP_CODE_ADDR 0x3 162#define SFP_EEPROM_COMP_CODE_ADDR 0x3
@@ -3127,11 +3132,6 @@ static int bnx2x_bsc_read(struct link_params *params,
3127 int rc = 0; 3132 int rc = 0;
3128 struct bnx2x *bp = params->bp; 3133 struct bnx2x *bp = params->bp;
3129 3134
3130 if ((sl_devid != 0xa0) && (sl_devid != 0xa2)) {
3131 DP(NETIF_MSG_LINK, "invalid sl_devid 0x%x\n", sl_devid);
3132 return -EINVAL;
3133 }
3134
3135 if (xfer_cnt > 16) { 3135 if (xfer_cnt > 16) {
3136 DP(NETIF_MSG_LINK, "invalid xfer_cnt %d. Max is 16 bytes\n", 3136 DP(NETIF_MSG_LINK, "invalid xfer_cnt %d. Max is 16 bytes\n",
3137 xfer_cnt); 3137 xfer_cnt);
@@ -3426,13 +3426,19 @@ static void bnx2x_calc_ieee_aneg_adv(struct bnx2x_phy *phy,
3426 3426
3427 switch (phy->req_flow_ctrl) { 3427 switch (phy->req_flow_ctrl) {
3428 case BNX2X_FLOW_CTRL_AUTO: 3428 case BNX2X_FLOW_CTRL_AUTO:
3429 if (params->req_fc_auto_adv == BNX2X_FLOW_CTRL_BOTH) 3429 switch (params->req_fc_auto_adv) {
3430 case BNX2X_FLOW_CTRL_BOTH:
3430 *ieee_fc |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_BOTH; 3431 *ieee_fc |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_BOTH;
3431 else 3432 break;
3433 case BNX2X_FLOW_CTRL_RX:
3434 case BNX2X_FLOW_CTRL_TX:
3432 *ieee_fc |= 3435 *ieee_fc |=
3433 MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC; 3436 MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC;
3437 break;
3438 default:
3439 break;
3440 }
3434 break; 3441 break;
3435
3436 case BNX2X_FLOW_CTRL_TX: 3442 case BNX2X_FLOW_CTRL_TX:
3437 *ieee_fc |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC; 3443 *ieee_fc |= MDIO_COMBO_IEEE0_AUTO_NEG_ADV_PAUSE_ASYMMETRIC;
3438 break; 3444 break;
@@ -3629,6 +3635,16 @@ static u8 bnx2x_ext_phy_resolve_fc(struct bnx2x_phy *phy,
3629 * init configuration, and set/clear SGMII flag. Internal 3635 * init configuration, and set/clear SGMII flag. Internal
3630 * phy init is done purely in phy_init stage. 3636 * phy init is done purely in phy_init stage.
3631 */ 3637 */
3638#define WC_TX_DRIVER(post2, idriver, ipre) \
3639 ((post2 << MDIO_WC_REG_TX0_TX_DRIVER_POST2_COEFF_OFFSET) | \
3640 (idriver << MDIO_WC_REG_TX0_TX_DRIVER_IDRIVER_OFFSET) | \
3641 (ipre << MDIO_WC_REG_TX0_TX_DRIVER_IPRE_DRIVER_OFFSET))
3642
3643#define WC_TX_FIR(post, main, pre) \
3644 ((post << MDIO_WC_REG_TX_FIR_TAP_POST_TAP_OFFSET) | \
3645 (main << MDIO_WC_REG_TX_FIR_TAP_MAIN_TAP_OFFSET) | \
3646 (pre << MDIO_WC_REG_TX_FIR_TAP_PRE_TAP_OFFSET))
3647
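
These two macros fold the three shift-and-or expressions that were previously open-coded at every call site into one line; for example the Transmit-PMD write below becomes:

	/*	WC_TX_DRIVER(0x02, 0x06, 0x09)
	 * which expands to the exact three-way OR being deleted:
	 *	(0x02 << MDIO_WC_REG_TX0_TX_DRIVER_POST2_COEFF_OFFSET) |
	 *	(0x06 << MDIO_WC_REG_TX0_TX_DRIVER_IDRIVER_OFFSET) |
	 *	(0x09 << MDIO_WC_REG_TX0_TX_DRIVER_IPRE_DRIVER_OFFSET)
	 */
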
3632static void bnx2x_warpcore_enable_AN_KR2(struct bnx2x_phy *phy, 3648static void bnx2x_warpcore_enable_AN_KR2(struct bnx2x_phy *phy,
3633 struct link_params *params, 3649 struct link_params *params,
3634 struct link_vars *vars) 3650 struct link_vars *vars)
@@ -3728,7 +3744,7 @@ static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy,
3728 if (((vars->line_speed == SPEED_AUTO_NEG) && 3744 if (((vars->line_speed == SPEED_AUTO_NEG) &&
3729 (phy->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_1G)) || 3745 (phy->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_1G)) ||
3730 (vars->line_speed == SPEED_1000)) { 3746 (vars->line_speed == SPEED_1000)) {
3731 u32 addr = MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2; 3747 u16 addr = MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2;
3732 an_adv |= (1<<5); 3748 an_adv |= (1<<5);
3733 3749
3734 /* Enable CL37 1G Parallel Detect */ 3750 /* Enable CL37 1G Parallel Detect */
@@ -3753,20 +3769,13 @@ static void bnx2x_warpcore_enable_AN_KR(struct bnx2x_phy *phy,
3753 /* Set Transmit PMD settings */ 3769 /* Set Transmit PMD settings */
3754 lane = bnx2x_get_warpcore_lane(phy, params); 3770 lane = bnx2x_get_warpcore_lane(phy, params);
3755 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, 3771 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD,
3756 MDIO_WC_REG_TX0_TX_DRIVER + 0x10*lane, 3772 MDIO_WC_REG_TX0_TX_DRIVER + 0x10*lane,
3757 ((0x02 << MDIO_WC_REG_TX0_TX_DRIVER_POST2_COEFF_OFFSET) | 3773 WC_TX_DRIVER(0x02, 0x06, 0x09));
3758 (0x06 << MDIO_WC_REG_TX0_TX_DRIVER_IDRIVER_OFFSET) |
3759 (0x09 << MDIO_WC_REG_TX0_TX_DRIVER_IPRE_DRIVER_OFFSET)));
3760 /* Configure the next lane if dual mode */ 3774 /* Configure the next lane if dual mode */
3761 if (phy->flags & FLAGS_WC_DUAL_MODE) 3775 if (phy->flags & FLAGS_WC_DUAL_MODE)
3762 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, 3776 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD,
3763 MDIO_WC_REG_TX0_TX_DRIVER + 0x10*(lane+1), 3777 MDIO_WC_REG_TX0_TX_DRIVER + 0x10*(lane+1),
3764 ((0x02 << 3778 WC_TX_DRIVER(0x02, 0x06, 0x09));
3765 MDIO_WC_REG_TX0_TX_DRIVER_POST2_COEFF_OFFSET) |
3766 (0x06 <<
3767 MDIO_WC_REG_TX0_TX_DRIVER_IDRIVER_OFFSET) |
3768 (0x09 <<
3769 MDIO_WC_REG_TX0_TX_DRIVER_IPRE_DRIVER_OFFSET)));
3770 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, 3779 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD,
3771 MDIO_WC_REG_CL72_USERB0_CL72_OS_DEF_CTRL, 3780 MDIO_WC_REG_CL72_USERB0_CL72_OS_DEF_CTRL,
3772 0x03f0); 3781 0x03f0);
@@ -3909,6 +3918,8 @@ static void bnx2x_warpcore_set_10G_XFI(struct bnx2x_phy *phy,
3909{ 3918{
3910 struct bnx2x *bp = params->bp; 3919 struct bnx2x *bp = params->bp;
3911 u16 misc1_val, tap_val, tx_driver_val, lane, val; 3920 u16 misc1_val, tap_val, tx_driver_val, lane, val;
3921 u32 cfg_tap_val, tx_drv_brdct, tx_equal;
3922
3912 /* Hold rxSeqStart */ 3923 /* Hold rxSeqStart */
3913 bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD, 3924 bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD,
3914 MDIO_WC_REG_DSC2B0_DSC_MISC_CTRL0, 0x8000); 3925 MDIO_WC_REG_DSC2B0_DSC_MISC_CTRL0, 0x8000);
@@ -3952,23 +3963,33 @@ static void bnx2x_warpcore_set_10G_XFI(struct bnx2x_phy *phy,
3952 3963
3953 if (is_xfi) { 3964 if (is_xfi) {
3954 misc1_val |= 0x5; 3965 misc1_val |= 0x5;
3955 tap_val = ((0x08 << MDIO_WC_REG_TX_FIR_TAP_POST_TAP_OFFSET) | 3966 tap_val = WC_TX_FIR(0x08, 0x37, 0x00);
3956 (0x37 << MDIO_WC_REG_TX_FIR_TAP_MAIN_TAP_OFFSET) | 3967 tx_driver_val = WC_TX_DRIVER(0x00, 0x02, 0x03);
3957 (0x00 << MDIO_WC_REG_TX_FIR_TAP_PRE_TAP_OFFSET));
3958 tx_driver_val =
3959 ((0x00 << MDIO_WC_REG_TX0_TX_DRIVER_POST2_COEFF_OFFSET) |
3960 (0x02 << MDIO_WC_REG_TX0_TX_DRIVER_IDRIVER_OFFSET) |
3961 (0x03 << MDIO_WC_REG_TX0_TX_DRIVER_IPRE_DRIVER_OFFSET));
3962
3963 } else { 3968 } else {
3969 cfg_tap_val = REG_RD(bp, params->shmem_base +
3970 offsetof(struct shmem_region, dev_info.
3971 port_hw_config[params->port].
3972 sfi_tap_values));
3973
3974 tx_equal = cfg_tap_val & PORT_HW_CFG_TX_EQUALIZATION_MASK;
3975
3976 tx_drv_brdct = (cfg_tap_val &
3977 PORT_HW_CFG_TX_DRV_BROADCAST_MASK) >>
3978 PORT_HW_CFG_TX_DRV_BROADCAST_SHIFT;
3979
3964 misc1_val |= 0x9; 3980 misc1_val |= 0x9;
3965 tap_val = ((0x0f << MDIO_WC_REG_TX_FIR_TAP_POST_TAP_OFFSET) | 3981
 3966 (0x2b << MDIO_WC_REG_TX_FIR_TAP_MAIN_TAP_OFFSET) | 3982 /* TAP values come from nvram when the value stored there is non-zero */
3967 (0x02 << MDIO_WC_REG_TX_FIR_TAP_PRE_TAP_OFFSET)); 3983 if (tx_equal)
3968 tx_driver_val = 3984 tap_val = (u16)tx_equal;
3969 ((0x03 << MDIO_WC_REG_TX0_TX_DRIVER_POST2_COEFF_OFFSET) | 3985 else
3970 (0x02 << MDIO_WC_REG_TX0_TX_DRIVER_IDRIVER_OFFSET) | 3986 tap_val = WC_TX_FIR(0x0f, 0x2b, 0x02);
3971 (0x06 << MDIO_WC_REG_TX0_TX_DRIVER_IPRE_DRIVER_OFFSET)); 3987
3988 if (tx_drv_brdct)
3989 tx_driver_val = WC_TX_DRIVER(0x03, (u16)tx_drv_brdct,
3990 0x06);
3991 else
3992 tx_driver_val = WC_TX_DRIVER(0x03, 0x02, 0x06);
3972 } 3993 }
3973 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, 3994 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD,
3974 MDIO_WC_REG_SERDESDIGITAL_MISC1, misc1_val); 3995 MDIO_WC_REG_SERDESDIGITAL_MISC1, misc1_val);
@@ -4105,15 +4126,11 @@ static void bnx2x_warpcore_set_20G_DXGXS(struct bnx2x *bp,
4105 /* Set Transmit PMD settings */ 4126 /* Set Transmit PMD settings */
4106 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, 4127 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD,
4107 MDIO_WC_REG_TX_FIR_TAP, 4128 MDIO_WC_REG_TX_FIR_TAP,
4108 ((0x12 << MDIO_WC_REG_TX_FIR_TAP_POST_TAP_OFFSET) | 4129 (WC_TX_FIR(0x12, 0x2d, 0x00) |
4109 (0x2d << MDIO_WC_REG_TX_FIR_TAP_MAIN_TAP_OFFSET) | 4130 MDIO_WC_REG_TX_FIR_TAP_ENABLE));
4110 (0x00 << MDIO_WC_REG_TX_FIR_TAP_PRE_TAP_OFFSET) |
4111 MDIO_WC_REG_TX_FIR_TAP_ENABLE));
4112 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD, 4131 bnx2x_cl45_write(bp, phy, MDIO_WC_DEVAD,
4113 MDIO_WC_REG_TX0_TX_DRIVER + 0x10*lane, 4132 MDIO_WC_REG_TX0_TX_DRIVER + 0x10*lane,
4114 ((0x02 << MDIO_WC_REG_TX0_TX_DRIVER_POST2_COEFF_OFFSET) | 4133 WC_TX_DRIVER(0x02, 0x02, 0x02));
4115 (0x02 << MDIO_WC_REG_TX0_TX_DRIVER_IDRIVER_OFFSET) |
4116 (0x02 << MDIO_WC_REG_TX0_TX_DRIVER_IPRE_DRIVER_OFFSET)));
4117} 4134}
4118 4135
4119static void bnx2x_warpcore_set_sgmii_speed(struct bnx2x_phy *phy, 4136static void bnx2x_warpcore_set_sgmii_speed(struct bnx2x_phy *phy,
@@ -4750,8 +4767,8 @@ void bnx2x_link_status_update(struct link_params *params,
4750 port_mb[port].link_status)); 4767 port_mb[port].link_status));
4751 4768
4752 /* Force link UP in non LOOPBACK_EXT loopback mode(s) */ 4769 /* Force link UP in non LOOPBACK_EXT loopback mode(s) */
4753 if (bp->link_params.loopback_mode != LOOPBACK_NONE && 4770 if (params->loopback_mode != LOOPBACK_NONE &&
4754 bp->link_params.loopback_mode != LOOPBACK_EXT) 4771 params->loopback_mode != LOOPBACK_EXT)
4755 vars->link_status |= LINK_STATUS_LINK_UP; 4772 vars->link_status |= LINK_STATUS_LINK_UP;
4756 4773
4757 if (bnx2x_eee_has_cap(params)) 4774 if (bnx2x_eee_has_cap(params))
@@ -7758,7 +7775,8 @@ static void bnx2x_sfp_set_transmitter(struct link_params *params,
7758 7775
7759static int bnx2x_8726_read_sfp_module_eeprom(struct bnx2x_phy *phy, 7776static int bnx2x_8726_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7760 struct link_params *params, 7777 struct link_params *params,
7761 u16 addr, u8 byte_cnt, u8 *o_buf) 7778 u8 dev_addr, u16 addr, u8 byte_cnt,
7779 u8 *o_buf, u8 is_init)
7762{ 7780{
7763 struct bnx2x *bp = params->bp; 7781 struct bnx2x *bp = params->bp;
7764 u16 val = 0; 7782 u16 val = 0;
@@ -7771,7 +7789,7 @@ static int bnx2x_8726_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7771 /* Set the read command byte count */ 7789 /* Set the read command byte count */
7772 bnx2x_cl45_write(bp, phy, 7790 bnx2x_cl45_write(bp, phy,
7773 MDIO_PMA_DEVAD, MDIO_PMA_REG_SFP_TWO_WIRE_BYTE_CNT, 7791 MDIO_PMA_DEVAD, MDIO_PMA_REG_SFP_TWO_WIRE_BYTE_CNT,
7774 (byte_cnt | 0xa000)); 7792 (byte_cnt | (dev_addr << 8)));
7775 7793
7776 /* Set the read command address */ 7794 /* Set the read command address */
7777 bnx2x_cl45_write(bp, phy, 7795 bnx2x_cl45_write(bp, phy,
@@ -7845,6 +7863,7 @@ static void bnx2x_warpcore_power_module(struct link_params *params,
7845} 7863}
7846static int bnx2x_warpcore_read_sfp_module_eeprom(struct bnx2x_phy *phy, 7864static int bnx2x_warpcore_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7847 struct link_params *params, 7865 struct link_params *params,
7866 u8 dev_addr,
7848 u16 addr, u8 byte_cnt, 7867 u16 addr, u8 byte_cnt,
7849 u8 *o_buf, u8 is_init) 7868 u8 *o_buf, u8 is_init)
7850{ 7869{
@@ -7869,7 +7888,7 @@ static int bnx2x_warpcore_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7869 usleep_range(1000, 2000); 7888 usleep_range(1000, 2000);
7870 bnx2x_warpcore_power_module(params, 1); 7889 bnx2x_warpcore_power_module(params, 1);
7871 } 7890 }
7872 rc = bnx2x_bsc_read(params, phy, 0xa0, addr32, 0, byte_cnt, 7891 rc = bnx2x_bsc_read(params, phy, dev_addr, addr32, 0, byte_cnt,
7873 data_array); 7892 data_array);
7874 } while ((rc != 0) && (++cnt < I2C_WA_RETRY_CNT)); 7893 } while ((rc != 0) && (++cnt < I2C_WA_RETRY_CNT));
7875 7894
@@ -7885,7 +7904,8 @@ static int bnx2x_warpcore_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7885 7904
7886static int bnx2x_8727_read_sfp_module_eeprom(struct bnx2x_phy *phy, 7905static int bnx2x_8727_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7887 struct link_params *params, 7906 struct link_params *params,
7888 u16 addr, u8 byte_cnt, u8 *o_buf) 7907 u8 dev_addr, u16 addr, u8 byte_cnt,
7908 u8 *o_buf, u8 is_init)
7889{ 7909{
7890 struct bnx2x *bp = params->bp; 7910 struct bnx2x *bp = params->bp;
7891 u16 val, i; 7911 u16 val, i;
@@ -7896,6 +7916,15 @@ static int bnx2x_8727_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7896 return -EINVAL; 7916 return -EINVAL;
7897 } 7917 }
7898 7918
7919 /* Set 2-wire transfer rate of SFP+ module EEPROM
 7920 * to 100kHz since some DACs (direct attach cables) do
 7921 * not work at 400kHz.
7922 */
7923 bnx2x_cl45_write(bp, phy,
7924 MDIO_PMA_DEVAD,
7925 MDIO_PMA_REG_8727_TWO_WIRE_SLAVE_ADDR,
7926 ((dev_addr << 8) | 1));
7927
7899 /* Need to read from 1.8000 to clear it */ 7928 /* Need to read from 1.8000 to clear it */
7900 bnx2x_cl45_read(bp, phy, 7929 bnx2x_cl45_read(bp, phy,
7901 MDIO_PMA_DEVAD, 7930 MDIO_PMA_DEVAD,
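
Moving this write into the EEPROM-read path (it used to be issued once from bnx2x_8727_specific_func; that hardcoded write is removed later in this patch) also makes the slave address a parameter. A worked expansion, assuming the comment's reading that bit 0 selects the 100kHz rate:

	/* (dev_addr << 8) | 1  with dev_addr = I2C_DEV_ADDR_A0 (0xa0):
	 *	0xa0 << 8 = 0xa000,  | 1  ->  0xa001
	 * i.e. the exact constant the removed hardcoded write used; the
	 * A2 diagnostics page would yield 0xa201 instead.
	 */
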
@@ -7968,26 +7997,44 @@ static int bnx2x_8727_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7968 7997
7969 return -EINVAL; 7998 return -EINVAL;
7970} 7999}
7971
7972int bnx2x_read_sfp_module_eeprom(struct bnx2x_phy *phy, 8000int bnx2x_read_sfp_module_eeprom(struct bnx2x_phy *phy,
7973 struct link_params *params, u16 addr, 8001 struct link_params *params, u8 dev_addr,
7974 u8 byte_cnt, u8 *o_buf) 8002 u16 addr, u16 byte_cnt, u8 *o_buf)
7975{ 8003{
7976 int rc = -EOPNOTSUPP; 8004 int rc = 0;
8005 struct bnx2x *bp = params->bp;
8006 u8 xfer_size;
8007 u8 *user_data = o_buf;
8008 read_sfp_module_eeprom_func_p read_func;
8009
8010 if ((dev_addr != 0xa0) && (dev_addr != 0xa2)) {
8011 DP(NETIF_MSG_LINK, "invalid dev_addr 0x%x\n", dev_addr);
8012 return -EINVAL;
8013 }
8014
7977 switch (phy->type) { 8015 switch (phy->type) {
7978 case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8726: 8016 case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8726:
7979 rc = bnx2x_8726_read_sfp_module_eeprom(phy, params, addr, 8017 read_func = bnx2x_8726_read_sfp_module_eeprom;
7980 byte_cnt, o_buf); 8018 break;
7981 break;
7982 case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8727: 8019 case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8727:
7983 case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8722: 8020 case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM8722:
7984 rc = bnx2x_8727_read_sfp_module_eeprom(phy, params, addr, 8021 read_func = bnx2x_8727_read_sfp_module_eeprom;
7985 byte_cnt, o_buf); 8022 break;
7986 break;
7987 case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT: 8023 case PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT:
7988 rc = bnx2x_warpcore_read_sfp_module_eeprom(phy, params, addr, 8024 read_func = bnx2x_warpcore_read_sfp_module_eeprom;
7989 byte_cnt, o_buf, 0); 8025 break;
7990 break; 8026 default:
8027 return -EOPNOTSUPP;
8028 }
8029
8030 while (!rc && (byte_cnt > 0)) {
8031 xfer_size = (byte_cnt > SFP_EEPROM_PAGE_SIZE) ?
8032 SFP_EEPROM_PAGE_SIZE : byte_cnt;
8033 rc = read_func(phy, params, dev_addr, addr, xfer_size,
8034 user_data, 0);
8035 byte_cnt -= xfer_size;
8036 user_data += xfer_size;
8037 addr += xfer_size;
7991 } 8038 }
7992 return rc; 8039 return rc;
7993} 8040}
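
With the reworked entry point, a caller may request more than SFP_EEPROM_PAGE_SIZE (16) bytes in one go and the loop above splits it into page-sized transfers. A hypothetical call reading the 16-byte vendor name from the A0 page, mirroring what bnx2x_verify_sfp_module does below:

	struct bnx2x *bp = params->bp;
	u8 name[SFP_EEPROM_VENDOR_NAME_SIZE + 1];

	if (!bnx2x_read_sfp_module_eeprom(phy, params, I2C_DEV_ADDR_A0,
					  SFP_EEPROM_VENDOR_NAME_ADDR,
					  SFP_EEPROM_VENDOR_NAME_SIZE, name)) {
		name[SFP_EEPROM_VENDOR_NAME_SIZE] = '\0';
		DP(NETIF_MSG_LINK, "SFP+ vendor: %s\n", name);
	}
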
@@ -8004,6 +8051,7 @@ static int bnx2x_get_edc_mode(struct bnx2x_phy *phy,
8004 /* First check for copper cable */ 8051 /* First check for copper cable */
8005 if (bnx2x_read_sfp_module_eeprom(phy, 8052 if (bnx2x_read_sfp_module_eeprom(phy,
8006 params, 8053 params,
8054 I2C_DEV_ADDR_A0,
8007 SFP_EEPROM_CON_TYPE_ADDR, 8055 SFP_EEPROM_CON_TYPE_ADDR,
8008 2, 8056 2,
8009 (u8 *)val) != 0) { 8057 (u8 *)val) != 0) {
@@ -8021,6 +8069,7 @@ static int bnx2x_get_edc_mode(struct bnx2x_phy *phy,
8021 */ 8069 */
8022 if (bnx2x_read_sfp_module_eeprom(phy, 8070 if (bnx2x_read_sfp_module_eeprom(phy,
8023 params, 8071 params,
8072 I2C_DEV_ADDR_A0,
8024 SFP_EEPROM_FC_TX_TECH_ADDR, 8073 SFP_EEPROM_FC_TX_TECH_ADDR,
8025 1, 8074 1,
8026 &copper_module_type) != 0) { 8075 &copper_module_type) != 0) {
@@ -8049,20 +8098,24 @@ static int bnx2x_get_edc_mode(struct bnx2x_phy *phy,
8049 break; 8098 break;
8050 } 8099 }
8051 case SFP_EEPROM_CON_TYPE_VAL_LC: 8100 case SFP_EEPROM_CON_TYPE_VAL_LC:
8101 case SFP_EEPROM_CON_TYPE_VAL_RJ45:
8052 check_limiting_mode = 1; 8102 check_limiting_mode = 1;
8053 if ((val[1] & (SFP_EEPROM_COMP_CODE_SR_MASK | 8103 if ((val[1] & (SFP_EEPROM_COMP_CODE_SR_MASK |
8054 SFP_EEPROM_COMP_CODE_LR_MASK | 8104 SFP_EEPROM_COMP_CODE_LR_MASK |
8055 SFP_EEPROM_COMP_CODE_LRM_MASK)) == 0) { 8105 SFP_EEPROM_COMP_CODE_LRM_MASK)) == 0) {
8056 DP(NETIF_MSG_LINK, "1G Optic module detected\n"); 8106 DP(NETIF_MSG_LINK, "1G SFP module detected\n");
8057 gport = params->port; 8107 gport = params->port;
8058 phy->media_type = ETH_PHY_SFP_1G_FIBER; 8108 phy->media_type = ETH_PHY_SFP_1G_FIBER;
8059 phy->req_line_speed = SPEED_1000; 8109 if (phy->req_line_speed != SPEED_1000) {
8060 if (!CHIP_IS_E1x(bp)) 8110 phy->req_line_speed = SPEED_1000;
8061 gport = BP_PATH(bp) + (params->port << 1); 8111 if (!CHIP_IS_E1x(bp)) {
8062 netdev_err(bp->dev, "Warning: Link speed was forced to 1000Mbps." 8112 gport = BP_PATH(bp) +
8063 " Current SFP module in port %d is not" 8113 (params->port << 1);
8064 " compliant with 10G Ethernet\n", 8114 }
8065 gport); 8115 netdev_err(bp->dev,
8116 "Warning: Link speed was forced to 1000Mbps. Current SFP module in port %d is not compliant with 10G Ethernet\n",
8117 gport);
8118 }
8066 } else { 8119 } else {
8067 int idx, cfg_idx = 0; 8120 int idx, cfg_idx = 0;
8068 DP(NETIF_MSG_LINK, "10G Optic module detected\n"); 8121 DP(NETIF_MSG_LINK, "10G Optic module detected\n");
@@ -8101,6 +8154,7 @@ static int bnx2x_get_edc_mode(struct bnx2x_phy *phy,
8101 u8 options[SFP_EEPROM_OPTIONS_SIZE]; 8154 u8 options[SFP_EEPROM_OPTIONS_SIZE];
8102 if (bnx2x_read_sfp_module_eeprom(phy, 8155 if (bnx2x_read_sfp_module_eeprom(phy,
8103 params, 8156 params,
8157 I2C_DEV_ADDR_A0,
8104 SFP_EEPROM_OPTIONS_ADDR, 8158 SFP_EEPROM_OPTIONS_ADDR,
8105 SFP_EEPROM_OPTIONS_SIZE, 8159 SFP_EEPROM_OPTIONS_SIZE,
8106 options) != 0) { 8160 options) != 0) {
@@ -8167,6 +8221,7 @@ static int bnx2x_verify_sfp_module(struct bnx2x_phy *phy,
8167 /* Format the warning message */ 8221 /* Format the warning message */
8168 if (bnx2x_read_sfp_module_eeprom(phy, 8222 if (bnx2x_read_sfp_module_eeprom(phy,
8169 params, 8223 params,
8224 I2C_DEV_ADDR_A0,
8170 SFP_EEPROM_VENDOR_NAME_ADDR, 8225 SFP_EEPROM_VENDOR_NAME_ADDR,
8171 SFP_EEPROM_VENDOR_NAME_SIZE, 8226 SFP_EEPROM_VENDOR_NAME_SIZE,
8172 (u8 *)vendor_name)) 8227 (u8 *)vendor_name))
@@ -8175,6 +8230,7 @@ static int bnx2x_verify_sfp_module(struct bnx2x_phy *phy,
8175 vendor_name[SFP_EEPROM_VENDOR_NAME_SIZE] = '\0'; 8230 vendor_name[SFP_EEPROM_VENDOR_NAME_SIZE] = '\0';
8176 if (bnx2x_read_sfp_module_eeprom(phy, 8231 if (bnx2x_read_sfp_module_eeprom(phy,
8177 params, 8232 params,
8233 I2C_DEV_ADDR_A0,
8178 SFP_EEPROM_PART_NO_ADDR, 8234 SFP_EEPROM_PART_NO_ADDR,
8179 SFP_EEPROM_PART_NO_SIZE, 8235 SFP_EEPROM_PART_NO_SIZE,
8180 (u8 *)vendor_pn)) 8236 (u8 *)vendor_pn))
@@ -8205,12 +8261,13 @@ static int bnx2x_wait_for_sfp_module_initialized(struct bnx2x_phy *phy,
8205 8261
8206 for (timeout = 0; timeout < 60; timeout++) { 8262 for (timeout = 0; timeout < 60; timeout++) {
8207 if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT) 8263 if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT)
8208 rc = bnx2x_warpcore_read_sfp_module_eeprom(phy, 8264 rc = bnx2x_warpcore_read_sfp_module_eeprom(
8209 params, 1, 8265 phy, params, I2C_DEV_ADDR_A0, 1, 1, &val,
8210 1, &val, 1); 8266 1);
8211 else 8267 else
8212 rc = bnx2x_read_sfp_module_eeprom(phy, params, 1, 1, 8268 rc = bnx2x_read_sfp_module_eeprom(phy, params,
8213 &val); 8269 I2C_DEV_ADDR_A0,
8270 1, 1, &val);
8214 if (rc == 0) { 8271 if (rc == 0) {
8215 DP(NETIF_MSG_LINK, 8272 DP(NETIF_MSG_LINK,
8216 "SFP+ module initialization took %d ms\n", 8273 "SFP+ module initialization took %d ms\n",
@@ -8219,7 +8276,8 @@ static int bnx2x_wait_for_sfp_module_initialized(struct bnx2x_phy *phy,
8219 } 8276 }
8220 usleep_range(5000, 10000); 8277 usleep_range(5000, 10000);
8221 } 8278 }
8222 rc = bnx2x_read_sfp_module_eeprom(phy, params, 1, 1, &val); 8279 rc = bnx2x_read_sfp_module_eeprom(phy, params, I2C_DEV_ADDR_A0,
8280 1, 1, &val);
8223 return rc; 8281 return rc;
8224} 8282}
8225 8283
@@ -8376,15 +8434,6 @@ static void bnx2x_8727_specific_func(struct bnx2x_phy *phy,
8376 bnx2x_cl45_write(bp, phy, 8434 bnx2x_cl45_write(bp, phy,
8377 MDIO_PMA_DEVAD, MDIO_PMA_REG_8727_PCS_OPT_CTRL, 8435 MDIO_PMA_DEVAD, MDIO_PMA_REG_8727_PCS_OPT_CTRL,
8378 val); 8436 val);
8379
8380 /* Set 2-wire transfer rate of SFP+ module EEPROM
8381 * to 100Khz since some DACs(direct attached cables) do
8382 * not work at 400Khz.
8383 */
8384 bnx2x_cl45_write(bp, phy,
8385 MDIO_PMA_DEVAD,
8386 MDIO_PMA_REG_8727_TWO_WIRE_SLAVE_ADDR,
8387 0xa001);
8388 break; 8437 break;
8389 default: 8438 default:
8390 DP(NETIF_MSG_LINK, "Function 0x%x not supported by 8727\n", 8439 DP(NETIF_MSG_LINK, "Function 0x%x not supported by 8727\n",
@@ -9528,8 +9577,7 @@ static void bnx2x_save_848xx_spirom_version(struct bnx2x_phy *phy,
9528 } else { 9577 } else {
9529 /* For 32-bit registers in 848xx, access via MDIO2ARM i/f. */ 9578 /* For 32-bit registers in 848xx, access via MDIO2ARM i/f. */
9530 /* (1) set reg 0xc200_0014(SPI_BRIDGE_CTRL_2) to 0x03000000 */ 9579 /* (1) set reg 0xc200_0014(SPI_BRIDGE_CTRL_2) to 0x03000000 */
9531 for (i = 0; i < ARRAY_SIZE(reg_set); 9580 for (i = 0; i < ARRAY_SIZE(reg_set); i++)
9532 i++)
9533 bnx2x_cl45_write(bp, phy, reg_set[i].devad, 9581 bnx2x_cl45_write(bp, phy, reg_set[i].devad,
9534 reg_set[i].reg, reg_set[i].val); 9582 reg_set[i].reg, reg_set[i].val);
9535 9583
@@ -10281,7 +10329,8 @@ static u8 bnx2x_848xx_read_status(struct bnx2x_phy *phy,
10281 LINK_STATUS_LINK_PARTNER_10GXFD_CAPABLE; 10329 LINK_STATUS_LINK_PARTNER_10GXFD_CAPABLE;
10282 10330
10283 /* Determine if EEE was negotiated */ 10331 /* Determine if EEE was negotiated */
10284 if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84833) 10332 if ((phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84833) ||
10333 (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84834))
10285 bnx2x_eee_an_resolve(phy, params, vars); 10334 bnx2x_eee_an_resolve(phy, params, vars);
10286 } 10335 }
10287 10336
@@ -12242,7 +12291,7 @@ static void bnx2x_init_bmac_loopback(struct link_params *params,
12242 12291
12243 bnx2x_xgxs_deassert(params); 12292 bnx2x_xgxs_deassert(params);
12244 12293
12245 /* set bmac loopback */ 12294 /* Set bmac loopback */
12246 bnx2x_bmac_enable(params, vars, 1, 1); 12295 bnx2x_bmac_enable(params, vars, 1, 1);
12247 12296
12248 REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + params->port*4, 0); 12297 REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + params->port*4, 0);
@@ -12261,7 +12310,7 @@ static void bnx2x_init_emac_loopback(struct link_params *params,
12261 vars->phy_flags = PHY_XGXS_FLAG; 12310 vars->phy_flags = PHY_XGXS_FLAG;
12262 12311
12263 bnx2x_xgxs_deassert(params); 12312 bnx2x_xgxs_deassert(params);
12264 /* set bmac loopback */ 12313 /* Set bmac loopback */
12265 bnx2x_emac_enable(params, vars, 1); 12314 bnx2x_emac_enable(params, vars, 1);
12266 bnx2x_emac_program(params, vars); 12315 bnx2x_emac_program(params, vars);
12267 REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + params->port*4, 0); 12316 REG_WR(bp, NIG_REG_EGRESS_DRAIN0_MODE + params->port*4, 0);
@@ -12521,6 +12570,7 @@ int bnx2x_phy_init(struct link_params *params, struct link_vars *vars)
12521 params->req_line_speed[0], params->req_flow_ctrl[0]); 12570 params->req_line_speed[0], params->req_flow_ctrl[0]);
12522 DP(NETIF_MSG_LINK, "(2) req_speed %d, req_flowctrl %d\n", 12571 DP(NETIF_MSG_LINK, "(2) req_speed %d, req_flowctrl %d\n",
12523 params->req_line_speed[1], params->req_flow_ctrl[1]); 12572 params->req_line_speed[1], params->req_flow_ctrl[1]);
12573 DP(NETIF_MSG_LINK, "req_adv_flow_ctrl 0x%x\n", params->req_fc_auto_adv);
12524 vars->link_status = 0; 12574 vars->link_status = 0;
12525 vars->phy_link_up = 0; 12575 vars->phy_link_up = 0;
12526 vars->link_up = 0; 12576 vars->link_up = 0;
@@ -13440,8 +13490,8 @@ static void bnx2x_check_kr2_wa(struct link_params *params,
13440 int sigdet; 13490 int sigdet;
13441 13491
13442 /* Once KR2 was disabled, wait 5 seconds before checking KR2 recovery 13492 /* Once KR2 was disabled, wait 5 seconds before checking KR2 recovery
13443 * since some switches tend to reinit the AN process and clear the 13493 * Since some switches tend to reinit the AN process and clear the
 13444 * advertised BP/NP after ~2 seconds causing the KR2 to be disabled 13494 * advertised BP/NP after ~2 seconds causing the KR2 to be disabled
13445 * and recovered many times 13495 * and recovered many times
13446 */ 13496 */
13447 if (vars->check_kr2_recovery_cnt > 0) { 13497 if (vars->check_kr2_recovery_cnt > 0) {
@@ -13469,8 +13519,10 @@ static void bnx2x_check_kr2_wa(struct link_params *params,
13469 13519
13470 /* CL73 has not begun yet */ 13520 /* CL73 has not begun yet */
13471 if (base_page == 0) { 13521 if (base_page == 0) {
13472 if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) 13522 if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
13473 bnx2x_kr2_recovery(params, vars, phy); 13523 bnx2x_kr2_recovery(params, vars, phy);
13524 DP(NETIF_MSG_LINK, "No BP\n");
13525 }
13474 return; 13526 return;
13475 } 13527 }
13476 13528
@@ -13486,7 +13538,7 @@ static void bnx2x_check_kr2_wa(struct link_params *params,
13486 if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) { 13538 if (!(vars->link_attr_sync & LINK_ATTR_SYNC_KR2_ENABLE)) {
13487 if (!not_kr2_device) { 13539 if (!not_kr2_device) {
13488 DP(NETIF_MSG_LINK, "BP=0x%x, NP=0x%x\n", base_page, 13540 DP(NETIF_MSG_LINK, "BP=0x%x, NP=0x%x\n", base_page,
13489 next_page); 13541 next_page);
13490 bnx2x_kr2_recovery(params, vars, phy); 13542 bnx2x_kr2_recovery(params, vars, phy);
13491 } 13543 }
13492 return; 13544 return;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
index 56c2aae4e2c8..4df45234fdc0 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
@@ -41,6 +41,9 @@
41#define SPEED_AUTO_NEG 0 41#define SPEED_AUTO_NEG 0
42#define SPEED_20000 20000 42#define SPEED_20000 20000
43 43
44#define I2C_DEV_ADDR_A0 0xa0
45#define I2C_DEV_ADDR_A2 0xa2
46
44#define SFP_EEPROM_PAGE_SIZE 16 47#define SFP_EEPROM_PAGE_SIZE 16
45#define SFP_EEPROM_VENDOR_NAME_ADDR 0x14 48#define SFP_EEPROM_VENDOR_NAME_ADDR 0x14
46#define SFP_EEPROM_VENDOR_NAME_SIZE 16 49#define SFP_EEPROM_VENDOR_NAME_SIZE 16
@@ -54,6 +57,15 @@
54#define SFP_EEPROM_SERIAL_SIZE 16 57#define SFP_EEPROM_SERIAL_SIZE 16
55#define SFP_EEPROM_DATE_ADDR 0x54 /* ASCII YYMMDD */ 58#define SFP_EEPROM_DATE_ADDR 0x54 /* ASCII YYMMDD */
56#define SFP_EEPROM_DATE_SIZE 6 59#define SFP_EEPROM_DATE_SIZE 6
60#define SFP_EEPROM_DIAG_TYPE_ADDR 0x5c
61#define SFP_EEPROM_DIAG_TYPE_SIZE 1
62#define SFP_EEPROM_DIAG_ADDR_CHANGE_REQ (1<<2)
63#define SFP_EEPROM_SFF_8472_COMP_ADDR 0x5e
64#define SFP_EEPROM_SFF_8472_COMP_SIZE 1
65
66#define SFP_EEPROM_A2_CHECKSUM_RANGE 0x5e
67#define SFP_EEPROM_A2_CC_DMI_ADDR 0x5f
68
57#define PWR_FLT_ERR_MSG_LEN 250 69#define PWR_FLT_ERR_MSG_LEN 250
58 70
59#define XGXS_EXT_PHY_TYPE(ext_phy_config) \ 71#define XGXS_EXT_PHY_TYPE(ext_phy_config) \
@@ -420,8 +432,8 @@ void bnx2x_sfx7101_sp_sw_reset(struct bnx2x *bp, struct bnx2x_phy *phy);
420 432
421/* Read "byte_cnt" bytes from address "addr" from the SFP+ EEPROM */ 433/* Read "byte_cnt" bytes from address "addr" from the SFP+ EEPROM */
422int bnx2x_read_sfp_module_eeprom(struct bnx2x_phy *phy, 434int bnx2x_read_sfp_module_eeprom(struct bnx2x_phy *phy,
423 struct link_params *params, u16 addr, 435 struct link_params *params, u8 dev_addr,
424 u8 byte_cnt, u8 *o_buf); 436 u16 addr, u16 byte_cnt, u8 *o_buf);
425 437
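
The new A0-page diagnostics defines above pair naturally with this prototype; a sketch (hypothetical helper, not driver code) of deciding whether the module's A2 page can be read at its fixed address:

	static bool sfp_a2_readable(struct bnx2x_phy *phy,
				    struct link_params *params)
	{
		u8 diag_type;

		if (bnx2x_read_sfp_module_eeprom(phy, params, I2C_DEV_ADDR_A0,
						 SFP_EEPROM_DIAG_TYPE_ADDR,
						 SFP_EEPROM_DIAG_TYPE_SIZE,
						 &diag_type))
			return false;

		/* if the address-change bit is clear, the SFF-8472
		 * diagnostics sit directly at I2C_DEV_ADDR_A2
		 */
		return !(diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ);
	}
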
426void bnx2x_hw_reset_phy(struct link_params *params); 438void bnx2x_hw_reset_phy(struct link_params *params);
427 439
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index c50696b396f1..b4c9dea93a53 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -75,8 +75,6 @@
75#define FW_FILE_NAME_E1H "bnx2x/bnx2x-e1h-" FW_FILE_VERSION ".fw" 75#define FW_FILE_NAME_E1H "bnx2x/bnx2x-e1h-" FW_FILE_VERSION ".fw"
76#define FW_FILE_NAME_E2 "bnx2x/bnx2x-e2-" FW_FILE_VERSION ".fw" 76#define FW_FILE_NAME_E2 "bnx2x/bnx2x-e2-" FW_FILE_VERSION ".fw"
77 77
78#define MAC_LEADING_ZERO_CNT (ALIGN(ETH_ALEN, sizeof(u32)) - ETH_ALEN)
79
80/* Time in jiffies before concluding the transmitter is hung */ 78/* Time in jiffies before concluding the transmitter is hung */
81#define TX_TIMEOUT (5*HZ) 79#define TX_TIMEOUT (5*HZ)
82 80
@@ -2955,14 +2953,16 @@ static unsigned long bnx2x_get_common_flags(struct bnx2x *bp,
2955 __set_bit(BNX2X_Q_FLG_ACTIVE, &flags); 2953 __set_bit(BNX2X_Q_FLG_ACTIVE, &flags);
2956 2954
2957 /* tx only connections collect statistics (on the same index as the 2955 /* tx only connections collect statistics (on the same index as the
2958 * parent connection). The statistics are zeroed when the parent 2956 * parent connection). The statistics are zeroed when the parent
2959 * connection is initialized. 2957 * connection is initialized.
2960 */ 2958 */
2961 2959
2962 __set_bit(BNX2X_Q_FLG_STATS, &flags); 2960 __set_bit(BNX2X_Q_FLG_STATS, &flags);
2963 if (zero_stats) 2961 if (zero_stats)
2964 __set_bit(BNX2X_Q_FLG_ZERO_STATS, &flags); 2962 __set_bit(BNX2X_Q_FLG_ZERO_STATS, &flags);
2965 2963
2964 __set_bit(BNX2X_Q_FLG_PCSUM_ON_PKT, &flags);
2965 __set_bit(BNX2X_Q_FLG_TUN_INC_INNER_IP_ID, &flags);
2966 2966
2967#ifdef BNX2X_STOP_ON_ERROR 2967#ifdef BNX2X_STOP_ON_ERROR
2968 __set_bit(BNX2X_Q_FLG_TX_SEC, &flags); 2968 __set_bit(BNX2X_Q_FLG_TX_SEC, &flags);
@@ -3227,16 +3227,29 @@ static void bnx2x_drv_info_ether_stat(struct bnx2x *bp)
3227{ 3227{
3228 struct eth_stats_info *ether_stat = 3228 struct eth_stats_info *ether_stat =
3229 &bp->slowpath->drv_info_to_mcp.ether_stat; 3229 &bp->slowpath->drv_info_to_mcp.ether_stat;
3230 struct bnx2x_vlan_mac_obj *mac_obj =
3231 &bp->sp_objs->mac_obj;
3232 int i;
3230 3233
3231 strlcpy(ether_stat->version, DRV_MODULE_VERSION, 3234 strlcpy(ether_stat->version, DRV_MODULE_VERSION,
3232 ETH_STAT_INFO_VERSION_LEN); 3235 ETH_STAT_INFO_VERSION_LEN);
3233 3236
3234 bp->sp_objs[0].mac_obj.get_n_elements(bp, &bp->sp_objs[0].mac_obj, 3237 /* get DRV_INFO_ETH_STAT_NUM_MACS_REQUIRED macs, placing them in the
 3235 DRV_INFO_ETH_STAT_NUM_MACS_REQUIRED, 3238 * mac_local field in the ether_stat struct. The base address is offset
 3236 ether_stat->mac_local); 3239 * by 2 bytes because each field slot is 8 bytes while a mac address is
 3237 3240 * only 6 bytes. Likewise, the stride for the get_n_elements function
 3241 * is 2 bytes, padding each 6-byte mac out to the 8 bytes allocated
 3242 * for it in the ether_stat struct, so the macs land in their proper
 3243 * positions.
3244 */
3245 for (i = 0; i < DRV_INFO_ETH_STAT_NUM_MACS_REQUIRED; i++)
3246 memset(ether_stat->mac_local + i, 0,
3247 sizeof(ether_stat->mac_local[0]));
3248 mac_obj->get_n_elements(bp, &bp->sp_objs[0].mac_obj,
3249 DRV_INFO_ETH_STAT_NUM_MACS_REQUIRED,
3250 ether_stat->mac_local + MAC_PAD, MAC_PAD,
3251 ETH_ALEN);
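
A sketch of the resulting layout described by the comment above (MAC_PAD is the 2-byte pad):

	/*	mac_local[i], 8 bytes each:  [ 00 00 | m0 m1 m2 m3 m4 m5 ]
	 *	                                       ^
	 *	writes start at mac_local + MAC_PAD and advance by
	 *	ETH_ALEN + MAC_PAD per entry
	 */
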
3238 ether_stat->mtu_size = bp->dev->mtu; 3252 ether_stat->mtu_size = bp->dev->mtu;
3239
3240 if (bp->dev->features & NETIF_F_RXCSUM) 3253 if (bp->dev->features & NETIF_F_RXCSUM)
3241 ether_stat->feature_flags |= FEATURE_ETH_CHKSUM_OFFLOAD_MASK; 3254 ether_stat->feature_flags |= FEATURE_ETH_CHKSUM_OFFLOAD_MASK;
3242 if (bp->dev->features & NETIF_F_TSO) 3255 if (bp->dev->features & NETIF_F_TSO)
@@ -3258,8 +3271,7 @@ static void bnx2x_drv_info_fcoe_stat(struct bnx2x *bp)
3258 if (!CNIC_LOADED(bp)) 3271 if (!CNIC_LOADED(bp))
3259 return; 3272 return;
3260 3273
3261 memcpy(fcoe_stat->mac_local + MAC_LEADING_ZERO_CNT, 3274 memcpy(fcoe_stat->mac_local + MAC_PAD, bp->fip_mac, ETH_ALEN);
3262 bp->fip_mac, ETH_ALEN);
3263 3275
3264 fcoe_stat->qos_priority = 3276 fcoe_stat->qos_priority =
3265 app->traffic_type_priority[LLFC_TRAFFIC_TYPE_FCOE]; 3277 app->traffic_type_priority[LLFC_TRAFFIC_TYPE_FCOE];
@@ -3361,8 +3373,8 @@ static void bnx2x_drv_info_iscsi_stat(struct bnx2x *bp)
3361 if (!CNIC_LOADED(bp)) 3373 if (!CNIC_LOADED(bp))
3362 return; 3374 return;
3363 3375
3364 memcpy(iscsi_stat->mac_local + MAC_LEADING_ZERO_CNT, 3376 memcpy(iscsi_stat->mac_local + MAC_PAD, bp->cnic_eth_dev.iscsi_mac,
3365 bp->cnic_eth_dev.iscsi_mac, ETH_ALEN); 3377 ETH_ALEN);
3366 3378
3367 iscsi_stat->qos_priority = 3379 iscsi_stat->qos_priority =
3368 app->traffic_type_priority[LLFC_TRAFFIC_TYPE_ISCSI]; 3380 app->traffic_type_priority[LLFC_TRAFFIC_TYPE_ISCSI];
@@ -6018,10 +6030,11 @@ void bnx2x_nic_init_cnic(struct bnx2x *bp)
6018 mmiowb(); 6030 mmiowb();
6019} 6031}
6020 6032
6021void bnx2x_nic_init(struct bnx2x *bp, u32 load_code) 6033void bnx2x_pre_irq_nic_init(struct bnx2x *bp)
6022{ 6034{
6023 int i; 6035 int i;
6024 6036
6037 /* Setup NIC internals and enable interrupts */
6025 for_each_eth_queue(bp, i) 6038 for_each_eth_queue(bp, i)
6026 bnx2x_init_eth_fp(bp, i); 6039 bnx2x_init_eth_fp(bp, i);
6027 6040
@@ -6030,17 +6043,26 @@ void bnx2x_nic_init(struct bnx2x *bp, u32 load_code)
6030 bnx2x_init_rx_rings(bp); 6043 bnx2x_init_rx_rings(bp);
6031 bnx2x_init_tx_rings(bp); 6044 bnx2x_init_tx_rings(bp);
6032 6045
6033 if (IS_VF(bp)) 6046 if (IS_VF(bp)) {
6047 bnx2x_memset_stats(bp);
6034 return; 6048 return;
6049 }
6035 6050
6036 /* Initialize MOD_ABS interrupts */ 6051 if (IS_PF(bp)) {
6037 bnx2x_init_mod_abs_int(bp, &bp->link_vars, bp->common.chip_id, 6052 /* Initialize MOD_ABS interrupts */
6038 bp->common.shmem_base, bp->common.shmem2_base, 6053 bnx2x_init_mod_abs_int(bp, &bp->link_vars, bp->common.chip_id,
6039 BP_PORT(bp)); 6054 bp->common.shmem_base,
6055 bp->common.shmem2_base, BP_PORT(bp));
6040 6056
6041 bnx2x_init_def_sb(bp); 6057 /* initialize the default status block and sp ring */
6042 bnx2x_update_dsb_idx(bp); 6058 bnx2x_init_def_sb(bp);
6043 bnx2x_init_sp_ring(bp); 6059 bnx2x_update_dsb_idx(bp);
6060 bnx2x_init_sp_ring(bp);
6061 }
6062}
6063
6064void bnx2x_post_irq_nic_init(struct bnx2x *bp, u32 load_code)
6065{
6044 bnx2x_init_eq_ring(bp); 6066 bnx2x_init_eq_ring(bp);
6045 bnx2x_init_internal(bp, load_code); 6067 bnx2x_init_internal(bp, load_code);
6046 bnx2x_pf_init(bp); 6068 bnx2x_pf_init(bp);
@@ -6058,12 +6080,7 @@ void bnx2x_nic_init(struct bnx2x *bp, u32 load_code)
6058 AEU_INPUTS_ATTN_BITS_SPIO5); 6080 AEU_INPUTS_ATTN_BITS_SPIO5);
6059} 6081}
6060 6082
6061/* end of nic init */ 6083/* gzip service functions */
6062
6063/*
6064 * gzip service functions
6065 */
6066
6067static int bnx2x_gunzip_init(struct bnx2x *bp) 6084static int bnx2x_gunzip_init(struct bnx2x *bp)
6068{ 6085{
6069 bp->gunzip_buf = dma_alloc_coherent(&bp->pdev->dev, FW_BUF_SIZE, 6086 bp->gunzip_buf = dma_alloc_coherent(&bp->pdev->dev, FW_BUF_SIZE,
@@ -7757,6 +7774,8 @@ void bnx2x_free_mem(struct bnx2x *bp)
7757 BNX2X_PCI_FREE(bp->eq_ring, bp->eq_mapping, 7774 BNX2X_PCI_FREE(bp->eq_ring, bp->eq_mapping,
7758 BCM_PAGE_SIZE * NUM_EQ_PAGES); 7775 BCM_PAGE_SIZE * NUM_EQ_PAGES);
7759 7776
7777 BNX2X_PCI_FREE(bp->t2, bp->t2_mapping, SRC_T2_SZ);
7778
7760 bnx2x_iov_free_mem(bp); 7779 bnx2x_iov_free_mem(bp);
7761} 7780}
7762 7781
@@ -7773,7 +7792,7 @@ int bnx2x_alloc_mem_cnic(struct bnx2x *bp)
7773 sizeof(struct 7792 sizeof(struct
7774 host_hc_status_block_e1x)); 7793 host_hc_status_block_e1x));
7775 7794
7776 if (CONFIGURE_NIC_MODE(bp)) 7795 if (CONFIGURE_NIC_MODE(bp) && !bp->t2)
 7777 /* allocate searcher T2 table, as it wasn't allocated before */ 7796 /* allocate searcher T2 table, as it wasn't allocated before */
7778 BNX2X_PCI_ALLOC(bp->t2, &bp->t2_mapping, SRC_T2_SZ); 7797 BNX2X_PCI_ALLOC(bp->t2, &bp->t2_mapping, SRC_T2_SZ);
7779 7798
@@ -7796,7 +7815,7 @@ int bnx2x_alloc_mem(struct bnx2x *bp)
7796{ 7815{
7797 int i, allocated, context_size; 7816 int i, allocated, context_size;
7798 7817
7799 if (!CONFIGURE_NIC_MODE(bp)) 7818 if (!CONFIGURE_NIC_MODE(bp) && !bp->t2)
7800 /* allocate searcher T2 table */ 7819 /* allocate searcher T2 table */
7801 BNX2X_PCI_ALLOC(bp->t2, &bp->t2_mapping, SRC_T2_SZ); 7820 BNX2X_PCI_ALLOC(bp->t2, &bp->t2_mapping, SRC_T2_SZ);
7802 7821
@@ -7917,8 +7936,6 @@ int bnx2x_del_all_macs(struct bnx2x *bp,
7917 7936
7918int bnx2x_set_eth_mac(struct bnx2x *bp, bool set) 7937int bnx2x_set_eth_mac(struct bnx2x *bp, bool set)
7919{ 7938{
7920 unsigned long ramrod_flags = 0;
7921
7922 if (is_zero_ether_addr(bp->dev->dev_addr) && 7939 if (is_zero_ether_addr(bp->dev->dev_addr) &&
7923 (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))) { 7940 (IS_MF_STORAGE_SD(bp) || IS_MF_FCOE_AFEX(bp))) {
7924 DP(NETIF_MSG_IFUP | NETIF_MSG_IFDOWN, 7941 DP(NETIF_MSG_IFUP | NETIF_MSG_IFDOWN,
@@ -7926,12 +7943,18 @@ int bnx2x_set_eth_mac(struct bnx2x *bp, bool set)
7926 return 0; 7943 return 0;
7927 } 7944 }
7928 7945
7929 DP(NETIF_MSG_IFUP, "Adding Eth MAC\n"); 7946 if (IS_PF(bp)) {
7947 unsigned long ramrod_flags = 0;
7930 7948
7931 __set_bit(RAMROD_COMP_WAIT, &ramrod_flags); 7949 DP(NETIF_MSG_IFUP, "Adding Eth MAC\n");
7932 /* Eth MAC is set on RSS leading client (fp[0]) */ 7950 __set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
7933 return bnx2x_set_mac_one(bp, bp->dev->dev_addr, &bp->sp_objs->mac_obj, 7951 return bnx2x_set_mac_one(bp, bp->dev->dev_addr,
7934 set, BNX2X_ETH_MAC, &ramrod_flags); 7952 &bp->sp_objs->mac_obj, set,
7953 BNX2X_ETH_MAC, &ramrod_flags);
7954 } else { /* vf */
7955 return bnx2x_vfpf_config_mac(bp, bp->dev->dev_addr,
7956 bp->fp->index, true);
7957 }
7935} 7958}
7936 7959
7937int bnx2x_setup_leading(struct bnx2x *bp) 7960int bnx2x_setup_leading(struct bnx2x *bp)
@@ -9525,6 +9548,10 @@ sp_rtnl_not_reset:
9525 bnx2x_vfpf_storm_rx_mode(bp); 9548 bnx2x_vfpf_storm_rx_mode(bp);
9526 } 9549 }
9527 9550
9551 if (test_and_clear_bit(BNX2X_SP_RTNL_HYPERVISOR_VLAN,
9552 &bp->sp_rtnl_state))
9553 bnx2x_pf_set_vfs_vlan(bp);
9554
9528 /* work which needs rtnl lock not-taken (as it takes the lock itself and 9555 /* work which needs rtnl lock not-taken (as it takes the lock itself and
9529 * can be called from other contexts as well) 9556 * can be called from other contexts as well)
9530 */ 9557 */
@@ -9532,8 +9559,10 @@ sp_rtnl_not_reset:
9532 9559
9533 /* enable SR-IOV if applicable */ 9560 /* enable SR-IOV if applicable */
9534 if (IS_SRIOV(bp) && test_and_clear_bit(BNX2X_SP_RTNL_ENABLE_SRIOV, 9561 if (IS_SRIOV(bp) && test_and_clear_bit(BNX2X_SP_RTNL_ENABLE_SRIOV,
9535 &bp->sp_rtnl_state)) 9562 &bp->sp_rtnl_state)) {
9563 bnx2x_disable_sriov(bp);
9536 bnx2x_enable_sriov(bp); 9564 bnx2x_enable_sriov(bp);
9565 }
9537} 9566}
9538 9567
9539static void bnx2x_period_task(struct work_struct *work) 9568static void bnx2x_period_task(struct work_struct *work)
@@ -9701,6 +9730,31 @@ static struct bnx2x_prev_path_list *
9701 return NULL; 9730 return NULL;
9702} 9731}
9703 9732
9733static int bnx2x_prev_path_mark_eeh(struct bnx2x *bp)
9734{
9735 struct bnx2x_prev_path_list *tmp_list;
9736 int rc;
9737
9738 rc = down_interruptible(&bnx2x_prev_sem);
9739 if (rc) {
9740 BNX2X_ERR("Received %d when tried to take lock\n", rc);
9741 return rc;
9742 }
9743
9744 tmp_list = bnx2x_prev_path_get_entry(bp);
9745 if (tmp_list) {
9746 tmp_list->aer = 1;
9747 rc = 0;
9748 } else {
 9749 BNX2X_ERR("path %d: Entry does not exist for EEH; Flow occurs before initial insmod is over?\n",
9750 BP_PATH(bp));
9751 }
9752
9753 up(&bnx2x_prev_sem);
9754
9755 return rc;
9756}
9757
9704static bool bnx2x_prev_is_path_marked(struct bnx2x *bp) 9758static bool bnx2x_prev_is_path_marked(struct bnx2x *bp)
9705{ 9759{
9706 struct bnx2x_prev_path_list *tmp_list; 9760 struct bnx2x_prev_path_list *tmp_list;
@@ -9709,14 +9763,15 @@ static bool bnx2x_prev_is_path_marked(struct bnx2x *bp)
9709 if (down_trylock(&bnx2x_prev_sem)) 9763 if (down_trylock(&bnx2x_prev_sem))
9710 return false; 9764 return false;
9711 9765
9712 list_for_each_entry(tmp_list, &bnx2x_prev_list, list) { 9766 tmp_list = bnx2x_prev_path_get_entry(bp);
9713 if (PCI_SLOT(bp->pdev->devfn) == tmp_list->slot && 9767 if (tmp_list) {
9714 bp->pdev->bus->number == tmp_list->bus && 9768 if (tmp_list->aer) {
9715 BP_PATH(bp) == tmp_list->path) { 9769 DP(NETIF_MSG_HW, "Path %d was marked by AER\n",
9770 BP_PATH(bp));
9771 } else {
9716 rc = true; 9772 rc = true;
9717 BNX2X_DEV_INFO("Path %d was already cleaned from previous drivers\n", 9773 BNX2X_DEV_INFO("Path %d was already cleaned from previous drivers\n",
9718 BP_PATH(bp)); 9774 BP_PATH(bp));
9719 break;
9720 } 9775 }
9721 } 9776 }
9722 9777
@@ -9730,6 +9785,28 @@ static int bnx2x_prev_mark_path(struct bnx2x *bp, bool after_undi)
9730 struct bnx2x_prev_path_list *tmp_list; 9785 struct bnx2x_prev_path_list *tmp_list;
9731 int rc; 9786 int rc;
9732 9787
9788 rc = down_interruptible(&bnx2x_prev_sem);
9789 if (rc) {
9790 BNX2X_ERR("Received %d when tried to take lock\n", rc);
9791 return rc;
9792 }
9793
9794 /* Check whether the entry for this path already exists */
9795 tmp_list = bnx2x_prev_path_get_entry(bp);
9796 if (tmp_list) {
9797 if (!tmp_list->aer) {
9798 BNX2X_ERR("Re-Marking the path.\n");
9799 } else {
9800 DP(NETIF_MSG_HW, "Removing AER indication from path %d\n",
9801 BP_PATH(bp));
9802 tmp_list->aer = 0;
9803 }
9804 up(&bnx2x_prev_sem);
9805 return 0;
9806 }
9807 up(&bnx2x_prev_sem);
9808
9809 /* Create an entry for this path and add it */
9733 tmp_list = kmalloc(sizeof(struct bnx2x_prev_path_list), GFP_KERNEL); 9810 tmp_list = kmalloc(sizeof(struct bnx2x_prev_path_list), GFP_KERNEL);
9734 if (!tmp_list) { 9811 if (!tmp_list) {
9735 BNX2X_ERR("Failed to allocate 'bnx2x_prev_path_list'\n"); 9812 BNX2X_ERR("Failed to allocate 'bnx2x_prev_path_list'\n");
@@ -9739,6 +9816,7 @@ static int bnx2x_prev_mark_path(struct bnx2x *bp, bool after_undi)
9739 tmp_list->bus = bp->pdev->bus->number; 9816 tmp_list->bus = bp->pdev->bus->number;
9740 tmp_list->slot = PCI_SLOT(bp->pdev->devfn); 9817 tmp_list->slot = PCI_SLOT(bp->pdev->devfn);
9741 tmp_list->path = BP_PATH(bp); 9818 tmp_list->path = BP_PATH(bp);
9819 tmp_list->aer = 0;
9742 tmp_list->undi = after_undi ? (1 << BP_PORT(bp)) : 0; 9820 tmp_list->undi = after_undi ? (1 << BP_PORT(bp)) : 0;
9743 9821
9744 rc = down_interruptible(&bnx2x_prev_sem); 9822 rc = down_interruptible(&bnx2x_prev_sem);
@@ -9746,8 +9824,8 @@ static int bnx2x_prev_mark_path(struct bnx2x *bp, bool after_undi)
9746 BNX2X_ERR("Received %d when tried to take lock\n", rc); 9824 BNX2X_ERR("Received %d when tried to take lock\n", rc);
9747 kfree(tmp_list); 9825 kfree(tmp_list);
9748 } else { 9826 } else {
9749 BNX2X_DEV_INFO("Marked path [%d] - finished previous unload\n", 9827 DP(NETIF_MSG_HW, "Marked path [%d] - finished previous unload\n",
9750 BP_PATH(bp)); 9828 BP_PATH(bp));
9751 list_add(&tmp_list->list, &bnx2x_prev_list); 9829 list_add(&tmp_list->list, &bnx2x_prev_list);
9752 up(&bnx2x_prev_sem); 9830 up(&bnx2x_prev_sem);
9753 } 9831 }
@@ -9990,6 +10068,7 @@ static int bnx2x_prev_unload(struct bnx2x *bp)
9990 } 10068 }
9991 10069
9992 do { 10070 do {
10071 int aer = 0;
9993 /* Lock MCP using an unload request */ 10072 /* Lock MCP using an unload request */
9994 fw = bnx2x_fw_command(bp, DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS, 0); 10073 fw = bnx2x_fw_command(bp, DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS, 0);
9995 if (!fw) { 10074 if (!fw) {
@@ -9998,7 +10077,18 @@ static int bnx2x_prev_unload(struct bnx2x *bp)
9998 break; 10077 break;
9999 } 10078 }
10000 10079
10001 if (fw == FW_MSG_CODE_DRV_UNLOAD_COMMON) { 10080 rc = down_interruptible(&bnx2x_prev_sem);
10081 if (rc) {
10082 BNX2X_ERR("Cannot check for AER; Received %d when tried to take lock\n",
10083 rc);
10084 } else {
10085 /* If Path is marked by EEH, ignore unload status */
10086 aer = !!(bnx2x_prev_path_get_entry(bp) &&
10087 bnx2x_prev_path_get_entry(bp)->aer);
10088 up(&bnx2x_prev_sem);
10089 }
10090
10091 if (fw == FW_MSG_CODE_DRV_UNLOAD_COMMON || aer) {
10002 rc = bnx2x_prev_unload_common(bp); 10092 rc = bnx2x_prev_unload_common(bp);
10003 break; 10093 break;
10004 } 10094 }
@@ -10038,8 +10128,12 @@ static void bnx2x_get_common_hwinfo(struct bnx2x *bp)
10038 id = ((val & 0xffff) << 16); 10128 id = ((val & 0xffff) << 16);
10039 val = REG_RD(bp, MISC_REG_CHIP_REV); 10129 val = REG_RD(bp, MISC_REG_CHIP_REV);
10040 id |= ((val & 0xf) << 12); 10130 id |= ((val & 0xf) << 12);
10041 val = REG_RD(bp, MISC_REG_CHIP_METAL); 10131
10042 id |= ((val & 0xff) << 4); 10132 /* Metal is read from PCI regs, but we can't access >=0x400 from
10133 * the configuration space (so we need to reg_rd)
10134 */
10135 val = REG_RD(bp, PCICFG_OFFSET + PCI_ID_VAL3);
10136 id |= (((val >> 24) & 0xf) << 4);
10043 val = REG_RD(bp, MISC_REG_BOND_ID); 10137 val = REG_RD(bp, MISC_REG_BOND_ID);
10044 id |= (val & 0xf); 10138 id |= (val & 0xf);
10045 bp->common.chip_id = id; 10139 bp->common.chip_id = id;
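
For reference, the chip_id assembled by the shifts above lays out as follows; the metal nibble now comes from PCI_ID_VAL3 bits 27:24 because, per the comment, config space at offset 0x400 and above has to go through REG_RD:

	/* chip_id layout:
	 *	[31:16] chip number  (read just above)
	 *	[15:12] chip revision (MISC_REG_CHIP_REV)
	 *	[11:4]  metal - only bits [7:4] are filled now, from
	 *	        PCI_ID_VAL3 bits [27:24]
	 *	[3:0]   bond id      (MISC_REG_BOND_ID)
	 */
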
@@ -10575,10 +10669,12 @@ static void bnx2x_get_port_hwinfo(struct bnx2x *bp)
10575 10669
10576 bp->link_params.speed_cap_mask[0] = 10670 bp->link_params.speed_cap_mask[0] =
10577 SHMEM_RD(bp, 10671 SHMEM_RD(bp,
10578 dev_info.port_hw_config[port].speed_capability_mask); 10672 dev_info.port_hw_config[port].speed_capability_mask) &
10673 PORT_HW_CFG_SPEED_CAPABILITY_D0_MASK;
10579 bp->link_params.speed_cap_mask[1] = 10674 bp->link_params.speed_cap_mask[1] =
10580 SHMEM_RD(bp, 10675 SHMEM_RD(bp,
10581 dev_info.port_hw_config[port].speed_capability_mask2); 10676 dev_info.port_hw_config[port].speed_capability_mask2) &
10677 PORT_HW_CFG_SPEED_CAPABILITY_D0_MASK;
10582 bp->port.link_config[0] = 10678 bp->port.link_config[0] =
10583 SHMEM_RD(bp, dev_info.port_feature_config[port].link_config); 10679 SHMEM_RD(bp, dev_info.port_feature_config[port].link_config);
10584 10680
@@ -10703,6 +10799,12 @@ static void bnx2x_get_fcoe_info(struct bnx2x *bp)
10703 (max_fcoe_conn & BNX2X_MAX_FCOE_INIT_CONN_MASK) >> 10799 (max_fcoe_conn & BNX2X_MAX_FCOE_INIT_CONN_MASK) >>
10704 BNX2X_MAX_FCOE_INIT_CONN_SHIFT; 10800 BNX2X_MAX_FCOE_INIT_CONN_SHIFT;
10705 10801
10802 /* Calculate the number of maximum allowed FCoE tasks */
10803 bp->cnic_eth_dev.max_fcoe_exchanges = MAX_NUM_FCOE_TASKS_PER_ENGINE;
10804 if (IS_MF(bp) || CHIP_MODE_IS_4_PORT(bp))
10805 bp->cnic_eth_dev.max_fcoe_exchanges /=
10806 MAX_FCOE_FUNCS_PER_ENGINE;
10807
10706 /* Read the WWN: */ 10808 /* Read the WWN: */
10707 if (!IS_MF(bp)) { 10809 if (!IS_MF(bp)) {
10708 /* Port info */ 10810 /* Port info */
@@ -10816,14 +10918,12 @@ static void bnx2x_get_cnic_mac_hwinfo(struct bnx2x *bp)
10816 } 10918 }
10817 } 10919 }
10818 10920
10819 if (IS_MF_STORAGE_SD(bp)) 10921 /* If this is a storage-only interface, use SAN mac as
10820 /* Zero primary MAC configuration */ 10922 * primary MAC. Notice that for SD this is already the case,
10821 memset(bp->dev->dev_addr, 0, ETH_ALEN); 10923 * as the SAN mac was copied from the primary MAC.
10822 10924 */
10823 if (IS_MF_FCOE_AFEX(bp) || IS_MF_FCOE_SD(bp)) 10925 if (IS_MF_FCOE_AFEX(bp))
10824 /* use FIP MAC as primary MAC */
10825 memcpy(bp->dev->dev_addr, fip_mac, ETH_ALEN); 10926 memcpy(bp->dev->dev_addr, fip_mac, ETH_ALEN);
10826
10827 } else { 10927 } else {
10828 val2 = SHMEM_RD(bp, dev_info.port_hw_config[port]. 10928 val2 = SHMEM_RD(bp, dev_info.port_hw_config[port].
10829 iscsi_mac_upper); 10929 iscsi_mac_upper);
@@ -11060,6 +11160,9 @@ static int bnx2x_get_hwinfo(struct bnx2x *bp)
11060 } else 11160 } else
11061 BNX2X_DEV_INFO("illegal OV for SD\n"); 11161 BNX2X_DEV_INFO("illegal OV for SD\n");
11062 break; 11162 break;
11163 case SHARED_FEAT_CFG_FORCE_SF_MODE_FORCED_SF:
11164 bp->mf_config[vn] = 0;
11165 break;
11063 default: 11166 default:
11064 /* Unknown configuration: reset mf_config */ 11167 /* Unknown configuration: reset mf_config */
11065 bp->mf_config[vn] = 0; 11168 bp->mf_config[vn] = 0;
@@ -11406,26 +11509,6 @@ static int bnx2x_init_bp(struct bnx2x *bp)
11406 * net_device service functions 11509 * net_device service functions
11407 */ 11510 */
11408 11511
11409static int bnx2x_open_epilog(struct bnx2x *bp)
11410{
11411 /* Enable sriov via delayed work. This must be done via delayed work
11412 * because it causes the probe of the vf devices to be run, which invoke
11413 * register_netdevice which must have rtnl lock taken. As we are holding
11414 * the lock right now, that could only work if the probe would not take
11415 * the lock. However, as the probe of the vf may be called from other
11416 * contexts as well (such as passthrough to vm fails) it can't assume
11417 * the lock is being held for it. Using delayed work here allows the
11418 * probe code to simply take the lock (i.e. wait for it to be released
11419 * if it is being held).
11420 */
11421 smp_mb__before_clear_bit();
11422 set_bit(BNX2X_SP_RTNL_ENABLE_SRIOV, &bp->sp_rtnl_state);
11423 smp_mb__after_clear_bit();
11424 schedule_delayed_work(&bp->sp_rtnl_task, 0);
11425
11426 return 0;
11427}
11428
11429/* called with rtnl_lock */ 11512/* called with rtnl_lock */
11430static int bnx2x_open(struct net_device *dev) 11513static int bnx2x_open(struct net_device *dev)
11431{ 11514{
@@ -11795,6 +11878,8 @@ static const struct net_device_ops bnx2x_netdev_ops = {
11795 .ndo_setup_tc = bnx2x_setup_tc, 11878 .ndo_setup_tc = bnx2x_setup_tc,
11796#ifdef CONFIG_BNX2X_SRIOV 11879#ifdef CONFIG_BNX2X_SRIOV
11797 .ndo_set_vf_mac = bnx2x_set_vf_mac, 11880 .ndo_set_vf_mac = bnx2x_set_vf_mac,
11881 .ndo_set_vf_vlan = bnx2x_set_vf_vlan,
11882 .ndo_get_vf_config = bnx2x_get_vf_config,
11798#endif 11883#endif
11799#ifdef NETDEV_FCOE_WWNN 11884#ifdef NETDEV_FCOE_WWNN
11800 .ndo_fcoe_get_wwn = bnx2x_fcoe_get_wwn, 11885 .ndo_fcoe_get_wwn = bnx2x_fcoe_get_wwn,
@@ -11957,19 +12042,26 @@ static int bnx2x_init_dev(struct bnx2x *bp, struct pci_dev *pdev,
11957 dev->watchdog_timeo = TX_TIMEOUT; 12042 dev->watchdog_timeo = TX_TIMEOUT;
11958 12043
11959 dev->netdev_ops = &bnx2x_netdev_ops; 12044 dev->netdev_ops = &bnx2x_netdev_ops;
11960 bnx2x_set_ethtool_ops(dev); 12045 bnx2x_set_ethtool_ops(bp, dev);
11961 12046
11962 dev->priv_flags |= IFF_UNICAST_FLT; 12047 dev->priv_flags |= IFF_UNICAST_FLT;
11963 12048
11964 dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | 12049 dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
11965 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | 12050 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 |
11966 NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO | 12051 NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO |
11967 NETIF_F_RXHASH | NETIF_F_HW_VLAN_TX; 12052 NETIF_F_RXHASH | NETIF_F_HW_VLAN_CTAG_TX;
12053 if (!CHIP_IS_E1x(bp)) {
12054 dev->hw_features |= NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL;
12055 dev->hw_enc_features =
12056 NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_SG |
12057 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 |
12058 NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL;
12059 }
11968 12060
11969 dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | 12061 dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
11970 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | NETIF_F_HIGHDMA; 12062 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | NETIF_F_HIGHDMA;
11971 12063
11972 dev->features |= dev->hw_features | NETIF_F_HW_VLAN_RX; 12064 dev->features |= dev->hw_features | NETIF_F_HW_VLAN_CTAG_RX;
11973 if (bp->flags & USING_DAC_FLAG) 12065 if (bp->flags & USING_DAC_FLAG)
11974 dev->features |= NETIF_F_HIGHDMA; 12066 dev->features |= NETIF_F_HIGHDMA;
11975 12067
@@ -12451,7 +12543,7 @@ static int bnx2x_init_one(struct pci_dev *pdev,
12451 * l2 connections. 12543 * l2 connections.
12452 */ 12544 */
12453 if (IS_VF(bp)) { 12545 if (IS_VF(bp)) {
12454 bnx2x_vf_map_doorbells(bp); 12546 bp->doorbells = bnx2x_vf_doorbells(bp);
12455 rc = bnx2x_vf_pci_alloc(bp); 12547 rc = bnx2x_vf_pci_alloc(bp);
12456 if (rc) 12548 if (rc)
12457 goto init_one_exit; 12549 goto init_one_exit;
@@ -12479,13 +12571,8 @@ static int bnx2x_init_one(struct pci_dev *pdev,
12479 goto init_one_exit; 12571 goto init_one_exit;
12480 } 12572 }
12481 12573
12482 /* Enable SRIOV if capability found in configuration space. 12574 /* Enable SRIOV if capability found in configuration space */
12483 * Once the generic SR-IOV framework makes it in from the 12575 rc = bnx2x_iov_init_one(bp, int_mode, BNX2X_MAX_NUM_OF_VFS);
12484 * pci tree this will be revised, to allow dynamic control
12485 * over the number of VFs. Right now, change the num of vfs
12486 * param below to enable SR-IOV.
12487 */
12488 rc = bnx2x_iov_init_one(bp, int_mode, 0/*num vfs*/);
12489 if (rc) 12576 if (rc)
12490 goto init_one_exit; 12577 goto init_one_exit;
12491 12578
@@ -12497,16 +12584,6 @@ static int bnx2x_init_one(struct pci_dev *pdev,
12497 if (CHIP_IS_E1x(bp)) 12584 if (CHIP_IS_E1x(bp))
12498 bp->flags |= NO_FCOE_FLAG; 12585 bp->flags |= NO_FCOE_FLAG;
12499 12586
12500 /* disable FCOE for 57840 device, until FW supports it */
12501 switch (ent->driver_data) {
12502 case BCM57840_O:
12503 case BCM57840_4_10:
12504 case BCM57840_2_20:
12505 case BCM57840_MFO:
12506 case BCM57840_MF:
12507 bp->flags |= NO_FCOE_FLAG;
12508 }
12509
12510 /* Set bp->num_queues for MSI-X mode*/ 12587 /* Set bp->num_queues for MSI-X mode*/
12511 bnx2x_set_num_queues(bp); 12588 bnx2x_set_num_queues(bp);
12512 12589
@@ -12640,9 +12717,7 @@ static void bnx2x_remove_one(struct pci_dev *pdev)
12640 12717
12641static int bnx2x_eeh_nic_unload(struct bnx2x *bp) 12718static int bnx2x_eeh_nic_unload(struct bnx2x *bp)
12642{ 12719{
12643 int i; 12720 bp->state = BNX2X_STATE_CLOSING_WAIT4_HALT;
12644
12645 bp->state = BNX2X_STATE_ERROR;
12646 12721
12647 bp->rx_mode = BNX2X_RX_MODE_NONE; 12722 bp->rx_mode = BNX2X_RX_MODE_NONE;
12648 12723
@@ -12651,29 +12726,21 @@ static int bnx2x_eeh_nic_unload(struct bnx2x *bp)
12651 12726
12652 /* Stop Tx */ 12727 /* Stop Tx */
12653 bnx2x_tx_disable(bp); 12728 bnx2x_tx_disable(bp);
12654
12655 bnx2x_netif_stop(bp, 0);
12656 /* Delete all NAPI objects */ 12729 /* Delete all NAPI objects */
12657 bnx2x_del_all_napi(bp); 12730 bnx2x_del_all_napi(bp);
12658 if (CNIC_LOADED(bp)) 12731 if (CNIC_LOADED(bp))
12659 bnx2x_del_all_napi_cnic(bp); 12732 bnx2x_del_all_napi_cnic(bp);
12733 netdev_reset_tc(bp->dev);
12660 12734
12661 del_timer_sync(&bp->timer); 12735 del_timer_sync(&bp->timer);
12736 cancel_delayed_work(&bp->sp_task);
12737 cancel_delayed_work(&bp->period_task);
12662 12738
12663 bnx2x_stats_handle(bp, STATS_EVENT_STOP); 12739 spin_lock_bh(&bp->stats_lock);
12664 12740 bp->stats_state = STATS_STATE_DISABLED;
12665 /* Release IRQs */ 12741 spin_unlock_bh(&bp->stats_lock);
12666 bnx2x_free_irq(bp);
12667
12668 /* Free SKBs, SGEs, TPA pool and driver internals */
12669 bnx2x_free_skbs(bp);
12670
12671 for_each_rx_queue(bp, i)
12672 bnx2x_free_rx_sge_range(bp, bp->fp + i, NUM_RX_SGE);
12673
12674 bnx2x_free_mem(bp);
12675 12742
12676 bp->state = BNX2X_STATE_CLOSED; 12743 bnx2x_save_statistics(bp);
12677 12744
12678 netif_carrier_off(bp->dev); 12745 netif_carrier_off(bp->dev);
12679 12746
@@ -12709,6 +12776,8 @@ static pci_ers_result_t bnx2x_io_error_detected(struct pci_dev *pdev,
12709 12776
12710 rtnl_lock(); 12777 rtnl_lock();
12711 12778
12779 BNX2X_ERR("IO error detected\n");
12780
12712 netif_device_detach(dev); 12781 netif_device_detach(dev);
12713 12782
12714 if (state == pci_channel_io_perm_failure) { 12783 if (state == pci_channel_io_perm_failure) {
@@ -12719,6 +12788,8 @@ static pci_ers_result_t bnx2x_io_error_detected(struct pci_dev *pdev,
12719 if (netif_running(dev)) 12788 if (netif_running(dev))
12720 bnx2x_eeh_nic_unload(bp); 12789 bnx2x_eeh_nic_unload(bp);
12721 12790
12791 bnx2x_prev_path_mark_eeh(bp);
12792
12722 pci_disable_device(pdev); 12793 pci_disable_device(pdev);
12723 12794
12724 rtnl_unlock(); 12795 rtnl_unlock();
@@ -12737,9 +12808,10 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev)
12737{ 12808{
12738 struct net_device *dev = pci_get_drvdata(pdev); 12809 struct net_device *dev = pci_get_drvdata(pdev);
12739 struct bnx2x *bp = netdev_priv(dev); 12810 struct bnx2x *bp = netdev_priv(dev);
12811 int i;
12740 12812
12741 rtnl_lock(); 12813 rtnl_lock();
12742 12814 BNX2X_ERR("IO slot reset initializing...\n");
12743 if (pci_enable_device(pdev)) { 12815 if (pci_enable_device(pdev)) {
12744 dev_err(&pdev->dev, 12816 dev_err(&pdev->dev,
12745 "Cannot re-enable PCI device after reset\n"); 12817 "Cannot re-enable PCI device after reset\n");
@@ -12749,10 +12821,47 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev)
12749 12821
12750 pci_set_master(pdev); 12822 pci_set_master(pdev);
12751 pci_restore_state(pdev); 12823 pci_restore_state(pdev);
12824 pci_save_state(pdev);
12752 12825
12753 if (netif_running(dev)) 12826 if (netif_running(dev))
12754 bnx2x_set_power_state(bp, PCI_D0); 12827 bnx2x_set_power_state(bp, PCI_D0);
12755 12828
12829 if (netif_running(dev)) {
12830 BNX2X_ERR("IO slot reset --> driver unload\n");
12831 if (IS_PF(bp) && SHMEM2_HAS(bp, drv_capabilities_flag)) {
12832 u32 v;
12833
12834 v = SHMEM2_RD(bp,
12835 drv_capabilities_flag[BP_FW_MB_IDX(bp)]);
12836 SHMEM2_WR(bp, drv_capabilities_flag[BP_FW_MB_IDX(bp)],
12837 v & ~DRV_FLAGS_CAPABILITIES_LOADED_L2);
12838 }
12839 bnx2x_drain_tx_queues(bp);
12840 bnx2x_send_unload_req(bp, UNLOAD_RECOVERY);
12841 bnx2x_netif_stop(bp, 1);
12842 bnx2x_free_irq(bp);
12843
12844 /* Report UNLOAD_DONE to MCP */
12845 bnx2x_send_unload_done(bp, true);
12846
12847 bp->sp_state = 0;
12848 bp->port.pmf = 0;
12849
12850 bnx2x_prev_unload(bp);
12851
12852 /* We should have reset the engine, so it's fair to
12853 * assume the FW will no longer write to the bnx2x driver.
12854 */
12855 bnx2x_squeeze_objects(bp);
12856 bnx2x_free_skbs(bp);
12857 for_each_rx_queue(bp, i)
12858 bnx2x_free_rx_sge_range(bp, bp->fp + i, NUM_RX_SGE);
12859 bnx2x_free_fp_mem(bp);
12860 bnx2x_free_mem(bp);
12861
12862 bp->state = BNX2X_STATE_CLOSED;
12863 }
12864
12756 rtnl_unlock(); 12865 rtnl_unlock();
12757 12866
12758 return PCI_ERS_RESULT_RECOVERED; 12867 return PCI_ERS_RESULT_RECOVERED;
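
For orientation, the three routines touched in these hunks are the driver's AER/EEH callbacks; the PCI core invokes them in the order error_detected -> slot_reset -> resume, through the table that .err_handler (further below) points at. A sketch of that wiring, consistent with the callbacks shown here:

	static const struct pci_error_handlers bnx2x_err_handler_sketch = {
		.error_detected = bnx2x_io_error_detected,
		.slot_reset     = bnx2x_io_slot_reset,
		.resume         = bnx2x_io_resume,
	};
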
@@ -12779,6 +12888,9 @@ static void bnx2x_io_resume(struct pci_dev *pdev)
12779 12888
12780 bnx2x_eeh_recover(bp); 12889 bnx2x_eeh_recover(bp);
12781 12890
12891 bp->fw_seq = SHMEM_RD(bp, func_mb[BP_FW_MB_IDX(bp)].drv_mb_header) &
12892 DRV_MSG_SEQ_NUMBER_MASK;
12893
12782 if (netif_running(dev)) 12894 if (netif_running(dev))
12783 bnx2x_nic_load(bp, LOAD_NORMAL); 12895 bnx2x_nic_load(bp, LOAD_NORMAL);
12784 12896
@@ -12801,6 +12913,9 @@ static struct pci_driver bnx2x_pci_driver = {
12801 .suspend = bnx2x_suspend, 12913 .suspend = bnx2x_suspend,
12802 .resume = bnx2x_resume, 12914 .resume = bnx2x_resume,
12803 .err_handler = &bnx2x_err_handler, 12915 .err_handler = &bnx2x_err_handler,
12916#ifdef CONFIG_BNX2X_SRIOV
12917 .sriov_configure = bnx2x_sriov_configure,
12918#endif
12804}; 12919};
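
The new .sriov_configure hook is what services writes to /sys/bus/pci/devices/<bdf>/sriov_numvfs; the PCI core expects it to return the number of VFs actually enabled (or a negative errno), which is why bnx2x_enable_sriov() in the sriov.c hunks below returns req_vfs. The contract in miniature (hypothetical example_ name):

	static int example_sriov_configure(struct pci_dev *dev, int num_vfs)
	{
		if (num_vfs == 0) {
			pci_disable_sriov(dev);
			return 0;
		}
		/* pci_enable_sriov() returns 0 on success: report num_vfs */
		return pci_enable_sriov(dev, num_vfs) ?: num_vfs;
	}
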
12805 12920
12806static int __init bnx2x_init(void) 12921static int __init bnx2x_init(void)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index 791eb2d53011..d22bc40091ec 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -1491,10 +1491,6 @@
1491/* [R 4] This field indicates the type of the device. '0' - 2 Ports; '1' - 1 1491/* [R 4] This field indicates the type of the device. '0' - 2 Ports; '1' - 1
1492 Port. */ 1492 Port. */
1493#define MISC_REG_BOND_ID 0xa400 1493#define MISC_REG_BOND_ID 0xa400
1494/* [R 8] These bits indicate the metal revision of the chip. This value
1495 starts at 0x00 for each all-layer tape-out and increments by one for each
1496 tape-out. */
1497#define MISC_REG_CHIP_METAL 0xa404
1498/* [R 16] These bits indicate the part number for the chip. */ 1494/* [R 16] These bits indicate the part number for the chip. */
1499#define MISC_REG_CHIP_NUM 0xa408 1495#define MISC_REG_CHIP_NUM 0xa408
1500/* [R 4] These bits indicate the base revision of the chip. This value 1496/* [R 4] These bits indicate the base revision of the chip. This value
@@ -6331,6 +6327,8 @@
6331#define PCI_PM_DATA_B 0x414 6327#define PCI_PM_DATA_B 0x414
6332#define PCI_ID_VAL1 0x434 6328#define PCI_ID_VAL1 0x434
6333#define PCI_ID_VAL2 0x438 6329#define PCI_ID_VAL2 0x438
6330#define PCI_ID_VAL3 0x43c
6331
6334#define GRC_CONFIG_REG_PF_INIT_VF 0x624 6332#define GRC_CONFIG_REG_PF_INIT_VF 0x624
6335#define GRC_CR_PF_INIT_VF_PF_FIRST_VF_NUM_MASK 0xf 6333#define GRC_CR_PF_INIT_VF_PF_FIRST_VF_NUM_MASK 0xf
6336/* First VF_NUM for PF is encoded in this register. 6334/* First VF_NUM for PF is encoded in this register.
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
index 7306416bc90d..32a9609cc98b 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c
@@ -30,8 +30,6 @@
30 30
31#define BNX2X_MAX_EMUL_MULTI 16 31#define BNX2X_MAX_EMUL_MULTI 16
32 32
33#define MAC_LEADING_ZERO_CNT (ALIGN(ETH_ALEN, sizeof(u32)) - ETH_ALEN)
34
35/**** Exe Queue interfaces ****/ 33/**** Exe Queue interfaces ****/
36 34
37/** 35/**
@@ -444,30 +442,21 @@ static bool bnx2x_put_credit_vlan_mac(struct bnx2x_vlan_mac_obj *o)
444} 442}
445 443
446static int bnx2x_get_n_elements(struct bnx2x *bp, struct bnx2x_vlan_mac_obj *o, 444static int bnx2x_get_n_elements(struct bnx2x *bp, struct bnx2x_vlan_mac_obj *o,
447 int n, u8 *buf) 445 int n, u8 *base, u8 stride, u8 size)
448{ 446{
449 struct bnx2x_vlan_mac_registry_elem *pos; 447 struct bnx2x_vlan_mac_registry_elem *pos;
450 u8 *next = buf; 448 u8 *next = base;
451 int counter = 0; 449 int counter = 0;
452 450
453 /* traverse list */ 451 /* traverse list */
454 list_for_each_entry(pos, &o->head, link) { 452 list_for_each_entry(pos, &o->head, link) {
455 if (counter < n) { 453 if (counter < n) {
456 /* place leading zeroes in buffer */ 454 memcpy(next, &pos->u, size);
457 memset(next, 0, MAC_LEADING_ZERO_CNT);
458
459 /* place mac after leading zeroes*/
460 memcpy(next + MAC_LEADING_ZERO_CNT, pos->u.mac.mac,
461 ETH_ALEN);
462
463 /* calculate address of next element and
464 * advance counter
465 */
466 counter++; 455 counter++;
467 next = buf + counter * ALIGN(ETH_ALEN, sizeof(u32)); 456 DP(BNX2X_MSG_SP, "copied element number %d to address %p element was:\n",
457 counter, next);
458 next += stride + size;
468 459
469 DP(BNX2X_MSG_SP, "copied element number %d to address %p element was %pM\n",
470 counter, next, pos->u.mac.mac);
471 } 460 }
472 } 461 }
473 return counter * ETH_ALEN; 462 return counter * ETH_ALEN;
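
The widened base/stride/size signature lets one iterator serve both the MAC and the VLAN registries instead of hard-coding the MAC layout. The callers added later in this patch (in bnx2x_get_vf_config()) use it as:

	/* copy a single MAC, resp. a single VLAN entry, with no padding */
	mac_obj->get_n_elements(bp, mac_obj, 1, (u8 *)&ivi->mac, 0, ETH_ALEN);
	vlan_obj->get_n_elements(bp, vlan_obj, 1, (u8 *)&ivi->vlan, 0, VLAN_HLEN);
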
@@ -487,7 +476,8 @@ static int bnx2x_check_mac_add(struct bnx2x *bp,
487 476
488 /* Check if a requested MAC already exists */ 477 /* Check if a requested MAC already exists */
489 list_for_each_entry(pos, &o->head, link) 478 list_for_each_entry(pos, &o->head, link)
490 if (!memcmp(data->mac.mac, pos->u.mac.mac, ETH_ALEN)) 479 if (!memcmp(data->mac.mac, pos->u.mac.mac, ETH_ALEN) &&
480 (data->mac.is_inner_mac == pos->u.mac.is_inner_mac))
491 return -EEXIST; 481 return -EEXIST;
492 482
493 return 0; 483 return 0;
@@ -520,7 +510,9 @@ static int bnx2x_check_vlan_mac_add(struct bnx2x *bp,
520 list_for_each_entry(pos, &o->head, link) 510 list_for_each_entry(pos, &o->head, link)
521 if ((data->vlan_mac.vlan == pos->u.vlan_mac.vlan) && 511 if ((data->vlan_mac.vlan == pos->u.vlan_mac.vlan) &&
522 (!memcmp(data->vlan_mac.mac, pos->u.vlan_mac.mac, 512 (!memcmp(data->vlan_mac.mac, pos->u.vlan_mac.mac,
523 ETH_ALEN))) 513 ETH_ALEN)) &&
514 (data->vlan_mac.is_inner_mac ==
515 pos->u.vlan_mac.is_inner_mac))
524 return -EEXIST; 516 return -EEXIST;
525 517
526 return 0; 518 return 0;
@@ -538,7 +530,8 @@ static struct bnx2x_vlan_mac_registry_elem *
538 DP(BNX2X_MSG_SP, "Checking MAC %pM for DEL command\n", data->mac.mac); 530 DP(BNX2X_MSG_SP, "Checking MAC %pM for DEL command\n", data->mac.mac);
539 531
540 list_for_each_entry(pos, &o->head, link) 532 list_for_each_entry(pos, &o->head, link)
541 if (!memcmp(data->mac.mac, pos->u.mac.mac, ETH_ALEN)) 533 if ((!memcmp(data->mac.mac, pos->u.mac.mac, ETH_ALEN)) &&
534 (data->mac.is_inner_mac == pos->u.mac.is_inner_mac))
542 return pos; 535 return pos;
543 536
544 return NULL; 537 return NULL;
@@ -573,7 +566,9 @@ static struct bnx2x_vlan_mac_registry_elem *
573 list_for_each_entry(pos, &o->head, link) 566 list_for_each_entry(pos, &o->head, link)
574 if ((data->vlan_mac.vlan == pos->u.vlan_mac.vlan) && 567 if ((data->vlan_mac.vlan == pos->u.vlan_mac.vlan) &&
575 (!memcmp(data->vlan_mac.mac, pos->u.vlan_mac.mac, 568 (!memcmp(data->vlan_mac.mac, pos->u.vlan_mac.mac,
576 ETH_ALEN))) 569 ETH_ALEN)) &&
570 (data->vlan_mac.is_inner_mac ==
571 pos->u.vlan_mac.is_inner_mac))
577 return pos; 572 return pos;
578 573
579 return NULL; 574 return NULL;
@@ -770,6 +765,8 @@ static void bnx2x_set_one_mac_e2(struct bnx2x *bp,
770 bnx2x_set_fw_mac_addr(&rule_entry->mac.mac_msb, 765 bnx2x_set_fw_mac_addr(&rule_entry->mac.mac_msb,
771 &rule_entry->mac.mac_mid, 766 &rule_entry->mac.mac_mid,
772 &rule_entry->mac.mac_lsb, mac); 767 &rule_entry->mac.mac_lsb, mac);
768 rule_entry->mac.inner_mac =
769 cpu_to_le16(elem->cmd_data.vlan_mac.u.mac.is_inner_mac);
773 770
774 /* MOVE: Add a rule that will add this MAC to the target Queue */ 771 /* MOVE: Add a rule that will add this MAC to the target Queue */
775 if (cmd == BNX2X_VLAN_MAC_MOVE) { 772 if (cmd == BNX2X_VLAN_MAC_MOVE) {
@@ -786,6 +783,9 @@ static void bnx2x_set_one_mac_e2(struct bnx2x *bp,
786 bnx2x_set_fw_mac_addr(&rule_entry->mac.mac_msb, 783 bnx2x_set_fw_mac_addr(&rule_entry->mac.mac_msb,
787 &rule_entry->mac.mac_mid, 784 &rule_entry->mac.mac_mid,
788 &rule_entry->mac.mac_lsb, mac); 785 &rule_entry->mac.mac_lsb, mac);
786 rule_entry->mac.inner_mac =
787 cpu_to_le16(elem->cmd_data.vlan_mac.
788 u.mac.is_inner_mac);
789 } 789 }
790 790
791 /* Set the ramrod data header */ 791 /* Set the ramrod data header */
@@ -974,7 +974,8 @@ static void bnx2x_set_one_vlan_mac_e2(struct bnx2x *bp,
974 bnx2x_set_fw_mac_addr(&rule_entry->pair.mac_msb, 974 bnx2x_set_fw_mac_addr(&rule_entry->pair.mac_msb,
975 &rule_entry->pair.mac_mid, 975 &rule_entry->pair.mac_mid,
976 &rule_entry->pair.mac_lsb, mac); 976 &rule_entry->pair.mac_lsb, mac);
977 977 rule_entry->pair.inner_mac =
978 cpu_to_le16(elem->cmd_data.vlan_mac.u.vlan_mac.is_inner_mac);
978 /* MOVE: Add a rule that will add this MAC to the target Queue */ 979 /* MOVE: Add a rule that will add this MAC to the target Queue */
979 if (cmd == BNX2X_VLAN_MAC_MOVE) { 980 if (cmd == BNX2X_VLAN_MAC_MOVE) {
980 rule_entry++; 981 rule_entry++;
@@ -991,6 +992,9 @@ static void bnx2x_set_one_vlan_mac_e2(struct bnx2x *bp,
991 bnx2x_set_fw_mac_addr(&rule_entry->pair.mac_msb, 992 bnx2x_set_fw_mac_addr(&rule_entry->pair.mac_msb,
992 &rule_entry->pair.mac_mid, 993 &rule_entry->pair.mac_mid,
993 &rule_entry->pair.mac_lsb, mac); 994 &rule_entry->pair.mac_lsb, mac);
995 rule_entry->pair.inner_mac =
996 cpu_to_le16(elem->cmd_data.vlan_mac.u.
997 vlan_mac.is_inner_mac);
994 } 998 }
995 999
996 /* Set the ramrod data header */ 1000 /* Set the ramrod data header */
@@ -1854,6 +1858,7 @@ static int bnx2x_vlan_mac_del_all(struct bnx2x *bp,
1854 return rc; 1858 return rc;
1855 } 1859 }
1856 list_del(&exeq_pos->link); 1860 list_del(&exeq_pos->link);
1861 bnx2x_exe_queue_free_elem(bp, exeq_pos);
1857 } 1862 }
1858 } 1863 }
1859 1864
@@ -2012,6 +2017,7 @@ void bnx2x_init_vlan_obj(struct bnx2x *bp,
2012 vlan_obj->check_move = bnx2x_check_move; 2017 vlan_obj->check_move = bnx2x_check_move;
2013 vlan_obj->ramrod_cmd = 2018 vlan_obj->ramrod_cmd =
2014 RAMROD_CMD_ID_ETH_CLASSIFICATION_RULES; 2019 RAMROD_CMD_ID_ETH_CLASSIFICATION_RULES;
2020 vlan_obj->get_n_elements = bnx2x_get_n_elements;
2015 2021
2016 /* Exe Queue */ 2022 /* Exe Queue */
2017 bnx2x_exe_queue_init(bp, 2023 bnx2x_exe_queue_init(bp,
@@ -4426,6 +4432,12 @@ static void bnx2x_q_fill_init_tx_data(struct bnx2x_queue_sp_obj *o,
4426 tx_data->force_default_pri_flg = 4432 tx_data->force_default_pri_flg =
4427 test_bit(BNX2X_Q_FLG_FORCE_DEFAULT_PRI, flags); 4433 test_bit(BNX2X_Q_FLG_FORCE_DEFAULT_PRI, flags);
4428 4434
4435 tx_data->tunnel_lso_inc_ip_id =
4436 test_bit(BNX2X_Q_FLG_TUN_INC_INNER_IP_ID, flags);
4437 tx_data->tunnel_non_lso_pcsum_location =
4438 test_bit(BNX2X_Q_FLG_PCSUM_ON_PKT, flags) ? PCSUM_ON_PKT :
4439 PCSUM_ON_BD;
4440
4429 tx_data->tx_status_block_id = params->fw_sb_id; 4441 tx_data->tx_status_block_id = params->fw_sb_id;
4430 tx_data->tx_sb_index_number = params->sb_cq_index; 4442 tx_data->tx_sb_index_number = params->sb_cq_index;
4431 tx_data->tss_leading_client_id = params->tss_leading_cl_id; 4443 tx_data->tss_leading_client_id = params->tss_leading_cl_id;
@@ -5669,17 +5681,18 @@ static inline int bnx2x_func_send_start(struct bnx2x *bp,
5669 memset(rdata, 0, sizeof(*rdata)); 5681 memset(rdata, 0, sizeof(*rdata));
5670 5682
5671 /* Fill the ramrod data with provided parameters */ 5683 /* Fill the ramrod data with provided parameters */
5672 rdata->function_mode = (u8)start_params->mf_mode; 5684 rdata->function_mode = (u8)start_params->mf_mode;
5673 rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag); 5685 rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag);
5674 rdata->path_id = BP_PATH(bp); 5686 rdata->path_id = BP_PATH(bp);
5675 rdata->network_cos_mode = start_params->network_cos_mode; 5687 rdata->network_cos_mode = start_params->network_cos_mode;
5676 5688 rdata->gre_tunnel_mode = start_params->gre_tunnel_mode;
5677 /* 5689 rdata->gre_tunnel_rss = start_params->gre_tunnel_rss;
5678 * No need for an explicit memory barrier here as long as we would 5690
5679 * need to ensure the ordering of writing to the SPQ element 5691 /* No need for an explicit memory barrier here as long as we would
5680 * and updating of the SPQ producer which involves a memory 5692 * need to ensure the ordering of writing to the SPQ element
5681 * read and we will have to put a full memory barrier there 5693 * and updating of the SPQ producer which involves a memory
5682 * (inside bnx2x_sp_post()). 5694 * read and we will have to put a full memory barrier there
5695 * (inside bnx2x_sp_post()).
5683 */ 5696 */
5684 5697
5685 return bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_FUNCTION_START, 0, 5698 return bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_FUNCTION_START, 0,
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
index ff907609b9fc..43c00bc84a08 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
@@ -100,6 +100,7 @@ struct bnx2x_raw_obj {
100/************************* VLAN-MAC commands related parameters ***************/ 100/************************* VLAN-MAC commands related parameters ***************/
101struct bnx2x_mac_ramrod_data { 101struct bnx2x_mac_ramrod_data {
102 u8 mac[ETH_ALEN]; 102 u8 mac[ETH_ALEN];
103 u8 is_inner_mac;
103}; 104};
104 105
105struct bnx2x_vlan_ramrod_data { 106struct bnx2x_vlan_ramrod_data {
@@ -108,6 +109,7 @@ struct bnx2x_vlan_ramrod_data {
108 109
109struct bnx2x_vlan_mac_ramrod_data { 110struct bnx2x_vlan_mac_ramrod_data {
110 u8 mac[ETH_ALEN]; 111 u8 mac[ETH_ALEN];
112 u8 is_inner_mac;
111 u16 vlan; 113 u16 vlan;
112}; 114};
113 115
@@ -313,8 +315,9 @@ struct bnx2x_vlan_mac_obj {
313 * 315 *
314 * @return number of copied bytes 316 * @return number of copied bytes
315 */ 317 */
316 int (*get_n_elements)(struct bnx2x *bp, struct bnx2x_vlan_mac_obj *o, 318 int (*get_n_elements)(struct bnx2x *bp,
317 int n, u8 *buf); 319 struct bnx2x_vlan_mac_obj *o, int n, u8 *base,
320 u8 stride, u8 size);
318 321
319 /** 322 /**
320 * Checks if ADD-ramrod with the given params may be performed. 323 * Checks if ADD-ramrod with the given params may be performed.
@@ -824,7 +827,9 @@ enum {
824 BNX2X_Q_FLG_TX_SEC, 827 BNX2X_Q_FLG_TX_SEC,
825 BNX2X_Q_FLG_ANTI_SPOOF, 828 BNX2X_Q_FLG_ANTI_SPOOF,
826 BNX2X_Q_FLG_SILENT_VLAN_REM, 829 BNX2X_Q_FLG_SILENT_VLAN_REM,
827 BNX2X_Q_FLG_FORCE_DEFAULT_PRI 830 BNX2X_Q_FLG_FORCE_DEFAULT_PRI,
831 BNX2X_Q_FLG_PCSUM_ON_PKT,
832 BNX2X_Q_FLG_TUN_INC_INNER_IP_ID
828}; 833};
829 834
830/* Queue type options: queue type may be a combination of the below. */ 835/* Queue type options: queue type may be a combination of the below. */
@@ -842,6 +847,7 @@ enum bnx2x_q_type {
842#define BNX2X_MULTI_TX_COS_E3B0 3 847#define BNX2X_MULTI_TX_COS_E3B0 3
843#define BNX2X_MULTI_TX_COS 3 /* Maximum possible */ 848#define BNX2X_MULTI_TX_COS 3 /* Maximum possible */
844 849
850#define MAC_PAD (ALIGN(ETH_ALEN, sizeof(u32)) - ETH_ALEN)
845 851
846struct bnx2x_queue_init_params { 852struct bnx2x_queue_init_params {
847 struct { 853 struct {
@@ -1118,6 +1124,15 @@ struct bnx2x_func_start_params {
1118 1124
1119 /* Function cos mode */ 1125 /* Function cos mode */
1120 u8 network_cos_mode; 1126 u8 network_cos_mode;
1127
1128 /* NVGRE classification enablement */
1129 u8 nvgre_clss_en;
1130
1131 /* NO_GRE_TUNNEL/NVGRE_TUNNEL/L2GRE_TUNNEL/IPGRE_TUNNEL */
1132 u8 gre_tunnel_mode;
1133
1134 /* GRE_OUTER_HEADERS_RSS/GRE_INNER_HEADERS_RSS/NVGRE_KEY_ENTROPY_RSS */
1135 u8 gre_tunnel_rss;
1121}; 1136};
1122 1137
1123struct bnx2x_func_switch_update_params { 1138struct bnx2x_func_switch_update_params {
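
The comments above name the firmware constants these fields accept; a caller that wants RSS computed over the inner headers of L2GRE traffic would set, for example:

	start_params->gre_tunnel_mode = L2GRE_TUNNEL;
	start_params->gre_tunnel_rss  = GRE_INNER_HEADERS_RSS;
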
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
index 6adfa2093581..2ce7c7471367 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
@@ -20,7 +20,9 @@
20#include "bnx2x.h" 20#include "bnx2x.h"
21#include "bnx2x_init.h" 21#include "bnx2x_init.h"
22#include "bnx2x_cmn.h" 22#include "bnx2x_cmn.h"
23#include "bnx2x_sp.h"
23#include <linux/crc32.h> 24#include <linux/crc32.h>
25#include <linux/if_vlan.h>
24 26
25/* General service functions */ 27/* General service functions */
26static void storm_memset_vf_to_pf(struct bnx2x *bp, u16 abs_fid, 28static void storm_memset_vf_to_pf(struct bnx2x *bp, u16 abs_fid,
@@ -555,8 +557,7 @@ static int bnx2x_vfop_config_list(struct bnx2x *bp,
555 rc = bnx2x_config_vlan_mac(bp, vlan_mac); 557 rc = bnx2x_config_vlan_mac(bp, vlan_mac);
556 if (rc >= 0) { 558 if (rc >= 0) {
557 cnt += pos->add ? 1 : -1; 559 cnt += pos->add ? 1 : -1;
558 list_del(&pos->link); 560 list_move(&pos->link, &rollback_list);
559 list_add(&pos->link, &rollback_list);
560 rc = 0; 561 rc = 0;
561 } else if (rc == -EEXIST) { 562 } else if (rc == -EEXIST) {
562 rc = 0; 563 rc = 0;
@@ -958,6 +959,12 @@ op_err:
958 BNX2X_ERR("QSETUP[%d:%d] error: rc %d\n", vf->abs_vfid, qid, vfop->rc); 959 BNX2X_ERR("QSETUP[%d:%d] error: rc %d\n", vf->abs_vfid, qid, vfop->rc);
959op_done: 960op_done:
960 case BNX2X_VFOP_QSETUP_DONE: 961 case BNX2X_VFOP_QSETUP_DONE:
962 vf->cfg_flags |= VF_CFG_VLAN;
963 smp_mb__before_clear_bit();
964 set_bit(BNX2X_SP_RTNL_HYPERVISOR_VLAN,
965 &bp->sp_rtnl_state);
966 smp_mb__after_clear_bit();
967 schedule_delayed_work(&bp->sp_rtnl_task, 0);
961 bnx2x_vfop_end(bp, vf, vfop); 968 bnx2x_vfop_end(bp, vf, vfop);
962 return; 969 return;
963 default: 970 default:
@@ -1459,7 +1466,6 @@ static u8 bnx2x_vf_is_pcie_pending(struct bnx2x *bp, u8 abs_vfid)
1459 return bnx2x_is_pcie_pending(dev); 1466 return bnx2x_is_pcie_pending(dev);
1460 1467
1461unknown_dev: 1468unknown_dev:
1462 BNX2X_ERR("Unknown device\n");
1463 return false; 1469 return false;
1464} 1470}
1465 1471
@@ -1926,20 +1932,22 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
1926 1932
1927 /* SRIOV can be enabled only with MSIX */ 1933 /* SRIOV can be enabled only with MSIX */
1928 if (int_mode_param == BNX2X_INT_MODE_MSI || 1934 if (int_mode_param == BNX2X_INT_MODE_MSI ||
1929 int_mode_param == BNX2X_INT_MODE_INTX) 1935 int_mode_param == BNX2X_INT_MODE_INTX) {
1930 BNX2X_ERR("Forced MSI/INTx mode is incompatible with SRIOV\n"); 1936 BNX2X_ERR("Forced MSI/INTx mode is incompatible with SRIOV\n");
1937 return 0;
1938 }
1931 1939
1932 err = -EIO; 1940 err = -EIO;
1933 /* verify ari is enabled */ 1941 /* verify ari is enabled */
1934 if (!bnx2x_ari_enabled(bp->pdev)) { 1942 if (!bnx2x_ari_enabled(bp->pdev)) {
1935 BNX2X_ERR("ARI not supported, SRIOV can not be enabled\n"); 1943 BNX2X_ERR("ARI not supported (check pci bridge ARI forwarding), SRIOV can not be enabled\n");
1936 return err; 1944 return 0;
1937 } 1945 }
1938 1946
1939 /* verify igu is in normal mode */ 1947 /* verify igu is in normal mode */
1940 if (CHIP_INT_MODE_IS_BC(bp)) { 1948 if (CHIP_INT_MODE_IS_BC(bp)) {
1941 BNX2X_ERR("IGU not normal mode, SRIOV can not be enabled\n"); 1949 BNX2X_ERR("IGU not normal mode, SRIOV can not be enabled\n");
1942 return err; 1950 return 0;
1943 } 1951 }
1944 1952
1945 /* allocate the vfs database */ 1953 /* allocate the vfs database */
@@ -1964,8 +1972,10 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
1964 if (iov->total == 0) 1972 if (iov->total == 0)
1965 goto failed; 1973 goto failed;
1966 1974
1967 /* calculate the actual number of VFs */ 1975 iov->nr_virtfn = min_t(u16, iov->total, num_vfs_param);
1968 iov->nr_virtfn = min_t(u16, iov->total, (u16)num_vfs_param); 1976
1977 DP(BNX2X_MSG_IOV, "num_vfs_param was %d, nr_virtfn was %d\n",
1978 num_vfs_param, iov->nr_virtfn);
1969 1979
1970 /* allocate the vf array */ 1980 /* allocate the vf array */
1971 bp->vfdb->vfs = kzalloc(sizeof(struct bnx2x_virtf) * 1981 bp->vfdb->vfs = kzalloc(sizeof(struct bnx2x_virtf) *
@@ -2378,8 +2388,8 @@ int bnx2x_iov_eq_sp_event(struct bnx2x *bp, union event_ring_elem *elem)
2378 goto get_vf; 2388 goto get_vf;
2379 case EVENT_RING_OPCODE_MALICIOUS_VF: 2389 case EVENT_RING_OPCODE_MALICIOUS_VF:
2380 abs_vfid = elem->message.data.malicious_vf_event.vf_id; 2390 abs_vfid = elem->message.data.malicious_vf_event.vf_id;
2381 DP(BNX2X_MSG_IOV, "Got VF MALICIOUS notification abs_vfid=%d\n", 2391 DP(BNX2X_MSG_IOV, "Got VF MALICIOUS notification abs_vfid=%d err_id=0x%x\n",
2382 abs_vfid); 2392 abs_vfid, elem->message.data.malicious_vf_event.err_id);
2383 goto get_vf; 2393 goto get_vf;
2384 default: 2394 default:
2385 return 1; 2395 return 1;
@@ -2436,8 +2446,8 @@ get_vf:
2436 /* Do nothing for now */ 2446 /* Do nothing for now */
2437 break; 2447 break;
2438 case EVENT_RING_OPCODE_MALICIOUS_VF: 2448 case EVENT_RING_OPCODE_MALICIOUS_VF:
2439 DP(BNX2X_MSG_IOV, "got VF [%d] MALICIOUS notification\n", 2449 DP(BNX2X_MSG_IOV, "Got VF MALICIOUS notification abs_vfid=%d error id %x\n",
2440 vf->abs_vfid); 2450 abs_vfid, elem->message.data.malicious_vf_event.err_id);
2441 /* Do nothing for now */ 2451 /* Do nothing for now */
2442 break; 2452 break;
2443 } 2453 }
@@ -3012,21 +3022,138 @@ void bnx2x_unlock_vf_pf_channel(struct bnx2x *bp, struct bnx2x_virtf *vf,
3012 vf->op_current = CHANNEL_TLV_NONE; 3022 vf->op_current = CHANNEL_TLV_NONE;
3013} 3023}
3014 3024
3015void bnx2x_enable_sriov(struct bnx2x *bp) 3025int bnx2x_sriov_configure(struct pci_dev *dev, int num_vfs_param)
3016{ 3026{
3017 int rc = 0;
3018 3027
3019 /* disable sriov in case it is still enabled */ 3028 struct bnx2x *bp = netdev_priv(pci_get_drvdata(dev));
3029
3030 DP(BNX2X_MSG_IOV, "bnx2x_sriov_configure called with %d, BNX2X_NR_VIRTFN(bp) was %d\n",
3031 num_vfs_param, BNX2X_NR_VIRTFN(bp));
3032
3033 /* HW channel is only operational when PF is up */
3034 if (bp->state != BNX2X_STATE_OPEN) {
3035 BNX2X_ERR("VF num configuration via sysfs not supported while PF is down\n");
3036 return -EINVAL;
3037 }
3038
3039 /* we are always bound by the total_vfs in the configuration space */
3040 if (num_vfs_param > BNX2X_NR_VIRTFN(bp)) {
3041 BNX2X_ERR("truncating requested number of VFs (%d) down to maximum allowed (%d)\n",
3042 num_vfs_param, BNX2X_NR_VIRTFN(bp));
3043 num_vfs_param = BNX2X_NR_VIRTFN(bp);
3044 }
3045
3046 bp->requested_nr_virtfn = num_vfs_param;
3047 if (num_vfs_param == 0) {
3048 pci_disable_sriov(dev);
3049 return 0;
3050 } else {
3051 return bnx2x_enable_sriov(bp);
3052 }
3053}
3054
3055int bnx2x_enable_sriov(struct bnx2x *bp)
3056{
3057 int rc = 0, req_vfs = bp->requested_nr_virtfn;
3058
3059 rc = pci_enable_sriov(bp->pdev, req_vfs);
3060 if (rc) {
3061 BNX2X_ERR("pci_enable_sriov failed with %d\n", rc);
3062 return rc;
3063 }
3064 DP(BNX2X_MSG_IOV, "sriov enabled (%d vfs)\n", req_vfs);
3065 return req_vfs;
3066}
3067
3068void bnx2x_pf_set_vfs_vlan(struct bnx2x *bp)
3069{
3070 int vfidx;
3071 struct pf_vf_bulletin_content *bulletin;
3072
3073 DP(BNX2X_MSG_IOV, "configuring vlan for VFs from sp-task\n");
3074 for_each_vf(bp, vfidx) {
3075 bulletin = BP_VF_BULLETIN(bp, vfidx);
3076 if (BP_VF(bp, vfidx)->cfg_flags & VF_CFG_VLAN)
3077 bnx2x_set_vf_vlan(bp->dev, vfidx, bulletin->vlan, 0);
3078 }
3079}
3080
3081void bnx2x_disable_sriov(struct bnx2x *bp)
3082{
3020 pci_disable_sriov(bp->pdev); 3083 pci_disable_sriov(bp->pdev);
3021 DP(BNX2X_MSG_IOV, "sriov disabled\n"); 3084}
3085
3086static int bnx2x_vf_ndo_sanity(struct bnx2x *bp, int vfidx,
3087 struct bnx2x_virtf *vf)
3088{
3089 if (!IS_SRIOV(bp)) {
3090 BNX2X_ERR("vf ndo called even though sriov is disabled\n");
3091 return -EINVAL;
3092 }
3093
3094 if (vfidx >= BNX2X_NR_VIRTFN(bp)) {
3095 BNX2X_ERR("vf ndo called for uninitialized VF. vfidx was %d BNX2X_NR_VIRTFN was %d\n",
3096 vfidx, BNX2X_NR_VIRTFN(bp));
3097 return -EINVAL;
3098 }
3099
3100 if (!vf) {
3101 BNX2X_ERR("vf ndo called but vf was null. vfidx was %d\n",
3102 vfidx);
3103 return -EINVAL;
3104 }
3022 3105
3023 /* enable sriov */ 3106 return 0;
3024 DP(BNX2X_MSG_IOV, "vf num (%d)\n", (bp->vfdb->sriov.nr_virtfn)); 3107}
3025 rc = pci_enable_sriov(bp->pdev, (bp->vfdb->sriov.nr_virtfn)); 3108
3109int bnx2x_get_vf_config(struct net_device *dev, int vfidx,
3110 struct ifla_vf_info *ivi)
3111{
3112 struct bnx2x *bp = netdev_priv(dev);
3113 struct bnx2x_virtf *vf = BP_VF(bp, vfidx);
3114 struct bnx2x_vlan_mac_obj *mac_obj = &bnx2x_vfq(vf, 0, mac_obj);
3115 struct bnx2x_vlan_mac_obj *vlan_obj = &bnx2x_vfq(vf, 0, vlan_obj);
3116 struct pf_vf_bulletin_content *bulletin = BP_VF_BULLETIN(bp, vfidx);
3117 int rc;
3118
3119 /* sanity */
3120 rc = bnx2x_vf_ndo_sanity(bp, vfidx, vf);
3026 if (rc) 3121 if (rc)
3027 BNX2X_ERR("pci_enable_sriov failed with %d\n", rc); 3122 return rc;
3028 else 3123 if (!mac_obj || !vlan_obj || !bulletin) {
3029 DP(BNX2X_MSG_IOV, "sriov enabled\n"); 3124 BNX2X_ERR("VF partially initialized\n");
3125 return -EINVAL;
3126 }
3127
3128 ivi->vf = vfidx;
3129 ivi->qos = 0;
3130 ivi->tx_rate = 10000; /* always 10G; TBA: take from link struct */
3131 ivi->spoofchk = 1; /* always enabled */
3132 if (vf->state == VF_ENABLED) {
3133 /* mac and vlan are in vlan_mac objects */
3134 mac_obj->get_n_elements(bp, mac_obj, 1, (u8 *)&ivi->mac,
3135 0, ETH_ALEN);
3136 vlan_obj->get_n_elements(bp, vlan_obj, 1, (u8 *)&ivi->vlan,
3137 0, VLAN_HLEN);
3138 } else {
3139 /* mac */
3140 if (bulletin->valid_bitmap & (1 << MAC_ADDR_VALID))
3142 /* mac configured by ndo so it's in the bulletin board */
3142 memcpy(&ivi->mac, bulletin->mac, ETH_ALEN);
3143 else
3144 /* function has not been loaded yet. Show mac as 0s */
3145 memset(&ivi->mac, 0, ETH_ALEN);
3146
3147 /* vlan */
3148 if (bulletin->valid_bitmap & (1 << VLAN_VALID))
3150 /* vlan configured by ndo so it's in the bulletin board */
3150 memcpy(&ivi->vlan, &bulletin->vlan, VLAN_HLEN);
3151 else
3152 /* function has not been loaded yet. Show vlans as 0s */
3153 memset(&ivi->vlan, 0, VLAN_HLEN);
3154 }
3155
3156 return 0;
3030} 3157}
3031 3158
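
bnx2x_get_vf_config() above fills struct ifla_vf_info, which rtnetlink then serializes for "ip link show". In this kernel generation the structure carries roughly:

	struct ifla_vf_info {
		__u32 vf;
		__u8  mac[32];
		__u32 vlan;
		__u32 qos;
		__u32 tx_rate;
		__u32 spoofchk;
	};
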
3032/* New mac for VF. Consider these cases: 3159/* New mac for VF. Consider these cases:
@@ -3044,23 +3171,19 @@ void bnx2x_enable_sriov(struct bnx2x *bp)
3044 * VF to configure any mac for itself except for this mac. In case of a race 3171 * VF to configure any mac for itself except for this mac. In case of a race
3045 * where the VF fails to see the new post on its bulletin board before sending a 3172 * where the VF fails to see the new post on its bulletin board before sending a
3046 * mac configuration request, the PF will simply fail the request and VF can try 3173 * mac configuration request, the PF will simply fail the request and VF can try
3047 * again after consulting its bulletin board 3174 * again after consulting its bulletin board.
3048 */ 3175 */
3049int bnx2x_set_vf_mac(struct net_device *dev, int queue, u8 *mac) 3176int bnx2x_set_vf_mac(struct net_device *dev, int vfidx, u8 *mac)
3050{ 3177{
3051 struct bnx2x *bp = netdev_priv(dev); 3178 struct bnx2x *bp = netdev_priv(dev);
3052 int rc, q_logical_state, vfidx = queue; 3179 int rc, q_logical_state;
3053 struct bnx2x_virtf *vf = BP_VF(bp, vfidx); 3180 struct bnx2x_virtf *vf = BP_VF(bp, vfidx);
3054 struct pf_vf_bulletin_content *bulletin = BP_VF_BULLETIN(bp, vfidx); 3181 struct pf_vf_bulletin_content *bulletin = BP_VF_BULLETIN(bp, vfidx);
3055 3182
3056 /* if SRIOV is disabled there is nothing to do (and somewhere, someone 3183 /* sanity */
3057 * has erred). 3184 rc = bnx2x_vf_ndo_sanity(bp, vfidx, vf);
3058 */ 3185 if (rc)
3059 if (!IS_SRIOV(bp)) { 3186 return rc;
3060 BNX2X_ERR("bnx2x_set_vf_mac called though sriov is disabled\n");
3061 return -EINVAL;
3062 }
3063
3064 if (!is_valid_ether_addr(mac)) { 3187 if (!is_valid_ether_addr(mac)) {
3065 BNX2X_ERR("mac address invalid\n"); 3188 BNX2X_ERR("mac address invalid\n");
3066 return -EINVAL; 3189 return -EINVAL;
@@ -3085,7 +3208,7 @@ int bnx2x_set_vf_mac(struct net_device *dev, int queue, u8 *mac)
3085 if (vf->state == VF_ENABLED && 3208 if (vf->state == VF_ENABLED &&
3086 q_logical_state == BNX2X_Q_LOGICAL_STATE_ACTIVE) { 3209 q_logical_state == BNX2X_Q_LOGICAL_STATE_ACTIVE) {
3087 /* configure the mac in device on this vf's queue */ 3210 /* configure the mac in device on this vf's queue */
3088 unsigned long flags = 0; 3211 unsigned long ramrod_flags = 0;
3089 struct bnx2x_vlan_mac_obj *mac_obj = &bnx2x_vfq(vf, 0, mac_obj); 3212 struct bnx2x_vlan_mac_obj *mac_obj = &bnx2x_vfq(vf, 0, mac_obj);
3090 3213
3091 /* must lock vfpf channel to protect against vf flows */ 3214 /* must lock vfpf channel to protect against vf flows */
@@ -3106,14 +3229,133 @@ int bnx2x_set_vf_mac(struct net_device *dev, int queue, u8 *mac)
3106 } 3229 }
3107 3230
3108 /* configure the new mac to device */ 3231 /* configure the new mac to device */
3109 __set_bit(RAMROD_COMP_WAIT, &flags); 3232 __set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
3110 bnx2x_set_mac_one(bp, (u8 *)&bulletin->mac, mac_obj, true, 3233 bnx2x_set_mac_one(bp, (u8 *)&bulletin->mac, mac_obj, true,
3111 BNX2X_ETH_MAC, &flags); 3234 BNX2X_ETH_MAC, &ramrod_flags);
3112 3235
3113 bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_MAC); 3236 bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_MAC);
3114 } 3237 }
3115 3238
3116 return rc; 3239 return 0;
3240}
3241
3242int bnx2x_set_vf_vlan(struct net_device *dev, int vfidx, u16 vlan, u8 qos)
3243{
3244 struct bnx2x *bp = netdev_priv(dev);
3245 int rc, q_logical_state;
3246 struct bnx2x_virtf *vf = BP_VF(bp, vfidx);
3247 struct pf_vf_bulletin_content *bulletin = BP_VF_BULLETIN(bp, vfidx);
3248
3249 /* sanity */
3250 rc = bnx2x_vf_ndo_sanity(bp, vfidx, vf);
3251 if (rc)
3252 return rc;
3253
3254 if (vlan > 4095) {
3255 BNX2X_ERR("illegal vlan value %d\n", vlan);
3256 return -EINVAL;
3257 }
3258
3259 DP(BNX2X_MSG_IOV, "configuring VF %d with VLAN %d qos %d\n",
3260 vfidx, vlan, 0);
3261
3262 /* update PF's copy of the VF's bulletin. No point in posting the vlan
3263 * to the VF since it doesn't have anything to do with it. But it is useful
3264 * to store it here in case the VF is not up yet and we can only
3265 * configure the vlan later when it comes up.
3266 */
3267 bulletin->valid_bitmap |= 1 << VLAN_VALID;
3268 bulletin->vlan = vlan;
3269
3270 /* is vf initialized and queue set up? */
3271 q_logical_state =
3272 bnx2x_get_q_logical_state(bp, &bnx2x_vfq(vf, 0, sp_obj));
3273 if (vf->state == VF_ENABLED &&
3274 q_logical_state == BNX2X_Q_LOGICAL_STATE_ACTIVE) {
3275 /* configure the vlan in device on this vf's queue */
3276 unsigned long ramrod_flags = 0;
3277 unsigned long vlan_mac_flags = 0;
3278 struct bnx2x_vlan_mac_obj *vlan_obj =
3279 &bnx2x_vfq(vf, 0, vlan_obj);
3280 struct bnx2x_vlan_mac_ramrod_params ramrod_param;
3281 struct bnx2x_queue_state_params q_params = {NULL};
3282 struct bnx2x_queue_update_params *update_params;
3283
3284 memset(&ramrod_param, 0, sizeof(ramrod_param));
3285
3286 /* must lock vfpf channel to protect against vf flows */
3287 bnx2x_lock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN);
3288
3289 /* remove existing vlans */
3290 __set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
3291 rc = vlan_obj->delete_all(bp, vlan_obj, &vlan_mac_flags,
3292 &ramrod_flags);
3293 if (rc) {
3294 BNX2X_ERR("failed to delete vlans\n");
3295 return -EINVAL;
3296 }
3297
3298 /* send queue update ramrod to configure default vlan and silent
3299 * vlan removal
3300 */
3301 __set_bit(RAMROD_COMP_WAIT, &q_params.ramrod_flags);
3302 q_params.cmd = BNX2X_Q_CMD_UPDATE;
3303 q_params.q_obj = &bnx2x_vfq(vf, 0, sp_obj);
3304 update_params = &q_params.params.update;
3305 __set_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN_CHNG,
3306 &update_params->update_flags);
3307 __set_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM_CHNG,
3308 &update_params->update_flags);
3309
3310 if (vlan == 0) {
3311 /* if vlan is 0 then we want to leave the VF traffic
3312 * untagged, and leave the incoming traffic untouched
3313 * (i.e. do not remove any vlan tags).
3314 */
3315 __clear_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN,
3316 &update_params->update_flags);
3317 __clear_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM,
3318 &update_params->update_flags);
3319 } else {
3320 /* configure the new vlan to device */
3321 __set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
3322 ramrod_param.vlan_mac_obj = vlan_obj;
3323 ramrod_param.ramrod_flags = ramrod_flags;
3324 ramrod_param.user_req.u.vlan.vlan = vlan;
3325 ramrod_param.user_req.cmd = BNX2X_VLAN_MAC_ADD;
3326 rc = bnx2x_config_vlan_mac(bp, &ramrod_param);
3327 if (rc) {
3328 BNX2X_ERR("failed to configure vlan\n");
3329 return -EINVAL;
3330 }
3331
3332 /* configure default vlan to vf queue and set silent
3333 * vlan removal (the vf remains unaware of this vlan).
3334 */
3335 update_params = &q_params.params.update;
3336 __set_bit(BNX2X_Q_UPDATE_DEF_VLAN_EN,
3337 &update_params->update_flags);
3338 __set_bit(BNX2X_Q_UPDATE_SILENT_VLAN_REM,
3339 &update_params->update_flags);
3340 update_params->def_vlan = vlan;
3341 }
3342
3343 /* Update the Queue state */
3344 rc = bnx2x_queue_state_change(bp, &q_params);
3345 if (rc) {
3346 BNX2X_ERR("Failed to configure default VLAN\n");
3347 return rc;
3348 }
3349
3350 /* clear the flag indicating that this VF needs its vlan
3351 * (will only be set if the HV configured the vlan before the VF was
3352 * up, and we were called because the VF came up later)
3353 */
3354 vf->cfg_flags &= ~VF_CFG_VLAN;
3355
3356 bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN);
3357 }
3358 return 0;
3117} 3359}
3118 3360
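
From the administrator's side this path is exercised with "ip link set dev <pf> vf <n> vlan <vid>". Passing vid 0 takes the branch above that clears DEF_VLAN_EN and SILENT_VLAN_REM, returning the VF to untagged operation; a hedged example of the two modes, calling the ndo directly:

	bnx2x_set_vf_vlan(dev, 0, 100, 0);	/* force vlan 100 on VF 0 */
	bnx2x_set_vf_vlan(dev, 0, 0, 0);	/* back to untagged */
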
3119/* crc is the first field in the bulletin board. compute the crc over the 3361/* crc is the first field in the bulletin board. compute the crc over the
@@ -3165,20 +3407,26 @@ enum sample_bulletin_result bnx2x_sample_bulletin(struct bnx2x *bp)
3165 memcpy(bp->dev->dev_addr, bulletin.mac, ETH_ALEN); 3407 memcpy(bp->dev->dev_addr, bulletin.mac, ETH_ALEN);
3166 } 3408 }
3167 3409
3410 /* the vlan in bulletin board is valid and is new */
3411 if (bulletin.valid_bitmap & 1 << VLAN_VALID)
3412 memcpy(&bulletin.vlan, &bp->old_bulletin.vlan, VLAN_HLEN);
3413
3168 /* copy new bulletin board to bp */ 3414 /* copy new bulletin board to bp */
3169 bp->old_bulletin = bulletin; 3415 bp->old_bulletin = bulletin;
3170 3416
3171 return PFVF_BULLETIN_UPDATED; 3417 return PFVF_BULLETIN_UPDATED;
3172} 3418}
3173 3419
3174void bnx2x_vf_map_doorbells(struct bnx2x *bp) 3420void __iomem *bnx2x_vf_doorbells(struct bnx2x *bp)
3175{ 3421{
3176 /* vf doorbells are embedded within the regview */ 3422 /* vf doorbells are embedded within the regview */
3177 bp->doorbells = bp->regview + PXP_VF_ADDR_DB_START; 3423 return bp->regview + PXP_VF_ADDR_DB_START;
3178} 3424}
3179 3425
3180int bnx2x_vf_pci_alloc(struct bnx2x *bp) 3426int bnx2x_vf_pci_alloc(struct bnx2x *bp)
3181{ 3427{
3428 mutex_init(&bp->vf2pf_mutex);
3429
3182 /* allocate vf2pf mailbox for vf to pf channel */ 3430 /* allocate vf2pf mailbox for vf to pf channel */
3183 BNX2X_PCI_ALLOC(bp->vf2pf_mbox, &bp->vf2pf_mbox_mapping, 3431 BNX2X_PCI_ALLOC(bp->vf2pf_mbox, &bp->vf2pf_mbox_mapping,
3184 sizeof(struct bnx2x_vf_mbx_msg)); 3432 sizeof(struct bnx2x_vf_mbx_msg));
@@ -3196,3 +3444,26 @@ alloc_mem_err:
3196 sizeof(union pf_vf_bulletin)); 3444 sizeof(union pf_vf_bulletin));
3197 return -ENOMEM; 3445 return -ENOMEM;
3198} 3446}
3447
3448int bnx2x_open_epilog(struct bnx2x *bp)
3449{
3450 /* Enable SRIOV via delayed work. This must be done via delayed work
3451 * because it causes the probe of the VF devices to be run, which invokes
3452 * register_netdevice, which must be called with the rtnl lock held. As we
3453 * are holding the lock right now, that could only work if the probe did
3454 * not take the lock. However, as the probe of the VF may be called from
3455 * other contexts as well (such as passthrough of the VF to a VM), it can't
3456 * assume the lock is being held for it. Using delayed work here allows the
3457 * probe code to simply take the lock (i.e. wait for it to be released
3458 * if it is being held). We only want to do this if the number of VFs
3459 * was set before the PF driver was loaded.
3460 */
3461 if (IS_SRIOV(bp) && BNX2X_NR_VIRTFN(bp)) {
3462 smp_mb__before_clear_bit();
3463 set_bit(BNX2X_SP_RTNL_ENABLE_SRIOV, &bp->sp_rtnl_state);
3464 smp_mb__after_clear_bit();
3465 schedule_delayed_work(&bp->sp_rtnl_task, 0);
3466 }
3467
3468 return 0;
3469}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
index b4050173add9..d67ddc554c0f 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
@@ -193,6 +193,7 @@ struct bnx2x_virtf {
193#define VF_CFG_TPA 0x0004 193#define VF_CFG_TPA 0x0004
194#define VF_CFG_INT_SIMD 0x0008 194#define VF_CFG_INT_SIMD 0x0008
195#define VF_CACHE_LINE 0x0010 195#define VF_CACHE_LINE 0x0010
196#define VF_CFG_VLAN 0x0020
196 197
197 u8 state; 198 u8 state;
198#define VF_FREE 0 /* VF ready to be acquired holds no resc */ 199#define VF_FREE 0 /* VF ready to be acquired holds no resc */
@@ -712,6 +713,7 @@ void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
712 u16 length); 713 u16 length);
713void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv, 714void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
714 u16 type, u16 length); 715 u16 type, u16 length);
716void bnx2x_vfpf_finalize(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv);
715void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list); 717void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list);
716 718
717bool bnx2x_tlv_supported(u16 tlvtype); 719bool bnx2x_tlv_supported(u16 tlvtype);
@@ -731,7 +733,7 @@ int bnx2x_vfpf_init(struct bnx2x *bp);
731void bnx2x_vfpf_close_vf(struct bnx2x *bp); 733void bnx2x_vfpf_close_vf(struct bnx2x *bp);
732int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx); 734int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx);
733int bnx2x_vfpf_teardown_queue(struct bnx2x *bp, int qidx); 735int bnx2x_vfpf_teardown_queue(struct bnx2x *bp, int qidx);
734int bnx2x_vfpf_set_mac(struct bnx2x *bp); 736int bnx2x_vfpf_config_mac(struct bnx2x *bp, u8 *addr, u8 vf_qid, bool set);
735int bnx2x_vfpf_set_mcast(struct net_device *dev); 737int bnx2x_vfpf_set_mcast(struct net_device *dev);
736int bnx2x_vfpf_storm_rx_mode(struct bnx2x *bp); 738int bnx2x_vfpf_storm_rx_mode(struct bnx2x *bp);
737 739
@@ -750,13 +752,17 @@ static inline int bnx2x_vf_ustorm_prods_offset(struct bnx2x *bp,
750} 752}
751 753
752enum sample_bulletin_result bnx2x_sample_bulletin(struct bnx2x *bp); 754enum sample_bulletin_result bnx2x_sample_bulletin(struct bnx2x *bp);
753void bnx2x_vf_map_doorbells(struct bnx2x *bp); 755void __iomem *bnx2x_vf_doorbells(struct bnx2x *bp);
754int bnx2x_vf_pci_alloc(struct bnx2x *bp); 756int bnx2x_vf_pci_alloc(struct bnx2x *bp);
755void bnx2x_enable_sriov(struct bnx2x *bp); 757int bnx2x_enable_sriov(struct bnx2x *bp);
758void bnx2x_disable_sriov(struct bnx2x *bp);
756static inline int bnx2x_vf_headroom(struct bnx2x *bp) 759static inline int bnx2x_vf_headroom(struct bnx2x *bp)
757{ 760{
758 return bp->vfdb->sriov.nr_virtfn * BNX2X_CLIENTS_PER_VF; 761 return bp->vfdb->sriov.nr_virtfn * BNX2X_CLIENTS_PER_VF;
759} 762}
763void bnx2x_pf_set_vfs_vlan(struct bnx2x *bp);
764int bnx2x_sriov_configure(struct pci_dev *dev, int num_vfs);
765int bnx2x_open_epilog(struct bnx2x *bp);
760 766
761#else /* CONFIG_BNX2X_SRIOV */ 767#else /* CONFIG_BNX2X_SRIOV */
762 768
@@ -779,7 +785,8 @@ static inline void bnx2x_iov_init_dmae(struct bnx2x *bp) {}
779static inline int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param, 785static inline int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param,
780 int num_vfs_param) {return 0; } 786 int num_vfs_param) {return 0; }
781static inline void bnx2x_iov_remove_one(struct bnx2x *bp) {} 787static inline void bnx2x_iov_remove_one(struct bnx2x *bp) {}
782static inline void bnx2x_enable_sriov(struct bnx2x *bp) {} 788static inline int bnx2x_enable_sriov(struct bnx2x *bp) {return 0; }
789static inline void bnx2x_disable_sriov(struct bnx2x *bp) {}
783static inline int bnx2x_vfpf_acquire(struct bnx2x *bp, 790static inline int bnx2x_vfpf_acquire(struct bnx2x *bp,
784 u8 tx_count, u8 rx_count) {return 0; } 791 u8 tx_count, u8 rx_count) {return 0; }
785static inline int bnx2x_vfpf_release(struct bnx2x *bp) {return 0; } 792static inline int bnx2x_vfpf_release(struct bnx2x *bp) {return 0; }
@@ -787,7 +794,8 @@ static inline int bnx2x_vfpf_init(struct bnx2x *bp) {return 0; }
787static inline void bnx2x_vfpf_close_vf(struct bnx2x *bp) {} 794static inline void bnx2x_vfpf_close_vf(struct bnx2x *bp) {}
788static inline int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx) {return 0; } 795static inline int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx) {return 0; }
789static inline int bnx2x_vfpf_teardown_queue(struct bnx2x *bp, int qidx) {return 0; } 796static inline int bnx2x_vfpf_teardown_queue(struct bnx2x *bp, int qidx) {return 0; }
790static inline int bnx2x_vfpf_set_mac(struct bnx2x *bp) {return 0; } 797static inline int bnx2x_vfpf_config_mac(struct bnx2x *bp, u8 *addr,
798 u8 vf_qid, bool set) {return 0; }
791static inline int bnx2x_vfpf_set_mcast(struct net_device *dev) {return 0; } 799static inline int bnx2x_vfpf_set_mcast(struct net_device *dev) {return 0; }
792static inline int bnx2x_vfpf_storm_rx_mode(struct bnx2x *bp) {return 0; } 800static inline int bnx2x_vfpf_storm_rx_mode(struct bnx2x *bp) {return 0; }
793static inline int bnx2x_iov_nic_init(struct bnx2x *bp) {return 0; } 801static inline int bnx2x_iov_nic_init(struct bnx2x *bp) {return 0; }
@@ -802,8 +810,15 @@ static inline enum sample_bulletin_result bnx2x_sample_bulletin(struct bnx2x *bp
802 return PFVF_BULLETIN_UNCHANGED; 810 return PFVF_BULLETIN_UNCHANGED;
803} 811}
804 812
805static inline int bnx2x_vf_map_doorbells(struct bnx2x *bp) {return 0; } 813static inline void __iomem *bnx2x_vf_doorbells(struct bnx2x *bp)
814{
815 return NULL;
816}
817
806static inline int bnx2x_vf_pci_alloc(struct bnx2x *bp) {return 0; } 818static inline int bnx2x_vf_pci_alloc(struct bnx2x *bp) {return 0; }
819static inline void bnx2x_pf_set_vfs_vlan(struct bnx2x *bp) {}
820static inline int bnx2x_sriov_configure(struct pci_dev *dev, int num_vfs) {return 0; }
821static inline int bnx2x_open_epilog(struct bnx2x *bp) {return 0; }
807 822
808#endif /* CONFIG_BNX2X_SRIOV */ 823#endif /* CONFIG_BNX2X_SRIOV */
809#endif /* bnx2x_sriov.h */ 824#endif /* bnx2x_sriov.h */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
index 4397f8b76f2e..2ca3d94fcec2 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
@@ -1547,11 +1547,51 @@ static void bnx2x_prep_fw_stats_req(struct bnx2x *bp)
1547 } 1547 }
1548} 1548}
1549 1549
1550void bnx2x_memset_stats(struct bnx2x *bp)
1551{
1552 int i;
1553
1554 /* function stats */
1555 for_each_queue(bp, i) {
1556 struct bnx2x_fp_stats *fp_stats = &bp->fp_stats[i];
1557
1558 memset(&fp_stats->old_tclient, 0,
1559 sizeof(fp_stats->old_tclient));
1560 memset(&fp_stats->old_uclient, 0,
1561 sizeof(fp_stats->old_uclient));
1562 memset(&fp_stats->old_xclient, 0,
1563 sizeof(fp_stats->old_xclient));
1564 if (bp->stats_init) {
1565 memset(&fp_stats->eth_q_stats, 0,
1566 sizeof(fp_stats->eth_q_stats));
1567 memset(&fp_stats->eth_q_stats_old, 0,
1568 sizeof(fp_stats->eth_q_stats_old));
1569 }
1570 }
1571
1572 memset(&bp->dev->stats, 0, sizeof(bp->dev->stats));
1573
1574 if (bp->stats_init) {
1575 memset(&bp->net_stats_old, 0, sizeof(bp->net_stats_old));
1576 memset(&bp->fw_stats_old, 0, sizeof(bp->fw_stats_old));
1577 memset(&bp->eth_stats_old, 0, sizeof(bp->eth_stats_old));
1578 memset(&bp->eth_stats, 0, sizeof(bp->eth_stats));
1579 memset(&bp->func_stats, 0, sizeof(bp->func_stats));
1580 }
1581
1582 bp->stats_state = STATS_STATE_DISABLED;
1583
1584 if (bp->port.pmf && bp->port.port_stx)
1585 bnx2x_port_stats_base_init(bp);
1586
1587 /* mark the end of statistics initialization */
1588 bp->stats_init = false;
1589}
1590
1550void bnx2x_stats_init(struct bnx2x *bp) 1591void bnx2x_stats_init(struct bnx2x *bp)
1551{ 1592{
1552 int /*abs*/port = BP_PORT(bp); 1593 int /*abs*/port = BP_PORT(bp);
1553 int mb_idx = BP_FW_MB_IDX(bp); 1594 int mb_idx = BP_FW_MB_IDX(bp);
1554 int i;
1555 1595
1556 bp->stats_pending = 0; 1596 bp->stats_pending = 0;
1557 bp->executer_idx = 0; 1597 bp->executer_idx = 0;
@@ -1587,36 +1627,11 @@ void bnx2x_stats_init(struct bnx2x *bp)
1587 &(bp->port.old_nig_stats.egress_mac_pkt1_lo), 2); 1627 &(bp->port.old_nig_stats.egress_mac_pkt1_lo), 2);
1588 } 1628 }
1589 1629
1590 /* function stats */
1591 for_each_queue(bp, i) {
1592 struct bnx2x_fp_stats *fp_stats = &bp->fp_stats[i];
1593
1594 memset(&fp_stats->old_tclient, 0,
1595 sizeof(fp_stats->old_tclient));
1596 memset(&fp_stats->old_uclient, 0,
1597 sizeof(fp_stats->old_uclient));
1598 memset(&fp_stats->old_xclient, 0,
1599 sizeof(fp_stats->old_xclient));
1600 if (bp->stats_init) {
1601 memset(&fp_stats->eth_q_stats, 0,
1602 sizeof(fp_stats->eth_q_stats));
1603 memset(&fp_stats->eth_q_stats_old, 0,
1604 sizeof(fp_stats->eth_q_stats_old));
1605 }
1606 }
1607
1608 /* Prepare statistics ramrod data */ 1630 /* Prepare statistics ramrod data */
1609 bnx2x_prep_fw_stats_req(bp); 1631 bnx2x_prep_fw_stats_req(bp);
1610 1632
1611 memset(&bp->dev->stats, 0, sizeof(bp->dev->stats)); 1633 /* Clean SP from previous statistics */
1612 if (bp->stats_init) { 1634 if (bp->stats_init) {
1613 memset(&bp->net_stats_old, 0, sizeof(bp->net_stats_old));
1614 memset(&bp->fw_stats_old, 0, sizeof(bp->fw_stats_old));
1615 memset(&bp->eth_stats_old, 0, sizeof(bp->eth_stats_old));
1616 memset(&bp->eth_stats, 0, sizeof(bp->eth_stats));
1617 memset(&bp->func_stats, 0, sizeof(bp->func_stats));
1618
1619 /* Clean SP from previous statistics */
1620 if (bp->func_stx) { 1635 if (bp->func_stx) {
1621 memset(bnx2x_sp(bp, func_stats), 0, 1636 memset(bnx2x_sp(bp, func_stats), 0,
1622 sizeof(struct host_func_stats)); 1637 sizeof(struct host_func_stats));
@@ -1626,13 +1641,7 @@ void bnx2x_stats_init(struct bnx2x *bp)
1626 } 1641 }
1627 } 1642 }
1628 1643
1629 bp->stats_state = STATS_STATE_DISABLED; 1644 bnx2x_memset_stats(bp);
1630
1631 if (bp->port.pmf && bp->port.port_stx)
1632 bnx2x_port_stats_base_init(bp);
1633
1634 /* mark the end of statistics initialization */
1635 bp->stats_init = false;
1636} 1645}
1637 1646
1638void bnx2x_save_statistics(struct bnx2x *bp) 1647void bnx2x_save_statistics(struct bnx2x *bp)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
index 198f6f1c9ad5..d117f472816c 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
@@ -540,8 +540,8 @@ struct bnx2x_fw_port_stats_old {
540/* forward */ 540/* forward */
541struct bnx2x; 541struct bnx2x;
542 542
543void bnx2x_memset_stats(struct bnx2x *bp);
543void bnx2x_stats_init(struct bnx2x *bp); 544void bnx2x_stats_init(struct bnx2x *bp);
544
545void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event); 545void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event);
546 546
547/** 547/**
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
index 531eebf40d60..928b074d7d80 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
@@ -36,6 +36,8 @@ void bnx2x_add_tlv(struct bnx2x *bp, void *tlvs_list, u16 offset, u16 type,
36void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv, 36void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
37 u16 type, u16 length) 37 u16 type, u16 length)
38{ 38{
39 mutex_lock(&bp->vf2pf_mutex);
40
39 DP(BNX2X_MSG_IOV, "preparing to send %d tlv over vf pf channel\n", 41 DP(BNX2X_MSG_IOV, "preparing to send %d tlv over vf pf channel\n",
40 type); 42 type);
41 43
@@ -49,6 +51,15 @@ void bnx2x_vfpf_prep(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv,
49 first_tlv->resp_msg_offset = sizeof(bp->vf2pf_mbox->req); 51 first_tlv->resp_msg_offset = sizeof(bp->vf2pf_mbox->req);
50} 52}
51 53
54/* releases the mailbox */
55void bnx2x_vfpf_finalize(struct bnx2x *bp, struct vfpf_first_tlv *first_tlv)
56{
57 DP(BNX2X_MSG_IOV, "done sending [%d] tlv over vf pf channel\n",
58 first_tlv->tl.type);
59
60 mutex_unlock(&bp->vf2pf_mutex);
61}
62
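
With the mutex taken in bnx2x_vfpf_prep() and dropped only in bnx2x_vfpf_finalize(), every VF->PF request now follows a prep/.../finalize bracket; this is why the hunks below convert early returns into "goto out". The shape of a request, sketched (CHANNEL_TLV_FOO stands in for any request type):

	bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_FOO, sizeof(*req));
	rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
	if (rc)
		goto out;	/* never return with vf2pf_mutex held */
	/* ... interpret resp ... */
out:
	bnx2x_vfpf_finalize(bp, &req->first_tlv);	/* releases the mutex */
	return rc;
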
52/* list the types and lengths of the tlvs on the buffer */ 63/* list the types and lengths of the tlvs on the buffer */
53void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list) 64void bnx2x_dp_tlv_list(struct bnx2x *bp, void *tlvs_list)
54{ 65{
@@ -181,8 +192,10 @@ int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count)
181 /* clear mailbox and prep first tlv */ 192 /* clear mailbox and prep first tlv */
182 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_ACQUIRE, sizeof(*req)); 193 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_ACQUIRE, sizeof(*req));
183 194
184 if (bnx2x_get_vf_id(bp, &vf_id)) 195 if (bnx2x_get_vf_id(bp, &vf_id)) {
185 return -EAGAIN; 196 rc = -EAGAIN;
197 goto out;
198 }
186 199
187 req->vfdev_info.vf_id = vf_id; 200 req->vfdev_info.vf_id = vf_id;
188 req->vfdev_info.vf_os = 0; 201 req->vfdev_info.vf_os = 0;
@@ -213,7 +226,7 @@ int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count)
213 226
214 /* PF timeout */ 227 /* PF timeout */
215 if (rc) 228 if (rc)
216 return rc; 229 goto out;
217 230
218 /* copy acquire response from buffer to bp */ 231 /* copy acquire response from buffer to bp */
219 memcpy(&bp->acquire_resp, resp, sizeof(bp->acquire_resp)); 232 memcpy(&bp->acquire_resp, resp, sizeof(bp->acquire_resp));
@@ -253,7 +266,8 @@ int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count)
253 /* PF reports error */ 266 /* PF reports error */
254 BNX2X_ERR("Failed to get the requested amount of resources: %d. Breaking...\n", 267 BNX2X_ERR("Failed to get the requested amount of resources: %d. Breaking...\n",
255 bp->acquire_resp.hdr.status); 268 bp->acquire_resp.hdr.status);
256 return -EAGAIN; 269 rc = -EAGAIN;
270 goto out;
257 } 271 }
258 } 272 }
259 273
@@ -279,20 +293,24 @@ int bnx2x_vfpf_acquire(struct bnx2x *bp, u8 tx_count, u8 rx_count)
279 bp->acquire_resp.resc.current_mac_addr, 293 bp->acquire_resp.resc.current_mac_addr,
280 ETH_ALEN); 294 ETH_ALEN);
281 295
282 return 0; 296out:
297 bnx2x_vfpf_finalize(bp, &req->first_tlv);
298 return rc;
283} 299}
284 300
285int bnx2x_vfpf_release(struct bnx2x *bp) 301int bnx2x_vfpf_release(struct bnx2x *bp)
286{ 302{
287 struct vfpf_release_tlv *req = &bp->vf2pf_mbox->req.release; 303 struct vfpf_release_tlv *req = &bp->vf2pf_mbox->req.release;
288 struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp; 304 struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
289 u32 rc = 0, vf_id; 305 u32 rc, vf_id;
290 306
291 /* clear mailbox and prep first tlv */ 307 /* clear mailbox and prep first tlv */
292 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_RELEASE, sizeof(*req)); 308 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_RELEASE, sizeof(*req));
293 309
294 if (bnx2x_get_vf_id(bp, &vf_id)) 310 if (bnx2x_get_vf_id(bp, &vf_id)) {
295 return -EAGAIN; 311 rc = -EAGAIN;
312 goto out;
313 }
296 314
297 req->vf_id = vf_id; 315 req->vf_id = vf_id;
298 316
@@ -308,7 +326,8 @@ int bnx2x_vfpf_release(struct bnx2x *bp)
308 326
309 if (rc) 327 if (rc)
310 /* PF timeout */ 328 /* PF timeout */
311 return rc; 329 goto out;
330
312 if (resp->hdr.status == PFVF_STATUS_SUCCESS) { 331 if (resp->hdr.status == PFVF_STATUS_SUCCESS) {
313 /* PF released us */ 332 /* PF released us */
314 DP(BNX2X_MSG_SP, "vf released\n"); 333 DP(BNX2X_MSG_SP, "vf released\n");
@@ -316,10 +335,13 @@ int bnx2x_vfpf_release(struct bnx2x *bp)
316 /* PF reports error */ 335 /* PF reports error */
317 BNX2X_ERR("PF failed our release request - are we out of sync? response status: %d\n", 336 BNX2X_ERR("PF failed our release request - are we out of sync? response status: %d\n",
318 resp->hdr.status); 337 resp->hdr.status);
319 return -EAGAIN; 338 rc = -EAGAIN;
339 goto out;
320 } 340 }
341out:
342 bnx2x_vfpf_finalize(bp, &req->first_tlv);
321 343
322 return 0; 344 return rc;
323} 345}
324 346
325/* Tell PF about SB addresses */ 347/* Tell PF about SB addresses */
@@ -350,16 +372,20 @@ int bnx2x_vfpf_init(struct bnx2x *bp)
350 372
351 rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping); 373 rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
352 if (rc) 374 if (rc)
353 return rc; 375 goto out;
354 376
355 if (resp->hdr.status != PFVF_STATUS_SUCCESS) { 377 if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
356 BNX2X_ERR("INIT VF failed: %d. Breaking...\n", 378 BNX2X_ERR("INIT VF failed: %d. Breaking...\n",
357 resp->hdr.status); 379 resp->hdr.status);
358 return -EAGAIN; 380 rc = -EAGAIN;
381 goto out;
359 } 382 }
360 383
361 DP(BNX2X_MSG_SP, "INIT VF Succeeded\n"); 384 DP(BNX2X_MSG_SP, "INIT VF Succeeded\n");
362 return 0; 385out:
386 bnx2x_vfpf_finalize(bp, &req->first_tlv);
387
388 return rc;
363} 389}
364 390
365/* CLOSE VF - opposite to INIT_VF */ 391/* CLOSE VF - opposite to INIT_VF */
@@ -380,6 +406,9 @@ void bnx2x_vfpf_close_vf(struct bnx2x *bp)
380 for_each_queue(bp, i) 406 for_each_queue(bp, i)
381 bnx2x_vfpf_teardown_queue(bp, i); 407 bnx2x_vfpf_teardown_queue(bp, i);
382 408
409 /* remove mac */
410 bnx2x_vfpf_config_mac(bp, bp->dev->dev_addr, bp->fp->index, false);
411
383 /* clear mailbox and prep first tlv */ 412 /* clear mailbox and prep first tlv */
384 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_CLOSE, sizeof(*req)); 413 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_CLOSE, sizeof(*req));
385 414
@@ -401,6 +430,8 @@ void bnx2x_vfpf_close_vf(struct bnx2x *bp)
401 BNX2X_ERR("Sending CLOSE failed: pf response was %d\n", 430 BNX2X_ERR("Sending CLOSE failed: pf response was %d\n",
402 resp->hdr.status); 431 resp->hdr.status);
403 432
433 bnx2x_vfpf_finalize(bp, &req->first_tlv);
434
404free_irq: 435free_irq:
405 /* Disable HW interrupts, NAPI */ 436 /* Disable HW interrupts, NAPI */
406 bnx2x_netif_stop(bp, 0); 437 bnx2x_netif_stop(bp, 0);
@@ -435,7 +466,6 @@ int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx)
435 /* calculate queue flags */ 466 /* calculate queue flags */
436 flags |= VFPF_QUEUE_FLG_STATS; 467 flags |= VFPF_QUEUE_FLG_STATS;
437 flags |= VFPF_QUEUE_FLG_CACHE_ALIGN; 468 flags |= VFPF_QUEUE_FLG_CACHE_ALIGN;
438 flags |= IS_MF_SD(bp) ? VFPF_QUEUE_FLG_OV : 0;
439 flags |= VFPF_QUEUE_FLG_VLAN; 469 flags |= VFPF_QUEUE_FLG_VLAN;
440 DP(NETIF_MSG_IFUP, "vlan removal enabled\n"); 470 DP(NETIF_MSG_IFUP, "vlan removal enabled\n");
441 471
@@ -486,8 +516,11 @@ int bnx2x_vfpf_setup_q(struct bnx2x *bp, int fp_idx)
486 if (resp->hdr.status != PFVF_STATUS_SUCCESS) { 516 if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
487 BNX2X_ERR("Status of SETUP_Q for queue[%d] is %d\n", 517 BNX2X_ERR("Status of SETUP_Q for queue[%d] is %d\n",
488 fp_idx, resp->hdr.status); 518 fp_idx, resp->hdr.status);
489 return -EINVAL; 519 rc = -EINVAL;
490 } 520 }
521
522 bnx2x_vfpf_finalize(bp, &req->first_tlv);
523
491 return rc; 524 return rc;
492} 525}
493 526
@@ -515,41 +548,46 @@ int bnx2x_vfpf_teardown_queue(struct bnx2x *bp, int qidx)
515 if (rc) { 548 if (rc) {
516 BNX2X_ERR("Sending TEARDOWN for queue %d failed: %d\n", qidx, 549 BNX2X_ERR("Sending TEARDOWN for queue %d failed: %d\n", qidx,
517 rc); 550 rc);
518 return rc; 551 goto out;
519 } 552 }
520 553
521 /* PF failed the transaction */ 554 /* PF failed the transaction */
522 if (resp->hdr.status != PFVF_STATUS_SUCCESS) { 555 if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
523 BNX2X_ERR("TEARDOWN for queue %d failed: %d\n", qidx, 556 BNX2X_ERR("TEARDOWN for queue %d failed: %d\n", qidx,
524 resp->hdr.status); 557 resp->hdr.status);
525 return -EINVAL; 558 rc = -EINVAL;
526 } 559 }
527 560
528 return 0; 561out:
562 bnx2x_vfpf_finalize(bp, &req->first_tlv);
563 return rc;
529} 564}
530 565
531/* request pf to add a mac for the vf */ 566/* request pf to add a mac for the vf */
532int bnx2x_vfpf_set_mac(struct bnx2x *bp) 567int bnx2x_vfpf_config_mac(struct bnx2x *bp, u8 *addr, u8 vf_qid, bool set)
533{ 568{
534 struct vfpf_set_q_filters_tlv *req = &bp->vf2pf_mbox->req.set_q_filters; 569 struct vfpf_set_q_filters_tlv *req = &bp->vf2pf_mbox->req.set_q_filters;
535 struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp; 570 struct pfvf_general_resp_tlv *resp = &bp->vf2pf_mbox->resp.general_resp;
536 int rc; 571 struct pf_vf_bulletin_content bulletin = bp->pf2vf_bulletin->content;
572 int rc = 0;
537 573
538 /* clear mailbox and prep first tlv */ 574 /* clear mailbox and prep first tlv */
539 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_SET_Q_FILTERS, 575 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_SET_Q_FILTERS,
540 sizeof(*req)); 576 sizeof(*req));
541 577
542 req->flags = VFPF_SET_Q_FILTERS_MAC_VLAN_CHANGED; 578 req->flags = VFPF_SET_Q_FILTERS_MAC_VLAN_CHANGED;
543 req->vf_qid = 0; 579 req->vf_qid = vf_qid;
544 req->n_mac_vlan_filters = 1; 580 req->n_mac_vlan_filters = 1;
545 req->filters[0].flags = 581
546 VFPF_Q_FILTER_DEST_MAC_VALID | VFPF_Q_FILTER_SET_MAC; 582 req->filters[0].flags = VFPF_Q_FILTER_DEST_MAC_VALID;
583 if (set)
584 req->filters[0].flags |= VFPF_Q_FILTER_SET_MAC;
547 585
548 /* sample bulletin board for new mac */ 586 /* sample bulletin board for new mac */
549 bnx2x_sample_bulletin(bp); 587 bnx2x_sample_bulletin(bp);
550 588
551 /* copy mac from device to request */ 589 /* copy mac from device to request */
552 memcpy(req->filters[0].mac, bp->dev->dev_addr, ETH_ALEN); 590 memcpy(req->filters[0].mac, addr, ETH_ALEN);
553 591
554 /* add list termination tlv */ 592 /* add list termination tlv */
555 bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END, 593 bnx2x_add_tlv(bp, req, req->first_tlv.tl.length, CHANNEL_TLV_LIST_END,
@@ -562,7 +600,7 @@ int bnx2x_vfpf_set_mac(struct bnx2x *bp)
562 rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping); 600 rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
563 if (rc) { 601 if (rc) {
564 BNX2X_ERR("failed to send message to pf. rc was %d\n", rc); 602 BNX2X_ERR("failed to send message to pf. rc was %d\n", rc);
565 return rc; 603 goto out;
566 } 604 }
567 605
568 /* failure may mean PF was configured with a new mac for us */ 606 /* failure may mean PF was configured with a new mac for us */
@@ -570,6 +608,9 @@ int bnx2x_vfpf_set_mac(struct bnx2x *bp)
570 DP(BNX2X_MSG_IOV, 608 DP(BNX2X_MSG_IOV,
571 "vfpf SET MAC failed. Check bulletin board for new posts\n"); 609 "vfpf SET MAC failed. Check bulletin board for new posts\n");
572 610
611 /* copy mac from bulletin to device */
612 memcpy(bp->dev->dev_addr, bulletin.mac, ETH_ALEN);
613
573 /* check if bulletin board was updated */ 614 /* check if bulletin board was updated */
574 if (bnx2x_sample_bulletin(bp) == PFVF_BULLETIN_UPDATED) { 615 if (bnx2x_sample_bulletin(bp) == PFVF_BULLETIN_UPDATED) {
575 /* copy mac from device to request */ 616 /* copy mac from device to request */
@@ -587,8 +628,10 @@ int bnx2x_vfpf_set_mac(struct bnx2x *bp)
587 628
588 if (resp->hdr.status != PFVF_STATUS_SUCCESS) { 629 if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
589 BNX2X_ERR("vfpf SET MAC failed: %d\n", resp->hdr.status); 630 BNX2X_ERR("vfpf SET MAC failed: %d\n", resp->hdr.status);
590 return -EINVAL; 631 rc = -EINVAL;
591 } 632 }
633out:
634 bnx2x_vfpf_finalize(bp, &req->first_tlv);
592 635
593 return 0; 636 return rc;
594} 637}
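bnx2x_vfpf_set_mac() is generalized here into bnx2x_vfpf_config_mac(), which takes the address, the queue and a set/clear flag, so one helper both installs and removes a filter; the close path above uses the remove flavor. Usage, roughly:

        /* install the current MAC on the VF's queue 0 */
        rc = bnx2x_vfpf_config_mac(bp, bp->dev->dev_addr, 0, true);

        /* and remove it again, e.g. while closing the VF */
        rc = bnx2x_vfpf_config_mac(bp, bp->dev->dev_addr, bp->fp->index, false);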
@@ -643,14 +686,16 @@ int bnx2x_vfpf_set_mcast(struct net_device *dev)
643 rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping); 686 rc = bnx2x_send_msg2pf(bp, &resp->hdr.status, bp->vf2pf_mbox_mapping);
644 if (rc) { 687 if (rc) {
645 BNX2X_ERR("Sending a message failed: %d\n", rc); 688 BNX2X_ERR("Sending a message failed: %d\n", rc);
646 return rc; 689 goto out;
647 } 690 }
648 691
649 if (resp->hdr.status != PFVF_STATUS_SUCCESS) { 692 if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
650 BNX2X_ERR("Set Rx mode/multicast failed: %d\n", 693 BNX2X_ERR("Set Rx mode/multicast failed: %d\n",
651 resp->hdr.status); 694 resp->hdr.status);
652 return -EINVAL; 695 rc = -EINVAL;
653 } 696 }
697out:
698 bnx2x_vfpf_finalize(bp, &req->first_tlv);
654 699
655 return 0; 700 return rc;
656} 701}
@@ -689,7 +734,8 @@ int bnx2x_vfpf_storm_rx_mode(struct bnx2x *bp)
689 break; 734 break;
690 default: 735 default:
691 BNX2X_ERR("BAD rx mode (%d)\n", mode); 736 BNX2X_ERR("BAD rx mode (%d)\n", mode);
692 return -EINVAL; 737 rc = -EINVAL;
738 goto out;
693 } 739 }
694 740
695 req->flags |= VFPF_SET_Q_FILTERS_RX_MASK_CHANGED; 741 req->flags |= VFPF_SET_Q_FILTERS_RX_MASK_CHANGED;
@@ -708,8 +754,10 @@ int bnx2x_vfpf_storm_rx_mode(struct bnx2x *bp)
708 754
709 if (resp->hdr.status != PFVF_STATUS_SUCCESS) { 755 if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
710 BNX2X_ERR("Set Rx mode failed: %d\n", resp->hdr.status); 756 BNX2X_ERR("Set Rx mode failed: %d\n", resp->hdr.status);
711 return -EINVAL; 757 rc = -EINVAL;
712 } 758 }
759out:
760 bnx2x_vfpf_finalize(bp, &req->first_tlv);
713 761
714 return rc; 762 return rc;
715} 763}
@@ -1004,7 +1052,7 @@ static void bnx2x_vf_mbx_init_vf(struct bnx2x *bp, struct bnx2x_virtf *vf,
1004} 1052}
1005 1053
1006/* convert MBX queue-flags to standard SP queue-flags */ 1054/* convert MBX queue-flags to standard SP queue-flags */
1007static void bnx2x_vf_mbx_set_q_flags(u32 mbx_q_flags, 1055static void bnx2x_vf_mbx_set_q_flags(struct bnx2x *bp, u32 mbx_q_flags,
1008 unsigned long *sp_q_flags) 1056 unsigned long *sp_q_flags)
1009{ 1057{
1010 if (mbx_q_flags & VFPF_QUEUE_FLG_TPA) 1058 if (mbx_q_flags & VFPF_QUEUE_FLG_TPA)
@@ -1015,8 +1063,6 @@ static void bnx2x_vf_mbx_set_q_flags(u32 mbx_q_flags,
1015 __set_bit(BNX2X_Q_FLG_TPA_GRO, sp_q_flags); 1063 __set_bit(BNX2X_Q_FLG_TPA_GRO, sp_q_flags);
1016 if (mbx_q_flags & VFPF_QUEUE_FLG_STATS) 1064 if (mbx_q_flags & VFPF_QUEUE_FLG_STATS)
1017 __set_bit(BNX2X_Q_FLG_STATS, sp_q_flags); 1065 __set_bit(BNX2X_Q_FLG_STATS, sp_q_flags);
1018 if (mbx_q_flags & VFPF_QUEUE_FLG_OV)
1019 __set_bit(BNX2X_Q_FLG_OV, sp_q_flags);
1020 if (mbx_q_flags & VFPF_QUEUE_FLG_VLAN) 1066 if (mbx_q_flags & VFPF_QUEUE_FLG_VLAN)
1021 __set_bit(BNX2X_Q_FLG_VLAN, sp_q_flags); 1067 __set_bit(BNX2X_Q_FLG_VLAN, sp_q_flags);
1022 if (mbx_q_flags & VFPF_QUEUE_FLG_COS) 1068 if (mbx_q_flags & VFPF_QUEUE_FLG_COS)
@@ -1025,6 +1071,10 @@ static void bnx2x_vf_mbx_set_q_flags(u32 mbx_q_flags,
1025 __set_bit(BNX2X_Q_FLG_HC, sp_q_flags); 1071 __set_bit(BNX2X_Q_FLG_HC, sp_q_flags);
1026 if (mbx_q_flags & VFPF_QUEUE_FLG_DHC) 1072 if (mbx_q_flags & VFPF_QUEUE_FLG_DHC)
1027 __set_bit(BNX2X_Q_FLG_DHC, sp_q_flags); 1073 __set_bit(BNX2X_Q_FLG_DHC, sp_q_flags);
1074
1075 /* outer vlan removal is set according to the PF's multi-function mode */
1076 if (IS_MF_SD(bp))
1077 __set_bit(BNX2X_Q_FLG_OV, sp_q_flags);
1028} 1078}
1029 1079
1030static void bnx2x_vf_mbx_setup_q(struct bnx2x *bp, struct bnx2x_virtf *vf, 1080static void bnx2x_vf_mbx_setup_q(struct bnx2x *bp, struct bnx2x_virtf *vf,
@@ -1075,11 +1125,11 @@ static void bnx2x_vf_mbx_setup_q(struct bnx2x *bp, struct bnx2x_virtf *vf,
1075 init_p->tx.hc_rate = setup_q->txq.hc_rate; 1125 init_p->tx.hc_rate = setup_q->txq.hc_rate;
1076 init_p->tx.sb_cq_index = setup_q->txq.sb_index; 1126 init_p->tx.sb_cq_index = setup_q->txq.sb_index;
1077 1127
1078 bnx2x_vf_mbx_set_q_flags(setup_q->txq.flags, 1128 bnx2x_vf_mbx_set_q_flags(bp, setup_q->txq.flags,
1079 &init_p->tx.flags); 1129 &init_p->tx.flags);
1080 1130
1081 /* tx setup - flags */ 1131 /* tx setup - flags */
1082 bnx2x_vf_mbx_set_q_flags(setup_q->txq.flags, 1132 bnx2x_vf_mbx_set_q_flags(bp, setup_q->txq.flags,
1083 &setup_p->flags); 1133 &setup_p->flags);
1084 1134
1085 /* tx setup - general, nothing */ 1135 /* tx setup - general, nothing */
@@ -1107,11 +1157,11 @@ static void bnx2x_vf_mbx_setup_q(struct bnx2x *bp, struct bnx2x_virtf *vf,
1107 /* rx init */ 1157 /* rx init */
1108 init_p->rx.hc_rate = setup_q->rxq.hc_rate; 1158 init_p->rx.hc_rate = setup_q->rxq.hc_rate;
1109 init_p->rx.sb_cq_index = setup_q->rxq.sb_index; 1159 init_p->rx.sb_cq_index = setup_q->rxq.sb_index;
1110 bnx2x_vf_mbx_set_q_flags(setup_q->rxq.flags, 1160 bnx2x_vf_mbx_set_q_flags(bp, setup_q->rxq.flags,
1111 &init_p->rx.flags); 1161 &init_p->rx.flags);
1112 1162
1113 /* rx setup - flags */ 1163 /* rx setup - flags */
1114 bnx2x_vf_mbx_set_q_flags(setup_q->rxq.flags, 1164 bnx2x_vf_mbx_set_q_flags(bp, setup_q->rxq.flags,
1115 &setup_p->flags); 1165 &setup_p->flags);
1116 1166
1117 /* rx setup - general */ 1167 /* rx setup - general */
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
index bfc80baec00d..41708faab575 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.h
@@ -328,9 +328,15 @@ struct pf_vf_bulletin_content {
328#define MAC_ADDR_VALID 0 /* alert the vf that a new mac address 328#define MAC_ADDR_VALID 0 /* alert the vf that a new mac address
329 * is available for it 329 * is available for it
330 */ 330 */
331#define VLAN_VALID 1 /* when set, the vf should not access
332 * the vfpf channel
333 */
331 334
332 u8 mac[ETH_ALEN]; 335 u8 mac[ETH_ALEN];
333 u8 padding[2]; 336 u8 mac_padding[2];
337
338 u16 vlan;
339 u8 vlan_padding[6];
334}; 340};
335 341
336union pf_vf_bulletin { 342union pf_vf_bulletin {
@@ -353,6 +359,7 @@ enum channel_tlvs {
353 CHANNEL_TLV_LIST_END, 359 CHANNEL_TLV_LIST_END,
354 CHANNEL_TLV_FLR, 360 CHANNEL_TLV_FLR,
355 CHANNEL_TLV_PF_SET_MAC, 361 CHANNEL_TLV_PF_SET_MAC,
362 CHANNEL_TLV_PF_SET_VLAN,
356 CHANNEL_TLV_MAX 363 CHANNEL_TLV_MAX
357}; 364};
358 365
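The new VLAN_VALID bit and CHANNEL_TLV_PF_SET_VLAN lay the groundwork for PF-enforced vlans: once the PF posts a vlan in the bulletin, the VF is expected to stay off the vfpf channel for vlan configuration. A sketch of the VF-side check, assuming the bulletin's valid_bitmap field carries these bit indices, as the existing MAC_ADDR_VALID define suggests:

        /* sketch: VF samples the bulletin before touching vlan filters */
        struct pf_vf_bulletin_content bulletin = bp->pf2vf_bulletin->content;

        if (bulletin.valid_bitmap & (1 << VLAN_VALID)) {
                /* PF owns our vlan; don't send SET_Q_FILTERS for it */
                return 0;
        }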
diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c
index 149a3a038491..40649a8bf390 100644
--- a/drivers/net/ethernet/broadcom/cnic.c
+++ b/drivers/net/ethernet/broadcom/cnic.c
@@ -5544,8 +5544,10 @@ static struct cnic_dev *init_bnx2x_cnic(struct net_device *dev)
5544 5544
5545 if (!(ethdev->drv_state & CNIC_DRV_STATE_NO_ISCSI)) 5545 if (!(ethdev->drv_state & CNIC_DRV_STATE_NO_ISCSI))
5546 cdev->max_iscsi_conn = ethdev->max_iscsi_conn; 5546 cdev->max_iscsi_conn = ethdev->max_iscsi_conn;
5547 if (CNIC_SUPPORTS_FCOE(cp)) 5547 if (CNIC_SUPPORTS_FCOE(cp)) {
5548 cdev->max_fcoe_conn = ethdev->max_fcoe_conn; 5548 cdev->max_fcoe_conn = ethdev->max_fcoe_conn;
5549 cdev->max_fcoe_exchanges = ethdev->max_fcoe_exchanges;
5550 }
5549 5551
5550 if (cdev->max_fcoe_conn > BNX2X_FCOE_NUM_CONNECTIONS) 5552 if (cdev->max_fcoe_conn > BNX2X_FCOE_NUM_CONNECTIONS)
5551 cdev->max_fcoe_conn = BNX2X_FCOE_NUM_CONNECTIONS; 5553 cdev->max_fcoe_conn = BNX2X_FCOE_NUM_CONNECTIONS;
diff --git a/drivers/net/ethernet/broadcom/cnic_if.h b/drivers/net/ethernet/broadcom/cnic_if.h
index 0c9367a0f57d..ec9bb9ad4bb3 100644
--- a/drivers/net/ethernet/broadcom/cnic_if.h
+++ b/drivers/net/ethernet/broadcom/cnic_if.h
@@ -195,6 +195,7 @@ struct cnic_eth_dev {
195 u32 max_fcoe_conn; 195 u32 max_fcoe_conn;
196 u32 max_rdma_conn; 196 u32 max_rdma_conn;
197 u32 fcoe_init_cid; 197 u32 fcoe_init_cid;
198 u32 max_fcoe_exchanges;
198 u32 fcoe_wwn_port_name_hi; 199 u32 fcoe_wwn_port_name_hi;
199 u32 fcoe_wwn_port_name_lo; 200 u32 fcoe_wwn_port_name_lo;
200 u32 fcoe_wwn_node_name_hi; 201 u32 fcoe_wwn_node_name_hi;
@@ -313,6 +314,8 @@ struct cnic_dev {
313 int max_fcoe_conn; 314 int max_fcoe_conn;
314 int max_rdma_conn; 315 int max_rdma_conn;
315 316
317 int max_fcoe_exchanges;
318
316 union drv_info_to_mcp *stats_addr; 319 union drv_info_to_mcp *stats_addr;
317 struct fcoe_capabilities *fcoe_cap; 320 struct fcoe_capabilities *fcoe_cap;
318 321
diff --git a/drivers/net/ethernet/broadcom/sb1250-mac.c b/drivers/net/ethernet/broadcom/sb1250-mac.c
index e9b35da375cb..e80bfb60c3ef 100644
--- a/drivers/net/ethernet/broadcom/sb1250-mac.c
+++ b/drivers/net/ethernet/broadcom/sb1250-mac.c
@@ -831,11 +831,8 @@ static int sbdma_add_rcvbuffer(struct sbmac_softc *sc, struct sbmacdma *d,
831 sb_new = netdev_alloc_skb(dev, ENET_PACKET_SIZE + 831 sb_new = netdev_alloc_skb(dev, ENET_PACKET_SIZE +
832 SMP_CACHE_BYTES * 2 + 832 SMP_CACHE_BYTES * 2 +
833 NET_IP_ALIGN); 833 NET_IP_ALIGN);
834 if (sb_new == NULL) { 834 if (sb_new == NULL)
835 pr_info("%s: sk_buff allocation failed\n",
836 d->sbdma_eth->sbm_dev->name);
837 return -ENOBUFS; 835 return -ENOBUFS;
838 }
839 836
840 sbdma_align_skb(sb_new, SMP_CACHE_BYTES, NET_IP_ALIGN); 837 sbdma_align_skb(sb_new, SMP_CACHE_BYTES, NET_IP_ALIGN);
841 } 838 }
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 17a972734ba7..728d42ab2a76 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -94,10 +94,10 @@ static inline void _tg3_flag_clear(enum TG3_FLAGS flag, unsigned long *bits)
94 94
95#define DRV_MODULE_NAME "tg3" 95#define DRV_MODULE_NAME "tg3"
96#define TG3_MAJ_NUM 3 96#define TG3_MAJ_NUM 3
97#define TG3_MIN_NUM 130 97#define TG3_MIN_NUM 131
98#define DRV_MODULE_VERSION \ 98#define DRV_MODULE_VERSION \
99 __stringify(TG3_MAJ_NUM) "." __stringify(TG3_MIN_NUM) 99 __stringify(TG3_MAJ_NUM) "." __stringify(TG3_MIN_NUM)
100#define DRV_MODULE_RELDATE "February 14, 2013" 100#define DRV_MODULE_RELDATE "April 09, 2013"
101 101
102#define RESET_KIND_SHUTDOWN 0 102#define RESET_KIND_SHUTDOWN 0
103#define RESET_KIND_INIT 1 103#define RESET_KIND_INIT 1
@@ -212,6 +212,7 @@ static inline void _tg3_flag_clear(enum TG3_FLAGS flag, unsigned long *bits)
212#define TG3_FW_UPDATE_FREQ_SEC (TG3_FW_UPDATE_TIMEOUT_SEC / 2) 212#define TG3_FW_UPDATE_FREQ_SEC (TG3_FW_UPDATE_TIMEOUT_SEC / 2)
213 213
214#define FIRMWARE_TG3 "tigon/tg3.bin" 214#define FIRMWARE_TG3 "tigon/tg3.bin"
215#define FIRMWARE_TG357766 "tigon/tg357766.bin"
215#define FIRMWARE_TG3TSO "tigon/tg3_tso.bin" 216#define FIRMWARE_TG3TSO "tigon/tg3_tso.bin"
216#define FIRMWARE_TG3TSO5 "tigon/tg3_tso5.bin" 217#define FIRMWARE_TG3TSO5 "tigon/tg3_tso5.bin"
217 218
@@ -1873,6 +1874,20 @@ static void tg3_link_report(struct tg3 *tp)
1873 tp->link_up = netif_carrier_ok(tp->dev); 1874 tp->link_up = netif_carrier_ok(tp->dev);
1874} 1875}
1875 1876
1877static u32 tg3_decode_flowctrl_1000T(u32 adv)
1878{
1879 u32 flowctrl = 0;
1880
1881 if (adv & ADVERTISE_PAUSE_CAP) {
1882 flowctrl |= FLOW_CTRL_RX;
1883 if (!(adv & ADVERTISE_PAUSE_ASYM))
1884 flowctrl |= FLOW_CTRL_TX;
1885 } else if (adv & ADVERTISE_PAUSE_ASYM)
1886 flowctrl |= FLOW_CTRL_TX;
1887
1888 return flowctrl;
1889}
1890
1876static u16 tg3_advert_flowctrl_1000X(u8 flow_ctrl) 1891static u16 tg3_advert_flowctrl_1000X(u8 flow_ctrl)
1877{ 1892{
1878 u16 miireg; 1893 u16 miireg;
@@ -1889,6 +1904,20 @@ static u16 tg3_advert_flowctrl_1000X(u8 flow_ctrl)
1889 return miireg; 1904 return miireg;
1890} 1905}
1891 1906
1907static u32 tg3_decode_flowctrl_1000X(u32 adv)
1908{
1909 u32 flowctrl = 0;
1910
1911 if (adv & ADVERTISE_1000XPAUSE) {
1912 flowctrl |= FLOW_CTRL_RX;
1913 if (!(adv & ADVERTISE_1000XPSE_ASYM))
1914 flowctrl |= FLOW_CTRL_TX;
1915 } else if (adv & ADVERTISE_1000XPSE_ASYM)
1916 flowctrl |= FLOW_CTRL_TX;
1917
1918 return flowctrl;
1919}
1920
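Both decoders invert the advertisement encoding produced by tg3_advert_flowctrl_1000X() above: the symmetric pause bit grants receive (and, absent the asymmetric bit, transmit as well), while the asymmetric bit alone grants transmit only. For example:

        /* symmetric pause only: both directions */
        tg3_decode_flowctrl_1000T(ADVERTISE_PAUSE_CAP);
        /* -> FLOW_CTRL_RX | FLOW_CTRL_TX */

        /* asymmetric pause only: we may send pause frames, not honor them */
        tg3_decode_flowctrl_1000T(ADVERTISE_PAUSE_ASYM);
        /* -> FLOW_CTRL_TX */

        /* both bits: honor pause frames, don't insist on sending them */
        tg3_decode_flowctrl_1000T(ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
        /* -> FLOW_CTRL_RX */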
1892static u8 tg3_resolve_flowctrl_1000X(u16 lcladv, u16 rmtadv) 1921static u8 tg3_resolve_flowctrl_1000X(u16 lcladv, u16 rmtadv)
1893{ 1922{
1894 u8 cap = 0; 1923 u8 cap = 0;
@@ -2199,7 +2228,7 @@ static void tg3_phy_toggle_apd(struct tg3 *tp, bool enable)
2199 tg3_writephy(tp, MII_TG3_MISC_SHDW, reg); 2228 tg3_writephy(tp, MII_TG3_MISC_SHDW, reg);
2200} 2229}
2201 2230
2202static void tg3_phy_toggle_automdix(struct tg3 *tp, int enable) 2231static void tg3_phy_toggle_automdix(struct tg3 *tp, bool enable)
2203{ 2232{
2204 u32 phy; 2233 u32 phy;
2205 2234
@@ -2291,7 +2320,7 @@ static void tg3_phy_apply_otp(struct tg3 *tp)
2291 tg3_phy_toggle_auxctl_smdsp(tp, false); 2320 tg3_phy_toggle_auxctl_smdsp(tp, false);
2292} 2321}
2293 2322
2294static void tg3_phy_eee_adjust(struct tg3 *tp, u32 current_link_up) 2323static void tg3_phy_eee_adjust(struct tg3 *tp, bool current_link_up)
2295{ 2324{
2296 u32 val; 2325 u32 val;
2297 2326
@@ -2301,7 +2330,7 @@ static void tg3_phy_eee_adjust(struct tg3 *tp, u32 current_link_up)
2301 tp->setlpicnt = 0; 2330 tp->setlpicnt = 0;
2302 2331
2303 if (tp->link_config.autoneg == AUTONEG_ENABLE && 2332 if (tp->link_config.autoneg == AUTONEG_ENABLE &&
2304 current_link_up == 1 && 2333 current_link_up &&
2305 tp->link_config.active_duplex == DUPLEX_FULL && 2334 tp->link_config.active_duplex == DUPLEX_FULL &&
2306 (tp->link_config.active_speed == SPEED_100 || 2335 (tp->link_config.active_speed == SPEED_100 ||
2307 tp->link_config.active_speed == SPEED_1000)) { 2336 tp->link_config.active_speed == SPEED_1000)) {
@@ -2323,7 +2352,7 @@ static void tg3_phy_eee_adjust(struct tg3 *tp, u32 current_link_up)
2323 } 2352 }
2324 2353
2325 if (!tp->setlpicnt) { 2354 if (!tp->setlpicnt) {
2326 if (current_link_up == 1 && 2355 if (current_link_up &&
2327 !tg3_phy_toggle_auxctl_smdsp(tp, true)) { 2356 !tg3_phy_toggle_auxctl_smdsp(tp, true)) {
2328 tg3_phydsp_write(tp, MII_TG3_DSP_TAP26, 0x0000); 2357 tg3_phydsp_write(tp, MII_TG3_DSP_TAP26, 0x0000);
2329 tg3_phy_toggle_auxctl_smdsp(tp, false); 2358 tg3_phy_toggle_auxctl_smdsp(tp, false);
@@ -2530,6 +2559,13 @@ static void tg3_carrier_off(struct tg3 *tp)
2530 tp->link_up = false; 2559 tp->link_up = false;
2531} 2560}
2532 2561
2562static void tg3_warn_mgmt_link_flap(struct tg3 *tp)
2563{
2564 if (tg3_flag(tp, ENABLE_ASF))
2565 netdev_warn(tp->dev,
2566 "Management side-band traffic will be interrupted during phy settings change\n");
2567}
2568
2533/* This will reset the tigon3 PHY if there is no valid 2569/* This will reset the tigon3 PHY if there is no valid
2534 * link unless the FORCE argument is non-zero. 2570 * link unless the FORCE argument is non-zero.
2535 */ 2571 */
@@ -2669,7 +2705,7 @@ out:
2669 if (tg3_chip_rev_id(tp) == CHIPREV_ID_5762_A0) 2705 if (tg3_chip_rev_id(tp) == CHIPREV_ID_5762_A0)
2670 tg3_phydsp_write(tp, 0xffb, 0x4000); 2706 tg3_phydsp_write(tp, 0xffb, 0x4000);
2671 2707
2672 tg3_phy_toggle_automdix(tp, 1); 2708 tg3_phy_toggle_automdix(tp, true);
2673 tg3_phy_set_wirespeed(tp); 2709 tg3_phy_set_wirespeed(tp);
2674 return 0; 2710 return 0;
2675} 2711}
@@ -2925,6 +2961,9 @@ static void tg3_power_down_phy(struct tg3 *tp, bool do_low_power)
2925{ 2961{
2926 u32 val; 2962 u32 val;
2927 2963
2964 if (tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN)
2965 return;
2966
2928 if (tp->phy_flags & TG3_PHYFLG_PHY_SERDES) { 2967 if (tp->phy_flags & TG3_PHYFLG_PHY_SERDES) {
2929 if (tg3_asic_rev(tp) == ASIC_REV_5704) { 2968 if (tg3_asic_rev(tp) == ASIC_REV_5704) {
2930 u32 sg_dig_ctrl = tr32(SG_DIG_CTRL); 2969 u32 sg_dig_ctrl = tr32(SG_DIG_CTRL);
@@ -3448,11 +3487,58 @@ static int tg3_nvram_write_block(struct tg3 *tp, u32 offset, u32 len, u8 *buf)
3448#define TX_CPU_SCRATCH_SIZE 0x04000 3487#define TX_CPU_SCRATCH_SIZE 0x04000
3449 3488
3450/* tp->lock is held. */ 3489/* tp->lock is held. */
3451static int tg3_halt_cpu(struct tg3 *tp, u32 offset) 3490static int tg3_pause_cpu(struct tg3 *tp, u32 cpu_base)
3452{ 3491{
3453 int i; 3492 int i;
3493 const int iters = 10000;
3494
3495 for (i = 0; i < iters; i++) {
3496 tw32(cpu_base + CPU_STATE, 0xffffffff);
3497 tw32(cpu_base + CPU_MODE, CPU_MODE_HALT);
3498 if (tr32(cpu_base + CPU_MODE) & CPU_MODE_HALT)
3499 break;
3500 }
3501
3502 return (i == iters) ? -EBUSY : 0;
3503}
3504
3505/* tp->lock is held. */
3506static int tg3_rxcpu_pause(struct tg3 *tp)
3507{
3508 int rc = tg3_pause_cpu(tp, RX_CPU_BASE);
3509
3510 tw32(RX_CPU_BASE + CPU_STATE, 0xffffffff);
3511 tw32_f(RX_CPU_BASE + CPU_MODE, CPU_MODE_HALT);
3512 udelay(10);
3513
3514 return rc;
3515}
3516
3517/* tp->lock is held. */
3518static int tg3_txcpu_pause(struct tg3 *tp)
3519{
3520 return tg3_pause_cpu(tp, TX_CPU_BASE);
3521}
3522
3523/* tp->lock is held. */
3524static void tg3_resume_cpu(struct tg3 *tp, u32 cpu_base)
3525{
3526 tw32(cpu_base + CPU_STATE, 0xffffffff);
3527 tw32_f(cpu_base + CPU_MODE, 0x00000000);
3528}
3454 3529
3455 BUG_ON(offset == TX_CPU_BASE && tg3_flag(tp, 5705_PLUS)); 3530/* tp->lock is held. */
3531static void tg3_rxcpu_resume(struct tg3 *tp)
3532{
3533 tg3_resume_cpu(tp, RX_CPU_BASE);
3534}
3535
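tg3_pause_cpu() factors the halt-and-poll idiom out of tg3_halt_cpu(): assert CPU_MODE_HALT, read the mode register back, and retry for a bounded number of iterations; tg3_rxcpu_pause() adds the final flushed halt write the old RX-only path performed. Callers now bracket their work like this (sketch):

        /* stop the RX CPU, patch it, and let it run again */
        if (tg3_rxcpu_pause(tp))
                return;         /* CPU never acknowledged the halt: -EBUSY */

        /* ... write scratch memory / download a patch here ... */

        tg3_rxcpu_resume(tp);   /* clear CPU_MODE, CPU resumes */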
3536/* tp->lock is held. */
3537static int tg3_halt_cpu(struct tg3 *tp, u32 cpu_base)
3538{
3539 int rc;
3540
3541 BUG_ON(cpu_base == TX_CPU_BASE && tg3_flag(tp, 5705_PLUS));
3456 3542
3457 if (tg3_asic_rev(tp) == ASIC_REV_5906) { 3543 if (tg3_asic_rev(tp) == ASIC_REV_5906) {
3458 u32 val = tr32(GRC_VCPU_EXT_CTRL); 3544 u32 val = tr32(GRC_VCPU_EXT_CTRL);
@@ -3460,17 +3546,8 @@ static int tg3_halt_cpu(struct tg3 *tp, u32 offset)
3460 tw32(GRC_VCPU_EXT_CTRL, val | GRC_VCPU_EXT_CTRL_HALT_CPU); 3546 tw32(GRC_VCPU_EXT_CTRL, val | GRC_VCPU_EXT_CTRL_HALT_CPU);
3461 return 0; 3547 return 0;
3462 } 3548 }
3463 if (offset == RX_CPU_BASE) { 3549 if (cpu_base == RX_CPU_BASE) {
3464 for (i = 0; i < 10000; i++) { 3550 rc = tg3_rxcpu_pause(tp);
3465 tw32(offset + CPU_STATE, 0xffffffff);
3466 tw32(offset + CPU_MODE, CPU_MODE_HALT);
3467 if (tr32(offset + CPU_MODE) & CPU_MODE_HALT)
3468 break;
3469 }
3470
3471 tw32(offset + CPU_STATE, 0xffffffff);
3472 tw32_f(offset + CPU_MODE, CPU_MODE_HALT);
3473 udelay(10);
3474 } else { 3551 } else {
3475 /* 3552 /*
3476 * There is only an Rx CPU for the 5750 derivative in the 3553 * There is only an Rx CPU for the 5750 derivative in the
@@ -3479,17 +3556,12 @@ static int tg3_halt_cpu(struct tg3 *tp, u32 offset)
3479 if (tg3_flag(tp, IS_SSB_CORE)) 3556 if (tg3_flag(tp, IS_SSB_CORE))
3480 return 0; 3557 return 0;
3481 3558
3482 for (i = 0; i < 10000; i++) { 3559 rc = tg3_txcpu_pause(tp);
3483 tw32(offset + CPU_STATE, 0xffffffff);
3484 tw32(offset + CPU_MODE, CPU_MODE_HALT);
3485 if (tr32(offset + CPU_MODE) & CPU_MODE_HALT)
3486 break;
3487 }
3488 } 3560 }
3489 3561
3490 if (i >= 10000) { 3562 if (rc) {
3491 netdev_err(tp->dev, "%s timed out, %s CPU\n", 3563 netdev_err(tp->dev, "%s timed out, %s CPU\n",
3492 __func__, offset == RX_CPU_BASE ? "RX" : "TX"); 3564 __func__, cpu_base == RX_CPU_BASE ? "RX" : "TX");
3493 return -ENODEV; 3565 return -ENODEV;
3494 } 3566 }
3495 3567
@@ -3499,19 +3571,41 @@ static int tg3_halt_cpu(struct tg3 *tp, u32 offset)
3499 return 0; 3571 return 0;
3500} 3572}
3501 3573
3502struct fw_info { 3574static int tg3_fw_data_len(struct tg3 *tp,
3503 unsigned int fw_base; 3575 const struct tg3_firmware_hdr *fw_hdr)
3504 unsigned int fw_len; 3576{
3505 const __be32 *fw_data; 3577 int fw_len;
3506}; 3578
3579 /* Non-fragmented firmware has one firmware header followed by a
3580 * contiguous chunk of data to be written. The length field in that
3581 * header is not the length of the data to be written but the complete
3582 * length of the bss. The data length is determined from
3583 * tp->fw->size minus the headers.
3584 *
3585 * Fragmented firmware has a main header followed by multiple
3586 * fragments. Each fragment is identical to non-fragmented firmware,
3587 * with a firmware header followed by a contiguous chunk of data. In
3588 * the main header, the length field is unused and set to 0xffffffff.
3589 * In each fragment header the length is the entire size of that
3590 * fragment, i.e. fragment data + header length. The data length is
3591 * therefore the length field in the header minus TG3_FW_HDR_LEN.
3592 */
3593 if (tp->fw_len == 0xffffffff)
3594 fw_len = be32_to_cpu(fw_hdr->len);
3595 else
3596 fw_len = tp->fw->size;
3597
3598 return (fw_len - TG3_FW_HDR_LEN) / sizeof(u32);
3599}
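A worked example of the two length computations, with illustrative sizes (TG3_FW_HDR_LEN is the 12-byte version/base/len header, matching the old `tp->fw->size - 12` arithmetic):

        /* non-fragmented: tp->fw_len != 0xffffffff, so use the blob size.
         * blob = 12-byte header + 4096 bytes of data, tp->fw->size = 4108:
         *   (4108 - 12) / sizeof(u32) = 1024 words to write
         *
         * fragmented: the main header's len is 0xffffffff, so use the
         * fragment header, whose len covers header + data. For a 512-byte
         * fragment body, fw_hdr->len = 524:
         *   (524 - 12) / sizeof(u32) = 128 words for this fragment
         */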
3507 3600
3508/* tp->lock is held. */ 3601/* tp->lock is held. */
3509static int tg3_load_firmware_cpu(struct tg3 *tp, u32 cpu_base, 3602static int tg3_load_firmware_cpu(struct tg3 *tp, u32 cpu_base,
3510 u32 cpu_scratch_base, int cpu_scratch_size, 3603 u32 cpu_scratch_base, int cpu_scratch_size,
3511 struct fw_info *info) 3604 const struct tg3_firmware_hdr *fw_hdr)
3512{ 3605{
3513 int err, lock_err, i; 3606 int err, i;
3514 void (*write_op)(struct tg3 *, u32, u32); 3607 void (*write_op)(struct tg3 *, u32, u32);
3608 int total_len = tp->fw->size;
3515 3609
3516 if (cpu_base == TX_CPU_BASE && tg3_flag(tp, 5705_PLUS)) { 3610 if (cpu_base == TX_CPU_BASE && tg3_flag(tp, 5705_PLUS)) {
3517 netdev_err(tp->dev, 3611 netdev_err(tp->dev,
@@ -3520,30 +3614,49 @@ static int tg3_load_firmware_cpu(struct tg3 *tp, u32 cpu_base,
3520 return -EINVAL; 3614 return -EINVAL;
3521 } 3615 }
3522 3616
3523 if (tg3_flag(tp, 5705_PLUS)) 3617 if (tg3_flag(tp, 5705_PLUS) && tg3_asic_rev(tp) != ASIC_REV_57766)
3524 write_op = tg3_write_mem; 3618 write_op = tg3_write_mem;
3525 else 3619 else
3526 write_op = tg3_write_indirect_reg32; 3620 write_op = tg3_write_indirect_reg32;
3527 3621
3528 /* It is possible that bootcode is still loading at this point. 3622 if (tg3_asic_rev(tp) != ASIC_REV_57766) {
3529 * Get the nvram lock first before halting the cpu. 3623 /* It is possible that bootcode is still loading at this point.
3530 */ 3624 * Get the nvram lock first before halting the cpu.
3531 lock_err = tg3_nvram_lock(tp); 3625 */
3532 err = tg3_halt_cpu(tp, cpu_base); 3626 int lock_err = tg3_nvram_lock(tp);
3533 if (!lock_err) 3627 err = tg3_halt_cpu(tp, cpu_base);
3534 tg3_nvram_unlock(tp); 3628 if (!lock_err)
3535 if (err) 3629 tg3_nvram_unlock(tp);
3536 goto out; 3630 if (err)
3631 goto out;
3537 3632
3538 for (i = 0; i < cpu_scratch_size; i += sizeof(u32)) 3633 for (i = 0; i < cpu_scratch_size; i += sizeof(u32))
3539 write_op(tp, cpu_scratch_base + i, 0); 3634 write_op(tp, cpu_scratch_base + i, 0);
3540 tw32(cpu_base + CPU_STATE, 0xffffffff); 3635 tw32(cpu_base + CPU_STATE, 0xffffffff);
3541 tw32(cpu_base + CPU_MODE, tr32(cpu_base+CPU_MODE)|CPU_MODE_HALT); 3636 tw32(cpu_base + CPU_MODE,
3542 for (i = 0; i < (info->fw_len / sizeof(u32)); i++) 3637 tr32(cpu_base + CPU_MODE) | CPU_MODE_HALT);
3543 write_op(tp, (cpu_scratch_base + 3638 } else {
3544 (info->fw_base & 0xffff) + 3639 /* Subtract additional main header for fragmented firmware and
3545 (i * sizeof(u32))), 3640 * advance to the first fragment
3546 be32_to_cpu(info->fw_data[i])); 3641 */
3642 total_len -= TG3_FW_HDR_LEN;
3643 fw_hdr++;
3644 }
3645
3646 do {
3647 u32 *fw_data = (u32 *)(fw_hdr + 1);
3648 for (i = 0; i < tg3_fw_data_len(tp, fw_hdr); i++)
3649 write_op(tp, cpu_scratch_base +
3650 (be32_to_cpu(fw_hdr->base_addr) & 0xffff) +
3651 (i * sizeof(u32)),
3652 be32_to_cpu(fw_data[i]));
3653
3654 total_len -= be32_to_cpu(fw_hdr->len);
3655
3656 /* Advance to next fragment */
3657 fw_hdr = (struct tg3_firmware_hdr *)
3658 ((void *)fw_hdr + be32_to_cpu(fw_hdr->len));
3659 } while (total_len > 0);
3547 3660
3548 err = 0; 3661 err = 0;
3549 3662
@@ -3552,13 +3665,33 @@ out:
3552} 3665}
3553 3666
3554/* tp->lock is held. */ 3667/* tp->lock is held. */
3668static int tg3_pause_cpu_and_set_pc(struct tg3 *tp, u32 cpu_base, u32 pc)
3669{
3670 int i;
3671 const int iters = 5;
3672
3673 tw32(cpu_base + CPU_STATE, 0xffffffff);
3674 tw32_f(cpu_base + CPU_PC, pc);
3675
3676 for (i = 0; i < iters; i++) {
3677 if (tr32(cpu_base + CPU_PC) == pc)
3678 break;
3679 tw32(cpu_base + CPU_STATE, 0xffffffff);
3680 tw32(cpu_base + CPU_MODE, CPU_MODE_HALT);
3681 tw32_f(cpu_base + CPU_PC, pc);
3682 udelay(1000);
3683 }
3684
3685 return (i == iters) ? -EBUSY : 0;
3686}
3687
3688/* tp->lock is held. */
3555static int tg3_load_5701_a0_firmware_fix(struct tg3 *tp) 3689static int tg3_load_5701_a0_firmware_fix(struct tg3 *tp)
3556{ 3690{
3557 struct fw_info info; 3691 const struct tg3_firmware_hdr *fw_hdr;
3558 const __be32 *fw_data; 3692 int err;
3559 int err, i;
3560 3693
3561 fw_data = (void *)tp->fw->data; 3694 fw_hdr = (struct tg3_firmware_hdr *)tp->fw->data;
3562 3695
3563 /* Firmware blob starts with version numbers, followed by 3696 /* Firmware blob starts with version numbers, followed by
3564 start address and length. We are setting complete length. 3697 start address and length. We are setting complete length.
@@ -3566,60 +3699,117 @@ static int tg3_load_5701_a0_firmware_fix(struct tg3 *tp)
3566 Remainder is the blob to be loaded contiguously 3699 Remainder is the blob to be loaded contiguously
3567 from start address. */ 3700 from start address. */
3568 3701
3569 info.fw_base = be32_to_cpu(fw_data[1]);
3570 info.fw_len = tp->fw->size - 12;
3571 info.fw_data = &fw_data[3];
3572
3573 err = tg3_load_firmware_cpu(tp, RX_CPU_BASE, 3702 err = tg3_load_firmware_cpu(tp, RX_CPU_BASE,
3574 RX_CPU_SCRATCH_BASE, RX_CPU_SCRATCH_SIZE, 3703 RX_CPU_SCRATCH_BASE, RX_CPU_SCRATCH_SIZE,
3575 &info); 3704 fw_hdr);
3576 if (err) 3705 if (err)
3577 return err; 3706 return err;
3578 3707
3579 err = tg3_load_firmware_cpu(tp, TX_CPU_BASE, 3708 err = tg3_load_firmware_cpu(tp, TX_CPU_BASE,
3580 TX_CPU_SCRATCH_BASE, TX_CPU_SCRATCH_SIZE, 3709 TX_CPU_SCRATCH_BASE, TX_CPU_SCRATCH_SIZE,
3581 &info); 3710 fw_hdr);
3582 if (err) 3711 if (err)
3583 return err; 3712 return err;
3584 3713
3585 /* Now startup only the RX cpu. */ 3714 /* Now startup only the RX cpu. */
3586 tw32(RX_CPU_BASE + CPU_STATE, 0xffffffff); 3715 err = tg3_pause_cpu_and_set_pc(tp, RX_CPU_BASE,
3587 tw32_f(RX_CPU_BASE + CPU_PC, info.fw_base); 3716 be32_to_cpu(fw_hdr->base_addr));
3588 3717 if (err) {
3589 for (i = 0; i < 5; i++) {
3590 if (tr32(RX_CPU_BASE + CPU_PC) == info.fw_base)
3591 break;
3592 tw32(RX_CPU_BASE + CPU_STATE, 0xffffffff);
3593 tw32(RX_CPU_BASE + CPU_MODE, CPU_MODE_HALT);
3594 tw32_f(RX_CPU_BASE + CPU_PC, info.fw_base);
3595 udelay(1000);
3596 }
3597 if (i >= 5) {
3598 netdev_err(tp->dev, "%s fails to set RX CPU PC, is %08x " 3718 netdev_err(tp->dev, "%s fails to set RX CPU PC, is %08x "
3599 "should be %08x\n", __func__, 3719 "should be %08x\n", __func__,
3600 tr32(RX_CPU_BASE + CPU_PC), info.fw_base); 3720 tr32(RX_CPU_BASE + CPU_PC),
3721 be32_to_cpu(fw_hdr->base_addr));
3601 return -ENODEV; 3722 return -ENODEV;
3602 } 3723 }
3603 tw32(RX_CPU_BASE + CPU_STATE, 0xffffffff); 3724
3604 tw32_f(RX_CPU_BASE + CPU_MODE, 0x00000000); 3725 tg3_rxcpu_resume(tp);
3605 3726
3606 return 0; 3727 return 0;
3607} 3728}
3608 3729
3730static int tg3_validate_rxcpu_state(struct tg3 *tp)
3731{
3732 const int iters = 1000;
3733 int i;
3734 u32 val;
3735
3736 /* Wait for boot code to complete initialization and enter service
3737 * loop. It is then safe to download service patches.
3738 */
3739 for (i = 0; i < iters; i++) {
3740 if (tr32(RX_CPU_HWBKPT) == TG3_SBROM_IN_SERVICE_LOOP)
3741 break;
3742
3743 udelay(10);
3744 }
3745
3746 if (i == iters) {
3747 netdev_err(tp->dev, "Boot code not ready for service patches\n");
3748 return -EBUSY;
3749 }
3750
3751 val = tg3_read_indirect_reg32(tp, TG3_57766_FW_HANDSHAKE);
3752 if (val & 0xff) {
3753 netdev_warn(tp->dev,
3754 "Other patches exist. Not downloading EEE patch\n");
3755 return -EEXIST;
3756 }
3757
3758 return 0;
3759}
3760
3761/* tp->lock is held. */
3762static void tg3_load_57766_firmware(struct tg3 *tp)
3763{
3764 struct tg3_firmware_hdr *fw_hdr;
3765
3766 if (!tg3_flag(tp, NO_NVRAM))
3767 return;
3768
3769 if (tg3_validate_rxcpu_state(tp))
3770 return;
3771
3772 if (!tp->fw)
3773 return;
3774
3775 /* This firmware blob has a different format from older firmware
3776 * releases, as described below. The main difference is that the
3777 * data is fragmented and must be written to non-contiguous locations.
3778 *
3779 * In the beginning we have a firmware header identical to that of
3780 * other firmware, consisting of version, base addr and length. The
3781 * length here is unused and set to 0xffffffff.
3782 *
3783 * This is followed by a series of firmware fragments, each of which
3784 * is individually identical to previous firmware, i.e. a firmware
3785 * header followed by the data for that fragment. The version
3786 * field of the individual fragment header is unused.
3787 */
3788
3789 fw_hdr = (struct tg3_firmware_hdr *)tp->fw->data;
3790 if (be32_to_cpu(fw_hdr->base_addr) != TG3_57766_FW_BASE_ADDR)
3791 return;
3792
3793 if (tg3_rxcpu_pause(tp))
3794 return;
3795
3796 /* tg3_load_firmware_cpu() will always succeed for the 57766 */
3797 tg3_load_firmware_cpu(tp, 0, TG3_57766_FW_BASE_ADDR, 0, fw_hdr);
3798
3799 tg3_rxcpu_resume(tp);
3800}
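Laid out explicitly, the 57766 blob the comment describes looks like this (field names follow the accessors used above; fragment count and sizes vary per firmware release):

        /* struct tg3_firmware_hdr, 12 bytes (TG3_FW_HDR_LEN):
         *      __be32 version;         unused in fragment headers
         *      __be32 base_addr;       TG3_57766_FW_BASE_ADDR in the main hdr
         *      __be32 len;             0xffffffff in the main hdr; for each
         *                              fragment, header + data in bytes
         *
         * blob layout:
         *      [main hdr][frag hdr][frag data][frag hdr][frag data] ...
         */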
3801
3609/* tp->lock is held. */ 3802/* tp->lock is held. */
3610static int tg3_load_tso_firmware(struct tg3 *tp) 3803static int tg3_load_tso_firmware(struct tg3 *tp)
3611{ 3804{
3612 struct fw_info info; 3805 const struct tg3_firmware_hdr *fw_hdr;
3613 const __be32 *fw_data;
3614 unsigned long cpu_base, cpu_scratch_base, cpu_scratch_size; 3806 unsigned long cpu_base, cpu_scratch_base, cpu_scratch_size;
3615 int err, i; 3807 int err;
3616 3808
3617 if (tg3_flag(tp, HW_TSO_1) || 3809 if (!tg3_flag(tp, FW_TSO))
3618 tg3_flag(tp, HW_TSO_2) ||
3619 tg3_flag(tp, HW_TSO_3))
3620 return 0; 3810 return 0;
3621 3811
3622 fw_data = (void *)tp->fw->data; 3812 fw_hdr = (struct tg3_firmware_hdr *)tp->fw->data;
3623 3813
3624 /* Firmware blob starts with version numbers, followed by 3814 /* Firmware blob starts with version numbers, followed by
3625 start address and length. We are setting complete length. 3815 start address and length. We are setting complete length.
@@ -3627,10 +3817,7 @@ static int tg3_load_tso_firmware(struct tg3 *tp)
3627 Remainder is the blob to be loaded contiguously 3817 Remainder is the blob to be loaded contiguously
3628 from start address. */ 3818 from start address. */
3629 3819
3630 info.fw_base = be32_to_cpu(fw_data[1]);
3631 cpu_scratch_size = tp->fw_len; 3820 cpu_scratch_size = tp->fw_len;
3632 info.fw_len = tp->fw->size - 12;
3633 info.fw_data = &fw_data[3];
3634 3821
3635 if (tg3_asic_rev(tp) == ASIC_REV_5705) { 3822 if (tg3_asic_rev(tp) == ASIC_REV_5705) {
3636 cpu_base = RX_CPU_BASE; 3823 cpu_base = RX_CPU_BASE;
@@ -3643,36 +3830,28 @@ static int tg3_load_tso_firmware(struct tg3 *tp)
3643 3830
3644 err = tg3_load_firmware_cpu(tp, cpu_base, 3831 err = tg3_load_firmware_cpu(tp, cpu_base,
3645 cpu_scratch_base, cpu_scratch_size, 3832 cpu_scratch_base, cpu_scratch_size,
3646 &info); 3833 fw_hdr);
3647 if (err) 3834 if (err)
3648 return err; 3835 return err;
3649 3836
3650 /* Now startup the cpu. */ 3837 /* Now startup the cpu. */
3651 tw32(cpu_base + CPU_STATE, 0xffffffff); 3838 err = tg3_pause_cpu_and_set_pc(tp, cpu_base,
3652 tw32_f(cpu_base + CPU_PC, info.fw_base); 3839 be32_to_cpu(fw_hdr->base_addr));
3653 3840 if (err) {
3654 for (i = 0; i < 5; i++) {
3655 if (tr32(cpu_base + CPU_PC) == info.fw_base)
3656 break;
3657 tw32(cpu_base + CPU_STATE, 0xffffffff);
3658 tw32(cpu_base + CPU_MODE, CPU_MODE_HALT);
3659 tw32_f(cpu_base + CPU_PC, info.fw_base);
3660 udelay(1000);
3661 }
3662 if (i >= 5) {
3663 netdev_err(tp->dev, 3841 netdev_err(tp->dev,
3664 "%s fails to set CPU PC, is %08x should be %08x\n", 3842 "%s fails to set CPU PC, is %08x should be %08x\n",
3665 __func__, tr32(cpu_base + CPU_PC), info.fw_base); 3843 __func__, tr32(cpu_base + CPU_PC),
3844 be32_to_cpu(fw_hdr->base_addr));
3666 return -ENODEV; 3845 return -ENODEV;
3667 } 3846 }
3668 tw32(cpu_base + CPU_STATE, 0xffffffff); 3847
3669 tw32_f(cpu_base + CPU_MODE, 0x00000000); 3848 tg3_resume_cpu(tp, cpu_base);
3670 return 0; 3849 return 0;
3671} 3850}
3672 3851
3673 3852
3674/* tp->lock is held. */ 3853/* tp->lock is held. */
3675static void __tg3_set_mac_addr(struct tg3 *tp, int skip_mac_1) 3854static void __tg3_set_mac_addr(struct tg3 *tp, bool skip_mac_1)
3676{ 3855{
3677 u32 addr_high, addr_low; 3856 u32 addr_high, addr_low;
3678 int i; 3857 int i;
@@ -3735,7 +3914,7 @@ static int tg3_power_up(struct tg3 *tp)
3735 return err; 3914 return err;
3736} 3915}
3737 3916
3738static int tg3_setup_phy(struct tg3 *, int); 3917static int tg3_setup_phy(struct tg3 *, bool);
3739 3918
3740static int tg3_power_down_prepare(struct tg3 *tp) 3919static int tg3_power_down_prepare(struct tg3 *tp)
3741{ 3920{
@@ -3807,7 +3986,7 @@ static int tg3_power_down_prepare(struct tg3 *tp)
3807 tp->phy_flags |= TG3_PHYFLG_IS_LOW_POWER; 3986 tp->phy_flags |= TG3_PHYFLG_IS_LOW_POWER;
3808 3987
3809 if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES)) 3988 if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES))
3810 tg3_setup_phy(tp, 0); 3989 tg3_setup_phy(tp, false);
3811 } 3990 }
3812 3991
3813 if (tg3_asic_rev(tp) == ASIC_REV_5906) { 3992 if (tg3_asic_rev(tp) == ASIC_REV_5906) {
@@ -3848,7 +4027,13 @@ static int tg3_power_down_prepare(struct tg3 *tp)
3848 4027
3849 if (tp->phy_flags & TG3_PHYFLG_MII_SERDES) 4028 if (tp->phy_flags & TG3_PHYFLG_MII_SERDES)
3850 mac_mode = MAC_MODE_PORT_MODE_GMII; 4029 mac_mode = MAC_MODE_PORT_MODE_GMII;
3851 else 4030 else if (tp->phy_flags &
4031 TG3_PHYFLG_KEEP_LINK_ON_PWRDN) {
4032 if (tp->link_config.active_speed == SPEED_1000)
4033 mac_mode = MAC_MODE_PORT_MODE_GMII;
4034 else
4035 mac_mode = MAC_MODE_PORT_MODE_MII;
4036 } else
3852 mac_mode = MAC_MODE_PORT_MODE_MII; 4037 mac_mode = MAC_MODE_PORT_MODE_MII;
3853 4038
3854 mac_mode |= tp->mac_mode & MAC_MODE_LINK_POLARITY; 4039 mac_mode |= tp->mac_mode & MAC_MODE_LINK_POLARITY;
@@ -4102,12 +4287,16 @@ static void tg3_phy_copper_begin(struct tg3 *tp)
4102 (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)) { 4287 (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)) {
4103 u32 adv, fc; 4288 u32 adv, fc;
4104 4289
4105 if (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER) { 4290 if ((tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER) &&
4291 !(tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN)) {
4106 adv = ADVERTISED_10baseT_Half | 4292 adv = ADVERTISED_10baseT_Half |
4107 ADVERTISED_10baseT_Full; 4293 ADVERTISED_10baseT_Full;
4108 if (tg3_flag(tp, WOL_SPEED_100MB)) 4294 if (tg3_flag(tp, WOL_SPEED_100MB))
4109 adv |= ADVERTISED_100baseT_Half | 4295 adv |= ADVERTISED_100baseT_Half |
4110 ADVERTISED_100baseT_Full; 4296 ADVERTISED_100baseT_Full;
4297 if (tp->phy_flags & TG3_PHYFLG_1G_ON_VAUX_OK)
4298 adv |= ADVERTISED_1000baseT_Half |
4299 ADVERTISED_1000baseT_Full;
4111 4300
4112 fc = FLOW_CTRL_TX | FLOW_CTRL_RX; 4301 fc = FLOW_CTRL_TX | FLOW_CTRL_RX;
4113 } else { 4302 } else {
@@ -4121,6 +4310,15 @@ static void tg3_phy_copper_begin(struct tg3 *tp)
4121 4310
4122 tg3_phy_autoneg_cfg(tp, adv, fc); 4311 tg3_phy_autoneg_cfg(tp, adv, fc);
4123 4312
4313 if ((tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER) &&
4314 (tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN)) {
4315 /* Normally during power down we want to autonegotiate
4316 * the lowest possible speed for WOL. However, to avoid
4317 * link flap, we leave it untouched.
4318 */
4319 return;
4320 }
4321
4124 tg3_writephy(tp, MII_BMCR, 4322 tg3_writephy(tp, MII_BMCR,
4125 BMCR_ANENABLE | BMCR_ANRESTART); 4323 BMCR_ANENABLE | BMCR_ANRESTART);
4126 } else { 4324 } else {
@@ -4177,6 +4375,103 @@ static void tg3_phy_copper_begin(struct tg3 *tp)
4177 } 4375 }
4178} 4376}
4179 4377
4378static int tg3_phy_pull_config(struct tg3 *tp)
4379{
4380 int err;
4381 u32 val;
4382
4383 err = tg3_readphy(tp, MII_BMCR, &val);
4384 if (err)
4385 goto done;
4386
4387 if (!(val & BMCR_ANENABLE)) {
4388 tp->link_config.autoneg = AUTONEG_DISABLE;
4389 tp->link_config.advertising = 0;
4390 tg3_flag_clear(tp, PAUSE_AUTONEG);
4391
4392 err = -EIO;
4393
4394 switch (val & (BMCR_SPEED1000 | BMCR_SPEED100)) {
4395 case 0:
4396 if (tp->phy_flags & TG3_PHYFLG_ANY_SERDES)
4397 goto done;
4398
4399 tp->link_config.speed = SPEED_10;
4400 break;
4401 case BMCR_SPEED100:
4402 if (tp->phy_flags & TG3_PHYFLG_ANY_SERDES)
4403 goto done;
4404
4405 tp->link_config.speed = SPEED_100;
4406 break;
4407 case BMCR_SPEED1000:
4408 if (!(tp->phy_flags & TG3_PHYFLG_10_100_ONLY)) {
4409 tp->link_config.speed = SPEED_1000;
4410 break;
4411 }
4412 /* Fall through */
4413 default:
4414 goto done;
4415 }
4416
4417 if (val & BMCR_FULLDPLX)
4418 tp->link_config.duplex = DUPLEX_FULL;
4419 else
4420 tp->link_config.duplex = DUPLEX_HALF;
4421
4422 tp->link_config.flowctrl = FLOW_CTRL_RX | FLOW_CTRL_TX;
4423
4424 err = 0;
4425 goto done;
4426 }
4427
4428 tp->link_config.autoneg = AUTONEG_ENABLE;
4429 tp->link_config.advertising = ADVERTISED_Autoneg;
4430 tg3_flag_set(tp, PAUSE_AUTONEG);
4431
4432 if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES)) {
4433 u32 adv;
4434
4435 err = tg3_readphy(tp, MII_ADVERTISE, &val);
4436 if (err)
4437 goto done;
4438
4439 adv = mii_adv_to_ethtool_adv_t(val & ADVERTISE_ALL);
4440 tp->link_config.advertising |= adv | ADVERTISED_TP;
4441
4442 tp->link_config.flowctrl = tg3_decode_flowctrl_1000T(val);
4443 } else {
4444 tp->link_config.advertising |= ADVERTISED_FIBRE;
4445 }
4446
4447 if (!(tp->phy_flags & TG3_PHYFLG_10_100_ONLY)) {
4448 u32 adv;
4449
4450 if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES)) {
4451 err = tg3_readphy(tp, MII_CTRL1000, &val);
4452 if (err)
4453 goto done;
4454
4455 adv = mii_ctrl1000_to_ethtool_adv_t(val);
4456 } else {
4457 err = tg3_readphy(tp, MII_ADVERTISE, &val);
4458 if (err)
4459 goto done;
4460
4461 adv = tg3_decode_flowctrl_1000X(val);
4462 tp->link_config.flowctrl = adv;
4463
4464 val &= (ADVERTISE_1000XHALF | ADVERTISE_1000XFULL);
4465 adv = mii_adv_to_ethtool_adv_x(val);
4466 }
4467
4468 tp->link_config.advertising |= adv;
4469 }
4470
4471done:
4472 return err;
4473}
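tg3_phy_pull_config() reads the live PHY state back into tp->link_config instead of clobbering it, leaning on the generic mii helpers to turn advertisement register bits into ethtool masks. For instance (helpers from linux/mii.h):

        u32 adv;

        /* MII_ADVERTISE bits -> ethtool ADVERTISED_* mask */
        adv = mii_adv_to_ethtool_adv_t(ADVERTISE_100FULL | ADVERTISE_10HALF);
        /* -> ADVERTISED_100baseT_Full | ADVERTISED_10baseT_Half */

        /* MII_CTRL1000 bits -> ethtool mask */
        adv = mii_ctrl1000_to_ethtool_adv_t(ADVERTISE_1000FULL);
        /* -> ADVERTISED_1000baseT_Full */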
4474
4180static int tg3_init_5401phy_dsp(struct tg3 *tp) 4475static int tg3_init_5401phy_dsp(struct tg3 *tp)
4181{ 4476{
4182 int err; 4477 int err;
@@ -4196,6 +4491,32 @@ static int tg3_init_5401phy_dsp(struct tg3 *tp)
4196 return err; 4491 return err;
4197} 4492}
4198 4493
4494static bool tg3_phy_eee_config_ok(struct tg3 *tp)
4495{
4496 u32 val;
4497 u32 tgtadv = 0;
4498 u32 advertising = tp->link_config.advertising;
4499
4500 if (!(tp->phy_flags & TG3_PHYFLG_EEE_CAP))
4501 return true;
4502
4503 if (tg3_phy_cl45_read(tp, MDIO_MMD_AN, MDIO_AN_EEE_ADV, &val))
4504 return false;
4505
4506 val &= (MDIO_AN_EEE_ADV_100TX | MDIO_AN_EEE_ADV_1000T);
4507
4508
4509 if (advertising & ADVERTISED_100baseT_Full)
4510 tgtadv |= MDIO_AN_EEE_ADV_100TX;
4511 if (advertising & ADVERTISED_1000baseT_Full)
4512 tgtadv |= MDIO_AN_EEE_ADV_1000T;
4513
4514 if (val != tgtadv)
4515 return false;
4516
4517 return true;
4518}
4519
4199static bool tg3_phy_copper_an_config_ok(struct tg3 *tp, u32 *lcladv) 4520static bool tg3_phy_copper_an_config_ok(struct tg3 *tp, u32 *lcladv)
4200{ 4521{
4201 u32 advmsk, tgtadv, advertising; 4522 u32 advmsk, tgtadv, advertising;
@@ -4262,7 +4583,7 @@ static bool tg3_phy_copper_fetch_rmtadv(struct tg3 *tp, u32 *rmtadv)
4262 return true; 4583 return true;
4263} 4584}
4264 4585
4265static bool tg3_test_and_report_link_chg(struct tg3 *tp, int curr_link_up) 4586static bool tg3_test_and_report_link_chg(struct tg3 *tp, bool curr_link_up)
4266{ 4587{
4267 if (curr_link_up != tp->link_up) { 4588 if (curr_link_up != tp->link_up) {
4268 if (curr_link_up) { 4589 if (curr_link_up) {
@@ -4280,23 +4601,28 @@ static bool tg3_test_and_report_link_chg(struct tg3 *tp, int curr_link_up)
4280 return false; 4601 return false;
4281} 4602}
4282 4603
4283static int tg3_setup_copper_phy(struct tg3 *tp, int force_reset) 4604static void tg3_clear_mac_status(struct tg3 *tp)
4284{ 4605{
4285 int current_link_up; 4606 tw32(MAC_EVENT, 0);
4607
4608 tw32_f(MAC_STATUS,
4609 MAC_STATUS_SYNC_CHANGED |
4610 MAC_STATUS_CFG_CHANGED |
4611 MAC_STATUS_MI_COMPLETION |
4612 MAC_STATUS_LNKSTATE_CHANGED);
4613 udelay(40);
4614}
4615
4616static int tg3_setup_copper_phy(struct tg3 *tp, bool force_reset)
4617{
4618 bool current_link_up;
4286 u32 bmsr, val; 4619 u32 bmsr, val;
4287 u32 lcl_adv, rmt_adv; 4620 u32 lcl_adv, rmt_adv;
4288 u16 current_speed; 4621 u16 current_speed;
4289 u8 current_duplex; 4622 u8 current_duplex;
4290 int i, err; 4623 int i, err;
4291 4624
4292 tw32(MAC_EVENT, 0); 4625 tg3_clear_mac_status(tp);
4293
4294 tw32_f(MAC_STATUS,
4295 (MAC_STATUS_SYNC_CHANGED |
4296 MAC_STATUS_CFG_CHANGED |
4297 MAC_STATUS_MI_COMPLETION |
4298 MAC_STATUS_LNKSTATE_CHANGED));
4299 udelay(40);
4300 4626
4301 if ((tp->mi_mode & MAC_MI_MODE_AUTO_POLL) != 0) { 4627 if ((tp->mi_mode & MAC_MI_MODE_AUTO_POLL) != 0) {
4302 tw32_f(MAC_MI_MODE, 4628 tw32_f(MAC_MI_MODE,
@@ -4316,7 +4642,7 @@ static int tg3_setup_copper_phy(struct tg3 *tp, int force_reset)
4316 tg3_readphy(tp, MII_BMSR, &bmsr); 4642 tg3_readphy(tp, MII_BMSR, &bmsr);
4317 if (!tg3_readphy(tp, MII_BMSR, &bmsr) && 4643 if (!tg3_readphy(tp, MII_BMSR, &bmsr) &&
4318 !(bmsr & BMSR_LSTATUS)) 4644 !(bmsr & BMSR_LSTATUS))
4319 force_reset = 1; 4645 force_reset = true;
4320 } 4646 }
4321 if (force_reset) 4647 if (force_reset)
4322 tg3_phy_reset(tp); 4648 tg3_phy_reset(tp);
@@ -4380,7 +4706,7 @@ static int tg3_setup_copper_phy(struct tg3 *tp, int force_reset)
4380 tg3_writephy(tp, MII_TG3_EXT_CTRL, 0); 4706 tg3_writephy(tp, MII_TG3_EXT_CTRL, 0);
4381 } 4707 }
4382 4708
4383 current_link_up = 0; 4709 current_link_up = false;
4384 current_speed = SPEED_UNKNOWN; 4710 current_speed = SPEED_UNKNOWN;
4385 current_duplex = DUPLEX_UNKNOWN; 4711 current_duplex = DUPLEX_UNKNOWN;
4386 tp->phy_flags &= ~TG3_PHYFLG_MDIX_STATE; 4712 tp->phy_flags &= ~TG3_PHYFLG_MDIX_STATE;
@@ -4439,21 +4765,31 @@ static int tg3_setup_copper_phy(struct tg3 *tp, int force_reset)
4439 tp->link_config.active_duplex = current_duplex; 4765 tp->link_config.active_duplex = current_duplex;
4440 4766
4441 if (tp->link_config.autoneg == AUTONEG_ENABLE) { 4767 if (tp->link_config.autoneg == AUTONEG_ENABLE) {
4768 bool eee_config_ok = tg3_phy_eee_config_ok(tp);
4769
4442 if ((bmcr & BMCR_ANENABLE) && 4770 if ((bmcr & BMCR_ANENABLE) &&
4771 eee_config_ok &&
4443 tg3_phy_copper_an_config_ok(tp, &lcl_adv) && 4772 tg3_phy_copper_an_config_ok(tp, &lcl_adv) &&
4444 tg3_phy_copper_fetch_rmtadv(tp, &rmt_adv)) 4773 tg3_phy_copper_fetch_rmtadv(tp, &rmt_adv))
4445 current_link_up = 1; 4774 current_link_up = true;
4775
4776 /* EEE settings changes take effect only after a phy
4777 * reset. If we have skipped a reset due to Link Flap
4778 * Avoidance being enabled, do it now.
4779 */
4780 if (!eee_config_ok &&
4781 (tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN) &&
4782 !force_reset)
4783 tg3_phy_reset(tp);
4446 } else { 4784 } else {
4447 if (!(bmcr & BMCR_ANENABLE) && 4785 if (!(bmcr & BMCR_ANENABLE) &&
4448 tp->link_config.speed == current_speed && 4786 tp->link_config.speed == current_speed &&
4449 tp->link_config.duplex == current_duplex && 4787 tp->link_config.duplex == current_duplex) {
4450 tp->link_config.flowctrl == 4788 current_link_up = true;
4451 tp->link_config.active_flowctrl) {
4452 current_link_up = 1;
4453 } 4789 }
4454 } 4790 }
4455 4791
4456 if (current_link_up == 1 && 4792 if (current_link_up &&
4457 tp->link_config.active_duplex == DUPLEX_FULL) { 4793 tp->link_config.active_duplex == DUPLEX_FULL) {
4458 u32 reg, bit; 4794 u32 reg, bit;
4459 4795
@@ -4473,11 +4809,11 @@ static int tg3_setup_copper_phy(struct tg3 *tp, int force_reset)
4473 } 4809 }
4474 4810
4475relink: 4811relink:
4476 if (current_link_up == 0 || (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)) { 4812 if (!current_link_up || (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)) {
4477 tg3_phy_copper_begin(tp); 4813 tg3_phy_copper_begin(tp);
4478 4814
4479 if (tg3_flag(tp, ROBOSWITCH)) { 4815 if (tg3_flag(tp, ROBOSWITCH)) {
4480 current_link_up = 1; 4816 current_link_up = true;
4481 /* FIXME: when BCM5325 switch is used use 100 MBit/s */ 4817 /* FIXME: when BCM5325 switch is used use 100 MBit/s */
4482 current_speed = SPEED_1000; 4818 current_speed = SPEED_1000;
4483 current_duplex = DUPLEX_FULL; 4819 current_duplex = DUPLEX_FULL;
@@ -4488,11 +4824,11 @@ relink:
4488 tg3_readphy(tp, MII_BMSR, &bmsr); 4824 tg3_readphy(tp, MII_BMSR, &bmsr);
4489 if ((!tg3_readphy(tp, MII_BMSR, &bmsr) && (bmsr & BMSR_LSTATUS)) || 4825 if ((!tg3_readphy(tp, MII_BMSR, &bmsr) && (bmsr & BMSR_LSTATUS)) ||
4490 (tp->mac_mode & MAC_MODE_PORT_INT_LPBACK)) 4826 (tp->mac_mode & MAC_MODE_PORT_INT_LPBACK))
4491 current_link_up = 1; 4827 current_link_up = true;
4492 } 4828 }
4493 4829
4494 tp->mac_mode &= ~MAC_MODE_PORT_MODE_MASK; 4830 tp->mac_mode &= ~MAC_MODE_PORT_MODE_MASK;
4495 if (current_link_up == 1) { 4831 if (current_link_up) {
4496 if (tp->link_config.active_speed == SPEED_100 || 4832 if (tp->link_config.active_speed == SPEED_100 ||
4497 tp->link_config.active_speed == SPEED_10) 4833 tp->link_config.active_speed == SPEED_10)
4498 tp->mac_mode |= MAC_MODE_PORT_MODE_MII; 4834 tp->mac_mode |= MAC_MODE_PORT_MODE_MII;
@@ -4528,7 +4864,7 @@ relink:
4528 tp->mac_mode |= MAC_MODE_HALF_DUPLEX; 4864 tp->mac_mode |= MAC_MODE_HALF_DUPLEX;
4529 4865
4530 if (tg3_asic_rev(tp) == ASIC_REV_5700) { 4866 if (tg3_asic_rev(tp) == ASIC_REV_5700) {
4531 if (current_link_up == 1 && 4867 if (current_link_up &&
4532 tg3_5700_link_polarity(tp, tp->link_config.active_speed)) 4868 tg3_5700_link_polarity(tp, tp->link_config.active_speed))
4533 tp->mac_mode |= MAC_MODE_LINK_POLARITY; 4869 tp->mac_mode |= MAC_MODE_LINK_POLARITY;
4534 else 4870 else
@@ -4559,7 +4895,7 @@ relink:
4559 udelay(40); 4895 udelay(40);
4560 4896
4561 if (tg3_asic_rev(tp) == ASIC_REV_5700 && 4897 if (tg3_asic_rev(tp) == ASIC_REV_5700 &&
4562 current_link_up == 1 && 4898 current_link_up &&
4563 tp->link_config.active_speed == SPEED_1000 && 4899 tp->link_config.active_speed == SPEED_1000 &&
4564 (tg3_flag(tp, PCIX_MODE) || tg3_flag(tp, PCI_HIGH_SPEED))) { 4900 (tg3_flag(tp, PCIX_MODE) || tg3_flag(tp, PCI_HIGH_SPEED))) {
4565 udelay(120); 4901 udelay(120);
@@ -4999,19 +5335,19 @@ static void tg3_init_bcm8002(struct tg3 *tp)
 	tg3_writephy(tp, 0x10, 0x8011);
 }
 
-static int tg3_setup_fiber_hw_autoneg(struct tg3 *tp, u32 mac_status)
+static bool tg3_setup_fiber_hw_autoneg(struct tg3 *tp, u32 mac_status)
 {
 	u16 flowctrl;
+	bool current_link_up;
 	u32 sg_dig_ctrl, sg_dig_status;
 	u32 serdes_cfg, expected_sg_dig_ctrl;
 	int workaround, port_a;
-	int current_link_up;
 
 	serdes_cfg = 0;
 	expected_sg_dig_ctrl = 0;
 	workaround = 0;
 	port_a = 1;
-	current_link_up = 0;
+	current_link_up = false;
 
 	if (tg3_chip_rev_id(tp) != CHIPREV_ID_5704_A0 &&
 	    tg3_chip_rev_id(tp) != CHIPREV_ID_5704_A1) {
@@ -5042,7 +5378,7 @@ static int tg3_setup_fiber_hw_autoneg(struct tg3 *tp, u32 mac_status)
 		}
 		if (mac_status & MAC_STATUS_PCS_SYNCED) {
 			tg3_setup_flow_control(tp, 0, 0);
-			current_link_up = 1;
+			current_link_up = true;
 		}
 		goto out;
 	}
@@ -5063,7 +5399,7 @@ static int tg3_setup_fiber_hw_autoneg(struct tg3 *tp, u32 mac_status)
 				    MAC_STATUS_RCVD_CFG)) ==
 		     MAC_STATUS_PCS_SYNCED)) {
 			tp->serdes_counter--;
-			current_link_up = 1;
+			current_link_up = true;
 			goto out;
 		}
 restart_autoneg:
@@ -5098,7 +5434,7 @@ restart_autoneg:
 					   mii_adv_to_ethtool_adv_x(remote_adv);
 
 			tg3_setup_flow_control(tp, local_adv, remote_adv);
-			current_link_up = 1;
+			current_link_up = true;
 			tp->serdes_counter = 0;
 			tp->phy_flags &= ~TG3_PHYFLG_PARALLEL_DETECT;
 		} else if (!(sg_dig_status & SG_DIG_AUTONEG_COMPLETE)) {
@@ -5126,7 +5462,7 @@ restart_autoneg:
 			if ((mac_status & MAC_STATUS_PCS_SYNCED) &&
 			    !(mac_status & MAC_STATUS_RCVD_CFG)) {
 				tg3_setup_flow_control(tp, 0, 0);
-				current_link_up = 1;
+				current_link_up = true;
 				tp->phy_flags |=
 					TG3_PHYFLG_PARALLEL_DETECT;
 				tp->serdes_counter =
@@ -5144,9 +5480,9 @@ out:
 	return current_link_up;
 }
 
-static int tg3_setup_fiber_by_hand(struct tg3 *tp, u32 mac_status)
+static bool tg3_setup_fiber_by_hand(struct tg3 *tp, u32 mac_status)
 {
-	int current_link_up = 0;
+	bool current_link_up = false;
 
 	if (!(mac_status & MAC_STATUS_PCS_SYNCED))
 		goto out;
@@ -5173,7 +5509,7 @@ static int tg3_setup_fiber_by_hand(struct tg3 *tp, u32 mac_status)
 
 		tg3_setup_flow_control(tp, local_adv, remote_adv);
 
-		current_link_up = 1;
+		current_link_up = true;
 	}
 	for (i = 0; i < 30; i++) {
 		udelay(20);
@@ -5188,15 +5524,15 @@ static int tg3_setup_fiber_by_hand(struct tg3 *tp, u32 mac_status)
 		}
 
 		mac_status = tr32(MAC_STATUS);
-		if (current_link_up == 0 &&
+		if (!current_link_up &&
 		    (mac_status & MAC_STATUS_PCS_SYNCED) &&
 		    !(mac_status & MAC_STATUS_RCVD_CFG))
-			current_link_up = 1;
+			current_link_up = true;
 	} else {
 		tg3_setup_flow_control(tp, 0, 0);
 
 		/* Forcing 1000FD link up. */
-		current_link_up = 1;
+		current_link_up = true;
 
 		tw32_f(MAC_MODE, (tp->mac_mode | MAC_MODE_SEND_CONFIGS));
 		udelay(40);
@@ -5209,13 +5545,13 @@ out:
 	return current_link_up;
 }
 
-static int tg3_setup_fiber_phy(struct tg3 *tp, int force_reset)
+static int tg3_setup_fiber_phy(struct tg3 *tp, bool force_reset)
 {
 	u32 orig_pause_cfg;
 	u16 orig_active_speed;
 	u8 orig_active_duplex;
 	u32 mac_status;
-	int current_link_up;
+	bool current_link_up;
 	int i;
 
 	orig_pause_cfg = tp->link_config.active_flowctrl;
@@ -5252,7 +5588,7 @@ static int tg3_setup_fiber_phy(struct tg3 *tp, int force_reset)
 	tw32_f(MAC_EVENT, MAC_EVENT_LNKSTATE_CHANGED);
 	udelay(40);
 
-	current_link_up = 0;
+	current_link_up = false;
 	tp->link_config.rmt_adv = 0;
 	mac_status = tr32(MAC_STATUS);
 
@@ -5277,7 +5613,7 @@ static int tg3_setup_fiber_phy(struct tg3 *tp, int force_reset)
 
 	mac_status = tr32(MAC_STATUS);
 	if ((mac_status & MAC_STATUS_PCS_SYNCED) == 0) {
-		current_link_up = 0;
+		current_link_up = false;
 		if (tp->link_config.autoneg == AUTONEG_ENABLE &&
 		    tp->serdes_counter == 0) {
 			tw32_f(MAC_MODE, (tp->mac_mode |
@@ -5287,7 +5623,7 @@ static int tg3_setup_fiber_phy(struct tg3 *tp, int force_reset)
 		}
 	}
 
-	if (current_link_up == 1) {
+	if (current_link_up) {
 		tp->link_config.active_speed = SPEED_1000;
 		tp->link_config.active_duplex = DUPLEX_FULL;
 		tw32(MAC_LED_CTRL, (tp->led_ctrl |
@@ -5312,33 +5648,63 @@ static int tg3_setup_fiber_phy(struct tg3 *tp, int force_reset)
 	return 0;
 }
 
-static int tg3_setup_fiber_mii_phy(struct tg3 *tp, int force_reset)
+static int tg3_setup_fiber_mii_phy(struct tg3 *tp, bool force_reset)
 {
-	int current_link_up, err = 0;
+	int err = 0;
 	u32 bmsr, bmcr;
-	u16 current_speed;
-	u8 current_duplex;
-	u32 local_adv, remote_adv;
+	u16 current_speed = SPEED_UNKNOWN;
+	u8 current_duplex = DUPLEX_UNKNOWN;
+	bool current_link_up = false;
+	u32 local_adv, remote_adv, sgsr;
+
+	if ((tg3_asic_rev(tp) == ASIC_REV_5719 ||
+	     tg3_asic_rev(tp) == ASIC_REV_5720) &&
+	    !tg3_readphy(tp, SERDES_TG3_1000X_STATUS, &sgsr) &&
+	    (sgsr & SERDES_TG3_SGMII_MODE)) {
+
+		if (force_reset)
+			tg3_phy_reset(tp);
+
+		tp->mac_mode &= ~MAC_MODE_PORT_MODE_MASK;
+
+		if (!(sgsr & SERDES_TG3_LINK_UP)) {
+			tp->mac_mode |= MAC_MODE_PORT_MODE_GMII;
+		} else {
+			current_link_up = true;
+			if (sgsr & SERDES_TG3_SPEED_1000) {
+				current_speed = SPEED_1000;
+				tp->mac_mode |= MAC_MODE_PORT_MODE_GMII;
+			} else if (sgsr & SERDES_TG3_SPEED_100) {
+				current_speed = SPEED_100;
+				tp->mac_mode |= MAC_MODE_PORT_MODE_MII;
+			} else {
+				current_speed = SPEED_10;
+				tp->mac_mode |= MAC_MODE_PORT_MODE_MII;
+			}
+
+			if (sgsr & SERDES_TG3_FULL_DUPLEX)
+				current_duplex = DUPLEX_FULL;
+			else
+				current_duplex = DUPLEX_HALF;
+		}
+
+		tw32_f(MAC_MODE, tp->mac_mode);
+		udelay(40);
+
+		tg3_clear_mac_status(tp);
+
+		goto fiber_setup_done;
+	}
 
 	tp->mac_mode |= MAC_MODE_PORT_MODE_GMII;
 	tw32_f(MAC_MODE, tp->mac_mode);
 	udelay(40);
 
-	tw32(MAC_EVENT, 0);
-
-	tw32_f(MAC_STATUS,
-	     (MAC_STATUS_SYNC_CHANGED |
-	      MAC_STATUS_CFG_CHANGED |
-	      MAC_STATUS_MI_COMPLETION |
-	      MAC_STATUS_LNKSTATE_CHANGED));
-	udelay(40);
+	tg3_clear_mac_status(tp);
 
 	if (force_reset)
 		tg3_phy_reset(tp);
 
-	current_link_up = 0;
-	current_speed = SPEED_UNKNOWN;
-	current_duplex = DUPLEX_UNKNOWN;
 	tp->link_config.rmt_adv = 0;
 
 	err |= tg3_readphy(tp, MII_BMSR, &bmsr);
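On 5719/5720 parts strapped into SGMII mode, tg3_setup_fiber_mii_phy() now reads link state straight out of the serdes 1000X status register instead of autonegotiating through the MII core. A hedged restatement of that bit decoding as a standalone function; the SERDES_TG3_* values mirror the tg3.h hunk further down, and the struct is purely illustrative:

#include <stdbool.h>
#include <stdint.h>

#define SERDES_TG3_SGMII_MODE	0x0001
#define SERDES_TG3_LINK_UP	0x0002
#define SERDES_TG3_FULL_DUPLEX	0x0004
#define SERDES_TG3_SPEED_100	0x0008
#define SERDES_TG3_SPEED_1000	0x0010

struct link_state {
	bool up;
	bool full_duplex;
	int speed_mbps;
};

/* Decode the serdes status word the way tg3_setup_fiber_mii_phy()
 * does inline; 10 Mb/s is the fallback when neither speed bit is set.
 */
static struct link_state decode_sgsr(uint32_t sgsr)
{
	struct link_state ls = { .up = false, .full_duplex = false,
				 .speed_mbps = 10 };

	if (!(sgsr & SERDES_TG3_LINK_UP))
		return ls;

	ls.up = true;
	if (sgsr & SERDES_TG3_SPEED_1000)
		ls.speed_mbps = 1000;
	else if (sgsr & SERDES_TG3_SPEED_100)
		ls.speed_mbps = 100;
	ls.full_duplex = !!(sgsr & SERDES_TG3_FULL_DUPLEX);
	return ls;
}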
@@ -5424,7 +5790,7 @@ static int tg3_setup_fiber_mii_phy(struct tg3 *tp, int force_reset)
 
 		if (bmsr & BMSR_LSTATUS) {
 			current_speed = SPEED_1000;
-			current_link_up = 1;
+			current_link_up = true;
 			if (bmcr & BMCR_FULLDPLX)
 				current_duplex = DUPLEX_FULL;
 			else
@@ -5451,12 +5817,13 @@ static int tg3_setup_fiber_mii_phy(struct tg3 *tp, int force_reset)
 			} else if (!tg3_flag(tp, 5780_CLASS)) {
 				/* Link is up via parallel detect */
 			} else {
-				current_link_up = 0;
+				current_link_up = false;
 			}
 		}
 	}
 
-	if (current_link_up == 1 && current_duplex == DUPLEX_FULL)
+fiber_setup_done:
+	if (current_link_up && current_duplex == DUPLEX_FULL)
 		tg3_setup_flow_control(tp, local_adv, remote_adv);
 
 	tp->mac_mode &= ~MAC_MODE_HALF_DUPLEX;
@@ -5535,7 +5902,7 @@ static void tg3_serdes_parallel_detect(struct tg3 *tp)
 	}
 }
 
-static int tg3_setup_phy(struct tg3 *tp, int force_reset)
+static int tg3_setup_phy(struct tg3 *tp, bool force_reset)
 {
 	u32 val;
 	int err;
@@ -5625,10 +5992,13 @@ static int tg3_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info)
 
 	info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |
 				SOF_TIMESTAMPING_RX_SOFTWARE |
-				SOF_TIMESTAMPING_SOFTWARE |
-				SOF_TIMESTAMPING_TX_HARDWARE |
-				SOF_TIMESTAMPING_RX_HARDWARE |
-				SOF_TIMESTAMPING_RAW_HARDWARE;
+				SOF_TIMESTAMPING_SOFTWARE;
+
+	if (tg3_flag(tp, PTP_CAPABLE)) {
+		info->so_timestamping |= SOF_TIMESTAMPING_TX_HARDWARE |
+					 SOF_TIMESTAMPING_RX_HARDWARE |
+					 SOF_TIMESTAMPING_RAW_HARDWARE;
+	}
 
 	if (tp->ptp_clock)
 		info->phc_index = ptp_clock_index(tp->ptp_clock);
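The get_ts_info() change stops advertising hardware timestamping on chips that have no PTP block: the SOF_TIMESTAMPING_*_HARDWARE bits are now reported only when tg3_flag(tp, PTP_CAPABLE) is set. A sketch of the capability-gated reporting under assumed flag values (the real constants live in include/uapi/linux/net_tstamp.h):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag values only, not the kernel's SOF_* encoding. */
#define TS_TX_SOFTWARE	(1 << 0)
#define TS_RX_SOFTWARE	(1 << 1)
#define TS_SOFTWARE	(1 << 2)
#define TS_TX_HARDWARE	(1 << 3)
#define TS_RX_HARDWARE	(1 << 4)
#define TS_RAW_HARDWARE	(1 << 5)

static uint32_t report_timestamping(bool ptp_capable)
{
	/* Software timestamping is always available... */
	uint32_t caps = TS_TX_SOFTWARE | TS_RX_SOFTWARE | TS_SOFTWARE;

	/* ...hardware timestamping only when the chip has a PTP block,
	 * mirroring the tg3_flag(tp, PTP_CAPABLE) test above.
	 */
	if (ptp_capable)
		caps |= TS_TX_HARDWARE | TS_RX_HARDWARE | TS_RAW_HARDWARE;

	return caps;
}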
@@ -6348,7 +6718,7 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget)
 
 		if (desc->type_flags & RXD_FLAG_VLAN &&
 		    !(tp->rx_mode & RX_MODE_KEEP_VLAN_TAG))
-			__vlan_hwaccel_put_tag(skb,
+			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
 					       desc->err_vlan & RXD_VLAN_MASK);
 
 		napi_gro_receive(&tnapi->napi, skb);
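This hunk is fallout from the 802.1ad (stacked VLAN) preparation elsewhere in this merge: __vlan_hwaccel_put_tag() now takes the tag protocol as an explicit __be16, and the feature flags become NETIF_F_HW_VLAN_CTAG_* (see the tg3_init_one() hunk below). A hedged sketch of an rx path on the new signature; the helper name and tci source are hypothetical:

#include <linux/if_vlan.h>

/* Hypothetical rx helper: "tci" would come from the rx descriptor
 * (tg3 uses desc->err_vlan & RXD_VLAN_MASK). The point is the new
 * middle argument: drivers stripping 802.1Q tags pass the protocol
 * explicitly.
 */
static void rx_strip_vlan(struct sk_buff *skb, u16 tci)
{
	__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), tci);
}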
@@ -6436,7 +6806,7 @@ static void tg3_poll_link(struct tg3 *tp)
 				      MAC_STATUS_LNKSTATE_CHANGED));
 				udelay(40);
 			} else
-				tg3_setup_phy(tp, 0);
+				tg3_setup_phy(tp, false);
 			spin_unlock(&tp->lock);
 		}
 	}
@@ -7533,7 +7903,7 @@ static int tg3_phy_lpbk_set(struct tg3 *tp, u32 speed, bool extlpbk)
 	u32 val, bmcr, mac_mode, ptest = 0;
 
 	tg3_phy_toggle_apd(tp, false);
-	tg3_phy_toggle_automdix(tp, 0);
+	tg3_phy_toggle_automdix(tp, false);
 
 	if (extlpbk && tg3_phy_set_extloopbk(tp))
 		return -EIO;
@@ -7641,7 +8011,7 @@ static void tg3_set_loopback(struct net_device *dev, netdev_features_t features)
 		spin_lock_bh(&tp->lock);
 		tg3_mac_loopback(tp, false);
 		/* Force link status check */
-		tg3_setup_phy(tp, 1);
+		tg3_setup_phy(tp, true);
 		spin_unlock_bh(&tp->lock);
 		netdev_info(dev, "Internal MAC loopback mode disabled.\n");
 	}
@@ -8039,11 +8409,9 @@ static int tg3_mem_rx_acquire(struct tg3 *tp)
 		tnapi->rx_rcb = dma_alloc_coherent(&tp->pdev->dev,
 						   TG3_RX_RCB_RING_BYTES(tp),
 						   &tnapi->rx_rcb_mapping,
-						   GFP_KERNEL);
+						   GFP_KERNEL | __GFP_ZERO);
 		if (!tnapi->rx_rcb)
 			goto err_out;
-
-		memset(tnapi->rx_rcb, 0, TG3_RX_RCB_RING_BYTES(tp));
 	}
 
 	return 0;
@@ -8093,12 +8461,10 @@ static int tg3_alloc_consistent(struct tg3 *tp)
 	tp->hw_stats = dma_alloc_coherent(&tp->pdev->dev,
 					  sizeof(struct tg3_hw_stats),
 					  &tp->stats_mapping,
-					  GFP_KERNEL);
+					  GFP_KERNEL | __GFP_ZERO);
 	if (!tp->hw_stats)
 		goto err_out;
 
-	memset(tp->hw_stats, 0, sizeof(struct tg3_hw_stats));
-
 	for (i = 0; i < tp->irq_cnt; i++) {
 		struct tg3_napi *tnapi = &tp->napi[i];
 		struct tg3_hw_status *sblk;
@@ -8106,11 +8472,10 @@ static int tg3_alloc_consistent(struct tg3 *tp)
 		tnapi->hw_status = dma_alloc_coherent(&tp->pdev->dev,
 						      TG3_HW_STATUS_SIZE,
 						      &tnapi->status_mapping,
-						      GFP_KERNEL);
+						      GFP_KERNEL | __GFP_ZERO);
 		if (!tnapi->hw_status)
 			goto err_out;
 
-		memset(tnapi->hw_status, 0, TG3_HW_STATUS_SIZE);
 		sblk = tnapi->hw_status;
 
 		if (tg3_flag(tp, ENABLE_RSS)) {
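Three coherent-DMA allocations drop their follow-up memset() in favour of asking the allocator for zeroed memory via __GFP_ZERO. A minimal sketch of the pattern; the device, size and mapping names are placeholders:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Hypothetical ring allocation; "ring_bytes" stands in for
 * TG3_RX_RCB_RING_BYTES(tp) and friends.
 */
static void *alloc_zeroed_ring(struct device *dev, size_t ring_bytes,
			       dma_addr_t *mapping)
{
	/* __GFP_ZERO makes the explicit memset(ptr, 0, ring_bytes)
	 * that used to follow this call unnecessary.
	 */
	return dma_alloc_coherent(dev, ring_bytes, mapping,
				  GFP_KERNEL | __GFP_ZERO);
}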
@@ -8157,7 +8522,7 @@ err_out:
 /* To stop a block, clear the enable bit and poll till it
  * clears. tp->lock is held.
  */
-static int tg3_stop_block(struct tg3 *tp, unsigned long ofs, u32 enable_bit, int silent)
+static int tg3_stop_block(struct tg3 *tp, unsigned long ofs, u32 enable_bit, bool silent)
 {
 	unsigned int i;
 	u32 val;
@@ -8201,7 +8566,7 @@ static int tg3_stop_block(struct tg3 *tp, unsigned long ofs, u32 enable_bit, int
 }
 
 /* tp->lock is held. */
-static int tg3_abort_hw(struct tg3 *tp, int silent)
+static int tg3_abort_hw(struct tg3 *tp, bool silent)
 {
 	int i, err;
 
@@ -8561,6 +8926,9 @@ static int tg3_chip_reset(struct tg3 *tp)
 
 	/* Reprobe ASF enable state. */
 	tg3_flag_clear(tp, ENABLE_ASF);
+	tp->phy_flags &= ~(TG3_PHYFLG_1G_ON_VAUX_OK |
+			   TG3_PHYFLG_KEEP_LINK_ON_PWRDN);
+
 	tg3_flag_clear(tp, ASF_NEW_HANDSHAKE);
 	tg3_read_mem(tp, NIC_SRAM_DATA_SIG, &val);
 	if (val == NIC_SRAM_DATA_SIG_MAGIC) {
@@ -8572,6 +8940,12 @@ static int tg3_chip_reset(struct tg3 *tp)
 			tp->last_event_jiffies = jiffies;
 			if (tg3_flag(tp, 5750_PLUS))
 				tg3_flag_set(tp, ASF_NEW_HANDSHAKE);
+
+			tg3_read_mem(tp, NIC_SRAM_DATA_CFG_3, &nic_cfg);
+			if (nic_cfg & NIC_SRAM_1G_ON_VAUX_OK)
+				tp->phy_flags |= TG3_PHYFLG_1G_ON_VAUX_OK;
+			if (nic_cfg & NIC_SRAM_LNK_FLAP_AVOID)
+				tp->phy_flags |= TG3_PHYFLG_KEEP_LINK_ON_PWRDN;
 		}
 	}
 
@@ -8582,7 +8956,7 @@ static void tg3_get_nstats(struct tg3 *, struct rtnl_link_stats64 *);
 static void tg3_get_estats(struct tg3 *, struct tg3_ethtool_stats *);
 
 /* tp->lock is held. */
-static int tg3_halt(struct tg3 *tp, int kind, int silent)
+static int tg3_halt(struct tg3 *tp, int kind, bool silent)
 {
 	int err;
 
@@ -8593,7 +8967,7 @@ static int tg3_halt(struct tg3 *tp, int kind, int silent)
 	tg3_abort_hw(tp, silent);
 	err = tg3_chip_reset(tp);
 
-	__tg3_set_mac_addr(tp, 0);
+	__tg3_set_mac_addr(tp, false);
 
 	tg3_write_sig_legacy(tp, kind);
 	tg3_write_sig_post_reset(tp, kind);
@@ -8617,7 +8991,8 @@ static int tg3_set_mac_addr(struct net_device *dev, void *p)
 {
 	struct tg3 *tp = netdev_priv(dev);
 	struct sockaddr *addr = p;
-	int err = 0, skip_mac_1 = 0;
+	int err = 0;
+	bool skip_mac_1 = false;
 
 	if (!is_valid_ether_addr(addr->sa_data))
 		return -EADDRNOTAVAIL;
@@ -8638,7 +9013,7 @@ static int tg3_set_mac_addr(struct net_device *dev, void *p)
 		/* Skip MAC addr 1 if ASF is using it. */
 		if ((addr0_high != addr1_high || addr0_low != addr1_low) &&
 		    !(addr1_high == 0 && addr1_low == 0))
-			skip_mac_1 = 1;
+			skip_mac_1 = true;
 	}
 	spin_lock_bh(&tp->lock);
 	__tg3_set_mac_addr(tp, skip_mac_1);
@@ -9057,7 +9432,7 @@ static void tg3_rss_write_indir_tbl(struct tg3 *tp)
 }
 
 /* tp->lock is held. */
-static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
+static int tg3_reset_hw(struct tg3 *tp, bool reset_phy)
 {
 	u32 val, rdmac_mode;
 	int i, err, limit;
@@ -9106,6 +9481,12 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 		     TG3_CPMU_DBTMR2_TXIDXEQ_2047US);
 	}
 
+	if ((tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN) &&
+	    !(tp->phy_flags & TG3_PHYFLG_USER_CONFIGURED)) {
+		tg3_phy_pull_config(tp);
+		tp->phy_flags |= TG3_PHYFLG_USER_CONFIGURED;
+	}
+
 	if (reset_phy)
 		tg3_phy_reset(tp);
 
@@ -9444,7 +9825,7 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 	tg3_rings_reset(tp);
 
 	/* Initialize MAC address and backoff seed. */
-	__tg3_set_mac_addr(tp, 0);
+	__tg3_set_mac_addr(tp, false);
 
 	/* MTU + ethernet header + FCS + optional VLAN tag */
 	tw32(MAC_RX_MTU_SIZE,
@@ -9781,6 +10162,13 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 			return err;
 	}
 
+	if (tg3_asic_rev(tp) == ASIC_REV_57766) {
+		/* Ignore any errors for the firmware download. If download
+		 * fails, the device will operate with EEE disabled
+		 */
+		tg3_load_57766_firmware(tp);
+	}
+
 	if (tg3_flag(tp, TSO_CAPABLE)) {
 		err = tg3_load_tso_firmware(tp);
 		if (err)
@@ -9888,7 +10276,7 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 	if (tp->phy_flags & TG3_PHYFLG_IS_LOW_POWER)
 		tp->phy_flags &= ~TG3_PHYFLG_IS_LOW_POWER;
 
-	err = tg3_setup_phy(tp, 0);
+	err = tg3_setup_phy(tp, false);
 	if (err)
 		return err;
 
@@ -9968,7 +10356,7 @@ static int tg3_reset_hw(struct tg3 *tp, int reset_phy)
 /* Called at device open time to get the chip ready for
  * packet processing. Invoked with tp->lock held.
  */
-static int tg3_init_hw(struct tg3 *tp, int reset_phy)
+static int tg3_init_hw(struct tg3 *tp, bool reset_phy)
 {
 	tg3_switch_clocks(tp);
 
@@ -10229,7 +10617,7 @@ static void tg3_timer(unsigned long __opaque)
 					phy_event = 1;
 
 			if (phy_event)
-				tg3_setup_phy(tp, 0);
+				tg3_setup_phy(tp, false);
 		} else if (tg3_flag(tp, POLL_SERDES)) {
 			u32 mac_stat = tr32(MAC_STATUS);
 			int need_setup = 0;
@@ -10252,7 +10640,7 @@ static void tg3_timer(unsigned long __opaque)
 					tw32_f(MAC_MODE, tp->mac_mode);
 					udelay(40);
 				}
-				tg3_setup_phy(tp, 0);
+				tg3_setup_phy(tp, false);
 			}
 		} else if ((tp->phy_flags & TG3_PHYFLG_MII_SERDES) &&
 			   tg3_flag(tp, 5780_CLASS)) {
@@ -10338,7 +10726,7 @@ static void tg3_timer_stop(struct tg3 *tp)
 /* Restart hardware after configuration changes, self-test, etc.
  * Invoked with tp->lock held.
  */
-static int tg3_restart_hw(struct tg3 *tp, int reset_phy)
+static int tg3_restart_hw(struct tg3 *tp, bool reset_phy)
 	__releases(tp->lock)
 	__acquires(tp->lock)
 {
@@ -10388,7 +10776,7 @@ static void tg3_reset_task(struct work_struct *work)
 	}
 
 	tg3_halt(tp, RESET_KIND_SHUTDOWN, 0);
-	err = tg3_init_hw(tp, 1);
+	err = tg3_init_hw(tp, true);
 	if (err)
 		goto out;
 
@@ -10558,7 +10946,7 @@ static int tg3_test_msi(struct tg3 *tp)
 	tg3_full_lock(tp, 1);
 
 	tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
-	err = tg3_init_hw(tp, 1);
+	err = tg3_init_hw(tp, true);
 
 	tg3_full_unlock(tp);
 
@@ -10570,7 +10958,7 @@ static int tg3_test_msi(struct tg3 *tp)
 
 static int tg3_request_firmware(struct tg3 *tp)
 {
-	const __be32 *fw_data;
+	const struct tg3_firmware_hdr *fw_hdr;
 
 	if (request_firmware(&tp->fw, tp->fw_needed, &tp->pdev->dev)) {
 		netdev_err(tp->dev, "Failed to load firmware \"%s\"\n",
@@ -10578,15 +10966,15 @@ static int tg3_request_firmware(struct tg3 *tp)
 		return -ENOENT;
 	}
 
-	fw_data = (void *)tp->fw->data;
+	fw_hdr = (struct tg3_firmware_hdr *)tp->fw->data;
 
 	/* Firmware blob starts with version numbers, followed by
 	 * start address and _full_ length including BSS sections
 	 * (which must be longer than the actual data, of course
 	 */
 
-	tp->fw_len = be32_to_cpu(fw_data[2]);	/* includes bss */
-	if (tp->fw_len < (tp->fw->size - 12)) {
+	tp->fw_len = be32_to_cpu(fw_hdr->len);	/* includes bss */
+	if (tp->fw_len < (tp->fw->size - TG3_FW_HDR_LEN)) {
 		netdev_err(tp->dev, "bogus length %d in \"%s\"\n",
 			   tp->fw_len, tp->fw_needed);
 		release_firmware(tp->fw);
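tg3_request_firmware() now views the blob through struct tg3_firmware_hdr (added in the tg3.h hunk below) rather than indexing a raw __be32 array, and the magic 12 becomes TG3_FW_HDR_LEN. A hedged userspace sketch of the same validation; ntohl() stands in for the kernel's be32_to_cpu():

#include <stdint.h>
#include <stddef.h>
#include <arpa/inet.h>	/* ntohl() */

/* Mirrors struct tg3_firmware_hdr from the tg3.h hunk below. */
struct fw_hdr {
	uint32_t version;	/* big-endian on disk */
	uint32_t base_addr;
	uint32_t len;		/* full length including BSS */
};

/* Returns the advertised payload length, or 0 if the blob is bogus:
 * the header's length must cover at least everything after the header.
 */
static uint32_t fw_payload_len(const uint8_t *blob, size_t blob_size)
{
	const struct fw_hdr *hdr = (const struct fw_hdr *)blob;
	uint32_t len;

	if (blob_size < sizeof(*hdr))
		return 0;
	len = ntohl(hdr->len);
	if (len < blob_size - sizeof(*hdr))	/* same check as tg3 */
		return 0;
	return len;
}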
@@ -10885,7 +11273,15 @@ static int tg3_open(struct net_device *dev)
 
 	if (tp->fw_needed) {
 		err = tg3_request_firmware(tp);
-		if (tg3_chip_rev_id(tp) == CHIPREV_ID_5701_A0) {
+		if (tg3_asic_rev(tp) == ASIC_REV_57766) {
+			if (err) {
+				netdev_warn(tp->dev, "EEE capability disabled\n");
+				tp->phy_flags &= ~TG3_PHYFLG_EEE_CAP;
+			} else if (!(tp->phy_flags & TG3_PHYFLG_EEE_CAP)) {
+				netdev_warn(tp->dev, "EEE capability restored\n");
+				tp->phy_flags |= TG3_PHYFLG_EEE_CAP;
+			}
+		} else if (tg3_chip_rev_id(tp) == CHIPREV_ID_5701_A0) {
 			if (err)
 				return err;
 		} else if (err) {
@@ -10910,7 +11306,9 @@ static int tg3_open(struct net_device *dev)
 
 	tg3_full_unlock(tp);
 
-	err = tg3_start(tp, true, true, true);
+	err = tg3_start(tp,
+			!(tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN),
+			true, true);
 	if (err) {
 		tg3_frob_aux_power(tp, false);
 		pci_set_power_state(tp->pdev, PCI_D3hot);
@@ -11416,8 +11814,12 @@ static int tg3_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
 		tp->link_config.duplex = cmd->duplex;
 	}
 
+	tp->phy_flags |= TG3_PHYFLG_USER_CONFIGURED;
+
+	tg3_warn_mgmt_link_flap(tp);
+
 	if (netif_running(dev))
-		tg3_setup_phy(tp, 1);
+		tg3_setup_phy(tp, true);
 
 	tg3_full_unlock(tp);
 
@@ -11494,6 +11896,8 @@ static int tg3_nway_reset(struct net_device *dev)
 	if (tp->phy_flags & TG3_PHYFLG_PHY_SERDES)
 		return -EINVAL;
 
+	tg3_warn_mgmt_link_flap(tp);
+
 	if (tg3_flag(tp, USE_PHYLIB)) {
 		if (!(tp->phy_flags & TG3_PHYFLG_IS_CONNECTED))
 			return -EAGAIN;
@@ -11571,7 +11975,7 @@ static int tg3_set_ringparam(struct net_device *dev, struct ethtool_ringparam *e
 
 	if (netif_running(dev)) {
 		tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
-		err = tg3_restart_hw(tp, 1);
+		err = tg3_restart_hw(tp, false);
 		if (!err)
 			tg3_netif_start(tp);
 	}
@@ -11606,6 +12010,9 @@ static int tg3_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam
 	struct tg3 *tp = netdev_priv(dev);
 	int err = 0;
 
+	if (tp->link_config.autoneg == AUTONEG_ENABLE)
+		tg3_warn_mgmt_link_flap(tp);
+
 	if (tg3_flag(tp, USE_PHYLIB)) {
 		u32 newadv;
 		struct phy_device *phydev;
@@ -11692,7 +12099,7 @@ static int tg3_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam
 
 		if (netif_running(dev)) {
 			tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
-			err = tg3_restart_hw(tp, 1);
+			err = tg3_restart_hw(tp, false);
 			if (!err)
 				tg3_netif_start(tp);
 		}
@@ -11700,6 +12107,8 @@ static int tg3_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam
 		tg3_full_unlock(tp);
 	}
 
+	tp->phy_flags |= TG3_PHYFLG_USER_CONFIGURED;
+
 	return err;
 }
 
@@ -12760,7 +13169,7 @@ static int tg3_test_loopback(struct tg3 *tp, u64 *data, bool do_extlpbk)
 		goto done;
 	}
 
-	err = tg3_reset_hw(tp, 1);
+	err = tg3_reset_hw(tp, true);
 	if (err) {
 		data[TG3_MAC_LOOPB_TEST] = TG3_LOOPBACK_FAILED;
 		data[TG3_PHY_LOOPB_TEST] = TG3_LOOPBACK_FAILED;
@@ -12927,7 +13336,7 @@ static void tg3_self_test(struct net_device *dev, struct ethtool_test *etest,
 		tg3_halt(tp, RESET_KIND_SHUTDOWN, 1);
 		if (netif_running(dev)) {
 			tg3_flag_set(tp, INIT_COMPLETE);
-			err2 = tg3_restart_hw(tp, 1);
+			err2 = tg3_restart_hw(tp, true);
 			if (!err2)
 				tg3_netif_start(tp);
 		}
@@ -13244,7 +13653,8 @@ static inline void tg3_set_mtu(struct net_device *dev, struct tg3 *tp,
 static int tg3_change_mtu(struct net_device *dev, int new_mtu)
 {
 	struct tg3 *tp = netdev_priv(dev);
-	int err, reset_phy = 0;
+	int err;
+	bool reset_phy = false;
 
 	if (new_mtu < TG3_MIN_MTU || new_mtu > TG3_MAX_MTU(tp))
 		return -EINVAL;
@@ -13271,7 +13681,7 @@ static int tg3_change_mtu(struct net_device *dev, int new_mtu)
 	 * breaks all requests to 256 bytes.
 	 */
 	if (tg3_asic_rev(tp) == ASIC_REV_57766)
-		reset_phy = 1;
+		reset_phy = true;
 
 	err = tg3_restart_hw(tp, reset_phy);
 
@@ -13837,6 +14247,12 @@ static void tg3_get_5720_nvram_info(struct tg3 *tp)
 		case FLASH_5762_EEPROM_LD:
 			nvmpinstrp = FLASH_5720_EEPROM_LD;
 			break;
+		case FLASH_5720VENDOR_M_ST_M45PE20:
+			/* This pinstrap supports multiple sizes, so force it
+			 * to read the actual size from location 0xf0.
+			 */
+			nvmpinstrp = FLASH_5720VENDOR_ST_45USPT;
+			break;
 		}
 	}
 
@@ -14289,14 +14705,18 @@ static void tg3_get_eeprom_hw_cfg(struct tg3 *tp)
 		    (cfg2 & NIC_SRAM_DATA_CFG_2_APD_EN))
 			tp->phy_flags |= TG3_PHYFLG_ENABLE_APD;
 
-		if (tg3_flag(tp, PCI_EXPRESS) &&
-		    tg3_asic_rev(tp) != ASIC_REV_5785 &&
-		    !tg3_flag(tp, 57765_PLUS)) {
+		if (tg3_flag(tp, PCI_EXPRESS)) {
 			u32 cfg3;
 
 			tg3_read_mem(tp, NIC_SRAM_DATA_CFG_3, &cfg3);
-			if (cfg3 & NIC_SRAM_ASPM_DEBOUNCE)
+			if (tg3_asic_rev(tp) != ASIC_REV_5785 &&
+			    !tg3_flag(tp, 57765_PLUS) &&
+			    (cfg3 & NIC_SRAM_ASPM_DEBOUNCE))
 				tg3_flag_set(tp, ASPM_WORKAROUND);
+			if (cfg3 & NIC_SRAM_LNK_FLAP_AVOID)
+				tp->phy_flags |= TG3_PHYFLG_KEEP_LINK_ON_PWRDN;
+			if (cfg3 & NIC_SRAM_1G_ON_VAUX_OK)
+				tp->phy_flags |= TG3_PHYFLG_1G_ON_VAUX_OK;
 		}
 
 		if (cfg4 & NIC_SRAM_RGMII_INBAND_DISABLE)
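The NIC_SRAM_LNK_FLAP_AVOID probing here is one piece of a feature that threads through several hunks above: when NVRAM sets the bit, TG3_PHYFLG_KEEP_LINK_ON_PWRDN is raised, and tg3_open()/tg3_resume() then pass a false reset_phy down so a management firmware's link is not bounced. A hedged sketch of that gating; cfg3 stands in for the NIC_SRAM_DATA_CFG_3 word read from NVRAM:

#include <stdbool.h>
#include <stdint.h>

#define NIC_SRAM_LNK_FLAP_AVOID	0x00400000	/* from the tg3.h hunk below */

/* Decide whether a reset path may bounce the PHY; mirrors the
 * "!(tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN)" argument that
 * tg3_open() and tg3_resume() now pass to tg3_start()/tg3_restart_hw().
 */
static bool may_reset_phy(uint32_t cfg3)
{
	bool keep_link_on_pwrdn = cfg3 & NIC_SRAM_LNK_FLAP_AVOID;

	return !keep_link_on_pwrdn;
}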
@@ -14450,6 +14870,12 @@ static int tg3_phy_probe(struct tg3 *tp)
 		}
 	}
 
+	if (!tg3_flag(tp, ENABLE_ASF) &&
+	    !(tp->phy_flags & TG3_PHYFLG_ANY_SERDES) &&
+	    !(tp->phy_flags & TG3_PHYFLG_10_100_ONLY))
+		tp->phy_flags &= ~(TG3_PHYFLG_1G_ON_VAUX_OK |
+				   TG3_PHYFLG_KEEP_LINK_ON_PWRDN);
+
 	if (tg3_flag(tp, USE_PHYLIB))
 		return tg3_phy_init(tp);
 
@@ -14515,6 +14941,7 @@ static int tg3_phy_probe(struct tg3 *tp)
 	if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES) &&
 	    (tg3_asic_rev(tp) == ASIC_REV_5719 ||
 	     tg3_asic_rev(tp) == ASIC_REV_5720 ||
+	     tg3_asic_rev(tp) == ASIC_REV_57766 ||
 	     tg3_asic_rev(tp) == ASIC_REV_5762 ||
 	     (tg3_asic_rev(tp) == ASIC_REV_5717 &&
 	      tg3_chip_rev_id(tp) != CHIPREV_ID_5717_A0) ||
@@ -14524,7 +14951,8 @@ static int tg3_phy_probe(struct tg3 *tp)
 
 	tg3_phy_init_link_config(tp);
 
-	if (!(tp->phy_flags & TG3_PHYFLG_ANY_SERDES) &&
+	if (!(tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN) &&
+	    !(tp->phy_flags & TG3_PHYFLG_ANY_SERDES) &&
 	    !tg3_flag(tp, ENABLE_APE) &&
 	    !tg3_flag(tp, ENABLE_ASF)) {
 		u32 bmsr, dummy;
@@ -15300,7 +15728,8 @@ static int tg3_get_invariants(struct tg3 *tp, const struct pci_device_id *ent)
 	} else if (tg3_asic_rev(tp) != ASIC_REV_5700 &&
 		   tg3_asic_rev(tp) != ASIC_REV_5701 &&
 		   tg3_chip_rev_id(tp) != CHIPREV_ID_5705_A0) {
-		tg3_flag_set(tp, TSO_BUG);
+		tg3_flag_set(tp, FW_TSO);
+		tg3_flag_set(tp, TSO_BUG);
 		if (tg3_asic_rev(tp) == ASIC_REV_5705)
 			tp->fw_needed = FIRMWARE_TG3TSO5;
 		else
@@ -15311,7 +15740,7 @@ static int tg3_get_invariants(struct tg3 *tp, const struct pci_device_id *ent)
 	if (tg3_flag(tp, HW_TSO_1) ||
 	    tg3_flag(tp, HW_TSO_2) ||
 	    tg3_flag(tp, HW_TSO_3) ||
-	    tp->fw_needed) {
+	    tg3_flag(tp, FW_TSO)) {
 		/* For firmware TSO, assume ASF is disabled.
 		 * We'll disable TSO later if we discover ASF
 		 * is enabled in tg3_get_eeprom_hw_cfg().
@@ -15326,6 +15755,9 @@ static int tg3_get_invariants(struct tg3 *tp, const struct pci_device_id *ent)
 	if (tg3_chip_rev_id(tp) == CHIPREV_ID_5701_A0)
 		tp->fw_needed = FIRMWARE_TG3;
 
+	if (tg3_asic_rev(tp) == ASIC_REV_57766)
+		tp->fw_needed = FIRMWARE_TG357766;
+
 	tp->irq_max = 1;
 
 	if (tg3_flag(tp, 5750_PLUS)) {
@@ -15598,7 +16030,7 @@ static int tg3_get_invariants(struct tg3 *tp, const struct pci_device_id *ent)
 	 */
 	tg3_get_eeprom_hw_cfg(tp);
 
-	if (tp->fw_needed && tg3_flag(tp, ENABLE_ASF)) {
+	if (tg3_flag(tp, FW_TSO) && tg3_flag(tp, ENABLE_ASF)) {
 		tg3_flag_clear(tp, TSO_CAPABLE);
 		tg3_flag_clear(tp, TSO_BUG);
 		tp->fw_needed = NULL;
@@ -15786,6 +16218,11 @@ static int tg3_get_invariants(struct tg3 *tp, const struct pci_device_id *ent)
 	udelay(50);
 	tg3_nvram_init(tp);
 
+	/* If the device has an NVRAM, no need to load patch firmware */
+	if (tg3_asic_rev(tp) == ASIC_REV_57766 &&
+	    !tg3_flag(tp, NO_NVRAM))
+		tp->fw_needed = NULL;
+
 	grc_misc_cfg = tr32(GRC_MISC_CFG);
 	grc_misc_cfg &= GRC_MISC_CFG_BOARD_ID_MASK;
 
@@ -16144,7 +16581,7 @@ out:
 }
 
 static int tg3_do_test_dma(struct tg3 *tp, u32 *buf, dma_addr_t buf_dma,
-			   int size, int to_device)
+			   int size, bool to_device)
 {
 	struct tg3_internal_buffer_desc test_desc;
 	u32 sram_dma_descs;
@@ -16344,7 +16781,7 @@ static int tg3_test_dma(struct tg3 *tp)
 		p[i] = i;
 
 	/* Send the buffer to the chip. */
-	ret = tg3_do_test_dma(tp, buf, buf_dma, TEST_BUFFER_SIZE, 1);
+	ret = tg3_do_test_dma(tp, buf, buf_dma, TEST_BUFFER_SIZE, true);
 	if (ret) {
 		dev_err(&tp->pdev->dev,
 			"%s: Buffer write failed. err = %d\n",
@@ -16367,7 +16804,7 @@ static int tg3_test_dma(struct tg3 *tp)
 	}
 #endif
 	/* Now read it back. */
-	ret = tg3_do_test_dma(tp, buf, buf_dma, TEST_BUFFER_SIZE, 0);
+	ret = tg3_do_test_dma(tp, buf, buf_dma, TEST_BUFFER_SIZE, false);
 	if (ret) {
 		dev_err(&tp->pdev->dev, "%s: Buffer read failed. "
 			"err = %d\n", __func__, ret);
@@ -16763,7 +17200,7 @@ static int tg3_init_one(struct pci_dev *pdev,
 
 	tg3_init_bufmgr_config(tp);
 
-	features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
+	features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX;
 
 	/* 5700 B0 chips do not support checksumming correctly due
 	 * to hardware bugs.
@@ -17048,7 +17485,7 @@ static int tg3_suspend(struct device *device)
 		tg3_full_lock(tp, 0);
 
 		tg3_flag_set(tp, INIT_COMPLETE);
-		err2 = tg3_restart_hw(tp, 1);
+		err2 = tg3_restart_hw(tp, true);
 		if (err2)
 			goto out;
 
@@ -17082,7 +17519,8 @@ static int tg3_resume(struct device *device)
 	tg3_full_lock(tp, 0);
 
 	tg3_flag_set(tp, INIT_COMPLETE);
-	err = tg3_restart_hw(tp, 1);
+	err = tg3_restart_hw(tp,
+			     !(tp->phy_flags & TG3_PHYFLG_KEEP_LINK_ON_PWRDN));
 	if (err)
 		goto out;
 
@@ -17098,15 +17536,9 @@ out:
 
 	return err;
 }
+#endif /* CONFIG_PM_SLEEP */
 
 static SIMPLE_DEV_PM_OPS(tg3_pm_ops, tg3_suspend, tg3_resume);
-#define TG3_PM_OPS (&tg3_pm_ops)
-
-#else
-
-#define TG3_PM_OPS NULL
-
-#endif /* CONFIG_PM_SLEEP */
 
 /**
  * tg3_io_error_detected - called when PCI error is detected
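The TG3_PM_OPS indirection existed only because the dev_pm_ops definition used to sit inside the CONFIG_PM_SLEEP block. SIMPLE_DEV_PM_OPS() is safe to use unconditionally: when CONFIG_PM_SLEEP is off, its SET_SYSTEM_SLEEP_PM_OPS() body expands to nothing, leaving an empty ops structure, so no NULL fallback define is needed. A sketch of the resulting pattern with hypothetical driver names:

#include <linux/pm.h>
#include <linux/pci.h>

#ifdef CONFIG_PM_SLEEP
static int mydrv_suspend(struct device *dev)
{
	/* ... quiesce hardware ... */
	return 0;
}

static int mydrv_resume(struct device *dev)
{
	/* ... restart hardware ... */
	return 0;
}
#endif /* CONFIG_PM_SLEEP */

/* Defined unconditionally; the macro drops the callback references
 * itself when CONFIG_PM_SLEEP is off.
 */
static SIMPLE_DEV_PM_OPS(mydrv_pm_ops, mydrv_suspend, mydrv_resume);

static struct pci_driver mydrv_driver = {
	/* ...probe/remove/id_table elided... */
	.driver.pm	= &mydrv_pm_ops,
};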
@@ -17221,7 +17653,7 @@ static void tg3_io_resume(struct pci_dev *pdev)
 
 	tg3_full_lock(tp, 0);
 	tg3_flag_set(tp, INIT_COMPLETE);
-	err = tg3_restart_hw(tp, 1);
+	err = tg3_restart_hw(tp, true);
 	if (err) {
 		tg3_full_unlock(tp);
 		netdev_err(netdev, "Cannot restart hardware after reset.\n");
@@ -17254,7 +17686,7 @@ static struct pci_driver tg3_driver = {
 	.probe		= tg3_init_one,
 	.remove		= tg3_remove_one,
 	.err_handler	= &tg3_err_handler,
-	.driver.pm	= TG3_PM_OPS,
+	.driver.pm	= &tg3_pm_ops,
 };
 
 static int __init tg3_init(void)
diff --git a/drivers/net/ethernet/broadcom/tg3.h b/drivers/net/ethernet/broadcom/tg3.h
index 8d7d4c2ab5d6..9b2d3ac2474a 100644
--- a/drivers/net/ethernet/broadcom/tg3.h
+++ b/drivers/net/ethernet/broadcom/tg3.h
@@ -2198,6 +2198,8 @@
 
 #define NIC_SRAM_DATA_CFG_3		0x00000d3c
 #define NIC_SRAM_ASPM_DEBOUNCE		 0x00000002
+#define NIC_SRAM_LNK_FLAP_AVOID		 0x00400000
+#define NIC_SRAM_1G_ON_VAUX_OK		 0x00800000
 
 #define NIC_SRAM_DATA_CFG_4		0x00000d60
 #define NIC_SRAM_GMII_MODE		 0x00000002
@@ -2222,6 +2224,12 @@
 #define NIC_SRAM_MBUF_POOL_BASE5705	0x00010000
 #define NIC_SRAM_MBUF_POOL_SIZE5705	0x0000e000
 
+#define TG3_SRAM_RXCPU_SCRATCH_BASE_57766	0x00030000
+#define TG3_SRAM_RXCPU_SCRATCH_SIZE_57766	0x00010000
+#define TG3_57766_FW_BASE_ADDR			0x00030000
+#define TG3_57766_FW_HANDSHAKE			0x0003fccc
+#define TG3_SBROM_IN_SERVICE_LOOP		0x51
+
 #define TG3_SRAM_RX_STD_BDCACHE_SIZE_5700	128
 #define TG3_SRAM_RX_STD_BDCACHE_SIZE_5755	64
 #define TG3_SRAM_RX_STD_BDCACHE_SIZE_5906	32
@@ -2365,6 +2373,13 @@
 #define MII_TG3_FET_SHDW_AUXSTAT2	0x1b
 #define  MII_TG3_FET_SHDW_AUXSTAT2_APD	0x0020
 
+/* Serdes PHY Register Definitions */
+#define SERDES_TG3_1000X_STATUS		0x14
+#define  SERDES_TG3_SGMII_MODE		 0x0001
+#define  SERDES_TG3_LINK_UP		 0x0002
+#define  SERDES_TG3_FULL_DUPLEX		 0x0004
+#define  SERDES_TG3_SPEED_100		 0x0008
+#define  SERDES_TG3_SPEED_1000		 0x0010
 
 /* APE registers. Accessible through BAR1 */
 #define TG3_APE_GPIO_MSG		0x0008
@@ -3009,17 +3024,18 @@ enum TG3_FLAGS {
 	TG3_FLAG_JUMBO_CAPABLE,
 	TG3_FLAG_CHIP_RESETTING,
 	TG3_FLAG_INIT_COMPLETE,
-	TG3_FLAG_TSO_BUG,
 	TG3_FLAG_MAX_RXPEND_64,
-	TG3_FLAG_TSO_CAPABLE,
 	TG3_FLAG_PCI_EXPRESS, /* BCM5785 + pci_is_pcie() */
 	TG3_FLAG_ASF_NEW_HANDSHAKE,
 	TG3_FLAG_HW_AUTONEG,
 	TG3_FLAG_IS_NIC,
 	TG3_FLAG_FLASH,
+	TG3_FLAG_FW_TSO,
 	TG3_FLAG_HW_TSO_1,
 	TG3_FLAG_HW_TSO_2,
 	TG3_FLAG_HW_TSO_3,
+	TG3_FLAG_TSO_CAPABLE,
+	TG3_FLAG_TSO_BUG,
 	TG3_FLAG_ICH_WORKAROUND,
 	TG3_FLAG_1SHOT_MSI,
 	TG3_FLAG_NO_FWARE_REPORTED,
@@ -3064,6 +3080,13 @@ enum TG3_FLAGS {
 	TG3_FLAG_NUMBER_OF_FLAGS,	/* Last entry in enum TG3_FLAGS */
 };
 
+struct tg3_firmware_hdr {
+	__be32 version; /* unused for fragments */
+	__be32 base_addr;
+	__be32 len;
+};
+#define TG3_FW_HDR_LEN		(sizeof(struct tg3_firmware_hdr))
+
 struct tg3 {
 	/* begin "general, frequently-used members" cacheline section */
 
@@ -3267,6 +3290,7 @@ struct tg3 {
 #define TG3_PHYFLG_IS_LOW_POWER		0x00000001
 #define TG3_PHYFLG_IS_CONNECTED		0x00000002
 #define TG3_PHYFLG_USE_MI_INTERRUPT	0x00000004
+#define TG3_PHYFLG_USER_CONFIGURED	0x00000008
 #define TG3_PHYFLG_PHY_SERDES		0x00000010
 #define TG3_PHYFLG_MII_SERDES		0x00000020
 #define TG3_PHYFLG_ANY_SERDES		(TG3_PHYFLG_PHY_SERDES | \
@@ -3284,6 +3308,8 @@ struct tg3 {
 #define TG3_PHYFLG_SERDES_PREEMPHASIS	0x00010000
 #define TG3_PHYFLG_PARALLEL_DETECT	0x00020000
 #define TG3_PHYFLG_EEE_CAP		0x00040000
+#define TG3_PHYFLG_1G_ON_VAUX_OK	0x00080000
+#define TG3_PHYFLG_KEEP_LINK_ON_PWRDN	0x00100000
 #define TG3_PHYFLG_MDIX_STATE		0x00200000
 
 	u32 led_ctrl;