author     Linus Torvalds <torvalds@linux-foundation.org>   2013-11-13 03:40:34 -0500
committer  Linus Torvalds <torvalds@linux-foundation.org>   2013-11-13 03:40:34 -0500
commit     42a2d923cc349583ebf6fdd52a7d35e1c2f7e6bd
tree       2b2b0c03b5389c1301800119333967efafd994ca /drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
parent     5cbb3d216e2041700231bcfc383ee5f8b7fc8b74
parent     75ecab1df14d90e86cebef9ec5c76befde46e65f
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
1) The addition of nftables. No longer will we need protocol-aware
firewall filtering modules; it can all live in userspace.
At the core of nftables is, for lack of a better term, a virtual
machine that executes bytecode to inspect packets or metadata
(arriving interface index, etc.) and make verdict decisions.
Besides support for loading packet contents and comparing them, the
interpreter supports lookups in various data structures as
fundamental operations. For example, sets are supported, so one
could create a set of whitelisted IP address entries with ACCEPT
verdicts attached to them and use the appropriate bytecode to do
such lookups.
Since the interpreted code is composed in userspace, userspace can
optimize it before handing it to the kernel.
Another major improvement is the capability of atomically updating
portions of the ruleset. In the existing netfilter implementation,
one has to update the entire rule set in order to make a change and
this is very expensive.
Userspace tools exist to create nftables rules using existing
netfilter rule sets, but both kernel implementations will need to
co-exist for quite some time as we transition from the old to the
new stuff.
Kudos to Patrick McHardy, Pablo Neira Ayuso, and others who have
worked so hard on this.
2) Daniel Borkmann and Hannes Frederic Sowa made several improvements
to our pseudo-random number generator, which is used for things
like UDP port randomization and netfilter, amongst other things.
In particular, the taus88 generator is upgraded to taus113, and
test cases are added.
3) Support 64-bit rates in HTB and TBF schedulers, from Eric Dumazet
and Yang Yingliang.
4) Add support for new 577xx tigon3 chips to tg3 driver, from Nithin
Sujir.
5) Fix two fatal flaws in TCP dynamic right sizing, from Eric Dumazet,
Neal Cardwell, and Yuchung Cheng.
6) Allow IP_TOS and IP_TTL to be specified in sendmsg() ancillary
control message data, much like other socket option attributes.
From Francesco Fusco. (A usage sketch follows this list.)
7) Allow applications to specify a cap on the rate computed
automatically by the kernel for pacing flows, via a new
SO_MAX_PACING_RATE socket option. From Eric Dumazet. (A usage
sketch follows this list.)
8) Make the initial autotuned send buffer sizing in TCP more closely
reflect actual needs, from Eric Dumazet.
9) Currently early socket demux only happens for TCP sockets, but we
can do it for connected UDP sockets too. Implementation from Shawn
Bohrer.
10) Refactor inet socket demux with the goal of improving hash demux
performance for listening sockets. The main goals are being able
to use RCU lookups even on request sockets and eliminating the
listening-lock contention. From Eric Dumazet.
11) The bonding layer has many demuxes in its fast path, and an RCU
conversion was started back in 3.11; several changes here extend
the RCU usage to even more locations. From Ding Tianhong and Wang
Yufen, based upon suggestions by Nikolay Aleksandrov and Veaceslav
Falico.
12) Allow segmentation offloads to be stacked, which in particular
allows segmentation offloading over tunnels. From Eric Dumazet.
13) Significantly improve the handling of the secret keys we feed into
the various hash functions in the inet hashtables, TCP fast open,
and syncookies. From Hannes Frederic Sowa. The key fundamental
operation is "net_get_random_once()", which uses static keys.
Hannes even extended this to ipv4/ipv6 fragmentation handling and
our generic flow dissector. (A kernel-style sketch of the pattern
follows this list.)
14) The generic driver layer now takes care of setting the driver data
to NULL on device removal, so drivers no longer need to set it to
NULL explicitly. Many drivers have been cleaned up in this way,
from Jingoo Han.
15) Add a BPF based packet scheduler classifier, from Daniel Borkmann.
16) Improve CRC32 interfaces and generic SKB checksum iterators so that
SCTP's checksumming can more cleanly be handled. Also from Daniel
Borkmann.
17) Add a new PMTU discovery mode, IP_PMTUDISC_INTERFACE, which forces
using the interface MTU value. This helps avoid PMTU attacks,
particularly on DNS servers. From Hannes Frederic Sowa. (A
setsockopt() sketch follows this list.)
18) Use generic XPS for transmit queue steering rather than internal
(re-)implementation in virtio-net. From Jason Wang.
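
The following sketches are not part of the pull request; they illustrate
a few of the items above with made-up names and values. For item 6, a
minimal userspace sketch of passing IP_TTL as sendmsg() ancillary data
(assumes a kernel and headers with this feature; IP_TOS can be passed the
same way):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Hypothetical helper: send one UDP datagram with a per-packet
     * TTL of 16.  Error handling omitted for brevity. */
    static ssize_t send_with_ttl(int fd, const struct sockaddr_in *dst,
                                 const void *buf, size_t len)
    {
            int ttl = 16;
            char cbuf[CMSG_SPACE(sizeof(ttl))];
            struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
            struct msghdr msg = {
                    .msg_name       = (void *)dst,
                    .msg_namelen    = sizeof(*dst),
                    .msg_iov        = &iov,
                    .msg_iovlen     = 1,
                    .msg_control    = cbuf,
                    .msg_controllen = sizeof(cbuf),
            };
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

            cmsg->cmsg_level = IPPROTO_IP;
            cmsg->cmsg_type  = IP_TTL;
            cmsg->cmsg_len   = CMSG_LEN(sizeof(ttl));
            memcpy(CMSG_DATA(cmsg), &ttl, sizeof(ttl));

            return sendmsg(fd, &msg, 0);
    }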
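
For item 7, SO_MAX_PACING_RATE takes the cap in bytes per second; a
hedged sketch (the constant requires headers from a kernel that has it,
and the helper name is illustrative):

    #include <sys/socket.h>

    /* Cap the kernel-computed pacing rate of this flow at ~1 MB/s. */
    static int cap_pacing_rate(int fd)
    {
            unsigned int rate = 1000000;        /* bytes per second */

            return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                              &rate, sizeof(rate));
    }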
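
For item 13, a kernel-style sketch of the net_get_random_once() pattern;
the secret and hash helper here are invented for illustration (the real
users live in the inet hashtables, TCP fast open and syncookie code):

    #include <linux/cache.h>
    #include <linux/jhash.h>
    #include <linux/net.h>

    static u32 example_hash_secret __read_mostly;

    static u32 example_hash(u32 saddr, u32 daddr, u16 dport)
    {
            /* Seed the secret exactly once, the first time it is needed;
             * static keys make the already-initialized path nearly free. */
            net_get_random_once(&example_hash_secret,
                                sizeof(example_hash_secret));

            return jhash_3words(saddr, daddr, dport, example_hash_secret);
    }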
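
For item 17, the new mode is selected through the existing
IP_MTU_DISCOVER socket option; a hedged sketch (the helper is
hypothetical, and the constant needs headers from a kernel that has it):

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Ignore learned path-MTU values and always use the interface MTU. */
    static int use_interface_mtu(int fd)
    {
            int mode = IP_PMTUDISC_INTERFACE;

            return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
                              &mode, sizeof(mode));
    }
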
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1622 commits)
random32: add test cases for taus113 implementation
random32: upgrade taus88 generator to taus113 from errata paper
random32: move rnd_state to linux/random.h
random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized
random32: add periodic reseeding
random32: fix off-by-one in seeding requirement
PHY: Add RTL8201CP phy_driver to realtek
xtsonic: add missing platform_set_drvdata() in xtsonic_probe()
macmace: add missing platform_set_drvdata() in mace_probe()
ethernet/arc/arc_emac: add missing platform_set_drvdata() in arc_emac_probe()
ipv6: protect for_each_sk_fl_rcu in mem_check with rcu_read_lock_bh
vlan: Implement vlan_dev_get_egress_qos_mask as an inline.
ixgbe: add warning when max_vfs is out of range.
igb: Update link modes display in ethtool
netfilter: push reasm skb through instead of original frag skbs
ip6_output: fragment outgoing reassembled skb properly
MAINTAINERS: mv643xx_eth: take over maintainership from Lennart
net_sched: tbf: support of 64bit rates
ixgbe: deleting dfwd stations out of order can cause null ptr deref
ixgbe: fix build err, num_rx_queues is only available with CONFIG_RPS
...
Diffstat (limited to 'drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c')
-rw-r--r--  drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c  434
1 file changed, 222 insertions(+), 212 deletions(-)
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
index d8f4897e9e82..05c1eef8df13 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
@@ -548,36 +548,75 @@ static struct qlcnic_hardware_ops qlcnic_hw_ops = { | |||
548 | .io_resume = qlcnic_82xx_io_resume, | 548 | .io_resume = qlcnic_82xx_io_resume, |
549 | }; | 549 | }; |
550 | 550 | ||
551 | static void qlcnic_get_multiq_capability(struct qlcnic_adapter *adapter) | 551 | static int qlcnic_check_multi_tx_capability(struct qlcnic_adapter *adapter) |
552 | { | 552 | { |
553 | struct qlcnic_hardware_context *ahw = adapter->ahw; | 553 | struct qlcnic_hardware_context *ahw = adapter->ahw; |
554 | int num_tx_q; | ||
555 | 554 | ||
556 | if (ahw->msix_supported && | 555 | if (qlcnic_82xx_check(adapter) && |
557 | (ahw->extra_capability[0] & QLCNIC_FW_CAPABILITY_2_MULTI_TX)) { | 556 | (ahw->extra_capability[0] & QLCNIC_FW_CAPABILITY_2_MULTI_TX)) { |
558 | num_tx_q = min_t(int, QLCNIC_DEF_NUM_TX_RINGS, | 557 | test_and_set_bit(__QLCNIC_MULTI_TX_UNIQUE, &adapter->state); |
559 | num_online_cpus()); | 558 | return 0; |
560 | if (num_tx_q > 1) { | ||
561 | test_and_set_bit(__QLCNIC_MULTI_TX_UNIQUE, | ||
562 | &adapter->state); | ||
563 | adapter->max_drv_tx_rings = num_tx_q; | ||
564 | } | ||
565 | } else { | 559 | } else { |
566 | adapter->max_drv_tx_rings = 1; | 560 | return 1; |
567 | } | 561 | } |
568 | } | 562 | } |
569 | 563 | ||
564 | static int qlcnic_max_rings(struct qlcnic_adapter *adapter, u8 ring_cnt, | ||
565 | int queue_type) | ||
566 | { | ||
567 | int num_rings, max_rings = QLCNIC_MAX_SDS_RINGS; | ||
568 | |||
569 | if (queue_type == QLCNIC_RX_QUEUE) | ||
570 | max_rings = adapter->max_sds_rings; | ||
571 | else if (queue_type == QLCNIC_TX_QUEUE) | ||
572 | max_rings = adapter->max_tx_rings; | ||
573 | |||
574 | num_rings = rounddown_pow_of_two(min_t(int, num_online_cpus(), | ||
575 | max_rings)); | ||
576 | |||
577 | if (ring_cnt > num_rings) | ||
578 | return num_rings; | ||
579 | else | ||
580 | return ring_cnt; | ||
581 | } | ||
582 | |||
583 | void qlcnic_set_tx_ring_count(struct qlcnic_adapter *adapter, u8 tx_cnt) | ||
584 | { | ||
585 | /* 83xx adapter does not have max_tx_rings intialized in probe */ | ||
586 | if (adapter->max_tx_rings) | ||
587 | adapter->drv_tx_rings = qlcnic_max_rings(adapter, tx_cnt, | ||
588 | QLCNIC_TX_QUEUE); | ||
589 | else | ||
590 | adapter->drv_tx_rings = tx_cnt; | ||
591 | |||
592 | dev_info(&adapter->pdev->dev, "Set %d Tx rings\n", | ||
593 | adapter->drv_tx_rings); | ||
594 | } | ||
595 | |||
596 | void qlcnic_set_sds_ring_count(struct qlcnic_adapter *adapter, u8 rx_cnt) | ||
597 | { | ||
598 | /* 83xx adapter does not have max_sds_rings intialized in probe */ | ||
599 | if (adapter->max_sds_rings) | ||
600 | adapter->drv_sds_rings = qlcnic_max_rings(adapter, rx_cnt, | ||
601 | QLCNIC_RX_QUEUE); | ||
602 | else | ||
603 | adapter->drv_sds_rings = rx_cnt; | ||
604 | |||
605 | dev_info(&adapter->pdev->dev, "Set %d SDS rings\n", | ||
606 | adapter->drv_sds_rings); | ||
607 | } | ||
608 | |||
570 | int qlcnic_enable_msix(struct qlcnic_adapter *adapter, u32 num_msix) | 609 | int qlcnic_enable_msix(struct qlcnic_adapter *adapter, u32 num_msix) |
571 | { | 610 | { |
572 | struct pci_dev *pdev = adapter->pdev; | 611 | struct pci_dev *pdev = adapter->pdev; |
573 | int max_tx_rings, max_sds_rings, tx_vector; | 612 | int drv_tx_rings, drv_sds_rings, tx_vector; |
574 | int err = -1, i; | 613 | int err = -1, i; |
575 | 614 | ||
576 | if (adapter->flags & QLCNIC_TX_INTR_SHARED) { | 615 | if (adapter->flags & QLCNIC_TX_INTR_SHARED) { |
577 | max_tx_rings = 0; | 616 | drv_tx_rings = 0; |
578 | tx_vector = 0; | 617 | tx_vector = 0; |
579 | } else { | 618 | } else { |
580 | max_tx_rings = adapter->max_drv_tx_rings; | 619 | drv_tx_rings = adapter->drv_tx_rings; |
581 | tx_vector = 1; | 620 | tx_vector = 1; |
582 | } | 621 | } |
583 | 622 | ||
@@ -589,7 +628,7 @@ int qlcnic_enable_msix(struct qlcnic_adapter *adapter, u32 num_msix) | |||
589 | return -ENOMEM; | 628 | return -ENOMEM; |
590 | } | 629 | } |
591 | 630 | ||
592 | adapter->max_sds_rings = 1; | 631 | adapter->drv_sds_rings = QLCNIC_SINGLE_RING; |
593 | adapter->flags &= ~(QLCNIC_MSI_ENABLED | QLCNIC_MSIX_ENABLED); | 632 | adapter->flags &= ~(QLCNIC_MSI_ENABLED | QLCNIC_MSIX_ENABLED); |
594 | 633 | ||
595 | if (adapter->ahw->msix_supported) { | 634 | if (adapter->ahw->msix_supported) { |
@@ -602,18 +641,18 @@ int qlcnic_enable_msix(struct qlcnic_adapter *adapter, u32 num_msix) | |||
602 | if (qlcnic_83xx_check(adapter)) { | 641 | if (qlcnic_83xx_check(adapter)) { |
603 | adapter->ahw->num_msix = num_msix; | 642 | adapter->ahw->num_msix = num_msix; |
604 | /* subtract mail box and tx ring vectors */ | 643 | /* subtract mail box and tx ring vectors */ |
605 | adapter->max_sds_rings = num_msix - | 644 | adapter->drv_sds_rings = num_msix - |
606 | max_tx_rings - 1; | 645 | drv_tx_rings - 1; |
607 | } else { | 646 | } else { |
608 | adapter->ahw->num_msix = num_msix; | 647 | adapter->ahw->num_msix = num_msix; |
609 | if (qlcnic_check_multi_tx(adapter) && | 648 | if (qlcnic_check_multi_tx(adapter) && |
610 | !adapter->ahw->diag_test && | 649 | !adapter->ahw->diag_test && |
611 | (adapter->max_drv_tx_rings > 1)) | 650 | (adapter->drv_tx_rings > 1)) |
612 | max_sds_rings = num_msix - max_tx_rings; | 651 | drv_sds_rings = num_msix - drv_tx_rings; |
613 | else | 652 | else |
614 | max_sds_rings = num_msix; | 653 | drv_sds_rings = num_msix; |
615 | 654 | ||
616 | adapter->max_sds_rings = max_sds_rings; | 655 | adapter->drv_sds_rings = drv_sds_rings; |
617 | } | 656 | } |
618 | dev_info(&pdev->dev, "using msi-x interrupts\n"); | 657 | dev_info(&pdev->dev, "using msi-x interrupts\n"); |
619 | return err; | 658 | return err; |
@@ -624,13 +663,13 @@ int qlcnic_enable_msix(struct qlcnic_adapter *adapter, u32 num_msix) | |||
624 | if (qlcnic_83xx_check(adapter)) { | 663 | if (qlcnic_83xx_check(adapter)) { |
625 | if (err < (QLC_83XX_MINIMUM_VECTOR - tx_vector)) | 664 | if (err < (QLC_83XX_MINIMUM_VECTOR - tx_vector)) |
626 | return err; | 665 | return err; |
627 | err -= (max_tx_rings + 1); | 666 | err -= drv_tx_rings + 1; |
628 | num_msix = rounddown_pow_of_two(err); | 667 | num_msix = rounddown_pow_of_two(err); |
629 | num_msix += (max_tx_rings + 1); | 668 | num_msix += drv_tx_rings + 1; |
630 | } else { | 669 | } else { |
631 | num_msix = rounddown_pow_of_two(err); | 670 | num_msix = rounddown_pow_of_two(err); |
632 | if (qlcnic_check_multi_tx(adapter)) | 671 | if (qlcnic_check_multi_tx(adapter)) |
633 | num_msix += max_tx_rings; | 672 | num_msix += drv_tx_rings; |
634 | } | 673 | } |
635 | 674 | ||
636 | if (num_msix) { | 675 | if (num_msix) { |
@@ -683,25 +722,14 @@ static int qlcnic_enable_msi_legacy(struct qlcnic_adapter *adapter) | |||
683 | return err; | 722 | return err; |
684 | } | 723 | } |
685 | 724 | ||
686 | int qlcnic_82xx_setup_intr(struct qlcnic_adapter *adapter, u8 num_intr, int txq) | 725 | int qlcnic_82xx_setup_intr(struct qlcnic_adapter *adapter) |
687 | { | 726 | { |
688 | struct qlcnic_hardware_context *ahw = adapter->ahw; | ||
689 | int num_msix, err = 0; | 727 | int num_msix, err = 0; |
690 | 728 | ||
691 | if (!num_intr) | 729 | num_msix = adapter->drv_sds_rings; |
692 | num_intr = QLCNIC_DEF_NUM_STS_DESC_RINGS; | ||
693 | 730 | ||
694 | if (ahw->msix_supported) { | 731 | if (qlcnic_check_multi_tx(adapter)) |
695 | num_msix = rounddown_pow_of_two(min_t(int, num_online_cpus(), | 732 | num_msix += adapter->drv_tx_rings; |
696 | num_intr)); | ||
697 | if (qlcnic_check_multi_tx(adapter)) { | ||
698 | if (txq) | ||
699 | adapter->max_drv_tx_rings = txq; | ||
700 | num_msix += adapter->max_drv_tx_rings; | ||
701 | } | ||
702 | } else { | ||
703 | num_msix = 1; | ||
704 | } | ||
705 | 733 | ||
706 | err = qlcnic_enable_msix(adapter, num_msix); | 734 | err = qlcnic_enable_msix(adapter, num_msix); |
707 | if (err == -ENOMEM) | 735 | if (err == -ENOMEM) |
@@ -819,7 +847,7 @@ static bool qlcnic_port_eswitch_cfg_capability(struct qlcnic_adapter *adapter) | |||
819 | int qlcnic_init_pci_info(struct qlcnic_adapter *adapter) | 847 | int qlcnic_init_pci_info(struct qlcnic_adapter *adapter) |
820 | { | 848 | { |
821 | struct qlcnic_pci_info *pci_info; | 849 | struct qlcnic_pci_info *pci_info; |
822 | int i, ret = 0, j = 0; | 850 | int i, id = 0, ret = 0, j = 0; |
823 | u16 act_pci_func; | 851 | u16 act_pci_func; |
824 | u8 pfn; | 852 | u8 pfn; |
825 | 853 | ||
@@ -860,7 +888,8 @@ int qlcnic_init_pci_info(struct qlcnic_adapter *adapter) | |||
860 | continue; | 888 | continue; |
861 | 889 | ||
862 | if (qlcnic_port_eswitch_cfg_capability(adapter)) { | 890 | if (qlcnic_port_eswitch_cfg_capability(adapter)) { |
863 | if (!qlcnic_83xx_enable_port_eswitch(adapter, pfn)) | 891 | if (!qlcnic_83xx_set_port_eswitch_status(adapter, pfn, |
892 | &id)) | ||
864 | adapter->npars[j].eswitch_status = true; | 893 | adapter->npars[j].eswitch_status = true; |
865 | else | 894 | else |
866 | continue; | 895 | continue; |
@@ -875,15 +904,16 @@ int qlcnic_init_pci_info(struct qlcnic_adapter *adapter) | |||
875 | adapter->npars[j].min_bw = pci_info[i].tx_min_bw; | 904 | adapter->npars[j].min_bw = pci_info[i].tx_min_bw; |
876 | adapter->npars[j].max_bw = pci_info[i].tx_max_bw; | 905 | adapter->npars[j].max_bw = pci_info[i].tx_max_bw; |
877 | 906 | ||
907 | memcpy(&adapter->npars[j].mac, &pci_info[i].mac, ETH_ALEN); | ||
878 | j++; | 908 | j++; |
879 | } | 909 | } |
880 | 910 | ||
881 | if (qlcnic_82xx_check(adapter)) { | 911 | /* Update eSwitch status for adapters without per port eSwitch |
912 | * configuration capability | ||
913 | */ | ||
914 | if (!qlcnic_port_eswitch_cfg_capability(adapter)) { | ||
882 | for (i = 0; i < QLCNIC_NIU_MAX_XG_PORTS; i++) | 915 | for (i = 0; i < QLCNIC_NIU_MAX_XG_PORTS; i++) |
883 | adapter->eswitch[i].flags |= QLCNIC_SWITCH_ENABLE; | 916 | adapter->eswitch[i].flags |= QLCNIC_SWITCH_ENABLE; |
884 | } else if (!qlcnic_port_eswitch_cfg_capability(adapter)) { | ||
885 | for (i = 0; i < QLCNIC_NIU_MAX_XG_PORTS; i++) | ||
886 | qlcnic_enable_eswitch(adapter, i, 1); | ||
887 | } | 917 | } |
888 | 918 | ||
889 | kfree(pci_info); | 919 | kfree(pci_info); |
@@ -1138,14 +1168,18 @@ qlcnic_initialize_nic(struct qlcnic_adapter *adapter) | |||
1138 | adapter->ahw->max_mac_filters = nic_info.max_mac_filters; | 1168 | adapter->ahw->max_mac_filters = nic_info.max_mac_filters; |
1139 | adapter->ahw->max_mtu = nic_info.max_mtu; | 1169 | adapter->ahw->max_mtu = nic_info.max_mtu; |
1140 | 1170 | ||
1141 | /* Disable NPAR for 83XX */ | 1171 | if (adapter->ahw->capabilities & BIT_6) { |
1142 | if (qlcnic_83xx_check(adapter)) | ||
1143 | return err; | ||
1144 | |||
1145 | if (adapter->ahw->capabilities & BIT_6) | ||
1146 | adapter->flags |= QLCNIC_ESWITCH_ENABLED; | 1172 | adapter->flags |= QLCNIC_ESWITCH_ENABLED; |
1147 | else | 1173 | adapter->ahw->nic_mode = QLCNIC_VNIC_MODE; |
1174 | adapter->max_tx_rings = QLCNIC_MAX_HW_VNIC_TX_RINGS; | ||
1175 | adapter->max_sds_rings = QLCNIC_MAX_VNIC_SDS_RINGS; | ||
1176 | |||
1177 | dev_info(&adapter->pdev->dev, "vNIC mode enabled.\n"); | ||
1178 | } else { | ||
1179 | adapter->ahw->nic_mode = QLCNIC_DEFAULT_MODE; | ||
1180 | adapter->max_tx_rings = QLCNIC_MAX_HW_TX_RINGS; | ||
1148 | adapter->flags &= ~QLCNIC_ESWITCH_ENABLED; | 1181 | adapter->flags &= ~QLCNIC_ESWITCH_ENABLED; |
1182 | } | ||
1149 | 1183 | ||
1150 | return err; | 1184 | return err; |
1151 | } | 1185 | } |
@@ -1293,6 +1327,8 @@ qlcnic_check_eswitch_mode(struct qlcnic_adapter *adapter) | |||
1293 | "HAL Version: %d, Privileged function\n", | 1327 | "HAL Version: %d, Privileged function\n", |
1294 | adapter->ahw->fw_hal_version); | 1328 | adapter->ahw->fw_hal_version); |
1295 | } | 1329 | } |
1330 | } else { | ||
1331 | adapter->ahw->nic_mode = QLCNIC_DEFAULT_MODE; | ||
1296 | } | 1332 | } |
1297 | 1333 | ||
1298 | adapter->flags |= QLCNIC_ADAPTER_INITIALIZED; | 1334 | adapter->flags |= QLCNIC_ADAPTER_INITIALIZED; |
@@ -1552,7 +1588,7 @@ qlcnic_request_irq(struct qlcnic_adapter *adapter) | |||
1552 | if (qlcnic_82xx_check(adapter) || | 1588 | if (qlcnic_82xx_check(adapter) || |
1553 | (qlcnic_83xx_check(adapter) && | 1589 | (qlcnic_83xx_check(adapter) && |
1554 | (adapter->flags & QLCNIC_MSIX_ENABLED))) { | 1590 | (adapter->flags & QLCNIC_MSIX_ENABLED))) { |
1555 | num_sds_rings = adapter->max_sds_rings; | 1591 | num_sds_rings = adapter->drv_sds_rings; |
1556 | for (ring = 0; ring < num_sds_rings; ring++) { | 1592 | for (ring = 0; ring < num_sds_rings; ring++) { |
1557 | sds_ring = &recv_ctx->sds_rings[ring]; | 1593 | sds_ring = &recv_ctx->sds_rings[ring]; |
1558 | if (qlcnic_82xx_check(adapter) && | 1594 | if (qlcnic_82xx_check(adapter) && |
@@ -1586,7 +1622,7 @@ qlcnic_request_irq(struct qlcnic_adapter *adapter) | |||
1586 | (adapter->flags & QLCNIC_MSIX_ENABLED) && | 1622 | (adapter->flags & QLCNIC_MSIX_ENABLED) && |
1587 | !(adapter->flags & QLCNIC_TX_INTR_SHARED))) { | 1623 | !(adapter->flags & QLCNIC_TX_INTR_SHARED))) { |
1588 | handler = qlcnic_msix_tx_intr; | 1624 | handler = qlcnic_msix_tx_intr; |
1589 | for (ring = 0; ring < adapter->max_drv_tx_rings; | 1625 | for (ring = 0; ring < adapter->drv_tx_rings; |
1590 | ring++) { | 1626 | ring++) { |
1591 | tx_ring = &adapter->tx_ring[ring]; | 1627 | tx_ring = &adapter->tx_ring[ring]; |
1592 | snprintf(tx_ring->name, sizeof(tx_ring->name), | 1628 | snprintf(tx_ring->name, sizeof(tx_ring->name), |
@@ -1614,7 +1650,7 @@ qlcnic_free_irq(struct qlcnic_adapter *adapter) | |||
1614 | if (qlcnic_82xx_check(adapter) || | 1650 | if (qlcnic_82xx_check(adapter) || |
1615 | (qlcnic_83xx_check(adapter) && | 1651 | (qlcnic_83xx_check(adapter) && |
1616 | (adapter->flags & QLCNIC_MSIX_ENABLED))) { | 1652 | (adapter->flags & QLCNIC_MSIX_ENABLED))) { |
1617 | for (ring = 0; ring < adapter->max_sds_rings; ring++) { | 1653 | for (ring = 0; ring < adapter->drv_sds_rings; ring++) { |
1618 | sds_ring = &recv_ctx->sds_rings[ring]; | 1654 | sds_ring = &recv_ctx->sds_rings[ring]; |
1619 | free_irq(sds_ring->irq, sds_ring); | 1655 | free_irq(sds_ring->irq, sds_ring); |
1620 | } | 1656 | } |
@@ -1623,7 +1659,7 @@ qlcnic_free_irq(struct qlcnic_adapter *adapter) | |||
1623 | !(adapter->flags & QLCNIC_TX_INTR_SHARED)) || | 1659 | !(adapter->flags & QLCNIC_TX_INTR_SHARED)) || |
1624 | (qlcnic_82xx_check(adapter) && | 1660 | (qlcnic_82xx_check(adapter) && |
1625 | qlcnic_check_multi_tx(adapter))) { | 1661 | qlcnic_check_multi_tx(adapter))) { |
1626 | for (ring = 0; ring < adapter->max_drv_tx_rings; | 1662 | for (ring = 0; ring < adapter->drv_tx_rings; |
1627 | ring++) { | 1663 | ring++) { |
1628 | tx_ring = &adapter->tx_ring[ring]; | 1664 | tx_ring = &adapter->tx_ring[ring]; |
1629 | if (tx_ring->irq) | 1665 | if (tx_ring->irq) |
@@ -1677,7 +1713,7 @@ int __qlcnic_up(struct qlcnic_adapter *adapter, struct net_device *netdev) | |||
1677 | 1713 | ||
1678 | adapter->ahw->linkup = 0; | 1714 | adapter->ahw->linkup = 0; |
1679 | 1715 | ||
1680 | if (adapter->max_sds_rings > 1) | 1716 | if (adapter->drv_sds_rings > 1) |
1681 | qlcnic_config_rss(adapter, 1); | 1717 | qlcnic_config_rss(adapter, 1); |
1682 | 1718 | ||
1683 | qlcnic_config_intr_coalesce(adapter); | 1719 | qlcnic_config_intr_coalesce(adapter); |
@@ -1719,6 +1755,7 @@ void __qlcnic_down(struct qlcnic_adapter *adapter, struct net_device *netdev) | |||
1719 | if (qlcnic_sriov_vf_check(adapter)) | 1755 | if (qlcnic_sriov_vf_check(adapter)) |
1720 | qlcnic_sriov_cleanup_async_list(&adapter->ahw->sriov->bc); | 1756 | qlcnic_sriov_cleanup_async_list(&adapter->ahw->sriov->bc); |
1721 | smp_mb(); | 1757 | smp_mb(); |
1758 | spin_lock(&adapter->tx_clean_lock); | ||
1722 | netif_carrier_off(netdev); | 1759 | netif_carrier_off(netdev); |
1723 | adapter->ahw->linkup = 0; | 1760 | adapter->ahw->linkup = 0; |
1724 | netif_tx_disable(netdev); | 1761 | netif_tx_disable(netdev); |
@@ -1737,8 +1774,9 @@ void __qlcnic_down(struct qlcnic_adapter *adapter, struct net_device *netdev) | |||
1737 | 1774 | ||
1738 | qlcnic_reset_rx_buffers_list(adapter); | 1775 | qlcnic_reset_rx_buffers_list(adapter); |
1739 | 1776 | ||
1740 | for (ring = 0; ring < adapter->max_drv_tx_rings; ring++) | 1777 | for (ring = 0; ring < adapter->drv_tx_rings; ring++) |
1741 | qlcnic_release_tx_buffers(adapter, &adapter->tx_ring[ring]); | 1778 | qlcnic_release_tx_buffers(adapter, &adapter->tx_ring[ring]); |
1779 | spin_unlock(&adapter->tx_clean_lock); | ||
1742 | } | 1780 | } |
1743 | 1781 | ||
1744 | /* Usage: During suspend and firmware recovery module */ | 1782 | /* Usage: During suspend and firmware recovery module */ |
@@ -1814,16 +1852,16 @@ void qlcnic_detach(struct qlcnic_adapter *adapter) | |||
1814 | adapter->is_up = 0; | 1852 | adapter->is_up = 0; |
1815 | } | 1853 | } |
1816 | 1854 | ||
1817 | void qlcnic_diag_free_res(struct net_device *netdev, int max_sds_rings) | 1855 | void qlcnic_diag_free_res(struct net_device *netdev, int drv_sds_rings) |
1818 | { | 1856 | { |
1819 | struct qlcnic_adapter *adapter = netdev_priv(netdev); | 1857 | struct qlcnic_adapter *adapter = netdev_priv(netdev); |
1820 | struct qlcnic_host_sds_ring *sds_ring; | 1858 | struct qlcnic_host_sds_ring *sds_ring; |
1821 | int max_tx_rings = adapter->max_drv_tx_rings; | 1859 | int drv_tx_rings = adapter->drv_tx_rings; |
1822 | int ring; | 1860 | int ring; |
1823 | 1861 | ||
1824 | clear_bit(__QLCNIC_DEV_UP, &adapter->state); | 1862 | clear_bit(__QLCNIC_DEV_UP, &adapter->state); |
1825 | if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) { | 1863 | if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) { |
1826 | for (ring = 0; ring < adapter->max_sds_rings; ring++) { | 1864 | for (ring = 0; ring < adapter->drv_sds_rings; ring++) { |
1827 | sds_ring = &adapter->recv_ctx->sds_rings[ring]; | 1865 | sds_ring = &adapter->recv_ctx->sds_rings[ring]; |
1828 | qlcnic_disable_int(sds_ring); | 1866 | qlcnic_disable_int(sds_ring); |
1829 | } | 1867 | } |
@@ -1834,8 +1872,8 @@ void qlcnic_diag_free_res(struct net_device *netdev, int max_sds_rings) | |||
1834 | qlcnic_detach(adapter); | 1872 | qlcnic_detach(adapter); |
1835 | 1873 | ||
1836 | adapter->ahw->diag_test = 0; | 1874 | adapter->ahw->diag_test = 0; |
1837 | adapter->max_sds_rings = max_sds_rings; | 1875 | adapter->drv_sds_rings = drv_sds_rings; |
1838 | adapter->max_drv_tx_rings = max_tx_rings; | 1876 | adapter->drv_tx_rings = drv_tx_rings; |
1839 | 1877 | ||
1840 | if (qlcnic_attach(adapter)) | 1878 | if (qlcnic_attach(adapter)) |
1841 | goto out; | 1879 | goto out; |
@@ -1901,10 +1939,10 @@ int qlcnic_diag_alloc_res(struct net_device *netdev, int test) | |||
1901 | 1939 | ||
1902 | qlcnic_detach(adapter); | 1940 | qlcnic_detach(adapter); |
1903 | 1941 | ||
1904 | adapter->max_sds_rings = 1; | 1942 | adapter->drv_sds_rings = QLCNIC_SINGLE_RING; |
1943 | adapter->drv_tx_rings = QLCNIC_SINGLE_RING; | ||
1905 | adapter->ahw->diag_test = test; | 1944 | adapter->ahw->diag_test = test; |
1906 | adapter->ahw->linkup = 0; | 1945 | adapter->ahw->linkup = 0; |
1907 | adapter->max_drv_tx_rings = 1; | ||
1908 | 1946 | ||
1909 | ret = qlcnic_attach(adapter); | 1947 | ret = qlcnic_attach(adapter); |
1910 | if (ret) { | 1948 | if (ret) { |
@@ -1925,7 +1963,7 @@ int qlcnic_diag_alloc_res(struct net_device *netdev, int test) | |||
1925 | } | 1963 | } |
1926 | 1964 | ||
1927 | if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) { | 1965 | if (adapter->ahw->diag_test == QLCNIC_INTERRUPT_TEST) { |
1928 | for (ring = 0; ring < adapter->max_sds_rings; ring++) { | 1966 | for (ring = 0; ring < adapter->drv_sds_rings; ring++) { |
1929 | sds_ring = &adapter->recv_ctx->sds_rings[ring]; | 1967 | sds_ring = &adapter->recv_ctx->sds_rings[ring]; |
1930 | qlcnic_enable_int(sds_ring); | 1968 | qlcnic_enable_int(sds_ring); |
1931 | } | 1969 | } |
@@ -2072,7 +2110,7 @@ qlcnic_setup_netdev(struct qlcnic_adapter *adapter, struct net_device *netdev, | |||
2072 | return err; | 2110 | return err; |
2073 | } | 2111 | } |
2074 | 2112 | ||
2075 | qlcnic_dcb_init_dcbnl_ops(adapter); | 2113 | qlcnic_dcb_init_dcbnl_ops(adapter->dcb); |
2076 | 2114 | ||
2077 | return 0; | 2115 | return 0; |
2078 | } | 2116 | } |
@@ -2098,7 +2136,7 @@ void qlcnic_free_tx_rings(struct qlcnic_adapter *adapter) | |||
2098 | int ring; | 2136 | int ring; |
2099 | struct qlcnic_host_tx_ring *tx_ring; | 2137 | struct qlcnic_host_tx_ring *tx_ring; |
2100 | 2138 | ||
2101 | for (ring = 0; ring < adapter->max_drv_tx_rings; ring++) { | 2139 | for (ring = 0; ring < adapter->drv_tx_rings; ring++) { |
2102 | tx_ring = &adapter->tx_ring[ring]; | 2140 | tx_ring = &adapter->tx_ring[ring]; |
2103 | if (tx_ring && tx_ring->cmd_buf_arr != NULL) { | 2141 | if (tx_ring && tx_ring->cmd_buf_arr != NULL) { |
2104 | vfree(tx_ring->cmd_buf_arr); | 2142 | vfree(tx_ring->cmd_buf_arr); |
@@ -2116,14 +2154,14 @@ int qlcnic_alloc_tx_rings(struct qlcnic_adapter *adapter, | |||
2116 | struct qlcnic_host_tx_ring *tx_ring; | 2154 | struct qlcnic_host_tx_ring *tx_ring; |
2117 | struct qlcnic_cmd_buffer *cmd_buf_arr; | 2155 | struct qlcnic_cmd_buffer *cmd_buf_arr; |
2118 | 2156 | ||
2119 | tx_ring = kcalloc(adapter->max_drv_tx_rings, | 2157 | tx_ring = kcalloc(adapter->drv_tx_rings, |
2120 | sizeof(struct qlcnic_host_tx_ring), GFP_KERNEL); | 2158 | sizeof(struct qlcnic_host_tx_ring), GFP_KERNEL); |
2121 | if (tx_ring == NULL) | 2159 | if (tx_ring == NULL) |
2122 | return -ENOMEM; | 2160 | return -ENOMEM; |
2123 | 2161 | ||
2124 | adapter->tx_ring = tx_ring; | 2162 | adapter->tx_ring = tx_ring; |
2125 | 2163 | ||
2126 | for (ring = 0; ring < adapter->max_drv_tx_rings; ring++) { | 2164 | for (ring = 0; ring < adapter->drv_tx_rings; ring++) { |
2127 | tx_ring = &adapter->tx_ring[ring]; | 2165 | tx_ring = &adapter->tx_ring[ring]; |
2128 | tx_ring->num_desc = adapter->num_txd; | 2166 | tx_ring->num_desc = adapter->num_txd; |
2129 | tx_ring->txq = netdev_get_tx_queue(netdev, ring); | 2167 | tx_ring->txq = netdev_get_tx_queue(netdev, ring); |
@@ -2138,11 +2176,11 @@ int qlcnic_alloc_tx_rings(struct qlcnic_adapter *adapter, | |||
2138 | 2176 | ||
2139 | if (qlcnic_83xx_check(adapter) || | 2177 | if (qlcnic_83xx_check(adapter) || |
2140 | (qlcnic_82xx_check(adapter) && qlcnic_check_multi_tx(adapter))) { | 2178 | (qlcnic_82xx_check(adapter) && qlcnic_check_multi_tx(adapter))) { |
2141 | for (ring = 0; ring < adapter->max_drv_tx_rings; ring++) { | 2179 | for (ring = 0; ring < adapter->drv_tx_rings; ring++) { |
2142 | tx_ring = &adapter->tx_ring[ring]; | 2180 | tx_ring = &adapter->tx_ring[ring]; |
2143 | tx_ring->adapter = adapter; | 2181 | tx_ring->adapter = adapter; |
2144 | if (adapter->flags & QLCNIC_MSIX_ENABLED) { | 2182 | if (adapter->flags & QLCNIC_MSIX_ENABLED) { |
2145 | index = adapter->max_sds_rings + ring; | 2183 | index = adapter->drv_sds_rings + ring; |
2146 | vector = adapter->msix_entries[index].vector; | 2184 | vector = adapter->msix_entries[index].vector; |
2147 | tx_ring->irq = vector; | 2185 | tx_ring->irq = vector; |
2148 | } | 2186 | } |
@@ -2166,17 +2204,6 @@ void qlcnic_set_drv_version(struct qlcnic_adapter *adapter) | |||
2166 | qlcnic_fw_cmd_set_drv_version(adapter, fw_cmd); | 2204 | qlcnic_fw_cmd_set_drv_version(adapter, fw_cmd); |
2167 | } | 2205 | } |
2168 | 2206 | ||
2169 | static int qlcnic_register_dcb(struct qlcnic_adapter *adapter) | ||
2170 | { | ||
2171 | return __qlcnic_register_dcb(adapter); | ||
2172 | } | ||
2173 | |||
2174 | void qlcnic_clear_dcb_ops(struct qlcnic_adapter *adapter) | ||
2175 | { | ||
2176 | kfree(adapter->dcb); | ||
2177 | adapter->dcb = NULL; | ||
2178 | } | ||
2179 | |||
2180 | static int | 2207 | static int |
2181 | qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) | 2208 | qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) |
2182 | { | 2209 | { |
@@ -2185,6 +2212,7 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) | |||
2185 | struct qlcnic_hardware_context *ahw; | 2212 | struct qlcnic_hardware_context *ahw; |
2186 | int err, pci_using_dac = -1; | 2213 | int err, pci_using_dac = -1; |
2187 | char board_name[QLCNIC_MAX_BOARD_NAME_LEN + 19]; /* MAC + ": " + name */ | 2214 | char board_name[QLCNIC_MAX_BOARD_NAME_LEN + 19]; /* MAC + ": " + name */ |
2215 | struct qlcnic_dcb *dcb; | ||
2188 | 2216 | ||
2189 | if (pdev->is_virtfn) | 2217 | if (pdev->is_virtfn) |
2190 | return -ENODEV; | 2218 | return -ENODEV; |
@@ -2271,6 +2299,7 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) | |||
2271 | rwlock_init(&adapter->ahw->crb_lock); | 2299 | rwlock_init(&adapter->ahw->crb_lock); |
2272 | mutex_init(&adapter->ahw->mem_lock); | 2300 | mutex_init(&adapter->ahw->mem_lock); |
2273 | 2301 | ||
2302 | spin_lock_init(&adapter->tx_clean_lock); | ||
2274 | INIT_LIST_HEAD(&adapter->mac_list); | 2303 | INIT_LIST_HEAD(&adapter->mac_list); |
2275 | 2304 | ||
2276 | qlcnic_register_dcb(adapter); | 2305 | qlcnic_register_dcb(adapter); |
@@ -2285,38 +2314,51 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) | |||
2285 | goto err_out_maintenance_mode; | 2314 | goto err_out_maintenance_mode; |
2286 | } | 2315 | } |
2287 | 2316 | ||
2288 | qlcnic_get_multiq_capability(adapter); | 2317 | /* compute and set default and max tx/sds rings */ |
2289 | 2318 | if (adapter->ahw->msix_supported) { | |
2290 | if ((adapter->ahw->act_pci_func > 2) && | 2319 | if (qlcnic_check_multi_tx_capability(adapter) == 1) |
2291 | qlcnic_check_multi_tx(adapter)) { | 2320 | qlcnic_set_tx_ring_count(adapter, |
2292 | adapter->max_drv_tx_rings = QLCNIC_DEF_NUM_TX_RINGS; | 2321 | QLCNIC_SINGLE_RING); |
2293 | dev_info(&adapter->pdev->dev, | 2322 | else |
2294 | "vNIC mode enabled, Set max TX rings = %d\n", | 2323 | qlcnic_set_tx_ring_count(adapter, |
2295 | adapter->max_drv_tx_rings); | 2324 | QLCNIC_DEF_TX_RINGS); |
2325 | qlcnic_set_sds_ring_count(adapter, | ||
2326 | QLCNIC_DEF_SDS_RINGS); | ||
2327 | } else { | ||
2328 | qlcnic_set_tx_ring_count(adapter, QLCNIC_SINGLE_RING); | ||
2329 | qlcnic_set_sds_ring_count(adapter, QLCNIC_SINGLE_RING); | ||
2296 | } | 2330 | } |
2297 | 2331 | ||
2298 | if (!qlcnic_check_multi_tx(adapter)) { | ||
2299 | clear_bit(__QLCNIC_MULTI_TX_UNIQUE, &adapter->state); | ||
2300 | adapter->max_drv_tx_rings = 1; | ||
2301 | } | ||
2302 | err = qlcnic_setup_idc_param(adapter); | 2332 | err = qlcnic_setup_idc_param(adapter); |
2303 | if (err) | 2333 | if (err) |
2304 | goto err_out_free_hw; | 2334 | goto err_out_free_hw; |
2305 | 2335 | ||
2306 | adapter->flags |= QLCNIC_NEED_FLR; | 2336 | adapter->flags |= QLCNIC_NEED_FLR; |
2307 | 2337 | ||
2308 | if (adapter->dcb && qlcnic_dcb_attach(adapter)) | 2338 | dcb = adapter->dcb; |
2309 | qlcnic_clear_dcb_ops(adapter); | ||
2310 | 2339 | ||
2340 | if (dcb && qlcnic_dcb_attach(dcb)) | ||
2341 | qlcnic_clear_dcb_ops(dcb); | ||
2311 | } else if (qlcnic_83xx_check(adapter)) { | 2342 | } else if (qlcnic_83xx_check(adapter)) { |
2312 | adapter->max_drv_tx_rings = 1; | ||
2313 | qlcnic_83xx_check_vf(adapter, ent); | 2343 | qlcnic_83xx_check_vf(adapter, ent); |
2314 | adapter->portnum = adapter->ahw->pci_func; | 2344 | adapter->portnum = adapter->ahw->pci_func; |
2315 | err = qlcnic_83xx_init(adapter, pci_using_dac); | 2345 | err = qlcnic_83xx_init(adapter, pci_using_dac); |
2316 | if (err) { | 2346 | if (err) { |
2317 | dev_err(&pdev->dev, "%s: failed\n", __func__); | 2347 | switch (err) { |
2318 | goto err_out_free_hw; | 2348 | case -ENOTRECOVERABLE: |
2349 | dev_err(&pdev->dev, "Adapter initialization failed due to a faulty hardware. Please reboot\n"); | ||
2350 | dev_err(&pdev->dev, "If reboot doesn't help, please replace the adapter with new one and return the faulty adapter for repair\n"); | ||
2351 | goto err_out_free_hw; | ||
2352 | case -ENOMEM: | ||
2353 | dev_err(&pdev->dev, "Adapter initialization failed. Please reboot\n"); | ||
2354 | goto err_out_free_hw; | ||
2355 | default: | ||
2356 | dev_err(&pdev->dev, "Adapter initialization failed. A reboot may be required to recover from this failure\n"); | ||
2357 | dev_err(&pdev->dev, "If reboot does not help to recover from this failure, try a flash update of the adapter\n"); | ||
2358 | goto err_out_maintenance_mode; | ||
2359 | } | ||
2319 | } | 2360 | } |
2361 | |||
2320 | if (qlcnic_sriov_vf_check(adapter)) | 2362 | if (qlcnic_sriov_vf_check(adapter)) |
2321 | return 0; | 2363 | return 0; |
2322 | } else { | 2364 | } else { |
@@ -2344,7 +2386,7 @@ qlcnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) | |||
2344 | "Device does not support MSI interrupts\n"); | 2386 | "Device does not support MSI interrupts\n"); |
2345 | 2387 | ||
2346 | if (qlcnic_82xx_check(adapter)) { | 2388 | if (qlcnic_82xx_check(adapter)) { |
2347 | err = qlcnic_setup_intr(adapter, 0, 0); | 2389 | err = qlcnic_setup_intr(adapter); |
2348 | if (err) { | 2390 | if (err) { |
2349 | dev_err(&pdev->dev, "Failed to setup interrupt\n"); | 2391 | dev_err(&pdev->dev, "Failed to setup interrupt\n"); |
2350 | goto err_out_disable_msi; | 2392 | goto err_out_disable_msi; |
@@ -2414,13 +2456,20 @@ err_out_free_res: | |||
2414 | pci_release_regions(pdev); | 2456 | pci_release_regions(pdev); |
2415 | 2457 | ||
2416 | err_out_disable_pdev: | 2458 | err_out_disable_pdev: |
2417 | pci_set_drvdata(pdev, NULL); | ||
2418 | pci_disable_device(pdev); | 2459 | pci_disable_device(pdev); |
2419 | return err; | 2460 | return err; |
2420 | 2461 | ||
2421 | err_out_maintenance_mode: | 2462 | err_out_maintenance_mode: |
2463 | set_bit(__QLCNIC_MAINTENANCE_MODE, &adapter->state); | ||
2422 | netdev->netdev_ops = &qlcnic_netdev_failed_ops; | 2464 | netdev->netdev_ops = &qlcnic_netdev_failed_ops; |
2423 | SET_ETHTOOL_OPS(netdev, &qlcnic_ethtool_failed_ops); | 2465 | SET_ETHTOOL_OPS(netdev, &qlcnic_ethtool_failed_ops); |
2466 | ahw->port_type = QLCNIC_XGBE; | ||
2467 | |||
2468 | if (qlcnic_83xx_check(adapter)) | ||
2469 | adapter->tgt_status_reg = NULL; | ||
2470 | else | ||
2471 | ahw->board_type = QLCNIC_BRDTYPE_P3P_10G_SFP_PLUS; | ||
2472 | |||
2424 | err = register_netdev(netdev); | 2473 | err = register_netdev(netdev); |
2425 | 2474 | ||
2426 | if (err) { | 2475 | if (err) { |
@@ -2451,7 +2500,7 @@ static void qlcnic_remove(struct pci_dev *pdev) | |||
2451 | qlcnic_cancel_idc_work(adapter); | 2500 | qlcnic_cancel_idc_work(adapter); |
2452 | ahw = adapter->ahw; | 2501 | ahw = adapter->ahw; |
2453 | 2502 | ||
2454 | qlcnic_dcb_free(adapter); | 2503 | qlcnic_dcb_free(adapter->dcb); |
2455 | 2504 | ||
2456 | unregister_netdev(netdev); | 2505 | unregister_netdev(netdev); |
2457 | qlcnic_sriov_cleanup(adapter); | 2506 | qlcnic_sriov_cleanup(adapter); |
@@ -2490,7 +2539,6 @@ static void qlcnic_remove(struct pci_dev *pdev) | |||
2490 | pci_disable_pcie_error_reporting(pdev); | 2539 | pci_disable_pcie_error_reporting(pdev); |
2491 | pci_release_regions(pdev); | 2540 | pci_release_regions(pdev); |
2492 | pci_disable_device(pdev); | 2541 | pci_disable_device(pdev); |
2493 | pci_set_drvdata(pdev, NULL); | ||
2494 | 2542 | ||
2495 | if (adapter->qlcnic_wq) { | 2543 | if (adapter->qlcnic_wq) { |
2496 | destroy_workqueue(adapter->qlcnic_wq); | 2544 | destroy_workqueue(adapter->qlcnic_wq); |
@@ -2543,12 +2591,11 @@ static int qlcnic_resume(struct pci_dev *pdev) | |||
2543 | static int qlcnic_open(struct net_device *netdev) | 2591 | static int qlcnic_open(struct net_device *netdev) |
2544 | { | 2592 | { |
2545 | struct qlcnic_adapter *adapter = netdev_priv(netdev); | 2593 | struct qlcnic_adapter *adapter = netdev_priv(netdev); |
2546 | u32 state; | ||
2547 | int err; | 2594 | int err; |
2548 | 2595 | ||
2549 | state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE); | 2596 | if (test_bit(__QLCNIC_MAINTENANCE_MODE, &adapter->state)) { |
2550 | if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD) { | 2597 | netdev_err(netdev, "%s: Device is in non-operational state\n", |
2551 | netdev_err(netdev, "%s: Device is in FAILED state\n", __func__); | 2598 | __func__); |
2552 | 2599 | ||
2553 | return -EIO; | 2600 | return -EIO; |
2554 | } | 2601 | } |
@@ -2710,24 +2757,21 @@ static void qlcnic_tx_timeout(struct net_device *netdev) | |||
2710 | QLCNIC_FORCE_FW_DUMP_KEY); | 2757 | QLCNIC_FORCE_FW_DUMP_KEY); |
2711 | } else { | 2758 | } else { |
2712 | netdev_info(netdev, "Tx timeout, reset adapter context.\n"); | 2759 | netdev_info(netdev, "Tx timeout, reset adapter context.\n"); |
2713 | if (qlcnic_82xx_check(adapter)) { | 2760 | for (ring = 0; ring < adapter->drv_tx_rings; ring++) { |
2714 | for (ring = 0; ring < adapter->max_drv_tx_rings; | 2761 | tx_ring = &adapter->tx_ring[ring]; |
2715 | ring++) { | 2762 | netdev_info(netdev, "Tx ring=%d\n", ring); |
2716 | tx_ring = &adapter->tx_ring[ring]; | 2763 | netdev_info(netdev, |
2717 | dev_info(&netdev->dev, "ring=%d\n", ring); | 2764 | "crb_intr_mask=%d, producer=%d, sw_consumer=%d, hw_consumer=%d\n", |
2718 | dev_info(&netdev->dev, "crb_intr_mask=%d\n", | 2765 | readl(tx_ring->crb_intr_mask), |
2719 | readl(tx_ring->crb_intr_mask)); | 2766 | readl(tx_ring->crb_cmd_producer), |
2720 | dev_info(&netdev->dev, "producer=%d\n", | 2767 | tx_ring->sw_consumer, |
2721 | readl(tx_ring->crb_cmd_producer)); | 2768 | le32_to_cpu(*(tx_ring->hw_consumer))); |
2722 | dev_info(&netdev->dev, "sw_consumer = %d\n", | 2769 | netdev_info(netdev, |
2723 | tx_ring->sw_consumer); | 2770 | "xmit_finished=%llu, xmit_called=%llu, xmit_on=%llu, xmit_off=%llu\n", |
2724 | dev_info(&netdev->dev, "hw_consumer = %d\n", | 2771 | tx_ring->tx_stats.xmit_finished, |
2725 | le32_to_cpu(*(tx_ring->hw_consumer))); | 2772 | tx_ring->tx_stats.xmit_called, |
2726 | dev_info(&netdev->dev, "xmit-on=%llu\n", | 2773 | tx_ring->tx_stats.xmit_on, |
2727 | tx_ring->xmit_on); | 2774 | tx_ring->tx_stats.xmit_off); |
2728 | dev_info(&netdev->dev, "xmit-off=%llu\n", | ||
2729 | tx_ring->xmit_off); | ||
2730 | } | ||
2731 | } | 2775 | } |
2732 | adapter->ahw->reset_context = 1; | 2776 | adapter->ahw->reset_context = 1; |
2733 | } | 2777 | } |
@@ -2841,7 +2885,7 @@ static void qlcnic_poll_controller(struct net_device *netdev) | |||
2841 | struct qlcnic_recv_context *recv_ctx = adapter->recv_ctx; | 2885 | struct qlcnic_recv_context *recv_ctx = adapter->recv_ctx; |
2842 | 2886 | ||
2843 | disable_irq(adapter->irq); | 2887 | disable_irq(adapter->irq); |
2844 | for (ring = 0; ring < adapter->max_sds_rings; ring++) { | 2888 | for (ring = 0; ring < adapter->drv_sds_rings; ring++) { |
2845 | sds_ring = &recv_ctx->sds_rings[ring]; | 2889 | sds_ring = &recv_ctx->sds_rings[ring]; |
2846 | qlcnic_intr(adapter->irq, sds_ring); | 2890 | qlcnic_intr(adapter->irq, sds_ring); |
2847 | } | 2891 | } |
@@ -3261,8 +3305,9 @@ void qlcnic_82xx_dev_request_reset(struct qlcnic_adapter *adapter, u32 key) | |||
3261 | return; | 3305 | return; |
3262 | 3306 | ||
3263 | state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE); | 3307 | state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE); |
3264 | if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD) { | 3308 | |
3265 | netdev_err(adapter->netdev, "%s: Device is in FAILED state\n", | 3309 | if (test_bit(__QLCNIC_MAINTENANCE_MODE, &adapter->state)) { |
3310 | netdev_err(adapter->netdev, "%s: Device is in non-operational state\n", | ||
3266 | __func__); | 3311 | __func__); |
3267 | qlcnic_api_unlock(adapter); | 3312 | qlcnic_api_unlock(adapter); |
3268 | 3313 | ||
@@ -3329,7 +3374,7 @@ qlcnic_attach_work(struct work_struct *work) | |||
3329 | return; | 3374 | return; |
3330 | } | 3375 | } |
3331 | attach: | 3376 | attach: |
3332 | qlcnic_dcb_get_info(adapter); | 3377 | qlcnic_dcb_get_info(adapter->dcb); |
3333 | 3378 | ||
3334 | if (netif_running(netdev)) { | 3379 | if (netif_running(netdev)) { |
3335 | if (qlcnic_up(adapter, netdev)) | 3380 | if (qlcnic_up(adapter, netdev)) |
@@ -3354,6 +3399,8 @@ done: | |||
3354 | static int | 3399 | static int |
3355 | qlcnic_check_health(struct qlcnic_adapter *adapter) | 3400 | qlcnic_check_health(struct qlcnic_adapter *adapter) |
3356 | { | 3401 | { |
3402 | struct qlcnic_hardware_context *ahw = adapter->ahw; | ||
3403 | struct qlcnic_fw_dump *fw_dump = &ahw->fw_dump; | ||
3357 | u32 state = 0, heartbeat; | 3404 | u32 state = 0, heartbeat; |
3358 | u32 peg_status; | 3405 | u32 peg_status; |
3359 | int err = 0; | 3406 | int err = 0; |
@@ -3378,7 +3425,7 @@ qlcnic_check_health(struct qlcnic_adapter *adapter) | |||
3378 | if (adapter->need_fw_reset) | 3425 | if (adapter->need_fw_reset) |
3379 | goto detach; | 3426 | goto detach; |
3380 | 3427 | ||
3381 | if (adapter->ahw->reset_context && qlcnic_auto_fw_reset) | 3428 | if (ahw->reset_context && qlcnic_auto_fw_reset) |
3382 | qlcnic_reset_hw_context(adapter); | 3429 | qlcnic_reset_hw_context(adapter); |
3383 | 3430 | ||
3384 | return 0; | 3431 | return 0; |
@@ -3421,6 +3468,9 @@ detach: | |||
3421 | 3468 | ||
3422 | qlcnic_schedule_work(adapter, qlcnic_detach_work, 0); | 3469 | qlcnic_schedule_work(adapter, qlcnic_detach_work, 0); |
3423 | QLCDB(adapter, DRV, "fw recovery scheduled.\n"); | 3470 | QLCDB(adapter, DRV, "fw recovery scheduled.\n"); |
3471 | } else if (!qlcnic_auto_fw_reset && fw_dump->enable && | ||
3472 | adapter->flags & QLCNIC_FW_RESET_OWNER) { | ||
3473 | qlcnic_dump_fw(adapter); | ||
3424 | } | 3474 | } |
3425 | 3475 | ||
3426 | return 1; | 3476 | return 1; |
@@ -3502,7 +3552,7 @@ static int qlcnic_attach_func(struct pci_dev *pdev) | |||
3502 | qlcnic_clr_drv_state(adapter); | 3552 | qlcnic_clr_drv_state(adapter); |
3503 | kfree(adapter->msix_entries); | 3553 | kfree(adapter->msix_entries); |
3504 | adapter->msix_entries = NULL; | 3554 | adapter->msix_entries = NULL; |
3505 | err = qlcnic_setup_intr(adapter, 0, 0); | 3555 | err = qlcnic_setup_intr(adapter); |
3506 | 3556 | ||
3507 | if (err) { | 3557 | if (err) { |
3508 | kfree(adapter->msix_entries); | 3558 | kfree(adapter->msix_entries); |
@@ -3647,130 +3697,90 @@ qlcnicvf_start_firmware(struct qlcnic_adapter *adapter) | |||
3647 | return err; | 3697 | return err; |
3648 | } | 3698 | } |
3649 | 3699 | ||
3650 | int qlcnic_validate_max_tx_rings(struct qlcnic_adapter *adapter, u32 txq) | 3700 | int qlcnic_validate_rings(struct qlcnic_adapter *adapter, __u32 ring_cnt, |
3701 | int queue_type) | ||
3651 | { | 3702 | { |
3652 | struct net_device *netdev = adapter->netdev; | 3703 | struct net_device *netdev = adapter->netdev; |
3653 | u8 max_hw = QLCNIC_MAX_TX_RINGS; | 3704 | u8 max_hw_rings = 0; |
3654 | u32 max_allowed; | 3705 | char buf[8]; |
3706 | int cur_rings; | ||
3655 | 3707 | ||
3656 | if (!qlcnic_use_msi_x && !qlcnic_use_msi) { | 3708 | if (queue_type == QLCNIC_RX_QUEUE) { |
3657 | netdev_err(netdev, "No Multi TX-Q support in INT-x mode\n"); | 3709 | max_hw_rings = adapter->max_sds_rings; |
3658 | return -EINVAL; | 3710 | cur_rings = adapter->drv_sds_rings; |
3711 | strcpy(buf, "SDS"); | ||
3712 | } else if (queue_type == QLCNIC_TX_QUEUE) { | ||
3713 | max_hw_rings = adapter->max_tx_rings; | ||
3714 | cur_rings = adapter->drv_tx_rings; | ||
3715 | strcpy(buf, "Tx"); | ||
3659 | } | 3716 | } |
3660 | 3717 | ||
3661 | if (!qlcnic_check_multi_tx(adapter)) { | 3718 | if (!qlcnic_use_msi_x && !qlcnic_use_msi) { |
3662 | netdev_err(netdev, "No Multi TX-Q support\n"); | 3719 | netdev_err(netdev, "No RSS/TSS support in INT-x mode\n"); |
3663 | return -EINVAL; | 3720 | return -EINVAL; |
3664 | } | 3721 | } |
3665 | 3722 | ||
3666 | if (txq > QLCNIC_MAX_TX_RINGS) { | 3723 | if (adapter->flags & QLCNIC_MSI_ENABLED) { |
3667 | netdev_err(netdev, "Invalid ring count\n"); | 3724 | netdev_err(netdev, "No RSS/TSS support in MSI mode\n"); |
3668 | return -EINVAL; | 3725 | return -EINVAL; |
3669 | } | 3726 | } |
3670 | 3727 | ||
3671 | max_allowed = rounddown_pow_of_two(min_t(int, max_hw, | 3728 | if (ring_cnt < 2) { |
3672 | num_online_cpus())); | 3729 | netdev_err(netdev, |
3673 | if ((txq > max_allowed) || !is_power_of_2(txq)) { | 3730 | "%s rings value should not be lower than 2\n", buf); |
3674 | if (!is_power_of_2(txq)) | ||
3675 | netdev_err(netdev, | ||
3676 | "TX queue should be a power of 2\n"); | ||
3677 | if (txq > num_online_cpus()) | ||
3678 | netdev_err(netdev, | ||
3679 | "Tx queue should not be higher than [%u], number of online CPUs in the system\n", | ||
3680 | num_online_cpus()); | ||
3681 | netdev_err(netdev, "Unable to configure %u Tx rings\n", txq); | ||
3682 | return -EINVAL; | 3731 | return -EINVAL; |
3683 | } | 3732 | } |
3684 | 3733 | ||
3685 | return 0; | 3734 | if (!is_power_of_2(ring_cnt)) { |
3686 | } | 3735 | netdev_err(netdev, "%s rings value should be a power of 2\n", |
3687 | 3736 | buf); | |
3688 | int qlcnic_validate_max_rss(struct qlcnic_adapter *adapter, | ||
3689 | __u32 val) | ||
3690 | { | ||
3691 | struct net_device *netdev = adapter->netdev; | ||
3692 | u8 max_hw = adapter->ahw->max_rx_ques; | ||
3693 | u32 max_allowed; | ||
3694 | |||
3695 | if (!qlcnic_use_msi_x && !qlcnic_use_msi) { | ||
3696 | netdev_err(netdev, "No RSS support in INT-x mode\n"); | ||
3697 | return -EINVAL; | 3737 | return -EINVAL; |
3698 | } | 3738 | } |
3699 | 3739 | ||
3700 | if (val > QLCNIC_MAX_SDS_RINGS) { | 3740 | if (qlcnic_82xx_check(adapter) && (queue_type == QLCNIC_TX_QUEUE) && |
3701 | netdev_err(netdev, "RSS value should not be higher than %u\n", | 3741 | !qlcnic_check_multi_tx(adapter)) { |
3702 | QLCNIC_MAX_SDS_RINGS); | 3742 | netdev_err(netdev, "No Multi Tx queue support\n"); |
3703 | return -EINVAL; | 3743 | return -EINVAL; |
3704 | } | 3744 | } |
3705 | 3745 | ||
3706 | max_allowed = rounddown_pow_of_two(min_t(int, max_hw, | 3746 | if (ring_cnt > num_online_cpus()) { |
3707 | num_online_cpus())); | 3747 | netdev_err(netdev, |
3708 | if ((val > max_allowed) || (val < 2) || !is_power_of_2(val)) { | 3748 | "%s value[%u] should not be higher than, number of online CPUs\n", |
3709 | if (!is_power_of_2(val)) | 3749 | buf, num_online_cpus()); |
3710 | netdev_err(netdev, "RSS value should be a power of 2\n"); | ||
3711 | |||
3712 | if (val < 2) | ||
3713 | netdev_err(netdev, "RSS value should not be lower than 2\n"); | ||
3714 | |||
3715 | if (val > max_hw) | ||
3716 | netdev_err(netdev, | ||
3717 | "RSS value should not be higher than[%u], the max RSS rings supported by the adapter\n", | ||
3718 | max_hw); | ||
3719 | |||
3720 | if (val > num_online_cpus()) | ||
3721 | netdev_err(netdev, | ||
3722 | "RSS value should not be higher than[%u], number of online CPUs in the system\n", | ||
3723 | num_online_cpus()); | ||
3724 | |||
3725 | netdev_err(netdev, "Unable to configure %u RSS rings\n", val); | ||
3726 | |||
3727 | return -EINVAL; | 3750 | return -EINVAL; |
3728 | } | 3751 | } |
3752 | |||
3729 | return 0; | 3753 | return 0; |
3730 | } | 3754 | } |
3731 | 3755 | ||
3732 | int qlcnic_set_max_rss(struct qlcnic_adapter *adapter, u8 data, int txq) | 3756 | int qlcnic_setup_rings(struct qlcnic_adapter *adapter, u8 rx_cnt, u8 tx_cnt) |
3733 | { | 3757 | { |
3734 | int err; | ||
3735 | struct net_device *netdev = adapter->netdev; | 3758 | struct net_device *netdev = adapter->netdev; |
3736 | int num_msix; | 3759 | int err; |
3737 | 3760 | ||
3738 | if (test_bit(__QLCNIC_RESETTING, &adapter->state)) | 3761 | if (test_bit(__QLCNIC_RESETTING, &adapter->state)) |
3739 | return -EBUSY; | 3762 | return -EBUSY; |
3740 | 3763 | ||
3741 | if (qlcnic_82xx_check(adapter) && !qlcnic_use_msi_x && | ||
3742 | !qlcnic_use_msi) { | ||
3743 | netdev_err(netdev, "No RSS support in INT-x mode\n"); | ||
3744 | return -EINVAL; | ||
3745 | } | ||
3746 | |||
3747 | netif_device_detach(netdev); | 3764 | netif_device_detach(netdev); |
3748 | if (netif_running(netdev)) | 3765 | if (netif_running(netdev)) |
3749 | __qlcnic_down(adapter, netdev); | 3766 | __qlcnic_down(adapter, netdev); |
3750 | 3767 | ||
3751 | qlcnic_detach(adapter); | 3768 | qlcnic_detach(adapter); |
3752 | 3769 | ||
3753 | if (qlcnic_82xx_check(adapter)) { | ||
3754 | if (txq != 0) | ||
3755 | adapter->max_drv_tx_rings = txq; | ||
3756 | |||
3757 | if (qlcnic_check_multi_tx(adapter) && | ||
3758 | (txq > adapter->max_drv_tx_rings)) | ||
3759 | num_msix = adapter->max_drv_tx_rings; | ||
3760 | else | ||
3761 | num_msix = data; | ||
3762 | } | ||
3763 | |||
3764 | if (qlcnic_83xx_check(adapter)) { | 3770 | if (qlcnic_83xx_check(adapter)) { |
3765 | qlcnic_83xx_free_mbx_intr(adapter); | 3771 | qlcnic_83xx_free_mbx_intr(adapter); |
3766 | qlcnic_83xx_enable_mbx_poll(adapter); | 3772 | qlcnic_83xx_enable_mbx_poll(adapter); |
3767 | } | 3773 | } |
3768 | 3774 | ||
3769 | netif_set_real_num_tx_queues(netdev, adapter->max_drv_tx_rings); | ||
3770 | |||
3771 | qlcnic_teardown_intr(adapter); | 3775 | qlcnic_teardown_intr(adapter); |
3772 | 3776 | ||
3773 | err = qlcnic_setup_intr(adapter, data, txq); | 3777 | /* compute and set default and max tx/sds rings */ |
3778 | qlcnic_set_tx_ring_count(adapter, tx_cnt); | ||
3779 | qlcnic_set_sds_ring_count(adapter, rx_cnt); | ||
3780 | |||
3781 | netif_set_real_num_tx_queues(netdev, adapter->drv_tx_rings); | ||
3782 | |||
3783 | err = qlcnic_setup_intr(adapter); | ||
3774 | if (err) { | 3784 | if (err) { |
3775 | kfree(adapter->msix_entries); | 3785 | kfree(adapter->msix_entries); |
3776 | netdev_err(netdev, "failed to setup interrupt\n"); | 3786 | netdev_err(netdev, "failed to setup interrupt\n"); |