path: root/drivers/scsi
commit    73287a43cc79ca06629a88d1a199cd283f42456a (patch)
author    Linus Torvalds <torvalds@linux-foundation.org>  2013-05-01 17:08:52 -0400
committer Linus Torvalds <torvalds@linux-foundation.org>  2013-05-01 17:08:52 -0400
tree      acf4456e260115bea77ee31a29f10ce17f0db45c /drivers/scsi
parent    251df49db3327c64bf917bfdba94491fde2b4ee0 (diff)
parent    20074f357da4a637430aec2879c9d864c5d2c23c (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
 "Highlights (1721 non-merge commits, this has to be a record of some sort):

   1) Add 'random' mode to team driver, from Jiri Pirko and Eric Dumazet.

   2) Make it so that any driver that supports configuration of multiple MAC addresses can provide the forwarding database add and del calls by providing a default implementation and hooking that up if the driver doesn't have an explicit set of handlers.  From Vlad Yasevich.

   3) Support GSO segmentation over tunnels and other encapsulating devices such as VXLAN, from Pravin B Shelar.

   4) Support L2 GRE tunnels in the flow dissector, from Michael Dalton.

   5) Implement Tail Loss Probe (TLP) detection in TCP, from Nandita Dukkipati.

   6) In the PHY layer, allow supporting wake-on-lan in situations where the PHY registers have to be written for it to be configured.  Use it to support wake-on-lan in mv643xx_eth.  From Michael Stapelberg.

   7) Significantly improve firewire IPV6 support, from YOSHIFUJI Hideaki.

   8) Allow multiple packets to be sent in a single transmission using network coding in batman-adv, from Martin Hundebøll.

   9) Add support for T5 cxgb4 chips, from Santosh Rastapur.

  10) Generalize the VXLAN forwarding tables so that there is more flexibility in configuring various aspects of the endpoints.  From David Stevens.

  11) Support RSS and TSO in hardware over GRE tunnels in bnx2x driver, from Dmitry Kravkov.

  12) Zero copy support in nfnetlink_queue, from Eric Dumazet and Pablo Neira Ayuso.

  13) Start adding networking selftests.

  14) In situations of overload on the same AF_PACKET fanout socket, or per-cpu packet receive queue, minimize drop by distributing the load to other cpus/fanouts.  From Willem de Bruijn and Eric Dumazet.

  15) Add support for new payload offset BPF instruction, from Daniel Borkmann.

  16) Convert several drivers over to module_platform_driver(), from Sachin Kamat.

  17) Provide a minimal BPF JIT image disassembler userspace tool, from Daniel Borkmann.

  18) Rewrite F-RTO implementation in TCP to match the final specification of it in RFC4138 and RFC5682.  From Yuchung Cheng.

  19) Provide netlink socket diag of netlink sockets ("Yo dawg, I hear you like netlink, so I implemented netlink dumping of netlink sockets.") From Andrey Vagin.

  20) Remove ugly passing of rtnetlink attributes into rtnl_doit functions, from Thomas Graf.

  21) Allow userspace to be able to see if a configuration change occurs in the middle of an address or device list dump, from Nicolas Dichtel.

  22) Support RFC3168 ECN protection for ipv6 fragments, from Hannes Frederic Sowa.

  23) Increase accuracy of packet length used by packet scheduler, from Jason Wang.

  24) Beginning set of changes to make ipv4/ipv6 fragment handling more scalable and less susceptible to overload and locking contention, from Jesper Dangaard Brouer.

  25) Get rid of using non-type-safe NLMSG_* macros and use nlmsg_*() instead.  From Hong Zhiguo.

  26) Optimize route usage in IPVS by avoiding reference counting where possible, from Julian Anastasov.

  27) Convert IPVS schedulers to RCU, also from Julian Anastasov.

  28) Support cpu fanouts in xt_NFQUEUE netfilter target, from Holger Eitzenberger.

  29) Network namespace support for nf_log, ebt_log, xt_LOG, ipt_ULOG, nfnetlink_log, and nfnetlink_queue.  From Gao feng.

  30) Implement RFC3168 ECN protection, from Hannes Frederic Sowa.

  31) Support several new r8169 chips, from Hayes Wang.

  32) Support tokenized interface identifiers in ipv6, from Daniel Borkmann.

  33) Use usbnet_link_change() helper in USB net driver, from Ming Lei.

  34) Add 802.1ad vlan offload support, from Patrick McHardy.

  35) Support mmap() based netlink communication, also from Patrick McHardy.

  36) Support HW timestamping in mlx4 driver, from Amir Vadai.

  37) Rationalize AF_PACKET packet timestamping when transmitting, from Willem de Bruijn and Daniel Borkmann.

  38) Bring parity to what's provided by /proc/net/packet socket dumping and the info provided by netlink socket dumping of AF_PACKET sockets.  From Nicolas Dichtel.

  39) Fix peeking beyond zero sized SKBs in AF_UNIX, from Benjamin Poirier"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1722 commits)
  filter: fix va_list build error
  af_unix: fix a fatal race with bit fields
  bnx2x: Prevent memory leak when cnic is absent
  bnx2x: correct reading of speed capabilities
  net: sctp: attribute printl with __printf for gcc fmt checks
  netlink: kconfig: move mmap i/o into netlink kconfig
  netpoll: convert mutex into a semaphore
  netlink: Fix skb ref counting.
  net_sched: act_ipt forward compat with xtables
  mlx4_en: fix a build error on 32bit arches
  Revert "bnx2x: allow nvram test to run when device is down"
  bridge: avoid OOPS if root port not found
  drivers: net: cpsw: fix kernel warn on cpsw irq enable
  sh_eth: use random MAC address if no valid one supplied
  3c509.c: call SET_NETDEV_DEV for all device types (ISA/ISAPnP/EISA)
  tg3: fix to append hardware time stamping flags
  unix/stream: fix peeking with an offset larger than data in queue
  unix/dgram: fix peeking with an offset larger than data in queue
  unix/dgram: peek beyond 0-sized skbs
  openvswitch: Remove unneeded ovs_netdev_get_ifindex()
  ...
Diffstat (limited to 'drivers/scsi')
-rw-r--r--  drivers/scsi/bnx2fc/bnx2fc.h            23
-rw-r--r--  drivers/scsi/bnx2fc/bnx2fc_fcoe.c       35
-rw-r--r--  drivers/scsi/bnx2fc/bnx2fc_hwi.c        22
-rw-r--r--  drivers/scsi/bnx2fc/bnx2fc_io.c          9
-rw-r--r--  drivers/scsi/csiostor/Makefile           3
-rw-r--r--  drivers/scsi/csiostor/csio_hw.c        559
-rw-r--r--  drivers/scsi/csiostor/csio_hw.h         47
-rw-r--r--  drivers/scsi/csiostor/csio_hw_chip.h   175
-rw-r--r--  drivers/scsi/csiostor/csio_hw_t4.c     403
-rw-r--r--  drivers/scsi/csiostor/csio_hw_t5.c     397
-rw-r--r--  drivers/scsi/csiostor/csio_init.c       48
-rw-r--r--  drivers/scsi/csiostor/csio_init.h       29
-rw-r--r--  drivers/scsi/csiostor/csio_lnode.h       2
-rw-r--r--  drivers/scsi/csiostor/csio_rnode.c      10
-rw-r--r--  drivers/scsi/csiostor/csio_rnode.h       2
-rw-r--r--  drivers/scsi/csiostor/csio_wr.c         60
-rw-r--r--  drivers/scsi/fcoe/fcoe.c                 2
-rw-r--r--  drivers/scsi/scsi_netlink.c              4
-rw-r--r--  drivers/scsi/scsi_transport_fc.c        21
-rw-r--r--  drivers/scsi/scsi_transport_iscsi.c     70
20 files changed, 1299 insertions, 622 deletions
diff --git a/drivers/scsi/bnx2fc/bnx2fc.h b/drivers/scsi/bnx2fc/bnx2fc.h
index 50fcd018d14b..11596b2c4702 100644
--- a/drivers/scsi/bnx2fc/bnx2fc.h
+++ b/drivers/scsi/bnx2fc/bnx2fc.h
@@ -88,9 +88,6 @@
 
 #define BNX2FC_MAX_NPIV 256
 
-#define BNX2FC_MAX_OUTSTANDING_CMNDS 2048
-#define BNX2FC_CAN_QUEUE BNX2FC_MAX_OUTSTANDING_CMNDS
-#define BNX2FC_ELSTM_XIDS BNX2FC_CAN_QUEUE
 #define BNX2FC_MIN_PAYLOAD 256
 #define BNX2FC_MAX_PAYLOAD 2048
 #define BNX2FC_MFS \
@@ -108,11 +105,8 @@
 #define BNX2FC_CONFQ_WQE_SIZE (sizeof(struct fcoe_confqe))
 #define BNX2FC_5771X_DB_PAGE_SIZE 128
 
-#define BNX2FC_MAX_TASKS \
-	(BNX2FC_MAX_OUTSTANDING_CMNDS + BNX2FC_ELSTM_XIDS)
 #define BNX2FC_TASK_SIZE 128
 #define BNX2FC_TASKS_PER_PAGE (PAGE_SIZE/BNX2FC_TASK_SIZE)
-#define BNX2FC_TASK_CTX_ARR_SZ (BNX2FC_MAX_TASKS/BNX2FC_TASKS_PER_PAGE)
 
 #define BNX2FC_MAX_ROWS_IN_HASH_TBL 8
 #define BNX2FC_HASH_TBL_CHUNK_SIZE (16 * 1024)
@@ -125,12 +119,9 @@
 #define BNX2FC_WRITE (1 << 0)
 
 #define BNX2FC_MIN_XID 0
-#define BNX2FC_MAX_XID \
-	(BNX2FC_MAX_OUTSTANDING_CMNDS + BNX2FC_ELSTM_XIDS - 1)
 #define FCOE_MAX_NUM_XIDS 0x2000
-#define FCOE_MIN_XID (BNX2FC_MAX_XID + 1)
-#define FCOE_MAX_XID (FCOE_MIN_XID + FCOE_MAX_NUM_XIDS - 1)
-#define FCOE_XIDS_PER_CPU (FCOE_MIN_XID + (512 * nr_cpu_ids) - 1)
+#define FCOE_MAX_XID_OFFSET (FCOE_MAX_NUM_XIDS - 1)
+#define FCOE_XIDS_PER_CPU_OFFSET ((512 * nr_cpu_ids) - 1)
 #define BNX2FC_MAX_LUN 0xFFFF
 #define BNX2FC_MAX_FCP_TGT 256
 #define BNX2FC_MAX_CMD_LEN 16
@@ -206,6 +197,13 @@ struct bnx2fc_hba {
 #define BNX2FC_FLAG_FW_INIT_DONE 0
 #define BNX2FC_FLAG_DESTROY_CMPL 1
 	u32 next_conn_id;
+
+	/* xid resources */
+	u16 max_xid;
+	u32 max_tasks;
+	u32 max_outstanding_cmds;
+	u32 elstm_xids;
+
 	struct fcoe_task_ctx_entry **task_ctx;
 	dma_addr_t *task_ctx_dma;
 	struct regpair *task_ctx_bd_tbl;
@@ -504,8 +502,7 @@ int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba);
 void bnx2fc_free_task_ctx(struct bnx2fc_hba *hba);
 int bnx2fc_setup_fw_resc(struct bnx2fc_hba *hba);
 void bnx2fc_free_fw_resc(struct bnx2fc_hba *hba);
-struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba,
-					    u16 min_xid, u16 max_xid);
+struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba);
 void bnx2fc_cmd_mgr_free(struct bnx2fc_cmd_mgr *cmgr);
 void bnx2fc_get_link_state(struct bnx2fc_hba *hba);
 char *bnx2fc_get_next_rqe(struct bnx2fc_rport *tgt, u8 num_items);
diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index 90bc7bd00966..7dffec1e5715 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -71,7 +71,7 @@ static void bnx2fc_recv_frame(struct sk_buff *skb);
 static void bnx2fc_start_disc(struct bnx2fc_interface *interface);
 static int bnx2fc_shost_config(struct fc_lport *lport, struct device *dev);
 static int bnx2fc_lport_config(struct fc_lport *lport);
-static int bnx2fc_em_config(struct fc_lport *lport);
+static int bnx2fc_em_config(struct fc_lport *lport, struct bnx2fc_hba *hba);
 static int bnx2fc_bind_adapter_devices(struct bnx2fc_hba *hba);
 static void bnx2fc_unbind_adapter_devices(struct bnx2fc_hba *hba);
 static int bnx2fc_bind_pcidev(struct bnx2fc_hba *hba);
@@ -944,16 +944,17 @@ static int bnx2fc_libfc_config(struct fc_lport *lport)
 	return 0;
 }
 
-static int bnx2fc_em_config(struct fc_lport *lport)
+static int bnx2fc_em_config(struct fc_lport *lport, struct bnx2fc_hba *hba)
 {
-	int max_xid;
+	int fcoe_min_xid, fcoe_max_xid;
 
+	fcoe_min_xid = hba->max_xid + 1;
 	if (nr_cpu_ids <= 2)
-		max_xid = FCOE_XIDS_PER_CPU;
+		fcoe_max_xid = hba->max_xid + FCOE_XIDS_PER_CPU_OFFSET;
 	else
-		max_xid = FCOE_MAX_XID;
-	if (!fc_exch_mgr_alloc(lport, FC_CLASS_3, FCOE_MIN_XID,
-			       max_xid, NULL)) {
+		fcoe_max_xid = hba->max_xid + FCOE_MAX_XID_OFFSET;
+	if (!fc_exch_mgr_alloc(lport, FC_CLASS_3, fcoe_min_xid,
+			       fcoe_max_xid, NULL)) {
 		printk(KERN_ERR PFX "em_config:fc_exch_mgr_alloc failed\n");
 		return -ENOMEM;
 	}
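The reworked bnx2fc_em_config() above derives the libfc exchange-ID window from the per-adapter hba->max_xid instead of the old compile-time FCOE_MIN_XID/FCOE_MAX_XID macros. A minimal standalone sketch of that range selection (fcoe_xid_range is a hypothetical helper; nr_cpu_ids stands in for the kernel global of the same name):

```c
#include <assert.h>

/* Offset constants from the reworked bnx2fc.h. */
#define FCOE_MAX_NUM_XIDS	0x2000
#define FCOE_MAX_XID_OFFSET	(FCOE_MAX_NUM_XIDS - 1)

/* Mirrors bnx2fc_em_config(): the libfc exchange range starts just past
 * the hardware xid space; on small systems it is sized per CPU. */
static void fcoe_xid_range(int max_xid, int nr_cpu_ids,
			   int *fcoe_min_xid, int *fcoe_max_xid)
{
	int xids_per_cpu_offset = (512 * nr_cpu_ids) - 1;

	*fcoe_min_xid = max_xid + 1;
	if (nr_cpu_ids <= 2)
		*fcoe_max_xid = max_xid + xids_per_cpu_offset;
	else
		*fcoe_max_xid = max_xid + FCOE_MAX_XID_OFFSET;
}
```

For a hypothetical adapter with max_xid = 4095 on a 2-CPU box this yields the window [4096, 5118]; with more CPUs the full 0x2000-exchange offset applies instead.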
@@ -1300,6 +1301,12 @@ static struct bnx2fc_hba *bnx2fc_hba_create(struct cnic_dev *cnic)
 	mutex_init(&hba->hba_mutex);
 
 	hba->cnic = cnic;
+
+	hba->max_tasks = cnic->max_fcoe_exchanges;
+	hba->elstm_xids = (hba->max_tasks / 2);
+	hba->max_outstanding_cmds = hba->elstm_xids;
+	hba->max_xid = (hba->max_tasks - 1);
+
 	rc = bnx2fc_bind_pcidev(hba);
 	if (rc) {
 		printk(KERN_ERR PFX "create_adapter: bind error\n");
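The constants deleted from bnx2fc.h (BNX2FC_MAX_TASKS, BNX2FC_MAX_OUTSTANDING_CMNDS, BNX2FC_ELSTM_XIDS, BNX2FC_MAX_XID) are now computed per adapter in bnx2fc_hba_create() from the cnic-reported exchange count. A standalone sketch of that arithmetic (the struct and the 4096 figure are illustrative; field names mirror the patch):

```c
#include <stdint.h>

/* Mirrors the xid bookkeeping fields added to struct bnx2fc_hba. */
struct xid_resc {
	uint16_t max_xid;
	uint32_t max_tasks;
	uint32_t max_outstanding_cmds;
	uint32_t elstm_xids;
};

/* Same derivation as bnx2fc_hba_create(): half of the exchanges are
 * reserved for ELS/TM, the other half bounds the SCSI command depth,
 * and xids run from 0 to max_tasks - 1. */
static void derive_xid_resc(struct xid_resc *r, uint32_t max_fcoe_exchanges)
{
	r->max_tasks = max_fcoe_exchanges;
	r->elstm_xids = r->max_tasks / 2;
	r->max_outstanding_cmds = r->elstm_xids;
	r->max_xid = (uint16_t)(r->max_tasks - 1);
}
```

Feeding in 4096 exchanges reproduces the old fixed values (2048 outstanding commands plus 2048 ELS/TM xids), which is why the compile-time macros could be dropped without changing behaviour on such hardware.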
@@ -1318,8 +1325,7 @@ static struct bnx2fc_hba *bnx2fc_hba_create(struct cnic_dev *cnic)
 
 	hba->num_ofld_sess = 0;
 
-	hba->cmd_mgr = bnx2fc_cmd_mgr_alloc(hba, BNX2FC_MIN_XID,
-					    BNX2FC_MAX_XID);
+	hba->cmd_mgr = bnx2fc_cmd_mgr_alloc(hba);
 	if (!hba->cmd_mgr) {
 		printk(KERN_ERR PFX "em_config:bnx2fc_cmd_mgr_alloc failed\n");
 		goto cmgr_err;
@@ -1330,13 +1336,13 @@ static struct bnx2fc_hba *bnx2fc_hba_create(struct cnic_dev *cnic)
 		FCOE_IOS_PER_CONNECTION_SHIFT;
 	fcoe_cap->capability1 |= BNX2FC_NUM_MAX_SESS <<
 		FCOE_LOGINS_PER_PORT_SHIFT;
-	fcoe_cap->capability2 = BNX2FC_MAX_OUTSTANDING_CMNDS <<
+	fcoe_cap->capability2 = hba->max_outstanding_cmds <<
 		FCOE_NUMBER_OF_EXCHANGES_SHIFT;
 	fcoe_cap->capability2 |= BNX2FC_MAX_NPIV <<
 		FCOE_NPIV_WWN_PER_PORT_SHIFT;
 	fcoe_cap->capability3 = BNX2FC_NUM_MAX_SESS <<
 		FCOE_TARGETS_SUPPORTED_SHIFT;
-	fcoe_cap->capability3 |= BNX2FC_MAX_OUTSTANDING_CMNDS <<
+	fcoe_cap->capability3 |= hba->max_outstanding_cmds <<
 		FCOE_OUTSTANDING_COMMANDS_SHIFT;
 	fcoe_cap->capability4 = FCOE_CAPABILITY4_STATEFUL;
 
@@ -1416,7 +1422,7 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
 	struct Scsi_Host *shost;
 	struct fc_vport *vport = dev_to_vport(parent);
 	struct bnx2fc_lport *blport;
-	struct bnx2fc_hba *hba;
+	struct bnx2fc_hba *hba = interface->hba;
 	int rc = 0;
 
 	blport = kzalloc(sizeof(struct bnx2fc_lport), GFP_KERNEL);
@@ -1426,6 +1432,7 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
 	}
 
 	/* Allocate Scsi_Host structure */
+	bnx2fc_shost_template.can_queue = hba->max_outstanding_cmds;
 	if (!npiv)
 		lport = libfc_host_alloc(&bnx2fc_shost_template, sizeof(*port));
 	else
@@ -1477,7 +1484,7 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
 
 	/* Allocate exchange manager */
 	if (!npiv)
-		rc = bnx2fc_em_config(lport);
+		rc = bnx2fc_em_config(lport, hba);
 	else {
 		shost = vport_to_shost(vport);
 		n_port = shost_priv(shost);
@@ -1491,7 +1498,6 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
 
 	bnx2fc_interface_get(interface);
 
-	hba = interface->hba;
 	spin_lock_bh(&hba->hba_lock);
 	blport->lport = lport;
 	list_add_tail(&blport->list, &hba->vports);
@@ -2706,7 +2712,6 @@ static struct scsi_host_template bnx2fc_shost_template = {
 	.change_queue_type = fc_change_queue_type,
 	.this_id = -1,
 	.cmd_per_lun = 3,
-	.can_queue = BNX2FC_CAN_QUEUE,
 	.use_clustering = ENABLE_CLUSTERING,
 	.sg_tablesize = BNX2FC_MAX_BDS_PER_CMD,
 	.max_sectors = 1024,
diff --git a/drivers/scsi/bnx2fc/bnx2fc_hwi.c b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
index 85ea98a80f40..50510ffe1bf5 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
@@ -77,7 +77,7 @@ int bnx2fc_send_fw_fcoe_init_msg(struct bnx2fc_hba *hba)
 	fcoe_init1.hdr.flags = (FCOE_KWQE_LAYER_CODE <<
 					FCOE_KWQE_HEADER_LAYER_CODE_SHIFT);
 
-	fcoe_init1.num_tasks = BNX2FC_MAX_TASKS;
+	fcoe_init1.num_tasks = hba->max_tasks;
 	fcoe_init1.sq_num_wqes = BNX2FC_SQ_WQES_MAX;
 	fcoe_init1.rq_num_wqes = BNX2FC_RQ_WQES_MAX;
 	fcoe_init1.rq_buffer_log_size = BNX2FC_RQ_BUF_LOG_SZ;
@@ -697,7 +697,7 @@ static void bnx2fc_process_unsol_compl(struct bnx2fc_rport *tgt, u16 wqe)
 		err_entry->data.tx_buf_off, err_entry->data.rx_buf_off);
 
 
-	if (xid > BNX2FC_MAX_XID) {
+	if (xid > hba->max_xid) {
 		BNX2FC_TGT_DBG(tgt, "xid(0x%x) out of FW range\n",
 			       xid);
 		goto ret_err_rqe;
@@ -815,7 +815,7 @@ ret_err_rqe:
 	BNX2FC_TGT_DBG(tgt, "buf_offsets - tx = 0x%x, rx = 0x%x",
 		err_entry->data.tx_buf_off, err_entry->data.rx_buf_off);
 
-	if (xid > BNX2FC_MAX_XID) {
+	if (xid > hba->max_xid) {
 		BNX2FC_TGT_DBG(tgt, "xid(0x%x) out of FW range\n", xid);
 		goto ret_warn_rqe;
 	}
@@ -880,7 +880,7 @@ void bnx2fc_process_cq_compl(struct bnx2fc_rport *tgt, u16 wqe)
 
 	spin_lock_bh(&tgt->tgt_lock);
 	xid = wqe & FCOE_PEND_WQ_CQE_TASK_ID;
-	if (xid >= BNX2FC_MAX_TASKS) {
+	if (xid >= hba->max_tasks) {
 		printk(KERN_ERR PFX "ERROR:xid out of range\n");
 		spin_unlock_bh(&tgt->tgt_lock);
 		return;
@@ -1842,6 +1842,7 @@ int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba)
 	int rc = 0;
 	struct regpair *task_ctx_bdt;
 	dma_addr_t addr;
+	int task_ctx_arr_sz;
 	int i;
 
 	/*
@@ -1865,7 +1866,8 @@ int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba)
 	 * Allocate task_ctx which is an array of pointers pointing to
 	 * a page containing 32 task contexts
 	 */
-	hba->task_ctx = kzalloc((BNX2FC_TASK_CTX_ARR_SZ * sizeof(void *)),
+	task_ctx_arr_sz = (hba->max_tasks / BNX2FC_TASKS_PER_PAGE);
+	hba->task_ctx = kzalloc((task_ctx_arr_sz * sizeof(void *)),
				 GFP_KERNEL);
 	if (!hba->task_ctx) {
 		printk(KERN_ERR PFX "unable to allocate task context array\n");
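With BNX2FC_TASK_CTX_ARR_SZ gone, the number of task-context pages is now computed at runtime from hba->max_tasks. A minimal sketch of that sizing, assuming the common 4 KiB PAGE_SIZE (the kernel value is architecture-dependent):

```c
/* Constants retained in bnx2fc.h; PAGE_SIZE fixed here for the sketch. */
#define PAGE_SIZE		4096
#define BNX2FC_TASK_SIZE	128
#define BNX2FC_TASKS_PER_PAGE	(PAGE_SIZE / BNX2FC_TASK_SIZE)

/* Replaces the compile-time BNX2FC_TASK_CTX_ARR_SZ: one context page
 * holds BNX2FC_TASKS_PER_PAGE (32) task contexts. */
static unsigned int task_ctx_arr_sz(unsigned int max_tasks)
{
	return max_tasks / BNX2FC_TASKS_PER_PAGE;
}
```

Both bnx2fc_setup_task_ctx() and bnx2fc_free_task_ctx() recompute this value, so allocation and teardown always agree on the array length for the adapter at hand.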
@@ -1876,7 +1878,7 @@ int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba)
 	/*
 	 * Allocate task_ctx_dma which is an array of dma addresses
 	 */
-	hba->task_ctx_dma = kmalloc((BNX2FC_TASK_CTX_ARR_SZ *
+	hba->task_ctx_dma = kmalloc((task_ctx_arr_sz *
 				     sizeof(dma_addr_t)), GFP_KERNEL);
 	if (!hba->task_ctx_dma) {
 		printk(KERN_ERR PFX "unable to alloc context mapping array\n");
@@ -1885,7 +1887,7 @@ int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba)
 	}
 
 	task_ctx_bdt = (struct regpair *)hba->task_ctx_bd_tbl;
-	for (i = 0; i < BNX2FC_TASK_CTX_ARR_SZ; i++) {
+	for (i = 0; i < task_ctx_arr_sz; i++) {
 
 		hba->task_ctx[i] = dma_alloc_coherent(&hba->pcidev->dev,
 						      PAGE_SIZE,
@@ -1905,7 +1907,7 @@ int bnx2fc_setup_task_ctx(struct bnx2fc_hba *hba)
 	return 0;
 
 out3:
-	for (i = 0; i < BNX2FC_TASK_CTX_ARR_SZ; i++) {
+	for (i = 0; i < task_ctx_arr_sz; i++) {
 		if (hba->task_ctx[i]) {
 
 			dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE,
@@ -1929,6 +1931,7 @@ out:
 
 void bnx2fc_free_task_ctx(struct bnx2fc_hba *hba)
 {
+	int task_ctx_arr_sz;
 	int i;
 
 	if (hba->task_ctx_bd_tbl) {
@@ -1938,8 +1941,9 @@ void bnx2fc_free_task_ctx(struct bnx2fc_hba *hba)
 		hba->task_ctx_bd_tbl = NULL;
 	}
 
+	task_ctx_arr_sz = (hba->max_tasks / BNX2FC_TASKS_PER_PAGE);
 	if (hba->task_ctx) {
-		for (i = 0; i < BNX2FC_TASK_CTX_ARR_SZ; i++) {
+		for (i = 0; i < task_ctx_arr_sz; i++) {
 			if (hba->task_ctx[i]) {
 				dma_free_coherent(&hba->pcidev->dev, PAGE_SIZE,
 						  hba->task_ctx[i],
diff --git a/drivers/scsi/bnx2fc/bnx2fc_io.c b/drivers/scsi/bnx2fc/bnx2fc_io.c
index 60798e829de6..723a9a8ba5ee 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_io.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_io.c
@@ -239,8 +239,7 @@ static void bnx2fc_scsi_done(struct bnx2fc_cmd *io_req, int err_code)
 		sc_cmd->scsi_done(sc_cmd);
 }
 
-struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba,
-					    u16 min_xid, u16 max_xid)
+struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba)
 {
 	struct bnx2fc_cmd_mgr *cmgr;
 	struct io_bdt *bdt_info;
@@ -252,6 +251,8 @@ struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba,
 	int num_ios, num_pri_ios;
 	size_t bd_tbl_sz;
 	int arr_sz = num_possible_cpus() + 1;
+	u16 min_xid = BNX2FC_MIN_XID;
+	u16 max_xid = hba->max_xid;
 
 	if (max_xid <= min_xid || max_xid == FC_XID_UNKNOWN) {
 		printk(KERN_ERR PFX "cmd_mgr_alloc: Invalid min_xid 0x%x \
@@ -298,7 +299,7 @@ struct bnx2fc_cmd_mgr *bnx2fc_cmd_mgr_alloc(struct bnx2fc_hba *hba,
 	 * of slow path requests.
 	 */
 	xid = BNX2FC_MIN_XID;
-	num_pri_ios = num_ios - BNX2FC_ELSTM_XIDS;
+	num_pri_ios = num_ios - hba->elstm_xids;
 	for (i = 0; i < num_ios; i++) {
 		io_req = kzalloc(sizeof(*io_req), GFP_KERNEL);
 
@@ -367,7 +368,7 @@ void bnx2fc_cmd_mgr_free(struct bnx2fc_cmd_mgr *cmgr)
 	struct bnx2fc_hba *hba = cmgr->hba;
 	size_t bd_tbl_sz;
 	u16 min_xid = BNX2FC_MIN_XID;
-	u16 max_xid = BNX2FC_MAX_XID;
+	u16 max_xid = hba->max_xid;
 	int num_ios;
 	int i;
 
diff --git a/drivers/scsi/csiostor/Makefile b/drivers/scsi/csiostor/Makefile
index b581966c88f9..913b9a92fb06 100644
--- a/drivers/scsi/csiostor/Makefile
+++ b/drivers/scsi/csiostor/Makefile
@@ -8,4 +8,5 @@ ccflags-y += -I$(srctree)/drivers/net/ethernet/chelsio/cxgb4
 obj-$(CONFIG_SCSI_CHELSIO_FCOE) += csiostor.o
 
 csiostor-objs := csio_attr.o csio_init.o csio_lnode.o csio_scsi.o \
-		csio_hw.o csio_isr.o csio_mb.o csio_rnode.o csio_wr.o
+		csio_hw.o csio_hw_t4.o csio_hw_t5.o csio_isr.o \
+		csio_mb.o csio_rnode.o csio_wr.o
diff --git a/drivers/scsi/csiostor/csio_hw.c b/drivers/scsi/csiostor/csio_hw.c
index 7dbaf58fab9f..193605519361 100644
--- a/drivers/scsi/csiostor/csio_hw.c
+++ b/drivers/scsi/csiostor/csio_hw.c
@@ -61,7 +61,7 @@ int csio_msi = 2;
 static int dev_num;
 
 /* FCoE Adapter types & its description */
-static const struct csio_adap_desc csio_fcoe_adapters[] = {
+static const struct csio_adap_desc csio_t4_fcoe_adapters[] = {
 	{"T440-Dbg 10G", "Chelsio T440-Dbg 10G [FCoE]"},
 	{"T420-CR 10G", "Chelsio T420-CR 10G [FCoE]"},
 	{"T422-CR 10G/1G", "Chelsio T422-CR 10G/1G [FCoE]"},
@@ -77,7 +77,38 @@ static const struct csio_adap_desc csio_fcoe_adapters[] = {
 	{"B404-BT 1G", "Chelsio B404-BT 1G [FCoE]"},
 	{"T480-CR 10G", "Chelsio T480-CR 10G [FCoE]"},
 	{"T440-LP-CR 10G", "Chelsio T440-LP-CR 10G [FCoE]"},
-	{"T4 FPGA", "Chelsio T4 FPGA [FCoE]"}
+	{"AMSTERDAM 10G", "Chelsio AMSTERDAM 10G [FCoE]"},
+	{"HUAWEI T480 10G", "Chelsio HUAWEI T480 10G [FCoE]"},
+	{"HUAWEI T440 10G", "Chelsio HUAWEI T440 10G [FCoE]"},
+	{"HUAWEI STG 10G", "Chelsio HUAWEI STG 10G [FCoE]"},
+	{"ACROMAG XAUI 10G", "Chelsio ACROMAG XAUI 10G [FCoE]"},
+	{"ACROMAG SFP+ 10G", "Chelsio ACROMAG SFP+ 10G [FCoE]"},
+	{"QUANTA SFP+ 10G", "Chelsio QUANTA SFP+ 10G [FCoE]"},
+	{"HUAWEI 10Gbase-T", "Chelsio HUAWEI 10Gbase-T [FCoE]"},
+	{"HUAWEI T4TOE 10G", "Chelsio HUAWEI T4TOE 10G [FCoE]"}
+};
+
+static const struct csio_adap_desc csio_t5_fcoe_adapters[] = {
+	{"T580-Dbg 10G", "Chelsio T580-Dbg 10G [FCoE]"},
+	{"T520-CR 10G", "Chelsio T520-CR 10G [FCoE]"},
+	{"T522-CR 10G/1G", "Chelsio T452-CR 10G/1G [FCoE]"},
+	{"T540-CR 10G", "Chelsio T540-CR 10G [FCoE]"},
+	{"T520-BCH 10G", "Chelsio T520-BCH 10G [FCoE]"},
+	{"T540-BCH 10G", "Chelsio T540-BCH 10G [FCoE]"},
+	{"T540-CH 10G", "Chelsio T540-CH 10G [FCoE]"},
+	{"T520-SO 10G", "Chelsio T520-SO 10G [FCoE]"},
+	{"T520-CX4 10G", "Chelsio T520-CX4 10G [FCoE]"},
+	{"T520-BT 10G", "Chelsio T520-BT 10G [FCoE]"},
+	{"T504-BT 1G", "Chelsio T504-BT 1G [FCoE]"},
+	{"B520-SR 10G", "Chelsio B520-SR 10G [FCoE]"},
+	{"B504-BT 1G", "Chelsio B504-BT 1G [FCoE]"},
+	{"T580-CR 10G", "Chelsio T580-CR 10G [FCoE]"},
+	{"T540-LP-CR 10G", "Chelsio T540-LP-CR 10G [FCoE]"},
+	{"AMSTERDAM 10G", "Chelsio AMSTERDAM 10G [FCoE]"},
+	{"T580-LP-CR 40G", "Chelsio T580-LP-CR 40G [FCoE]"},
+	{"T520-LL-CR 10G", "Chelsio T520-LL-CR 10G [FCoE]"},
+	{"T560-CR 40G", "Chelsio T560-CR 40G [FCoE]"},
+	{"T580-CR 40G", "Chelsio T580-CR 40G [FCoE]"}
 };
 
 static void csio_mgmtm_cleanup(struct csio_mgmtm *);
@@ -124,7 +155,7 @@ int csio_is_hw_removing(struct csio_hw *hw)
  * at the time it indicated completion is stored there. Returns 0 if the
  * operation completes and -EAGAIN otherwise.
  */
-static int
+int
 csio_hw_wait_op_done_val(struct csio_hw *hw, int reg, uint32_t mask,
 			 int polarity, int attempts, int delay, uint32_t *valp)
 {
@@ -145,6 +176,24 @@ csio_hw_wait_op_done_val(struct csio_hw *hw, int reg, uint32_t mask,
 	}
 }
 
+/*
+ * csio_hw_tp_wr_bits_indirect - set/clear bits in an indirect TP register
+ * @hw: the adapter
+ * @addr: the indirect TP register address
+ * @mask: specifies the field within the register to modify
+ * @val: new value for the field
+ *
+ * Sets a field of an indirect TP register to the given value.
+ */
+void
+csio_hw_tp_wr_bits_indirect(struct csio_hw *hw, unsigned int addr,
+			    unsigned int mask, unsigned int val)
+{
+	csio_wr_reg32(hw, addr, TP_PIO_ADDR);
+	val |= csio_rd_reg32(hw, TP_PIO_DATA) & ~mask;
+	csio_wr_reg32(hw, val, TP_PIO_DATA);
+}
+
 void
 csio_set_reg_field(struct csio_hw *hw, uint32_t reg, uint32_t mask,
 		   uint32_t value)
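The new csio_hw_tp_wr_bits_indirect() above is a read-modify-write through the TP_PIO address/data pair. Stripped of the register I/O, the bit manipulation it performs looks like this (tp_rmw is a hypothetical name; as in the kernel routine, val is assumed to lie within mask):

```c
#include <stdint.h>

/* Preserve every bit of the current register value outside @mask,
 * then OR in the new field value @val. */
static uint32_t tp_rmw(uint32_t cur, uint32_t mask, uint32_t val)
{
	return val | (cur & ~mask);
}
```

Note the helper does not mask val itself; a caller passing bits outside mask would corrupt neighbouring fields, which is why the kernel comment specifies "the field within the register".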
@@ -157,242 +206,22 @@ csio_set_reg_field(struct csio_hw *hw, uint32_t reg, uint32_t mask,
157 206
158} 207}
159 208
160/*
161 * csio_hw_mc_read - read from MC through backdoor accesses
162 * @hw: the hw module
163 * @addr: address of first byte requested
164 * @data: 64 bytes of data containing the requested address
165 * @ecc: where to store the corresponding 64-bit ECC word
166 *
167 * Read 64 bytes of data from MC starting at a 64-byte-aligned address
168 * that covers the requested address @addr. If @parity is not %NULL it
169 * is assigned the 64-bit ECC word for the read data.
170 */
171int
172csio_hw_mc_read(struct csio_hw *hw, uint32_t addr, __be32 *data,
173 uint64_t *ecc)
174{
175 int i;
176
177 if (csio_rd_reg32(hw, MC_BIST_CMD) & START_BIST)
178 return -EBUSY;
179 csio_wr_reg32(hw, addr & ~0x3fU, MC_BIST_CMD_ADDR);
180 csio_wr_reg32(hw, 64, MC_BIST_CMD_LEN);
181 csio_wr_reg32(hw, 0xc, MC_BIST_DATA_PATTERN);
182 csio_wr_reg32(hw, BIST_OPCODE(1) | START_BIST | BIST_CMD_GAP(1),
183 MC_BIST_CMD);
184 i = csio_hw_wait_op_done_val(hw, MC_BIST_CMD, START_BIST,
185 0, 10, 1, NULL);
186 if (i)
187 return i;
188
189#define MC_DATA(i) MC_BIST_STATUS_REG(MC_BIST_STATUS_RDATA, i)
190
191 for (i = 15; i >= 0; i--)
192 *data++ = htonl(csio_rd_reg32(hw, MC_DATA(i)));
193 if (ecc)
194 *ecc = csio_rd_reg64(hw, MC_DATA(16));
195#undef MC_DATA
196 return 0;
197}
198
199/*
200 * csio_hw_edc_read - read from EDC through backdoor accesses
201 * @hw: the hw module
202 * @idx: which EDC to access
203 * @addr: address of first byte requested
204 * @data: 64 bytes of data containing the requested address
205 * @ecc: where to store the corresponding 64-bit ECC word
206 *
207 * Read 64 bytes of data from EDC starting at a 64-byte-aligned address
208 * that covers the requested address @addr. If @ecc is not %NULL it
209 * is assigned the 64-bit ECC word for the read data.
210 */
211int
212csio_hw_edc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
213 uint64_t *ecc)
214{
215 int i;
216
217 idx *= EDC_STRIDE;
218 if (csio_rd_reg32(hw, EDC_BIST_CMD + idx) & START_BIST)
219 return -EBUSY;
220 csio_wr_reg32(hw, addr & ~0x3fU, EDC_BIST_CMD_ADDR + idx);
221 csio_wr_reg32(hw, 64, EDC_BIST_CMD_LEN + idx);
222 csio_wr_reg32(hw, 0xc, EDC_BIST_DATA_PATTERN + idx);
223 csio_wr_reg32(hw, BIST_OPCODE(1) | BIST_CMD_GAP(1) | START_BIST,
224 EDC_BIST_CMD + idx);
225 i = csio_hw_wait_op_done_val(hw, EDC_BIST_CMD + idx, START_BIST,
226 0, 10, 1, NULL);
227 if (i)
228 return i;
229
230#define EDC_DATA(i) (EDC_BIST_STATUS_REG(EDC_BIST_STATUS_RDATA, i) + idx)
231
232 for (i = 15; i >= 0; i--)
233 *data++ = htonl(csio_rd_reg32(hw, EDC_DATA(i)));
234 if (ecc)
235 *ecc = csio_rd_reg64(hw, EDC_DATA(16));
236#undef EDC_DATA
237 return 0;
238}
239
240/*
241 * csio_mem_win_rw - read/write memory through PCIE memory window
242 * @hw: the adapter
243 * @addr: address of first byte requested
244 * @data: MEMWIN0_APERTURE bytes of data containing the requested address
245 * @dir: direction of transfer 1 => read, 0 => write
246 *
247 * Read/write MEMWIN0_APERTURE bytes of data from MC starting at a
248 * MEMWIN0_APERTURE-byte-aligned address that covers the requested
249 * address @addr.
250 */
251static int
252csio_mem_win_rw(struct csio_hw *hw, u32 addr, u32 *data, int dir)
253{
254 int i;
255
256 /*
257 * Set up the offset into the PCIE memory window. Address must be a
258 * MEMWIN0_APERTURE-byte-aligned address. (Read back MA register to
259 * ensure that changes propagate before we attempt to use the new
260 * values.)
261 */
262 csio_wr_reg32(hw, addr & ~(MEMWIN0_APERTURE - 1),
263 PCIE_MEM_ACCESS_OFFSET);
264 csio_rd_reg32(hw, PCIE_MEM_ACCESS_OFFSET);
265
266 /* Collecting data 4 bytes at a time up to MEMWIN0_APERTURE */
267 for (i = 0; i < MEMWIN0_APERTURE; i = i + sizeof(__be32)) {
268 if (dir)
269 *data++ = csio_rd_reg32(hw, (MEMWIN0_BASE + i));
270 else
271 csio_wr_reg32(hw, *data++, (MEMWIN0_BASE + i));
272 }
273
274 return 0;
275}
276
277/*
278 * csio_memory_rw - read/write EDC 0, EDC 1 or MC via PCIE memory window
279 * @hw: the csio_hw
280 * @mtype: memory type: MEM_EDC0, MEM_EDC1 or MEM_MC
281 * @addr: address within indicated memory type
282 * @len: amount of memory to transfer
283 * @buf: host memory buffer
284 * @dir: direction of transfer 1 => read, 0 => write
285 *
286 * Reads/writes an [almost] arbitrary memory region in the firmware: the
287 * firmware memory address, length and host buffer must be aligned on
288 * 32-bit boundaries. The memory is transferred as a raw byte sequence
289 * from/to the firmware's memory. If this memory contains data
290 * structures which contain multi-byte integers, it's the caller's
291 * responsibility to perform appropriate byte order conversions.
292 */
293static int
294csio_memory_rw(struct csio_hw *hw, int mtype, u32 addr, u32 len,
295 uint32_t *buf, int dir)
296{
297 uint32_t pos, start, end, offset, memoffset;
298 int ret;
299 uint32_t *data;
300
301 /*
302 * Argument sanity checks ...
303 */
304 if ((addr & 0x3) || (len & 0x3))
305 return -EINVAL;
306
307 data = kzalloc(MEMWIN0_APERTURE, GFP_KERNEL);
308 if (!data)
309 return -ENOMEM;
310
311 /* Offset into the region of memory which is being accessed
312 * MEM_EDC0 = 0
313 * MEM_EDC1 = 1
314 * MEM_MC = 2
315 */
316 memoffset = (mtype * (5 * 1024 * 1024));
317
318 /* Determine the PCIE_MEM_ACCESS_OFFSET */
319 addr = addr + memoffset;
320
321 /*
322 * The underlying EDC/MC read routines read MEMWIN0_APERTURE bytes
323 * at a time so we need to round down the start and round up the end.
324 * We'll start copying out of the first line at (addr - start) a word
325 * at a time.
326 */
327 start = addr & ~(MEMWIN0_APERTURE-1);
328 end = (addr + len + MEMWIN0_APERTURE-1) & ~(MEMWIN0_APERTURE-1);
329 offset = (addr - start)/sizeof(__be32);
330
331 for (pos = start; pos < end; pos += MEMWIN0_APERTURE, offset = 0) {
332 /*
333 * If we're writing, copy the data from the caller's memory
334 * buffer
335 */
336 if (!dir) {
337 /*
338 * If we're doing a partial write, then we need to do
339 * a read-modify-write ...
340 */
341 if (offset || len < MEMWIN0_APERTURE) {
342 ret = csio_mem_win_rw(hw, pos, data, 1);
343 if (ret) {
344 kfree(data);
345 return ret;
346 }
347 }
348 while (offset < (MEMWIN0_APERTURE/sizeof(__be32)) &&
349 len > 0) {
350 data[offset++] = *buf++;
351 len -= sizeof(__be32);
352 }
353 }
354
355 /*
356 * Transfer a block of memory and bail if there's an error.
357 */
358 ret = csio_mem_win_rw(hw, pos, data, dir);
359 if (ret) {
360 kfree(data);
361 return ret;
362 }
363
364 /*
365 * If we're reading, copy the data into the caller's memory
366 * buffer.
367 */
368 if (dir)
369 while (offset < (MEMWIN0_APERTURE/sizeof(__be32)) &&
370 len > 0) {
371 *buf++ = data[offset++];
372 len -= sizeof(__be32);
373 }
374 }
375
376 kfree(data);
377
378 return 0;
379}
380
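The window arithmetic in the removed csio_memory_rw() (round the start down and the end up to a MEMWIN0_APERTURE boundary, then index into the window a 32-bit word at a time) can be checked with a small stand-alone sketch. The 2 KB aperture matches MEMWIN0_APERTURE; the helper names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define APERTURE 2048u	/* mirrors MEMWIN0_APERTURE */

/* Round addr down to the start of its aperture-sized window. */
static uint32_t win_start(uint32_t addr)
{
	return addr & ~(APERTURE - 1);
}

/* Round addr + len up to the next window boundary. */
static uint32_t win_end(uint32_t addr, uint32_t len)
{
	return (addr + len + APERTURE - 1) & ~(APERTURE - 1);
}

/* Word offset of addr within its window, as used to index data[]. */
static uint32_t win_word_offset(uint32_t addr)
{
	return (addr - win_start(addr)) / sizeof(uint32_t);
}
```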
381static int 209static int
382csio_memory_write(struct csio_hw *hw, int mtype, u32 addr, u32 len, u32 *buf) 210csio_memory_write(struct csio_hw *hw, int mtype, u32 addr, u32 len, u32 *buf)
383{ 211{
384 return csio_memory_rw(hw, mtype, addr, len, buf, 0); 212 return hw->chip_ops->chip_memory_rw(hw, MEMWIN_CSIOSTOR, mtype,
213 addr, len, buf, 0);
385} 214}
386 215
387/* 216/*
388 * EEPROM reads take a few tens of us while writes can take a bit over 5 ms. 217 * EEPROM reads take a few tens of us while writes can take a bit over 5 ms.
389 */ 218 */
390#define EEPROM_MAX_RD_POLL 40 219#define EEPROM_MAX_RD_POLL 40
391#define EEPROM_MAX_WR_POLL 6 220#define EEPROM_MAX_WR_POLL 6
392#define EEPROM_STAT_ADDR 0x7bfc 221#define EEPROM_STAT_ADDR 0x7bfc
393#define VPD_BASE 0x400 222#define VPD_BASE 0x400
394#define VPD_BASE_OLD 0 223#define VPD_BASE_OLD 0
395#define VPD_LEN 512 224#define VPD_LEN 1024
396#define VPD_INFO_FLD_HDR_SIZE 3 225#define VPD_INFO_FLD_HDR_SIZE 3
397 226
398/* 227/*
@@ -817,23 +646,6 @@ out:
817 return 0; 646 return 0;
818} 647}
819 648
820/*
821 * csio_hw_flash_cfg_addr - return the address of the flash
822 * configuration file
823 * @hw: the HW module
824 *
825 * Return the address within the flash where the Firmware Configuration
826 * File is stored.
827 */
828static unsigned int
829csio_hw_flash_cfg_addr(struct csio_hw *hw)
830{
831 if (hw->params.sf_size == 0x100000)
832 return FPGA_FLASH_CFG_OFFSET;
833 else
834 return FLASH_CFG_OFFSET;
835}
836
837static void 649static void
838csio_hw_print_fw_version(struct csio_hw *hw, char *str) 650csio_hw_print_fw_version(struct csio_hw *hw, char *str)
839{ 651{
@@ -898,13 +710,13 @@ csio_hw_check_fw_version(struct csio_hw *hw)
898 minor = FW_HDR_FW_VER_MINOR_GET(hw->fwrev); 710 minor = FW_HDR_FW_VER_MINOR_GET(hw->fwrev);
899 micro = FW_HDR_FW_VER_MICRO_GET(hw->fwrev); 711 micro = FW_HDR_FW_VER_MICRO_GET(hw->fwrev);
900 712
901 if (major != FW_VERSION_MAJOR) { /* major mismatch - fail */ 713 if (major != FW_VERSION_MAJOR(hw)) { /* major mismatch - fail */
902 csio_err(hw, "card FW has major version %u, driver wants %u\n", 714 csio_err(hw, "card FW has major version %u, driver wants %u\n",
903 major, FW_VERSION_MAJOR); 715 major, FW_VERSION_MAJOR(hw));
904 return -EINVAL; 716 return -EINVAL;
905 } 717 }
906 718
907 if (minor == FW_VERSION_MINOR && micro == FW_VERSION_MICRO) 719 if (minor == FW_VERSION_MINOR(hw) && micro == FW_VERSION_MICRO(hw))
908 return 0; /* perfect match */ 720 return 0; /* perfect match */
909 721
910 /* Minor/micro version mismatch */ 722 /* Minor/micro version mismatch */
@@ -1044,7 +856,7 @@ static void
1044csio_set_pcie_completion_timeout(struct csio_hw *hw, u8 range) 856csio_set_pcie_completion_timeout(struct csio_hw *hw, u8 range)
1045{ 857{
1046 uint16_t val; 858 uint16_t val;
1047 uint32_t pcie_cap; 859 int pcie_cap;
1048 860
1049 if (!csio_pci_capability(hw->pdev, PCI_CAP_ID_EXP, &pcie_cap)) { 861 if (!csio_pci_capability(hw->pdev, PCI_CAP_ID_EXP, &pcie_cap)) {
1050 pci_read_config_word(hw->pdev, 862 pci_read_config_word(hw->pdev,
@@ -1056,84 +868,6 @@ csio_set_pcie_completion_timeout(struct csio_hw *hw, u8 range)
1056 } 868 }
1057} 869}
1058 870
1059
1060/*
1061 * Return the specified PCI-E Configuration Space register from our Physical
1062 * Function. We try first via a Firmware LDST Command since we prefer to let
1063 * the firmware own all of these registers, but if that fails we go for it
1064 * directly ourselves.
1065 */
1066static uint32_t
1067csio_read_pcie_cfg4(struct csio_hw *hw, int reg)
1068{
1069 u32 val = 0;
1070 struct csio_mb *mbp;
1071 int rv;
1072 struct fw_ldst_cmd *ldst_cmd;
1073
1074 mbp = mempool_alloc(hw->mb_mempool, GFP_ATOMIC);
1075 if (!mbp) {
1076 CSIO_INC_STATS(hw, n_err_nomem);
1077 pci_read_config_dword(hw->pdev, reg, &val);
1078 return val;
1079 }
1080
1081 csio_mb_ldst(hw, mbp, CSIO_MB_DEFAULT_TMO, reg);
1082
1083 rv = csio_mb_issue(hw, mbp);
1084
1085 /*
1086 * If the LDST Command succeeded, extract the returned register
1087 * value. Otherwise read it directly ourselves.
1088 */
1089 if (rv == 0) {
1090 ldst_cmd = (struct fw_ldst_cmd *)(mbp->mb);
1091 val = ntohl(ldst_cmd->u.pcie.data[0]);
1092 } else
1093 pci_read_config_dword(hw->pdev, reg, &val);
1094
1095 mempool_free(mbp, hw->mb_mempool);
1096
1097 return val;
1098} /* csio_read_pcie_cfg4 */
1099
1100static int
1101csio_hw_set_mem_win(struct csio_hw *hw)
1102{
1103 u32 bar0;
1104
1105 /*
1106 * Truncation intentional: we only read the bottom 32-bits of the
1107 * 64-bit BAR0/BAR1 ... We use the hardware backdoor mechanism to
1108 * read BAR0 instead of using pci_resource_start() because we could be
1109 * operating from within a Virtual Machine which is trapping our
1110 * accesses to our Configuration Space and we need to set up the PCI-E
1111 * Memory Window decoders with the actual addresses which will be
1112 * coming across the PCI-E link.
1113 */
1114 bar0 = csio_read_pcie_cfg4(hw, PCI_BASE_ADDRESS_0);
1115 bar0 &= PCI_BASE_ADDRESS_MEM_MASK;
1116
1117 /*
1118 * Set up memory window for accessing adapter memory ranges. (Read
1119 * back MA register to ensure that changes propagate before we attempt
1120 * to use the new values.)
1121 */
1122 csio_wr_reg32(hw, (bar0 + MEMWIN0_BASE) | BIR(0) |
1123 WINDOW(ilog2(MEMWIN0_APERTURE) - 10),
1124 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 0));
1125 csio_wr_reg32(hw, (bar0 + MEMWIN1_BASE) | BIR(0) |
1126 WINDOW(ilog2(MEMWIN1_APERTURE) - 10),
1127 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 1));
1128 csio_wr_reg32(hw, (bar0 + MEMWIN2_BASE) | BIR(0) |
1129 WINDOW(ilog2(MEMWIN2_APERTURE) - 10),
1130 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 2));
1131 csio_rd_reg32(hw, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 2));
1132 return 0;
1133} /* csio_hw_set_mem_win */
1134
1135
1136
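The removed csio_hw_set_mem_win() programs each window's size as WINDOW(ilog2(aperture) - 10), i.e. the aperture is encoded as log2 of its byte size minus 10, so a 2 KB window encodes as 1. A stand-alone sketch of that encoding (`ilog2_u32` and `window_field` are illustrative names, not driver symbols):

```c
#include <assert.h>
#include <stdint.h>

/* Integer log2 for powers of two, like the kernel's ilog2(). */
static unsigned int ilog2_u32(uint32_t v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* WINDOW field value for a given aperture size in bytes. */
static unsigned int window_field(uint32_t aperture)
{
	return ilog2_u32(aperture) - 10;
}
```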
1137/*****************************************************************************/ 871/*****************************************************************************/
1138/* HW State machine assists */ 872/* HW State machine assists */
1139/*****************************************************************************/ 873/*****************************************************************************/
@@ -1234,7 +968,9 @@ retry:
1234 for (;;) { 968 for (;;) {
1235 uint32_t pcie_fw; 969 uint32_t pcie_fw;
1236 970
971 spin_unlock_irq(&hw->lock);
1237 msleep(50); 972 msleep(50);
973 spin_lock_irq(&hw->lock);
1238 waiting -= 50; 974 waiting -= 50;
1239 975
1240 /* 976 /*
@@ -2121,9 +1857,9 @@ csio_hw_flash_config(struct csio_hw *hw, u32 *fw_cfg_param, char *path)
2121 uint32_t *cfg_data; 1857 uint32_t *cfg_data;
2122 int value_to_add = 0; 1858 int value_to_add = 0;
2123 1859
2124 if (request_firmware(&cf, CSIO_CF_FNAME, dev) < 0) { 1860 if (request_firmware(&cf, CSIO_CF_FNAME(hw), dev) < 0) {
2125 csio_err(hw, "could not find config file " CSIO_CF_FNAME 1861 csio_err(hw, "could not find config file %s, err: %d\n",
2126 ",err: %d\n", ret); 1862 CSIO_CF_FNAME(hw), ret);
2127 return -ENOENT; 1863 return -ENOENT;
2128 } 1864 }
2129 1865
@@ -2147,9 +1883,24 @@ csio_hw_flash_config(struct csio_hw *hw, u32 *fw_cfg_param, char *path)
2147 1883
2148 ret = csio_memory_write(hw, mtype, maddr, 1884 ret = csio_memory_write(hw, mtype, maddr,
2149 cf->size + value_to_add, cfg_data); 1885 cf->size + value_to_add, cfg_data);
1886
1887 if ((ret == 0) && (value_to_add != 0)) {
1888 union {
1889 u32 word;
1890 char buf[4];
1891 } last;
1892 size_t size = cf->size & ~0x3;
1893 int i;
1894
1895 last.word = cfg_data[size >> 2];
1896 for (i = value_to_add; i < 4; i++)
1897 last.buf[i] = 0;
1898 ret = csio_memory_write(hw, mtype, maddr + size, 4, &last.word);
1899 }
2150 if (ret == 0) { 1900 if (ret == 0) {
2151 csio_info(hw, "config file upgraded to " CSIO_CF_FNAME "\n"); 1901 csio_info(hw, "config file upgraded to %s\n",
2152 strncpy(path, "/lib/firmware/" CSIO_CF_FNAME, 64); 1902 CSIO_CF_FNAME(hw));
1903 snprintf(path, 64, "%s%s", "/lib/firmware/", CSIO_CF_FNAME(hw));
2153 } 1904 }
2154 1905
2155leave: 1906leave:
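The union added in the hunk above zero-fills the trailing bytes of the config file's final 32-bit word so that a whole word can be written to adapter memory. The same byte-zeroing trick in isolation (`pad_last_word` is a hypothetical helper; note the result is expressed in memory byte order, so only the endian-independent cases are asserted):

```c
#include <assert.h>
#include <stdint.h>

/* Keep the first `valid` bytes of `word` (in memory order) and zero
 * the rest, as the union in the hunk above does for the last word. */
static uint32_t pad_last_word(uint32_t word, int valid)
{
	union {
		uint32_t word;
		unsigned char buf[4];
	} last;
	int i;

	last.word = word;
	for (i = valid; i < 4; i++)
		last.buf[i] = 0;
	return last.word;
}
```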
@@ -2179,7 +1930,7 @@ csio_hw_use_fwconfig(struct csio_hw *hw, int reset, u32 *fw_cfg_param)
2179{ 1930{
2180 unsigned int mtype, maddr; 1931 unsigned int mtype, maddr;
2181 int rv; 1932 int rv;
2182 uint32_t finiver, finicsum, cfcsum; 1933 uint32_t finiver = 0, finicsum = 0, cfcsum = 0;
2183 int using_flash; 1934 int using_flash;
2184 char path[64]; 1935 char path[64];
2185 1936
@@ -2207,7 +1958,7 @@ csio_hw_use_fwconfig(struct csio_hw *hw, int reset, u32 *fw_cfg_param)
2207 * config file from flash. 1958 * config file from flash.
2208 */ 1959 */
2209 mtype = FW_MEMTYPE_CF_FLASH; 1960 mtype = FW_MEMTYPE_CF_FLASH;
2210 maddr = csio_hw_flash_cfg_addr(hw); 1961 maddr = hw->chip_ops->chip_flash_cfg_addr(hw);
2211 using_flash = 1; 1962 using_flash = 1;
2212 } else { 1963 } else {
2213 /* 1964 /*
@@ -2346,30 +2097,32 @@ csio_hw_flash_fw(struct csio_hw *hw)
2346 struct pci_dev *pci_dev = hw->pdev; 2097 struct pci_dev *pci_dev = hw->pdev;
2347 struct device *dev = &pci_dev->dev ; 2098 struct device *dev = &pci_dev->dev ;
2348 2099
2349 if (request_firmware(&fw, CSIO_FW_FNAME, dev) < 0) { 2100 if (request_firmware(&fw, CSIO_FW_FNAME(hw), dev) < 0) {
2350 csio_err(hw, "could not find firmware image " CSIO_FW_FNAME 2101 csio_err(hw, "could not find firmware image %s, err: %d\n",
2351 ",err: %d\n", ret); 2102 CSIO_FW_FNAME(hw), ret);
2352 return -EINVAL; 2103 return -EINVAL;
2353 } 2104 }
2354 2105
2355 hdr = (const struct fw_hdr *)fw->data; 2106 hdr = (const struct fw_hdr *)fw->data;
2356 fw_ver = ntohl(hdr->fw_ver); 2107 fw_ver = ntohl(hdr->fw_ver);
2357 if (FW_HDR_FW_VER_MAJOR_GET(fw_ver) != FW_VERSION_MAJOR) 2108 if (FW_HDR_FW_VER_MAJOR_GET(fw_ver) != FW_VERSION_MAJOR(hw))
2358 return -EINVAL; /* wrong major version, won't do */ 2109 return -EINVAL; /* wrong major version, won't do */
2359 2110
2360 /* 2111 /*
2361 * If the flash FW is unusable or we found something newer, load it. 2112 * If the flash FW is unusable or we found something newer, load it.
2362 */ 2113 */
2363 if (FW_HDR_FW_VER_MAJOR_GET(hw->fwrev) != FW_VERSION_MAJOR || 2114 if (FW_HDR_FW_VER_MAJOR_GET(hw->fwrev) != FW_VERSION_MAJOR(hw) ||
2364 fw_ver > hw->fwrev) { 2115 fw_ver > hw->fwrev) {
2365 ret = csio_hw_fw_upgrade(hw, hw->pfn, fw->data, fw->size, 2116 ret = csio_hw_fw_upgrade(hw, hw->pfn, fw->data, fw->size,
2366 /*force=*/false); 2117 /*force=*/false);
2367 if (!ret) 2118 if (!ret)
2368 csio_info(hw, "firmware upgraded to version %pI4 from " 2119 csio_info(hw,
2369 CSIO_FW_FNAME "\n", &hdr->fw_ver); 2120 "firmware upgraded to version %pI4 from %s\n",
2121 &hdr->fw_ver, CSIO_FW_FNAME(hw));
2370 else 2122 else
2371 csio_err(hw, "firmware upgrade failed! err=%d\n", ret); 2123 csio_err(hw, "firmware upgrade failed! err=%d\n", ret);
2372 } 2124 } else
2125 ret = -EINVAL;
2373 2126
2374 release_firmware(fw); 2127 release_firmware(fw);
2375 2128
@@ -2410,7 +2163,7 @@ csio_hw_configure(struct csio_hw *hw)
2410 /* Set pci completion timeout value to 4 seconds. */ 2163 /* Set pci completion timeout value to 4 seconds. */
2411 csio_set_pcie_completion_timeout(hw, 0xd); 2164 csio_set_pcie_completion_timeout(hw, 0xd);
2412 2165
2413 csio_hw_set_mem_win(hw); 2166 hw->chip_ops->chip_set_mem_win(hw, MEMWIN_CSIOSTOR);
2414 2167
2415 rv = csio_hw_get_fw_version(hw, &hw->fwrev); 2168 rv = csio_hw_get_fw_version(hw, &hw->fwrev);
2416 if (rv != 0) 2169 if (rv != 0)
@@ -2478,6 +2231,8 @@ csio_hw_configure(struct csio_hw *hw)
2478 } else { 2231 } else {
2479 if (hw->fw_state == CSIO_DEV_STATE_INIT) { 2232 if (hw->fw_state == CSIO_DEV_STATE_INIT) {
2480 2233
2234 hw->flags |= CSIO_HWF_USING_SOFT_PARAMS;
2235
2481 /* device parameters */ 2236 /* device parameters */
2482 rv = csio_get_device_params(hw); 2237 rv = csio_get_device_params(hw);
2483 if (rv != 0) 2238 if (rv != 0)
@@ -2651,7 +2406,7 @@ csio_hw_intr_disable(struct csio_hw *hw)
2651 2406
2652} 2407}
2653 2408
2654static void 2409void
2655csio_hw_fatal_err(struct csio_hw *hw) 2410csio_hw_fatal_err(struct csio_hw *hw)
2656{ 2411{
2657 csio_set_reg_field(hw, SGE_CONTROL, GLOBALENABLE, 0); 2412 csio_set_reg_field(hw, SGE_CONTROL, GLOBALENABLE, 0);
@@ -2990,14 +2745,6 @@ csio_hws_pcierr(struct csio_hw *hw, enum csio_hw_ev evt)
2990/* END: HW SM */ 2745/* END: HW SM */
2991/*****************************************************************************/ 2746/*****************************************************************************/
2992 2747
2993/* Slow path handlers */
2994struct intr_info {
2995 unsigned int mask; /* bits to check in interrupt status */
2996 const char *msg; /* message to print or NULL */
2997 short stat_idx; /* stat counter to increment or -1 */
2998 unsigned short fatal; /* whether the condition reported is fatal */
2999};
3000
3001/* 2748/*
3002 * csio_handle_intr_status - table driven interrupt handler 2749 * csio_handle_intr_status - table driven interrupt handler
3003 * @hw: HW instance 2750 * @hw: HW instance
@@ -3011,7 +2758,7 @@ struct intr_info {
3011 * by an entry specifying mask 0. Returns the number of fatal interrupt 2758 * by an entry specifying mask 0. Returns the number of fatal interrupt
3012 * conditions. 2759 * conditions.
3013 */ 2760 */
3014static int 2761int
3015csio_handle_intr_status(struct csio_hw *hw, unsigned int reg, 2762csio_handle_intr_status(struct csio_hw *hw, unsigned int reg,
3016 const struct intr_info *acts) 2763 const struct intr_info *acts)
3017{ 2764{
@@ -3038,80 +2785,6 @@ csio_handle_intr_status(struct csio_hw *hw, unsigned int reg,
3038} 2785}
3039 2786
3040/* 2787/*
3041 * Interrupt handler for the PCIE module.
3042 */
3043static void
3044csio_pcie_intr_handler(struct csio_hw *hw)
3045{
3046 static struct intr_info sysbus_intr_info[] = {
3047 { RNPP, "RXNP array parity error", -1, 1 },
3048 { RPCP, "RXPC array parity error", -1, 1 },
3049 { RCIP, "RXCIF array parity error", -1, 1 },
3050 { RCCP, "Rx completions control array parity error", -1, 1 },
3051 { RFTP, "RXFT array parity error", -1, 1 },
3052 { 0, NULL, 0, 0 }
3053 };
3054 static struct intr_info pcie_port_intr_info[] = {
3055 { TPCP, "TXPC array parity error", -1, 1 },
3056 { TNPP, "TXNP array parity error", -1, 1 },
3057 { TFTP, "TXFT array parity error", -1, 1 },
3058 { TCAP, "TXCA array parity error", -1, 1 },
3059 { TCIP, "TXCIF array parity error", -1, 1 },
3060 { RCAP, "RXCA array parity error", -1, 1 },
3061 { OTDD, "outbound request TLP discarded", -1, 1 },
3062 { RDPE, "Rx data parity error", -1, 1 },
3063 { TDUE, "Tx uncorrectable data error", -1, 1 },
3064 { 0, NULL, 0, 0 }
3065 };
3066 static struct intr_info pcie_intr_info[] = {
3067 { MSIADDRLPERR, "MSI AddrL parity error", -1, 1 },
3068 { MSIADDRHPERR, "MSI AddrH parity error", -1, 1 },
3069 { MSIDATAPERR, "MSI data parity error", -1, 1 },
3070 { MSIXADDRLPERR, "MSI-X AddrL parity error", -1, 1 },
3071 { MSIXADDRHPERR, "MSI-X AddrH parity error", -1, 1 },
3072 { MSIXDATAPERR, "MSI-X data parity error", -1, 1 },
3073 { MSIXDIPERR, "MSI-X DI parity error", -1, 1 },
3074 { PIOCPLPERR, "PCI PIO completion FIFO parity error", -1, 1 },
3075 { PIOREQPERR, "PCI PIO request FIFO parity error", -1, 1 },
3076 { TARTAGPERR, "PCI PCI target tag FIFO parity error", -1, 1 },
3077 { CCNTPERR, "PCI CMD channel count parity error", -1, 1 },
3078 { CREQPERR, "PCI CMD channel request parity error", -1, 1 },
3079 { CRSPPERR, "PCI CMD channel response parity error", -1, 1 },
3080 { DCNTPERR, "PCI DMA channel count parity error", -1, 1 },
3081 { DREQPERR, "PCI DMA channel request parity error", -1, 1 },
3082 { DRSPPERR, "PCI DMA channel response parity error", -1, 1 },
3083 { HCNTPERR, "PCI HMA channel count parity error", -1, 1 },
3084 { HREQPERR, "PCI HMA channel request parity error", -1, 1 },
3085 { HRSPPERR, "PCI HMA channel response parity error", -1, 1 },
3086 { CFGSNPPERR, "PCI config snoop FIFO parity error", -1, 1 },
3087 { FIDPERR, "PCI FID parity error", -1, 1 },
3088 { INTXCLRPERR, "PCI INTx clear parity error", -1, 1 },
3089 { MATAGPERR, "PCI MA tag parity error", -1, 1 },
3090 { PIOTAGPERR, "PCI PIO tag parity error", -1, 1 },
3091 { RXCPLPERR, "PCI Rx completion parity error", -1, 1 },
3092 { RXWRPERR, "PCI Rx write parity error", -1, 1 },
3093 { RPLPERR, "PCI replay buffer parity error", -1, 1 },
3094 { PCIESINT, "PCI core secondary fault", -1, 1 },
3095 { PCIEPINT, "PCI core primary fault", -1, 1 },
3096 { UNXSPLCPLERR, "PCI unexpected split completion error", -1,
3097 0 },
3098 { 0, NULL, 0, 0 }
3099 };
3100
3101 int fat;
3102
3103 fat = csio_handle_intr_status(hw,
3104 PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
3105 sysbus_intr_info) +
3106 csio_handle_intr_status(hw,
3107 PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
3108 pcie_port_intr_info) +
3109 csio_handle_intr_status(hw, PCIE_INT_CAUSE, pcie_intr_info);
3110 if (fat)
3111 csio_hw_fatal_err(hw);
3112}
3113
3114/*
3115 * TP interrupt handler. 2788 * TP interrupt handler.
3116 */ 2789 */
3117static void csio_tp_intr_handler(struct csio_hw *hw) 2790static void csio_tp_intr_handler(struct csio_hw *hw)
@@ -3517,7 +3190,7 @@ static void csio_ncsi_intr_handler(struct csio_hw *hw)
3517 */ 3190 */
3518static void csio_xgmac_intr_handler(struct csio_hw *hw, int port) 3191static void csio_xgmac_intr_handler(struct csio_hw *hw, int port)
3519{ 3192{
3520 uint32_t v = csio_rd_reg32(hw, PORT_REG(port, XGMAC_PORT_INT_CAUSE)); 3193 uint32_t v = csio_rd_reg32(hw, CSIO_MAC_INT_CAUSE_REG(hw, port));
3521 3194
3522 v &= TXFIFO_PRTY_ERR | RXFIFO_PRTY_ERR; 3195 v &= TXFIFO_PRTY_ERR | RXFIFO_PRTY_ERR;
3523 if (!v) 3196 if (!v)
@@ -3527,7 +3200,7 @@ static void csio_xgmac_intr_handler(struct csio_hw *hw, int port)
3527 csio_fatal(hw, "XGMAC %d Tx FIFO parity error\n", port); 3200 csio_fatal(hw, "XGMAC %d Tx FIFO parity error\n", port);
3528 if (v & RXFIFO_PRTY_ERR) 3201 if (v & RXFIFO_PRTY_ERR)
3529 csio_fatal(hw, "XGMAC %d Rx FIFO parity error\n", port); 3202 csio_fatal(hw, "XGMAC %d Rx FIFO parity error\n", port);
3530 csio_wr_reg32(hw, v, PORT_REG(port, XGMAC_PORT_INT_CAUSE)); 3203 csio_wr_reg32(hw, v, CSIO_MAC_INT_CAUSE_REG(hw, port));
3531 csio_hw_fatal_err(hw); 3204 csio_hw_fatal_err(hw);
3532} 3205}
3533 3206
@@ -3596,7 +3269,7 @@ csio_hw_slow_intr_handler(struct csio_hw *hw)
3596 csio_xgmac_intr_handler(hw, 3); 3269 csio_xgmac_intr_handler(hw, 3);
3597 3270
3598 if (cause & PCIE) 3271 if (cause & PCIE)
3599 csio_pcie_intr_handler(hw); 3272 hw->chip_ops->chip_pcie_intr_handler(hw);
3600 3273
3601 if (cause & MC) 3274 if (cause & MC)
3602 csio_mem_intr_handler(hw, MEM_MC); 3275 csio_mem_intr_handler(hw, MEM_MC);
@@ -4257,6 +3930,7 @@ csio_hw_get_device_id(struct csio_hw *hw)
4257 &hw->params.pci.device_id); 3930 &hw->params.pci.device_id);
4258 3931
4259 csio_dev_id_cached(hw); 3932 csio_dev_id_cached(hw);
3933 hw->chip_id = (hw->params.pci.device_id & CSIO_HW_CHIP_MASK);
4260 3934
4261} /* csio_hw_get_device_id */ 3935} /* csio_hw_get_device_id */
4262 3936
@@ -4275,19 +3949,21 @@ csio_hw_set_description(struct csio_hw *hw, uint16_t ven_id, uint16_t dev_id)
4275 prot_type = (dev_id & CSIO_ASIC_DEVID_PROTO_MASK); 3949 prot_type = (dev_id & CSIO_ASIC_DEVID_PROTO_MASK);
4276 adap_type = (dev_id & CSIO_ASIC_DEVID_TYPE_MASK); 3950 adap_type = (dev_id & CSIO_ASIC_DEVID_TYPE_MASK);
4277 3951
4278 if (prot_type == CSIO_FPGA) { 3952 if (prot_type == CSIO_T4_FCOE_ASIC) {
3953 memcpy(hw->hw_ver,
3954 csio_t4_fcoe_adapters[adap_type].model_no, 16);
4279 memcpy(hw->model_desc, 3955 memcpy(hw->model_desc,
4280 csio_fcoe_adapters[13].description, 32); 3956 csio_t4_fcoe_adapters[adap_type].description,
4281 } else if (prot_type == CSIO_T4_FCOE_ASIC) { 3957 32);
3958 } else if (prot_type == CSIO_T5_FCOE_ASIC) {
4282 memcpy(hw->hw_ver, 3959 memcpy(hw->hw_ver,
4283 csio_fcoe_adapters[adap_type].model_no, 16); 3960 csio_t5_fcoe_adapters[adap_type].model_no, 16);
4284 memcpy(hw->model_desc, 3961 memcpy(hw->model_desc,
4285 csio_fcoe_adapters[adap_type].description, 32); 3962 csio_t5_fcoe_adapters[adap_type].description,
3963 32);
4286 } else { 3964 } else {
4287 char tempName[32] = "Chelsio FCoE Controller"; 3965 char tempName[32] = "Chelsio FCoE Controller";
4288 memcpy(hw->model_desc, tempName, 32); 3966 memcpy(hw->model_desc, tempName, 32);
4289
4290 CSIO_DB_ASSERT(0);
4291 } 3967 }
4292 } 3968 }
4293} /* csio_hw_set_description */ 3969} /* csio_hw_set_description */
@@ -4316,6 +3992,9 @@ csio_hw_init(struct csio_hw *hw)
4316 3992
4317 strcpy(hw->name, CSIO_HW_NAME); 3993 strcpy(hw->name, CSIO_HW_NAME);
4318 3994
3995 /* Initialize the HW chip ops with T4/T5 specific ops */
3996 hw->chip_ops = csio_is_t4(hw->chip_id) ? &t4_ops : &t5_ops;
3997
4319 /* Set the model & its description */ 3998 /* Set the model & its description */
4320 3999
4321 ven_id = hw->params.pci.vendor_id; 4000 ven_id = hw->params.pci.vendor_id;
diff --git a/drivers/scsi/csiostor/csio_hw.h b/drivers/scsi/csiostor/csio_hw.h
index 9edcca4c71af..489fc095cb03 100644
--- a/drivers/scsi/csiostor/csio_hw.h
+++ b/drivers/scsi/csiostor/csio_hw.h
@@ -48,6 +48,7 @@
48#include <scsi/scsi_device.h> 48#include <scsi/scsi_device.h>
49#include <scsi/scsi_transport_fc.h> 49#include <scsi/scsi_transport_fc.h>
50 50
51#include "csio_hw_chip.h"
51#include "csio_wr.h" 52#include "csio_wr.h"
52#include "csio_mb.h" 53#include "csio_mb.h"
53#include "csio_scsi.h" 54#include "csio_scsi.h"
@@ -60,13 +61,6 @@
60 */ 61 */
61#define FW_HOSTERROR 255 62#define FW_HOSTERROR 255
62 63
63#define CSIO_FW_FNAME "cxgb4/t4fw.bin"
64#define CSIO_CF_FNAME "cxgb4/t4-config.txt"
65
66#define FW_VERSION_MAJOR 1
67#define FW_VERSION_MINOR 2
68#define FW_VERSION_MICRO 8
69
70#define CSIO_HW_NAME "Chelsio FCoE Adapter" 64#define CSIO_HW_NAME "Chelsio FCoE Adapter"
71#define CSIO_MAX_PFN 8 65#define CSIO_MAX_PFN 8
72#define CSIO_MAX_PPORTS 4 66#define CSIO_MAX_PPORTS 4
@@ -123,8 +117,6 @@ extern int csio_msi;
123#define CSIO_VENDOR_ID 0x1425 117#define CSIO_VENDOR_ID 0x1425
124#define CSIO_ASIC_DEVID_PROTO_MASK 0xFF00 118#define CSIO_ASIC_DEVID_PROTO_MASK 0xFF00
125#define CSIO_ASIC_DEVID_TYPE_MASK 0x00FF 119#define CSIO_ASIC_DEVID_TYPE_MASK 0x00FF
126#define CSIO_FPGA 0xA000
127#define CSIO_T4_FCOE_ASIC 0x4600
128 120
129#define CSIO_GLBL_INTR_MASK (CIM | MPS | PL | PCIE | MC | EDC0 | \ 121#define CSIO_GLBL_INTR_MASK (CIM | MPS | PL | PCIE | MC | EDC0 | \
130 EDC1 | LE | TP | MA | PM_TX | PM_RX | \ 122 EDC1 | LE | TP | MA | PM_TX | PM_RX | \
@@ -207,17 +199,6 @@ enum {
207 SF_SIZE = SF_SEC_SIZE * 16, /* serial flash size */ 199 SF_SIZE = SF_SEC_SIZE * 16, /* serial flash size */
208}; 200};
209 201
210enum { MEM_EDC0, MEM_EDC1, MEM_MC };
211
212enum {
213 MEMWIN0_APERTURE = 2048,
214 MEMWIN0_BASE = 0x1b800,
215 MEMWIN1_APERTURE = 32768,
216 MEMWIN1_BASE = 0x28000,
217 MEMWIN2_APERTURE = 65536,
218 MEMWIN2_BASE = 0x30000,
219};
220
221/* serial flash and firmware constants */ 202/* serial flash and firmware constants */
222enum { 203enum {
223 SF_ATTEMPTS = 10, /* max retries for SF operations */ 204 SF_ATTEMPTS = 10, /* max retries for SF operations */
@@ -239,9 +220,6 @@ enum {
239 FLASH_CFG_MAX_SIZE = 0x10000 , /* max size of the flash config file*/ 220 FLASH_CFG_MAX_SIZE = 0x10000 , /* max size of the flash config file*/
240 FLASH_CFG_OFFSET = 0x1f0000, 221 FLASH_CFG_OFFSET = 0x1f0000,
241 FLASH_CFG_START_SEC = FLASH_CFG_OFFSET / SF_SEC_SIZE, 222 FLASH_CFG_START_SEC = FLASH_CFG_OFFSET / SF_SEC_SIZE,
242 FPGA_FLASH_CFG_OFFSET = 0xf0000 , /* if FPGA mode, then cfg file is
243 * at 1MB - 64KB */
244 FPGA_FLASH_CFG_START_SEC = FPGA_FLASH_CFG_OFFSET / SF_SEC_SIZE,
245}; 223};
246 224
247/* 225/*
@@ -259,6 +237,8 @@ enum {
259 FLASH_FW_START = FLASH_START(FLASH_FW_START_SEC), 237 FLASH_FW_START = FLASH_START(FLASH_FW_START_SEC),
260 FLASH_FW_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FW_NSECS), 238 FLASH_FW_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FW_NSECS),
261 239
240 /* Location of Firmware Configuration File in FLASH. */
241 FLASH_CFG_START = FLASH_START(FLASH_CFG_START_SEC),
262}; 242};
263 243
264#undef FLASH_START 244#undef FLASH_START
@@ -310,7 +290,7 @@ struct csio_adap_desc {
310struct pci_params { 290struct pci_params {
311 uint16_t vendor_id; 291 uint16_t vendor_id;
312 uint16_t device_id; 292 uint16_t device_id;
313 uint32_t vpd_cap_addr; 293 int vpd_cap_addr;
314 uint16_t speed; 294 uint16_t speed;
315 uint8_t width; 295 uint8_t width;
316}; 296};
@@ -513,6 +493,7 @@ struct csio_hw {
513 uint32_t fwrev; 493 uint32_t fwrev;
514 uint32_t tp_vers; 494 uint32_t tp_vers;
515 char chip_ver; 495 char chip_ver;
496 uint16_t chip_id; /* Tells T4/T5 chip */
516 uint32_t cfg_finiver; 497 uint32_t cfg_finiver;
517 uint32_t cfg_finicsum; 498 uint32_t cfg_finicsum;
518 uint32_t cfg_cfcsum; 499 uint32_t cfg_cfcsum;
@@ -556,6 +537,9 @@ struct csio_hw {
556 */ 537 */
557 538
558 struct csio_fcoe_res_info fres_info; /* Fcoe resource info */ 539 struct csio_fcoe_res_info fres_info; /* Fcoe resource info */
540 struct csio_hw_chip_ops *chip_ops; /* T4/T5 Chip specific
541 * Operations
542 */
559 543
560 /* MSIX vectors */ 544 /* MSIX vectors */
561 struct csio_msix_entries msix_entries[CSIO_MAX_MSIX_VECS]; 545 struct csio_msix_entries msix_entries[CSIO_MAX_MSIX_VECS];
@@ -636,9 +620,16 @@ csio_us_to_core_ticks(struct csio_hw *hw, uint32_t us)
636#define csio_dbg(__hw, __fmt, ...) 620#define csio_dbg(__hw, __fmt, ...)
637#endif 621#endif
638 622
623int csio_hw_wait_op_done_val(struct csio_hw *, int, uint32_t, int,
624 int, int, uint32_t *);
625void csio_hw_tp_wr_bits_indirect(struct csio_hw *, unsigned int,
626 unsigned int, unsigned int);
639int csio_mgmt_req_lookup(struct csio_mgmtm *, struct csio_ioreq *); 627int csio_mgmt_req_lookup(struct csio_mgmtm *, struct csio_ioreq *);
640void csio_hw_intr_disable(struct csio_hw *); 628void csio_hw_intr_disable(struct csio_hw *);
641int csio_hw_slow_intr_handler(struct csio_hw *hw); 629int csio_hw_slow_intr_handler(struct csio_hw *);
630int csio_handle_intr_status(struct csio_hw *, unsigned int,
631 const struct intr_info *);
632
642int csio_hw_start(struct csio_hw *); 633int csio_hw_start(struct csio_hw *);
643int csio_hw_stop(struct csio_hw *); 634int csio_hw_stop(struct csio_hw *);
644int csio_hw_reset(struct csio_hw *); 635int csio_hw_reset(struct csio_hw *);
@@ -647,19 +638,17 @@ int csio_is_hw_removing(struct csio_hw *);
647 638
648int csio_fwevtq_handler(struct csio_hw *); 639int csio_fwevtq_handler(struct csio_hw *);
649void csio_evtq_worker(struct work_struct *); 640void csio_evtq_worker(struct work_struct *);
650int csio_enqueue_evt(struct csio_hw *hw, enum csio_evt type, 641int csio_enqueue_evt(struct csio_hw *, enum csio_evt, void *, uint16_t);
651 void *evt_msg, uint16_t len);
652void csio_evtq_flush(struct csio_hw *hw); 642void csio_evtq_flush(struct csio_hw *hw);
653 643
654int csio_request_irqs(struct csio_hw *); 644int csio_request_irqs(struct csio_hw *);
655void csio_intr_enable(struct csio_hw *); 645void csio_intr_enable(struct csio_hw *);
656void csio_intr_disable(struct csio_hw *, bool); 646void csio_intr_disable(struct csio_hw *, bool);
647void csio_hw_fatal_err(struct csio_hw *);
657 648
658struct csio_lnode *csio_lnode_alloc(struct csio_hw *); 649struct csio_lnode *csio_lnode_alloc(struct csio_hw *);
659int csio_config_queues(struct csio_hw *); 650int csio_config_queues(struct csio_hw *);
660 651
661int csio_hw_mc_read(struct csio_hw *, uint32_t, __be32 *, uint64_t *);
662int csio_hw_edc_read(struct csio_hw *, int, uint32_t, __be32 *, uint64_t *);
663int csio_hw_init(struct csio_hw *); 652int csio_hw_init(struct csio_hw *);
664void csio_hw_exit(struct csio_hw *); 653void csio_hw_exit(struct csio_hw *);
665#endif /* ifndef __CSIO_HW_H__ */ 654#endif /* ifndef __CSIO_HW_H__ */
diff --git a/drivers/scsi/csiostor/csio_hw_chip.h b/drivers/scsi/csiostor/csio_hw_chip.h
new file mode 100644
index 000000000000..bca0de61ae80
--- /dev/null
+++ b/drivers/scsi/csiostor/csio_hw_chip.h
@@ -0,0 +1,175 @@
1/*
2 * This file is part of the Chelsio FCoE driver for Linux.
3 *
4 * Copyright (c) 2008-2013 Chelsio Communications, Inc. All rights reserved.
5 *
6 * This software is available to you under a choice of one of two
7 * licenses. You may choose to be licensed under the terms of the GNU
8 * General Public License (GPL) Version 2, available from the file
9 * OpenIB.org BSD license below:
10 *
11 * Redistribution and use in source and binary forms, with or
12 * without modification, are permitted provided that the following
13 * conditions are met:
14 *
15 * - Redistributions of source code must retain the above
16 * copyright notice, this list of conditions and the following
17 * disclaimer.
18 *
19 * - Redistributions in binary form must reproduce the above
20 * copyright notice, this list of conditions and the following
21 * disclaimer in the documentation and/or other materials
22 * provided with the distribution.
23 *
24 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
25 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
26 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
27 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
28 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
29 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
30 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
31 * SOFTWARE.
32 */
33
34#ifndef __CSIO_HW_CHIP_H__
35#define __CSIO_HW_CHIP_H__
36
37#include "csio_defs.h"
38
39/* FCoE device IDs for T4 */
40#define CSIO_DEVID_T440DBG_FCOE 0x4600
41#define CSIO_DEVID_T420CR_FCOE 0x4601
42#define CSIO_DEVID_T422CR_FCOE 0x4602
43#define CSIO_DEVID_T440CR_FCOE 0x4603
44#define CSIO_DEVID_T420BCH_FCOE 0x4604
45#define CSIO_DEVID_T440BCH_FCOE 0x4605
46#define CSIO_DEVID_T440CH_FCOE 0x4606
47#define CSIO_DEVID_T420SO_FCOE 0x4607
48#define CSIO_DEVID_T420CX_FCOE 0x4608
49#define CSIO_DEVID_T420BT_FCOE 0x4609
50#define CSIO_DEVID_T404BT_FCOE 0x460A
51#define CSIO_DEVID_B420_FCOE 0x460B
52#define CSIO_DEVID_B404_FCOE 0x460C
53#define CSIO_DEVID_T480CR_FCOE 0x460D
54#define CSIO_DEVID_T440LPCR_FCOE 0x460E
55#define CSIO_DEVID_AMSTERDAM_T4_FCOE 0x460F
56#define CSIO_DEVID_HUAWEI_T480_FCOE 0x4680
57#define CSIO_DEVID_HUAWEI_T440_FCOE 0x4681
58#define CSIO_DEVID_HUAWEI_STG310_FCOE 0x4682
59#define CSIO_DEVID_ACROMAG_XMC_XAUI 0x4683
60#define CSIO_DEVID_ACROMAG_XMC_SFP_FCOE 0x4684
61#define CSIO_DEVID_QUANTA_MEZZ_SFP_FCOE 0x4685
62#define CSIO_DEVID_HUAWEI_10GT_FCOE 0x4686
63#define CSIO_DEVID_HUAWEI_T440_TOE_FCOE 0x4687
64
65/* FCoE device IDs for T5 */
66#define CSIO_DEVID_T580DBG_FCOE 0x5600
67#define CSIO_DEVID_T520CR_FCOE 0x5601
68#define CSIO_DEVID_T522CR_FCOE 0x5602
69#define CSIO_DEVID_T540CR_FCOE 0x5603
70#define CSIO_DEVID_T520BCH_FCOE 0x5604
71#define CSIO_DEVID_T540BCH_FCOE 0x5605
72#define CSIO_DEVID_T540CH_FCOE 0x5606
73#define CSIO_DEVID_T520SO_FCOE 0x5607
74#define CSIO_DEVID_T520CX_FCOE 0x5608
75#define CSIO_DEVID_T520BT_FCOE 0x5609
76#define CSIO_DEVID_T504BT_FCOE 0x560A
77#define CSIO_DEVID_B520_FCOE 0x560B
78#define CSIO_DEVID_B504_FCOE 0x560C
79#define CSIO_DEVID_T580CR2_FCOE 0x560D
80#define CSIO_DEVID_T540LPCR_FCOE 0x560E
81#define CSIO_DEVID_AMSTERDAM_T5_FCOE 0x560F
82#define CSIO_DEVID_T580LPCR_FCOE 0x5610
83#define CSIO_DEVID_T520LLCR_FCOE 0x5611
84#define CSIO_DEVID_T560CR_FCOE 0x5612
85#define CSIO_DEVID_T580CR_FCOE 0x5613
86
87/* Define MACRO values */
88#define CSIO_HW_T4 0x4000
89#define CSIO_T4_FCOE_ASIC 0x4600
90#define CSIO_HW_T5 0x5000
91#define CSIO_T5_FCOE_ASIC 0x5600
92#define CSIO_HW_CHIP_MASK 0xF000
93#define T4_REGMAP_SIZE (160 * 1024)
94#define T5_REGMAP_SIZE (332 * 1024)
95#define FW_FNAME_T4 "cxgb4/t4fw.bin"
96#define FW_FNAME_T5 "cxgb4/t5fw.bin"
97#define FW_CFG_NAME_T4 "cxgb4/t4-config.txt"
98#define FW_CFG_NAME_T5 "cxgb4/t5-config.txt"
99
100/* Define static functions */
101static inline int csio_is_t4(uint16_t chip)
102{
103 return (chip == CSIO_HW_T4);
104}
105
106static inline int csio_is_t5(uint16_t chip)
107{
108 return (chip == CSIO_HW_T5);
109}
110
111/* Define MACRO DEFINITIONS */
112#define CSIO_DEVICE(devid, idx) \
113 { PCI_VENDOR_ID_CHELSIO, (devid), PCI_ANY_ID, PCI_ANY_ID, 0, 0, (idx) }
114
115#define CSIO_HW_PIDX(hw, index) \
116 (csio_is_t4(hw->chip_id) ? (PIDX(index)) : \
117 (PIDX_T5(index) | DBTYPE(1U)))
118
119#define CSIO_HW_LP_INT_THRESH(hw, val) \
120 (csio_is_t4(hw->chip_id) ? (LP_INT_THRESH(val)) : \
121 (V_LP_INT_THRESH_T5(val)))
122
123#define CSIO_HW_M_LP_INT_THRESH(hw) \
124 (csio_is_t4(hw->chip_id) ? (LP_INT_THRESH_MASK) : (M_LP_INT_THRESH_T5))
125
126#define CSIO_MAC_INT_CAUSE_REG(hw, port) \
127 (csio_is_t4(hw->chip_id) ? (PORT_REG(port, XGMAC_PORT_INT_CAUSE)) : \
128 (T5_PORT_REG(port, MAC_PORT_INT_CAUSE)))
129
130#define FW_VERSION_MAJOR(hw) (csio_is_t4(hw->chip_id) ? 1 : 0)
131#define FW_VERSION_MINOR(hw) (csio_is_t4(hw->chip_id) ? 2 : 0)
132#define FW_VERSION_MICRO(hw) (csio_is_t4(hw->chip_id) ? 8 : 0)
133
134#define CSIO_FW_FNAME(hw) \
135 (csio_is_t4(hw->chip_id) ? FW_FNAME_T4 : FW_FNAME_T5)
136
137#define CSIO_CF_FNAME(hw) \
138 (csio_is_t4(hw->chip_id) ? FW_CFG_NAME_T4 : FW_CFG_NAME_T5)
139
140/* Declare ENUMS */
141enum { MEM_EDC0, MEM_EDC1, MEM_MC, MEM_MC0 = MEM_MC, MEM_MC1 };
142
143enum {
144 MEMWIN_APERTURE = 2048,
145 MEMWIN_BASE = 0x1b800,
146 MEMWIN_CSIOSTOR = 6, /* PCI-e Memory Window access */
147};
148
149/* Slow path handlers */
150struct intr_info {
151 unsigned int mask; /* bits to check in interrupt status */
152 const char *msg; /* message to print or NULL */
153 short stat_idx; /* stat counter to increment or -1 */
154 unsigned short fatal; /* whether the condition reported is fatal */
155};
156
157/* T4/T5 Chip specific ops */
158struct csio_hw;
159struct csio_hw_chip_ops {
160 int (*chip_set_mem_win)(struct csio_hw *, uint32_t);
161 void (*chip_pcie_intr_handler)(struct csio_hw *);
162 uint32_t (*chip_flash_cfg_addr)(struct csio_hw *);
163 int (*chip_mc_read)(struct csio_hw *, int, uint32_t,
164 __be32 *, uint64_t *);
165 int (*chip_edc_read)(struct csio_hw *, int, uint32_t,
166 __be32 *, uint64_t *);
167 int (*chip_memory_rw)(struct csio_hw *, u32, int, u32,
168 u32, uint32_t *, int);
169 void (*chip_dfs_create_ext_mem)(struct csio_hw *);
170};
171
172extern struct csio_hw_chip_ops t4_ops;
173extern struct csio_hw_chip_ops t5_ops;
174
175#endif /* #ifndef __CSIO_HW_CHIP_H__ */
diff --git a/drivers/scsi/csiostor/csio_hw_t4.c b/drivers/scsi/csiostor/csio_hw_t4.c
new file mode 100644
index 000000000000..89ecbac5478f
--- /dev/null
+++ b/drivers/scsi/csiostor/csio_hw_t4.c
@@ -0,0 +1,403 @@
1/*
2 * This file is part of the Chelsio FCoE driver for Linux.
3 *
4 * Copyright (c) 2008-2013 Chelsio Communications, Inc. All rights reserved.
5 *
6 * This software is available to you under a choice of one of two
7 * licenses. You may choose to be licensed under the terms of the GNU
8 * General Public License (GPL) Version 2, available from the file
9 * OpenIB.org BSD license below:
10 *
11 * Redistribution and use in source and binary forms, with or
12 * without modification, are permitted provided that the following
13 * conditions are met:
14 *
15 * - Redistributions of source code must retain the above
16 * copyright notice, this list of conditions and the following
17 * disclaimer.
18 *
19 * - Redistributions in binary form must reproduce the above
18 * copyright notice, this list of conditions and the following
19 * disclaimer in the documentation and/or other materials
20 * provided with the distribution.
21 *
22 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
23 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
24 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
25 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
26 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
27 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
28 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
29 * SOFTWARE.
30 */
31
32#include "csio_hw.h"
33#include "csio_init.h"
34
35/*
36 * Return the specified PCI-E Configuration Space register from our Physical
37 * Function. We try first via a Firmware LDST Command since we prefer to let
38 * the firmware own all of these registers, but if that fails we go for it
39 * directly ourselves.
40 */
41static uint32_t
42csio_t4_read_pcie_cfg4(struct csio_hw *hw, int reg)
43{
44 u32 val = 0;
45 struct csio_mb *mbp;
46 int rv;
47 struct fw_ldst_cmd *ldst_cmd;
48
49 mbp = mempool_alloc(hw->mb_mempool, GFP_ATOMIC);
50 if (!mbp) {
51 CSIO_INC_STATS(hw, n_err_nomem);
52 pci_read_config_dword(hw->pdev, reg, &val);
53 return val;
54 }
55
56 csio_mb_ldst(hw, mbp, CSIO_MB_DEFAULT_TMO, reg);
57 rv = csio_mb_issue(hw, mbp);
58
59 /*
 60 * If the LDST Command succeeded, extract the returned register
 61 * value. Otherwise read it directly ourselves.
62 */
63 if (rv == 0) {
64 ldst_cmd = (struct fw_ldst_cmd *)(mbp->mb);
65 val = ntohl(ldst_cmd->u.pcie.data[0]);
66 } else
67 pci_read_config_dword(hw->pdev, reg, &val);
68
69 mempool_free(mbp, hw->mb_mempool);
70
71 return val;
72}
73
74static int
75csio_t4_set_mem_win(struct csio_hw *hw, uint32_t win)
76{
77 u32 bar0;
78 u32 mem_win_base;
79
80 /*
81 * Truncation intentional: we only read the bottom 32-bits of the
82 * 64-bit BAR0/BAR1 ... We use the hardware backdoor mechanism to
83 * read BAR0 instead of using pci_resource_start() because we could be
84 * operating from within a Virtual Machine which is trapping our
85 * accesses to our Configuration Space and we need to set up the PCI-E
86 * Memory Window decoders with the actual addresses which will be
87 * coming across the PCI-E link.
88 */
89 bar0 = csio_t4_read_pcie_cfg4(hw, PCI_BASE_ADDRESS_0);
90 bar0 &= PCI_BASE_ADDRESS_MEM_MASK;
91
92 mem_win_base = bar0 + MEMWIN_BASE;
93
94 /*
95 * Set up memory window for accessing adapter memory ranges. (Read
96 * back MA register to ensure that changes propagate before we attempt
97 * to use the new values.)
98 */
99 csio_wr_reg32(hw, mem_win_base | BIR(0) |
100 WINDOW(ilog2(MEMWIN_APERTURE) - 10),
101 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
102 csio_rd_reg32(hw,
103 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
104 return 0;
105}
106
107/*
108 * Interrupt handler for the PCIE module.
109 */
110static void
111csio_t4_pcie_intr_handler(struct csio_hw *hw)
112{
113 static struct intr_info sysbus_intr_info[] = {
114 { RNPP, "RXNP array parity error", -1, 1 },
115 { RPCP, "RXPC array parity error", -1, 1 },
116 { RCIP, "RXCIF array parity error", -1, 1 },
117 { RCCP, "Rx completions control array parity error", -1, 1 },
118 { RFTP, "RXFT array parity error", -1, 1 },
119 { 0, NULL, 0, 0 }
120 };
121 static struct intr_info pcie_port_intr_info[] = {
122 { TPCP, "TXPC array parity error", -1, 1 },
123 { TNPP, "TXNP array parity error", -1, 1 },
124 { TFTP, "TXFT array parity error", -1, 1 },
125 { TCAP, "TXCA array parity error", -1, 1 },
126 { TCIP, "TXCIF array parity error", -1, 1 },
127 { RCAP, "RXCA array parity error", -1, 1 },
128 { OTDD, "outbound request TLP discarded", -1, 1 },
129 { RDPE, "Rx data parity error", -1, 1 },
130 { TDUE, "Tx uncorrectable data error", -1, 1 },
131 { 0, NULL, 0, 0 }
132 };
133
134 static struct intr_info pcie_intr_info[] = {
135 { MSIADDRLPERR, "MSI AddrL parity error", -1, 1 },
136 { MSIADDRHPERR, "MSI AddrH parity error", -1, 1 },
137 { MSIDATAPERR, "MSI data parity error", -1, 1 },
138 { MSIXADDRLPERR, "MSI-X AddrL parity error", -1, 1 },
139 { MSIXADDRHPERR, "MSI-X AddrH parity error", -1, 1 },
140 { MSIXDATAPERR, "MSI-X data parity error", -1, 1 },
141 { MSIXDIPERR, "MSI-X DI parity error", -1, 1 },
142 { PIOCPLPERR, "PCI PIO completion FIFO parity error", -1, 1 },
143 { PIOREQPERR, "PCI PIO request FIFO parity error", -1, 1 },
144 { TARTAGPERR, "PCI PCI target tag FIFO parity error", -1, 1 },
145 { CCNTPERR, "PCI CMD channel count parity error", -1, 1 },
146 { CREQPERR, "PCI CMD channel request parity error", -1, 1 },
147 { CRSPPERR, "PCI CMD channel response parity error", -1, 1 },
148 { DCNTPERR, "PCI DMA channel count parity error", -1, 1 },
149 { DREQPERR, "PCI DMA channel request parity error", -1, 1 },
150 { DRSPPERR, "PCI DMA channel response parity error", -1, 1 },
151 { HCNTPERR, "PCI HMA channel count parity error", -1, 1 },
152 { HREQPERR, "PCI HMA channel request parity error", -1, 1 },
153 { HRSPPERR, "PCI HMA channel response parity error", -1, 1 },
154 { CFGSNPPERR, "PCI config snoop FIFO parity error", -1, 1 },
155 { FIDPERR, "PCI FID parity error", -1, 1 },
156 { INTXCLRPERR, "PCI INTx clear parity error", -1, 1 },
157 { MATAGPERR, "PCI MA tag parity error", -1, 1 },
158 { PIOTAGPERR, "PCI PIO tag parity error", -1, 1 },
159 { RXCPLPERR, "PCI Rx completion parity error", -1, 1 },
160 { RXWRPERR, "PCI Rx write parity error", -1, 1 },
161 { RPLPERR, "PCI replay buffer parity error", -1, 1 },
162 { PCIESINT, "PCI core secondary fault", -1, 1 },
163 { PCIEPINT, "PCI core primary fault", -1, 1 },
164 { UNXSPLCPLERR, "PCI unexpected split completion error", -1,
165 0 },
166 { 0, NULL, 0, 0 }
167 };
168
169 int fat;
170 fat = csio_handle_intr_status(hw,
171 PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
172 sysbus_intr_info) +
173 csio_handle_intr_status(hw,
174 PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
175 pcie_port_intr_info) +
176 csio_handle_intr_status(hw, PCIE_INT_CAUSE, pcie_intr_info);
177 if (fat)
178 csio_hw_fatal_err(hw);
179}
180
181/*
182 * csio_t4_flash_cfg_addr - return the address of the flash configuration file
183 * @hw: the HW module
184 *
185 * Return the address within the flash where the Firmware Configuration
186 * File is stored.
187 */
188static unsigned int
189csio_t4_flash_cfg_addr(struct csio_hw *hw)
190{
191 return FLASH_CFG_OFFSET;
192}
193
194/*
195 * csio_t4_mc_read - read from MC through backdoor accesses
196 * @hw: the hw module
197 * @idx: not used for T4 adapter
198 * @addr: address of first byte requested
199 * @data: 64 bytes of data containing the requested address
200 * @ecc: where to store the corresponding 64-bit ECC word
201 *
202 * Read 64 bytes of data from MC starting at a 64-byte-aligned address
 203 * that covers the requested address @addr. If @ecc is not %NULL it
204 * is assigned the 64-bit ECC word for the read data.
205 */
206static int
207csio_t4_mc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
208 uint64_t *ecc)
209{
210 int i;
211
212 if (csio_rd_reg32(hw, MC_BIST_CMD) & START_BIST)
213 return -EBUSY;
214 csio_wr_reg32(hw, addr & ~0x3fU, MC_BIST_CMD_ADDR);
215 csio_wr_reg32(hw, 64, MC_BIST_CMD_LEN);
216 csio_wr_reg32(hw, 0xc, MC_BIST_DATA_PATTERN);
217 csio_wr_reg32(hw, BIST_OPCODE(1) | START_BIST | BIST_CMD_GAP(1),
218 MC_BIST_CMD);
219 i = csio_hw_wait_op_done_val(hw, MC_BIST_CMD, START_BIST,
220 0, 10, 1, NULL);
221 if (i)
222 return i;
223
224#define MC_DATA(i) MC_BIST_STATUS_REG(MC_BIST_STATUS_RDATA, i)
225
226 for (i = 15; i >= 0; i--)
227 *data++ = htonl(csio_rd_reg32(hw, MC_DATA(i)));
228 if (ecc)
229 *ecc = csio_rd_reg64(hw, MC_DATA(16));
230#undef MC_DATA
231 return 0;
232}
233
234/*
235 * csio_t4_edc_read - read from EDC through backdoor accesses
236 * @hw: the hw module
237 * @idx: which EDC to access
238 * @addr: address of first byte requested
239 * @data: 64 bytes of data containing the requested address
240 * @ecc: where to store the corresponding 64-bit ECC word
241 *
242 * Read 64 bytes of data from EDC starting at a 64-byte-aligned address
 243 * that covers the requested address @addr. If @ecc is not %NULL it
244 * is assigned the 64-bit ECC word for the read data.
245 */
246static int
247csio_t4_edc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
248 uint64_t *ecc)
249{
250 int i;
251
252 idx *= EDC_STRIDE;
253 if (csio_rd_reg32(hw, EDC_BIST_CMD + idx) & START_BIST)
254 return -EBUSY;
255 csio_wr_reg32(hw, addr & ~0x3fU, EDC_BIST_CMD_ADDR + idx);
256 csio_wr_reg32(hw, 64, EDC_BIST_CMD_LEN + idx);
257 csio_wr_reg32(hw, 0xc, EDC_BIST_DATA_PATTERN + idx);
258 csio_wr_reg32(hw, BIST_OPCODE(1) | BIST_CMD_GAP(1) | START_BIST,
259 EDC_BIST_CMD + idx);
260 i = csio_hw_wait_op_done_val(hw, EDC_BIST_CMD + idx, START_BIST,
261 0, 10, 1, NULL);
262 if (i)
263 return i;
264
265#define EDC_DATA(i) (EDC_BIST_STATUS_REG(EDC_BIST_STATUS_RDATA, i) + idx)
266
267 for (i = 15; i >= 0; i--)
268 *data++ = htonl(csio_rd_reg32(hw, EDC_DATA(i)));
269 if (ecc)
270 *ecc = csio_rd_reg64(hw, EDC_DATA(16));
271#undef EDC_DATA
272 return 0;
273}
274
275/*
276 * csio_t4_memory_rw - read/write EDC 0, EDC 1 or MC via PCIE memory window
277 * @hw: the csio_hw
278 * @win: PCI-E memory Window to use
279 * @mtype: memory type: MEM_EDC0, MEM_EDC1, MEM_MC0 (or MEM_MC) or MEM_MC1
280 * @addr: address within indicated memory type
281 * @len: amount of memory to transfer
282 * @buf: host memory buffer
283 * @dir: direction of transfer 1 => read, 0 => write
284 *
285 * Reads/writes an [almost] arbitrary memory region in the firmware: the
286 * firmware memory address, length and host buffer must be aligned on
 287 * 32-bit boundaries. The memory is transferred as a raw byte sequence
288 * from/to the firmware's memory. If this memory contains data
 289 * structures which contain multi-byte integers, it's the caller's
290 * responsibility to perform appropriate byte order conversions.
291 */
292static int
293csio_t4_memory_rw(struct csio_hw *hw, u32 win, int mtype, u32 addr,
294 u32 len, uint32_t *buf, int dir)
295{
296 u32 pos, start, offset, memoffset, bar0;
297 u32 edc_size, mc_size, mem_reg, mem_aperture, mem_base;
298
299 /*
300 * Argument sanity checks ...
301 */
302 if ((addr & 0x3) || (len & 0x3))
303 return -EINVAL;
304
305 /* Offset into the region of memory which is being accessed
306 * MEM_EDC0 = 0
307 * MEM_EDC1 = 1
308 * MEM_MC = 2 -- T4
309 */
310 edc_size = EDRAM_SIZE_GET(csio_rd_reg32(hw, MA_EDRAM0_BAR));
311 if (mtype != MEM_MC1)
312 memoffset = (mtype * (edc_size * 1024 * 1024));
313 else {
314 mc_size = EXT_MEM_SIZE_GET(csio_rd_reg32(hw,
315 MA_EXT_MEMORY_BAR));
316 memoffset = (MEM_MC0 * edc_size + mc_size) * 1024 * 1024;
317 }
318
319 /* Determine the PCIE_MEM_ACCESS_OFFSET */
320 addr = addr + memoffset;
321
322 /*
323 * Each PCI-E Memory Window is programmed with a window size -- or
324 * "aperture" -- which controls the granularity of its mapping onto
325 * adapter memory. We need to grab that aperture in order to know
326 * how to use the specified window. The window is also programmed
327 * with the base address of the Memory Window in BAR0's address
328 * space. For T4 this is an absolute PCI-E Bus Address. For T5
329 * the address is relative to BAR0.
330 */
331 mem_reg = csio_rd_reg32(hw,
332 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
333 mem_aperture = 1 << (WINDOW(mem_reg) + 10);
334 mem_base = GET_PCIEOFST(mem_reg) << 10;
335
336 bar0 = csio_t4_read_pcie_cfg4(hw, PCI_BASE_ADDRESS_0);
337 bar0 &= PCI_BASE_ADDRESS_MEM_MASK;
338 mem_base -= bar0;
339
340 start = addr & ~(mem_aperture-1);
341 offset = addr - start;
342
343 csio_dbg(hw, "csio_t4_memory_rw: mem_reg: 0x%x, mem_aperture: 0x%x\n",
344 mem_reg, mem_aperture);
345 csio_dbg(hw, "csio_t4_memory_rw: mem_base: 0x%x, mem_offset: 0x%x\n",
346 mem_base, memoffset);
347 csio_dbg(hw, "csio_t4_memory_rw: bar0: 0x%x, start:0x%x, offset:0x%x\n",
348 bar0, start, offset);
349 csio_dbg(hw, "csio_t4_memory_rw: mtype: %d, addr: 0x%x, len: %d\n",
350 mtype, addr, len);
351
352 for (pos = start; len > 0; pos += mem_aperture, offset = 0) {
353 /*
354 * Move PCI-E Memory Window to our current transfer
355 * position. Read it back to ensure that changes propagate
356 * before we attempt to use the new value.
357 */
358 csio_wr_reg32(hw, pos,
359 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
360 csio_rd_reg32(hw,
361 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
362
363 while (offset < mem_aperture && len > 0) {
364 if (dir)
365 *buf++ = csio_rd_reg32(hw, mem_base + offset);
366 else
367 csio_wr_reg32(hw, *buf++, mem_base + offset);
368
369 offset += sizeof(__be32);
370 len -= sizeof(__be32);
371 }
372 }
373 return 0;
374}
375
376/*
377 * csio_t4_dfs_create_ext_mem - setup debugfs for MC to read the values
378 * @hw: the csio_hw
379 *
380 * This function creates files in the debugfs with external memory region MC.
381 */
382static void
383csio_t4_dfs_create_ext_mem(struct csio_hw *hw)
384{
385 u32 size;
386 int i = csio_rd_reg32(hw, MA_TARGET_MEM_ENABLE);
387 if (i & EXT_MEM_ENABLE) {
388 size = csio_rd_reg32(hw, MA_EXT_MEMORY_BAR);
389 csio_add_debugfs_mem(hw, "mc", MEM_MC,
390 EXT_MEM_SIZE_GET(size));
391 }
392}
393
394/* T4 adapter specific function */
395struct csio_hw_chip_ops t4_ops = {
396 .chip_set_mem_win = csio_t4_set_mem_win,
397 .chip_pcie_intr_handler = csio_t4_pcie_intr_handler,
398 .chip_flash_cfg_addr = csio_t4_flash_cfg_addr,
399 .chip_mc_read = csio_t4_mc_read,
400 .chip_edc_read = csio_t4_edc_read,
401 .chip_memory_rw = csio_t4_memory_rw,
402 .chip_dfs_create_ext_mem = csio_t4_dfs_create_ext_mem,
403};
diff --git a/drivers/scsi/csiostor/csio_hw_t5.c b/drivers/scsi/csiostor/csio_hw_t5.c
new file mode 100644
index 000000000000..27745c170c24
--- /dev/null
+++ b/drivers/scsi/csiostor/csio_hw_t5.c
@@ -0,0 +1,397 @@
1/*
2 * This file is part of the Chelsio FCoE driver for Linux.
3 *
4 * Copyright (c) 2008-2013 Chelsio Communications, Inc. All rights reserved.
5 *
6 * This software is available to you under a choice of one of two
7 * licenses. You may choose to be licensed under the terms of the GNU
8 * General Public License (GPL) Version 2, available from the file
9 * OpenIB.org BSD license below:
10 *
11 * Redistribution and use in source and binary forms, with or
12 * without modification, are permitted provided that the following
13 * conditions are met:
14 *
15 * - Redistributions of source code must retain the above
16 * copyright notice, this list of conditions and the following
17 * disclaimer.
18 *
19 * - Redistributions in binary form must reproduce the above
20 * copyright notice, this list of conditions and the following
21 * disclaimer in the documentation and/or other materials
22 * provided with the distribution.
23 *
24 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
25 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
26 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
27 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
28 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
29 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
30 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
31 * SOFTWARE.
32 */
33
34#include "csio_hw.h"
35#include "csio_init.h"
36
37static int
38csio_t5_set_mem_win(struct csio_hw *hw, uint32_t win)
39{
40 u32 mem_win_base;
41 /*
42 * Truncation intentional: we only read the bottom 32-bits of the
43 * 64-bit BAR0/BAR1 ... We use the hardware backdoor mechanism to
44 * read BAR0 instead of using pci_resource_start() because we could be
45 * operating from within a Virtual Machine which is trapping our
46 * accesses to our Configuration Space and we need to set up the PCI-E
47 * Memory Window decoders with the actual addresses which will be
48 * coming across the PCI-E link.
49 */
50
51 /* For T5, only relative offset inside the PCIe BAR is passed */
52 mem_win_base = MEMWIN_BASE;
53
54 /*
55 * Set up memory window for accessing adapter memory ranges. (Read
56 * back MA register to ensure that changes propagate before we attempt
57 * to use the new values.)
58 */
59 csio_wr_reg32(hw, mem_win_base | BIR(0) |
60 WINDOW(ilog2(MEMWIN_APERTURE) - 10),
61 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
62 csio_rd_reg32(hw,
63 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
64
65 return 0;
66}
67
68/*
69 * Interrupt handler for the PCIE module.
70 */
71static void
72csio_t5_pcie_intr_handler(struct csio_hw *hw)
73{
74 static struct intr_info sysbus_intr_info[] = {
75 { RNPP, "RXNP array parity error", -1, 1 },
76 { RPCP, "RXPC array parity error", -1, 1 },
77 { RCIP, "RXCIF array parity error", -1, 1 },
78 { RCCP, "Rx completions control array parity error", -1, 1 },
79 { RFTP, "RXFT array parity error", -1, 1 },
80 { 0, NULL, 0, 0 }
81 };
82 static struct intr_info pcie_port_intr_info[] = {
83 { TPCP, "TXPC array parity error", -1, 1 },
84 { TNPP, "TXNP array parity error", -1, 1 },
85 { TFTP, "TXFT array parity error", -1, 1 },
86 { TCAP, "TXCA array parity error", -1, 1 },
87 { TCIP, "TXCIF array parity error", -1, 1 },
88 { RCAP, "RXCA array parity error", -1, 1 },
89 { OTDD, "outbound request TLP discarded", -1, 1 },
90 { RDPE, "Rx data parity error", -1, 1 },
91 { TDUE, "Tx uncorrectable data error", -1, 1 },
92 { 0, NULL, 0, 0 }
93 };
94
95 static struct intr_info pcie_intr_info[] = {
96 { MSTGRPPERR, "Master Response Read Queue parity error",
97 -1, 1 },
98 { MSTTIMEOUTPERR, "Master Timeout FIFO parity error", -1, 1 },
99 { MSIXSTIPERR, "MSI-X STI SRAM parity error", -1, 1 },
100 { MSIXADDRLPERR, "MSI-X AddrL parity error", -1, 1 },
101 { MSIXADDRHPERR, "MSI-X AddrH parity error", -1, 1 },
102 { MSIXDATAPERR, "MSI-X data parity error", -1, 1 },
103 { MSIXDIPERR, "MSI-X DI parity error", -1, 1 },
104 { PIOCPLGRPPERR, "PCI PIO completion Group FIFO parity error",
105 -1, 1 },
106 { PIOREQGRPPERR, "PCI PIO request Group FIFO parity error",
107 -1, 1 },
108 { TARTAGPERR, "PCI PCI target tag FIFO parity error", -1, 1 },
109 { MSTTAGQPERR, "PCI master tag queue parity error", -1, 1 },
110 { CREQPERR, "PCI CMD channel request parity error", -1, 1 },
111 { CRSPPERR, "PCI CMD channel response parity error", -1, 1 },
112 { DREQWRPERR, "PCI DMA channel write request parity error",
113 -1, 1 },
114 { DREQPERR, "PCI DMA channel request parity error", -1, 1 },
115 { DRSPPERR, "PCI DMA channel response parity error", -1, 1 },
116 { HREQWRPERR, "PCI HMA channel count parity error", -1, 1 },
117 { HREQPERR, "PCI HMA channel request parity error", -1, 1 },
118 { HRSPPERR, "PCI HMA channel response parity error", -1, 1 },
119 { CFGSNPPERR, "PCI config snoop FIFO parity error", -1, 1 },
120 { FIDPERR, "PCI FID parity error", -1, 1 },
121 { VFIDPERR, "PCI INTx clear parity error", -1, 1 },
122 { MAGRPPERR, "PCI MA group FIFO parity error", -1, 1 },
123 { PIOTAGPERR, "PCI PIO tag parity error", -1, 1 },
124 { IPRXHDRGRPPERR, "PCI IP Rx header group parity error",
125 -1, 1 },
126 { IPRXDATAGRPPERR, "PCI IP Rx data group parity error",
127 -1, 1 },
128 { RPLPERR, "PCI IP replay buffer parity error", -1, 1 },
129 { IPSOTPERR, "PCI IP SOT buffer parity error", -1, 1 },
130 { TRGT1GRPPERR, "PCI TRGT1 group FIFOs parity error", -1, 1 },
131 { READRSPERR, "Outbound read error", -1, 0 },
132 { 0, NULL, 0, 0 }
133 };
134
135 int fat;
136 fat = csio_handle_intr_status(hw,
137 PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
138 sysbus_intr_info) +
139 csio_handle_intr_status(hw,
140 PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
141 pcie_port_intr_info) +
142 csio_handle_intr_status(hw, PCIE_INT_CAUSE, pcie_intr_info);
143 if (fat)
144 csio_hw_fatal_err(hw);
145}
146
147/*
148 * csio_t5_flash_cfg_addr - return the address of the flash configuration file
149 * @hw: the HW module
150 *
151 * Return the address within the flash where the Firmware Configuration
152 * File is stored.
153 */
154static unsigned int
155csio_t5_flash_cfg_addr(struct csio_hw *hw)
156{
157 return FLASH_CFG_START;
158}
159
160/*
161 * csio_t5_mc_read - read from MC through backdoor accesses
162 * @hw: the hw module
163 * @idx: index to the register
164 * @addr: address of first byte requested
165 * @data: 64 bytes of data containing the requested address
166 * @ecc: where to store the corresponding 64-bit ECC word
167 *
168 * Read 64 bytes of data from MC starting at a 64-byte-aligned address
 169 * that covers the requested address @addr. If @ecc is not %NULL it
170 * is assigned the 64-bit ECC word for the read data.
171 */
172static int
173csio_t5_mc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
174 uint64_t *ecc)
175{
176 int i;
177 uint32_t mc_bist_cmd_reg, mc_bist_cmd_addr_reg, mc_bist_cmd_len_reg;
178 uint32_t mc_bist_status_rdata_reg, mc_bist_data_pattern_reg;
179
180 mc_bist_cmd_reg = MC_REG(MC_P_BIST_CMD, idx);
181 mc_bist_cmd_addr_reg = MC_REG(MC_P_BIST_CMD_ADDR, idx);
182 mc_bist_cmd_len_reg = MC_REG(MC_P_BIST_CMD_LEN, idx);
183 mc_bist_status_rdata_reg = MC_REG(MC_P_BIST_STATUS_RDATA, idx);
184 mc_bist_data_pattern_reg = MC_REG(MC_P_BIST_DATA_PATTERN, idx);
185
186 if (csio_rd_reg32(hw, mc_bist_cmd_reg) & START_BIST)
187 return -EBUSY;
188 csio_wr_reg32(hw, addr & ~0x3fU, mc_bist_cmd_addr_reg);
189 csio_wr_reg32(hw, 64, mc_bist_cmd_len_reg);
190 csio_wr_reg32(hw, 0xc, mc_bist_data_pattern_reg);
191 csio_wr_reg32(hw, BIST_OPCODE(1) | START_BIST | BIST_CMD_GAP(1),
192 mc_bist_cmd_reg);
193 i = csio_hw_wait_op_done_val(hw, mc_bist_cmd_reg, START_BIST,
194 0, 10, 1, NULL);
195 if (i)
196 return i;
197
198#define MC_DATA(i) MC_BIST_STATUS_REG(MC_BIST_STATUS_RDATA, i)
199
200 for (i = 15; i >= 0; i--)
201 *data++ = htonl(csio_rd_reg32(hw, MC_DATA(i)));
202 if (ecc)
203 *ecc = csio_rd_reg64(hw, MC_DATA(16));
204#undef MC_DATA
205 return 0;
206}
207
208/*
209 * csio_t5_edc_read - read from EDC through backdoor accesses
210 * @hw: the hw module
211 * @idx: which EDC to access
212 * @addr: address of first byte requested
213 * @data: 64 bytes of data containing the requested address
214 * @ecc: where to store the corresponding 64-bit ECC word
215 *
216 * Read 64 bytes of data from EDC starting at a 64-byte-aligned address
 217 * that covers the requested address @addr. If @ecc is not %NULL it
218 * is assigned the 64-bit ECC word for the read data.
219 */
220static int
221csio_t5_edc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
222 uint64_t *ecc)
223{
224 int i;
225 uint32_t edc_bist_cmd_reg, edc_bist_cmd_addr_reg, edc_bist_cmd_len_reg;
226 uint32_t edc_bist_cmd_data_pattern, edc_bist_status_rdata_reg;
227
228/*
 229 * These macros are missing from the t4_regs.h file.
230 */
231#define EDC_STRIDE_T5 (EDC_T51_BASE_ADDR - EDC_T50_BASE_ADDR)
232#define EDC_REG_T5(reg, idx) (reg + EDC_STRIDE_T5 * idx)
233
234 edc_bist_cmd_reg = EDC_REG_T5(EDC_H_BIST_CMD, idx);
235 edc_bist_cmd_addr_reg = EDC_REG_T5(EDC_H_BIST_CMD_ADDR, idx);
236 edc_bist_cmd_len_reg = EDC_REG_T5(EDC_H_BIST_CMD_LEN, idx);
237 edc_bist_cmd_data_pattern = EDC_REG_T5(EDC_H_BIST_DATA_PATTERN, idx);
238 edc_bist_status_rdata_reg = EDC_REG_T5(EDC_H_BIST_STATUS_RDATA, idx);
239#undef EDC_REG_T5
240#undef EDC_STRIDE_T5
241
242 if (csio_rd_reg32(hw, edc_bist_cmd_reg) & START_BIST)
243 return -EBUSY;
244 csio_wr_reg32(hw, addr & ~0x3fU, edc_bist_cmd_addr_reg);
245 csio_wr_reg32(hw, 64, edc_bist_cmd_len_reg);
246 csio_wr_reg32(hw, 0xc, edc_bist_cmd_data_pattern);
247 csio_wr_reg32(hw, BIST_OPCODE(1) | START_BIST | BIST_CMD_GAP(1),
248 edc_bist_cmd_reg);
249 i = csio_hw_wait_op_done_val(hw, edc_bist_cmd_reg, START_BIST,
250 0, 10, 1, NULL);
251 if (i)
252 return i;
253
254#define EDC_DATA(i) (EDC_BIST_STATUS_REG(EDC_BIST_STATUS_RDATA, i) + idx)
255
256 for (i = 15; i >= 0; i--)
257 *data++ = htonl(csio_rd_reg32(hw, EDC_DATA(i)));
258 if (ecc)
259 *ecc = csio_rd_reg64(hw, EDC_DATA(16));
260#undef EDC_DATA
261 return 0;
262}
263
264/*
265 * csio_t5_memory_rw - read/write EDC 0, EDC 1 or MC via PCIE memory window
266 * @hw: the csio_hw
267 * @win: PCI-E memory Window to use
268 * @mtype: memory type: MEM_EDC0, MEM_EDC1, MEM_MC0 (or MEM_MC) or MEM_MC1
269 * @addr: address within indicated memory type
270 * @len: amount of memory to transfer
271 * @buf: host memory buffer
272 * @dir: direction of transfer 1 => read, 0 => write
273 *
274 * Reads/writes an [almost] arbitrary memory region in the firmware: the
275 * firmware memory address, length and host buffer must be aligned on
276 * 32-bit boundaries. The memory is transferred as a raw byte sequence
277 * from/to the firmware's memory. If this memory contains data
278 * structures which contain multi-byte integers, it is the caller's
279 * responsibility to perform appropriate byte order conversions.
280 */
281static int
282csio_t5_memory_rw(struct csio_hw *hw, u32 win, int mtype, u32 addr,
283 u32 len, uint32_t *buf, int dir)
284{
285 u32 pos, start, offset, memoffset;
286 u32 edc_size, mc_size, win_pf, mem_reg, mem_aperture, mem_base;
287
288 /*
289 * Argument sanity checks ...
290 */
291 if ((addr & 0x3) || (len & 0x3))
292 return -EINVAL;
293
294 /* Offset into the region of memory which is being accessed
295 * MEM_EDC0 = 0
296 * MEM_EDC1 = 1
297 * MEM_MC = 2 -- T4
298 * MEM_MC0 = 2 -- For T5
299 * MEM_MC1 = 3 -- For T5
300 */
301 edc_size = EDRAM_SIZE_GET(csio_rd_reg32(hw, MA_EDRAM0_BAR));
302 if (mtype != MEM_MC1)
303 memoffset = (mtype * (edc_size * 1024 * 1024));
304 else {
305 mc_size = EXT_MEM_SIZE_GET(csio_rd_reg32(hw,
306 MA_EXT_MEMORY_BAR));
307 memoffset = (MEM_MC0 * edc_size + mc_size) * 1024 * 1024;
308 }
309
310 /* Determine the PCIE_MEM_ACCESS_OFFSET */
311 addr = addr + memoffset;
312
313 /*
314 * Each PCI-E Memory Window is programmed with a window size -- or
315 * "aperture" -- which controls the granularity of its mapping onto
316 * adapter memory. We need to grab that aperture in order to know
317 * how to use the specified window. The window is also programmed
318 * with the base address of the Memory Window in BAR0's address
319 * space. For T4 this is an absolute PCI-E Bus Address. For T5
320 * the address is relative to BAR0.
321 */
322 mem_reg = csio_rd_reg32(hw,
323 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
324 mem_aperture = 1 << (WINDOW(mem_reg) + 10);
325 mem_base = GET_PCIEOFST(mem_reg) << 10;
326
327 start = addr & ~(mem_aperture-1);
328 offset = addr - start;
329 win_pf = V_PFNUM(hw->pfn);
330
331 csio_dbg(hw, "csio_t5_memory_rw: mem_reg: 0x%x, mem_aperture: 0x%x\n",
332 mem_reg, mem_aperture);
333 csio_dbg(hw, "csio_t5_memory_rw: mem_base: 0x%x, mem_offset: 0x%x\n",
334 mem_base, memoffset);
335 csio_dbg(hw, "csio_t5_memory_rw: start:0x%x, offset:0x%x, win_pf:%d\n",
336 start, offset, win_pf);
337 csio_dbg(hw, "csio_t5_memory_rw: mtype: %d, addr: 0x%x, len: %d\n",
338 mtype, addr, len);
339
340 for (pos = start; len > 0; pos += mem_aperture, offset = 0) {
341 /*
342 * Move PCI-E Memory Window to our current transfer
343 * position. Read it back to ensure that changes propagate
344 * before we attempt to use the new value.
345 */
346 csio_wr_reg32(hw, pos | win_pf,
347 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
348 csio_rd_reg32(hw,
349 PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
350
351 while (offset < mem_aperture && len > 0) {
352 if (dir)
353 *buf++ = csio_rd_reg32(hw, mem_base + offset);
354 else
355 csio_wr_reg32(hw, *buf++, mem_base + offset);
356
357 offset += sizeof(__be32);
358 len -= sizeof(__be32);
359 }
360 }
361 return 0;
362}
363
364/*
365 * csio_t5_dfs_create_ext_mem - setup debugfs for MC0 or MC1 to read the values
366 * @hw: the csio_hw
367 *
368 * This function creates debugfs entries for the external memory
369 * regions MC0 and MC1.
370 */
371static void
372csio_t5_dfs_create_ext_mem(struct csio_hw *hw)
373{
374 u32 size;
375 int i = csio_rd_reg32(hw, MA_TARGET_MEM_ENABLE);
376 if (i & EXT_MEM_ENABLE) {
377 size = csio_rd_reg32(hw, MA_EXT_MEMORY_BAR);
378 csio_add_debugfs_mem(hw, "mc0", MEM_MC0,
379 EXT_MEM_SIZE_GET(size));
380 }
381 if (i & EXT_MEM1_ENABLE) {
382 size = csio_rd_reg32(hw, MA_EXT_MEMORY1_BAR);
383 csio_add_debugfs_mem(hw, "mc1", MEM_MC1,
384 EXT_MEM_SIZE_GET(size));
385 }
386}
387
388/* T5 adapter specific functions */
389struct csio_hw_chip_ops t5_ops = {
390 .chip_set_mem_win = csio_t5_set_mem_win,
391 .chip_pcie_intr_handler = csio_t5_pcie_intr_handler,
392 .chip_flash_cfg_addr = csio_t5_flash_cfg_addr,
393 .chip_mc_read = csio_t5_mc_read,
394 .chip_edc_read = csio_t5_edc_read,
395 .chip_memory_rw = csio_t5_memory_rw,
396 .chip_dfs_create_ext_mem = csio_t5_dfs_create_ext_mem,
397};
diff --git a/drivers/scsi/csiostor/csio_init.c b/drivers/scsi/csiostor/csio_init.c
index 0604b5ff3638..00346fe939d5 100644
--- a/drivers/scsi/csiostor/csio_init.c
+++ b/drivers/scsi/csiostor/csio_init.c
@@ -81,9 +81,11 @@ csio_mem_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
81 __be32 data[16]; 81 __be32 data[16];
82 82
83 if (mem == MEM_MC) 83 if (mem == MEM_MC)
84 ret = csio_hw_mc_read(hw, pos, data, NULL); 84 ret = hw->chip_ops->chip_mc_read(hw, 0, pos,
85 data, NULL);
85 else 86 else
86 ret = csio_hw_edc_read(hw, mem, pos, data, NULL); 87 ret = hw->chip_ops->chip_edc_read(hw, mem, pos,
88 data, NULL);
87 if (ret) 89 if (ret)
88 return ret; 90 return ret;
89 91
@@ -108,7 +110,7 @@ static const struct file_operations csio_mem_debugfs_fops = {
108 .llseek = default_llseek, 110 .llseek = default_llseek,
109}; 111};
110 112
111static void csio_add_debugfs_mem(struct csio_hw *hw, const char *name, 113void csio_add_debugfs_mem(struct csio_hw *hw, const char *name,
112 unsigned int idx, unsigned int size_mb) 114 unsigned int idx, unsigned int size_mb)
113{ 115{
114 struct dentry *de; 116 struct dentry *de;
@@ -131,9 +133,8 @@ static int csio_setup_debugfs(struct csio_hw *hw)
131 csio_add_debugfs_mem(hw, "edc0", MEM_EDC0, 5); 133 csio_add_debugfs_mem(hw, "edc0", MEM_EDC0, 5);
132 if (i & EDRAM1_ENABLE) 134 if (i & EDRAM1_ENABLE)
133 csio_add_debugfs_mem(hw, "edc1", MEM_EDC1, 5); 135 csio_add_debugfs_mem(hw, "edc1", MEM_EDC1, 5);
134 if (i & EXT_MEM_ENABLE) 136
135 csio_add_debugfs_mem(hw, "mc", MEM_MC, 137 hw->chip_ops->chip_dfs_create_ext_mem(hw);
136 EXT_MEM_SIZE_GET(csio_rd_reg32(hw, MA_EXT_MEMORY_BAR)));
137 return 0; 138 return 0;
138} 139}
139 140
@@ -1169,7 +1170,7 @@ static struct pci_error_handlers csio_err_handler = {
1169}; 1170};
1170 1171
1171static DEFINE_PCI_DEVICE_TABLE(csio_pci_tbl) = { 1172static DEFINE_PCI_DEVICE_TABLE(csio_pci_tbl) = {
1172 CSIO_DEVICE(CSIO_DEVID_T440DBG_FCOE, 0), /* T440DBG FCOE */ 1173 CSIO_DEVICE(CSIO_DEVID_T440DBG_FCOE, 0), /* T4 DEBUG FCOE */
1173 CSIO_DEVICE(CSIO_DEVID_T420CR_FCOE, 0), /* T420CR FCOE */ 1174 CSIO_DEVICE(CSIO_DEVID_T420CR_FCOE, 0), /* T420CR FCOE */
1174 CSIO_DEVICE(CSIO_DEVID_T422CR_FCOE, 0), /* T422CR FCOE */ 1175 CSIO_DEVICE(CSIO_DEVID_T422CR_FCOE, 0), /* T422CR FCOE */
1175 CSIO_DEVICE(CSIO_DEVID_T440CR_FCOE, 0), /* T440CR FCOE */ 1176 CSIO_DEVICE(CSIO_DEVID_T440CR_FCOE, 0), /* T440CR FCOE */
@@ -1184,8 +1185,34 @@ static DEFINE_PCI_DEVICE_TABLE(csio_pci_tbl) = {
1184 CSIO_DEVICE(CSIO_DEVID_B404_FCOE, 0), /* B404 FCOE */ 1185 CSIO_DEVICE(CSIO_DEVID_B404_FCOE, 0), /* B404 FCOE */
1185 CSIO_DEVICE(CSIO_DEVID_T480CR_FCOE, 0), /* T480 CR FCOE */ 1186 CSIO_DEVICE(CSIO_DEVID_T480CR_FCOE, 0), /* T480 CR FCOE */
1186 CSIO_DEVICE(CSIO_DEVID_T440LPCR_FCOE, 0), /* T440 LP-CR FCOE */ 1187 CSIO_DEVICE(CSIO_DEVID_T440LPCR_FCOE, 0), /* T440 LP-CR FCOE */
1187 CSIO_DEVICE(CSIO_DEVID_PE10K, 0), /* PE10K FCOE */ 1188 CSIO_DEVICE(CSIO_DEVID_AMSTERDAM_T4_FCOE, 0), /* AMSTERDAM T4 FCOE */
1188 CSIO_DEVICE(CSIO_DEVID_PE10K_PF1, 0), /* PE10K FCOE on PF1 */ 1189 CSIO_DEVICE(CSIO_DEVID_HUAWEI_T480_FCOE, 0), /* HUAWEI T480 FCOE */
1190 CSIO_DEVICE(CSIO_DEVID_HUAWEI_T440_FCOE, 0), /* HUAWEI T440 FCOE */
1191 CSIO_DEVICE(CSIO_DEVID_HUAWEI_STG310_FCOE, 0), /* HUAWEI STG FCOE */
1192 CSIO_DEVICE(CSIO_DEVID_ACROMAG_XMC_XAUI, 0), /* ACROMAG XAUI FCOE */
1193 CSIO_DEVICE(CSIO_DEVID_QUANTA_MEZZ_SFP_FCOE, 0),/* QUANTA MEZZ FCOE */
1194 CSIO_DEVICE(CSIO_DEVID_HUAWEI_10GT_FCOE, 0), /* HUAWEI 10GT FCOE */
1195 CSIO_DEVICE(CSIO_DEVID_HUAWEI_T440_TOE_FCOE, 0),/* HUAWEI T4 TOE FCOE */
1196 CSIO_DEVICE(CSIO_DEVID_T580DBG_FCOE, 0), /* T5 DEBUG FCOE */
1197 CSIO_DEVICE(CSIO_DEVID_T520CR_FCOE, 0), /* T520CR FCOE */
1198 CSIO_DEVICE(CSIO_DEVID_T522CR_FCOE, 0), /* T522CR FCOE */
1199 CSIO_DEVICE(CSIO_DEVID_T540CR_FCOE, 0), /* T540CR FCOE */
1200 CSIO_DEVICE(CSIO_DEVID_T520BCH_FCOE, 0), /* T520BCH FCOE */
1201 CSIO_DEVICE(CSIO_DEVID_T540BCH_FCOE, 0), /* T540BCH FCOE */
1202 CSIO_DEVICE(CSIO_DEVID_T540CH_FCOE, 0), /* T540CH FCOE */
1203 CSIO_DEVICE(CSIO_DEVID_T520SO_FCOE, 0), /* T520SO FCOE */
1204 CSIO_DEVICE(CSIO_DEVID_T520CX_FCOE, 0), /* T520CX FCOE */
1205 CSIO_DEVICE(CSIO_DEVID_T520BT_FCOE, 0), /* T520BT FCOE */
1206 CSIO_DEVICE(CSIO_DEVID_T504BT_FCOE, 0), /* T504BT FCOE */
1207 CSIO_DEVICE(CSIO_DEVID_B520_FCOE, 0), /* B520 FCOE */
1208 CSIO_DEVICE(CSIO_DEVID_B504_FCOE, 0), /* B504 FCOE */
1209 CSIO_DEVICE(CSIO_DEVID_T580CR2_FCOE, 0), /* T580 CR FCOE */
1210 CSIO_DEVICE(CSIO_DEVID_T540LPCR_FCOE, 0), /* T540 LP-CR FCOE */
1211 CSIO_DEVICE(CSIO_DEVID_AMSTERDAM_T5_FCOE, 0), /* AMSTERDAM T5 FCOE */
1212 CSIO_DEVICE(CSIO_DEVID_T580LPCR_FCOE, 0), /* T580 LP-CR FCOE */
1213 CSIO_DEVICE(CSIO_DEVID_T520LLCR_FCOE, 0), /* T520 LL-CR FCOE */
1214 CSIO_DEVICE(CSIO_DEVID_T560CR_FCOE, 0), /* T560 CR FCOE */
1215 CSIO_DEVICE(CSIO_DEVID_T580CR_FCOE, 0), /* T580 CR FCOE */
1189 { 0, 0, 0, 0, 0, 0, 0 } 1216 { 0, 0, 0, 0, 0, 0, 0 }
1190}; 1217};
1191 1218
@@ -1259,4 +1286,5 @@ MODULE_DESCRIPTION(CSIO_DRV_DESC);
1259MODULE_LICENSE(CSIO_DRV_LICENSE); 1286MODULE_LICENSE(CSIO_DRV_LICENSE);
1260MODULE_DEVICE_TABLE(pci, csio_pci_tbl); 1287MODULE_DEVICE_TABLE(pci, csio_pci_tbl);
1261MODULE_VERSION(CSIO_DRV_VERSION); 1288MODULE_VERSION(CSIO_DRV_VERSION);
1262MODULE_FIRMWARE(CSIO_FW_FNAME); 1289MODULE_FIRMWARE(FW_FNAME_T4);
1290MODULE_FIRMWARE(FW_FNAME_T5);
diff --git a/drivers/scsi/csiostor/csio_init.h b/drivers/scsi/csiostor/csio_init.h
index 0838fd7ec9c7..5cc5d317a442 100644
--- a/drivers/scsi/csiostor/csio_init.h
+++ b/drivers/scsi/csiostor/csio_init.h
@@ -52,31 +52,6 @@
52#define CSIO_DRV_DESC "Chelsio FCoE driver" 52#define CSIO_DRV_DESC "Chelsio FCoE driver"
53#define CSIO_DRV_VERSION "1.0.0" 53#define CSIO_DRV_VERSION "1.0.0"
54 54
55#define CSIO_DEVICE(devid, idx) \
56{ PCI_VENDOR_ID_CHELSIO, (devid), PCI_ANY_ID, PCI_ANY_ID, 0, 0, (idx) }
57
58#define CSIO_IS_T4_FPGA(_dev) (((_dev) == CSIO_DEVID_PE10K) ||\
59 ((_dev) == CSIO_DEVID_PE10K_PF1))
60
61/* FCoE device IDs */
62#define CSIO_DEVID_PE10K 0xA000
63#define CSIO_DEVID_PE10K_PF1 0xA001
64#define CSIO_DEVID_T440DBG_FCOE 0x4600
65#define CSIO_DEVID_T420CR_FCOE 0x4601
66#define CSIO_DEVID_T422CR_FCOE 0x4602
67#define CSIO_DEVID_T440CR_FCOE 0x4603
68#define CSIO_DEVID_T420BCH_FCOE 0x4604
69#define CSIO_DEVID_T440BCH_FCOE 0x4605
70#define CSIO_DEVID_T440CH_FCOE 0x4606
71#define CSIO_DEVID_T420SO_FCOE 0x4607
72#define CSIO_DEVID_T420CX_FCOE 0x4608
73#define CSIO_DEVID_T420BT_FCOE 0x4609
74#define CSIO_DEVID_T404BT_FCOE 0x460A
75#define CSIO_DEVID_B420_FCOE 0x460B
76#define CSIO_DEVID_B404_FCOE 0x460C
77#define CSIO_DEVID_T480CR_FCOE 0x460D
78#define CSIO_DEVID_T440LPCR_FCOE 0x460E
79
80extern struct fc_function_template csio_fc_transport_funcs; 55extern struct fc_function_template csio_fc_transport_funcs;
81extern struct fc_function_template csio_fc_transport_vport_funcs; 56extern struct fc_function_template csio_fc_transport_vport_funcs;
82 57
@@ -100,6 +75,10 @@ struct csio_lnode *csio_shost_init(struct csio_hw *, struct device *, bool,
100void csio_shost_exit(struct csio_lnode *); 75void csio_shost_exit(struct csio_lnode *);
101void csio_lnodes_exit(struct csio_hw *, bool); 76void csio_lnodes_exit(struct csio_hw *, bool);
102 77
78/* DebugFS helper routines */
79void csio_add_debugfs_mem(struct csio_hw *, const char *,
80 unsigned int, unsigned int);
81
103static inline struct Scsi_Host * 82static inline struct Scsi_Host *
104csio_ln_to_shost(struct csio_lnode *ln) 83csio_ln_to_shost(struct csio_lnode *ln)
105{ 84{
diff --git a/drivers/scsi/csiostor/csio_lnode.h b/drivers/scsi/csiostor/csio_lnode.h
index 8d84988ab06d..0f9c04175b11 100644
--- a/drivers/scsi/csiostor/csio_lnode.h
+++ b/drivers/scsi/csiostor/csio_lnode.h
@@ -114,7 +114,7 @@ struct csio_lnode_stats {
114 uint32_t n_rnode_match; /* matched rnode */ 114 uint32_t n_rnode_match; /* matched rnode */
115 uint32_t n_dev_loss_tmo; /* Device loss timeout */ 115 uint32_t n_dev_loss_tmo; /* Device loss timeout */
116 uint32_t n_fdmi_err; /* fdmi err */ 116 uint32_t n_fdmi_err; /* fdmi err */
117 uint32_t n_evt_fw[RSCN_DEV_LOST]; /* fw events */ 117 uint32_t n_evt_fw[PROTO_ERR_IMPL_LOGO]; /* fw events */
118 enum csio_ln_ev n_evt_sm[CSIO_LNE_MAX_EVENT]; /* State m/c events */ 118 enum csio_ln_ev n_evt_sm[CSIO_LNE_MAX_EVENT]; /* State m/c events */
119 uint32_t n_rnode_alloc; /* rnode allocated */ 119 uint32_t n_rnode_alloc; /* rnode allocated */
120 uint32_t n_rnode_free; /* rnode freed */ 120 uint32_t n_rnode_free; /* rnode freed */
diff --git a/drivers/scsi/csiostor/csio_rnode.c b/drivers/scsi/csiostor/csio_rnode.c
index 51c6a388de2b..e9c3b045f587 100644
--- a/drivers/scsi/csiostor/csio_rnode.c
+++ b/drivers/scsi/csiostor/csio_rnode.c
@@ -302,7 +302,7 @@ csio_confirm_rnode(struct csio_lnode *ln, uint32_t rdev_flowid,
302{ 302{
303 uint8_t rport_type; 303 uint8_t rport_type;
304 struct csio_rnode *rn, *match_rn; 304 struct csio_rnode *rn, *match_rn;
305 uint32_t vnp_flowid; 305 uint32_t vnp_flowid = 0;
306 __be32 *port_id; 306 __be32 *port_id;
307 307
308 port_id = (__be32 *)&rdevp->r_id[0]; 308 port_id = (__be32 *)&rdevp->r_id[0];
@@ -350,6 +350,14 @@ csio_confirm_rnode(struct csio_lnode *ln, uint32_t rdev_flowid,
350 * Else, go ahead and alloc a new rnode. 350 * Else, go ahead and alloc a new rnode.
351 */ 351 */
352 if (!memcmp(csio_rn_wwpn(match_rn), rdevp->wwpn, 8)) { 352 if (!memcmp(csio_rn_wwpn(match_rn), rdevp->wwpn, 8)) {
353 if (rn == match_rn)
354 goto found_rnode;
355 csio_ln_dbg(ln,
356 "nport_id:x%x and wwpn:%llx"
357 " match for ssni:x%x\n",
358 rn->nport_id,
359 wwn_to_u64(rdevp->wwpn),
360 rdev_flowid);
353 if (csio_is_rnode_ready(rn)) { 361 if (csio_is_rnode_ready(rn)) {
354 csio_ln_warn(ln, 362 csio_ln_warn(ln,
355 "rnode is already" 363 "rnode is already"
diff --git a/drivers/scsi/csiostor/csio_rnode.h b/drivers/scsi/csiostor/csio_rnode.h
index a3b434c801da..65940096a80d 100644
--- a/drivers/scsi/csiostor/csio_rnode.h
+++ b/drivers/scsi/csiostor/csio_rnode.h
@@ -63,7 +63,7 @@ struct csio_rnode_stats {
63 uint32_t n_err_nomem; /* error nomem */ 63 uint32_t n_err_nomem; /* error nomem */
64 uint32_t n_evt_unexp; /* unexpected event */ 64 uint32_t n_evt_unexp; /* unexpected event */
65 uint32_t n_evt_drop; /* unexpected event */ 65 uint32_t n_evt_drop; /* unexpected event */
66 uint32_t n_evt_fw[RSCN_DEV_LOST]; /* fw events */ 66 uint32_t n_evt_fw[PROTO_ERR_IMPL_LOGO]; /* fw events */
67 enum csio_rn_ev n_evt_sm[CSIO_RNFE_MAX_EVENT]; /* State m/c events */ 67 enum csio_rn_ev n_evt_sm[CSIO_RNFE_MAX_EVENT]; /* State m/c events */
68 uint32_t n_lun_rst; /* Number of resets of 68 uint32_t n_lun_rst; /* Number of resets of
69 * of LUNs under this 69 * of LUNs under this
diff --git a/drivers/scsi/csiostor/csio_wr.c b/drivers/scsi/csiostor/csio_wr.c
index c32df1bdaa97..4255ce264abf 100644
--- a/drivers/scsi/csiostor/csio_wr.c
+++ b/drivers/scsi/csiostor/csio_wr.c
@@ -85,8 +85,8 @@ csio_wr_ring_fldb(struct csio_hw *hw, struct csio_q *flq)
85 */ 85 */
86 if (flq->inc_idx >= 8) { 86 if (flq->inc_idx >= 8) {
87 csio_wr_reg32(hw, DBPRIO(1) | QID(flq->un.fl.flid) | 87 csio_wr_reg32(hw, DBPRIO(1) | QID(flq->un.fl.flid) |
88 PIDX(flq->inc_idx / 8), 88 CSIO_HW_PIDX(hw, flq->inc_idx / 8),
89 MYPF_REG(SGE_PF_KDOORBELL)); 89 MYPF_REG(SGE_PF_KDOORBELL));
90 flq->inc_idx &= 7; 90 flq->inc_idx &= 7;
91 } 91 }
92} 92}
@@ -989,7 +989,8 @@ csio_wr_issue(struct csio_hw *hw, int qidx, bool prio)
989 wmb(); 989 wmb();
990 /* Ring SGE Doorbell writing q->pidx into it */ 990 /* Ring SGE Doorbell writing q->pidx into it */
991 csio_wr_reg32(hw, DBPRIO(prio) | QID(q->un.eq.physeqid) | 991 csio_wr_reg32(hw, DBPRIO(prio) | QID(q->un.eq.physeqid) |
992 PIDX(q->inc_idx), MYPF_REG(SGE_PF_KDOORBELL)); 992 CSIO_HW_PIDX(hw, q->inc_idx),
993 MYPF_REG(SGE_PF_KDOORBELL));
993 q->inc_idx = 0; 994 q->inc_idx = 0;
994 995
995 return 0; 996 return 0;
@@ -1331,20 +1332,30 @@ csio_wr_fixup_host_params(struct csio_hw *hw)
1331 1332
1332 /* FL BUFFER SIZE#0 is Page size i,e already aligned to cache line */ 1333 /* FL BUFFER SIZE#0 is Page size i,e already aligned to cache line */
1333 csio_wr_reg32(hw, PAGE_SIZE, SGE_FL_BUFFER_SIZE0); 1334 csio_wr_reg32(hw, PAGE_SIZE, SGE_FL_BUFFER_SIZE0);
1334 csio_wr_reg32(hw, 1335
1335 (csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE2) + 1336 /*
1336 sge->csio_fl_align - 1) & ~(sge->csio_fl_align - 1), 1337 * If using hard params, the following will get set correctly
1337 SGE_FL_BUFFER_SIZE2); 1338 * in csio_wr_set_sge().
1338 csio_wr_reg32(hw, 1339 */
1339 (csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE3) + 1340 if (hw->flags & CSIO_HWF_USING_SOFT_PARAMS) {
1340 sge->csio_fl_align - 1) & ~(sge->csio_fl_align - 1), 1341 csio_wr_reg32(hw,
1341 SGE_FL_BUFFER_SIZE3); 1342 (csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE2) +
1343 sge->csio_fl_align - 1) & ~(sge->csio_fl_align - 1),
1344 SGE_FL_BUFFER_SIZE2);
1345 csio_wr_reg32(hw,
1346 (csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE3) +
1347 sge->csio_fl_align - 1) & ~(sge->csio_fl_align - 1),
1348 SGE_FL_BUFFER_SIZE3);
1349 }
1342 1350
1343 csio_wr_reg32(hw, HPZ0(PAGE_SHIFT - 12), ULP_RX_TDDP_PSZ); 1351 csio_wr_reg32(hw, HPZ0(PAGE_SHIFT - 12), ULP_RX_TDDP_PSZ);
1344 1352
1345 /* default value of rx_dma_offset of the NIC driver */ 1353 /* default value of rx_dma_offset of the NIC driver */
1346 csio_set_reg_field(hw, SGE_CONTROL, PKTSHIFT_MASK, 1354 csio_set_reg_field(hw, SGE_CONTROL, PKTSHIFT_MASK,
1347 PKTSHIFT(CSIO_SGE_RX_DMA_OFFSET)); 1355 PKTSHIFT(CSIO_SGE_RX_DMA_OFFSET));
1356
1357 csio_hw_tp_wr_bits_indirect(hw, TP_INGRESS_CONFIG,
1358 CSUM_HAS_PSEUDO_HDR, 0);
1348} 1359}
1349 1360
1350static void 1361static void
@@ -1460,18 +1471,21 @@ csio_wr_set_sge(struct csio_hw *hw)
1460 * and generate an interrupt when this occurs so we can recover. 1471 * and generate an interrupt when this occurs so we can recover.
1461 */ 1472 */
1462 csio_set_reg_field(hw, SGE_DBFIFO_STATUS, 1473 csio_set_reg_field(hw, SGE_DBFIFO_STATUS,
1463 HP_INT_THRESH(HP_INT_THRESH_MASK) | 1474 HP_INT_THRESH(HP_INT_THRESH_MASK) |
1464 LP_INT_THRESH(LP_INT_THRESH_MASK), 1475 CSIO_HW_LP_INT_THRESH(hw, CSIO_HW_M_LP_INT_THRESH(hw)),
1465 HP_INT_THRESH(CSIO_SGE_DBFIFO_INT_THRESH) | 1476 HP_INT_THRESH(CSIO_SGE_DBFIFO_INT_THRESH) |
1466 LP_INT_THRESH(CSIO_SGE_DBFIFO_INT_THRESH)); 1477 CSIO_HW_LP_INT_THRESH(hw, CSIO_SGE_DBFIFO_INT_THRESH));
1478
1467 csio_set_reg_field(hw, SGE_DOORBELL_CONTROL, ENABLE_DROP, 1479 csio_set_reg_field(hw, SGE_DOORBELL_CONTROL, ENABLE_DROP,
1468 ENABLE_DROP); 1480 ENABLE_DROP);
1469 1481
1470 /* SGE_FL_BUFFER_SIZE0 is set up by csio_wr_fixup_host_params(). */ 1482 /* SGE_FL_BUFFER_SIZE0 is set up by csio_wr_fixup_host_params(). */
1471 1483
1472 CSIO_SET_FLBUF_SIZE(hw, 1, CSIO_SGE_FLBUF_SIZE1); 1484 CSIO_SET_FLBUF_SIZE(hw, 1, CSIO_SGE_FLBUF_SIZE1);
1473 CSIO_SET_FLBUF_SIZE(hw, 2, CSIO_SGE_FLBUF_SIZE2); 1485 csio_wr_reg32(hw, (CSIO_SGE_FLBUF_SIZE2 + sge->csio_fl_align - 1)
1474 CSIO_SET_FLBUF_SIZE(hw, 3, CSIO_SGE_FLBUF_SIZE3); 1486 & ~(sge->csio_fl_align - 1), SGE_FL_BUFFER_SIZE2);
1487 csio_wr_reg32(hw, (CSIO_SGE_FLBUF_SIZE3 + sge->csio_fl_align - 1)
1488 & ~(sge->csio_fl_align - 1), SGE_FL_BUFFER_SIZE3);
1475 CSIO_SET_FLBUF_SIZE(hw, 4, CSIO_SGE_FLBUF_SIZE4); 1489 CSIO_SET_FLBUF_SIZE(hw, 4, CSIO_SGE_FLBUF_SIZE4);
1476 CSIO_SET_FLBUF_SIZE(hw, 5, CSIO_SGE_FLBUF_SIZE5); 1490 CSIO_SET_FLBUF_SIZE(hw, 5, CSIO_SGE_FLBUF_SIZE5);
1477 CSIO_SET_FLBUF_SIZE(hw, 6, CSIO_SGE_FLBUF_SIZE6); 1491 CSIO_SET_FLBUF_SIZE(hw, 6, CSIO_SGE_FLBUF_SIZE6);
@@ -1522,22 +1536,24 @@ void
1522csio_wr_sge_init(struct csio_hw *hw) 1536csio_wr_sge_init(struct csio_hw *hw)
1523{ 1537{
1524 /* 1538 /*
1525 * If we are master: 1539 * If we are master and chip is not initialized:
1526 * - If we plan to use the config file, we need to fixup some 1540 * - If we plan to use the config file, we need to fixup some
1527 * host specific registers, and read the rest of the SGE 1541 * host specific registers, and read the rest of the SGE
1528 * configuration. 1542 * configuration.
1529 * - If we dont plan to use the config file, we need to initialize 1543 * - If we dont plan to use the config file, we need to initialize
1530 * SGE entirely, including fixing the host specific registers. 1544 * SGE entirely, including fixing the host specific registers.
1545 * If we are master and chip is initialized, just read and work off of
1546 * the already initialized SGE values.
1531 * If we arent the master, we are only allowed to read and work off of 1547 * If we arent the master, we are only allowed to read and work off of
1532 * the already initialized SGE values. 1548 * the already initialized SGE values.
1533 * 1549 *
1534 * Therefore, before calling this function, we assume that the master- 1550 * Therefore, before calling this function, we assume that the master-
1535 * ship of the card, and whether to use config file or not, have 1551 * ship of the card, state and whether to use config file or not, have
1536 * already been decided. In other words, CSIO_HWF_USING_SOFT_PARAMS and 1552 * already been decided.
1537 * CSIO_HWF_MASTER should be set/unset.
1538 */ 1553 */
1539 if (csio_is_hw_master(hw)) { 1554 if (csio_is_hw_master(hw)) {
1540 csio_wr_fixup_host_params(hw); 1555 if (hw->fw_state != CSIO_DEV_STATE_INIT)
1556 csio_wr_fixup_host_params(hw);
1541 1557
1542 if (hw->flags & CSIO_HWF_USING_SOFT_PARAMS) 1558 if (hw->flags & CSIO_HWF_USING_SOFT_PARAMS)
1543 csio_wr_get_sge(hw); 1559 csio_wr_get_sge(hw);
diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 9bfdc9a3f897..292b24f9bf93 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1655,7 +1655,7 @@ static int fcoe_xmit(struct fc_lport *lport, struct fc_frame *fp)
1655 skb->priority = fcoe->priority; 1655 skb->priority = fcoe->priority;
1656 1656
1657 if (fcoe->netdev->priv_flags & IFF_802_1Q_VLAN && 1657 if (fcoe->netdev->priv_flags & IFF_802_1Q_VLAN &&
1658 fcoe->realdev->features & NETIF_F_HW_VLAN_TX) { 1658 fcoe->realdev->features & NETIF_F_HW_VLAN_CTAG_TX) {
1659 skb->vlan_tci = VLAN_TAG_PRESENT | 1659 skb->vlan_tci = VLAN_TAG_PRESENT |
1660 vlan_dev_vlan_id(fcoe->netdev); 1660 vlan_dev_vlan_id(fcoe->netdev);
1661 skb->dev = fcoe->realdev; 1661 skb->dev = fcoe->realdev;
diff --git a/drivers/scsi/scsi_netlink.c b/drivers/scsi/scsi_netlink.c
index 65123a21b97e..fe30ea94ffe6 100644
--- a/drivers/scsi/scsi_netlink.c
+++ b/drivers/scsi/scsi_netlink.c
@@ -50,7 +50,7 @@ scsi_nl_rcv_msg(struct sk_buff *skb)
50 u32 rlen; 50 u32 rlen;
51 int err, tport; 51 int err, tport;
52 52
53 while (skb->len >= NLMSG_SPACE(0)) { 53 while (skb->len >= NLMSG_HDRLEN) {
54 err = 0; 54 err = 0;
55 55
56 nlh = nlmsg_hdr(skb); 56 nlh = nlmsg_hdr(skb);
@@ -70,7 +70,7 @@ scsi_nl_rcv_msg(struct sk_buff *skb)
70 goto next_msg; 70 goto next_msg;
71 } 71 }
72 72
73 hdr = NLMSG_DATA(nlh); 73 hdr = nlmsg_data(nlh);
74 if ((hdr->version != SCSI_NL_VERSION) || 74 if ((hdr->version != SCSI_NL_VERSION) ||
75 (hdr->magic != SCSI_NL_MAGIC)) { 75 (hdr->magic != SCSI_NL_MAGIC)) {
76 err = -EPROTOTYPE; 76 err = -EPROTOTYPE;
diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
index e894ca7b54c0..e106c276aa00 100644
--- a/drivers/scsi/scsi_transport_fc.c
+++ b/drivers/scsi/scsi_transport_fc.c
@@ -35,7 +35,6 @@
35#include <scsi/scsi_transport.h> 35#include <scsi/scsi_transport.h>
36#include <scsi/scsi_transport_fc.h> 36#include <scsi/scsi_transport_fc.h>
37#include <scsi/scsi_cmnd.h> 37#include <scsi/scsi_cmnd.h>
38#include <linux/netlink.h>
39#include <net/netlink.h> 38#include <net/netlink.h>
40#include <scsi/scsi_netlink_fc.h> 39#include <scsi/scsi_netlink_fc.h>
41#include <scsi/scsi_bsg_fc.h> 40#include <scsi/scsi_bsg_fc.h>
@@ -534,7 +533,7 @@ fc_host_post_event(struct Scsi_Host *shost, u32 event_number,
534 struct nlmsghdr *nlh; 533 struct nlmsghdr *nlh;
535 struct fc_nl_event *event; 534 struct fc_nl_event *event;
536 const char *name; 535 const char *name;
537 u32 len, skblen; 536 u32 len;
538 int err; 537 int err;
539 538
540 if (!scsi_nl_sock) { 539 if (!scsi_nl_sock) {
@@ -543,21 +542,19 @@ fc_host_post_event(struct Scsi_Host *shost, u32 event_number,
543 } 542 }
544 543
545 len = FC_NL_MSGALIGN(sizeof(*event)); 544 len = FC_NL_MSGALIGN(sizeof(*event));
546 skblen = NLMSG_SPACE(len);
547 545
548 skb = alloc_skb(skblen, GFP_KERNEL); 546 skb = nlmsg_new(len, GFP_KERNEL);
549 if (!skb) { 547 if (!skb) {
550 err = -ENOBUFS; 548 err = -ENOBUFS;
551 goto send_fail; 549 goto send_fail;
552 } 550 }
553 551
554 nlh = nlmsg_put(skb, 0, 0, SCSI_TRANSPORT_MSG, 552 nlh = nlmsg_put(skb, 0, 0, SCSI_TRANSPORT_MSG, len, 0);
555 skblen - sizeof(*nlh), 0);
556 if (!nlh) { 553 if (!nlh) {
557 err = -ENOBUFS; 554 err = -ENOBUFS;
558 goto send_fail_skb; 555 goto send_fail_skb;
559 } 556 }
560 event = NLMSG_DATA(nlh); 557 event = nlmsg_data(nlh);
561 558
562 INIT_SCSI_NL_HDR(&event->snlh, SCSI_NL_TRANSPORT_FC, 559 INIT_SCSI_NL_HDR(&event->snlh, SCSI_NL_TRANSPORT_FC,
563 FC_NL_ASYNC_EVENT, len); 560 FC_NL_ASYNC_EVENT, len);
@@ -604,7 +601,7 @@ fc_host_post_vendor_event(struct Scsi_Host *shost, u32 event_number,
604 struct sk_buff *skb; 601 struct sk_buff *skb;
605 struct nlmsghdr *nlh; 602 struct nlmsghdr *nlh;
606 struct fc_nl_event *event; 603 struct fc_nl_event *event;
607 u32 len, skblen; 604 u32 len;
608 int err; 605 int err;
609 606
610 if (!scsi_nl_sock) { 607 if (!scsi_nl_sock) {
@@ -613,21 +610,19 @@ fc_host_post_vendor_event(struct Scsi_Host *shost, u32 event_number,
613 } 610 }
614 611
615 len = FC_NL_MSGALIGN(sizeof(*event) + data_len); 612 len = FC_NL_MSGALIGN(sizeof(*event) + data_len);
616 skblen = NLMSG_SPACE(len);
617 613
618 skb = alloc_skb(skblen, GFP_KERNEL); 614 skb = nlmsg_new(len, GFP_KERNEL);
619 if (!skb) { 615 if (!skb) {
620 err = -ENOBUFS; 616 err = -ENOBUFS;
621 goto send_vendor_fail; 617 goto send_vendor_fail;
622 } 618 }
623 619
624 nlh = nlmsg_put(skb, 0, 0, SCSI_TRANSPORT_MSG, 620 nlh = nlmsg_put(skb, 0, 0, SCSI_TRANSPORT_MSG, len, 0);
625 skblen - sizeof(*nlh), 0);
626 if (!nlh) { 621 if (!nlh) {
627 err = -ENOBUFS; 622 err = -ENOBUFS;
628 goto send_vendor_fail_skb; 623 goto send_vendor_fail_skb;
629 } 624 }
630 event = NLMSG_DATA(nlh); 625 event = nlmsg_data(nlh);
631 626
632 INIT_SCSI_NL_HDR(&event->snlh, SCSI_NL_TRANSPORT_FC, 627 INIT_SCSI_NL_HDR(&event->snlh, SCSI_NL_TRANSPORT_FC,
633 FC_NL_ASYNC_EVENT, len); 628 FC_NL_ASYNC_EVENT, len);
diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
index ce06e8772f3a..47799a33d6ca 100644
--- a/drivers/scsi/scsi_transport_iscsi.c
+++ b/drivers/scsi/scsi_transport_iscsi.c
@@ -2028,8 +2028,8 @@ int iscsi_recv_pdu(struct iscsi_cls_conn *conn, struct iscsi_hdr *hdr,
2028 struct iscsi_uevent *ev; 2028 struct iscsi_uevent *ev;
2029 char *pdu; 2029 char *pdu;
2030 struct iscsi_internal *priv; 2030 struct iscsi_internal *priv;
2031 int len = NLMSG_SPACE(sizeof(*ev) + sizeof(struct iscsi_hdr) + 2031 int len = nlmsg_total_size(sizeof(*ev) + sizeof(struct iscsi_hdr) +
2032 data_size); 2032 data_size);
2033 2033
2034 priv = iscsi_if_transport_lookup(conn->transport); 2034 priv = iscsi_if_transport_lookup(conn->transport);
2035 if (!priv) 2035 if (!priv)
@@ -2044,7 +2044,7 @@ int iscsi_recv_pdu(struct iscsi_cls_conn *conn, struct iscsi_hdr *hdr,
2044 } 2044 }
2045 2045
2046 nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0); 2046 nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0);
2047 ev = NLMSG_DATA(nlh); 2047 ev = nlmsg_data(nlh);
2048 memset(ev, 0, sizeof(*ev)); 2048 memset(ev, 0, sizeof(*ev));
2049 ev->transport_handle = iscsi_handle(conn->transport); 2049 ev->transport_handle = iscsi_handle(conn->transport);
2050 ev->type = ISCSI_KEVENT_RECV_PDU; 2050 ev->type = ISCSI_KEVENT_RECV_PDU;
@@ -2065,7 +2065,7 @@ int iscsi_offload_mesg(struct Scsi_Host *shost,
2065 struct nlmsghdr *nlh; 2065 struct nlmsghdr *nlh;
2066 struct sk_buff *skb; 2066 struct sk_buff *skb;
2067 struct iscsi_uevent *ev; 2067 struct iscsi_uevent *ev;
2068 int len = NLMSG_SPACE(sizeof(*ev) + data_size); 2068 int len = nlmsg_total_size(sizeof(*ev) + data_size);
2069 2069
2070 skb = alloc_skb(len, GFP_ATOMIC); 2070 skb = alloc_skb(len, GFP_ATOMIC);
2071 if (!skb) { 2071 if (!skb) {
@@ -2074,7 +2074,7 @@ int iscsi_offload_mesg(struct Scsi_Host *shost,
2074 } 2074 }
2075 2075
2076 nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0); 2076 nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0);
2077 ev = NLMSG_DATA(nlh); 2077 ev = nlmsg_data(nlh);
2078 memset(ev, 0, sizeof(*ev)); 2078 memset(ev, 0, sizeof(*ev));
2079 ev->type = type; 2079 ev->type = type;
2080 ev->transport_handle = iscsi_handle(transport); 2080 ev->transport_handle = iscsi_handle(transport);
@@ -2099,7 +2099,7 @@ void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
2099 struct sk_buff *skb; 2099 struct sk_buff *skb;
 	struct iscsi_uevent *ev;
 	struct iscsi_internal *priv;
-	int len = NLMSG_SPACE(sizeof(*ev));
+	int len = nlmsg_total_size(sizeof(*ev));
 
 	priv = iscsi_if_transport_lookup(conn->transport);
 	if (!priv)
@@ -2113,7 +2113,7 @@ void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
 	}
 
 	nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0);
-	ev = NLMSG_DATA(nlh);
+	ev = nlmsg_data(nlh);
 	ev->transport_handle = iscsi_handle(conn->transport);
 	ev->type = ISCSI_KEVENT_CONN_ERROR;
 	ev->r.connerror.error = error;
@@ -2134,7 +2134,7 @@ void iscsi_conn_login_event(struct iscsi_cls_conn *conn,
 	struct sk_buff *skb;
 	struct iscsi_uevent *ev;
 	struct iscsi_internal *priv;
-	int len = NLMSG_SPACE(sizeof(*ev));
+	int len = nlmsg_total_size(sizeof(*ev));
 
 	priv = iscsi_if_transport_lookup(conn->transport);
 	if (!priv)
@@ -2148,7 +2148,7 @@ void iscsi_conn_login_event(struct iscsi_cls_conn *conn,
 	}
 
 	nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0);
-	ev = NLMSG_DATA(nlh);
+	ev = nlmsg_data(nlh);
 	ev->transport_handle = iscsi_handle(conn->transport);
 	ev->type = ISCSI_KEVENT_CONN_LOGIN_STATE;
 	ev->r.conn_login.state = state;
@@ -2168,7 +2168,7 @@ void iscsi_post_host_event(uint32_t host_no, struct iscsi_transport *transport,
 	struct nlmsghdr *nlh;
 	struct sk_buff *skb;
 	struct iscsi_uevent *ev;
-	int len = NLMSG_SPACE(sizeof(*ev) + data_size);
+	int len = nlmsg_total_size(sizeof(*ev) + data_size);
 
 	skb = alloc_skb(len, GFP_NOIO);
 	if (!skb) {
@@ -2178,7 +2178,7 @@ void iscsi_post_host_event(uint32_t host_no, struct iscsi_transport *transport,
 	}
 
 	nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0);
-	ev = NLMSG_DATA(nlh);
+	ev = nlmsg_data(nlh);
 	ev->transport_handle = iscsi_handle(transport);
 	ev->type = ISCSI_KEVENT_HOST_EVENT;
 	ev->r.host_event.host_no = host_no;
@@ -2199,7 +2199,7 @@ void iscsi_ping_comp_event(uint32_t host_no, struct iscsi_transport *transport,
 	struct nlmsghdr *nlh;
 	struct sk_buff *skb;
 	struct iscsi_uevent *ev;
-	int len = NLMSG_SPACE(sizeof(*ev) + data_size);
+	int len = nlmsg_total_size(sizeof(*ev) + data_size);
 
 	skb = alloc_skb(len, GFP_NOIO);
 	if (!skb) {
@@ -2208,7 +2208,7 @@ void iscsi_ping_comp_event(uint32_t host_no, struct iscsi_transport *transport,
 	}
 
 	nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0);
-	ev = NLMSG_DATA(nlh);
+	ev = nlmsg_data(nlh);
 	ev->transport_handle = iscsi_handle(transport);
 	ev->type = ISCSI_KEVENT_PING_COMP;
 	ev->r.ping_comp.host_no = host_no;
@@ -2227,7 +2227,7 @@ iscsi_if_send_reply(uint32_t group, int seq, int type, int done, int multi,
 {
 	struct sk_buff *skb;
 	struct nlmsghdr *nlh;
-	int len = NLMSG_SPACE(size);
+	int len = nlmsg_total_size(size);
 	int flags = multi ? NLM_F_MULTI : 0;
 	int t = done ? NLMSG_DONE : type;
 
@@ -2239,24 +2239,24 @@ iscsi_if_send_reply(uint32_t group, int seq, int type, int done, int multi,
 
 	nlh = __nlmsg_put(skb, 0, 0, t, (len - sizeof(*nlh)), 0);
 	nlh->nlmsg_flags = flags;
-	memcpy(NLMSG_DATA(nlh), payload, size);
+	memcpy(nlmsg_data(nlh), payload, size);
 	return iscsi_multicast_skb(skb, group, GFP_ATOMIC);
 }
 
 static int
 iscsi_if_get_stats(struct iscsi_transport *transport, struct nlmsghdr *nlh)
 {
-	struct iscsi_uevent *ev = NLMSG_DATA(nlh);
+	struct iscsi_uevent *ev = nlmsg_data(nlh);
 	struct iscsi_stats *stats;
 	struct sk_buff *skbstat;
 	struct iscsi_cls_conn *conn;
 	struct nlmsghdr *nlhstat;
 	struct iscsi_uevent *evstat;
 	struct iscsi_internal *priv;
-	int len = NLMSG_SPACE(sizeof(*ev) +
+	int len = nlmsg_total_size(sizeof(*ev) +
 			      sizeof(struct iscsi_stats) +
 			      sizeof(struct iscsi_stats_custom) *
 			      ISCSI_STATS_CUSTOM_MAX);
 	int err = 0;
 
 	priv = iscsi_if_transport_lookup(transport);
@@ -2279,7 +2279,7 @@ iscsi_if_get_stats(struct iscsi_transport *transport, struct nlmsghdr *nlh)
 
 		nlhstat = __nlmsg_put(skbstat, 0, 0, 0,
				      (len - sizeof(*nlhstat)), 0);
-		evstat = NLMSG_DATA(nlhstat);
+		evstat = nlmsg_data(nlhstat);
 		memset(evstat, 0, sizeof(*evstat));
 		evstat->transport_handle = iscsi_handle(conn->transport);
 		evstat->type = nlh->nlmsg_type;
@@ -2292,12 +2292,12 @@ iscsi_if_get_stats(struct iscsi_transport *transport, struct nlmsghdr *nlh)
 		memset(stats, 0, sizeof(*stats));
 
 		transport->get_stats(conn, stats);
-		actual_size = NLMSG_SPACE(sizeof(struct iscsi_uevent) +
+		actual_size = nlmsg_total_size(sizeof(struct iscsi_uevent) +
					  sizeof(struct iscsi_stats) +
					  sizeof(struct iscsi_stats_custom) *
					  stats->custom_length);
 		actual_size -= sizeof(*nlhstat);
-		actual_size = NLMSG_LENGTH(actual_size);
+		actual_size = nlmsg_msg_size(actual_size);
 		skb_trim(skbstat, NLMSG_ALIGN(actual_size));
 		nlhstat->nlmsg_len = actual_size;
 
@@ -2321,7 +2321,7 @@ int iscsi_session_event(struct iscsi_cls_session *session,
 	struct iscsi_uevent *ev;
 	struct sk_buff *skb;
 	struct nlmsghdr *nlh;
-	int rc, len = NLMSG_SPACE(sizeof(*ev));
+	int rc, len = nlmsg_total_size(sizeof(*ev));
 
 	priv = iscsi_if_transport_lookup(session->transport);
 	if (!priv)
@@ -2337,7 +2337,7 @@ int iscsi_session_event(struct iscsi_cls_session *session,
 	}
 
 	nlh = __nlmsg_put(skb, 0, 0, 0, (len - sizeof(*nlh)), 0);
-	ev = NLMSG_DATA(nlh);
+	ev = nlmsg_data(nlh);
 	ev->transport_handle = iscsi_handle(session->transport);
 
 	ev->type = event;
@@ -2689,7 +2689,7 @@ iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev)
 static int
 iscsi_get_chap(struct iscsi_transport *transport, struct nlmsghdr *nlh)
 {
-	struct iscsi_uevent *ev = NLMSG_DATA(nlh);
+	struct iscsi_uevent *ev = nlmsg_data(nlh);
 	struct Scsi_Host *shost = NULL;
 	struct iscsi_chap_rec *chap_rec;
 	struct iscsi_internal *priv;
@@ -2708,7 +2708,7 @@ iscsi_get_chap(struct iscsi_transport *transport, struct nlmsghdr *nlh)
 		return -EINVAL;
 
 	chap_buf_size = (ev->u.get_chap.num_entries * sizeof(*chap_rec));
-	len = NLMSG_SPACE(sizeof(*ev) + chap_buf_size);
+	len = nlmsg_total_size(sizeof(*ev) + chap_buf_size);
 
 	shost = scsi_host_lookup(ev->u.get_chap.host_no);
 	if (!shost) {
@@ -2729,7 +2729,7 @@ iscsi_get_chap(struct iscsi_transport *transport, struct nlmsghdr *nlh)
 
 		nlhchap = __nlmsg_put(skbchap, 0, 0, 0,
				      (len - sizeof(*nlhchap)), 0);
-		evchap = NLMSG_DATA(nlhchap);
+		evchap = nlmsg_data(nlhchap);
 		memset(evchap, 0, sizeof(*evchap));
 		evchap->transport_handle = iscsi_handle(transport);
 		evchap->type = nlh->nlmsg_type;
@@ -2742,7 +2742,7 @@ iscsi_get_chap(struct iscsi_transport *transport, struct nlmsghdr *nlh)
 		err = transport->get_chap(shost, ev->u.get_chap.chap_tbl_idx,
					  &evchap->u.get_chap.num_entries, buf);
 
-		actual_size = NLMSG_SPACE(sizeof(*ev) + chap_buf_size);
+		actual_size = nlmsg_total_size(sizeof(*ev) + chap_buf_size);
 		skb_trim(skbchap, NLMSG_ALIGN(actual_size));
 		nlhchap->nlmsg_len = actual_size;
 
@@ -3068,7 +3068,7 @@ static int
 iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
 {
 	int err = 0;
-	struct iscsi_uevent *ev = NLMSG_DATA(nlh);
+	struct iscsi_uevent *ev = nlmsg_data(nlh);
 	struct iscsi_transport *transport = NULL;
 	struct iscsi_internal *priv;
 	struct iscsi_cls_session *session;
@@ -3256,7 +3256,7 @@ static void
 iscsi_if_rx(struct sk_buff *skb)
 {
 	mutex_lock(&rx_queue_mutex);
-	while (skb->len >= NLMSG_SPACE(0)) {
+	while (skb->len >= NLMSG_HDRLEN) {
 		int err;
 		uint32_t rlen;
 		struct nlmsghdr *nlh;
@@ -3269,7 +3269,7 @@ iscsi_if_rx(struct sk_buff *skb)
 			break;
 		}
 
-		ev = NLMSG_DATA(nlh);
+		ev = nlmsg_data(nlh);
 		rlen = NLMSG_ALIGN(nlh->nlmsg_len);
 		if (rlen > skb->len)
 			rlen = skb->len;