author		Linus Torvalds <torvalds@linux-foundation.org>	2018-05-05 02:51:10 -0400
committer	Linus Torvalds <torvalds@linux-foundation.org>	2018-05-05 02:51:10 -0400
commit		eb4f959b2697cea30cdaa0a7c351d7333f13f62c (patch)
tree		f628f9810dacb1799fc246c0eb04cbbc07b24508
parent		2f50037a1c687ac928bbd47b6eb959b39f748ada (diff)
parent		9aa169213d1166d30ae357a44abbeae93459339d (diff)
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Pull rdma fixes from Doug Ledford:
 "This is our first pull request of the rc cycle. It's not that it's
  been overly quiet, we were just waiting on a few things before sending
  this off.

  For instance, the 6 patch series from Intel for the hfi1 driver had
  actually been pulled in on Tuesday for a Wednesday pull request, only
  to have Jason notice something I missed, so we held off for some
  testing, and then on Thursday had to respin the series because the
  very first patch needed a minor fix (unnecessary cast is all).

  There is a sizable hns patch series in here, as well as a reasonably
  largish hfi1 patch series, then all of the lines of uapi updates are
  just the change to the new official Linux-OpenIB SPDX tag (a bunch of
  our files had what amounts to a BSD-2-Clause + MIT Warranty statement
  as their license as a result of the initial code submission years ago,
  and the SPDX folks decided it was unique enough to warrant a unique
  tag), then the typical mlx4 and mlx5 updates, and finally some cxgb4
  and core/cache/cma updates to round out the bunch.

  None of it was overly large by itself, but in the 2 1/2 weeks we've
  been collecting patches, it has added up :-/.

  As best I can tell, it's been through 0day (I got a notice about my
  last for-next push, but not for my for-rc push, but Jason seems to
  think that failure messages are prioritized and success messages not
  so much). It's also been through linux-next. And yes, we did notice
  in the context portion of the CMA query gid fix patch that there is a
  dubious BUG_ON() in the code, and have plans to audit our BUG_ON usage
  and remove it anywhere we can.

  Summary:

   - Various build fixes (USER_ACCESS=m and ADDR_TRANS turned off)

   - SPDX license tag cleanups (new tag Linux-OpenIB)

   - RoCE GID fixes related to default GIDs

   - Various fixes to: cxgb4, uverbs, cma, iwpm, rxe, hns (big batch),
     mlx4, mlx5, and hfi1 (medium batch)"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (52 commits)
  RDMA/cma: Do not query GID during QP state transition to RTR
  IB/mlx4: Fix integer overflow when calculating optimal MTT size
  IB/hfi1: Fix memory leak in exception path in get_irq_affinity()
  IB/{hfi1, rdmavt}: Fix memory leak in hfi1_alloc_devdata() upon failure
  IB/hfi1: Fix NULL pointer dereference when invalid num_vls is used
  IB/hfi1: Fix loss of BECN with AHG
  IB/hfi1 Use correct type for num_user_context
  IB/hfi1: Fix handling of FECN marked multicast packet
  IB/core: Make ib_mad_client_id atomic
  iw_cxgb4: Atomically flush per QP HW CQEs
  IB/uverbs: Fix kernel crash during MR deregistration flow
  IB/uverbs: Prevent reregistration of DM_MR to regular MR
  RDMA/mlx4: Add missed RSS hash inner header flag
  RDMA/hns: Fix a couple misspellings
  RDMA/hns: Submit bad wr
  RDMA/hns: Update assignment method for owner field of send wqe
  RDMA/hns: Adjust the order of cleanup hem table
  RDMA/hns: Only assign dqpn if IB_QP_PATH_DEST_QPN bit is set
  RDMA/hns: Remove some unnecessary attr_mask judgement
  RDMA/hns: Only assign mtu if IB_QP_PATH_MTU bit is set
  ...
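[ The "Make ib_mad_client_id atomic" fix in this batch is the classic
  unlocked read-modify-write race: "++counter" on a plain u32 can hand
  two concurrent MAD agent registrations the same TID prefix. A minimal
  sketch of the before/after pattern (illustrative names, not the
  mad.c code itself, which appears in the diff below):

	/* before: non-atomic RMW; two CPUs can observe the same value */
	static u32 client_id;
	static u32 alloc_id_racy(void)
	{
		return ++client_id;		/* load, add, store: racy */
	}

	/* after: atomic_inc_return() returns the incremented value, so
	 * every caller is guaranteed a unique ID */
	static atomic_t client_id_atomic = ATOMIC_INIT(0);
	static u32 alloc_id(void)
	{
		return (u32)atomic_inc_return(&client_id_atomic);
	}
]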
-rw-r--r--  drivers/infiniband/Kconfig  5
-rw-r--r--  drivers/infiniband/core/cache.c  55
-rw-r--r--  drivers/infiniband/core/cma.c  60
-rw-r--r--  drivers/infiniband/core/iwpm_util.c  5
-rw-r--r--  drivers/infiniband/core/mad.c  4
-rw-r--r--  drivers/infiniband/core/roce_gid_mgmt.c  26
-rw-r--r--  drivers/infiniband/core/ucma.c  44
-rw-r--r--  drivers/infiniband/core/uverbs_cmd.c  6
-rw-r--r--  drivers/infiniband/core/uverbs_ioctl.c  9
-rw-r--r--  drivers/infiniband/core/uverbs_std_types_flow_action.c  12
-rw-r--r--  drivers/infiniband/core/verbs.c  1
-rw-r--r--  drivers/infiniband/hw/cxgb4/cq.c  11
-rw-r--r--  drivers/infiniband/hw/cxgb4/device.c  9
-rw-r--r--  drivers/infiniband/hw/cxgb4/iw_cxgb4.h  6
-rw-r--r--  drivers/infiniband/hw/cxgb4/qp.c  4
-rw-r--r--  drivers/infiniband/hw/cxgb4/resource.c  26
-rw-r--r--  drivers/infiniband/hw/hfi1/affinity.c  11
-rw-r--r--  drivers/infiniband/hw/hfi1/driver.c  19
-rw-r--r--  drivers/infiniband/hw/hfi1/hfi.h  8
-rw-r--r--  drivers/infiniband/hw/hfi1/init.c  43
-rw-r--r--  drivers/infiniband/hw/hfi1/pcie.c  3
-rw-r--r--  drivers/infiniband/hw/hfi1/platform.c  1
-rw-r--r--  drivers/infiniband/hw/hfi1/qsfp.c  2
-rw-r--r--  drivers/infiniband/hw/hfi1/ruc.c  50
-rw-r--r--  drivers/infiniband/hw/hfi1/ud.c  4
-rw-r--r--  drivers/infiniband/hw/hns/hns_roce_hem.c  12
-rw-r--r--  drivers/infiniband/hw/hns/hns_roce_hw_v2.c  49
-rw-r--r--  drivers/infiniband/hw/hns/hns_roce_qp.c  2
-rw-r--r--  drivers/infiniband/hw/mlx4/mr.c  2
-rw-r--r--  drivers/infiniband/hw/mlx4/qp.c  3
-rw-r--r--  drivers/infiniband/hw/mlx5/Kconfig  1
-rw-r--r--  drivers/infiniband/hw/mlx5/main.c  7
-rw-r--r--  drivers/infiniband/hw/mlx5/mr.c  32
-rw-r--r--  drivers/infiniband/hw/mlx5/qp.c  22
-rw-r--r--  drivers/infiniband/hw/nes/nes_nic.c  2
-rw-r--r--  drivers/infiniband/sw/rxe/rxe_opcode.c  2
-rw-r--r--  drivers/infiniband/sw/rxe/rxe_req.c  1
-rw-r--r--  drivers/infiniband/sw/rxe/rxe_resp.c  6
-rw-r--r--  drivers/infiniband/ulp/ipoib/ipoib_main.c  2
-rw-r--r--  drivers/infiniband/ulp/srp/Kconfig  2
-rw-r--r--  drivers/infiniband/ulp/srpt/Kconfig  2
-rw-r--r--  drivers/nvme/host/Kconfig  2
-rw-r--r--  drivers/nvme/target/Kconfig  2
-rw-r--r--  fs/cifs/Kconfig  2
-rw-r--r--  include/uapi/linux/if_infiniband.h  2
-rw-r--r--  include/uapi/linux/rds.h  2
-rw-r--r--  include/uapi/linux/tls.h  2
-rw-r--r--  include/uapi/rdma/cxgb3-abi.h  2
-rw-r--r--  include/uapi/rdma/cxgb4-abi.h  2
-rw-r--r--  include/uapi/rdma/hns-abi.h  2
-rw-r--r--  include/uapi/rdma/ib_user_cm.h  2
-rw-r--r--  include/uapi/rdma/ib_user_ioctl_verbs.h  2
-rw-r--r--  include/uapi/rdma/ib_user_mad.h  2
-rw-r--r--  include/uapi/rdma/ib_user_sa.h  2
-rw-r--r--  include/uapi/rdma/ib_user_verbs.h  2
-rw-r--r--  include/uapi/rdma/mlx4-abi.h  2
-rw-r--r--  include/uapi/rdma/mlx5-abi.h  2
-rw-r--r--  include/uapi/rdma/mthca-abi.h  2
-rw-r--r--  include/uapi/rdma/nes-abi.h  2
-rw-r--r--  include/uapi/rdma/qedr-abi.h  2
-rw-r--r--  include/uapi/rdma/rdma_user_cm.h  2
-rw-r--r--  include/uapi/rdma/rdma_user_ioctl.h  2
-rw-r--r--  include/uapi/rdma/rdma_user_rxe.h  2
63 files changed, 407 insertions, 208 deletions
diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index ee270e065ba9..2a972ed6851b 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -61,9 +61,12 @@ config INFINIBAND_ON_DEMAND_PAGING
 	  pages on demand instead.
 
 config INFINIBAND_ADDR_TRANS
-	bool
+	bool "RDMA/CM"
 	depends on INFINIBAND
 	default y
+	---help---
+	  Support for RDMA communication manager (CM).
+	  This allows for a generic connection abstraction over RDMA.
 
 config INFINIBAND_ADDR_TRANS_CONFIGFS
 	bool
diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
index e337b08de2ff..fb2d347f760f 100644
--- a/drivers/infiniband/core/cache.c
+++ b/drivers/infiniband/core/cache.c
@@ -291,14 +291,18 @@ static int find_gid(struct ib_gid_table *table, const union ib_gid *gid,
 	 * so lookup free slot only if requested.
 	 */
 	if (pempty && empty < 0) {
-		if (data->props & GID_TABLE_ENTRY_INVALID) {
-			/* Found an invalid (free) entry; allocate it */
-			if (data->props & GID_TABLE_ENTRY_DEFAULT) {
-				if (default_gid)
-					empty = curr_index;
-			} else {
-				empty = curr_index;
-			}
+		if (data->props & GID_TABLE_ENTRY_INVALID &&
+		    (default_gid ==
+		     !!(data->props & GID_TABLE_ENTRY_DEFAULT))) {
+			/*
+			 * Found an invalid (free) entry; allocate it.
+			 * If default GID is requested, then our
+			 * found slot must be one of the DEFAULT
+			 * reserved slots or we fail.
+			 * This ensures that only DEFAULT reserved
+			 * slots are used for default property GIDs.
+			 */
+			empty = curr_index;
 		}
 	}
 
@@ -420,8 +424,10 @@ int ib_cache_gid_add(struct ib_device *ib_dev, u8 port,
 	return ret;
 }
 
-int ib_cache_gid_del(struct ib_device *ib_dev, u8 port,
-		     union ib_gid *gid, struct ib_gid_attr *attr)
+static int
+_ib_cache_gid_del(struct ib_device *ib_dev, u8 port,
+		  union ib_gid *gid, struct ib_gid_attr *attr,
+		  unsigned long mask, bool default_gid)
 {
 	struct ib_gid_table *table;
 	int ret = 0;
@@ -431,11 +437,7 @@ int ib_cache_gid_del(struct ib_device *ib_dev, u8 port,
 
 	mutex_lock(&table->lock);
 
-	ix = find_gid(table, gid, attr, false,
-		      GID_ATTR_FIND_MASK_GID |
-		      GID_ATTR_FIND_MASK_GID_TYPE |
-		      GID_ATTR_FIND_MASK_NETDEV,
-		      NULL);
+	ix = find_gid(table, gid, attr, default_gid, mask, NULL);
 	if (ix < 0) {
 		ret = -EINVAL;
 		goto out_unlock;
@@ -452,6 +454,17 @@ out_unlock:
 	return ret;
 }
 
+int ib_cache_gid_del(struct ib_device *ib_dev, u8 port,
+		     union ib_gid *gid, struct ib_gid_attr *attr)
+{
+	unsigned long mask = GID_ATTR_FIND_MASK_GID |
+			     GID_ATTR_FIND_MASK_GID_TYPE |
+			     GID_ATTR_FIND_MASK_DEFAULT |
+			     GID_ATTR_FIND_MASK_NETDEV;
+
+	return _ib_cache_gid_del(ib_dev, port, gid, attr, mask, false);
+}
+
 int ib_cache_gid_del_all_netdev_gids(struct ib_device *ib_dev, u8 port,
 				     struct net_device *ndev)
 {
@@ -728,7 +741,7 @@ void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
 				  unsigned long gid_type_mask,
 				  enum ib_cache_gid_default_mode mode)
 {
-	union ib_gid gid;
+	union ib_gid gid = { };
 	struct ib_gid_attr gid_attr;
 	struct ib_gid_table *table;
 	unsigned int gid_type;
@@ -736,7 +749,9 @@ void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
 
 	table = ib_dev->cache.ports[port - rdma_start_port(ib_dev)].gid;
 
-	make_default_gid(ndev, &gid);
+	mask = GID_ATTR_FIND_MASK_GID_TYPE |
+	       GID_ATTR_FIND_MASK_DEFAULT |
+	       GID_ATTR_FIND_MASK_NETDEV;
 	memset(&gid_attr, 0, sizeof(gid_attr));
 	gid_attr.ndev = ndev;
 
@@ -747,12 +762,12 @@ void ib_cache_gid_set_default_gid(struct ib_device *ib_dev, u8 port,
 		gid_attr.gid_type = gid_type;
 
 		if (mode == IB_CACHE_GID_DEFAULT_MODE_SET) {
-			mask = GID_ATTR_FIND_MASK_GID_TYPE |
-			       GID_ATTR_FIND_MASK_DEFAULT;
+			make_default_gid(ndev, &gid);
 			__ib_cache_gid_add(ib_dev, port, &gid,
 					   &gid_attr, mask, true);
 		} else if (mode == IB_CACHE_GID_DEFAULT_MODE_DELETE) {
-			ib_cache_gid_del(ib_dev, port, &gid, &gid_attr);
+			_ib_cache_gid_del(ib_dev, port, &gid,
+					  &gid_attr, mask, true);
 		}
 	}
 }
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 51a641002e10..a693fcd4c513 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -382,6 +382,8 @@ struct cma_hdr {
 #define CMA_VERSION 0x00
 
 struct cma_req_info {
+	struct sockaddr_storage listen_addr_storage;
+	struct sockaddr_storage src_addr_storage;
 	struct ib_device *device;
 	int port;
 	union ib_gid local_gid;
@@ -866,7 +868,6 @@ static int cma_modify_qp_rtr(struct rdma_id_private *id_priv,
 {
 	struct ib_qp_attr qp_attr;
 	int qp_attr_mask, ret;
-	union ib_gid sgid;
 
 	mutex_lock(&id_priv->qp_mutex);
 	if (!id_priv->id.qp) {
@@ -889,12 +890,6 @@ static int cma_modify_qp_rtr(struct rdma_id_private *id_priv,
 	if (ret)
 		goto out;
 
-	ret = ib_query_gid(id_priv->id.device, id_priv->id.port_num,
-			   rdma_ah_read_grh(&qp_attr.ah_attr)->sgid_index,
-			   &sgid, NULL);
-	if (ret)
-		goto out;
-
 	BUG_ON(id_priv->cma_dev->device != id_priv->id.device);
 
 	if (conn_param)
@@ -1340,11 +1335,11 @@ static bool validate_net_dev(struct net_device *net_dev,
 }
 
 static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event,
-					  const struct cma_req_info *req)
+					  struct cma_req_info *req)
 {
-	struct sockaddr_storage listen_addr_storage, src_addr_storage;
-	struct sockaddr *listen_addr = (struct sockaddr *)&listen_addr_storage,
-			*src_addr = (struct sockaddr *)&src_addr_storage;
+	struct sockaddr *listen_addr =
+			(struct sockaddr *)&req->listen_addr_storage;
+	struct sockaddr *src_addr = (struct sockaddr *)&req->src_addr_storage;
 	struct net_device *net_dev;
 	const union ib_gid *gid = req->has_gid ? &req->local_gid : NULL;
 	int err;
@@ -1359,11 +1354,6 @@ static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event,
 	if (!net_dev)
 		return ERR_PTR(-ENODEV);
 
-	if (!validate_net_dev(net_dev, listen_addr, src_addr)) {
-		dev_put(net_dev);
-		return ERR_PTR(-EHOSTUNREACH);
-	}
-
 	return net_dev;
 }
 
@@ -1490,15 +1480,51 @@ static struct rdma_id_private *cma_id_from_event(struct ib_cm_id *cm_id,
 		}
 	}
 
+	/*
+	 * Net namespace might be getting deleted while route lookup,
+	 * cm_id lookup is in progress. Therefore, perform netdevice
+	 * validation, cm_id lookup under rcu lock.
+	 * RCU lock along with netdevice state check, synchronizes with
+	 * netdevice migrating to different net namespace and also avoids
+	 * case where net namespace doesn't get deleted while lookup is in
+	 * progress.
+	 * If the device state is not IFF_UP, its properties such as ifindex
+	 * and nd_net cannot be trusted to remain valid without rcu lock.
+	 * net/core/dev.c change_net_namespace() ensures to synchronize with
+	 * ongoing operations on net device after device is closed using
+	 * synchronize_net().
+	 */
+	rcu_read_lock();
+	if (*net_dev) {
+		/*
+		 * If netdevice is down, it is likely that it is administratively
+		 * down or it might be migrating to different namespace.
+		 * In that case avoid further processing, as the net namespace
+		 * or ifindex may change.
+		 */
+		if (((*net_dev)->flags & IFF_UP) == 0) {
+			id_priv = ERR_PTR(-EHOSTUNREACH);
+			goto err;
+		}
+
+		if (!validate_net_dev(*net_dev,
+				      (struct sockaddr *)&req.listen_addr_storage,
+				      (struct sockaddr *)&req.src_addr_storage)) {
+			id_priv = ERR_PTR(-EHOSTUNREACH);
+			goto err;
+		}
+	}
+
 	bind_list = cma_ps_find(*net_dev ? dev_net(*net_dev) : &init_net,
 				rdma_ps_from_service_id(req.service_id),
 				cma_port_from_service_id(req.service_id));
 	id_priv = cma_find_listener(bind_list, cm_id, ib_event, &req, *net_dev);
+err:
+	rcu_read_unlock();
 	if (IS_ERR(id_priv) && *net_dev) {
 		dev_put(*net_dev);
 		*net_dev = NULL;
 	}
-
 	return id_priv;
 }
 
diff --git a/drivers/infiniband/core/iwpm_util.c b/drivers/infiniband/core/iwpm_util.c
index 9821ae900f6d..da12da1c36f6 100644
--- a/drivers/infiniband/core/iwpm_util.c
+++ b/drivers/infiniband/core/iwpm_util.c
@@ -114,7 +114,7 @@ int iwpm_create_mapinfo(struct sockaddr_storage *local_sockaddr,
 			struct sockaddr_storage *mapped_sockaddr,
 			u8 nl_client)
 {
-	struct hlist_head *hash_bucket_head;
+	struct hlist_head *hash_bucket_head = NULL;
 	struct iwpm_mapping_info *map_info;
 	unsigned long flags;
 	int ret = -EINVAL;
@@ -142,6 +142,9 @@ int iwpm_create_mapinfo(struct sockaddr_storage *local_sockaddr,
 		}
 	}
 	spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags);
+
+	if (!hash_bucket_head)
+		kfree(map_info);
 	return ret;
 }
 
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index c50596f7f98a..b28452a55a08 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -59,7 +59,7 @@ module_param_named(recv_queue_size, mad_recvq_size, int, 0444);
 MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests");
 
 static struct list_head ib_mad_port_list;
-static u32 ib_mad_client_id = 0;
+static atomic_t ib_mad_client_id = ATOMIC_INIT(0);
 
 /* Port list lock */
 static DEFINE_SPINLOCK(ib_mad_port_list_lock);
@@ -377,7 +377,7 @@ struct ib_mad_agent *ib_register_mad_agent(struct ib_device *device,
 	}
 
 	spin_lock_irqsave(&port_priv->reg_lock, flags);
-	mad_agent_priv->agent.hi_tid = ++ib_mad_client_id;
+	mad_agent_priv->agent.hi_tid = atomic_inc_return(&ib_mad_client_id);
 
 	/*
 	 * Make sure MAD registration (if supplied)
diff --git a/drivers/infiniband/core/roce_gid_mgmt.c b/drivers/infiniband/core/roce_gid_mgmt.c
index cc2966380c0c..c0e4fd55e2cc 100644
--- a/drivers/infiniband/core/roce_gid_mgmt.c
+++ b/drivers/infiniband/core/roce_gid_mgmt.c
@@ -255,6 +255,7 @@ static void bond_delete_netdev_default_gids(struct ib_device *ib_dev,
 					     struct net_device *rdma_ndev)
 {
 	struct net_device *real_dev = rdma_vlan_dev_real_dev(event_ndev);
+	unsigned long gid_type_mask;
 
 	if (!rdma_ndev)
 		return;
@@ -264,21 +265,22 @@ static void bond_delete_netdev_default_gids(struct ib_device *ib_dev,
 
 	rcu_read_lock();
 
-	if (rdma_is_upper_dev_rcu(rdma_ndev, event_ndev) &&
-	    is_eth_active_slave_of_bonding_rcu(rdma_ndev, real_dev) ==
-	    BONDING_SLAVE_STATE_INACTIVE) {
-		unsigned long gid_type_mask;
-
+	if (((rdma_ndev != event_ndev &&
+	      !rdma_is_upper_dev_rcu(rdma_ndev, event_ndev)) ||
+	     is_eth_active_slave_of_bonding_rcu(rdma_ndev, real_dev)
+	     ==
+	     BONDING_SLAVE_STATE_INACTIVE)) {
 		rcu_read_unlock();
+		return;
+	}
 
-		gid_type_mask = roce_gid_type_mask_support(ib_dev, port);
+	rcu_read_unlock();
 
-		ib_cache_gid_set_default_gid(ib_dev, port, rdma_ndev,
-					     gid_type_mask,
-					     IB_CACHE_GID_DEFAULT_MODE_DELETE);
-	} else {
-		rcu_read_unlock();
-	}
+	gid_type_mask = roce_gid_type_mask_support(ib_dev, port);
+
+	ib_cache_gid_set_default_gid(ib_dev, port, rdma_ndev,
+				     gid_type_mask,
+				     IB_CACHE_GID_DEFAULT_MODE_DELETE);
 }
 
 static void enum_netdev_ipv4_ips(struct ib_device *ib_dev,
diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
index 74329483af6d..eab43b17e9cf 100644
--- a/drivers/infiniband/core/ucma.c
+++ b/drivers/infiniband/core/ucma.c
@@ -159,6 +159,23 @@ static void ucma_put_ctx(struct ucma_context *ctx)
 	complete(&ctx->comp);
 }
 
+/*
+ * Same as ucm_get_ctx but requires that ->cm_id->device is valid, eg that the
+ * CM_ID is bound.
+ */
+static struct ucma_context *ucma_get_ctx_dev(struct ucma_file *file, int id)
+{
+	struct ucma_context *ctx = ucma_get_ctx(file, id);
+
+	if (IS_ERR(ctx))
+		return ctx;
+	if (!ctx->cm_id->device) {
+		ucma_put_ctx(ctx);
+		return ERR_PTR(-EINVAL);
+	}
+	return ctx;
+}
+
 static void ucma_close_event_id(struct work_struct *work)
 {
 	struct ucma_event *uevent_close = container_of(work, struct ucma_event, close_work);
@@ -683,7 +700,7 @@ static ssize_t ucma_resolve_ip(struct ucma_file *file,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	if (!rdma_addr_size_in6(&cmd.src_addr) ||
+	if ((cmd.src_addr.sin6_family && !rdma_addr_size_in6(&cmd.src_addr)) ||
 	    !rdma_addr_size_in6(&cmd.dst_addr))
 		return -EINVAL;
 
@@ -734,7 +751,7 @@ static ssize_t ucma_resolve_route(struct ucma_file *file,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1050,7 +1067,7 @@ static ssize_t ucma_connect(struct ucma_file *file, const char __user *inbuf,
 	if (!cmd.conn_param.valid)
 		return -EINVAL;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1092,7 +1109,7 @@ static ssize_t ucma_accept(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1120,7 +1137,7 @@ static ssize_t ucma_reject(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1139,7 +1156,7 @@ static ssize_t ucma_disconnect(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -1167,15 +1184,10 @@ static ssize_t ucma_init_qp_attr(struct ucma_file *file,
 	if (cmd.qp_state > IB_QPS_ERR)
 		return -EINVAL;
 
-	ctx = ucma_get_ctx(file, cmd.id);
+	ctx = ucma_get_ctx_dev(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
-	if (!ctx->cm_id->device) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	resp.qp_attr_mask = 0;
 	memset(&qp_attr, 0, sizeof qp_attr);
 	qp_attr.qp_state = cmd.qp_state;
@@ -1316,13 +1328,13 @@ static ssize_t ucma_set_option(struct ucma_file *file, const char __user *inbuf,
 	if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
 		return -EFAULT;
 
+	if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE))
+		return -EINVAL;
+
 	ctx = ucma_get_ctx(file, cmd.id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
-	if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE))
-		return -EINVAL;
-
 	optval = memdup_user(u64_to_user_ptr(cmd.optval),
 			     cmd.optlen);
 	if (IS_ERR(optval)) {
@@ -1384,7 +1396,7 @@ static ssize_t ucma_process_join(struct ucma_file *file,
 	else
 		return -EINVAL;
 
-	ctx = ucma_get_ctx(file, cmd->id);
+	ctx = ucma_get_ctx_dev(file, cmd->id);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 13cb5e4deb86..21a887c9523b 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -691,6 +691,7 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file,
 
 	mr->device  = pd->device;
 	mr->pd      = pd;
+	mr->dm	    = NULL;
 	mr->uobject = uobj;
 	atomic_inc(&pd->usecnt);
 	mr->res.type = RDMA_RESTRACK_MR;
@@ -765,6 +766,11 @@ ssize_t ib_uverbs_rereg_mr(struct ib_uverbs_file *file,
 
 	mr = uobj->object;
 
+	if (mr->dm) {
+		ret = -EINVAL;
+		goto put_uobjs;
+	}
+
 	if (cmd.flags & IB_MR_REREG_ACCESS) {
 		ret = ib_check_mr_access(cmd.access_flags);
 		if (ret)
diff --git a/drivers/infiniband/core/uverbs_ioctl.c b/drivers/infiniband/core/uverbs_ioctl.c
index 8c93970dc8f1..8d32c4ae368c 100644
--- a/drivers/infiniband/core/uverbs_ioctl.c
+++ b/drivers/infiniband/core/uverbs_ioctl.c
@@ -234,6 +234,15 @@ static int uverbs_validate_kernel_mandatory(const struct uverbs_method_spec *met
 			return -EINVAL;
 	}
 
+	for (; i < method_spec->num_buckets; i++) {
+		struct uverbs_attr_spec_hash *attr_spec_bucket =
+			method_spec->attr_buckets[i];
+
+		if (!bitmap_empty(attr_spec_bucket->mandatory_attrs_bitmask,
+				  attr_spec_bucket->num_attrs))
+			return -EINVAL;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/infiniband/core/uverbs_std_types_flow_action.c b/drivers/infiniband/core/uverbs_std_types_flow_action.c
index cbcec3da12f6..b4f016dfa23d 100644
--- a/drivers/infiniband/core/uverbs_std_types_flow_action.c
+++ b/drivers/infiniband/core/uverbs_std_types_flow_action.c
@@ -363,28 +363,28 @@ static int UVERBS_HANDLER(UVERBS_METHOD_FLOW_ACTION_ESP_MODIFY)(struct ib_device
 
 static const struct uverbs_attr_spec uverbs_flow_action_esp_keymat[] = {
 	[IB_UVERBS_FLOW_ACTION_ESP_KEYMAT_AES_GCM] = {
-		.ptr = {
+		{ .ptr = {
 			.type = UVERBS_ATTR_TYPE_PTR_IN,
 			UVERBS_ATTR_TYPE(struct ib_uverbs_flow_action_esp_keymat_aes_gcm),
 			.flags = UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO,
-		},
+		} },
 	},
 };
 
 static const struct uverbs_attr_spec uverbs_flow_action_esp_replay[] = {
 	[IB_UVERBS_FLOW_ACTION_ESP_REPLAY_NONE] = {
-		.ptr = {
+		{ .ptr = {
 			.type = UVERBS_ATTR_TYPE_PTR_IN,
 			/* No need to specify any data */
 			.len = 0,
-		}
+		} }
 	},
 	[IB_UVERBS_FLOW_ACTION_ESP_REPLAY_BMP] = {
-		.ptr = {
+		{ .ptr = {
 			.type = UVERBS_ATTR_TYPE_PTR_IN,
 			UVERBS_ATTR_STRUCT(struct ib_uverbs_flow_action_esp_replay_bmp, size),
 			.flags = UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO,
-		}
+		} }
 	},
 };
 
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 7eff3aeffe01..6ddfb1fade79 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1656,6 +1656,7 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd,
 	if (!IS_ERR(mr)) {
 		mr->device  = pd->device;
 		mr->pd      = pd;
+		mr->dm      = NULL;
 		mr->uobject = NULL;
 		atomic_inc(&pd->usecnt);
 		mr->need_inval = false;
diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c
index 6f2b26126c64..2be2e1ac1b5f 100644
--- a/drivers/infiniband/hw/cxgb4/cq.c
+++ b/drivers/infiniband/hw/cxgb4/cq.c
@@ -315,7 +315,7 @@ static void advance_oldest_read(struct t4_wq *wq)
  * Deal with out-of-order and/or completions that complete
  * prior unsignalled WRs.
  */
-void c4iw_flush_hw_cq(struct c4iw_cq *chp)
+void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp)
 {
 	struct t4_cqe *hw_cqe, *swcqe, read_cqe;
 	struct c4iw_qp *qhp;
@@ -339,6 +339,13 @@ void c4iw_flush_hw_cq(struct c4iw_cq *chp)
 		if (qhp == NULL)
 			goto next_cqe;
 
+		if (flush_qhp != qhp) {
+			spin_lock(&qhp->lock);
+
+			if (qhp->wq.flushed == 1)
+				goto next_cqe;
+		}
+
 		if (CQE_OPCODE(hw_cqe) == FW_RI_TERMINATE)
 			goto next_cqe;
 
@@ -390,6 +397,8 @@ void c4iw_flush_hw_cq(struct c4iw_cq *chp)
 next_cqe:
 		t4_hwcq_consume(&chp->cq);
 		ret = t4_next_hw_cqe(&chp->cq, &hw_cqe);
+		if (qhp && flush_qhp != qhp)
+			spin_unlock(&qhp->lock);
 	}
 }
 
diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c
index feeb8ee6f4a2..44161ca4d2a8 100644
--- a/drivers/infiniband/hw/cxgb4/device.c
+++ b/drivers/infiniband/hw/cxgb4/device.c
@@ -875,6 +875,11 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev)
 
 	rdev->status_page->db_off = 0;
 
+	init_completion(&rdev->rqt_compl);
+	init_completion(&rdev->pbl_compl);
+	kref_init(&rdev->rqt_kref);
+	kref_init(&rdev->pbl_kref);
+
 	return 0;
 err_free_status_page_and_wr_log:
 	if (c4iw_wr_log && rdev->wr_log)
@@ -893,13 +898,15 @@ destroy_resource:
 
 static void c4iw_rdev_close(struct c4iw_rdev *rdev)
 {
-	destroy_workqueue(rdev->free_workq);
 	kfree(rdev->wr_log);
 	c4iw_release_dev_ucontext(rdev, &rdev->uctx);
 	free_page((unsigned long)rdev->status_page);
 	c4iw_pblpool_destroy(rdev);
 	c4iw_rqtpool_destroy(rdev);
+	wait_for_completion(&rdev->pbl_compl);
+	wait_for_completion(&rdev->rqt_compl);
 	c4iw_ocqp_pool_destroy(rdev);
+	destroy_workqueue(rdev->free_workq);
 	c4iw_destroy_resource(&rdev->resource);
 }
 
diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
index cc929002c05e..831027717121 100644
--- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
+++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
@@ -185,6 +185,10 @@ struct c4iw_rdev {
 	struct wr_log_entry *wr_log;
 	int wr_log_size;
 	struct workqueue_struct *free_workq;
+	struct completion rqt_compl;
+	struct completion pbl_compl;
+	struct kref rqt_kref;
+	struct kref pbl_kref;
 };
 
 static inline int c4iw_fatal_error(struct c4iw_rdev *rdev)
@@ -1049,7 +1053,7 @@ u32 c4iw_pblpool_alloc(struct c4iw_rdev *rdev, int size);
 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size);
 u32 c4iw_ocqp_pool_alloc(struct c4iw_rdev *rdev, int size);
 void c4iw_ocqp_pool_free(struct c4iw_rdev *rdev, u32 addr, int size);
-void c4iw_flush_hw_cq(struct c4iw_cq *chp);
+void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp);
 void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count);
 int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp);
 int c4iw_flush_rq(struct t4_wq *wq, struct t4_cq *cq, int count);
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
index de77b6027d69..ae167b686608 100644
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -1343,12 +1343,12 @@ static void __flush_qp(struct c4iw_qp *qhp, struct c4iw_cq *rchp,
 	qhp->wq.flushed = 1;
 	t4_set_wq_in_error(&qhp->wq);
 
-	c4iw_flush_hw_cq(rchp);
+	c4iw_flush_hw_cq(rchp, qhp);
 	c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count);
 	rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count);
 
 	if (schp != rchp)
-		c4iw_flush_hw_cq(schp);
+		c4iw_flush_hw_cq(schp, qhp);
 	sq_flushed = c4iw_flush_sq(qhp);
 
 	spin_unlock(&qhp->lock);
diff --git a/drivers/infiniband/hw/cxgb4/resource.c b/drivers/infiniband/hw/cxgb4/resource.c
index 3cf25997ed2b..0ef25ae05e6f 100644
--- a/drivers/infiniband/hw/cxgb4/resource.c
+++ b/drivers/infiniband/hw/cxgb4/resource.c
@@ -260,12 +260,22 @@ u32 c4iw_pblpool_alloc(struct c4iw_rdev *rdev, int size)
 		rdev->stats.pbl.cur += roundup(size, 1 << MIN_PBL_SHIFT);
 		if (rdev->stats.pbl.cur > rdev->stats.pbl.max)
 			rdev->stats.pbl.max = rdev->stats.pbl.cur;
+		kref_get(&rdev->pbl_kref);
 	} else
 		rdev->stats.pbl.fail++;
 	mutex_unlock(&rdev->stats.lock);
 	return (u32)addr;
 }
 
+static void destroy_pblpool(struct kref *kref)
+{
+	struct c4iw_rdev *rdev;
+
+	rdev = container_of(kref, struct c4iw_rdev, pbl_kref);
+	gen_pool_destroy(rdev->pbl_pool);
+	complete(&rdev->pbl_compl);
+}
+
 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
 {
 	pr_debug("addr 0x%x size %d\n", addr, size);
@@ -273,6 +283,7 @@ void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
 	rdev->stats.pbl.cur -= roundup(size, 1 << MIN_PBL_SHIFT);
 	mutex_unlock(&rdev->stats.lock);
 	gen_pool_free(rdev->pbl_pool, (unsigned long)addr, size);
+	kref_put(&rdev->pbl_kref, destroy_pblpool);
 }
 
 int c4iw_pblpool_create(struct c4iw_rdev *rdev)
@@ -310,7 +321,7 @@ int c4iw_pblpool_create(struct c4iw_rdev *rdev)
 
 void c4iw_pblpool_destroy(struct c4iw_rdev *rdev)
 {
-	gen_pool_destroy(rdev->pbl_pool);
+	kref_put(&rdev->pbl_kref, destroy_pblpool);
 }
 
 /*
@@ -331,12 +342,22 @@ u32 c4iw_rqtpool_alloc(struct c4iw_rdev *rdev, int size)
 		rdev->stats.rqt.cur += roundup(size << 6, 1 << MIN_RQT_SHIFT);
 		if (rdev->stats.rqt.cur > rdev->stats.rqt.max)
 			rdev->stats.rqt.max = rdev->stats.rqt.cur;
+		kref_get(&rdev->rqt_kref);
 	} else
 		rdev->stats.rqt.fail++;
 	mutex_unlock(&rdev->stats.lock);
 	return (u32)addr;
 }
 
+static void destroy_rqtpool(struct kref *kref)
+{
+	struct c4iw_rdev *rdev;
+
+	rdev = container_of(kref, struct c4iw_rdev, rqt_kref);
+	gen_pool_destroy(rdev->rqt_pool);
+	complete(&rdev->rqt_compl);
+}
+
 void c4iw_rqtpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
 {
 	pr_debug("addr 0x%x size %d\n", addr, size << 6);
@@ -344,6 +365,7 @@ void c4iw_rqtpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
 	rdev->stats.rqt.cur -= roundup(size << 6, 1 << MIN_RQT_SHIFT);
 	mutex_unlock(&rdev->stats.lock);
 	gen_pool_free(rdev->rqt_pool, (unsigned long)addr, size << 6);
+	kref_put(&rdev->rqt_kref, destroy_rqtpool);
 }
 
 int c4iw_rqtpool_create(struct c4iw_rdev *rdev)
@@ -380,7 +402,7 @@ int c4iw_rqtpool_create(struct c4iw_rdev *rdev)
 
 void c4iw_rqtpool_destroy(struct c4iw_rdev *rdev)
 {
-	gen_pool_destroy(rdev->rqt_pool);
+	kref_put(&rdev->rqt_kref, destroy_rqtpool);
 }
 
 /*
diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index a97055dd4fbd..b5fab55cc275 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -412,7 +412,6 @@ static void hfi1_cleanup_sdma_notifier(struct hfi1_msix_entry *msix)
 static int get_irq_affinity(struct hfi1_devdata *dd,
 			    struct hfi1_msix_entry *msix)
 {
-	int ret;
 	cpumask_var_t diff;
 	struct hfi1_affinity_node *entry;
 	struct cpu_mask_set *set = NULL;
@@ -424,10 +423,6 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
 	extra[0] = '\0';
 	cpumask_clear(&msix->mask);
 
-	ret = zalloc_cpumask_var(&diff, GFP_KERNEL);
-	if (!ret)
-		return -ENOMEM;
-
 	entry = node_affinity_lookup(dd->node);
 
 	switch (msix->type) {
@@ -458,6 +453,9 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
 	 * finds its CPU here.
 	 */
 	if (cpu == -1 && set) {
+		if (!zalloc_cpumask_var(&diff, GFP_KERNEL))
+			return -ENOMEM;
+
 		if (cpumask_equal(&set->mask, &set->used)) {
 			/*
 			 * We've used up all the CPUs, bump up the generation
@@ -469,6 +467,8 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
 		cpumask_andnot(diff, &set->mask, &set->used);
 		cpu = cpumask_first(diff);
 		cpumask_set_cpu(cpu, &set->used);
+
+		free_cpumask_var(diff);
 	}
 
 	cpumask_set_cpu(cpu, &msix->mask);
@@ -482,7 +482,6 @@ static int get_irq_affinity(struct hfi1_devdata *dd,
 		hfi1_setup_sdma_notifier(msix);
 	}
 
-	free_cpumask_var(diff);
 	return 0;
 }
 
diff --git a/drivers/infiniband/hw/hfi1/driver.c b/drivers/infiniband/hw/hfi1/driver.c
index 46d1475b2154..bd837a048bf4 100644
--- a/drivers/infiniband/hw/hfi1/driver.c
+++ b/drivers/infiniband/hw/hfi1/driver.c
@@ -433,31 +433,43 @@ void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt,
 			       bool do_cnp)
 {
 	struct hfi1_ibport *ibp = to_iport(qp->ibqp.device, qp->port_num);
+	struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
 	struct ib_other_headers *ohdr = pkt->ohdr;
 	struct ib_grh *grh = pkt->grh;
 	u32 rqpn = 0, bth1;
-	u16 pkey, rlid, dlid = ib_get_dlid(pkt->hdr);
+	u16 pkey;
+	u32 rlid, slid, dlid = 0;
 	u8 hdr_type, sc, svc_type;
 	bool is_mcast = false;
 
+	/* can be called from prescan */
 	if (pkt->etype == RHF_RCV_TYPE_BYPASS) {
 		is_mcast = hfi1_is_16B_mcast(dlid);
 		pkey = hfi1_16B_get_pkey(pkt->hdr);
 		sc = hfi1_16B_get_sc(pkt->hdr);
+		dlid = hfi1_16B_get_dlid(pkt->hdr);
+		slid = hfi1_16B_get_slid(pkt->hdr);
 		hdr_type = HFI1_PKT_TYPE_16B;
 	} else {
 		is_mcast = (dlid > be16_to_cpu(IB_MULTICAST_LID_BASE)) &&
 			   (dlid != be16_to_cpu(IB_LID_PERMISSIVE));
 		pkey = ib_bth_get_pkey(ohdr);
 		sc = hfi1_9B_get_sc5(pkt->hdr, pkt->rhf);
+		dlid = ib_get_dlid(pkt->hdr);
+		slid = ib_get_slid(pkt->hdr);
 		hdr_type = HFI1_PKT_TYPE_9B;
 	}
 
 	switch (qp->ibqp.qp_type) {
+	case IB_QPT_UD:
+		dlid = ppd->lid;
+		rlid = slid;
+		rqpn = ib_get_sqpn(pkt->ohdr);
+		svc_type = IB_CC_SVCTYPE_UD;
+		break;
 	case IB_QPT_SMI:
 	case IB_QPT_GSI:
-	case IB_QPT_UD:
-		rlid = ib_get_slid(pkt->hdr);
+		rlid = slid;
 		rqpn = ib_get_sqpn(pkt->ohdr);
 		svc_type = IB_CC_SVCTYPE_UD;
 		break;
@@ -482,7 +494,6 @@ void hfi1_process_ecn_slowpath(struct rvt_qp *qp, struct hfi1_packet *pkt,
 			       dlid, rlid, sc, grh);
 
 	if (!is_mcast && (bth1 & IB_BECN_SMASK)) {
-		struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
 		u32 lqpn = bth1 & RVT_QPN_MASK;
 		u8 sl = ibp->sc_to_sl[sc];
 
diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
index 32c48265405e..cac2c62bc42d 100644
--- a/drivers/infiniband/hw/hfi1/hfi.h
+++ b/drivers/infiniband/hw/hfi1/hfi.h
@@ -1537,13 +1537,13 @@ void set_link_ipg(struct hfi1_pportdata *ppd);
 void process_becn(struct hfi1_pportdata *ppd, u8 sl, u32 rlid, u32 lqpn,
 		  u32 rqpn, u8 svc_type);
 void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
-		u32 pkey, u32 slid, u32 dlid, u8 sc5,
+		u16 pkey, u32 slid, u32 dlid, u8 sc5,
 		const struct ib_grh *old_grh);
 void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
-		    u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
+		    u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
 		    u8 sc5, const struct ib_grh *old_grh);
 typedef void (*hfi1_handle_cnp)(struct hfi1_ibport *ibp, struct rvt_qp *qp,
-				u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
+				u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
 				u8 sc5, const struct ib_grh *old_grh);
 
 #define PKEY_CHECK_INVALID -1
@@ -2437,7 +2437,7 @@ static inline void hfi1_make_16b_hdr(struct hfi1_16b_header *hdr,
 		((slid >> OPA_16B_SLID_SHIFT) << OPA_16B_SLID_HIGH_SHIFT);
 	lrh2 = (lrh2 & ~OPA_16B_DLID_MASK) |
 		((dlid >> OPA_16B_DLID_SHIFT) << OPA_16B_DLID_HIGH_SHIFT);
-	lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | (pkey << OPA_16B_PKEY_SHIFT);
+	lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | ((u32)pkey << OPA_16B_PKEY_SHIFT);
 	lrh2 = (lrh2 & ~OPA_16B_L4_MASK) | l4;
 
 	hdr->lrh[0] = lrh0;
diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
index 33eba2356742..6309edf811df 100644
--- a/drivers/infiniband/hw/hfi1/init.c
+++ b/drivers/infiniband/hw/hfi1/init.c
@@ -88,9 +88,9 @@
  * pio buffers per ctxt, etc.)  Zero means use one user context per CPU.
  */
 int num_user_contexts = -1;
-module_param_named(num_user_contexts, num_user_contexts, uint, S_IRUGO);
+module_param_named(num_user_contexts, num_user_contexts, int, 0444);
 MODULE_PARM_DESC(
-	num_user_contexts, "Set max number of user contexts to use");
+	num_user_contexts, "Set max number of user contexts to use (default: -1 will use the real (non-HT) CPU count)");
 
 uint krcvqs[RXE_NUM_DATA_VL];
 int krcvqsset;
@@ -1209,30 +1209,49 @@ static void finalize_asic_data(struct hfi1_devdata *dd,
 	kfree(ad);
 }
 
-static void __hfi1_free_devdata(struct kobject *kobj)
+/**
+ * hfi1_clean_devdata - cleans up per-unit data structure
+ * @dd: pointer to a valid devdata structure
+ *
+ * It cleans up all data structures set up by
+ * by hfi1_alloc_devdata().
+ */
+static void hfi1_clean_devdata(struct hfi1_devdata *dd)
 {
-	struct hfi1_devdata *dd =
-		container_of(kobj, struct hfi1_devdata, kobj);
 	struct hfi1_asic_data *ad;
 	unsigned long flags;
 
 	spin_lock_irqsave(&hfi1_devs_lock, flags);
-	idr_remove(&hfi1_unit_table, dd->unit);
-	list_del(&dd->list);
+	if (!list_empty(&dd->list)) {
+		idr_remove(&hfi1_unit_table, dd->unit);
+		list_del_init(&dd->list);
+	}
 	ad = release_asic_data(dd);
 	spin_unlock_irqrestore(&hfi1_devs_lock, flags);
-	if (ad)
-		finalize_asic_data(dd, ad);
+
+	finalize_asic_data(dd, ad);
 	free_platform_config(dd);
 	rcu_barrier(); /* wait for rcu callbacks to complete */
 	free_percpu(dd->int_counter);
 	free_percpu(dd->rcv_limit);
 	free_percpu(dd->send_schedule);
 	free_percpu(dd->tx_opstats);
+	dd->int_counter = NULL;
+	dd->rcv_limit = NULL;
+	dd->send_schedule = NULL;
+	dd->tx_opstats = NULL;
 	sdma_clean(dd, dd->num_sdma);
 	rvt_dealloc_device(&dd->verbs_dev.rdi);
 }
 
+static void __hfi1_free_devdata(struct kobject *kobj)
+{
+	struct hfi1_devdata *dd =
+		container_of(kobj, struct hfi1_devdata, kobj);
+
+	hfi1_clean_devdata(dd);
+}
+
 static struct kobj_type hfi1_devdata_type = {
 	.release = __hfi1_free_devdata,
 };
@@ -1265,6 +1284,8 @@ struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra)
 		return ERR_PTR(-ENOMEM);
 	dd->num_pports = nports;
 	dd->pport = (struct hfi1_pportdata *)(dd + 1);
+	dd->pcidev = pdev;
+	pci_set_drvdata(pdev, dd);
 
 	INIT_LIST_HEAD(&dd->list);
 	idr_preload(GFP_KERNEL);
@@ -1331,9 +1352,7 @@ struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra)
 	return dd;
 
 bail:
-	if (!list_empty(&dd->list))
-		list_del_init(&dd->list);
-	rvt_dealloc_device(&dd->verbs_dev.rdi);
+	hfi1_clean_devdata(dd);
 	return ERR_PTR(ret);
 }
 
diff --git a/drivers/infiniband/hw/hfi1/pcie.c b/drivers/infiniband/hw/hfi1/pcie.c
index 83d66e862207..c1c982908b4b 100644
--- a/drivers/infiniband/hw/hfi1/pcie.c
+++ b/drivers/infiniband/hw/hfi1/pcie.c
@@ -163,9 +163,6 @@ int hfi1_pcie_ddinit(struct hfi1_devdata *dd, struct pci_dev *pdev)
 	resource_size_t addr;
 	int ret = 0;
 
-	dd->pcidev = pdev;
-	pci_set_drvdata(pdev, dd);
-
 	addr = pci_resource_start(pdev, 0);
 	len = pci_resource_len(pdev, 0);
 
diff --git a/drivers/infiniband/hw/hfi1/platform.c b/drivers/infiniband/hw/hfi1/platform.c
index d486355880cb..cbf7faa5038c 100644
--- a/drivers/infiniband/hw/hfi1/platform.c
+++ b/drivers/infiniband/hw/hfi1/platform.c
@@ -199,6 +199,7 @@ void free_platform_config(struct hfi1_devdata *dd)
 {
 	/* Release memory allocated for eprom or fallback file read. */
 	kfree(dd->platform_config.data);
+	dd->platform_config.data = NULL;
 }
 
 void get_port_type(struct hfi1_pportdata *ppd)
diff --git a/drivers/infiniband/hw/hfi1/qsfp.c b/drivers/infiniband/hw/hfi1/qsfp.c
index 1869f639c3ae..b5966991d647 100644
--- a/drivers/infiniband/hw/hfi1/qsfp.c
+++ b/drivers/infiniband/hw/hfi1/qsfp.c
@@ -204,6 +204,8 @@ static void clean_i2c_bus(struct hfi1_i2c_bus *bus)
 
 void clean_up_i2c(struct hfi1_devdata *dd, struct hfi1_asic_data *ad)
 {
+	if (!ad)
+		return;
 	clean_i2c_bus(ad->i2c_bus0);
 	ad->i2c_bus0 = NULL;
 	clean_i2c_bus(ad->i2c_bus1);
209 clean_i2c_bus(ad->i2c_bus1); 211 clean_i2c_bus(ad->i2c_bus1);
diff --git a/drivers/infiniband/hw/hfi1/ruc.c b/drivers/infiniband/hw/hfi1/ruc.c
index 3daa94bdae3a..c0071ca4147a 100644
--- a/drivers/infiniband/hw/hfi1/ruc.c
+++ b/drivers/infiniband/hw/hfi1/ruc.c
@@ -733,6 +733,20 @@ static inline void hfi1_make_ruc_bth(struct rvt_qp *qp,
 	ohdr->bth[2] = cpu_to_be32(bth2);
 }
 
+/**
+ * hfi1_make_ruc_header_16B - build a 16B header
+ * @qp: the queue pair
+ * @ohdr: a pointer to the destination header memory
+ * @bth0: bth0 passed in from the RC/UC builder
+ * @bth2: bth2 passed in from the RC/UC builder
+ * @middle: non zero implies indicates ahg "could" be used
+ * @ps: the current packet state
+ *
+ * This routine may disarm ahg under these situations:
+ * - packet needs a GRH
+ * - BECN needed
+ * - migration state not IB_MIG_MIGRATED
+ */
 static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
 					    struct ib_other_headers *ohdr,
 					    u32 bth0, u32 bth2, int middle,
@@ -777,6 +791,12 @@ static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
 	else
 		middle = 0;
 
+	if (qp->s_flags & RVT_S_ECN) {
+		qp->s_flags &= ~RVT_S_ECN;
+		/* we recently received a FECN, so return a BECN */
+		becn = true;
+		middle = 0;
+	}
 	if (middle)
 		build_ahg(qp, bth2);
 	else
@@ -784,11 +804,6 @@ static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
 
 	bth0 |= pkey;
 	bth0 |= extra_bytes << 20;
-	if (qp->s_flags & RVT_S_ECN) {
-		qp->s_flags &= ~RVT_S_ECN;
-		/* we recently received a FECN, so return a BECN */
-		becn = true;
-	}
 	hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);
 
 	if (!ppd->lid)
@@ -806,6 +821,20 @@ static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,
 			  pkey, becn, 0, l4, priv->s_sc);
 }
 
+/**
+ * hfi1_make_ruc_header_9B - build a 9B header
+ * @qp: the queue pair
+ * @ohdr: a pointer to the destination header memory
+ * @bth0: bth0 passed in from the RC/UC builder
+ * @bth2: bth2 passed in from the RC/UC builder
+ * @middle: non zero implies indicates ahg "could" be used
+ * @ps: the current packet state
+ *
+ * This routine may disarm ahg under these situations:
+ * - packet needs a GRH
+ * - BECN needed
+ * - migration state not IB_MIG_MIGRATED
+ */
 static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp,
 					   struct ib_other_headers *ohdr,
 					   u32 bth0, u32 bth2, int middle,
@@ -839,6 +868,12 @@ static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp,
 	else
 		middle = 0;
 
+	if (qp->s_flags & RVT_S_ECN) {
+		qp->s_flags &= ~RVT_S_ECN;
+		/* we recently received a FECN, so return a BECN */
+		bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);
+		middle = 0;
+	}
 	if (middle)
 		build_ahg(qp, bth2);
 	else
@@ -846,11 +881,6 @@ static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp,
 
 	bth0 |= pkey;
 	bth0 |= extra_bytes << 20;
-	if (qp->s_flags & RVT_S_ECN) {
-		qp->s_flags &= ~RVT_S_ECN;
-		/* we recently received a FECN, so return a BECN */
-		bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);
-	}
 	hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);
 	hfi1_make_ib_hdr(&ps->s_txreq->phdr.hdr.ibh,
 			 lrh0,
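
Both ruc.c hunks move the RVT_S_ECN check ahead of the AHG decision, and the new kernel-doc says why: AHG replays a previously built header, so a packet that must carry a BECN cannot take the AHG path. A minimal sketch of that control flow, with hypothetical helpers rather than the hfi1 code:

#include <stdbool.h>

#define FLAG_ECN 0x1

struct qp { unsigned int flags; };

static void build_full_header(struct qp *qp, bool becn) { (void)qp; (void)becn; }
static void reuse_cached_header(struct qp *qp) { (void)qp; }

static void make_header(struct qp *qp, bool middle)
{
	bool becn = false;

	/* Decide BECN *before* committing to AHG: a BECN changes the
	 * header, so a cached (AHG) copy would send the wrong bits. */
	if (qp->flags & FLAG_ECN) {
		qp->flags &= ~FLAG_ECN;	/* consume the pending FECN */
		becn = true;
		middle = false;		/* disarm AHG */
	}

	if (middle)
		reuse_cached_header(qp);	/* header unchanged: AHG ok */
	else
		build_full_header(qp, becn);
}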
diff --git a/drivers/infiniband/hw/hfi1/ud.c b/drivers/infiniband/hw/hfi1/ud.c
index bcf3b0bebac8..69c17a5ef038 100644
--- a/drivers/infiniband/hw/hfi1/ud.c
+++ b/drivers/infiniband/hw/hfi1/ud.c
@@ -628,7 +628,7 @@ int hfi1_lookup_pkey_idx(struct hfi1_ibport *ibp, u16 pkey)
 }
 
 void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
-		    u32 remote_qpn, u32 pkey, u32 slid, u32 dlid,
+		    u32 remote_qpn, u16 pkey, u32 slid, u32 dlid,
 		    u8 sc5, const struct ib_grh *old_grh)
 {
 	u64 pbc, pbc_flags = 0;
@@ -687,7 +687,7 @@ void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp,
 }
 
 void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
-		u32 pkey, u32 slid, u32 dlid, u8 sc5,
+		u16 pkey, u32 slid, u32 dlid, u8 sc5,
 		const struct ib_grh *old_grh)
 {
 	u64 pbc, pbc_flags = 0;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index 0eeabfbee192..63b5b3edabcb 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -912,7 +912,7 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
 	obj_per_chunk = buf_chunk_size / obj_size;
 	num_hem = (nobj + obj_per_chunk - 1) / obj_per_chunk;
 	bt_chunk_num = bt_chunk_size / 8;
-	if (table->type >= HEM_TYPE_MTT)
+	if (type >= HEM_TYPE_MTT)
 		num_bt_l0 = bt_chunk_num;
 
 	table->hem = kcalloc(num_hem, sizeof(*table->hem),
@@ -920,7 +920,7 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
 	if (!table->hem)
 		goto err_kcalloc_hem_buf;
 
-	if (check_whether_bt_num_3(table->type, hop_num)) {
+	if (check_whether_bt_num_3(type, hop_num)) {
 		unsigned long num_bt_l1;
 
 		num_bt_l1 = (num_hem + bt_chunk_num - 1) /
@@ -939,8 +939,8 @@ int hns_roce_init_hem_table(struct hns_roce_dev *hr_dev,
 			goto err_kcalloc_l1_dma;
 	}
 
-	if (check_whether_bt_num_2(table->type, hop_num) ||
-	    check_whether_bt_num_3(table->type, hop_num)) {
+	if (check_whether_bt_num_2(type, hop_num) ||
+	    check_whether_bt_num_3(type, hop_num)) {
 		table->bt_l0 = kcalloc(num_bt_l0, sizeof(*table->bt_l0),
 				       GFP_KERNEL);
 		if (!table->bt_l0)
@@ -1039,14 +1039,14 @@ void hns_roce_cleanup_hem_table(struct hns_roce_dev *hr_dev,
 void hns_roce_cleanup_hem(struct hns_roce_dev *hr_dev)
 {
 	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->cq_table.table);
-	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.irrl_table);
 	if (hr_dev->caps.trrl_entry_sz)
 		hns_roce_cleanup_hem_table(hr_dev,
 					   &hr_dev->qp_table.trrl_table);
+	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.irrl_table);
 	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.qp_table);
 	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtpt_table);
-	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtt_table);
 	if (hns_roce_check_whether_mhop(hr_dev, HEM_TYPE_CQE))
 		hns_roce_cleanup_hem_table(hr_dev,
 					   &hr_dev->mr_table.mtt_cqe_table);
+	hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtt_table);
 }
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 8b84ab7800d8..25916e8522ed 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -71,6 +71,11 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		return -EINVAL;
 	}
 
+	if (wr->opcode == IB_WR_RDMA_READ) {
+		dev_err(hr_dev->dev, "Not support inline data!\n");
+		return -EINVAL;
+	}
+
 	for (i = 0; i < wr->num_sge; i++) {
 		memcpy(wqe, ((void *)wr->sg_list[i].addr),
 		       wr->sg_list[i].length);
@@ -148,7 +153,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		     ibqp->qp_type != IB_QPT_GSI &&
 		     ibqp->qp_type != IB_QPT_UD)) {
 		dev_err(dev, "Not supported QP(0x%x)type!\n", ibqp->qp_type);
-		*bad_wr = NULL;
+		*bad_wr = wr;
 		return -EOPNOTSUPP;
 	}
 
@@ -182,7 +187,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		qp->sq.wrid[(qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1)] =
 								      wr->wr_id;
 
-		owner_bit = ~(qp->sq.head >> ilog2(qp->sq.wqe_cnt)) & 0x1;
+		owner_bit =
+		       ~(((qp->sq.head + nreq) >> ilog2(qp->sq.wqe_cnt)) & 0x1);
 
 		/* Corresponding to the QP type, wqe process separately */
 		if (ibqp->qp_type == IB_QPT_GSI) {
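
The owner-bit change fixes a wrap bug: within one post_send() call the producer index is qp->sq.head plus the WQEs already queued in this batch (nreq), so computing the bit from the stale head gives a whole batch the same polarity even when it crosses the end of the ring. A standalone sketch of the arithmetic under that assumed queue layout:

#include <stdint.h>
#include <stdio.h>

/* The queue depth is a power of two; the bit just above the index
 * bits flips on every wrap, which is what the hardware checks. */
static unsigned int owner_bit(uint32_t head, uint32_t nreq, uint32_t wqe_cnt)
{
	unsigned int shift = __builtin_ctz(wqe_cnt);	/* log2(wqe_cnt) */

	return (~((head + nreq) >> shift)) & 0x1;
}

int main(void)
{
	/* With 8 WQEs, slots 0..7 get one polarity and 8..15 the other. */
	for (uint32_t i = 0; i < 16; i++)
		printf("slot %2u -> owner %u\n", i, owner_bit(0, i, 8));
	return 0;
}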
@@ -456,6 +462,7 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		} else {
 			dev_err(dev, "Illegal qp_type(0x%x)\n", ibqp->qp_type);
 			spin_unlock_irqrestore(&qp->sq.lock, flags);
+			*bad_wr = wr;
 			return -EOPNOTSUPP;
 		}
 	}
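
Both bad_wr hunks enforce the ib_post_send() error contract: on failure, *bad_wr must point at the offending work request (never be left NULL or stale), since everything before it was accepted and everything from it onward was not. A toy illustration with hypothetical types:

#include <errno.h>
#include <stddef.h>

struct send_wr {
	struct send_wr *next;
	int opcode;
};

static int opcode_supported(int op) { return op >= 0 && op < 16; }
static void ring_wqe(const struct send_wr *wr) { (void)wr; }

static int post_send(struct send_wr *wr, struct send_wr **bad_wr)
{
	for (; wr; wr = wr->next) {
		if (!opcode_supported(wr->opcode)) {
			*bad_wr = wr;	/* tell the caller where posting stopped */
			return -EOPNOTSUPP;
		}
		ring_wqe(wr);
	}
	return 0;
}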
@@ -2592,10 +2599,12 @@ static void modify_qp_init_to_init(struct ib_qp *ibqp,
 	roce_set_field(qpc_mask->byte_4_sqpn_tst, V2_QPC_BYTE_4_SQPN_M,
 		       V2_QPC_BYTE_4_SQPN_S, 0);
 
-	roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
-		       V2_QPC_BYTE_56_DQPN_S, hr_qp->qpn);
-	roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
-		       V2_QPC_BYTE_56_DQPN_S, 0);
+	if (attr_mask & IB_QP_DEST_QPN) {
+		roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
+			       V2_QPC_BYTE_56_DQPN_S, hr_qp->qpn);
+		roce_set_field(qpc_mask->byte_56_dqpn_err,
+			       V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S, 0);
+	}
 	roce_set_field(context->byte_168_irrl_idx,
 		       V2_QPC_BYTE_168_SQ_SHIFT_BAK_M,
 		       V2_QPC_BYTE_168_SQ_SHIFT_BAK_S,
@@ -2650,8 +2659,7 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 		return -EINVAL;
 	}
 
-	if ((attr_mask & IB_QP_ALT_PATH) || (attr_mask & IB_QP_ACCESS_FLAGS) ||
-	    (attr_mask & IB_QP_PKEY_INDEX) || (attr_mask & IB_QP_QKEY)) {
+	if (attr_mask & IB_QP_ALT_PATH) {
 		dev_err(dev, "INIT2RTR attr_mask (0x%x) error\n", attr_mask);
 		return -EINVAL;
 	}
@@ -2800,10 +2808,12 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 			       V2_QPC_BYTE_140_RR_MAX_S, 0);
 	}
 
-	roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
-		       V2_QPC_BYTE_56_DQPN_S, attr->dest_qp_num);
-	roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
-		       V2_QPC_BYTE_56_DQPN_S, 0);
+	if (attr_mask & IB_QP_DEST_QPN) {
+		roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
+			       V2_QPC_BYTE_56_DQPN_S, attr->dest_qp_num);
+		roce_set_field(qpc_mask->byte_56_dqpn_err,
+			       V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S, 0);
+	}
 
 	/* Configure GID index */
 	port_num = rdma_ah_get_port_num(&attr->ah_attr);
@@ -2845,7 +2855,7 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
 	if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_UD)
 		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
 			       V2_QPC_BYTE_24_MTU_S, IB_MTU_4096);
-	else
+	else if (attr_mask & IB_QP_PATH_MTU)
 		roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M,
 			       V2_QPC_BYTE_24_MTU_S, attr->path_mtu);
 
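
The DQPN and MTU hunks above all apply the same verbs rule: a field in struct ib_qp_attr is only meaningful when its bit is set in attr_mask, so a driver must neither reject a transition merely because optional bits are present nor read fields whose bits are absent. A small sketch of the pattern, with hypothetical context setters:

#include <stdint.h>

#define QP_DEST_QPN	(1u << 0)
#define QP_PATH_MTU	(1u << 1)

struct qp_attr { uint32_t dest_qp_num; uint32_t path_mtu; };
struct qp_ctx  { uint32_t dqpn; uint32_t mtu; };

static void apply(struct qp_ctx *ctx, const struct qp_attr *attr,
		  uint32_t attr_mask)
{
	/* Touch a field only when its attr_mask bit says it is valid;
	 * otherwise attr->... may be uninitialized caller memory. */
	if (attr_mask & QP_DEST_QPN)
		ctx->dqpn = attr->dest_qp_num;
	if (attr_mask & QP_PATH_MTU)
		ctx->mtu = attr->path_mtu;
}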
@@ -2922,11 +2932,9 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp,
 		return -EINVAL;
 	}
 
-	/* If exist optional param, return error */
-	if ((attr_mask & IB_QP_ALT_PATH) || (attr_mask & IB_QP_ACCESS_FLAGS) ||
-	    (attr_mask & IB_QP_QKEY) || (attr_mask & IB_QP_PATH_MIG_STATE) ||
-	    (attr_mask & IB_QP_CUR_STATE) ||
-	    (attr_mask & IB_QP_MIN_RNR_TIMER)) {
+	/* Alternate path and path migration are not supported */
+	if ((attr_mask & IB_QP_ALT_PATH) ||
+	    (attr_mask & IB_QP_PATH_MIG_STATE)) {
 		dev_err(dev, "RTR2RTS attr_mask (0x%x)error\n", attr_mask);
 		return -EINVAL;
 	}
@@ -3161,7 +3169,8 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
 	    (cur_state == IB_QPS_RTR && new_state == IB_QPS_ERR) ||
 	    (cur_state == IB_QPS_RTS && new_state == IB_QPS_ERR) ||
 	    (cur_state == IB_QPS_SQD && new_state == IB_QPS_ERR) ||
-	    (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR)) {
+	    (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR) ||
+	    (cur_state == IB_QPS_ERR && new_state == IB_QPS_ERR)) {
 		/* Nothing */
 		;
 	} else {
@@ -4478,7 +4487,7 @@ static int hns_roce_v2_create_eq(struct hns_roce_dev *hr_dev,
 	ret = hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, eq->eqn, 0,
 				eq_cmd, HNS_ROCE_CMD_TIMEOUT_MSECS);
 	if (ret) {
-		dev_err(dev, "[mailbox cmd] creat eqc failed.\n");
+		dev_err(dev, "[mailbox cmd] create eqc failed.\n");
 		goto err_cmd_mbox;
 	}
 
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index e289a924e789..d4aad34c21e2 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -620,7 +620,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
 					to_hr_ucontext(ib_pd->uobject->context),
 					ucmd.db_addr, &hr_qp->rdb);
 		if (ret) {
-			dev_err(dev, "rp record doorbell map failed!\n");
+			dev_err(dev, "rq record doorbell map failed!\n");
 			goto err_mtt;
 		}
 	}
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index 17f4f151a97f..61d8b06375bb 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -346,7 +346,7 @@ int mlx4_ib_umem_calc_optimal_mtt_size(struct ib_umem *umem, u64 start_va,
 	/* Add to the first block the misalignment that it suffers from. */
 	total_len += (first_block_start & ((1ULL << block_shift) - 1ULL));
 	last_block_end = current_block_start + current_block_len;
-	last_block_aligned_end = round_up(last_block_end, 1 << block_shift);
+	last_block_aligned_end = round_up(last_block_end, 1ULL << block_shift);
 	total_len += (last_block_aligned_end - last_block_end);
 
 	if (total_len & ((1ULL << block_shift) - 1ULL))
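
The one-character fix is the integer-overflow repair from the summary: the alignment was computed as 1 << block_shift in 32-bit int, so once the optimal-MTT calculation produces a shift of 31 or more the value overflows before being widened; 1ULL performs the shift in 64 bits. A standalone illustration in plain C (not the kernel's round_up macro):

#include <stdint.h>
#include <stdio.h>

static uint64_t round_up_p2(uint64_t v, unsigned int shift)
{
	uint64_t align = 1ULL << shift;	/* 64-bit shift: safe for shift >= 32 */

	return (v + align - 1) & ~(align - 1);
}

int main(void)
{
	/* With `1 << 32` the alignment would be computed in 32-bit int,
	 * which is undefined behavior; 1ULL keeps it in 64 bits. */
	printf("%llu\n",
	       (unsigned long long)round_up_p2(0x180000000ULL, 32));
	return 0;	/* prints 8589934592, i.e. 0x200000000 */
}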
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index 50af8915e7ec..199648adac74 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -673,7 +673,8 @@ static int set_qp_rss(struct mlx4_ib_dev *dev, struct mlx4_ib_rss *rss_ctx,
 			      MLX4_IB_RX_HASH_SRC_PORT_TCP |
 			      MLX4_IB_RX_HASH_DST_PORT_TCP |
 			      MLX4_IB_RX_HASH_SRC_PORT_UDP |
-			      MLX4_IB_RX_HASH_DST_PORT_UDP)) {
+			      MLX4_IB_RX_HASH_DST_PORT_UDP |
+			      MLX4_IB_RX_HASH_INNER)) {
 		pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n",
 			 ucmd->rx_hash_fields_mask);
 		return (-EOPNOTSUPP);
diff --git a/drivers/infiniband/hw/mlx5/Kconfig b/drivers/infiniband/hw/mlx5/Kconfig
index bce263b92821..fb4d77be019b 100644
--- a/drivers/infiniband/hw/mlx5/Kconfig
+++ b/drivers/infiniband/hw/mlx5/Kconfig
@@ -1,6 +1,7 @@
 config MLX5_INFINIBAND
 	tristate "Mellanox Connect-IB HCA support"
 	depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
+	depends on INFINIBAND_USER_ACCESS || INFINIBAND_USER_ACCESS=n
 	---help---
 	  This driver provides low-level InfiniBand support for
 	  Mellanox Connect-IB PCI Express host channel adapters (HCAs).
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 241cf4ff9901..b4d8ff8ab807 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -52,7 +52,6 @@
 #include <linux/mlx5/port.h>
 #include <linux/mlx5/vport.h>
 #include <linux/mlx5/fs.h>
-#include <linux/mlx5/fs_helpers.h>
 #include <linux/list.h>
 #include <rdma/ib_smi.h>
 #include <rdma/ib_umem.h>
@@ -180,7 +179,7 @@ static int mlx5_netdev_event(struct notifier_block *this,
 			if (rep_ndev == ndev)
 				roce->netdev = (event == NETDEV_UNREGISTER) ?
 					NULL : ndev;
-		} else if (ndev->dev.parent == &ibdev->mdev->pdev->dev) {
+		} else if (ndev->dev.parent == &mdev->pdev->dev) {
 			roce->netdev = (event == NETDEV_UNREGISTER) ?
 				NULL : ndev;
 		}
@@ -5427,9 +5426,7 @@ static void mlx5_ib_stage_cong_debugfs_cleanup(struct mlx5_ib_dev *dev)
 static int mlx5_ib_stage_uar_init(struct mlx5_ib_dev *dev)
 {
 	dev->mdev->priv.uar = mlx5_get_uars_page(dev->mdev);
-	if (!dev->mdev->priv.uar)
-		return -ENOMEM;
-	return 0;
+	return PTR_ERR_OR_ZERO(dev->mdev->priv.uar);
 }
 
 static void mlx5_ib_stage_uar_cleanup(struct mlx5_ib_dev *dev)
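
The uar hunk works because mlx5_get_uars_page() reports failure with an ERR_PTR-encoded pointer rather than NULL, so PTR_ERR_OR_ZERO() both detects the error and propagates the real errno instead of a hard-coded -ENOMEM. A userspace re-implementation of the idiom for illustration only (the kernel's versions live in linux/err.h):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long err) { return (void *)(intptr_t)err; }
static inline int IS_ERR(const void *p)
{
	/* error pointers occupy the top MAX_ERRNO addresses */
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}
static inline long PTR_ERR(const void *p) { return (long)(intptr_t)p; }
static inline long PTR_ERR_OR_ZERO(const void *p)
{
	return IS_ERR(p) ? PTR_ERR(p) : 0;
}

static void *get_page(int fail)
{
	static int page;

	return fail ? ERR_PTR(-ENOMEM) : &page;
}

int main(void)
{
	printf("ok:   %ld\n", PTR_ERR_OR_ZERO(get_page(0)));	/* 0 */
	printf("fail: %ld\n", PTR_ERR_OR_ZERO(get_page(1)));	/* -12 */
	return 0;
}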
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 1520a2f20f98..90a9c461cedc 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -866,25 +866,28 @@ static int mr_umem_get(struct ib_pd *pd, u64 start, u64 length,
 		       int *order)
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
+	struct ib_umem *u;
 	int err;
 
-	*umem = ib_umem_get(pd->uobject->context, start, length,
-			    access_flags, 0);
-	err = PTR_ERR_OR_ZERO(*umem);
+	*umem = NULL;
+
+	u = ib_umem_get(pd->uobject->context, start, length, access_flags, 0);
+	err = PTR_ERR_OR_ZERO(u);
 	if (err) {
-		*umem = NULL;
-		mlx5_ib_err(dev, "umem get failed (%d)\n", err);
+		mlx5_ib_dbg(dev, "umem get failed (%d)\n", err);
 		return err;
 	}
 
-	mlx5_ib_cont_pages(*umem, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages,
+	mlx5_ib_cont_pages(u, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages,
 			   page_shift, ncont, order);
 	if (!*npages) {
 		mlx5_ib_warn(dev, "avoid zero region\n");
-		ib_umem_release(*umem);
+		ib_umem_release(u);
 		return -EINVAL;
 	}
 
+	*umem = u;
+
 	mlx5_ib_dbg(dev, "npages %d, ncont %d, order %d, page_shift %d\n",
 		    *npages, *ncont, *order, *page_shift);
 
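
The rework follows a defensive pattern: clear the out-parameter up front, operate on a local, and publish through the out-parameter only once the object is fully validated, so no error path can leave the caller holding a stale or half-built pointer (the crash the MR deregistration fix in the summary addresses). A generic sketch of the same shape:

#include <errno.h>
#include <stdlib.h>
#include <string.h>

static int validate(const void *buf, size_t len)
{
	return (buf && len) ? 0 : -EINVAL;
}

static int get_buf(size_t len, void **out)
{
	void *b;

	*out = NULL;			/* caller never sees a stale value */

	b = malloc(len);
	if (!b)
		return -ENOMEM;
	memset(b, 0, len);

	if (validate(b, len)) {
		free(b);		/* error paths touch only the local */
		return -EINVAL;
	}

	*out = b;			/* publish only a fully valid object */
	return 0;
}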
@@ -1458,13 +1461,12 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 	int access_flags = flags & IB_MR_REREG_ACCESS ?
 			new_access_flags :
 			mr->access_flags;
-	u64 addr = (flags & IB_MR_REREG_TRANS) ? virt_addr : mr->umem->address;
-	u64 len = (flags & IB_MR_REREG_TRANS) ? length : mr->umem->length;
 	int page_shift = 0;
 	int upd_flags = 0;
 	int npages = 0;
 	int ncont = 0;
 	int order = 0;
+	u64 addr, len;
 	int err;
 
 	mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n",
@@ -1472,6 +1474,17 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 
 	atomic_sub(mr->npages, &dev->mdev->priv.reg_pages);
 
+	if (!mr->umem)
+		return -EINVAL;
+
+	if (flags & IB_MR_REREG_TRANS) {
+		addr = virt_addr;
+		len = length;
+	} else {
+		addr = mr->umem->address;
+		len = mr->umem->length;
+	}
+
 	if (flags != IB_MR_REREG_PD) {
 		/*
 		 * Replace umem. This needs to be done whether or not UMR is
@@ -1479,6 +1492,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
 		 */
 		flags |= IB_MR_REREG_TRANS;
 		ib_umem_release(mr->umem);
+		mr->umem = NULL;
 		err = mr_umem_get(pd, addr, len, access_flags, &mr->umem,
 				  &npages, &page_shift, &ncont, &order);
 		if (err)
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 7ed4b70f6447..87b7c1be2a11 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -259,7 +259,11 @@ static int set_rq_size(struct mlx5_ib_dev *dev, struct ib_qp_cap *cap,
 	} else {
 		if (ucmd) {
 			qp->rq.wqe_cnt = ucmd->rq_wqe_count;
+			if (ucmd->rq_wqe_shift > BITS_PER_BYTE * sizeof(ucmd->rq_wqe_shift))
+				return -EINVAL;
 			qp->rq.wqe_shift = ucmd->rq_wqe_shift;
+			if ((1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) < qp->wq_sig)
+				return -EINVAL;
 			qp->rq.max_gs = (1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) - qp->wq_sig;
 			qp->rq.max_post = qp->rq.wqe_cnt;
 		} else {
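
Both added checks bound values copied from userspace before they feed shift and subtraction arithmetic: an oversized rq_wqe_shift makes 1 << shift undefined, and a stride too small for the signature segment would underflow max_gs. A standalone sketch of that validation, with hypothetical constants rather than the mlx5 sizes:

#include <errno.h>
#include <stdint.h>

#define SEG_SIZE 16u	/* hypothetical per-scatter-entry size */

static int set_rq_size(uint32_t wqe_shift, uint32_t wq_sig, uint32_t *max_gs)
{
	uint32_t per_wqe;

	if (wqe_shift >= 32)		/* keep 1 << wqe_shift defined */
		return -EINVAL;

	per_wqe = (1u << wqe_shift) / SEG_SIZE;
	if (per_wqe < wq_sig)		/* avoid wrap in the subtraction */
		return -EINVAL;

	*max_gs = per_wqe - wq_sig;
	return 0;
}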
@@ -2451,18 +2455,18 @@ enum {
 
 static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
 {
-	if (rate == IB_RATE_PORT_CURRENT) {
+	if (rate == IB_RATE_PORT_CURRENT)
 		return 0;
-	} else if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) {
+
+	if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS)
 		return -EINVAL;
-	} else {
-		while (rate != IB_RATE_2_5_GBPS &&
-		       !(1 << (rate + MLX5_STAT_RATE_OFFSET) &
-			 MLX5_CAP_GEN(dev->mdev, stat_rate_support)))
-			--rate;
-	}
 
-	return rate + MLX5_STAT_RATE_OFFSET;
+	while (rate != IB_RATE_PORT_CURRENT &&
+	       !(1 << (rate + MLX5_STAT_RATE_OFFSET) &
+		 MLX5_CAP_GEN(dev->mdev, stat_rate_support)))
+		--rate;
+
+	return rate ? rate + MLX5_STAT_RATE_OFFSET : rate;
 }
 
 static int modify_raw_packet_eth_prio(struct mlx5_core_dev *dev,
diff --git a/drivers/infiniband/hw/nes/nes_nic.c b/drivers/infiniband/hw/nes/nes_nic.c
index 0a75164cedea..007d5e8a0121 100644
--- a/drivers/infiniband/hw/nes/nes_nic.c
+++ b/drivers/infiniband/hw/nes/nes_nic.c
@@ -461,7 +461,7 @@ static bool nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
 /**
  * nes_netdev_start_xmit
  */
-static int nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev)
+static netdev_tx_t nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev)
 {
 	struct nes_vnic *nesvnic = netdev_priv(netdev);
 	struct nes_device *nesdev = nesvnic->nesdev;
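
This signature change (and the matching one in ipoib below) aligns the handler with the type ndo_start_xmit declares: it must return netdev_tx_t, not a bare int, so the compiler's type checking can catch accidental errno-style returns. The minimal shape of the idiom, with stub types standing in for the kernel's:

#include <stdbool.h>

typedef enum { NETDEV_TX_OK = 0x00, NETDEV_TX_BUSY = 0x10 } netdev_tx_t;

struct sk_buff { int len; };
struct net_device { int queue_full; };

static bool hw_queue_full(struct net_device *dev) { return dev->queue_full; }
static void hw_send(struct sk_buff *skb) { (void)skb; }

static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (hw_queue_full(dev))
		return NETDEV_TX_BUSY;	/* core will retry the skb later */

	hw_send(skb);
	return NETDEV_TX_OK;
}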
diff --git a/drivers/infiniband/sw/rxe/rxe_opcode.c b/drivers/infiniband/sw/rxe/rxe_opcode.c
index 61927c165b59..4cf11063e0b5 100644
--- a/drivers/infiniband/sw/rxe/rxe_opcode.c
+++ b/drivers/infiniband/sw/rxe/rxe_opcode.c
@@ -390,7 +390,7 @@ struct rxe_opcode_info rxe_opcode[RXE_NUM_OPCODE] = {
 		.name	= "IB_OPCODE_RC_SEND_ONLY_INV",
 		.mask	= RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK
 				| RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK
-				| RXE_END_MASK,
+				| RXE_END_MASK | RXE_START_MASK,
 		.length	= RXE_BTH_BYTES + RXE_IETH_BYTES,
 		.offset = {
 			[RXE_BTH]	= 0,
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 7bdaf71b8221..785199990457 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -728,7 +728,6 @@ next_wqe:
 		rollback_state(wqe, qp, &rollback_wqe, rollback_psn);
 
 		if (ret == -EAGAIN) {
-			kfree_skb(skb);
 			rxe_run_task(&qp->req.task, 1);
 			goto exit;
 		}
diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index a65c9969f7fc..955ff3b6da9c 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -742,7 +742,6 @@ static enum resp_states read_reply(struct rxe_qp *qp,
 	err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb);
 	if (err) {
 		pr_err("Failed sending RDMA reply.\n");
-		kfree_skb(skb);
 		return RESPST_ERR_RNR;
 	}
 
@@ -954,10 +953,8 @@ static int send_ack(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
 	}
 
 	err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb);
-	if (err) {
+	if (err)
 		pr_err_ratelimited("Failed sending ack\n");
-		kfree_skb(skb);
-	}
 
 err1:
 	return err;
@@ -1141,7 +1138,6 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
 	if (rc) {
 		pr_err("Failed resending result. This flow is not handled - skb ignored\n");
 		rxe_drop_ref(qp);
-		kfree_skb(skb_copy);
 		rc = RESPST_CLEANUP;
 		goto out;
 	}
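
All of these rxe deletions fix the same double free: the transmit path consumes the skb whether it succeeds or fails, so freeing it again in the caller releases the buffer twice. The ownership rule as a toy model, with hypothetical names:

#include <stdlib.h>

struct pkt { char *data; };

static int hw_send(struct pkt *p) { return p->data ? 0 : -1; }

/* Consumes p: after this returns, the caller must not touch p again,
 * on success or on failure. */
static int xmit(struct pkt *p)
{
	int err = hw_send(p);

	free(p->data);
	free(p);		/* released here even on error */
	return err;
}

static void caller(struct pkt *p)
{
	if (xmit(p)) {
		/* report the error, but do NOT free p here: that second
		 * free is exactly what these hunks remove */
	}
}

int main(void)
{
	struct pkt *p = calloc(1, sizeof(*p));

	if (p)
		caller(p);
	return 0;
}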
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 161ba8c76285..cf291f90b58f 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -1094,7 +1094,7 @@ drop_and_unlock:
1094 spin_unlock_irqrestore(&priv->lock, flags); 1094 spin_unlock_irqrestore(&priv->lock, flags);
1095} 1095}
1096 1096
1097static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) 1097static netdev_tx_t ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
1098{ 1098{
1099 struct ipoib_dev_priv *priv = ipoib_priv(dev); 1099 struct ipoib_dev_priv *priv = ipoib_priv(dev);
1100 struct rdma_netdev *rn = netdev_priv(dev); 1100 struct rdma_netdev *rn = netdev_priv(dev);
diff --git a/drivers/infiniband/ulp/srp/Kconfig b/drivers/infiniband/ulp/srp/Kconfig
index c74ee9633041..99db8fe5173a 100644
--- a/drivers/infiniband/ulp/srp/Kconfig
+++ b/drivers/infiniband/ulp/srp/Kconfig
@@ -1,6 +1,6 @@
 config INFINIBAND_SRP
 	tristate "InfiniBand SCSI RDMA Protocol"
-	depends on SCSI
+	depends on SCSI && INFINIBAND_ADDR_TRANS
 	select SCSI_SRP_ATTRS
 	---help---
 	  Support for the SCSI RDMA Protocol over InfiniBand.  This
diff --git a/drivers/infiniband/ulp/srpt/Kconfig b/drivers/infiniband/ulp/srpt/Kconfig
index 31ee83d528d9..fb8b7182f05e 100644
--- a/drivers/infiniband/ulp/srpt/Kconfig
+++ b/drivers/infiniband/ulp/srpt/Kconfig
@@ -1,6 +1,6 @@
 config INFINIBAND_SRPT
 	tristate "InfiniBand SCSI RDMA Protocol target support"
-	depends on INFINIBAND && TARGET_CORE
+	depends on INFINIBAND && INFINIBAND_ADDR_TRANS && TARGET_CORE
 	---help---
 
 	  Support for the SCSI RDMA Protocol (SRP) Target driver. The
diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
index b979cf3bce65..88a8b5916624 100644
--- a/drivers/nvme/host/Kconfig
+++ b/drivers/nvme/host/Kconfig
@@ -27,7 +27,7 @@ config NVME_FABRICS
 
 config NVME_RDMA
 	tristate "NVM Express over Fabrics RDMA host driver"
-	depends on INFINIBAND && BLOCK
+	depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK
 	select NVME_CORE
 	select NVME_FABRICS
 	select SG_POOL
diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig
index 5f4f8b16685f..3c7b61ddb0d1 100644
--- a/drivers/nvme/target/Kconfig
+++ b/drivers/nvme/target/Kconfig
@@ -27,7 +27,7 @@ config NVME_TARGET_LOOP
 
 config NVME_TARGET_RDMA
 	tristate "NVMe over Fabrics RDMA target support"
-	depends on INFINIBAND
+	depends on INFINIBAND && INFINIBAND_ADDR_TRANS
 	depends on NVME_TARGET
 	select SGL_ALLOC
 	help
diff --git a/fs/cifs/Kconfig b/fs/cifs/Kconfig
index 741749a98614..5f132d59dfc2 100644
--- a/fs/cifs/Kconfig
+++ b/fs/cifs/Kconfig
@@ -197,7 +197,7 @@ config CIFS_SMB311
 
 config CIFS_SMB_DIRECT
 	bool "SMB Direct support (Experimental)"
-	depends on CIFS=m && INFINIBAND || CIFS=y && INFINIBAND=y
+	depends on CIFS=m && INFINIBAND && INFINIBAND_ADDR_TRANS || CIFS=y && INFINIBAND=y && INFINIBAND_ADDR_TRANS=y
 	help
 	  Enables SMB Direct experimental support for SMB 3.0, 3.02 and 3.1.1.
 	  SMB Direct allows transferring SMB packets over RDMA. If unsure,
diff --git a/include/uapi/linux/if_infiniband.h b/include/uapi/linux/if_infiniband.h
index 050b92dcf8cf..0fc33bf30e45 100644
--- a/include/uapi/linux/if_infiniband.h
+++ b/include/uapi/linux/if_infiniband.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
 /*
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
diff --git a/include/uapi/linux/rds.h b/include/uapi/linux/rds.h
index a66b213de3d7..20c6bd0b0007 100644
--- a/include/uapi/linux/rds.h
+++ b/include/uapi/linux/rds.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2008 Oracle.  All rights reserved.
  *
diff --git a/include/uapi/linux/tls.h b/include/uapi/linux/tls.h
index c6633e97eca4..ff02287495ac 100644
--- a/include/uapi/linux/tls.h
+++ b/include/uapi/linux/tls.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
  *
diff --git a/include/uapi/rdma/cxgb3-abi.h b/include/uapi/rdma/cxgb3-abi.h
index 9acb4b7a6246..85aed672f43e 100644
--- a/include/uapi/rdma/cxgb3-abi.h
+++ b/include/uapi/rdma/cxgb3-abi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2006 Chelsio, Inc. All rights reserved.
  *
diff --git a/include/uapi/rdma/cxgb4-abi.h b/include/uapi/rdma/cxgb4-abi.h
index 1fefd0140c26..a159ba8dcf8f 100644
--- a/include/uapi/rdma/cxgb4-abi.h
+++ b/include/uapi/rdma/cxgb4-abi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2009-2010 Chelsio, Inc. All rights reserved.
  *
diff --git a/include/uapi/rdma/hns-abi.h b/include/uapi/rdma/hns-abi.h
index 7092c8de4bd8..78613b609fa8 100644
--- a/include/uapi/rdma/hns-abi.h
+++ b/include/uapi/rdma/hns-abi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016 Hisilicon Limited.
  *
diff --git a/include/uapi/rdma/ib_user_cm.h b/include/uapi/rdma/ib_user_cm.h
index 4a8f9562f7cd..e2709bb8cb18 100644
--- a/include/uapi/rdma/ib_user_cm.h
+++ b/include/uapi/rdma/ib_user_cm.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Topspin Communications.  All rights reserved.
  * Copyright (c) 2005 Intel Corporation.  All rights reserved.
diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index 04e46ea517d3..625545d862d7 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2017-2018, Mellanox Technologies inc.  All rights reserved.
  *
diff --git a/include/uapi/rdma/ib_user_mad.h b/include/uapi/rdma/ib_user_mad.h
index ef92118dad97..90c0cf228020 100644
--- a/include/uapi/rdma/ib_user_mad.h
+++ b/include/uapi/rdma/ib_user_mad.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2004 Topspin Communications.  All rights reserved.
  * Copyright (c) 2005 Voltaire, Inc.  All rights reserved.
diff --git a/include/uapi/rdma/ib_user_sa.h b/include/uapi/rdma/ib_user_sa.h
index 0d2607f0cd20..435155d6e1c6 100644
--- a/include/uapi/rdma/ib_user_sa.h
+++ b/include/uapi/rdma/ib_user_sa.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Intel Corporation.  All rights reserved.
  *
diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
index 9be07394fdbe..6aeb03315b0b 100644
--- a/include/uapi/rdma/ib_user_verbs.h
+++ b/include/uapi/rdma/ib_user_verbs.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Topspin Communications.  All rights reserved.
  * Copyright (c) 2005, 2006 Cisco Systems.  All rights reserved.
diff --git a/include/uapi/rdma/mlx4-abi.h b/include/uapi/rdma/mlx4-abi.h
index 04f64bc4045f..f74557528175 100644
--- a/include/uapi/rdma/mlx4-abi.h
+++ b/include/uapi/rdma/mlx4-abi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2007 Cisco Systems, Inc. All rights reserved.
  * Copyright (c) 2007, 2008 Mellanox Technologies. All rights reserved.
diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
index cb4a02c4a1ce..fdaf00e20649 100644
--- a/include/uapi/rdma/mlx5-abi.h
+++ b/include/uapi/rdma/mlx5-abi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
  *
diff --git a/include/uapi/rdma/mthca-abi.h b/include/uapi/rdma/mthca-abi.h
index ac756cd9e807..91b12e1a6f43 100644
--- a/include/uapi/rdma/mthca-abi.h
+++ b/include/uapi/rdma/mthca-abi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Topspin Communications.  All rights reserved.
  * Copyright (c) 2005, 2006 Cisco Systems.  All rights reserved.
diff --git a/include/uapi/rdma/nes-abi.h b/include/uapi/rdma/nes-abi.h
index 35bfd4015d07..f80495baa969 100644
--- a/include/uapi/rdma/nes-abi.h
+++ b/include/uapi/rdma/nes-abi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2006 - 2011 Intel Corporation.  All rights reserved.
  * Copyright (c) 2005 Topspin Communications.  All rights reserved.
diff --git a/include/uapi/rdma/qedr-abi.h b/include/uapi/rdma/qedr-abi.h
index 8ba098900e9a..24c658b3c790 100644
--- a/include/uapi/rdma/qedr-abi.h
+++ b/include/uapi/rdma/qedr-abi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /* QLogic qedr NIC Driver
  * Copyright (c) 2015-2016  QLogic Corporation
  *
diff --git a/include/uapi/rdma/rdma_user_cm.h b/include/uapi/rdma/rdma_user_cm.h
index e1269024af47..0d1e78ebad05 100644
--- a/include/uapi/rdma/rdma_user_cm.h
+++ b/include/uapi/rdma/rdma_user_cm.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005-2006 Intel Corporation.  All rights reserved.
  *
diff --git a/include/uapi/rdma/rdma_user_ioctl.h b/include/uapi/rdma/rdma_user_ioctl.h
index d223f4164a0f..d92d2721b28c 100644
--- a/include/uapi/rdma/rdma_user_ioctl.h
+++ b/include/uapi/rdma/rdma_user_ioctl.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016 Mellanox Technologies, LTD. All rights reserved.
  *
diff --git a/include/uapi/rdma/rdma_user_rxe.h b/include/uapi/rdma/rdma_user_rxe.h
index 1f8a9e7daea4..44ef6a3b7afc 100644
--- a/include/uapi/rdma/rdma_user_rxe.h
+++ b/include/uapi/rdma/rdma_user_rxe.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  *