path: root/drivers/infiniband/hw
Age  Commit message  Author
2020-03-30  RDMA/bnxt_re: make bnxt_re_ib_init static (YueHaibing)
Fix sparse warning: drivers/infiniband/hw/bnxt_re/main.c:1313:5: warning: symbol 'bnxt_re_ib_init' was not declared. Should it be static? Link: https://lore.kernel.org/r/20200330110219.24448-1-yuehaibing@huawei.com Signed-off-by: YueHaibing <yuehaibing@huawei.com> Acked-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-29  IB/qib: Delete struct qib_ivdev.qp_rnd (George Spelvin)
I was checking the field to see if it needed the full get_random_bytes() and discovered it's unused. Only compile-tested, as I don't have the hardware, but I'm still pretty confident. Link: https://lore.kernel.org/r/202003281643.02SGh6eG002694@sdf.org Signed-off-by: George Spelvin <lkml@sdf.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-29  RDMA/hns: Fix uninitialized variable bug (Gustavo A. R. Silva)
There is a potential execution path in which the variable *ret* is returned without having been initialized. Fix this by initializing *ret* to 0. Link: https://lore.kernel.org/r/20200328023539.GA32016@embeddedor Addresses-Coverity-ID: 1491917 ("Uninitialized scalar variable") Fixes: 2f49de21f3e9 ("RDMA/hns: Optimize mhop get flow for multi-hop addressing") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Acked-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
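A minimal sketch of the bug class (function and variable names hypothetical, not the actual hns code):

    int hypothetical_mhop_get(bool use_mhop)
    {
            int ret = 0;    /* the fix: every path now returns a defined value */

            if (use_mhop)
                    ret = -EINVAL;  /* only some branches assign ret */

            return ret;     /* without the initializer this could be garbage */
    }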
2020-03-29  RDMA/hns: Modify the mask of QP number for CQE of hip08 (Lang Cheng)
The hip08 supports up to 1M QPs, so the QPN mask used when parsing a CQE must be widened to cover them. Link: https://lore.kernel.org/r/1585194018-4381-4-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
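Illustrative arithmetic only (macro names invented, not the driver's): covering 1M QPs needs a 20-bit QPN field, since 2^20 = 1048576, so a narrower mask would truncate the QPN parsed from a CQE.

    #define CQE_QPN_MASK_OLD 0xFFFF         /* 16 bits: only 65536 QPs */
    #define CQE_QPN_MASK_NEW 0xFFFFF        /* 20 bits: 2^20 = 1048576 QPs */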
2020-03-29  RDMA/hns: Reduce the maximum number of extend SGE per WQE (Lang Cheng)
Just reduce the default number to 64 for backward compatibility; the driver can still get this configuration from the firmware. Link: https://lore.kernel.org/r/1585194018-4381-3-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-29  RDMA/hns: Reduce PFC frames in congestion scenarios (Jihua Tao)
The original value means sending 16 packets at a time; it should be configured to 0, which means sending 1 packet instead. The change reduces the number of PFC frames and makes sure performance meets expectations when flow control is enabled on hip08. Link: https://lore.kernel.org/r/1585194018-4381-2-git-send-email-liweihang@huawei.com Signed-off-by: Jihua Tao <taojihua4@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-27  Merge branch 'mlx5_tx_steering' into rdma.git for-next (Jason Gunthorpe)
Leon Romanovsky says:

====================
Those two patches from Michael extend mlx5_core and mlx5_ib flow steering to support RDMA TX, in a similar way to the already supported RDMA RX.
====================

Based on the mlx5-next branch at git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux due to dependencies

* branch 'mlx5_tx_steering':
  RDMA/mlx5: Add support for RDMA TX flow table
  net/mlx5: Add support for RDMA TX steering
2020-03-27  RDMA/mlx5: Add support for RDMA TX flow table (Michael Guralnik)
Enable user applications to add rules to the RDMA TX steering table. Rules in this steering table allow steering transmitted RDMA traffic. Link: https://lore.kernel.org/r/20200324061425.1570190-3-leon@kernel.org Signed-off-by: Michael Guralnik <michaelgur@mellanox.com> Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-27  IB/hfi1: Call kobject_put() when kobject_init_and_add() fails (Kaike Wan)
When kobject_init_and_add() returns an error in hfi1_create_port_files(), kobject_put() is not called for the corresponding kobject, which potentially leads to a memory leak. This patch fixes the issue by calling kobject_put() even when kobject_init_and_add() fails. Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20200326163813.21129.44280.stgit@awfm-01.aw.intel.com Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
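The rule being applied, sketched (the kobject names follow hfi1's port files but the details are simplified): kobject_init_and_add() takes a reference even when it fails, so the error path must drop it with kobject_put().

    ret = kobject_init_and_add(&ppd->pport_cc_kobj, &port_cc_ktype,
                               kobj, "CCMgtA");
    if (ret) {
            kobject_put(&ppd->pport_cc_kobj);       /* was missing before the fix */
            return ret;
    }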
2020-03-27  IB/hfi1: Fix memory leaks in sysfs registration and unregistration (Kaike Wan)
When the hfi1 driver is unloaded, kmemleak will report the following issue:

unreferenced object 0xffff8888461a4c08 (size 8):
  comm "kworker/0:0", pid 5, jiffies 4298601264 (age 2047.134s)
  hex dump (first 8 bytes):
    73 64 6d 61 30 00 ff ff  sdma0...
  backtrace:
    [<00000000311a6ef5>] kvasprintf+0x62/0xd0
    [<00000000ade94d9f>] kobject_set_name_vargs+0x1c/0x90
    [<0000000060657dbb>] kobject_init_and_add+0x5d/0xb0
    [<00000000346fe72b>] 0xffffffffa0c5ecba
    [<000000006cfc5819>] 0xffffffffa0c866b9
    [<0000000031c65580>] 0xffffffffa0c38e87
    [<00000000e9739b3f>] local_pci_probe+0x41/0x80
    [<000000006c69911d>] work_for_cpu_fn+0x16/0x20
    [<00000000601267b5>] process_one_work+0x171/0x380
    [<0000000049a0eefa>] worker_thread+0x1d1/0x3f0
    [<00000000909cf2b9>] kthread+0xf8/0x130
    [<0000000058f5f874>] ret_from_fork+0x35/0x40

This patch fixes the issue by:
- Releasing dd->per_sdma[i].kobject in hfi1_unregister_sysfs(), which fixes the memory leak.
- Calling kobject_put() to unwind operations only for those entries in dd->per_sdma[] whose operations have succeeded (including the current one that has just failed) in hfi1_verbs_register_sysfs().

Cc: <stable@vger.kernel.org> Fixes: 0cb2aa690c7e ("IB/hfi1: Add sysfs interface for affinity setup") Link: https://lore.kernel.org/r/20200326163807.21129.27371.stgit@awfm-01.aw.intel.com Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
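A sketch of the unwind described above (simplified; i is an int index into the hfi1 per_sdma[] array, and the ktype/name details are approximate). Entries 0..i have all had kobject_init_and_add() called, so each needs a kobject_put(), including entry i whose add just failed:

    for (i = 0; i < dd->num_sdma; i++) {
            ret = kobject_init_and_add(&dd->per_sdma[i].kobj, &sde_ktype,
                                       &dev->kobj, "sdma%d", i);
            if (ret)
                    goto bail;
    }
    return 0;

    bail:
    /* unwind: entries 0..i all took a reference, the failed one included */
    for (; i >= 0; i--)
            kobject_put(&dd->per_sdma[i].kobj);
    return ret;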
2020-03-27  IB/mlx5: Move to fully dynamic UAR mode once user space supports it (Yishai Hadas)
Move to fully dynamic UAR mode once user space supports it. In this case we prevent any legacy mode of UARs on the allocated context and prevent redundant allocation of the static ones. Link: https://lore.kernel.org/r/20200324060143.1569116-6-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Michael Guralnik <michaelgur@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-27  IB/mlx5: Limit the scope of struct mlx5_bfreg_info to mlx5_ib (Leon Romanovsky)
struct mlx5_bfreg_info is used by mlx5_ib only but is exposed to both RDMA and netdev parts of mlx5 driver. Move that struct to mlx5_ib namespace, clean vertical space alignment and convert lib_uar_4k from bool to bitfield. Link: https://lore.kernel.org/r/20200324060143.1569116-5-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-27  IB/mlx5: Extend QP creation to get uar page index from user space (Yishai Hadas)
Extend QP creation to get the uar page index from user space; this mode can be used with the UAR dynamic mode APIs to allocate/destroy a UAR object. As part of enabling this option, block the weird/unsupported cross-channel option, which hard-codes index 0. This QP flag was never exposed to user space as part of any formal upstream release; the dynamic option allows passing a valid UAR page index instead. Link: https://lore.kernel.org/r/20200324060143.1569116-4-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Michael Guralnik <michaelgur@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-27  IB/mlx5: Extend CQ creation to get uar page index from user space (Yishai Hadas)
Extend CQ creation to get the uar page index from user space; this mode can be used with the UAR dynamic mode APIs to allocate/destroy a UAR object. Link: https://lore.kernel.org/r/20200324060143.1569116-3-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Michael Guralnik <michaelgur@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-27  IB/mlx5: Expose UAR object and its alloc/destroy commands (Yishai Hadas)
Expose the UAR object and its alloc/destroy commands to be used over the ioctl interface by user space applications. This API supports both BF & NC modes and enables dynamic allocation of UARs once they are really needed. Because the number of driver objects was limited by the number of core objects when the merged tree was prepared, the number of core objects had to be decreased to enable the new UAR object usage. Link: https://lore.kernel.org/r/20200324060143.1569116-2-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Michael Guralnik <michaelgur@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Remove redundant judgment of qp_type (Weihang Li)
The type of the qp has already been checked in check_send_valid(), so this judgment should be removed. Link: https://lore.kernel.org/r/1584674622-52773-11-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Remove redundant assignment of wc->smac when polling cq (Weihang Li)
The smac field in ib_wc was used to create an AH and then treated as the destination mac address in the UD sqwqe, but the related code that filled smac into the AH has been removed from the core. The dmac in the UD sqwqe is now parsed from the dgid in the grh, which is passed in by the ULP, so this assignment should be removed. Link: https://lore.kernel.org/r/1584674622-52773-10-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Remove redundant qpc setup operations (Lang Cheng)
Before calling modify_qp_reset_to_init(), the entire qpc mask has been cleared, so it is no longer necessary to clear the specific fields in the mask. Link: https://lore.kernel.org/r/1584674622-52773-9-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Remove meaningless prints (Wenpeng Liang)
The ceq and aeq are ring buffers; their consumer index is set back to zero after reaching the maximum value. The warning should be removed because it may mislead users. Link: https://lore.kernel.org/r/1584674622-52773-8-git-send-email-liweihang@huawei.com Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Remove definition of cq doorbell structure (Lang Cheng)
The struct hns_roce_v2_cq_db is unused; it should be removed. Link: https://lore.kernel.org/r/1584674622-52773-7-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Adjust the qp status value sequence of the hardware (Lang Cheng)
Interchange SQD and SQE to match the protocol. Link: https://lore.kernel.org/r/1584674622-52773-6-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Optimize hns_roce_alloc_vf_resource() (Lijun Ou)
The capabilities of the hardware should be queried first and then used in hns_roce_alloc_vf_resource(). Also remove an unnecessary if ... else condition in it. Link: https://lore.kernel.org/r/1584674622-52773-5-git-send-email-liweihang@huawei.com Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Simplify attribute judgment code (Lang Cheng)
Combine attribute flags before masking them. Link: https://lore.kernel.org/r/1584674622-52773-4-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  RDMA/hns: Fix a wrong judgment of return value (Weihang Li)
hns_roce_alloc_mtt_range() never returns -1, so ret should be checked against zero instead of -1. Fixes: 1ceb0b11a8a2 ("RDMA/hns: Fix non-standard error codes") Link: https://lore.kernel.org/r/1584674622-52773-3-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
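A sketch of the corrected check (call arguments approximate): the helper returns 0 on success or a negative errno, never -1 specifically, so test for any nonzero error.

    ret = hns_roce_alloc_mtt_range(hr_dev, npages, &mtt->first_seg); /* args approximate */
    if (ret)                /* was: if (ret == -1) */
            return ret;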
2020-03-26  RDMA/hns: Unify format of prints (Lijun Ou)
Use ibdev_err/dbg/warn() instead of dev_err/dbg/warn(), and change some prints to the format "failed to do something, ret = n". Link: https://lore.kernel.org/r/1584674622-52773-2-git-send-email-liweihang@huawei.com Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-26  IB/hfi1: Use scnprintf() for avoiding potential buffer overflow (Takashi Iwai)
Since snprintf() returns the would-be-output size instead of the actual output size, the succeeding calls may go beyond the given buffer limit. Fix it by replacing with scnprintf(). Link: https://lore.kernel.org/r/20200319154641.23711-1-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
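A self-contained sketch of the difference (not the hfi1 code): snprintf() returns the length the full output would have had, so a running offset can exceed the buffer size and the next call writes out of bounds; scnprintf() returns the bytes actually written, keeping the offset in range.

    static int show_values(char *buf, size_t size, const int *vals, int n)
    {
            int i, pos = 0;

            for (i = 0; i < n; i++)
                    pos += scnprintf(buf + pos, size - pos, "%d ", vals[i]);
            return pos;     /* pos stays < size, unlike with snprintf() */
    }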
2020-03-24  IB/mlx5: Generally use the WC auto detection test result (Yishai Hadas)
Now that we have direct and reliable detection of WC support by the system, use it broadly. The only case we have to worry about is when the WC autodetector cannot run. For this fringe case, generally assume that WC is available, except in the well-defined case of no PAT support on x86, which is tested by calling arch_can_pci_mmap_wc(). If WC is wrongly assumed to be available, it causes a small performance hit on paths in userspace that are tuned to the assumption that WC is available; there is no functional loss. It is very unlikely that any platforms exist that lack WC and also care about this micro-optimization in the fringe case where autodetection does not work. By removing the fairly bogus CONFIG tests, this makes WC work broadly on all arches and all platforms. Link: https://lore.kernel.org/r/20200318100323.46659-1-leon@kernel.org Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Reviewed-by: Michael Guralnik <michaelgur@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-24  RDMA/hns: Optimize mhop put flow for multi-hop addressing (Xi Wang)
Optimizes hns_roce_table_mhop_put() by encapsulating the code that clears hem into clear_mhop_hem(), which makes the code flow clearer. Link: https://lore.kernel.org/r/1584417324-2255-3-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-24  RDMA/hns: Optimize mhop get flow for multi-hop addressing (Xi Wang)
Splits hns_roce_table_mhop_get() into 4 sub-functions to make the code flow clearer. Link: https://lore.kernel.org/r/1584417324-2255-2-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-24  RDMA/bnxt_re: Wait for all the CQ events before freeing CQ data structures (Selvin Xavier)
The Destroy CQ command to firmware returns num_cnq_events in its response, which tells the driver how many CQ events were generated for this CQ. The driver should wait for all of these events before freeing the CQ host structures. Also, add a routine to clean up all pending notifications for the CQs being destroyed. This avoids the possibility of accessing the CQ data structures after they are freed. Fixes: 1ac5a4047975 ("RDMA/bnxt_re: Add bnxt_re RoCE driver") Link: https://lore.kernel.org/r/1584120842-3200-1-git-send-email-selvin.xavier@broadcom.com Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-24  IB/mlx5: Fix a NULL vs IS_ERR() check (Dan Carpenter)
The kzalloc() function returns NULL, not error pointers. Fixes: 30f2fe40c72b ("IB/mlx5: Introduce UAPIs to manage packet pacing") Link: https://lore.kernel.org/r/20200320132641.GF95012@mwanda Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
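The bug class in miniature (obj is a placeholder): kzalloc() reports failure with NULL, so an IS_ERR() test never fires and the error path is silently skipped.

    obj = kzalloc(sizeof(*obj), GFP_KERNEL);
    if (!obj)                       /* was: if (IS_ERR(obj)) */
            return -ENOMEM;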
2020-03-18  RDMA/efa: Use in-kernel offsetofend() to check field availability (Leon Romanovsky)
Remove custom and duplicated variant of offsetofend(). Link: https://lore.kernel.org/r/20200310091438.248429-4-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Acked-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
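For reference, the in-kernel helper is roughly the following (matching the long-standing definition in include/linux/stddef.h; the usage line below is illustrative, with assumed struct/field names):

    #define offsetofend(TYPE, MEMBER) \
            (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

    /* typical use: did the user-space-provided command include this field? */
    if (udata->inlen >= offsetofend(struct efa_ibv_create_qp, driver_qp_type))
            qp_type_present = true;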
2020-03-18  IB/hfi1: Remove kobj from hfi1_devdata (Kaike Wan)
The field kobj was added to hfi1_devdata structure to manage the life time of the hfi1_devdata structure for PSM accesses: commit e11ffbd57520 ("IB/hfi1: Do not free hfi1 cdev parent structure early") Later another mechanism user_refcount/user_comp was introduced to provide the same functionality: commit acd7c8fe1493 ("IB/hfi1: Fix an Oops on pci device force remove") This patch will remove this kobj field, as it is no longer needed. Link: https://lore.kernel.org/r/20200316210500.7753.4145.stgit@awfm-01.aw.intel.com Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-18  RDMA/hns: Check if depth of qp is 0 before configure (Lang Cheng)
The depth of a qp shouldn't be allowed to be set to zero; once that is ensured, the subsequent processing can be simplified. Also, when a qp is changed from reset to reset, the minimum-qp-depth capability was used to identify hip06 hardware; this check should be changed into a more readable form. Link: https://lore.kernel.org/r/1584006624-11846-1-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-18  i40iw: Report correct firmware version (Sindhu, Devale)
The driver uses a hard-coded value for the FW version and reports inconsistent FW versions between ibv_devinfo and /sys/class/infiniband/i40iw/fw_ver. Retrieve the FW version via a Control QP (CQP) operation and report it consistently across sysfs and query device. Fixes: d37498417947 ("i40iw: add files for iwarp interface") Link: https://lore.kernel.org/r/20200313214406.2159-1-shiraz.saleem@intel.com Reported-by: Jarod Wilson <jarod@redhat.com> Signed-off-by: Sindhu, Devale <sindhu.devale@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-18  RDMA/hns: Optimize wqe buffer set flow for post send (Xi Wang)
Splits hns_roce_v2_post_send() into three sub-functions: set_rc_wqe(), set_ud_wqe() and update_sq_db() to simplify the code. Link: https://lore.kernel.org/r/1583839084-31579-6-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-18  RDMA/hns: Optimize base address table config flow for qp buffer (Xi Wang)
Currently, before the qp is created, a page size needs to be calculated for the base address table to store all base addresses in the mtr. As a result, the parameter configuration of the mtr is complex. So integrate the process of calculating the base table page size into the hem related interface to simplify the process of using mtr. Link: https://lore.kernel.org/r/1583839084-31579-5-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-18  RDMA/hns: Optimize the wr opcode conversion from ib to hns (Xi Wang)
Simplify the wr opcode conversion from ib to hns by using a map table instead of the switch-case statement. Link: https://lore.kernel.org/r/1583839084-31579-4-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
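A sketch of the map-table idea (the HNS_ROCE_V2_WQE_OP_* names mirror the driver's opcode macros, but the table contents here are abbreviated and illustrative):

    static const u32 ib_to_hns_opcode[] = {
            [IB_WR_RDMA_WRITE]      = HNS_ROCE_V2_WQE_OP_RDMA_WRITE,
            [IB_WR_RDMA_READ]       = HNS_ROCE_V2_WQE_OP_RDMA_READ,
            [IB_WR_SEND]            = HNS_ROCE_V2_WQE_OP_SEND,
            /* ... remaining opcodes ... */
    };

    static u32 to_hns_opcode(enum ib_wr_opcode op)
    {
            if (op < ARRAY_SIZE(ib_to_hns_opcode))
                    return ib_to_hns_opcode[op];
            return HNS_ROCE_V2_WQE_OP_MASK;         /* invalid-opcode marker */
    }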
2020-03-18  RDMA/hns: Optimize wqe buffer filling process for post send (Xi Wang)
Encapsulate the wqe buffer processing details for the datagram seg, fast mr seg and atomic seg. Link: https://lore.kernel.org/r/1583839084-31579-3-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-18  RDMA/hns: Rename wqe buffer related functions (Xi Wang)
There are several global functions related to wqe buffers in the hns driver that are called from different files. Their names do not directly indicate the namespace they belong to, so add the prefix 'hns_roce_' to 3 wqe buffer related global functions: get_recv_wqe(), get_send_wqe(), and get_send_extend_sge(). Link: https://lore.kernel.org/r/1583839084-31579-2-git-send-email-liweihang@huawei.com Signed-off-by: Xi Wang <wangxi11@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/bnxt_re: Remove unnecessary sched count (Selvin Xavier)
Since the lifetime of bnxt_re_task is controlled by the kref of device, sched_count is no longer required. Remove it. Link: https://lore.kernel.org/r/1584117207-2664-4-git-send-email-selvin.xavier@broadcom.com Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/bnxt_re: Fix lifetimes in bnxt_re_task (Jason Gunthorpe)
A work queue cannot just rely on the ib_device not being freed, it must hold a kref on the memory so that the BNXT_RE_FLAG_IBDEV_REGISTERED check works. Fixes: 1ac5a4047975 ("RDMA/bnxt_re: Add bnxt_re RoCE driver") Link: https://lore.kernel.org/r/1584117207-2664-3-git-send-email-selvin.xavier@broadcom.com Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-17  RDMA/bnxt_re: Use ib_device_try_get() (Jason Gunthorpe)
There are a couple of places in this driver, running from a work queue, that need the ib_device to be registered. Instead of using a broken internal bit, rely on the new core code to guarantee device registration. Link: https://lore.kernel.org/r/1584117207-2664-2-git-send-email-selvin.xavier@broadcom.com Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
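A sketch of the pattern (struct bnxt_re_work and its rdev member are approximations of the driver's work context): the work handler takes a registration reference up front and bails out if the device is already unregistered.

    static void bnxt_re_task(struct work_struct *work)
    {
            struct bnxt_re_work *re_work =
                    container_of(work, struct bnxt_re_work, work);

            if (!ib_device_try_get(&re_work->rdev->ibdev))
                    return;         /* device no longer registered */
            /* ... safe to use the ib_device here ... */
            ib_device_put(&re_work->rdev->ibdev);
    }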
2020-03-13  RDMA/hns: Fix wrong judgments of udata->outlen (Weihang Li)
These judgments were used to keep compatibility with older versions of userspace that don't have the field named "cap_flags" in struct hns_roce_ib_create_cq_resp. But comparing outlen with the size of resp would become wrong if another new field were added to resp; outlen should be compared with the end offset of cap_flags in resp. Fixes: 4f8f0d5e33dd ("RDMA/hns: Package the flow of creating cq") Link: https://lore.kernel.org/r/1583845569-47257-1-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
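The corrected test, as a sketch (the cap flag name is illustrative): compare outlen against the end offset of cap_flags rather than sizeof(resp), so adding trailing fields to resp later doesn't break detection of older user space.

    if (udata->outlen >= offsetofend(typeof(resp), cap_flags))
            resp.cap_flags |= HNS_ROCE_SUPPORT_CQ_RECORD_DB; /* flag name illustrative */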
2020-03-13  Merge branch 'mlx5_mr_cache' into rdma.git for-next (Jason Gunthorpe)
Leon Romanovsky says:

====================
This series fixes various corner cases in the mlx5_ib MR cache implementation, see specific commit messages for more information.
====================

Based on the mlx5-next branch at git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux due to dependencies

* branch 'mlx5_mr-cache':
  RDMA/mlx5: Allow MRs to be created in the cache synchronously
  RDMA/mlx5: Revise how the hysteresis scheme works for cache filling
  RDMA/mlx5: Fix locking in MR cache work queue
  RDMA/mlx5: Lock access to ent->available_mrs/limit when doing queue_work
  RDMA/mlx5: Fix MR cache size and limit debugfs
  RDMA/mlx5: Always remove MRs from the cache before destroying them
  RDMA/mlx5: Simplify how the MR cache bucket is located
  RDMA/mlx5: Rename the tracking variables for the MR cache
  RDMA/mlx5: Replace spinlock protected write with atomic var
  {IB,net}/mlx5: Move asynchronous mkey creation to mlx5_ib
  {IB,net}/mlx5: Assign mkey variant in mlx5_ib only
  {IB,net}/mlx5: Setup mkey variant before mr create command invocation
2020-03-13  RDMA/mlx5: Allow MRs to be created in the cache synchronously (Jason Gunthorpe)
If the cache is completely out of MRs, and we are running in cache mode, then directly and synchronously create an MR that is compatible with the cache bucket, using a sleeping mailbox command. This ensures that the thread waiting for the MR absolutely will get one. When an MR allocated in this way is freed, it is compatible with the cache bucket and will be recycled back into it. This deletes the very buggy ent->compl scheme that was used to create a synchronous MR allocation. Link: https://lore.kernel.org/r/20200310082238.239865-13-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-13  RDMA/mlx5: Revise how the hysteresis scheme works for cache filling (Jason Gunthorpe)
Currently if the work queue is running then it is in 'hysteresis' mode and will fill until the cache reaches the high water mark. This implicit state is very tricky and doesn't interact with pending very well. Instead of self re-scheduling the work queue after the add_keys() has started to create the new MR, have the queue scheduled from reg_mr_callback() only after the requested MR has been added. This avoids the bad design of an in-rush of queue'd work doing back to back add_keys() until EAGAIN then sleeping. The add_keys() will be paced one at a time as they complete, slowly filling up the cache. Also, fix pending to be only manipulated under lock. Link: https://lore.kernel.org/r/20200310082238.239865-12-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-13  RDMA/mlx5: Fix locking in MR cache work queue (Jason Gunthorpe)
All of the members of mlx5_cache_ent must be accessed while holding the spinlock; add the missing spinlock acquisition in __cache_work_func(). Using cache->stopped and flush_workqueue() is an inherently racy way to shut down self-scheduling work on a queue. Replace it with ent->disabled under lock, and always check disabled before queuing any new work. Use cancel_work_sync() to shut down the queue. Use READ_ONCE/WRITE_ONCE for dev->last_add to manage concurrency, as coherency is less important here. Split fill_delay out of the bitfield; C bitfield updates are not atomic and this is just a mess. Use READ_ONCE/WRITE_ONCE, though this could also use test_bit()/set_bit(). Link: https://lore.kernel.org/r/20200310082238.239865-11-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
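The READ_ONCE/WRITE_ONCE piece, sketched (field and helper names are approximate): a timestamp written and read without the lock only needs tearing-free accesses, not strict ordering.

    /* writer */
    WRITE_ONCE(dev->cache.last_add, jiffies);

    /* lockless reader */
    if (time_after(jiffies, READ_ONCE(dev->cache.last_add) + 300 * HZ))
            queue_adjust_cache_entry(ent);  /* hypothetical follow-up */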
2020-03-13  RDMA/mlx5: Lock access to ent->available_mrs/limit when doing queue_work (Jason Gunthorpe)
Accesses to these members need to be locked. There is no reason not to hold a spinlock while calling queue_work(), so move the tests into a helper and always call it under lock. The helper should be called when available_mrs is adjusted. Link: https://lore.kernel.org/r/20200310082238.239865-10-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-13  RDMA/mlx5: Fix MR cache size and limit debugfs (Jason Gunthorpe)
The size_write function is supposed to adjust total_mrs to match the user's request, but it lacks locking and safety checking. total_mrs can only be adjusted by at most available_mrs; MRs already assigned to users cannot be revoked. Ensure that the user provides a target value within the range of available_mrs and within the high/low water marks. limit_write has confusing and wrong sanity checking, and doesn't have the ability to deallocate on limit reduction. Since both functions use the same algorithm to adjust available_mrs, consolidate it into one function and write it correctly. Fix the locking by holding the spinlock for all accesses to ent->X. Always fail if the user provides a malformed string. Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters") Link: https://lore.kernel.org/r/20200310082238.239865-9-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>