path: root/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
2023-07-26  net/mlx5e: kTLS, Fix protection domain in use syndrome when devlink reload (Jianbo Liu)

There are DEK objects cached in the DEK pool after kTLS is used, and they are freed only in mlx5e_ktls_cleanup(). mlx5e_destroy_mdev_resources() is called in mlx5e_suspend() to free mdev resources, including the protection domain (PD). However, the PD is still referenced by the cached DEK objects in this case, because profile->cleanup() (and therefore mlx5e_ktls_cleanup()) is called after mlx5e_suspend() during devlink reload. So the following FW syndrome is generated:

    mlx5_cmd_out_err:803:(pid 12948): DEALLOC_PD(0x801) op_mod(0x0) failed, status bad resource state(0x9), syndrome (0xef0c8a), err(-22)

To avoid this syndrome, move DEK pool destruction to mlx5e_ktls_cleanup_tx(), which is called by profile->cleanup_tx(). And move pool creation to mlx5e_ktls_init_tx() for symmetry.

Fixes: f741db1a5171 ("net/mlx5e: kTLS, Improve connection rate by using fast update encryption key")
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
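A minimal sketch of the described move, assuming pool helpers along the lines of mlx5e_tls_tx_pool_init()/mlx5e_tls_tx_pool_cleanup() (helper names illustrative, not confirmed by this log):

    /* profile->init_tx(): create the DEK pool here... */
    int mlx5e_ktls_init_tx(struct mlx5e_priv *priv)
    {
            if (!mlx5e_is_ktls_tx(priv->mdev))
                    return 0;
            priv->tls->tx_pool = mlx5e_tls_tx_pool_init(priv->mdev);
            return priv->tls->tx_pool ? 0 : -ENOMEM;
    }

    /* ...and destroy it in profile->cleanup_tx(), which runs before
     * mlx5e_suspend() frees the PD during devlink reload, so no cached
     * DEK object can still reference the PD */
    void mlx5e_ktls_cleanup_tx(struct mlx5e_priv *priv)
    {
            if (!mlx5e_is_ktls_tx(priv->mdev))
                    return;
            mlx5e_tls_tx_pool_cleanup(priv->tls->tx_pool);
            priv->tls->tx_pool = NULL;
    }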
2023-06-15  net: tls: make the offload check helper take skb not socket (Jakub Kicinski)

All callers of tls_is_sk_tx_device_offloaded() currently do an equivalent of:

    if (skb->sk && tls_is_sk_tx_device_offloaded(skb->sk))

Have the helper accept the skb and do the skb->sk check locally. Two drivers already have local static inline wrappers doing just that. While at it, change the ifdef condition to TLS_DEVICE. Only TLS_DEVICE selects SOCK_VALIDATE_XMIT, so the two are equivalent. This makes removing the duplicated IS_ENABLED() check in funeth more obviously correct.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Maxim Mikityanskiy <maxtram95@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Acked-by: Tariq Toukan <tariqt@nvidia.com>
Acked-by: Dimitris Michailidis <dmichail@fungible.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
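A sketch of the shape the new skb-based helper takes, per the description above (the exact in-tree implementation may differ):

    #ifdef CONFIG_TLS_DEVICE
    static inline bool tls_is_skb_tx_device_offloaded(const struct sk_buff *skb)
    {
            /* the skb->sk check moved from the callers into the helper */
            return skb->sk && tls_is_sk_tx_device_offloaded(skb->sk);
    }
    #else
    static inline bool tls_is_skb_tx_device_offloaded(const struct sk_buff *skb)
    {
            return false;
    }
    #endif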
2023-03-15  net/mlx5e: kTLS, Fix missing error unwind on unsupported cipher type (Gal Pressman)

Do proper error unwinding when adding an unsupported TX/RX cipher type. Move the switch case prior to key creation so there's less to unwind, and change the goto label name to describe the action performed instead of what failed.

Fixes: 4960c414db35 ("net/mlx5e: Support 256 bit keys with kTLS device offload")
Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
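A hedged illustration of the reordered unwind (identifiers simplified; create_key() is a placeholder for the driver's key-creation call):

    priv_tx = kzalloc(sizeof(*priv_tx), GFP_KERNEL);
    if (!priv_tx)
            return -ENOMEM;

    /* validate the cipher type before creating the key,
     * so a failure here leaves less to unwind */
    switch (crypto_info->cipher_type) {
    case TLS_CIPHER_AES_GCM_128:
    case TLS_CIPHER_AES_GCM_256:
            break;
    default:
            err = -EOPNOTSUPP;
            goto err_free_ctx;      /* label names the action performed */
    }

    err = create_key(mdev, crypto_info, &priv_tx->key_id);
    if (err)
            goto err_free_ctx;      /* nothing extra to unwind yet */

    return 0;

    err_free_ctx:
            kfree(priv_tx);
            return err;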
2023-02-07  net/mlx5e: Remove incorrect debugfs_create_dir NULL check in TLS (Gal Pressman)

Remove the NULL check on the debugfs_create_dir() return value, as the function returns an ERR pointer on failure, not NULL. The check is not replaced with IS_ERR_OR_NULL(), as debugfs_create_file() (and debugfs functions in general) does not need error checking.

Fixes: 0fedee1ae9ef ("net/mlx5e: kTLS, Add debugfs")
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
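A short sketch of the point (directory and file names illustrative):

    struct dentry *dir;

    /* debugfs_create_dir() returns ERR_PTR() on failure, never NULL,
     * so a NULL check can never fire; no check is needed at all, as
     * debugfs functions accept ERR_PTR inputs and turn into no-ops */
    dir = debugfs_create_dir("tls", parent);
    debugfs_create_file("pool_size", 0400, dir, priv, &pool_size_fops);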
2023-01-30  net/mlx5e: kTLS, Improve connection rate by using fast update encryption key (Jianbo Liu)

Now that the fast DEK update is fully implemented, use it for kTLS to get better performance. A TIS pool is already in place to recycle the TISes. With this series and the TIS pool, TLS CPS improves by ~9x, from 11k/s to 101k/s.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-01-10  net/mlx5e: kTLS, Add debugfs (Tariq Toukan)

Add TLS debugfs to improve observability by exposing the size of the TLS TX pool. To observe the size of the TX pool:

    $ cat /sys/kernel/debug/mlx5/<pci>/nic/tls/tx/pool_size

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Co-developed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-11-12  net/mlx5e: kTLS, Use a single async context object per a callback bulk (Tariq Toukan)

A single async context object is sufficient to wait for the completions of many callbacks. Switch to using one instance per bulk of commands.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
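A hedged sketch of one mlx5 async context serving a whole bulk of commands (callback and work-array names illustrative; 'work' is assumed to embed struct mlx5_async_work):

    struct mlx5_async_ctx async_ctx;
    int i;

    mlx5_cmd_init_async_ctx(mdev, &async_ctx);
    for (i = 0; i < num; i++)
            /* fire-and-forget: the callback runs on command completion */
            mlx5_cmd_exec_cb(&async_ctx, in[i], sizeof(in[i]),
                             out[i], sizeof(out[i]),
                             bulk_callback, &work[i]);
    /* blocks until every callback of this context has run, which is
     * why no per-callback completion object is needed (next entry) */
    mlx5_cmd_cleanup_async_ctx(&async_ctx);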
2022-11-12  net/mlx5e: kTLS, Remove unnecessary per-callback completion (Tariq Toukan)

Waiting on a completion object for each callback before cleaning up their async contexts is not necessary, as this is already implied in the mlx5_cmd_cleanup_async_ctx() API.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-11-12  net/mlx5e: kTLS, Remove unused work field (Tariq Toukan)

The work field in struct mlx5e_async_ctx is not used. Remove it.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-09-22  net/mlx5e: Support 256 bit keys with kTLS device offload (Gal Pressman)

Add support for 256 bit TLS keys using device offload.

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-08-22  net/mlx5e: kTLS, Use _safe() iterator in mlx5e_tls_priv_tx_list_cleanup() (Dan Carpenter)

Use the list_for_each_entry_safe() macro to prevent dereferencing "obj" after it has been freed.

Fixes: c4dfe704f53f ("net/mlx5e: kTLS, Recycle objects of device-offloaded TLS TX connections")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
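A generic sketch of the _safe() pattern (struct and member names illustrative):

    struct tls_priv_tx *obj, *n;    /* 'n' caches the next node up front */

    list_for_each_entry_safe(obj, n, &priv_tx_list, list_node) {
            list_del(&obj->list_node);
            kfree(obj);     /* safe: the iterator already advanced via 'n' */
    }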
2022-08-10  net/tls: Use RCU API to access tls_ctx->netdev (Maxim Mikityanskiy)

Currently, tls_device_down synchronizes with tls_device_resync_rx using RCU, however, the pointer to netdev is stored using WRITE_ONCE and loaded using READ_ONCE. Although such an approach is technically correct (rcu_dereference is essentially a READ_ONCE, and rcu_assign_pointer uses WRITE_ONCE to store NULL), using the dedicated RCU helpers for pointers is preferable, as they include additional checks and might change the implementation transparently to the callers.

Mark the netdev pointer as __rcu and use the correct RCU helpers to access it. For non-concurrent access, pass the right conditions that guarantee safe access (locks taken, refcount value). Also use the correct helper in mlx5e, where even READ_ONCE was missing.

The transition to RCU exposes existing issues, fixed by this commit:

1. bond_tls_device_xmit could read netdev twice, and it could become NULL the second time, after the NULL check passed.

2. Drivers shouldn't stop processing the last packet if tls_device_down just set netdev to NULL, before tls_dev_del was called. This prevents a possible packet drop when transitioning to the fallback software mode.

Fixes: 89df6a810470 ("net/bonding: Implement TLS TX device offload")
Fixes: c55dcdd435aa ("net/tls: Fix use-after-free after the TLS device goes down and up")
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Link: https://lore.kernel.org/r/20220810081602.1435800-1-maximmi@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
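A hedged sketch of the access patterns involved (field names per the description; the exact conditions used in-tree may differ):

    /* writer: retract the pointer, then wait for readers to drain */
    rcu_assign_pointer(tls_ctx->netdev, NULL);
    synchronize_net();

    /* concurrent reader (e.g. the resync datapath) */
    rcu_read_lock();
    netdev = rcu_dereference(tls_ctx->netdev);
    if (netdev)
            do_resync(netdev);      /* illustrative */
    rcu_read_unlock();

    /* non-concurrent access: document the guarantee that makes it safe */
    netdev = rcu_dereference_protected(tls_ctx->netdev,
                                       !refcount_read(&tls_ctx->refcount));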
2022-07-28  net/mlx5e: kTLS, Dynamically re-size TX recycling pool (Tariq Toukan)

Let the TLS TX recycle pool be more flexible in size, by continuously and dynamically allocating and releasing HW resources in response to changes in the connection rate and load.

Allocate and release pool entries in bulks of 16. Use a workqueue to release/allocate in the background. Allocate a new bulk when the pool size drops below the low threshold (1K). The symmetric operation is done when the pool size exceeds the upper threshold (4K). Every idle pool entry holds 1 TIS and 1 DEK (HW resources), in addition to ~100 bytes in host memory.

Start with an empty pool to minimize memory and HW resource waste for non-TLS users that have device-offload TLS enabled. Upon a new request, in case the pool is empty, do not wait for a whole bulk allocation to complete; instead, trigger an instant allocation of a single resource to reduce latency.

Performance tests:
Before: 11,684 CPS
After:  16,556 CPS

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
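A sketch of the refill/shrink decision under the stated thresholds (macro, field and work names illustrative):

    #define TLS_TX_POOL_BULK        16
    #define TLS_TX_POOL_LOW         1024
    #define TLS_TX_POOL_HIGH        4096

    /* called whenever the pool size changes */
    if (pool->size < TLS_TX_POOL_LOW)
            queue_work(pool->wq, &pool->bulk_create_work);
    else if (pool->size > TLS_TX_POOL_HIGH)
            queue_work(pool->wq, &pool->bulk_destroy_work);

    /* empty-pool fast path: allocate a single entry synchronously
     * instead of waiting for a whole bulk of TLS_TX_POOL_BULK */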
2022-07-28  net/mlx5e: kTLS, Recycle objects of device-offloaded TLS TX connections (Tariq Toukan)

The transport interface send (TIS) object is responsible for performing all transport-related operations on the transmit side. The ConnectX HW uses a TIS object to save and access the TLS crypto information and state of an offloaded TX kTLS connection.

Before this patch, we used to create a new TIS per connection and destroy it once it's closed. Every create and destroy of a TIS is a FW command. The same applies to the private TLS context, which we used to dynamically allocate and free per connection. Recycling these resources reduces the impact of the allocation/free operations and helps speed up the connection rate.

In this feature we maintain a pool of TX objects and use it to recycle the resources instead of re-creating them per connection. A cached TIS popped from the pool is updated to serve the new connection via the fast-path HW interface, updating the TLS static and progress params. This is a very fast operation, significantly faster than FW commands.

On recycling, a WQE fence is required after the context params change. This guarantees that the data is sent only after the context has been successfully updated in hardware, and that the context modification doesn't interfere with existing traffic.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-28  net/mlx5e: kTLS, Take stats out of OOO handler (Tariq Toukan)

Let the caller of mlx5e_ktls_tx_handle_ooo() take care of updating the stats, according to the returned value. As the switch/case blocks are already there, this change saves unnecessary branches in the handler.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
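A hedged sketch of the caller-side pattern (return values modeled on the driver's sync-status enum; stats field names illustrative):

    switch (mlx5e_ktls_tx_handle_ooo(priv_tx, sq, datalen, seq)) {
    case MLX5E_KTLS_SYNC_DONE:
            break;
    case MLX5E_KTLS_SYNC_SKIP_NO_DATA:
            stats->tls_skip_no_sync_data++;
            goto out;       /* send without offload */
    case MLX5E_KTLS_SYNC_FAIL:
            stats->tls_drop_no_sync_data++;
            goto err_out;   /* drop */
    }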
2022-07-28  net/mlx5e: kTLS, Introduce TLS-specific create TIS (Tariq Toukan)

TLS TIS objects have a defined role in mapping and reaching the HW TLS contexts. Some standard TIS attributes (like LAG port affinity) are not relevant for them. Use a dedicated TLS TIS create function instead of the generic mlx5e_create_tis.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
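A hedged sketch of a dedicated TLS TIS create, using the standard mlx5 ifc macros (the exact set of attributes configured in-tree is an assumption):

    static int ktls_create_tis(struct mlx5_core_dev *mdev, u32 *tisn)
    {
            u32 in[MLX5_ST_SZ_DW(create_tis_in)] = {};
            void *tisc = MLX5_ADDR_OF(create_tis_in, in, ctx);

            /* only TLS-relevant attributes; no LAG port affinity etc. */
            MLX5_SET(tisc, tisc, tls_en, 1);

            return mlx5_core_create_tis(mdev, in, tisn);
    }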
2022-07-14  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (Jakub Kicinski)

include/net/sock.h
  310731e2f161 ("net: Fix data-races around sysctl_mem.")
  e70f3c701276 ("Revert "net: set SK_MEM_QUANTUM to 4096"")
  https://lore.kernel.org/all/20220711120211.7c8b7cba@canb.auug.org.au/

net/ipv4/fib_semantics.c
  747c14307214 ("ip: fix dflt addr selection for connected nexthop")
  d62607c3fe45 ("net: rename reference+tracking helpers")

net/tls/tls.h
include/net/tls.h
  3d8c51b25a23 ("net/tls: Check for errors in tls_device_init")
  587903142308 ("tls: create an internal header")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-07-06  net/mlx5e: kTLS, Fix build time constant test in TX (Tariq Toukan)

Use the correct constant (TLS_DRIVER_STATE_SIZE_TX) in the comparison against the size of the private TX TLS driver context.

Fixes: df8d866770f9 ("net/mlx5e: kTLS, Use kernel API to extract private offload context")
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
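A sketch of the corrected build-time check, assuming the private TX context is stored by pointer in the kernel's TLS driver-state area (see the "Use kernel API to extract private offload context" entry below):

    /* the TX driver-state area must fit the stored pointer;
     * the bug was comparing against the RX-side constant */
    BUILD_BUG_ON(sizeof(struct mlx5e_ktls_offload_context_tx *) >
                 TLS_DRIVER_STATE_SIZE_TX);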
2022-07-02  net: add skb_[inner_]tcp_all_headers helpers (Eric Dumazet)

Most drivers use "skb_transport_offset(skb) + tcp_hdrlen(skb)" to compute the headers length for a TCP packet, but others use more convoluted (yet equivalent) ways. Add skb_tcp_all_headers() and skb_inner_tcp_all_headers() helpers to harmonize this a bit.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
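Per the description, the helpers are equivalent to the open-coded forms; a sketch (the in-tree versions may differ cosmetically):

    static inline int skb_tcp_all_headers(const struct sk_buff *skb)
    {
            return skb_transport_offset(skb) + tcp_hdrlen(skb);
    }

    static inline int skb_inner_tcp_all_headers(const struct sk_buff *skb)
    {
            return skb_inner_transport_offset(skb) + inner_tcp_hdrlen(skb);
    }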
2022-04-06  net/mlx5: Cleanup kTLS function names and their exposure (Leon Romanovsky)

The _accel_ part of the function names is no longer relevant, so rename the kTLS functions to drop it. Also clean up the headers so they don't carry declarations that are never used.

Link: https://lore.kernel.org/r/72319e6020fb2553d02b3bbc7476bda363f6d60c.1649073691.git.leonro@nvidia.com
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2022-04-06  net/mlx5: Remove tls vs. ktls separation as it is the same (Leon Romanovsky)

After the removal of FPGA TLS, we can remove the tls->ktls indirection too, as it is the same thing.

Link: https://lore.kernel.org/r/67e596599edcffb0de43f26551208dfd34ac777e.1649073691.git.leonro@nvidia.com
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2022-02-14  net/mlx5e: Read max WQEBBs on the SQ from firmware (Aya Levin)

Prior to this patch, the maximal value for max WQEBBs (WQE Basic Blocks, where WQE is a Work Queue Element) on the TX side was assumed to be a fixed 16. All firmware versions to date comply with this. In order to be more flexible and resilient, read the corresponding capability from FW: max_wqe_sz_sq. This value describes the maximum WQE size in bytes, so max WQEBBs is derived by dividing it by the WQEBB byte size. The driver uses the minimum of the fixed value (16) and the division result. This ensures synchronization between driver and firmware and avoids unexpected behavior. Store this value on the different SQs (Send Queues) for easy access.

Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
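A hedged sketch of such a helper (macro names from the mlx5 driver; whether the in-tree code matches exactly is an assumption):

    static inline u8 mlx5e_get_max_sq_wqebbs(struct mlx5_core_dev *mdev)
    {
            /* clamp to the historical fixed maximum of 16 WQEBBs */
            return min_t(u16, MLX5_SEND_WQE_MAX_WQEBBS,
                         MLX5_CAP_GEN(mdev, max_wqe_sz_sq) / MLX5_SEND_WQE_BB);
    }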
2021-06-26  net/mlx5e: kTLS, Add stats for number of deleted kTLS TX offloaded connections (Tariq Toukan)

Expose an ethtool SW counter for the number of kTLS device-offloaded TX connections that are finished and deleted.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-06-03  net/mlx5e: Disable TLS device offload in kdump mode (Alaa Hleihel)

Under a kdump environment we want to use the smallest possible amount of resources, which includes setting the SQ size to minimum. However, when running on a device that supports TLS device offload, the SQ stop room becomes larger than with a non-capable device, and requires increasing the SQ size. Since TLS device offload is not necessary in kdump mode, disable it to reduce the memory requirements for capable devices. With this change, the needed SQ stop room size drops by 33.

Signed-off-by: Alaa Hleihel <alaa@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-31  net/mlx5e: kTLS, Fix TX counters atomicity (Tariq Toukan)

Some TLS TX counters increment per socket/connection, and are not protected against parallel modifications from several cores. Switch them to atomic counters by taking them out of the SQ stats and into the global atomic TLS stats.

In this patch, we touch a single counter, 'tx_tls_ctx', which counts the number of device-offloaded TX TLS connections added. Now that this counter can be increased without having the SQ context in hand, move it to the mlx5e_ktls_add_tx() callback where it really belongs, out of the fast data-path.

This change is not needed for counters that increment only in NAPI context or under the TX lock, as they are already protected. Keep them as tls_* counters under 'struct mlx5e_sq_stats'.

Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
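A minimal sketch of the atomic variant (struct placement and field path illustrative):

    struct mlx5e_tls_sw_stats {
            atomic64_t tx_tls_ctx;  /* offloaded TX connections added */
    };

    /* in the add-connection callback: safe from any core, no SQ needed */
    atomic64_inc(&priv->tls->sw_stats.tx_tls_ctx);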
2020-11-05  net/mlx5e: Validate stop_room size upon user input (Vladyslav Tarasiuk)

Stop room is space that may be taken by WQEs in the SQ during a packet transmit. It is used to check whether the next packet has enough room in the SQ. Stop room guarantees this packet can be served, and if not, the queue is stopped, so no more packets are passed to the driver until it's ready.

Currently, stop_room size is calculated and validated upon TX queue allocation. This makes it impossible to know whether the user provided valid input for certain parameters when the interface is down. Instead, store stop_room in mlx5e_sq_param and create mlx5e_validate_params() to validate its fields upon user input, even when the interface is down.

Signed-off-by: Vladyslav Tarasiuk <vladyslavt@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2020-09-21  net/mlx5e: Move the TLS resync check out of the function (Maxim Mikityanskiy)

Before this patch, mlx5e_ktls_tx_handle_resync_dump_comp checked for resync_dump_frag_page. This happened for all WQEs without an SKB, including padding WQEs, and required a function call. Normally, padding WQEs happen more often than TLS resyncs. Take this check out of the function and put it in an inline function, to save a call on all padding WQEs.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
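A hedged sketch of the inline wrapper (close to the shape described; the exact wrapper name is an assumption):

    static inline bool
    mlx5e_ktls_tx_try_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
                                               struct mlx5e_tx_wqe_info *wi,
                                               u32 *dma_fifo_cc)
    {
            /* cheap inline check, done for every SKB-less WQE,
             * padding WQEs included */
            if (!wi->resync_dump_frag_page)
                    return false;
            /* rare case: an actual resync DUMP completion */
            mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma_fifo_cc);
            return true;
    }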
2020-07-28  net/mlx5: Use fallthrough pseudo-keyword (Gustavo A. R. Silva)

Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough [1]. Also, remove unnecessary fall-through markings where that is the case.

[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
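A generic before/after of the conversion (illustrative switch):

    switch (state) {
    case STATE_INIT:
            setup();
            fallthrough;    /* replaces the old fall-through comment */
    case STATE_RUN:
            run();
            break;
    }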
2020-06-27  net/mlx5e: kTLS, Add kTLS RX resync support (Tariq Toukan)

Implement the RX resync procedure, using the TLS async resync API. The HW offload of TLS decryption on the RX side might get out-of-sync due to out-of-order reception of packets. This requires SW intervention to update the HW context and get it back in-sync.

Performance:
CPU: Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz, 24 cores, HT off
NIC: ConnectX-6 Dx 100GbE dual port

Goodput (app-layer throughput) comparison:
+---------------+-------+-------+---------+
| # connections |   1   |   4   |    8    |
+---------------+-------+-------+---------+
| SW (Gbps)     |  7.26 | 24.70 |  50.30  |
+---------------+-------+-------+---------+
| HW (Gbps)     | 18.50 | 64.30 |  92.90  |
+---------------+-------+-------+---------+
| Speedup       | 2.55x | 2.56x | 1.85x * |
+---------------+-------+-------+---------+

* After linerate is reached, the diff is observed in CPU util.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-06-27  net/mlx5e: kTLS, Add kTLS RX HW offload support (Tariq Toukan)

Implement driver support for the kTLS RX HW offload feature. Resync support is added in a downstream patch.

New offload contexts post their static/progress params WQEs over the per-channel async ICOSQ, protected under a spin-lock. The Channel/RQ is selected according to the socket's rxq index.

The feature is OFF by default. It can be turned on by:

    $ ethtool -K <if> tls-hw-rx-offload on

A new TLS-RX workqueue is used to allow asynchronous addition of steering rules, out of the NAPI context. It will also be used in a downstream patch, in the resync procedure.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-06-27  net/mlx5e: kTLS, Use kernel API to extract private offload context (Tariq Toukan)

Modify the implementation of the private kTLS TX HW offload context getter and setter, so that it uses the kernel API functions instead of a local shadow structure. A single BUILD_BUG_ON check is sufficient; remove the duplicate.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-06-27  net/mlx5e: kTLS, Improve TLS feature modularity (Tariq Toukan)

Better separate the code into c/h files, so that kTLS internals are exposed to the corresponding non-accel flow as follows:
- Necessary datapath functions are exposed via ktls_txrx.h.
- Necessary caps and configuration functions are exposed via ktls.h, which became very small.

In addition, kTLS internal code sharing is done via ktls_utils.h, which is not exposed to any non-accel file.

Add explicit WQE structures for the TLS static and progress params, breaking the union of the static with UMR, and the progress with PSV. Generalize the API as a preparation for TLS RX offload support.

Move kTLS TX-specific code to the proper file. Remove the inline tag for functions in C files, let the compiler decide. Use kzalloc/kfree for the priv_tx context.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com>
2020-06-27  net/mlx5: kTLS, Improve TLS params layout structures (Tariq Toukan)

Add explicit WQE segment structures for the TLS static and progress params. According to the HW spec, the TISN is not part of the progress params context, so take it out of it. Rename the control segment tisn field, as it could hold either a TIS or a TIR number.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
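A hedged sketch of explicit WQE layouts of this kind (segment types from the mlx5 headers; the exact composition is an assumption):

    struct mlx5e_set_tls_static_params_wqe {
            struct mlx5_wqe_ctrl_seg                ctrl;
            struct mlx5_wqe_umr_ctrl_seg            uctrl;
            struct mlx5_mkey_seg                    mkc;
            struct mlx5_wqe_tls_static_params_seg   params;
    };

    struct mlx5e_set_tls_progress_params_wqe {
            struct mlx5_wqe_ctrl_seg                ctrl;
            struct mlx5_wqe_tls_progress_params_seg params; /* no TISN here */
    };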
2020-05-09  net/mlx5e: Use struct assignment for WQE info updates (Tariq Toukan)

Struct assignment looks cleaner, and it resets the unassigned fields to zero, instead of leaving values from older ring cycles.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
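A tiny sketch of the idiom (field names illustrative):

    /* compound-literal assignment zero-fills every field not listed,
     * so no stale values survive from a previous ring cycle */
    *wi = (struct mlx5e_tx_wqe_info) {
            .skb        = skb,
            .num_wqebbs = num_wqebbs,
    };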
2020-05-09  net/mlx5e: kTLS, Do not fill edge for the DUMP WQEs in TX flow (Tariq Toukan)

Every single DUMP WQE resides in a single WQEBB. As the pi is calculated for each one separately, there is no real need for a contiguous room for them; allow them to populate different WQ fragments. This reduces WQ waste and improves its utilization.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-05-09  net/mlx5e: kTLS, Fill work queue edge separately in TX flow (Tariq Toukan)

For the static and progress context params WQEs, do the edge filling separately. This improves the WQ utilization and code readability, and reduces the chance of future bugs.

Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-05-09  net/mlx5e: Split TX acceleration offloads into two phases (Maxim Mikityanskiy)

After previous modifications, the offloads are no longer called one by one; the pi is calculated and the wqe is cleared in between the TLS and IPSEC offloads, which doesn't quite fit mlx5e_accel_handle_tx's purpose. This patch splits mlx5e_accel_handle_tx into two functions that correspond to two logical phases of running offloads:

1. Before fetching a WQE. Here runs the code that can post WQEs on its own, before the main WQE is fetched. It's the main part of the TLS offload.

2. After fetching a WQE. Here runs the code that updates the WQE's fields, but can't post other WQEs any more. It's a minor part of the TLS offload that sets the tisn field in the cseg, plus the eseg-based offloads (currently IPSEC; later patches will move GENEVE and checksum offloads there, too).

This allows mlx5e_xmit to take care of all the actions needed to transmit a packet in the right order, improves the structure of the code and reduces unnecessary operations. The structure will be further improved in the following patches (all eseg-based offloads will be moved to a single place, and reserving space for the main WQE will happen between phase 1 and phase 2 of offloads, to eliminate unneeded data movements).

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
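A hedged sketch of the resulting two-phase call flow in the xmit path (function names as described in this log; argument lists simplified):

    /* phase 1: may post WQEs of its own before the main WQE exists */
    if (unlikely(!mlx5e_accel_tx_begin(dev, sq, skb, &accel)))
            goto out;

    pi  = mlx5e_txqsq_get_next_pi(sq, num_wqebbs);
    wqe = MLX5E_TX_FETCH_WQE(sq, pi);

    /* phase 2: only fills fields of the already-fetched WQE
     * (tisn in the cseg, eseg-based offloads) */
    mlx5e_accel_tx_finish(priv, sq, skb, wqe, &accel);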
2020-05-09  net/mlx5e: Make TLS offload independent of wqe and pi (Maxim Mikityanskiy)

TLS offload may write a 32-bit field (tisn) to the cseg of the WQE. To do that, it receives pi and wqe pointers. As TLS offload may also send additional WQEs, it has to update pi and wqe, and in many cases it doesn't even use the pi calculated before and the wqe zeroed before, and does it itself. Also, mlx5e_sq_xmit has to copy the whole cseg if it goes to the mlx5e_fill_sq_frag_edge flow. All of this is not efficient.

It's more efficient to do the following:

1. Just return tisn from TLS offload and make the caller fill it in a more appropriate place.

2. Calculate pi and clear wqe after calling TLS offload.

3. If TLS offload has to send WQEs, calculate pi and clear wqe just before that. It's already done in all places anyway, so this commit allows to remove some redundant memsets and calls.

Copying of the cseg will be eliminated in one of the following commits; all other stuff is done here.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-05-09  net/mlx5e: Unify checks of TLS offloads (Maxim Mikityanskiy)

Both the INNOVA and ConnectX TLS offloads perform the same checks in the beginning. Unify them to reduce repeated code. Do a WARN_ON_ONCE on netdev mismatch, and finish with an error in both offloads, not only in the ConnectX one.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-05-09  net/mlx5e: Return bool from TLS and IPSEC offloads (Maxim Mikityanskiy)

TLS and IPSEC offloads currently return struct sk_buff *, but the value is either NULL or the same skb that was passed as a parameter. Return bool instead, to provide stronger guarantees to the calling code (it won't need to handle a potentially different returned SKB) and to simplify restructuring this code in the following commits.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-04-30  net/mlx5e: Unify reserving space for WQEs (Maxim Mikityanskiy)

In our fast-path design, a WQE (Work Queue Element) must not cross the page boundary. To enforce that, for WQEs consisting of more than one BB (Basic Block), the driver checks the available contiguous space in the WQ in advance, and if it's not enough, pads it with NOPs.

This patch modifies the code that calculates the position of the next WQE, considering the padding, and prepares the WQE. This code is common for all SQ types. In this patch it's reorganized in a way that makes the usage pattern unified for all SQ types, and makes the implementations self-contained and almost identical, preparing the repeating code for further deduplication attempts.

One place is left as is: mlx5e_sq_xmit and the mlx5e_fill_sq_frag_edge call inside it, because it is special in that it may also copy the WQE's cseg and eseg when reserving space. This will be eliminated in one of the following patches, and this place will be converted to the new approach, too.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
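A hedged, simplified sketch of the unified next-pi calculation (the in-tree version also records wqe_info for the posted NOPs):

    static u16 txqsq_get_next_pi(struct mlx5e_txqsq *sq, u16 size)
    {
            struct mlx5_wq_cyc *wq = &sq->wq;
            u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
            u16 contig = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);

            /* pad with NOPs when the WQE would cross the page boundary */
            if (unlikely(contig < size)) {
                    while (contig--)
                            mlx5e_post_nop(wq, sq->sqn, &sq->pc);
                    pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
            }
            return pi;
    }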
2020-04-30  net/mlx5e: Fetch WQE: reuse code and enforce typing (Maxim Mikityanskiy)

There are multiple mlx5{e,i}_*_fetch_wqe functions containing the same repeated code, because they operate on different SQ struct types. mlx5e_sq_fetch_wqe also returns void *, instead of the concrete WQE type. This commit generalizes the fetch WQE operation by putting this code into a single function. To simplify calls of the generic function in concrete use cases, macros are provided that substitute the right WQE size and cast the return type.

Before this patch, fetch_wqe used to calculate pi itself, but the value was often already known to the caller. This calculation is moved outside, to eliminate this unnecessary step and prepare for the fill_frag_edge refactoring in the next patch.

Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
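A hedged sketch of the generalized fetch and a typed wrapper macro (close to the approach described; exact macro set is an assumption):

    static inline void *mlx5e_fetch_wqe(struct mlx5_wq_cyc *wq, u16 pi,
                                        size_t wqe_size)
    {
            void *wqe = mlx5_wq_cyc_get_wqe(wq, pi);

            memset(wqe, 0, wqe_size);       /* callers rely on a cleared WQE */
            return wqe;
    }

    #define MLX5E_TX_FETCH_WQE(sq, pi) \
            ((struct mlx5e_tx_wqe *)mlx5e_fetch_wqe(&(sq)->wq, pi, \
                                    sizeof(struct mlx5e_tx_wqe)))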
2020-03-05  net/mlx5e: kTLS, Fix TCP seq off-by-1 issue in TX resync flow (Tariq Toukan)

We have an off-by-1 issue in the TCP seq comparison. The last sequence number that belongs to the TCP packet's payload is not "start_seq + len", but one byte before it. Fix it so that 'ends_before' is evaluated properly. This fixes a bug that results in error completions in the kTLS HW offload flows.

Fixes: ffbd9ca94e2e ("net/mlx5e: kTLS, Fix corner-case checks in TX resync flow")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
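A sketch of the corrected comparison, using the wraparound-safe before() helper introduced by the corner-case fix further down this log (variable names illustrative):

    /* the packet's last payload byte is tcp_seq + datalen - 1,
     * not tcp_seq + datalen */
    ends_before = before(tcp_seq + datalen - 1, record_start_seq);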
2020-01-24  net/mlx5e: kTLS, Do not send decrypted-marked SKBs via non-accel path (Tariq Toukan)

When TCP out-of-order is identified (unexpected tcp seq mismatch), the driver analyzes the packet and decides what handling it should get:
1. go to the accelerated path (to be encrypted in HW),
2. go to the regular xmit path (send w/o encryption),
3. drop.

Packets marked with skb->decrypted by the TLS stack in the TX flow skip SW encryption and rely on the HW offload. Verify that such packets are never sent un-encrypted on the wire. Add a WARN to catch such bugs, and prefer dropping the packet in these cases.

Fixes: 46a3ea98074e ("net/mlx5e: kTLS, Enhance TX resync flow")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Reviewed-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
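A hedged sketch of the guard described above (placement and labels illustrative):

    /* the OOO handler chose the non-accel (regular xmit) path */
    if (unlikely(skb->decrypted)) {
            WARN_ON_ONCE(1);        /* would leak plaintext on the wire */
            goto err_out;           /* prefer dropping the packet */
    }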
2020-01-24  net/mlx5e: kTLS, Remove redundant posts in TX resync flow (Tariq Toukan)

The call to tx_post_resync_params() is done earlier in the flow; the post of the control WQEs is unnecessarily repeated. Remove it.

Fixes: 700ec4974240 ("net/mlx5e: kTLS, Fix missing SQ edge fill")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Reviewed-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2020-01-24  net/mlx5e: kTLS, Fix corner-case checks in TX resync flow (Tariq Toukan)

There are the following cases:
1. Packet ends before the start marker: bypass offload.
2. Packet starts before the start marker and ends after it: drop, not supported, breaks the contract with the kernel.
3. Packet ends before the tls record info starts: drop, this packet was already acknowledged and its record info was released.

Add the above as a comment in the code.

Mind possible wraparounds of the TCP seq; replace the simple comparison with a call to the TCP before() method. In addition, remove the logic that handles negative sync_len values, as it became impossible.

Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Fixes: 46a3ea98074e ("net/mlx5e: kTLS, Enhance TX resync flow")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Boris Pismenny <borisp@mellanox.com>
Reviewed-by: Boris Pismenny <borisp@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2019-10-18  net/mlx5e: kTLS, Enhance TX resync flow (Tariq Toukan)

Once the kTLS TX resync function is called, it used to return a binary value, for success or failure. However, in case the TLS SKB is a retransmission of the connection handshake, it initiates the resync flow (as the tcp seq check holds), while regular packet handling is expected. In this patch, we identify this case and skip the resync operation accordingly.

Counters:
- Add a counter (tls_skip_no_sync_data) to monitor this.
- Bump the dump counters up, as they are used more frequently.
- Add a missing counter descriptor declaration for tls_resync_bytes in sq_stats_desc.

Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2019-10-18  net/mlx5e: kTLS, Save a copy of the crypto info (Tariq Toukan)

Do not assume the crypto info is accessible during the connection lifetime. Save a copy of it in the private TX context.

Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2019-10-18  net/mlx5e: kTLS, Remove unneeded cipher type checks (Tariq Toukan)

The cipher type is checked upon connection addition. There is no need to recheck it on every TX resync invocation.

Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2019-10-18  net/mlx5e: kTLS, Limit DUMP wqe size (Tariq Toukan)

HW expects the data size in DUMP WQEs to be up to the MTU. Make sure they are in range.

We elevate the frag page refcount by 'n-1', in addition to the one obtained in tx_sync_info_get(), giving an overall of 'n' references. We bulk the increments by using a single page_ref_add() call, to optimize performance. The refcounts are released one by one, by the corresponding completions.

Fixes: d2ead1f360e8 ("net/mlx5e: Add kTLS TX HW offload support")
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
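A sketch of the bulk refcounting described (variable names illustrative):

    /* tx_sync_info_get() already took one reference; each of the n
     * DUMP WQEs needs its own, so add the remaining n - 1 in one go */
    page_ref_add(skb_frag_page(frag), n - 1);

    /* each DUMP completion later drops exactly one reference */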