2022-12-06  drm/amd/display: Enable dp_hdmi21_pcon support  (David Galiffi)
[Why] It is not enabled for DCN3.0.1, 3.0.2, 3.0.3. [How] Add `dc->caps.dp_hdmi21_pcon_support = true` to these DCN versions. Reviewed-by: Martin Leung <Martin.Leung@amd.com> Acked-by: Stylon Wang <stylon.wang@amd.com> Signed-off-by: David Galiffi <David.Galiffi@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
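A minimal sketch of the change; the exact resource-construct function names per DCN version are an assumption:

    /* e.g. in dcn301_resource_construct() (assumed location), and
     * likewise for the DCN 3.0.2 and 3.0.3 resource files */
    dc->caps.dp_hdmi21_pcon_support = true;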
2022-12-06  drm/amd/display: prevent seamless boot on displays that don't have the preferred dig  (Dmytro Laktyushkin)
Seamless boot requires VBIOS to select a dig matching the link, order-wise. A significant amount of dal logic assumes we are using the preferred dig for eDP, and if this isn't the case, seamless boot is not supported. Reviewed-by: Martin Leung <Martin.Leung@amd.com> Acked-by: Stylon Wang <stylon.wang@amd.com> Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2022-12-06  drm/amd/display: trigger timing sync only if TG is running  (Aurabindo Pillai)
[Why&How] If the timing generator isn't running, it does not make sense to trigger a sync on the corresponding OTG. Check this condition before starting. Otherwise, this will cause errors like: *ERROR* GSL: Timeout on reset trigger! Fixes: dc55b106ad477c ("drm/amd/display: Disable phantom OTG after enable for plane disable") Reviewed-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com> Reviewed-by: Alvin Lee <Alvin.Lee2@amd.com> Acked-by: Stylon Wang <stylon.wang@amd.com> Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
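A hedged sketch of the guard, assuming is_tg_enabled() is the timing-generator hook that reports whether the OTG is running:

    /* do not trigger a sync on an OTG whose TG isn't running */
    if (!tg->funcs->is_tg_enabled(tg))
        return;
    /* proceed with the GSL/timing-sync trigger (placeholder) */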
2022-12-06  drm/amd/display: Remove DTB DTO on CLK update  (Chris Park)
[Why] DTB DTO is programmed more correctly during link enable. Programming it on CLK update, which may arrive frequently and sporadically per flip, throws off the DTB DTO. [How] Remove DTB DTO programming on clock update. Reviewed-by: Alvin Lee <Alvin.Lee2@amd.com> Acked-by: Jasdeep Dhillon <jdhillon@amd.com> Signed-off-by: Chris Park <Chris.Park@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2022-12-06  gfs2: Partially revert gfs2_inode_lookup change  (Andreas Gruenbacher)
Commit c412a97cf6c5 changed delete_work_func() to always perform an inode lookup when gfs2_try_evict() fails. This doesn't make sense as a gfs2_try_evict() failure indicates that the inode is likely still in use. Revert that change. Fixes: c412a97cf6c5 ("gfs2: Use TRY lock in gfs2_inode_lookup for UNLINKED inodes") Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-12-06  gfs2: Add gfs2_inode_lookup comment  (Andreas Gruenbacher)
Add comment on when and why gfs2_cancel_delete_work() needs to be skipped in gfs2_inode_lookup(). Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-12-06  gfs2: Uninline and improve glock_{set,clear}_object  (Andreas Gruenbacher)
Those functions have reached a size at which having them inline isn't useful anymore, so uninline them. In addition, report the glock name on assertion failures. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-12-06  gfs2: Simply dequeue iopen glock in gfs2_evict_inode  (Andreas Gruenbacher)
With the previous change, to simplify things, we can always just dequeue and uninitialize the iopen glock in gfs2_evict_inode() even if it isn't queued anymore. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-12-06  gfs2: Clean up after gfs2_create_inode rework  (Andreas Gruenbacher)
Since commit 3d36e57ff768 ("gfs2: gfs2_create_inode rework"), gfs2_evict_inode() and gfs2_create_inode() / gfs2_inode_lookup() will synchronize via the inode hash table, and we can be certain that once a new inode is inserted into the inode hash table, gfs2_evict_inode() has completely destroyed any previous versions. We no longer need to worry about overlapping inode object lifespans. Update the code and comments accordingly. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-12-06  gfs2: Avoid dequeuing GL_ASYNC glock holders twice  (Andreas Gruenbacher)
When a locking request fails, the associated glock holder is automatically dequeued from the list of active and waiting holders. For GL_ASYNC locking requests, this will obviously happen asynchronously and it can race with attempts to cancel that locking request via gfs2_glock_dq(). Therefore, don't forget to check if a locking request has already been dequeued in gfs2_glock_dq(). Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
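A minimal sketch of the extra check, assuming a gfs2_holder_queued()-style helper reports whether the holder is still on the glock's holder list:

    /* in gfs2_glock_dq(), with the glock spin lock held (sketch) */
    if (!gfs2_holder_queued(gh)) {
        /* the failed GL_ASYNC request was already dequeued */
        return;
    }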
2022-12-06  gfs2: Make gfs2_glock_hold return its glock argument  (Andreas Gruenbacher)
This allows code like 'gl = gfs2_glock_hold(...)'. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
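A sketch of the new shape; the body is assumed to keep the existing reference-taking logic:

    struct gfs2_glock *gfs2_glock_hold(struct gfs2_glock *gl)
    {
        GLOCK_BUG_ON(gl, __lockref_is_dead(&gl->gl_lockref));
        lockref_get(&gl->gl_lockref);
        return gl;  /* allows gl = gfs2_glock_hold(...) at call sites */
    }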
2022-12-06  gfs2: Always check inode size of inline inodes  (Andreas Gruenbacher)
Check if the inode size of stuffed (inline) inodes is within the allowed range when reading inodes from disk (gfs2_dinode_in()). This protects us against on-disk corruption. The two checks in stuffed_readpage() and gfs2_unstuffer_page() that just truncate inline data to the maximum allowed size don't actually make sense, and they can be removed now as well. Reported-by: syzbot+7bb81dfa9cda07d9cd9d@syzkaller.appspotmail.com Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
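A hedged sketch of the read-time check; using gfs2_max_stuffed_size() as the bound is an assumption:

    /* in gfs2_dinode_in() (sketch): reject corrupt sizes for stuffed inodes */
    if (gfs2_is_stuffed(ip) &&
        i_size_read(&ip->i_inode) > gfs2_max_stuffed_size(ip))
        return -EIO;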
2022-12-06  gfs2: Cosmetic gfs2_dinode_{in,out} cleanup  (Andreas Gruenbacher)
In each of the two functions, add an inode variable that points to &ip->i_inode and use that throughout the rest of the function. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2022-12-06  xen/netback: don't call kfree_skb() with interrupts disabled  (Juergen Gross)
It is not allowed to call kfree_skb() from hardware interrupt context or with interrupts being disabled. So remove kfree_skb() from the spin_lock_irqsave() section and use the already existing "drop" label in xenvif_start_xmit() for dropping the SKB. At the same time replace the dev_kfree_skb() call there with a call of dev_kfree_skb_any(), as xenvif_start_xmit() can be called with disabled interrupts. This is XSA-424 / CVE-2022-42328 / CVE-2022-42329. Fixes: be81992f9086 ("xen/netback: don't queue unlimited number of packages") Reported-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com>
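The underlying pattern, as a hedged sketch (the lock and queue names are placeholders):

    /* free outside the irqsave section: kfree_skb() must not run
     * with interrupts disabled */
    spin_lock_irqsave(&queue->response_lock, flags);
    __skb_unlink(skb, &queue->pending);   /* placeholder list handling */
    spin_unlock_irqrestore(&queue->response_lock, flags);
    kfree_skb(skb);

    /* in xenvif_start_xmit(), which can run with IRQs off: */
    dev_kfree_skb_any(skb);   /* instead of dev_kfree_skb(skb) */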
2022-12-06  xen/netback: Ensure protocol headers don't fall in the non-linear area  (Ross Lagerwall)
In some cases, the frontend may send a packet where the protocol headers are spread across multiple slots. This would result in netback creating an skb where the protocol headers spill over into the non-linear area. Some drivers and NICs don't handle this properly resulting in an interface reset or worse. This issue was introduced by the removal of an unconditional skb pull in the tx path to improve performance. Fix this without reintroducing the pull by setting up grant copy ops for as many slots as needed to reach the XEN_NETBACK_TX_COPY_LEN size. Adjust the rest of the code to handle multiple copy operations per skb. This is XSA-423 / CVE-2022-3643. Fixes: 7e5d7753956b ("xen-netback: remove unconditional __pskb_pull_tail() in guest Tx path") Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com> Reviewed-by: Paul Durrant <paul@xen.org> Signed-off-by: Juergen Gross <jgross@suse.com>
2022-12-06  pinctrl: thunderbay: fix possible memory leak in thunderbay_build_functions()  (Gaosheng Cui)
thunderbay_add_functions() frees the memory of thunderbay_funcs when everything is OK, but thunderbay_funcs is not freed when thunderbay_add_functions() fails, causing a memory leak. Add a kfree() for the case where thunderbay_add_functions() fails to fix it. In addition, do some cleanup by moving kfree(funcs) from thunderbay_add_functions() to thunderbay_build_functions(). Fixes: 12422af8194d ("pinctrl: Add Intel Thunder Bay pinctrl driver") Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com> Reviewed-by: Rafał Miłecki <rafal@milecki.pl> Link: https://lore.kernel.org/r/20221129120126.1567338-1-cuigaosheng1@huawei.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
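A hedged sketch of the resulting ownership, with signatures assumed from the description:

    /* thunderbay_build_functions() now frees funcs on every path */
    funcs = kcalloc(nfuncs, sizeof(*funcs), GFP_KERNEL);
    if (!funcs)
        return -ENOMEM;
    /* fill in funcs here */
    ret = thunderbay_add_functions(tpc, funcs);
    kfree(funcs);   /* previously freed inside the callee, success only */
    return ret;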
2022-12-06  nvme-pci: split out a nvme_pci_ctrl_is_dead helper  (Christoph Hellwig)
Clean up nvme_dev_disable by splitting the logic to detect if a controller is dead into a separate helper. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
2022-12-06  nvme-pci: return early on ctrl state mismatch in nvme_reset_work  (Christoph Hellwig)
The only way nvme_reset_work could be called when not in resetting state is if a reset and remove happen near the same time. This should not happen, but if it did we don't want the reset work to disable the controller because the remove is already doing that. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
2022-12-06  nvme-pci: rename nvme_disable_io_queues  (Christoph Hellwig)
This function really deletes the I/O queues, so rename it to match the functionality. Also move the main wrapper right next to the actual underlying implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
2022-12-06  nvme-pci: cleanup nvme_suspend_queue  (Christoph Hellwig)
Remove the unused return value, pass a dev + qid instead of the queue as that is better for the callers as well as the function itself, and remove the entirely pointless kerneldoc comment. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
2022-12-06  nvme-pci: remove nvme_pci_disable  (Christoph Hellwig)
nvme_pci_disable has a single caller, fold it into that. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Eric Curtin <ecurtin@redhat.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
2022-12-06  nvme-pci: remove nvme_disable_admin_queue  (Christoph Hellwig)
nvme_disable_admin_queue has only a single caller, and just calls two other functions, so remove it to clean up the remove path a little more. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Eric Curtin <ecurtin@redhat.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
2022-12-06  nvme: merge nvme_shutdown_ctrl into nvme_disable_ctrl  (Christoph Hellwig)
Many of the callers decide which one to use based on a bool argument and there is at least some code to be shared, so merge these two. Also move a comment specific to a single callsite to that callsite. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Hector Martin <marcan@marcan.st>
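A hedged sketch of the merged helper's shape (simplified; the CSTS wait is omitted):

    int nvme_disable_ctrl(struct nvme_ctrl *ctrl, bool shutdown)
    {
        int ret;

        ctrl->ctrl_config &= ~NVME_CC_SHN_MASK;
        if (shutdown)
            ctrl->ctrl_config |= NVME_CC_SHN_NORMAL;
        else
            ctrl->ctrl_config &= ~NVME_CC_ENABLE;

        ret = ctrl->ops->reg_write32(ctrl, NVME_REG_CC, ctrl->ctrl_config);
        if (ret)
            return ret;
        /* then wait for CSTS.SHST or CSTS.RDY accordingly (omitted) */
        return 0;
    }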
2022-12-06  nvme: use nvme_wait_ready in nvme_shutdown_ctrl  (Christoph Hellwig)
Refactor the code to wait for CSTS state changes so that it can be reused by nvme_shutdown_ctrl. This reduces the delay between each iteration that checks CSTS from 100ms in the shutdown code to the 1 to 2ms range done during enable, matching the changes from commit 3e98c2443f5c that were only applied to the enable/disable path. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Pankaj Raghav <p.raghav@samsung.com>
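A hedged sketch of the shared wait loop; the exact signature and timeout handling are assumptions:

    static int nvme_wait_ready(struct nvme_ctrl *ctrl, u32 mask, u32 val,
                               unsigned long deadline)
    {
        u32 csts;

        while (true) {
            if (ctrl->ops->reg_read32(ctrl, NVME_REG_CSTS, &csts))
                return -ENODEV;
            if ((csts & mask) == val)
                return 0;
            if (time_after(jiffies, deadline))
                return -ETIMEDOUT;
            usleep_range(1000, 2000);  /* was msleep(100) in shutdown */
        }
    }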
2022-12-06  nvme-apple: fix controller shutdown in apple_nvme_disable  (Christoph Hellwig)
nvme_shutdown_ctrl already shuts the controller down, there is no need to also call nvme_disable_ctrl for the shutdown case. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Eric Curtin <ecurtin@redhat.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Hector Martin <marcan@marcan.st>
2022-12-06  net/mlx5e: Generalize creation of default IPsec miss group and rule  (Leon Romanovsky)
Create a general function that sets up the miss group and rule to forward all non-matched traffic to the next table. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Group IPsec miss handles into separate struct  (Leon Romanovsky)
Move the miss handles into a dedicated struct, so we can reuse it in the next patch when creating the IPsec policy flow table. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Make clear what IPsec rx_err does  (Leon Romanovsky)
Reuse the existing struct that holds all information about the modify header pointer and rule. This helps reduce the ambiguity of the name _err_, which doesn't describe the real purpose of that flow table, rule, and function: to copy the status result from HW to the stack. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Flatten the IPsec RX add rule path  (Leon Romanovsky)
Rewrite the IPsec RX add rule path to be less convoluted and not rely on pre-initialized variables. The code now has a clean linear flow with a clear separation between error and success paths. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Refactor FTE setup code to be more clear  (Leon Romanovsky)
The policy offload logic needs to set a flow steering rule that matches on saddr and daddr too, so factor out this code into separate functions, and align the code with the netdev coding pattern of relying on the family type. As part of this change, separate more logic out of setup_fte_common to make sure that the function names describe what is done in each function better than the generic *common* name. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Move IPsec flow table creation to separate function  (Leon Romanovsky)
Even now, to support IPsec crypto, the RX and TX paths use the same logic to create flow tables. In the following patches, we will add more tables to support IPsec packet offload. So reuse the existing code and rewrite it to support IPsec packet offload from the beginning. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Create hardware IPsec packet offload objects  (Leon Romanovsky)
Create initial hardware IPsec packet offload object and connect it to advanced steering operation (ASO) context and queue, so the data path can communicate with the stack. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Create Advanced Steering Operation object for IPsec  (Leon Romanovsky)
Set up the ASO (Advanced Steering Operation) object that is needed for IPsec to interact with the SW stack about various fast-changing events: replay window, lifetime limits, etc. Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Remove accesses to priv for low level IPsec FS code  (Leon Romanovsky)
The mlx5 priv structure is the driver's main structure and holds high level data. That information is not needed for the IPsec flow steering logic, and the pointer to mlx5e_priv was not supposed to be passed in the first place. This change "cleans" the logic to rely on IPsec-internal structures without touching the global mlx5e_priv. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Use mlx5 print routines for low level IPsec code  (Leon Romanovsky)
Low level mlx5 code needs to use mlx5_core print routines and not netdev ones, as the failures are relevant to the HW itself and not to its netdev. This change allows us to remove access to mlx5 priv structure, which holds high level driver data that isn't needed for mlx5 IPsec code. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Create symmetric IPsec RX and TX flow steering structs  (Leon Romanovsky)
Remove AF family obfuscation by creating symmetric structs for the RX and TX IPsec flow steering chains. This simplifies the low level IPsec FS creation logic without the need to dig into multiple levels of structs. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Remove extra layers of defines  (Leon Romanovsky)
Instead of redefining XFRM core defines to the same values with an MLX5_* prefix, cache the input values as-is by making sure that the proper storage objects are used. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Store replay window in XFRM attributes  (Leon Romanovsky)
As a preparation for the future extension of the IPsec hardware object to allow configuration of packet offload mode, extend the XFRM validator to check replay window values. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5e: Advertise IPsec packet offload support  (Leon Romanovsky)
Add the needed capability checks to determine if the device supports IPsec packet offload mode. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5: Add HW definitions for IPsec packet offload  (Leon Romanovsky)
Add all needed bits to support IPsec packet offload mode. Reviewed-by: Raed Salem <raeds@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2022-12-06  net/mlx5: Return ready to use ASO WQE  (Leon Romanovsky)
There is no need to hide the returned ASO WQE type behind a void *; use the real type instead. Do it together with zeroing that memory, so the ASO WQE is ready to use immediately. Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
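A hedged sketch of the new shape (queue internals reduced to a placeholder):

    /* return the real WQE type, already zeroed, instead of a void * */
    struct mlx5_aso_wqe *mlx5_aso_get_wqe(struct mlx5_aso *aso)
    {
        struct mlx5_aso_wqe *wqe = aso_next_wqe(aso);  /* placeholder */

        memset(wqe, 0, sizeof(*wqe));
        return wqe;
    }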
2022-12-06  Merge branch 'Extend XFRM core to allow packet offload configuration'  (Steffen Klassert)
Leon Romanovsky says:
============
The following series extends the XFRM core code to handle a new type of IPsec offload: packet offload. In this mode, the HW is responsible for the whole data path, so both the policy and the state should be offloaded.

IPsec packet offload is an improved version of IPsec crypto mode. In packet mode, the HW is responsible for trimming/adding headers in addition to decryption/encryption. In this mode, the packet arrives at the stack already decrypted, and vice versa for TX (it exits to the HW not yet encrypted).

Devices that implement IPsec packet offload mode offload policies too. In the RX path, this causes a situation where the HW can't effectively handle mixed SW and HW priorities unless users make sure that HW-offloaded policies have higher priorities. It means that we don't need to perform any search of inexact policies and/or priority checks if a HW policy was discovered: in such a situation, the HW will catch the packets anyway, and the HW can still implement inexact lookups. In case a specific policy is not found, we will continue with the packet lookup and check for the existence of HW policies in the inexact list.

HW policies are added to the head of the SPD to ensure fast lookup, as XFRM iterates over all policies in a loop. This simple solution allows us to achieve the same benefits as separate HW/SW policy databases without over-engineering the code to iterate over and manage two databases on the same path. To not over-engineer the code, HW policies are treated as SW ones and don't take the netdev into account, to allow reuse of the same priorities for different devices.

Current limitations:
* No software fallback
* Fragments are dropped, both in RX and TX
* No socket policies
* Only IPsec transport mode is implemented

================================================================================
Rekeying:

In order to support rekeying, as the XFRM core is skipped, the HW/driver should do the following (see the driver-side sketch after the changelog below):
* Count the handled packets
* Raise an event when limits are reached
* Drop packets once the hard limit is reached

The XFRM core calls the newly introduced xfrm_dev_state_update_curlft() function to sync the device statistics with its internal structures. On a HW limit event, the driver calls xfrm_state_check_expire() to let the XFRM core take the relevant decisions. This separation between the control logic (in XFRM) and the data plane allows packet offload to reuse the SW stack.

================================================================================
Configuration:

iproute2: https://lore.kernel.org/netdev/cover.1652179360.git.leonro@nvidia.com/

Packet offload mode:
  ip xfrm state offload packet dev <if-name> dir <in|out>
  ip xfrm policy .... offload packet dev <if-name>
Crypto offload mode:
  ip xfrm state offload crypto dev <if-name> dir <in|out>
or (backward compatibility)
  ip xfrm state offload dev <if-name> dir <in|out>

================================================================================
Performance results:

TCP multi-stream, using one iperf3 instance per CPU.
+----------------------+--------+--------+--------+--------+---------+---------+
|                      | 1 CPU  | 2 CPUs | 4 CPUs | 8 CPUs | 16 CPUs | 32 CPUs |
|                      +--------+--------+--------+--------+---------+---------+
|                      |                       BW (Gbps)                        |
+----------------------+--------+--------+--------+--------+---------+---------+
| Baseline             | 27.9   | 59     | 93.1   | 92.8   | 93.7    | 94.4    |
+----------------------+--------+--------+--------+--------+---------+---------+
| Software IPsec       | 6      | 11.9   | 23.3   | 45.9   | 83.8    | 91.8    |
+----------------------+--------+--------+--------+--------+---------+---------+
| IPsec crypto offload | 15     | 29.7   | 58.5   | 89.6   | 90.4    | 90.8    |
+----------------------+--------+--------+--------+--------+---------+---------+
| IPsec packet offload | 28     | 57     | 90.7   | 91     | 91.3    | 91.9    |
+----------------------+--------+--------+--------+--------+---------+---------+

IPsec packet offload mode behaves like the baseline and reaches line rate with the same number of CPUs.

Setup details (similar on both sides):
* NIC: ConnectX6-DX dual port, 100 Gbps each. A single port was used in the tests.
* CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz

================================================================================
Series together with the mlx5 part:
https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=xfrm-next

================================================================================
Changelog:
v10:
 * Added forgotten xdo_dev_state_del. Patch #4.
 * Moved the changelog in the cover letter to the end.
 * Added the "if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {" line to the newly added netronome IPsec support. Patch #2.
v9: https://lore.kernel.org/all/cover.1669547603.git.leonro@nvidia.com
 * Added acquire support
v8: https://lore.kernel.org/all/cover.1668753030.git.leonro@nvidia.com
 * Removed an unrelated blank line
 * Fixed typos in the documentation
v7: https://lore.kernel.org/all/cover.1667997522.git.leonro@nvidia.com
 As was discussed in the IPsec workshop:
 * Renamed "full offload" to "packet offload".
 * Added a check that the offloaded SA and policy have the same device while sending a packet
 * Added to the SAD the same optimization as was done for the SPD to speed up lookups.
v6: https://lore.kernel.org/all/cover.1666692948.git.leonro@nvidia.com
 * Fixed a misplaced "!" in the sixth patch.
v5: https://lore.kernel.org/all/cover.1666525321.git.leonro@nvidia.com
 * Rebased to latest ipsec-next.
 * Replaced the HW priority patch with a solution that mimics separated SPDs for SW and HW. See more description in this cover letter.
 * Dropped the RFC tag; the use case, API, and implementation are clear.
v4: https://lore.kernel.org/all/cover.1662295929.git.leonro@nvidia.com
 * Changed the title from "PATCH" to "PATCH RFC" per request.
 * Added two new patches: one to update hard/soft limits and another as an initial take on documentation.
 * Added more info about the lifetime/rekeying flow to the cover letter; see the relevant section.
 * perf traces for crypto mode will come later.
v3: https://lore.kernel.org/all/cover.1661260787.git.leonro@nvidia.com
 * I didn't hear any suggestion of what term to use instead of "packet offload", so left it as is. It is used in commit messages and documentation only and is easy to rename.
 * Added performance data and background info to the cover letter
 * Reused the xfrm_output_resume() function to support multiple XFRM transformations
 * Added a PMTU check in addition to the driver .xdo_dev_offload_ok validation
 * Documentation is in progress, but not part of this series yet.
v2: https://lore.kernel.org/all/cover.1660639789.git.leonro@nvidia.com
 * Rebased to latest 6.0-rc1
 * Added an extra check in the TX datapath patch to validate packets before forwarding to HW.
 * Added policy cleanup logic in case of a netdev down event
v1: https://lore.kernel.org/all/cover.1652851393.git.leonro@nvidia.com
 * Moved a comment to be before if (...) in the third patch.
v0: https://lore.kernel.org/all/cover.1652176932.git.leonro@nvidia.com
============
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
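Referring back to the rekeying flow described in the cover letter above, a hedged driver-side sketch (the my_drv_* and my_hw_* names are hypothetical):

    /* HW raised a lifetime event: let the XFRM core decide what to do */
    static void my_drv_ipsec_limit_event(struct xfrm_state *x)
    {
        xfrm_state_check_expire(x);
    }

    /* the core pulls fresh HW counters via the new callback */
    static void my_drv_xdo_dev_state_update_curlft(struct xfrm_state *x)
    {
        x->curlft.packets = my_hw_read_packet_count(x);  /* hypothetical */
    }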
2022-12-06  spi: mtk-snfi: Add snfi support for MT7986 IC  (Xiangsheng Hou)
Add snfi support for MT7986 IC. Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://lore.kernel.org/r/20221205065756.26875-2-xiangsheng.hou@mediatek.com Signed-off-by: Mark Brown <broonie@kernel.org>
2022-12-06  Merge branch 'net-lan966x-enable-ptp-on-bridge-interfaces'  (Paolo Abeni)
Horatiu Vultur says: ==================== net: lan966x: Enable PTP on bridge interfaces Before, it was not allowed to run PTP on ports that are part of a bridge because, in the case of a transparent clock, the HW would still forward the frames, so there would be duplicate frames. Now that there is VCAP support, it is possible to add entries in the VCAP to trap frames to the CPU, and the CPU will forward these frames. The first part of the patch series extends the VCAP support to be able to modify and get a rule, while the last patch uses the VCAP to trap the PTP frames. ==================== Link: https://lore.kernel.org/r/20221203104348.1749811-1-horatiu.vultur@microchip.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-12-06  net: lan966x: Add ptp trap rules  (Horatiu Vultur)
Currently, lan966x doesn't allow running PTP over interfaces that are part of a bridge. The reason is that when lan966x received a PTP frame (regardless of whether it was L2/IPv4/IPv6), the HW would flood this frame. Now that it is possible to add VCAP rules to the HW to trap these frames to the CPU, it is possible to run PTP also over interfaces that are part of a bridge. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-12-06  net: microchip: vcap: Add vcap_rule_get_key_u32  (Horatiu Vultur)
Add the function vcap_rule_get_key_u32, which gets the value and the mask of a key that exists on the rule. If the key doesn't exist, it returns an error. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-12-06  net: microchip: vcap: Add vcap_mod_rule  (Horatiu Vultur)
Add the function vcap_mod_rule, which updates an existing rule in the VCAP. The rule must already exist in the VCAP for it to be modified. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-12-06  net: microchip: vcap: Add vcap_get_rule  (Horatiu Vultur)
Add the function vcap_get_rule, which returns a rule based on the internal rule id. The entire functionality for reading and decoding a rule from the VCAP lived in the vcap_api_debugfs file, so move that implementation into vcap_api, as it is also used by vcap_get_rule. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
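A hedged usage sketch tying the three new calls together; the key field chosen and the vcap_free_rule() cleanup are assumptions:

    /* fetch, inspect, and update a rule by its internal id (sketch) */
    struct vcap_rule *rule = vcap_get_rule(vctrl, rule_id);
    u32 value, mask;
    int err;

    if (IS_ERR(rule))
        return PTR_ERR(rule);
    err = vcap_rule_get_key_u32(rule, VCAP_KF_TYPE, &value, &mask);
    if (!err) {
        /* adjust keys/actions here, then write the rule back */
        err = vcap_mod_rule(rule);
    }
    vcap_free_rule(rule);
    return err;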
2022-12-06  dt-bindings: PCI: qcom: Allow 'dma-coherent' property  (Johan Hovold)
Devices on some PCIe buses may be cache coherent and must be marked as such in the devicetree to avoid data corruption. This is specifically needed on recent Qualcomm platforms like SC8280XP. Link: https://lore.kernel.org/r/20221205094530.12883-1-johan+linaro@kernel.org Signed-off-by: Johan Hovold <johan+linaro@kernel.org> Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org> Acked-by: Rob Herring <robh@kernel.org>
2022-12-06  powerpc/cpuidle: Set CPUIDLE_FLAG_POLLING for snooze state  (Aboorva Devarajan)
During a comparative study of cpuidle governors, it was noticed that the menu governor does not select the CEDE state in some scenarios, even when the sleep duration of the CPU exceeds the target residency of the CEDE idle state. This is because the CPU exits the snooze "polling" state when the snooze time limit is reached in snooze_loop(), which is not a real wakeup; it just means that the polling state selection was not adequate. cpuidle governors rely on the CPUIDLE_FLAG_POLLING flag being set for polling states to handle the condition mentioned above. Hence, set the CPUIDLE_FLAG_POLLING flag for the snooze state (a polling state) on the powerpc arch to make the cpuidle governors work as expected. Reference commits: - Timeout enabled for snooze state: commit 78eaa10f027c ("cpuidle: powernv/pseries: Auto-promotion of snooze to deeper idle state") - commit dc2251bf98c6 ("cpuidle: Eliminate the CPUIDLE_DRIVER_STATE_START symbol") - Fix wakeup stats in governor for polling states: commit 5f26bdceb9c0 ("cpuidle: menu: Fix wakeup statistics updates for polling state") Signed-off-by: Aboorva Devarajan <aboorvad@linux.vnet.ibm.com> Tested-by: Vishal Chourasia <vishalc@linux.vnet.ibm.com> Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.ibm.com> Reviewed-by: Vishal Chourasia <vishalc@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20221114145611.37669-1-aboorvad@linux.vnet.ibm.com
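A minimal sketch of the change, assuming the pseries snooze entry in the cpuidle state table looks roughly like this:

    /* mark snooze as a polling state so governors account for it */
    static struct cpuidle_state dedicated_states[] = {
        {
            .name = "snooze",
            .desc = "snooze",
            .exit_latency = 0,
            .target_residency = 0,
            .flags = CPUIDLE_FLAG_POLLING,  /* the added flag */
            .enter = snooze_loop,
        },
        /* deeper states (CEDE, ...) follow */
    };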