path: root/drivers
2022-01-06  mlxsw: spectrum_acl_bloom_filter: Reorder functions to make the code more aesthetic  (Amit Cohen)
Currently, mlxsw_sp_acl_bf_rule_count_index_get() is implemented before mlxsw_sp_acl_bf_index_get() but is used after it. Adding a new function for Spectrum-4 would make them further apart still. Fix by moving them around. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06  mlxsw: Introduce flex key elements for Spectrum-4  (Amit Cohen)
The Spectrum-4 ASIC will support more virtual routers and local ports compared to the existing ASICs. Therefore, the virtual router and local port ACL key elements need to be increased. Introduce new key elements for Spectrum-4 to be aligned with the elements already used for other Spectrum ASICs. The key blocks layout is the same for Spectrum-4, so use the existing code for encode_block() and clear_block(), just create separate blocks. Note that the size of `VIRT_ROUTER_MSB` is 4 bits in Spectrum-4; therefore declare it using `MLXSW_AFK_ELEMENT_INST_U32()`, in order to be able to set `.avoid_size_check` to true. Otherwise, `mlxsw_afk_blocks_check()` will fail and warn. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06  mlxsw: Rename virtual router flex key element  (Amit Cohen)
In Spectrum-4, the size of the virtual router ACL key element increased from 11 bits to 12 bits. In order to reuse the existing virtual router ACL key element enumerators for Spectrum-4, rename 'VIRT_ROUTER_8_10' and 'VIRT_ROUTER_0_7' to 'VIRT_ROUTER_MSB' and 'VIRT_ROUTER_LSB', respectively. No functional changes intended. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06  dpaa2-switch: check if the port priv is valid  (Ioana Ciornei)
Before accessing the port private structure make sure that there is still a non-NULL pointer there. A NULL pointer access can happen when we are on the remove path, some switch ports are unregistered and some are in the process of unregistering. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
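For illustration, a minimal sketch of the kind of guard this describes; the structure and function names below (example_port_priv, example_ports_teardown) are hypothetical stand-ins, not the actual dpaa2-switch code:

  #include <linux/netdevice.h>

  /* Hypothetical per-port private data; only the field needed for the
   * sketch is shown. */
  struct example_port_priv {
      struct net_device *netdev;
  };

  /* On the remove path some slots may already have been torn down, so
   * bail out early when the priv pointer is NULL instead of
   * dereferencing it. */
  static void example_ports_teardown(struct example_port_priv **ports,
                                     unsigned int num_ports)
  {
      unsigned int i;

      for (i = 0; i < num_ports; i++) {
          struct example_port_priv *port_priv = ports[i];

          if (!port_priv) /* port already unregistered */
              continue;

          unregister_netdev(port_priv->netdev);
      }
  }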
2022-01-06  dpaa2-mac: return -EPROBE_DEFER from dpaa2_mac_open in case the fwnode is not set  (Ioana Ciornei)
We could get into a situation where the fwnode of the parent device is not yet set because its probe didn't finish yet. When this happens, any caller of dpaa2_mac_open() will not have the fwnode available, thus causing problems at PHY connect time. Avoid this by just returning -EPROBE_DEFER from dpaa2_mac_open() when this happens. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
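As a rough sketch of the deferral pattern (the helper name example_mac_open is an assumption; only dev_fwnode() and -EPROBE_DEFER are real kernel APIs):

  #include <linux/device.h>
  #include <linux/errno.h>
  #include <linux/property.h>

  /* If the parent device's fwnode is not populated yet because its probe
   * has not finished, ask the driver core to retry probing us later. */
  static int example_mac_open(struct device *parent)
  {
      struct fwnode_handle *fwnode = dev_fwnode(parent);

      if (!fwnode)
          return -EPROBE_DEFER;

      /* ... continue with fwnode-based lookup and PHY connect ... */
      return 0;
  }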
2022-01-06  dpaa2-mac: bail if the dpmacs fwnode is not found  (Robert-Ionut Alexa)
The parent pointer node handler must be declared with a NULL initializer. Before using it, a check must be performed to make sure that a valid address has been assigned to it. Signed-off-by: Robert-Ionut Alexa <robert-ionut.alexa@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-06  clk: mediatek: add mt7986 clock support  (Sam Shih)
Add MT7986 clock support, including topckgen, apmixedsys, infracfg, and ethernet subsystem clocks. Signed-off-by: Sam Shih <sam.shih@mediatek.com> Link: https://lore.kernel.org/r/20211217121148.6753-4-sam.shih@mediatek.com Reviewed-by: Ryder Lee <ryder.lee@kernel.org> [sboyd@kernel.org: Fix typos in Kconfig, there are more existing typos from where they were copied from of but whatever] Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2022-01-06  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma  (Linus Torvalds)
Pull rdma fixes from Jason Gunthorpe:
 "Last pull for 5.16, the reversion has been known for a while now but didn't get a proper fix in time. Looks like we will have several info-leak bugs to take care of going forward.

  - Revert the patch fixing the DM related crash causing a widespread regression for kernel ULPs. A proper fix just didn't appear this cycle due to the holidays

  - Missing NULL check on alloc in uverbs

  - Double free in rxe error paths

  - Fix a new kernel-infoleak report when forming ah_attr's without GRH's in ucma"

* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
  RDMA/core: Don't infoleak GRH fields
  RDMA/uverbs: Check for null return of kmalloc_array
  Revert "RDMA/mlx5: Fix releasing unallocated memory in dereg MR flow"
  RDMA/rxe: Prevent double freeing rxe_map_set()
2022-01-06  clk: mediatek: clk-gate: Use regmap_{set/clear}_bits helpers  (AngeloGioacchino Del Regno)
Where appropriate, replace calls to regmap_update_bits() with regmap_set_bits() and regmap_clear_bits() for improved readability. Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://lore.kernel.org/r/20220103143712.46675-2-angelogioacchino.delregno@collabora.com Reviewed-by: Chen-Yu Tsai <wenst@chromium.org> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
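The conversion is mechanical; a hedged before/after sketch (the register offset and bit below are placeholders, not MediaTek values):

  #include <linux/bits.h>
  #include <linux/regmap.h>

  #define EXAMPLE_GATE_REG 0x10    /* placeholder register offset */
  #define EXAMPLE_GATE_BIT BIT(3)  /* placeholder clock-gate bit */

  static void example_gate_set_clr(struct regmap *map)
  {
      /* Before: generic read-modify-write helper for set and clear */
      regmap_update_bits(map, EXAMPLE_GATE_REG, EXAMPLE_GATE_BIT, EXAMPLE_GATE_BIT);
      regmap_update_bits(map, EXAMPLE_GATE_REG, EXAMPLE_GATE_BIT, 0);

      /* After: the dedicated helpers state the intent directly */
      regmap_set_bits(map, EXAMPLE_GATE_REG, EXAMPLE_GATE_BIT);
      regmap_clear_bits(map, EXAMPLE_GATE_REG, EXAMPLE_GATE_BIT);
  }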
2022-01-06  clk: mediatek: clk-gate: Shrink by adding clockgating bit check helper  (AngeloGioacchino Del Regno)
Add a clockgating bit check helper and use it in functions mtk_cg_bit_is_cleared(), mtk_cg_bit_is_set() to shrink the file size. Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://lore.kernel.org/r/20220103143712.46675-1-angelogioacchino.delregno@collabora.com Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2022-01-06  clk: x86: Fix clk_gate_flags for RV_CLK_GATE  (Ajit Kumar Pandey)
In newer SoCs we have to clear the bit to disable the 48MHz oscillator clock gate. Remove the CLK_GATE_SET_TO_DISABLE flag for proper enable and disable of the 48MHz clock. Signed-off-by: Ajit Kumar Pandey <AjitKumar.Pandey@amd.com> Reviewed-by: Mario Limonciello <Mario.Limonciello@amd.com> Link: https://lore.kernel.org/r/20211212180527.1641362-6-AjitKumar.Pandey@amd.com Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2022-01-06  clk: x86: Use dynamic con_id string during clk registration  (Ajit Kumar Pandey)
Replace the hard-coded con_id string with fch_data->name. We have clk consumers looking up with different clock names, hence use a dynamic con_id string during clk lookup registration. fch_data->name will be initialized in the ACPI driver based on the fmw property value. Signed-off-by: Ajit Kumar Pandey <AjitKumar.Pandey@amd.com> Reviewed-by: Mario Limonciello <Mario.Limonciello@amd.com> Link: https://lore.kernel.org/r/20211212180527.1641362-5-AjitKumar.Pandey@amd.com Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2022-01-06  ACPI: APD: Add a fmw property clk-name  (Ajit Kumar Pandey)
Add a new device property to fetch clk-name from firmware. Signed-off-by: Ajit Kumar Pandey <AjitKumar.Pandey@amd.com> Reviewed-by: Mario Limonciello <Mario.Limonciello@amd.com> Link: https://lore.kernel.org/r/20211212180527.1641362-4-AjitKumar.Pandey@amd.com Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2022-01-06  drivers: acpi: acpi_apd: Remove unused device property "is-rv"  (Ajit Kumar Pandey)
The "is-rv" device property was initially added for 48MHz fixed clock support on the Raven (RV) architecture. It's unused now, as we moved to PCI device_id based selection to extend such support to other architectures. This change removes the unused code from the ACPI driver. Signed-off-by: Ajit Kumar Pandey <AjitKumar.Pandey@amd.com> Reviewed-by: Mario Limonciello <Mario.Limonciello@amd.com> Link: https://lore.kernel.org/r/20211212180527.1641362-3-AjitKumar.Pandey@amd.com Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2022-01-06  x86: clk: clk-fch: Add support for newer family of AMD's SOC  (Ajit Kumar Pandey)
The FCH controller clock configuration differs slightly across AMD's SOC architectures. Newer SOC families only support a 48MHz fixed clock, while the Stoney SOC family has a clk_mux to choose between a 48MHz and a 25MHz clk. At present, fixed clk support is only enabled for the RV architecture using the "is-rv" device property initialized from the boot loader. This limits 48MHz fixed clock gate support to the RV platform unless we add a similar device property in the boot loader for other architectures. Add a pci_device_id table with the Stoney platform id and replace the "is-rv" device property check with a PCI id match, adding clk mux support with 25MHz and 48MHz clocks based on the clk mux selection. This enables 48MHz fixed fch clock support by default on all newer SOCs except Stoney. Also replace RV with FIXED as a generic naming convention across all platforms and update the module description. Signed-off-by: Ajit Kumar Pandey <AjitKumar.Pandey@amd.com> Reviewed-by: Mario Limonciello <Mario.Limonciello@amd.com> Link: https://lore.kernel.org/r/20211212180527.1641362-2-AjitKumar.Pandey@amd.com Signed-off-by: Stephen Boyd <sboyd@kernel.org>
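A sketch of the selection logic described above, under stated assumptions: the table layout mirrors common PCI-ID matching, but the Stoney device ID below is a placeholder rather than the real value, and the function names are made up:

  #include <linux/pci.h>

  #define EXAMPLE_STONEY_DEVICE_ID 0x0000 /* placeholder, not the real ID */

  static const struct pci_device_id example_stoney_ids[] = {
      { PCI_DEVICE(PCI_VENDOR_ID_AMD, EXAMPLE_STONEY_DEVICE_ID) },
      { }
  };

  /* Instead of a boot-loader supplied "is-rv" property, decide between the
   * fixed 48MHz clock and the 48MHz/25MHz mux by matching the platform's
   * PCI device against the Stoney ID table. */
  static bool example_use_clk_mux(struct pci_dev *pdev)
  {
      return pci_match_id(example_stoney_ids, pdev) != NULL;
  }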
2022-01-06  clk: ingenic: Add MDMA and BDMA clocks  (Paul Cercueil)
The Ingenic JZ4760 and JZ4770 both have an extra DMA core named BDMA dedicated to the NAND and BCH controller, but which can also do memory-to-memory transfers. The JZ4760 additionally has a DMA core named MDMA dedicated to memory-to-memory transfers. The programming manual for the JZ4770 does have a bit for an MDMA clock, but does not seem to have the hardware wired in. Add the BDMA and MDMA clocks to the JZ4760 CGU code, and the BDMA clock to the JZ4770 code, so that the BDMA and MDMA controllers can be used. Signed-off-by: Paul Cercueil <paul@crapouillou.net> Link: https://lore.kernel.org/r/20211220193319.114974-3-paul@crapouillou.net Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2022-01-06  clk: bm1880: remove kfrees on static allocations  (Conor Dooley)
bm1880_clk_unregister_pll & bm1880_clk_unregister_div both try to free statically allocated variables, so remove those kfrees.

For example, if we take L703 kfree(div_hw):
 - div_hw is a bm1880_div_hw_clock pointer
 - in bm1880_clk_register_plls this is pointed to an element of arg1: struct bm1880_div_hw_clock *clks
 - in the probe, where bm1880_clk_register_plls is called, arg1 is bm1880_div_clks, defined on L371: static struct bm1880_div_hw_clock bm1880_div_clks[]

Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Fixes: 1ab4601da55b ("clk: Add common clock driver for BM1880 SoC") Link: https://lore.kernel.org/r/20211223154244.1024062-1-conor.dooley@microchip.com Signed-off-by: Stephen Boyd <sboyd@kernel.org>
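A condensed illustration of the bug class (shortened names, not the bm1880 source): passing a pointer into a file-scope static array to kfree() is invalid, so the fix is simply to drop the call:

  #include <linux/clk-provider.h>
  #include <linux/slab.h>

  struct example_div_hw_clock {
      struct clk_hw hw;
      /* ... divider description ... */
  };

  /* Statically allocated descriptors, analogous to bm1880_div_clks[] */
  static struct example_div_hw_clock example_div_clks[2];

  static void example_unregister_div(struct example_div_hw_clock *div_hw)
  {
      clk_hw_unregister(&div_hw->hw);
      /* BUG: div_hw points into the static example_div_clks[] array and
       * was never kmalloc'ed, so it must not be freed. */
      /* kfree(div_hw); */
  }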
2022-01-06Revert "net/mlx5: Add retry mechanism to the command entry index allocation"Moshe Shemesh
This reverts commit 410bd754cd73c4a2ac3856d9a03d7b08f9c906bf. The reverted commit had added a retry mechanism to the command entry index allocation. The previous patch ensures that there is a free command entry index once the command work handler holds the command semaphore. Thus the retry mechanism is not needed. Fixes: 410bd754cd73 ("net/mlx5: Add retry mechanism to the command entry index allocation") Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Set command entry semaphore up once got index free  (Moshe Shemesh)
Avoid a race where command work handler may fail to allocate command entry index, by holding the command semaphore down till command entry index is being freed. Fixes: 410bd754cd73 ("net/mlx5: Add retry mechanism to the command entry index allocation") Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: Sync VXLAN udp ports during uplink representor profile change  (Maor Dickman)
Currently, during NIC profile disablement, all VXLAN udp ports offloaded to the HW are flushed, and during its enablement the driver sends a notification to the stack to inform the core that the entire UDP tunnel port state has been lost. The uplink representor doesn't have the same behavior, which can leave the VXLAN udp ports offload in a bad state while moving between modes while a VXLAN interface exists. Fixed by aligning the uplink representor profile behavior to the NIC behavior. Fixes: 84db66124714 ("net/mlx5e: Move set vxlan nic info to profile init") Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Fix access to sf_dev_table on allocation failure  (Shay Drory)
Even when SF devices are supported, the SF device table allocation can still fail. In such case mlx5_sf_dev_supported still reports true, but SF device table is invalid. This can result in NULL table access. Hence, fix it by adding NULL table check. Fixes: 1958fc2f0712 ("net/mlx5: SF, Add auxiliary device driver") Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: Fix matching on modified inner ip_ecn bits  (Paul Blakey)
Tunnel devices follow RFC 6040, and during decapsulation the inner ip_ecn might change depending on the inner and outer ip_ecn as follows:

 +---------+----------------------------------------+
 |Arriving |          Arriving Outer Header          |
 |  Inner  +---------+---------+---------+----------+
 |  Header | Not-ECT |  ECT(0) |  ECT(1) |    CE    |
 +---------+---------+---------+---------+----------+
 | Not-ECT | Not-ECT | Not-ECT | Not-ECT |  <drop>  |
 |  ECT(0) |  ECT(0) |  ECT(0) |  ECT(1) |    CE*   |
 |  ECT(1) |  ECT(1) |  ECT(1) | ECT(1)* |    CE*   |
 |    CE   |    CE   |    CE   |    CE   |    CE    |
 +---------+---------+---------+---------+----------+

Cells marked * above are changed from the original inner packet ip_ecn value.

Tc then matches on the modified inner ip_ecn, but hw offload, which matches the inner ip_ecn value before decap, will fail. Fix that by mapping all the cases of outer and inner ip_ecn matching, and only supporting cases where we know the inner ip_ecn wouldn't be changed by decap, or, in the outer ip_ecn=CE case, where the inner ip_ecn doesn't matter. Fixes: bcef735c59f2 ("net/mlx5e: Offload TC matching on tos/ttl for ip tunnels") Signed-off-by: Paul Blakey <paulb@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Eli Cohen <elic@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
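For reference, a hedged sketch of the RFC 6040 decapsulation rule that the table encodes (the INET_ECN_* constants are from include/net/inet_ecn.h; the helper name is made up):

  #include <net/inet_ecn.h>

  /* Return the inner ECN bits after decap, or -1 for the drop case.
   * The cells where the result differs from the arriving inner value are
   * exactly the ones hardware offload cannot match before decap. */
  static int example_decap_ecn(u8 outer, u8 inner)
  {
      if (outer == INET_ECN_CE) {
          if (inner == INET_ECN_NOT_ECT)
              return -1;              /* drop */
          return INET_ECN_CE;         /* ECT(0)/ECT(1)/CE -> CE */
      }
      if (outer == INET_ECN_ECT_1 && inner == INET_ECN_ECT_0)
          return INET_ECN_ECT_1;      /* ECT(0) promoted to ECT(1) */
      return inner;                   /* otherwise unchanged */
  }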
2022-01-06Revert "net/mlx5e: Block offload of outer header csum for GRE tunnel"Aya Levin
This reverts commit 54e1217b90486c94b26f24dcee1ee5ef5372f832. Although the NIC doesn't support offload of outer header CSUM, using gso_partial_features allows offloading the tunnel's segmentation. The driver relies on the stack CSUM calculation of the outer header. For this, NETIF_F_GSO_GRE_CSUM must be a member of the device's features. Fixes: 54e1217b9048 ("net/mlx5e: Block offload of outer header csum for GRE tunnel") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06Revert "net/mlx5e: Block offload of outer header csum for UDP tunnels"Aya Levin
This reverts commit 6d6727dddc7f93fcc155cb8d0c49c29ae0e71122. Although the NIC doesn't support offload of outer header CSUM, using gso_partial_features allows offloading the tunnel's segmentation. The driver relies on the stack CSUM calculation of the outer header. For this, NETIF_F_GSO_UDP_TUNNEL_CSUM must be a member of the device's features. Fixes: 6d6727dddc7f ("net/mlx5e: Block offload of outer header csum for UDP tunnels") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
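The feature wiring being restored by these two reverts looks roughly like this generic sketch (not the exact mlx5e code; the GRE case uses NETIF_F_GSO_GRE_CSUM in the same way):

  #include <linux/netdevice.h>

  /* Advertise tunnel segmentation even though the NIC cannot compute the
   * outer checksum itself: marking the *_CSUM variant as GSO-partial lets
   * the stack fill in the outer checksum while the device only segments. */
  static void example_set_udp_tunnel_features(struct net_device *netdev)
  {
      netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
                             NETIF_F_GSO_UDP_TUNNEL_CSUM;
      netdev->gso_partial_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
      netdev->hw_features |= NETIF_F_GSO_PARTIAL;
  }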
2022-01-06  net/mlx5e: Don't block routes with nexthop objects in SW  (Maor Dickman)
Routes with nexthop objects are currently not supported by multipath offload and any attempt to use them is blocked; however, this also blocks adding SW routes with nexthops. Resolve this by returning NOTIFY_DONE instead of an error, which will allow such a route to be created in SW but not offloaded. This fix also solves an issue which blocked adding such routes on different devices due to a missing check whether the route FIB device is one of the multipath devices. Fixes: 6a87afc072c3 ("mlx5: Fail attempts to use routes with nexthop objects") Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
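The shape of the fix, sketched with simplified handler plumbing (fib_info::nh is the nexthop-object pointer; the function name is an assumption):

  #include <linux/notifier.h>
  #include <net/ip_fib.h>

  /* In the FIB notifier callback: a route built around a nexthop object
   * (fi->nh != NULL) is left to software instead of being rejected. */
  static int example_fib_event(struct fib_entry_notifier_info *fen_info)
  {
      if (fen_info->fi->nh)
          return NOTIFY_DONE;   /* create in SW, skip offload */

      /* ... proceed with multipath offload checks ... */
      return NOTIFY_OK;
  }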
2022-01-06  net/mlx5e: Fix wrong usage of fib_info_nh when routes with nexthop objects are used  (Maor Dickman)
Creating routes with nexthop objects while in switchdev mode leads to access to un-allocated memory and triggers the below call trace due to hitting a WARN_ON. This is caused by illegal usage of fib_info_nh in the TC tunnel FIB event handling to resolve the FIB device while the fib_info was built with a nexthop. Fixed by ignoring attempts to use nexthop objects with routes until support can be properly added.

WARNING: CPU: 1 PID: 1724 at include/net/nexthop.h:468 mlx5e_tc_tun_fib_event+0x448/0x570 [mlx5_core]
CPU: 1 PID: 1724 Comm: ip Not tainted 5.15.0_for_upstream_min_debug_2021_11_09_02_04 #1
RIP: 0010:mlx5e_tc_tun_fib_event+0x448/0x570 [mlx5_core]
RSP: 0018:ffff8881349f7910 EFLAGS: 00010202
RAX: ffff8881492f1980 RBX: ffff8881349f79e8 RCX: 0000000000000000
RDX: ffff8881349f79e8 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff8881349f7950 R08: 00000000000000fe R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88811e9d0000
R13: ffff88810eb62000 R14: ffff888106710268 R15: 0000000000000018
FS:  00007f1d5ca6e800(0000) GS:ffff88852c880000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffedba44ff8 CR3: 0000000129808004 CR4: 0000000000370ea0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 atomic_notifier_call_chain+0x42/0x60
 call_fib_notifiers+0x21/0x40
 fib_table_insert+0x479/0x6d0
 ? try_charge_memcg+0x480/0x6d0
 inet_rtm_newroute+0x65/0xb0
 rtnetlink_rcv_msg+0x2af/0x360
 ? page_add_file_rmap+0x13/0x130
 ? do_set_pte+0xcd/0x120
 ? rtnl_calcit.isra.0+0x120/0x120
 netlink_rcv_skb+0x4e/0xf0
 netlink_unicast+0x1ee/0x2b0
 netlink_sendmsg+0x22e/0x460
 sock_sendmsg+0x33/0x40
 ____sys_sendmsg+0x1d1/0x1f0
 ___sys_sendmsg+0xab/0xf0
 ? __mod_memcg_lruvec_state+0x40/0x60
 ? __mod_lruvec_page_state+0x95/0xd0
 ? page_add_new_anon_rmap+0x4e/0xf0
 ? __handle_mm_fault+0xec6/0x1470
 __sys_sendmsg+0x51/0x90
 ? internal_get_user_pages_fast+0x480/0xa10
 do_syscall_64+0x3d/0x90
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: 8914add2c9e5 ("net/mlx5e: Handle FIB events to update tunnel endpoint device") Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: Fix nullptr on deleting mirroring rule  (Dima Chumak)
Deleting a Tc rule with multiple outputs, one of which is an internal port, like this one:

tc filter del dev enp8s0f0_0 ingress protocol ip pref 5 flower \
    dst_mac 0c:42:a1:d1:d0:88 \
    src_mac e4:ea:09:08:00:02 \
    action tunnel_key set \
        src_ip 0.0.0.0 \
        dst_ip 7.7.7.8 \
        id 8 \
        dst_port 4789 \
    action mirred egress mirror dev vxlan_sys_4789 pipe \
    action mirred egress redirect dev enp8s0f0_1

Triggers a call trace:

BUG: kernel NULL pointer dereference, address: 0000000000000230
RIP: 0010:del_sw_hw_rule+0x2b/0x1f0 [mlx5_core]
Call Trace:
 tree_remove_node+0x16/0x30 [mlx5_core]
 mlx5_del_flow_rules+0x51/0x160 [mlx5_core]
 __mlx5_eswitch_del_rule+0x4b/0x170 [mlx5_core]
 mlx5e_tc_del_fdb_flow+0x295/0x550 [mlx5_core]
 mlx5e_flow_put+0x1f/0x70 [mlx5_core]
 mlx5e_delete_flower+0x286/0x390 [mlx5_core]
 tc_setup_cb_destroy+0xac/0x170
 fl_hw_destroy_filter+0x94/0xc0 [cls_flower]
 __fl_delete+0x15e/0x170 [cls_flower]
 fl_delete+0x36/0x80 [cls_flower]
 tc_del_tfilter+0x3a6/0x6e0
 rtnetlink_rcv_msg+0xe5/0x360
 ? rtnl_calcit.isra.0+0x110/0x110
 netlink_rcv_skb+0x46/0x110
 netlink_unicast+0x16b/0x200
 netlink_sendmsg+0x202/0x3d0
 sock_sendmsg+0x33/0x40
 ____sys_sendmsg+0x1c3/0x200
 ? copy_msghdr_from_user+0xd6/0x150
 ___sys_sendmsg+0x88/0xd0
 ? ___sys_recvmsg+0x88/0xc0
 ? do_futex+0x10c/0x460
 __sys_sendmsg+0x59/0xa0
 do_syscall_64+0x48/0x140
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fix by disabling offloading for flows matching esw_is_chain_src_port_rewrite() which have more than one output. Fixes: 10742efc20a4 ("net/mlx5e: VF tunnel TX traffic offloading") Signed-off-by: Dima Chumak <dchumak@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: Fix page DMA map/unmap attributes  (Aya Levin)
Driver initiates DMA sync, hence it may skip CPU sync. Add DMA_ATTR_SKIP_CPU_SYNC as input attribute both to dma_map_page and dma_unmap_page to avoid redundant sync with the CPU. When forcing the device to work with SWIOTLB, the extra sync might cause data corruption. The driver unmaps the whole page while the hardware used just a part of the bounce buffer. So syncing overrides the entire page with bounce buffer that only partially contains real data. Fixes: bc77b240b3c5 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE") Fixes: db05815b36cb ("net/mlx5e: Add XSK zero-copy support") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
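The mapping pattern described above, as a hedged sketch (generic names, not the driver's RX path verbatim):

  #include <linux/dma-mapping.h>
  #include <linux/mm.h>

  /* Map an RX page without an implicit CPU sync; the driver later syncs
   * exactly the fragment it hands to the stack, so letting unmap sync the
   * whole page could copy back a mostly-stale bounce buffer under SWIOTLB
   * and corrupt the data. */
  static dma_addr_t example_map_rx_page(struct device *dev, struct page *page)
  {
      return dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE,
                                DMA_ATTR_SKIP_CPU_SYNC);
  }

  static void example_unmap_rx_page(struct device *dev, dma_addr_t addr)
  {
      dma_unmap_page_attrs(dev, addr, PAGE_SIZE, DMA_FROM_DEVICE,
                           DMA_ATTR_SKIP_CPU_SYNC);
  }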
2022-01-07  parisc: pdc_stable: use default_groups in kobj_type  (Greg Kroah-Hartman)
There are currently 2 ways to create a set of sysfs files for a kobj_type, through the default_attrs field, and the default_groups field. Move the parisc pdc_stable sysfs code to use default_groups field which has been the preferred way since aa30f47cf666 ("kobject: Add support for default attribute groups to kobj_type") so that we can soon get rid of the obsolete default_attrs field. Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Helge Deller <deller@gmx.de> Cc: linux-parisc@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Helge Deller <deller@gmx.de>
2022-01-06  net/mlx5e: Add recovery flow in case of error CQE  (Gal Pressman)
The rep legacy RQ completion handling was missing the appropriate handling of error CQEs (dump the CQE and queue a recover work), fix it by calling trigger_report() when needed. Since all CQE handling flows do the exact same error CQE handling, extract it to a common helper function. Signed-off-by: Gal Pressman <gal@nvidia.com> Reviewed-by: Aya Levin <ayal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: TC, Remove redundant error logging  (Roi Dayan)
Remove redundant and trivial error logging when trying to offload mirred device with unsupported devices. Using OVS could hit those a lot and the errors are still logged in extack. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Maor Dickman <maord@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: Refactor set_pflag_cqe_based_moder  (Saeed Mahameed)
Rearrange the code and use cqe_mode_to_period_mode() helper. Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: Move HW-GRO and CQE compression check to fix features flow  (Gal Pressman)
Feature dependencies should be resolved in fix features rather than in set features flow. Move the check that disables HW-GRO in case CQE compression is enabled from set_feature_hw_gro() to mlx5e_fix_features(). Signed-off-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
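The general pattern, sketched against the core netdev callbacks (the CQE-compression query below is a placeholder for the driver's private-flag lookup):

  #include <linux/netdevice.h>

  static bool example_cqe_compression_enabled(struct net_device *netdev)
  {
      return false; /* placeholder for the real private-flag check */
  }

  /* .ndo_fix_features: resolve dependencies here, so conflicting requests
   * are adjusted up front instead of being rejected in .ndo_set_features. */
  static netdev_features_t example_fix_features(struct net_device *netdev,
                                                netdev_features_t features)
  {
      if ((features & NETIF_F_GRO_HW) &&
          example_cqe_compression_enabled(netdev)) {
          netdev_warn(netdev, "Disabling HW-GRO, not supported with CQE compression\n");
          features &= ~NETIF_F_GRO_HW;
      }

      return features;
  }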
2022-01-06  net/mlx5e: Fix feature check per profile  (Aya Levin)
Remove a redundant space when constructing the feature's enum. Validate against the intended enum value. Fixes: 6c72cb05d4b8 ("net/mlx5e: Use bitmap field for profile features") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: Unblock setting vid 0 for VF in case PF isn't eswitch manager  (Maor Dickman)
When using libvirt to passthrough a VF to a VM, it will always set the VF vlan to 0 even if the user didn't request it. This will cause libvirt to fail to boot in case the PF isn't the eswitch owner. An example of such a case is the DPU host PF, which isn't the eswitch manager, so any attempt to passthrough one of its VFs using libvirt will fail. Fix it by not returning an error in case set VF vlan is called with vid 0. Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5e: Expose FEC counters via ethtool  (Lama Kayal)
Add FEC counters' statistics of corrected_blocks and uncorrectable_blocks, along with their lanes via ethtool. HW supports corrected_blocks and uncorrectable_blocks counters both for RS-FEC mode and FC-FEC mode. In FC mode these counters are accumulated per lane, while in RS mode the correction method crosses lanes, thus only total corrected_blocks and uncorrectable_blocks are reported in this mode. Signed-off-by: Lama Kayal <lkayal@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Update log_max_qp value to FW max capability  (Maher Sanalla)
log_max_qp in driver's default profile #2 was set to 18, but FW actually supports 17 at the most - a situation that led to the concerning print when the driver is loaded: "log_max_qp value in current profile is 18, changing to HCA capabaility limit (17)" The expected behavior from mlx5_profile #2 is to match the maximum FW capability in regards to log_max_qp. Thus, log_max_qp in profile #2 is initialized to a defined static value (0xff) - which basically means that when loading this profile, log_max_qp value will be what the currently installed FW supports at most. Signed-off-by: Maher Sanalla <msanalla@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
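A sketch of the sentinel-and-clamp idea the message describes (the macro and function names are illustrative, and the exact mlx5 capability accessor used by the real patch may differ):

  #include <linux/minmax.h>
  #include <linux/mlx5/driver.h>

  /* Illustrative sentinel: "use whatever the firmware supports at most". */
  #define EXAMPLE_LOG_MAX_SUPPORTED_QP 0xff

  static u8 example_resolve_log_max_qp(struct mlx5_core_dev *dev, u8 prof_val)
  {
      /* With the sentinel, take the FW capability directly; otherwise
       * never let the profile value exceed the capability. */
      if (prof_val == EXAMPLE_LOG_MAX_SUPPORTED_QP)
          return MLX5_CAP_GEN(dev, log_max_qp);

      return min_t(u8, prof_val, MLX5_CAP_GEN(dev, log_max_qp));
  }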
2022-01-06  net/mlx5: SF, Use all available cpu for setting cpu affinity  (Shay Drory)
Currently all SFs are using the same CPUs. Spread the SFs over CPUs in a round-robin manner in order to achieve better distribution of the SFs over the available CPUs. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Introduce API for bulk request and release of IRQs  (Shay Drory)
Currently, IRQs are requested one by one. Balancing the spread of IRQs among cpus with such a scheme requires remembering the cpu mask of the cpus used for a given device. This complicates the IRQ allocation scheme in the subsequent patch. Hence, prepare the code for bulk IRQ allocation. This enables spreading IRQs among cpus in the subsequent patch. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Split irq_pool_affinity logic to new file  (Shay Drory)
The downstream patches add more functionality to irq_pool_affinity. Move the irq_pool_affinity logic to a new file in order to ease the coding and maintenance of it. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Move affinity assignment into irq_request  (Shay Drory)
Move affinity binding of the IRQ to irq_request function in order to bind the IRQ before inserting it to the xarray. After this change, the IRQ is ready for use when inserted to the xarray. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Introduce control IRQ request API  (Shay Drory)
Currently, the IRQ layer has a separate flow for ctrl and comp IRQs, and the distinction between ctrl and comp IRQs is done in the IRQ layer. In order to ease the coding and maintenance of the IRQ layer, introduce a new API for requesting control IRQs - mlx5_ctrl_irq_request(struct mlx5_core_dev *dev). Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: mlx5e_hv_vhca_stats_create return type to void  (Saeed Mahameed)
Callers of this function ignore its return value and, as reported by Wang Qing, one of the return paths returns positive values. Since the return value is ignored anyway, void out the return type of the function. Reported-by: Wang Qing <wangqing@vivo.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-07  random: don't reset crng_init_cnt on urandom_read()  (Jann Horn)
At the moment, urandom_read() (used for /dev/urandom) resets crng_init_cnt to zero when it is called at crng_init<2. This is inconsistent: We do it for /dev/urandom reads, but not for the equivalent getrandom(GRND_INSECURE). (And worse, as Jason pointed out, we're only doing this as long as maxwarn>0.)

crng_init_cnt is only read in crng_fast_load(); it is relevant at crng_init==0 for determining when to switch to crng_init==1 (and where in the RNG state array to write).

As far as I understand:
 - crng_init==0 means "we have nothing, we might just be returning the same exact numbers on every boot on every machine, we don't even have non-cryptographic randomness; we should shove every bit of entropy we can get into the RNG immediately"
 - crng_init==1 means "well we have something, it might not be cryptographic, but at least we're not gonna return the same data every time or whatever, it's probably good enough for TCP and ASLR and stuff; we now have time to build up actual cryptographic entropy in the input pool"
 - crng_init==2 means "this is supposed to be cryptographically secure now, but we'll keep adding more entropy just to be sure".

The current code means that if someone is pulling data from /dev/urandom fast enough at crng_init==0, we'll keep resetting crng_init_cnt, and we'll never make forward progress to crng_init==1. It seems to be intended to prevent an attacker from bruteforcing the contents of small individual RNG inputs on the way from crng_init==0 to crng_init==1, but that's misguided; crng_init==1 isn't supposed to provide proper cryptographic security anyway, RNG users who care about getting secure RNG output have to wait until crng_init==2.

This code was inconsistent, and it probably made things worse - just get rid of it. Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: avoid superfluous call to RDRAND in CRNG extraction  (Jason A. Donenfeld)
RDRAND is not fast. RDRAND is actually quite slow. We've known this for a while, which is why functions like get_random_u{32,64} were converted to use batching of our ChaCha-based CRNG instead. Yet CRNG extraction still includes a call to RDRAND, in the hot path of every call to get_random_bytes(), /dev/urandom, and getrandom(2). This call to RDRAND here seems quite superfluous. CRNG is already extracting things based on a 256-bit key, based on good entropy, which is then reseeded periodically, updated, backtrack-mutated, and so forth. The CRNG extraction construction is something that we're already relying on to be secure and solid. If it's not, that's a serious problem, and it's unlikely that mixing in a measly 32 bits from RDRAND is going to alleviate things. And in the case where the CRNG doesn't have enough entropy yet, we're already initializing the ChaCha key row with RDRAND in crng_init_try_arch_early(). Removing the call to RDRAND improves performance on an i7-11850H by 370%. In other words, the vast majority of the work done by extract_crng() prior to this commit was devoted to fetching 32 bits of RDRAND. Reviewed-by: Theodore Ts'o <tytso@mit.edu> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: early initialization of ChaCha constants  (Dominik Brodowski)
Previously, the ChaCha constants for the primary pool were only initialized in crng_initialize_primary(), called by rand_initialize(). However, some randomness is actually extracted from the primary pool beforehand, e.g. by kmem_cache_create(). Therefore, statically initialize the ChaCha constants for the primary pool. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: <linux-crypto@vger.kernel.org> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: use IS_ENABLED(CONFIG_NUMA) instead of ifdefs  (Jason A. Donenfeld)
Rather than an awkward combination of ifdefs and __maybe_unused, we can ensure more source gets parsed, regardless of the configuration, by using IS_ENABLED for the CONFIG_NUMA conditional code. This makes things cleaner and easier to follow. I've confirmed that on !CONFIG_NUMA, we don't wind up with excess code by accident; the generated object file is the same. Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
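The idiom in general form, as a generic example rather than the actual random.c hunk:

  #include <linux/kconfig.h>

  /* Old style: the body is invisible to the compiler when CONFIG_NUMA is
   * off, so it can silently bit-rot. */
  static int example_pick_node_ifdef(int node)
  {
  #ifdef CONFIG_NUMA
      if (node >= 0)
          return node;
  #endif
      return 0;
  }

  /* Preferred: always parsed, compiled out as dead code when the option
   * is disabled, with identical object code in the disabled case. */
  static int example_pick_node(int node)
  {
      if (IS_ENABLED(CONFIG_NUMA) && node >= 0)
          return node;
      return 0;
  }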
2022-01-07  random: harmonize "crng init done" messages  (Dominik Brodowski)
We print out "crng init done" for !TRUST_CPU, so we should also print out the same for TRUST_CPU. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: mix bootloader randomness into pool  (Jason A. Donenfeld)
If we're trusting bootloader randomness, crng_fast_load() is called by add_hwgenerator_randomness(), which sets us to crng_init==1. However, usually it is only called once for an initial 64-byte push, so bootloader entropy will not mix any bytes into the input pool. So it's conceivable that crng_init==1 when crng_initialize_primary() is called later, but then the input pool is empty. When that happens, the crng state key will be overwritten with extracted output from the empty input pool. That's bad. In contrast, if we're not trusting bootloader randomness, we call crng_slow_load() *and* we call mix_pool_bytes(), so that later crng_initialize_primary() isn't drawing on nothing. In order to prevent crng_initialize_primary() from extracting an empty pool, have the trusted bootloader case mirror that of the untrusted bootloader case, mixing the input into the pool. [linux@dominikbrodowski.net: rewrite commit message] Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: do not throw away excess input to crng_fast_load  (Jason A. Donenfeld)
When crng_fast_load() is called by add_hwgenerator_randomness(), we currently will advance to crng_init==1 once we've acquired 64 bytes, and then throw away the rest of the buffer. Usually, that is not a problem: When add_hwgenerator_randomness() gets called via EFI or DT during setup_arch(), there won't be any IRQ randomness. Therefore, the 64 bytes passed by EFI exactly matches what is needed to advance to crng_init==1. Usually, DT seems to pass 64 bytes as well -- with one notable exception being kexec, which hands over 128 bytes of entropy to the kexec'd kernel. In that case, we'll advance to crng_init==1 once 64 of those bytes are consumed by crng_fast_load(), but won't continue onward feeding in bytes to progress to crng_init==2. This commit fixes the issue by feeding any leftover bytes into the next phase in add_hwgenerator_randomness(). [linux@dominikbrodowski.net: rewrite commit message] Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
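Schematically, the change makes the caller consume the loader's return value instead of discarding the tail; the names below are hypothetical stand-ins for the random.c internals, whose real signatures differ in detail:

  #include <linux/types.h>

  size_t example_fast_load(const u8 *buf, size_t len); /* returns bytes consumed */
  void example_mix_pool_bytes(const u8 *buf, size_t len);

  /* Feed hardware/bootloader bytes: let the fast loader take what it needs
   * to reach crng_init==1, then push the leftover into the input pool
   * instead of throwing it away. */
  static void example_add_hwgenerator_randomness(const u8 *buf, size_t len)
  {
      size_t used = example_fast_load(buf, len);

      if (used < len)
          example_mix_pool_bytes(buf + used, len - used);
  }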