2021-08-20  net: qrtr: fix another OOB Read in qrtr_endpoint_post  (Xiaolong Huang)

    This check was incomplete; it did not consider the case where size is 0:

        if (len != ALIGN(size, 4) + hdrlen)
                goto err;

    If size from qrtr_hdr is 0, the result of ALIGN(size, 4) will be 0. In
    the case of len == hdrlen and size == 0 in the header, this check won't
    fail, and

        if (cb->type == QRTR_TYPE_NEW_SERVER) {
                /* Remote node endpoint can bridge other distant nodes */
                const struct qrtr_ctrl_pkt *pkt = data + hdrlen;

                qrtr_node_assign(node, le32_to_cpu(pkt->server.node));
        }

    will also read out of bounds from data, which is an hdrlen-sized
    allocated block.

    Fixes: 194ccc88297a ("net: qrtr: Support decoding incoming v2 packets")
    Fixes: ad9d24c9429e ("net: qrtr: fix OOB Read in qrtr_endpoint_post")
    Signed-off-by: Xiaolong Huang <butterflyhuangxx@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

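    A minimal sketch of the strengthened check, reusing the variables
    quoted above; rejecting size == 0 closes the hole:

        if (!size || len != ALIGN(size, 4) + hdrlen)
                goto err;
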
2021-08-20  Merge branch 'ocelot-vlan'  (David S. Miller)

    Vladimir Oltean says:

    ====================
    Small ocelot VLAN improvements

    This small series propagates some VLAN restrictions via netlink extack
    and creates some helper functions instead of open-coding VLAN table
    manipulations from multiple places.

    This is split from the larger "DSA FDB isolation" series, hence the v2
    tag:
    https://patchwork.kernel.org/project/netdevbpf/cover/20210818120150.892647-1-vladimir.oltean@nxp.com/
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  net: mscc: ocelot: use helpers for port VLAN membership  (Vladimir Oltean)

    This is a mostly cosmetic patch that creates some helpers for accessing
    the VLAN table. These helpers are also a bit more careful in that they
    do not modify the ocelot->vlan_mask unless the hardware operation
    succeeded. Not all callers check the return value (the init code
    doesn't), but anyway.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

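    A rough sketch of the helper pattern described above; the names are
    patterned on the driver but should be treated as assumptions rather
    than the exact upstream code:

        static int ocelot_vlan_member_set(struct ocelot *ocelot,
                                          u32 vlan_mask, u16 vid)
        {
                int err;

                err = ocelot_vlant_set_mask(ocelot, vid, vlan_mask);
                if (err)
                        return err;

                /* Update the cached mask only after the hardware write
                 * succeeded. */
                ocelot->vlan_mask[vid] = vlan_mask;
                return 0;
        }
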
2021-08-20  net: mscc: ocelot: transmit the VLAN filtering restrictions via extack  (Vladimir Oltean)

    We need to transmit more restrictions in future patches; convert this
    one to netlink extack.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20net: mscc: ocelot: transmit the "native VLAN" error via extackVladimir Oltean
We need to reject some more configurations in future patches, convert the existing one to netlink extack. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-20  Merge branch 'ocelot-phylink-fixes'  (David S. Miller)

    Vladimir Oltean says:

    ====================
    Ocelot phylink fixes

    This series addresses a regression reported by Horatiu which was
    introduced by the ocelot conversion to phylink: there are broken device
    trees in the wild, and the driver fails to probe the entire switch when
    a port fails to probe, which it previously did not do. Continue probing
    even when some ports fail to initialize properly.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  net: mscc: ocelot: allow probing to continue with ports that fail to register  (Vladimir Oltean)

    The existing ocelot device trees, like ocelot_pcb123.dts for example,
    have SERDES ports (ports 4 and higher) that do not have
    status = "disabled"; but on the other hand do not have a phy-handle or
    a fixed-link either. So from the perspective of phylink, they have
    broken DT bindings.

    Since the blamed commit, probing for the entire switch will fail when
    such a device tree binding is encountered on a port. There used to be
    this piece of code which skipped ports without a phy-handle:

        phy_node = of_parse_phandle(portnp, "phy-handle", 0);
        if (!phy_node)
                continue;

    but now it is gone. Anyway, fixed-link setups are a thing which should
    work out of the box with phylink, so it would not be in the best
    interest of the driver to add that check back.

    Instead, let's look at what other drivers do. Since commit 86f8b1c01a0a
    ("net: dsa: Do not make user port errors fatal"), DSA continues after a
    switch port fails to register, and works only with the ports that
    succeeded.

    We can achieve the same behavior in ocelot by unregistering the devlink
    port for ports where ocelot_port_phylink_create() failed (called via
    ocelot_probe_port), and clearing the bit in devlink_ports_registered
    for that port. This will make the next iteration reconsider the port
    that failed to probe as an unused port, and re-register a devlink port
    of type UNUSED for it. No other cleanup should need to be performed,
    since ocelot_probe_port() should be self-contained when it fails.

    Fixes: e6e12df625f2 ("net: mscc: ocelot: convert to phylink")
    Reported-and-tested-by: Horatiu Vultur <horatiu.vultur@microchip.com>
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

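    A sketch of the recovery described above; the helper names are
    assumptions patterned on the driver, not verbatim upstream code:

        err = ocelot_probe_port(ocelot, port, target, phy_node);
        if (err) {
                /* Drop the physical devlink port and clear its bit so
                 * the next pass re-registers this port as UNUSED. */
                ocelot_port_devlink_teardown(ocelot, port);
                devlink_ports_registered &= ~BIT(port);
                continue;
        }
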
2021-08-20  net: mscc: ocelot: be able to reuse a devlink_port after teardown  (Horatiu Vultur)

    There are cases where we would like to continue probing the switch even
    if one port has failed to probe. When that happens, we need to
    unregister a devlink_port of type DEVLINK_PORT_FLAVOUR_PHYSICAL and
    re-register it with type DEVLINK_PORT_FLAVOUR_UNUSED.

    This is fine, except that when devlink_port_attrs_set() is called on a
    structure on which devlink_port_register() has previously been called,
    a WARN_ON in devlink_port_attrs_set() fires because
    devlink_port->devlink must be NULL.

    So don't assume that the memory behind dlp is clean when calling
    ocelot_port_devlink_init(); just zero-initialize it.

    Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

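    A minimal sketch, assuming dlp points at the port's struct
    devlink_port inside ocelot_port_devlink_init():

        /* Forget any previous registration so devlink_port->devlink is
         * NULL again and devlink_port_attrs_set() does not WARN. */
        memset(dlp, 0, sizeof(*dlp));
        devlink_port_attrs_set(dlp, &attrs);
        return devlink_port_register(devlink, dlp, port);
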
2021-08-20  Merge branch 'dpaa2-switch-phylikn-fixes'  (David S. Miller)

    Vladimir Oltean says:

    ====================
    dpaa2-switch phylink fixes

    This is fixing two regressions introduced by the recent conversion of
    the dpaa2-switch driver to phylink.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  net: dpaa2-switch: call dpaa2_switch_port_disconnect_mac on probe error path  (Vladimir Oltean)

    Currently when probing returns an error, the netdev is freed but
    phylink_disconnect is not called. Create a common function between the
    unbind path and the error path, call it the opposite of
    dpaa2_switch_probe_port: dpaa2_switch_remove_port, and call it from
    both the unbind and the error path.

    Fixes: 84cba72956fd ("dpaa2-switch: integrate the MAC endpoint support")
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  net: dpaa2-switch: phylink_disconnect_phy needs rtnl_lock  (Vladimir Oltean)

    There is an ASSERT_RTNL in phylink_disconnect_phy which triggers
    whenever dpaa2_switch_port_disconnect_mac is called. To follow the
    pattern established by dpaa2_eth_disconnect_mac, take the rtnl_mutex
    every time we call dpaa2_switch_port_disconnect_mac.

    Fixes: 84cba72956fd ("dpaa2-switch: integrate the MAC endpoint support")
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

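    The pattern established by dpaa2_eth_disconnect_mac, sketched at the
    dpaa2-switch call sites (surrounding context omitted):

        rtnl_lock();
        dpaa2_switch_port_disconnect_mac(port_priv);
        rtnl_unlock();
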
2021-08-20  irqchip/apple-aic: Fix irq_disable from within irq handlers  (Sven Peter)

    When disable_irq_nosync for an interrupt is called from within its
    interrupt handler, this interrupt is only marked as disabled with the
    intention to mask it when it triggers again.

    The AIC hardware however automatically masks the interrupt when it is
    read. aic_irq_eoi then unmasks it again if it's not disabled *and* not
    masked. This results in a state mismatch between the hardware state and
    the state kept in irq_data: the hardware interrupt is masked but
    IRQD_IRQ_MASKED is not set. Any further calls to unmask_irq will
    directly return and the interrupt can never be enabled again.

    Fix this by keeping the hardware and irq_data state in sync by
    unmasking in aic_irq_eoi if and only if the irq_data state also assumes
    the interrupt to be unmasked.

    Fixes: 76cde2639411 ("irqchip/apple-aic: Add support for the Apple Interrupt Controller")
    Signed-off-by: Sven Peter <sven@svenpeter.dev>
    Acked-by: Hector Martin <marcan@marcan.st>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20210812100942.17206-1-sven@svenpeter.dev

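    A sketch of the corrected EOI path described above (function names per
    the driver; context abbreviated):

        static void aic_irq_eoi(struct irq_data *d)
        {
                /*
                 * Reading the event register auto-masked the IRQ in
                 * hardware; undo that only if the core's irq_data state
                 * also considers the interrupt unmasked.
                 */
                if (!irqd_irq_disabled(d) && !irqd_irq_masked(d))
                        aic_irq_unmask(d);
        }
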
2021-08-20  Merge branch 'gmii2rgmii-loopback'  (David S. Miller)

    Gerhard Engleder says:

    ====================
    Add Xilinx GMII2RGMII loopback support

    The Xilinx GMII2RGMII driver overrides PHY driver functions in order to
    configure the device according to the link speed of the PHY attached to
    it. This is implemented for a normal link but not for loopback. Andrew
    told me to use phy_loopback, and these changes make phy_loopback work
    in combination with Xilinx GMII2RGMII.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  net: phy: gmii2rgmii: Support PHY loopback  (Gerhard Engleder)

    Configure the speed if loopback is used; read_status is not called for
    loopback.

    Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  net: phy: Uniform PHY driver access  (Gerhard Engleder)

    struct phy_device contains a pointer to the PHY driver, and nearly
    everywhere this pointer is used to access the PHY driver. Only
    mdio_bus_phy_may_suspend() is still using to_phy_driver() instead of
    the PHY driver pointer.

    Uniform PHY driver access by eliminating the to_phy_driver() use in
    mdio_bus_phy_may_suspend(). Only phy_bus_match() and phy_probe() are
    still using to_phy_driver(), because the PHY driver pointer is not
    available there.

    Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

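    The pointer change, sketched; the real function checks more conditions
    than shown here:

        static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
        {
                /* was: derived via to_phy_driver() from the device model */
                struct phy_driver *phydrv = phydev->drv;

                /* Without a suspend callback there is nothing to do. */
                return phydrv && phydrv->suspend;
        }
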
2021-08-20  net: phy: Support set_loopback override  (Gerhard Engleder)

    phy_read_status and various other PHY functions support PHY specific
    overriding of driver functions by using a PHY specific pointer to the
    PHY driver. Add support for a PHY specific override to phy_loopback
    too.

    Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

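    A sketch of the override pattern phy_loopback() follows after this
    change, mirroring phy_read_status() (context abbreviated):

        if (phydev->drv->set_loopback)
                ret = phydev->drv->set_loopback(phydev, enable);
        else
                ret = genphy_loopback(phydev, enable);
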
2021-08-20  Merge branch 'sparx5-dma'  (David S. Miller)

    Steen Hegelund says:

    ====================
    Adding Frame DMA functionality to Sparx5

    v2: Removed an unused variable (proc_ctrl) from sparx5_fdma_start.

    This adds frame DMA functionality to the Sparx5 platform. Until now the
    Sparx5 SwitchDev driver has been using register-based injection and
    extraction when sending frames to/from the host CPU. With this series,
    Frame DMA functionality is now added. The Frame DMA is only used if the
    Frame DMA interrupt is configured in the device tree; otherwise the
    existing register-based injection and extraction is used.

    The Sparx5 has two ports that can be used for sending and receiving
    frames, but there are 8 channels that can be configured: 6 for
    injection and 2 for extraction. The additional channels can be used for
    more advanced scenarios, e.g. where virtual cores are used, but
    currently the driver only uses port 0 and channels 0 and 6,
    respectively.

    DCB (data control block) structures are passed to the Frame DMA with
    suitable information about frame start/end etc., as well as pointers to
    DB (data block) buffers. The Frame DMA engine can use interrupts to
    signal back when the frames have been injected or extracted.

    There is a limitation on the DB alignment also for injection: blocks
    must start on 16-byte boundaries, and this is why the driver currently
    copies the data into separate buffers.

    The Sparx5 switch core needs an IFH (Internal Frame Header) to pass
    information from the port to the switch core, and this header is added
    before injection and stripped after extraction.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  arm64: dts: sparx5: Add the Sparx5 switch frame DMA support  (Steen Hegelund)

    This adds the interrupt for the Sparx5 Frame DMA. If this configuration
    is present, the Sparx5 SwitchDev driver will use the Frame DMA feature,
    and if not it will use register-based injection and extraction for
    sending and receiving frames to the CPU.

    Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  net: sparx5: switchdev: adding frame DMA functionality  (Steen Hegelund)

    This adds frame DMA functionality to the Sparx5 platform. Ethernet
    frames can be extracted or injected autonomously to or from the
    device’s DDR3/DDR3L memory and/or PCIe memory space. Linked list data
    structures in memory are used for injecting or extracting Ethernet
    frames. The FDMA generates interrupts when frame extraction or
    injection is done and when the linked lists need updating.

    The FDMA implements two extraction channels, one per switch core port
    towards the VCore CPU system, and a total of six injection channels.
    Extraction channels are mapped one-to-one to the CPU ports, while
    injection channels can be individually assigned to any CPU port.

    - FDMA channels 0 through 5 correspond to CPU port 0 injection
      direction when FDMA_CH_CFG[channel].CH_INJ_PORT is set to 0.
    - FDMA channels 0 through 5 correspond to CPU port 1 injection
      direction when FDMA_CH_CFG[channel].CH_INJ_PORT is set to 1.
    - FDMA channel 6 corresponds to CPU port 0 extraction direction.
    - FDMA channel 7 corresponds to CPU port 1 extraction direction.

    The FDMA implements a strict priority scheme among channels. Extraction
    channels are prioritized over injection channels, and secondarily
    channels with higher channel numbers are prioritized over channels with
    lower numbers. On the other hand, ports are served on an
    equal-bandwidth principle both in the injection and extraction
    directions. The equal-bandwidth principle will not force an equal
    bandwidth. Instead, it ensures that the ports perform at their best
    considering the operating conditions.

    When more than one injection channel is enabled for injection on the
    same CPU port, priority determines which channel can inject data.
    Ownership is re-arbitrated on frame boundaries.

    The FDMA processes linked lists of DMA Control Block Structures (DCBs).
    The DCBs have the same basic structure for both injection and
    extraction. A DCB must be placed on a 64-bit word-aligned address in
    memory. Each DCB has a per-channel configurable amount of associated
    data blocks in memory, where the frame data is stored. The data blocks
    that are used by extraction channels must be placed on 64-bit
    word-aligned addresses in memory, and their length must be a multiple
    of 128 bytes.

    A DCB carries the pointer to the next DCB of the linked list, the INFO
    word which holds information for the DCB, and a pair of status word and
    memory pointer for every data block that it is associated with.

    Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

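    An illustrative DCB layout matching the description above; field and
    constant names are assumptions, not necessarily the driver's exact
    definitions:

        #define FDMA_DCB_MAX_DBS 15

        struct sparx5_db_hw {
                u64 dataptr;    /* 64-bit aligned data block address */
                u64 status;     /* done/abort flags and block length */
        };

        struct sparx5_rx_dcb_hw {
                u64 nextptr;    /* next DCB in the linked list */
                u64 info;       /* per-DCB information word */
                struct sparx5_db_hw db[FDMA_DCB_MAX_DBS];
        };
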
2021-08-20  dt-bindings: Output yamllint warnings to stderr  (Rob Herring)

    yamllint warnings go to stdout, which means on a quiet build no
    warnings are output. Fix this and redirect the yamllint output to
    stderr.

    Signed-off-by: Rob Herring <robh@kernel.org>
    Link: https://lore.kernel.org/r/20210820000047.1667819-1-robh@kernel.org

2021-08-20  powerpc/64s: Fix scv implicit soft-mask table for relocated kernels  (Nicholas Piggin)

    The implicit soft-mask table addresses get relocated if they use a
    relative symbol like a label. This is right for code that runs
    relocated but not for unrelocated. The scv interrupt vectors run
    unrelocated, so absolute addresses are required for their soft-mask
    table entry.

    This fixes crashing with relocated kernels, usually an asynchronous
    interrupt hitting in the scv handler, then hitting the trap that checks
    whether r1 is in userspace.

    Fixes: 325678fd0522 ("powerpc/64s: add a table of implicit soft-masked addresses")
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20210820103431.1701240-1-npiggin@gmail.com

2021-08-20  Merge tag 'batadv-next-pullrequest-20210820' of git://git.open-mesh.org/linux-merge  (David S. Miller)

    Simon Wunderlich says:

    ====================
    This (updated) cleanup patchset includes the following patches:

    - bump version strings, by Simon Wunderlich
    - update docs about move IRC channel away from freenode,
      by Sven Eckelmann (updated, added missing sign-off)
    - Switch to kstrtox.h for kstrtou64, by Sven Eckelmann
    - Update NULL checks, by Sven Eckelmann (2 patches)
    - remove remaining skb-copy calls for broadcast packets,
      by Linus Lüssing
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  dt-bindings: eeprom-93xx46: Convert to json schema  (Aswath Govindraju)

    Convert the eeprom-93xx46 binding documentation from txt to yaml
    format.

    Signed-off-by: Aswath Govindraju <a-govindraju@ti.com>
    Link: https://lore.kernel.org/r/20210818105626.31800-1-a-govindraju@ti.com
    Signed-off-by: Rob Herring <robh@kernel.org>

2021-08-20  spi: stm32: fix excluded_middle.cocci warnings  (kernel test robot)

    drivers/spi/spi-stm32.c:915:23-25: WARNING !A || A && B is equivalent to !A || B

    Condition !A || A && B is equivalent to !A || B.

    Generated by: scripts/coccinelle/misc/excluded_middle.cocci

    Fixes: 7ceb0b8a3ced ("spi: stm32: finalize message either on dma callback or EOT")
    CC: Alain Volmat <alain.volmat@foss.st.com>
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: kernel test robot <lkp@intel.com>
    Reviewed-by: Alain Volmat <alain.volmat@foss.st.com>
    Link: https://lore.kernel.org/r/20210713191004.GA14729@5eb5c2cbef84
    Signed-off-by: Mark Brown <broonie@kernel.org>

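    Schematically, the rewrite the cocci script suggests (generic condition
    names, not the exact ones at spi-stm32.c:915):

        /* before: the middle "a" term is redundant */
        if (!a || (a && b))
                finalize_message();

        /* after: logically equivalent */
        if (!a || b)
                finalize_message();
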
2021-08-20  ASoC: rsnd: adg: clearly handle clock error / NULL case  (Kuninori Morimoto)

    This driver assumes that every adg->clk[i] is not NULL. Because of this
    prerequisite, for_each_rsnd_clk() can work on all clks without checking
    for NULL. In other words, no adg->clk[i] should be NULL.

    Some SoCs might not have clk_a/b/c/i. devm_clk_get() returns an error
    in such cases, and this driver then calls rsnd_adg_null_clk_get() and
    uses null_clk instead of NULL.

    But devm_clk_get() might return NULL even though such a clock exists,
    and that does not mean an error (the user deliberately chose to disable
    the feature). A NULL clk itself is not an error from the clk point of
    view, but it is an error from this driver's point of view, because the
    driver does not assume that case. The current code, however, uses
    IS_ERR(), which doesn't care about NULL.

    This driver now uses IS_ERR_OR_NULL() instead of IS_ERR() for the clk
    check, and uses ERR_CAST() to clarify the null_clk error.

    One concern here is that it unconditionally uses null_clk if
    clk_a/b/c/i was an error. That is correct if the clock doesn't exist,
    but not correct if devm_clk_get() returned an error even though the
    clock exists. We would need to check "clock-names" from DT before
    calling devm_clk_get() to handle such a case, but let's assume that is
    overkill so far.

    Link: https://lore.kernel.org/r/YMCmhfQUimHCSH/n@mwanda
    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
    Link: https://lore.kernel.org/r/87v940wyf9.wl-kuninori.morimoto.gx@renesas.com
    Signed-off-by: Mark Brown <broonie@kernel.org>

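    A sketch of the tightened check under the assumptions above; the clk
    array and name handling are abbreviated:

        clk = devm_clk_get(dev, clk_name[i]);
        if (IS_ERR_OR_NULL(clk))
                /* Never keep a NULL clk; substitute the dummy clock so
                 * for_each_rsnd_clk() stays safe. */
                clk = rsnd_adg_null_clk_get(priv);
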
2021-08-20  ASoC: rsnd: core: make some arrays static const, makes object smaller  (Kuninori Morimoto)

    Don't populate arrays on the stack but instead make them static const.
    Makes the object code smaller by 48 bytes.

    Before:
       text    data     bss     dec     hex filename
      20938     916     104   21958    55c6 ./sound/soc/sh/rcar/core.o

    After:
       text    data     bss     dec     hex filename
      20890     916     104   21910    5596 ./sound/soc/sh/rcar/core.o

    (gcc version 11.1.0)

    Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
    Link: https://lore.kernel.org/r/87tujkwydx.wl-kuninori.morimoto.gx@renesas.com
    Signed-off-by: Mark Brown <broonie@kernel.org>

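    The pattern applied, illustratively (array name and contents are
    hypothetical):

        /* before: rebuilt on the stack at every call */
        int rates[] = { 8000, 11025, 16000, 22050, 32000, 44100, 48000 };

        /* after: emitted once into .rodata */
        static const int rates[] = { 8000, 11025, 16000, 22050, 32000,
                                     44100, 48000 };
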
2021-08-20  Merge tag 'mlx5-updates-2021-08-19' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  (David S. Miller)

    Saeed Mahameed says:

    ====================
    mlx5-updates-2021-08-19

    This series introduces the support for two new mlx5 features:

    1) Sample offload for tunneled traffic
    2) devlink rate objects support

    1) From Chris Mi: Sample offload for tunneled traffic
    =====================================================

    Background and solution
    -----------------------

    Currently the sample offload actions send the encapsulated packet to
    software. This series de-capsulates the packet before performing the
    sampling and sets the tunnel properties on the skb metadata fields to
    make the behavior consistent with OVS sFlow. If we de-capsulate first,
    we can't use the same match as before in the default table, so a post
    action instance is instantiated to continue processing the action
    list. If HW can preserve reg_c, the post action instance is also used.

    Post action infrastructure
    --------------------------

    Some tc actions are modeled in hardware using multiple tables, causing
    a tc action list split. For example, the CT action is modeled by
    jumping to a ct table which is controlled by nf flow table. sFlow
    jumps in hardware to a sample table, which continues to a "default
    table" where it should continue processing the action list.

    Multi-table actions are modeled in hardware using a unique fte_id. The
    fte_id is set before jumping to a table. Split actions continue to a
    post-action table where the matched fte_id value continues the
    execution of the tc action list.

    This series also introduces the post action infrastructure. Both ct
    and sample use it.

    Sample for tunnel in TC SW
    --------------------------

    tc filter add dev vxlan1 protocol ip parent ffff: prio 3 \
        flower src_mac 24:25:d0:e1:00:00 dst_mac 02:25:d0:13:01:02 \
        enc_src_ip 192.168.1.14 enc_dst_ip 192.168.1.13 \
        enc_dst_port 4789 enc_key_id 4 \
        action sample rate 1 group 6 \
        action tunnel_key unset \
        action mirred egress redirect dev enp4s0f0_1

    MLX5 sample HW offload
    ----------------------

    For the following typical flow table:

        +-------------------------------+
        + original flow table           +
        +-------------------------------+
        + original match                +
        +-------------------------------+
        + sample action + other actions +
        +-------------------------------+

    We translate the tc filter with the sample action to the following HW
    model:

                +---------------------+
                + original flow table +
                +---------------------+
                + original match      +
                +---------------------+
                           |
                           | set fte_id (if reg_c preserve cap)
                           | do decap
                           v
        +------------------------------------------------+
        + Flow Sampler Object                             +
        +------------------------------------------------+
        + sample ratio                                    +
        +------------------------------------------------+
        + sample table id          | default table id     +
        +------------------------------------------------+
                   |                          |
                   v                          v
        +-----------------------------+  +-------------------+
        + sample table                +  + default table     +
        +-----------------------------+  +-------------------+
        + forward to management vport +           |
        +-----------------------------+           |
                                          +-------+------+
                                          |              | reg_c preserve cap
                                          |              | or decap action
                                          v              v
                                +-----------------+  +-------------+
                                + per vport table +  + post action +
                                +-----------------+  +-------------+
                                + original match  +
                                +-----------------+
                                + other actions   +
                                +-----------------+

    2) From Dmytro Linkin: devlink rate object support for mlx5_core driver
    =======================================================================

    HIGH-LEVEL OVERVIEW

    Devlink leaf rate objects are created per vport (VF/SF, and PF on
    BlueField) in switchdev mode on devlink port registration.

    Implement devlink ops callbacks to create/destroy rate groups, set TX
    rate values of the vport/group, and assign a vport to a group. The
    driver accepts TX rate values as fractions of 1 Mbps.

    Refactor the existing eswitch QoS infrastructure to be accessible by
    both the legacy NDO rate API and the new devlink rate API. The NDO
    rate API is not removed/disabled in switchdev mode so as not to break
    existing users. Rate values configured with the NDO rate API are not
    visible to the devlink infrastructure, therefore the APIs should not
    be used simultaneously.

    IMPLEMENTATION DETAILS

    The driver provides a two-level rate hierarchy to manage bandwidth:
    group level and vport level. Initially each vport is added to an
    internal unlimited group created by default. Each rate element (vport
    or group) receives bandwidth relative to its parent element (for
    groups the parent is the physical link itself) in a round-robin
    manner, where each element gets a bandwidth value according to its
    weight.

    Example: four rate groups created with tx_share limits:

        $ devlink port function rate add \
            pci/0000:06:00.0/group_1 tx_share 30gbit
        $ devlink port function rate add \
            pci/0000:06:00.0/group_2 tx_share 20gbit
        $ devlink port function rate add \
            pci/0000:06:00.0/group_3 tx_share 20gbit
        $ devlink port function rate add \
            pci/0000:06:00.0/group_4 tx_share 10gbit

    The weights created in HW for each group are relative to the biggest
    tx_share value, which is 30gbit:

        <group_1> 1.0
        <group_2> 0.67
        <group_3> 0.67
        <group_4> 0.33

    Assuming the link speed is 50 Gbit/sec and each group can sustain that
    amount of traffic, the maximum bandwidth is
    50 / (1.0 + 0.67 + 0.67 + 0.33) = ~18.75 Gbit/sec. Normalized
    bandwidth values for the groups:

        <group_1> 18.75 * 1.0  = 18.75 Gbit/sec
        <group_2> 18.75 * 0.67 = 12.5  Gbit/sec
        <group_3> 18.75 * 0.67 = 12.5  Gbit/sec
        <group_4> 18.75 * 0.33 = 6.25  Gbit/sec

    If in the example above group_1 doesn't produce any traffic, then the
    maximum bandwidth becomes 50 / (0.67 + 0.67 + 0.33) = ~30.0 Gbit/sec.
    Normalized values:

        <group_2> 30.0 * 0.67 = 20.0 Gbit/sec
        <group_3> 30.0 * 0.67 = 20.0 Gbit/sec
        <group_4> 30.0 * 0.33 = 10.0 Gbit/sec

    The same normalization is applied to each vport in a group.

    Normalized values are internal, therefore the driver provides QoS
    tracepoints for the following events:

    * vport rate element creation/deletion;
    * vport rate element configuration;
    * group rate element creation/deletion;
    * group rate element configuration.

    PATCHES OVERVIEW

    1 - Moving and isolation of eswitch QoS logic in separate file;
    2 - Implement devlink leaf rate object support for vports;
    3 - Implement rate groups creation/deletion;
    4 - Implement TX rate management for the groups;
    5 - Implement parent set for vports;
    6 - Eswitch QoS tracepoints.
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  Merge tag 'for-net-next-2021-08-19' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next  (David S. Miller)

    Luiz Augusto von Dentz says:

    ====================
    bluetooth-next pull request for net-next:

    - Add support for Foxconn Mediatek Chip
    - Add support for LG LGSBWAC92/TWCM-K505D
    - hci_h5 flow control fixes and suspend support
    - Switch to use lock_sock for SCO and RFCOMM
    - Various fixes for extended advertising
    - Reword Intel's setup on btusb unifying the supported generations
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  kselftest/arm64: pac: Fix skipping of tests on systems without PAC  (Mark Brown)

    The PAC tests check to see if the system supports the relevant PAC
    features, but instead of skipping the tests when they can't be executed
    they fail them, which makes things look like they're not working when
    they are.

    Signed-off-by: Mark Brown <broonie@kernel.org>
    Link: https://lore.kernel.org/r/20210819165723.43903-1-broonie@kernel.org
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2021-08-20  Merge tag 'batadv-next-pullrequest-20210819' of git://git.open-mesh.org/linux-merge  (David S. Miller)

    Simon Wunderlich says:

    ====================
    This cleanup patchset includes the following patches:

    - bump version strings, by Simon Wunderlich
    - update docs about move IRC channel away from freenode,
      by Sven Eckelmann
    - Switch to kstrtox.h for kstrtou64, by Sven Eckelmann
    - Update NULL checks, by Sven Eckelmann (2 patches)
    - remove remaining skb-copy calls for broadcast packets,
      by Linus Lüssing
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>

2021-08-20  locking/semaphore: Add might_sleep() to down_*() family  (Xiaoming Ni)

    A semaphore is a sleeping lock. Add might_sleep() to the down_*()
    family (with the exception of down_trylock()) to detect sleeping in
    atomic context.

    Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210809021215.19991-1-nixiaoming@huawei.com

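    A sketch of the resulting down() with the new annotation; this mirrors
    the familiar kernel/locking/semaphore.c structure, shown here for
    illustration:

        void __sched down(struct semaphore *sem)
        {
                unsigned long flags;

                might_sleep();  /* new: complain in atomic context */
                raw_spin_lock_irqsave(&sem->lock, flags);
                if (likely(sem->count > 0))
                        sem->count--;
                else
                        __down(sem);
                raw_spin_unlock_irqrestore(&sem->lock, flags);
        }
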
2021-08-20  Documentation: arm64: describe asymmetric 32-bit support  (Will Deacon)

    Document support for running 32-bit tasks on asymmetric 32-bit systems
    and its impact on the user ABI when enabled.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210730112443.23245-17-will@kernel.org

2021-08-20  arm64: Remove logic to kill 32-bit tasks on 64-bit-only cores  (Will Deacon)

    The scheduler now knows enough about these braindead systems to place
    32-bit tasks accordingly, so throw out the safety checks and allow the
    ret-to-user path to avoid do_notify_resume() if there is nothing to do.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-16-will@kernel.org

2021-08-20  arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0  (Will Deacon)

    Allow systems with mismatched 32-bit support at EL0 to run 32-bit
    applications based on a new kernel parameter.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-15-will@kernel.org

2021-08-20  arm64: Advertise CPUs capable of running 32-bit applications in sysfs  (Will Deacon)

    Since 32-bit applications will be killed if they are caught trying to
    execute on a 64-bit-only CPU in a mismatched system, advertise the set
    of 32-bit capable CPUs to userspace in sysfs.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-14-will@kernel.org

2021-08-20  arm64: Prevent offlining first CPU with 32-bit EL0 on mismatched system  (Will Deacon)

    If we want to support 32-bit applications, then when we identify a CPU
    with mismatched 32-bit EL0 support we must ensure that we will always
    have an active 32-bit CPU available to us from then on. This is
    important for the scheduler, because is_cpu_allowed() will be
    constrained to 32-bit CPUs for compat tasks and forced migration due to
    a hotplug event will hang if no 32-bit CPUs are available.

    On detecting a mismatch, prevent offlining of either the mismatching
    CPU if it is 32-bit capable, or find the first active 32-bit capable
    CPU otherwise.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-13-will@kernel.org

2021-08-20  arm64: exec: Adjust affinity for compat tasks with mismatched 32-bit EL0  (Will Deacon)

    When exec'ing a 32-bit task on a system with mismatched support for
    32-bit EL0, try to ensure that it starts life on a CPU that can
    actually run it. Similarly, when exec'ing a 64-bit task on such a
    system, try to restore the old affinity mask if it was previously
    restricted.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
    Reviewed-by: Quentin Perret <qperret@google.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-12-will@kernel.org

2021-08-20  arm64: Implement task_cpu_possible_mask()  (Will Deacon)

    Provide an implementation of task_cpu_possible_mask() so that we can
    prevent 64-bit-only cores being added to the 'cpus_mask' for compat
    tasks on systems with mismatched 32-bit support at EL0.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-11-will@kernel.org

2021-08-20  Merge branch 'sched/core'  (Peter Zijlstra)

2021-08-20  sched: Introduce dl_task_check_affinity() to check proposed affinity  (Will Deacon)

    In preparation for restricting the affinity of a task during execve()
    on arm64, introduce a new dl_task_check_affinity() helper function to
    give an indication as to whether the restricted mask is admissible for
    a deadline task.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-10-will@kernel.org

2021-08-20  sched: Allow task CPU affinity to be restricted on asymmetric systems  (Will Deacon)

    Asymmetric systems may not offer the same level of userspace ISA
    support across all CPUs, meaning that some applications cannot be
    executed by some CPUs. As a concrete example, upcoming arm64
    big.LITTLE designs do not feature support for 32-bit applications on
    both clusters.

    Although userspace can carefully manage the affinity masks for such
    tasks, one place where it is particularly problematic is execve()
    because the CPU on which the execve() is occurring may be incompatible
    with the new application image. In such a situation, it is desirable to
    restrict the affinity mask of the task and ensure that the new image is
    entered on a compatible CPU. From userspace's point of view, this looks
    the same as if the incompatible CPUs have been hotplugged off in the
    task's affinity mask. Similarly, if a subsequent execve() reverts to a
    compatible image, then the old affinity is restored if it is still
    valid.

    In preparation for restricting the affinity mask for compat tasks on
    arm64 systems without uniform support for 32-bit applications,
    introduce {force,relax}_compatible_cpus_allowed_ptr(), which
    respectively restrict and restore the affinity mask for a task based
    on the compatible CPUs.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Reviewed-by: Quentin Perret <qperret@google.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-9-will@kernel.org

2021-08-20  sched: Split the guts of sched_setaffinity() into a helper function  (Will Deacon)

    In preparation for replaying user affinity requests using a saved mask,
    split sched_setaffinity() up so that the initial task lookup and
    security checks are only performed when the request is coming directly
    from userspace.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-8-will@kernel.org

2021-08-20  sched: Introduce task_struct::user_cpus_ptr to track requested affinity  (Will Deacon)

    In preparation for saving and restoring the user-requested CPU affinity
    mask of a task, add a new cpumask_t pointer to 'struct task_struct'. If
    the pointer is non-NULL, then the mask is copied across fork() and
    freed on task exit.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-7-will@kernel.org

2021-08-20  sched: Reject CPU affinity changes based on task_cpu_possible_mask()  (Will Deacon)

    Reject explicit requests to change the affinity mask of a task via
    set_cpus_allowed_ptr() if the requested mask is not a subset of the
    mask returned by task_cpu_possible_mask(). This ensures that the
    'cpus_mask' for a given task cannot contain CPUs which are incapable of
    executing it, except in cases where the affinity is forced.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
    Reviewed-by: Quentin Perret <qperret@google.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-6-will@kernel.org

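    The core of the new rule, sketched (the exact call site inside the
    affinity-change path is abbreviated):

        /* Refuse masks that include CPUs the task can never run on,
         * unless the affinity change is forced by the kernel. */
        if (!cpumask_subset(new_mask, task_cpu_possible_mask(p)))
                return -EINVAL;
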
2021-08-20  cpuset: Cleanup cpuset_cpus_allowed_fallback() use in select_fallback_rq()  (Will Deacon)

    select_fallback_rq() only needs to recheck for an allowed CPU if the
    affinity mask of the task has changed since the last check.

    Return a 'bool' from cpuset_cpus_allowed_fallback() to indicate whether
    the affinity mask was updated, and use this to elide the allowed check
    when the mask has been left alone.

    No functional change.

    Suggested-by: Valentin Schneider <valentin.schneider@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-5-will@kernel.org

2021-08-20  cpuset: Honour task_cpu_possible_mask() in guarantee_online_cpus()  (Will Deacon)

    Asymmetric systems may not offer the same level of userspace ISA
    support across all CPUs, meaning that some applications cannot be
    executed by some CPUs. As a concrete example, upcoming arm64
    big.LITTLE designs do not feature support for 32-bit applications on
    both clusters.

    Modify guarantee_online_cpus() to take task_cpu_possible_mask() into
    account when trying to find a suitable set of online CPUs for a given
    task. This will avoid passing an invalid mask to set_cpus_allowed_ptr()
    during ->attach() and will subsequently allow the cpuset hierarchy to
    be taken into account when forcefully overriding the affinity mask for
    a task which requires migration to a compatible CPU.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
    Link: https://lkml.kernel.org/r/20210730112443.23245-4-will@kernel.org

2021-08-20  cpuset: Don't use the cpu_possible_mask as a last resort for cgroup v1  (Will Deacon)

    If the scheduler cannot find an allowed CPU for a task,
    cpuset_cpus_allowed_fallback() will widen the affinity to
    cpu_possible_mask if cgroup v1 is in use.

    In preparation for allowing architectures to provide their own fallback
    mask, just return early if we're either using cgroup v1 or we're using
    cgroup v2 with a mask that contains invalid CPUs. This will allow
    select_fallback_rq() to figure out the mask by itself.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Reviewed-by: Quentin Perret <qperret@google.com>
    Link: https://lkml.kernel.org/r/20210730112443.23245-3-will@kernel.org

2021-08-20  sched: Introduce task_cpu_possible_mask() to limit fallback rq selection  (Will Deacon)

    Asymmetric systems may not offer the same level of userspace ISA
    support across all CPUs, meaning that some applications cannot be
    executed by some CPUs. As a concrete example, upcoming arm64
    big.LITTLE designs do not feature support for 32-bit applications on
    both clusters.

    On such a system, we must take care not to migrate a task to an
    unsupported CPU when forcefully moving tasks in select_fallback_rq()
    in response to a CPU hot-unplug operation.

    Introduce a task_cpu_possible_mask() hook which, given a task argument,
    allows an architecture to return a cpumask of CPUs that are capable of
    executing that task. The default implementation returns the
    cpu_possible_mask, since sane machines do not suffer from per-cpu ISA
    limitations that affect scheduling. The new mask is used when selecting
    the fallback runqueue as a last resort before forcing a migration to
    the first active CPU.

    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
    Reviewed-by: Quentin Perret <qperret@google.com>
    Link: https://lore.kernel.org/r/20210730112443.23245-2-will@kernel.org

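    The default hook as described, sketched as a macro fallback; the exact
    header and guard placement are assumptions:

        /* Architectures with per-task CPU restrictions override this. */
        #ifndef task_cpu_possible_mask
        # define task_cpu_possible_mask(p)      cpu_possible_mask
        #endif
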
2021-08-20  sched: Cgroup SCHED_IDLE support  (Josh Don)

    This extends SCHED_IDLE to cgroups.

    Interface: cgroup/cpu.idle.
      0: default behavior
      1: SCHED_IDLE

    Extending SCHED_IDLE to cgroups means that we incorporate the existing
    aspects of SCHED_IDLE; a SCHED_IDLE cgroup will count all of its
    descendant threads towards the idle_h_nr_running count of all of its
    ancestor cgroups. Thus, sched_idle_rq() will work properly.
    Additionally, SCHED_IDLE cgroups are configured with minimum weight.

    There are two key differences between the per-task and per-cgroup
    SCHED_IDLE interface:

    - The cgroup interface allows tasks within a SCHED_IDLE hierarchy to
      maintain their relative weights. The entity that is "idle" is the
      cgroup, not the tasks themselves.

    - Since the idle entity is the cgroup, our SCHED_IDLE wakeup preemption
      decision is not made by comparing the current task with the woken
      task, but rather by comparing their matching sched_entity.

    A typical use-case for this is a user that creates an idle and a
    non-idle subtree. The non-idle subtree will dominate competition vs.
    the idle subtree, but the idle subtree will still be high priority vs.
    other users on the system. The latter is accomplished via comparing
    matching sched_entity in the wakeup preemption path (this could also be
    improved by making the sched_idle_rq() decision dependent on the
    perspective of a specific task).

    For now, we maintain the existing SCHED_IDLE semantics. Future patches
    may make improvements that extend how we treat SCHED_IDLE entities.

    The per-task_group idle field is an integer that currently only holds
    either a 0 or a 1. This is explicitly typed as an integer to allow for
    further extensions to this API. For example, a negative value may
    indicate a highly latency-sensitive cgroup that should be preferred
    for preemption/placement/etc.

    Signed-off-by: Josh Don <joshdon@google.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
    Link: https://lore.kernel.org/r/20210730020019.1487127-2-joshdon@google.com

2021-08-20  sched/topology: Skip updating masks for non-online nodes  (Valentin Schneider)

    The scheduler currently expects NUMA node distances to be stable from
    init onwards, and as a consequence builds the related data structures
    once-and-for-all at init (see sched_init_numa()).

    Unfortunately, on some architectures node distance is unreliable for
    offline nodes and may very well change upon onlining.

    Skip over offline nodes during sched_init_numa(). Track nodes that have
    been onlined at least once, and trigger a build of a node's NUMA masks
    when it is first onlined post-init.

    Reported-by: Geetika Moolchandani <Geetika.Moolchandani1@ibm.com>
    Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/20210818074333.48645-1-srikar@linux.vnet.ibm.com