path: root/arch
2023-10-16  arm64: dts: rockchip: add ADC buttons to rk3588-evb1  (Sebastian Reichel)

The Rockchip EVB1 has a couple of buttons connected via an ADC line. Let's add them to its devicetree.

Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Link: https://lore.kernel.org/r/20231005134357.37171-1-sebastian.reichel@collabora.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2023-10-16  arm64: dts: rockchip: Add AV1 decoder node to rk3588s  (Benjamin Gaignard)

Add node for AV1 video decoder.

Signed-off-by: Benjamin Gaignard <benjamin.gaignard@collabora.com>
Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Link: https://lore.kernel.org/r/20231006065334.8117-1-benjamin.gaignard@collabora.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2023-10-16  arm64: dts: rockchip: Add missing sdmmc2 SDR rates to rock-3a  (Tamás Szűcs)

Add missing UHS-I SDR rates to sdmmc2. Add explicit alias as mmc2 while at it. It would be good to have matching timings enabled in case slower SDIO devices are encountered.

Signed-off-by: Tamás Szűcs <tszucs@protonmail.ch>
Link: https://lore.kernel.org/r/20231011191448.58936-1-tszucs@protonmail.ch
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2023-10-16  arm64: dts: rockchip: Fix i2s0 pin conflict on ROCK Pi 4 boards  (Christopher Obbard)

Commit 91419ae0420f ("arm64: dts: rockchip: use BCLK to GPIO switch on rk3399") modified i2s0 to switch the corresponding pins off when idle. For the ROCK Pi 4 boards, this means that i2s0 has the following pinctrl setting:

  pinctrl-names = "bclk_on", "bclk_off";
  pinctrl-0 = <&i2s0_2ch_bus>;
  pinctrl-1 = <&i2s0_8ch_bus_bclk_off>;

Due to this change, i2s0 fails to probe on my Radxa ROCK 4SE and ROCK Pi 4B boards:

  rockchip-pinctrl pinctrl: pin gpio3-29 already requested by leds; cannot claim for ff880000.i2s
  rockchip-pinctrl pinctrl: pin-125 (ff880000.i2s) status -22
  rockchip-pinctrl pinctrl: could not request pin 125 (gpio3-29) from group i2s0-8ch-bus-bclk-off on device rockchip-pinctrl
  rockchip-i2s ff880000.i2s: Error applying setting, reverse things back
  rockchip-i2s ff880000.i2s: bclk disable failed -22

A pin requested for i2s0_8ch_bus_bclk_off has already been requested by user_led2, so whichever driver probes first will have the pin allocated. The hardware uses 2-channel i2s, so fix this error by setting pinctrl-1 to i2s0_2ch_bus_bclk_off, which doesn't contain the pin allocated to user_led2.

I checked the schematics for all Radxa boards based on ROCK Pi 4 and this change is compatible with all boards.

Fixes: 91419ae0420f ("arm64: dts: rockchip: use BCLK to GPIO switch on rk3399")
Signed-off-by: Christopher Obbard <chris.obbard@collabora.com>
Link: https://lore.kernel.org/r/20231013114737.494410-3-chris.obbard@collabora.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2023-10-16  arm64: dts: rockchip: Add i2s0-2ch-bus-bclk-off pins to RK3399  (Christopher Obbard)

Commit 0efaf8078393 ("arm64: dts: rockchip: add i2s0-2ch-bus pins on rk3399") introduced a pinctrl for i2s0 in two-channel mode. Commit 91419ae0420f ("arm64: dts: rockchip: use BCLK to GPIO switch on rk3399") modified i2s0 to switch the corresponding pins off when idle. Although an idle pinctrl node was added for i2s0 in 8-channel mode, a similar idle pinctrl node for i2s0 in 2-channel mode was not added. Add it.

Fixes: 91419ae0420f ("arm64: dts: rockchip: use BCLK to GPIO switch on rk3399")
Signed-off-by: Christopher Obbard <chris.obbard@collabora.com>
Link: https://lore.kernel.org/r/20231013114737.494410-2-chris.obbard@collabora.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2023-10-16  arm64: dts: rockchip: Enable UART6 on rock-5b  (Tamás Szűcs)

Enable the UART6 lines on the Radxa ROCK 5 Model B M.2 Key E connector.

Signed-off-by: Tamás Szűcs <szucst@iit.uni-miskolc.hu>
Link: https://lore.kernel.org/r/20231013215208.81345-1-szucst@iit.uni-miskolc.hu
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
2023-10-16  arm64/mm: Hoist synchronization out of set_ptes() loop  (Ryan Roberts)

set_ptes() sets a physically contiguous block of memory (which all belongs to the same folio) to a contiguous block of ptes. The arm64 implementation of this previously just looped, operating on each individual pte. But the __sync_icache_dcache() and mte_sync_tags() operations can both be hoisted out of the loop so that they are performed once for the contiguous set of pages (which may be less than the whole folio). This should result in minor performance gains.

__sync_icache_dcache() already acts on the whole folio, and sets a flag in the folio so that it skips duplicate calls. But by hoisting the call, all the pte testing is done only once.

mte_sync_tags() operates on each individual page with its own loop. But by passing the number of pages explicitly, we can rely solely on its loop and do the checks only once. This also makes the code robust for the future: rather than assuming that mapping the head page of a compound page implies that the whole compound page is being mapped, we explicitly know how many pages are being mapped. The old assumption may not continue to hold once the "anonymous large folios" feature is merged.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20231005140730.2191134-1-ryan.roberts@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
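To make the hoisting concrete, here is a minimal C sketch of the resulting shape, under simplified assumptions: __sync_icache_dcache(), mte_sync_tags() and set_ptes() are named in the commit, while the wrapper name __sync_cache_and_tags() and the exact predicate details are illustrative:

  /* Sketch: per-folio synchronization hoisted out of the per-pte loop. */
  static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
  {
          /* Run once per contiguous range rather than once per pte. */
          if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
                  __sync_icache_dcache(pte);      /* acts on the whole folio */

          if (system_supports_mte() && pte_present(pte) && pte_tagged(pte))
                  mte_sync_tags(pte, nr_pages);   /* page count passed explicitly */
  }

  static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
                              pte_t *ptep, pte_t pte, unsigned int nr)
  {
          __sync_cache_and_tags(pte, nr);         /* hoisted out of the loop */

          for (;;) {
                  set_pte(ptep, pte);
                  if (--nr == 0)
                          break;
                  ptep++;
                  pte_val(pte) += PAGE_SIZE;      /* advance to the next page */
          }
  }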
2023-10-16  Merge 6.6-rc6 into usb-next  (Greg Kroah-Hartman)

We need the USB and Thunderbolt fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-16  Merge tag 'qcom-arm64-fixes-for-6.6' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux into arm/fixes  (Arnd Bergmann)

Qualcomm ARM64 DeviceTree fixes for v6.6

This fixes an error with an incorrect gpio-ranges preventing the PMIC GPIO instances from being registered on SA8775P, and fixes a regression from a refactoring of the top-level clocks node that caused divclocks to no longer probe on a few of the MSM8996 devices.

* tag 'qcom-arm64-fixes-for-6.6' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux:
  arm64: dts: qcom: msm8996-xiaomi: fix missing clock populate
  arm64: dts: qcom: apq8096-db820c: fix missing clock populate
  arm64: dts: qcom: sa8775p: correct PMIC GPIO label in gpio-ranges

Link: https://lore.kernel.org/r/20231015180112.853805-1-andersson@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-16  Merge tag 'stm32-dt-for-v6.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/atorgue/stm32 into soc/dt  (Arnd Bergmann)

STM32 DT for v6.7, round 1

Highlights:
----------
- MCU:
  - Add SDIO sleep pins for F7 boards.
- MPU:
  - STM32MP13:
    - Add HASH and RNG support.
  - STM32MP15:
    - OCTAVO:
      - Fix regulators (LDO1/2/6 and 3v3_hdmi) by removing "always-on" property on OSD32 common file.
      - Add new OSD32MP1-RED board. It embeds a STM32MP157C SoC, 512 MB of DDR3, CAN-FD, HDMI, USB-C OTG.
  - STM32MP25:
    - Add and enable SDCARD support.
    - Add and enable ARM watchdog support and set it to 32 seconds.

* tag 'stm32-dt-for-v6.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/atorgue/stm32:
  ARM: dts: stm32: add SDIO pinctrl sleep support on stm32f7 boards
  ARM: dts: stm32: add stm32f7 SDIO sleep pins
  ARM: dts: stm32: add RNG node for STM32MP13x platforms
  ARM: dts: stm32: omit unused pinctrl groups from stm32mp15 dtb files
  ARM: dts: stm32: stm32f7-pinctrl: don't use multiple blank lines
  ARM: dts: stm32: add HASH on stm32mp131
  arm64: dts: st: enable secure arm-wdt watchdog on stm32mp257f-ev1
  arm64: dts: st: add arm-wdt node for watchdog support on stm32mp251
  arm64: dts: st: add SD-card support on STM32MP257F-EV1 board
  arm64: dts: st: add sdmmc1 pins for stm32mp25
  arm64: dts: st: add sdmmc1 node in stm32mp251 SoC file
  ARM: dts: stm32: Add Octavo OSD32MP1-RED board
  dt-bindings: arm: stm32: add extra SiP compatible for oct,stm32mp157c-osd32-red
  ARM: dts: stm32: osd32: fix ldo6 not required to be always-on
  ARM: dts: stm32: lxa-tac: remove v3v3_hdmi override
  ARM: dts: stm32: osd32: fix ldo2 not required to be always-on
  ARM: dts: stm32: osd32: fix ldo1 not required to be always-on
  ARM: dts: stm32: Add alternate pinmux for can pins
  ARM: dts: stm32: Add alternate pinmux for ldtc pins
  ARM: dts: stm32: Add alternate pinmux for i2s pins

Link: https://lore.kernel.org/r/8a6b3ca9-f10d-825e-e371-8aeff3289a25@foss.st.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-16  Merge tag 'amlogic-arm64-dt-for-v6.7' of https://git.kernel.org/pub/scm/linux/kernel/git/amlogic/linux into soc/dt  (Arnd Bergmann)

Amlogic ARM64 DT changes for v6.7:
- Add audio DT nodes for p200/p201/u200
- Add a bunch of peripherals for Amlogic-T7 (watchdog, power domain, pinctrl)
- Add a bunch of peripherals for Amlogic-A1 (clk, usb, efuse, spi, uarts, emmc, ADC, rng, i2c)
- Add NAND node on Amlogic AXG
- Again a bunch of DT fixups for DT bindings check
- New boards:
  - Amlogic AD402 reference board based on A113L SoC
  - Libre Computer Cottonwood boards

* tag 'amlogic-arm64-dt-for-v6.7' of https://git.kernel.org/pub/scm/linux/kernel/git/amlogic/linux: (38 commits)
  arm64: dts: amlogic: a1: support all i2c masters and their muxes
  arm64: dts: amlogic: add libretech cottonwood support
  dt-bindings: arm: amlogic: add libretech cottonwood support
  arm64: dts: meson-a1-ad402: set SPIFC pins
  arm64: dts: meson: a1: Add SPIFC mux pins
  arm64: dts: meson-s4: add hwrng node
  arm64: dts: meson: g12: name spdifout consistently
  arm64: dts: Add pinctrl node for Amlogic T7 SoCs
  arm64: dts: meson: u200: add onboard devices
  arm64: dts: meson: u200: use TDM C for HDMI
  arm64: dts: meson: u200: add spdifout b routes
  arm64: dts: meson: u200: add missing audio clock controller
  arm64: dts: meson: u200: fix spdif output pin
  arm64: dts: amlogic: t7: add power domain controller node
  dt-bindings: power: add Amlogic T7 power domains
  arm64: dts: meson-g12: Fix compatible for amlogic,g12a-tdmin
  arm64: dts: meson-g12: Fix clock order for amlogic,axg-tdm-iface devices
  arm64: dts: amlogic: meson-axg: Meson NAND node
  dt-bindings: arm: amlogic: add Amlogic AD402 bindings
  arm64: dts: introduce Amlogic AD402 reference board based on A113L SoC
  ...

Link: https://lore.kernel.org/r/b3e1bf66-9182-4d48-88ef-7efc20466e7c@linaro.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-16  Merge tag 'aspeed-6.7-devicetree' of git://git.kernel.org/pub/scm/linux/kernel/git/joel/bmc into soc/dt  (Arnd Bergmann)

ASPEED device tree updates for 6.7

- New machine:
  * Facebook's Minerva, a Chassis Management Controller using the AST2600
- Updates to Ampere's Mt Mitchell and Mt Jade systems, and IBM's Bonnell

* tag 'aspeed-6.7-devicetree' of git://git.kernel.org/pub/scm/linux/kernel/git/joel/bmc:
  ARM: dts: aspeed: mtmitchell: Add I2C NVMe alias port
  ARM: dts: aspeed: mtmitchell: Remove redundant ADC configurations
  ARM: dts: aspeed: mtmitchell: Add inlet temperature sensor
  ARM: dts: aspeed: mtjade: Add the gpio-hog
  ARM: dts: aspeed: mtjade, mtmitchell: Add new gpio-line-names
  ARM: dts: aspeed: mtjade, mtmitchell: Update gpio-line-names
  ARM: dts: aspeed: Minerva: Add Facebook Minerva CMC board
  dt-bindings: arm: aspeed: document board compatibles
  ARM: dts: aspeed: bonnell: Add reserved memory for TPM event log

Link: https://lore.kernel.org/r/CACPK8XebMAQvgQTRH+KoaTFg7CzRkS79Fz3Kn8p4mbaezWGkUQ@mail.gmail.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-16  Merge tag 'qcom-dts-for-6.7' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux into soc/dt  (Arnd Bergmann)

Qualcomm ARM DeviceTree updates for v6.7

RPM master stats is introduced for MSM8226 and MSM8974. The PCIe PHY of SDX55 is transitioned to the new binding. The inverted hall sensor on the Samsung Galaxy Tab 4 is fixed. A number of issues reported by DeviceTree validation are fixed.

* tag 'qcom-dts-for-6.7' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux:
  ARM: dts: qcom: ipq8064: move keys and leds out of soc node
  ARM: dts: qcom: mdm9615: populate vsdcc fixed regulator
  ARM: dts: qcom: apq8060: drop incorrect regulator-type
  ARM: dts: qcom: apq8064: drop incorrect regulator-type
  ARM: dts: qcom: sdx65: fix SDHCI clocks order
  ARM: dts: qcom: apq8064: drop label property from DSI
  ARM: qcom: msm8974: Add rpm-master-stats node
  ARM: qcom: msm8226: Add rpm-master-stats node
  ARM: dts: qcom: apq8026-samsung-matisse-wifi: Fix inverted hall sensor
  ARM: dts: qcom: drop incorrect cell-index from SPMI
  ARM: dts: qcom-sdx55: switch PCIe QMP PHY to new style of bindings

Link: https://lore.kernel.org/r/20231015204558.855987-1-andersson@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-16  x86/mce: Cleanup mce_usable_address()  (Yazen Ghannam)

Move Intel-specific checks into a helper function. Explicitly use "bool" for return type. No functional change intended.

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230613141142.36801-4-yazen.ghannam@amd.com
2023-10-16  x86/mce: Define amd_mce_usable_address()  (Yazen Ghannam)

Currently, all valid MCA_ADDR values are assumed to be usable on AMD systems. However, this is not correct in most cases. Notifiers expecting usable addresses may then operate on inappropriate values.

Define a helper function to do AMD-specific checks for a usable memory address. List out all known cases.

[ bp: Tone down the capitalized words. ]

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230613141142.36801-3-yazen.ghannam@amd.com
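Taken together with the cleanup above, the resulting structure is roughly the following sketch; mce_usable_address(), amd_mce_usable_address() and intel_mce_usable_address() are the names from these patches, but the vendor-specific checks shown here are simplified placeholders rather than the full list:

  static bool intel_mce_usable_address(struct mce *m)
  {
          /* Simplified placeholder: the real helper also inspects
           * MCi_MISC for a physical-address mode and a sane LSB. */
          return !!(m->status & MCI_STATUS_MISCV);
  }

  static bool amd_mce_usable_address(struct mce *m)
  {
          /* Simplified placeholder: on AMD, only certain bank types
           * report a usable system physical address in MCA_ADDR. */
          return !!mce_flags.smca;
  }

  bool mce_usable_address(struct mce *m)
  {
          if (!(m->status & MCI_STATUS_ADDRV))
                  return false;

          switch (m->cpuvendor) {
          case X86_VENDOR_AMD:
                  return amd_mce_usable_address(m);
          case X86_VENDOR_INTEL:
          case X86_VENDOR_ZHAOXIN:
                  return intel_mce_usable_address(m);
          default:
                  return true;
          }
  }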
2023-10-16  Merge tag 'qcom-arm64-for-6.7' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux into soc/dt  (Arnd Bergmann)

Qualcomm ARM64 DeviceTree updates for v6.7

The SM7125 platform is introduced, with support for Xiaomi Redmi Note 9 Pro. Support for Fairphone 5, on QCM6490, and BQ Aquaris M5, on MSM8939, is introduced.

With the various QMP PHY bindings having been refactored, SC7180, SC7280, SDM845, SM8150, and SM8250 are transitioned to the new USB/DP combo PHY binding. IPQ6018, IPQ8074, MSM8998, SC7280, SC8180X, SDM845, SM8150, SM8250, and SM8450 are transitioned to the new PCIe PHY binding, and SC8180X is transitioned to the new UFS PHY binding.

The UFS power supply situation is clarified, and a range of boards across MSM8996, MSM8998, SM4250, SM6115, SM6125, SM8350, SM8450, and SM8550 receives corrections for this.

On IPQ5018 watchdog support is introduced, and the SCM driver has SDI (debug image) enabled - so that it can be disabled. On IPQ5332 USB is enabled. The hwspinlock identifier is corrected across IPQ5332, IPQ6018, IPQ8074 and IPQ9574.

The reserved-memory ranges for the remoteprocs on MSM8916 boards are refactored, to reduce the amount of duplicated boilerplate definitions. A number of nodes are transitioned to be disabled by default, to facilitate new boards.

Samsung Galaxy Tab A 8.0 and Samsung Galaxy Tab A 9.7 gain display support, and the latter capacitive keys. Samsung Galaxy J5 gains accelerometer support. The Dragonboard 410c gains the missing ADV7533 regulator definitions, and an overlay forcing the board to operate in host mode, for automation purposes.

On MSM8976, the outgoing IPC bits for modem and wcss are corrected, and reserved-memory regions are updated. Incorrect reserved-memory regions are also corrected for MSM8992 and MSM8994 devices.

The QRB2210 RB1 board gets debug UART moved per hardware update. Regulator voltage ranges are corrected, remoteprocs are enabled, USB SuperSpeed PHY is enabled, and GPIO LEDs are introduced for Bluetooth, WiFi and a user LED.

Interrupts are described for the SGMII PHYs on the SA8775P Ride platform, and the inline crypto engine is introduced for UFS.

On SC7180 the audio DSP remoteproc is introduced. Additional SKUs of the Lazor boards are added. The RT5682 audio codec part is reorganized to be easier to maintain. On Trogdor devices, the touchscreen and display panels are linked to improve the power cycling behavior across the two.

On SC7280 the cpuidle states are rewritten to support OS-initiated PSCI mode. LMH interrupts are added, to receive feedback when throttling occurs. The embedded usb debugger (EUD) description and the dummy usb-c-connector node are removed, as they are not correctly described. The USB3 pipe clock input of the global clock controller is properly described.

Modem remoteproc is introduced on SDM630, and the SDM670 PDC mapping is corrected.

On the SDM845 MTP PCIe support is introduced. The volume down and reset buttons are defined. Remoteproc firmware names and the WiFi configuration are corrected.

On Sony Xperia XZ2, XZ2 Compact, and XZ3 GPIO line names are provided for TLMM and PMICs. The camera regulators are also added.

Display hardware blocks are added to SM6125, and enabled on Sony Xperia 10 II. The ref clock is wired up to the PCIe PHY on SM8150.

On SM8250/QRB5165, and the RB5 board, the DisplayPort controller and the TCPM are introduced, with all the plumbing to get USB role and orientation switching, as well as DisplayPort altmode, to work. Interconnects and power-domains are also described for the QUPs on this platform.

Previously ignored PMICs are described for the SM8350 Hardware Development Kit (HDK), and PMR735a regulators are introduced. The pinctrl state for uart18 is corrected.

On SM8450 HDK audio routes are corrected, to enable the analog microphones on the board. The addition of the PRNG is reverted, in favor of an upcoming addition of a true RNG.

Constants are replaced with QCOM_SCM_VMID_* defines on a variety of boards. The SM8550 QRD board gets Bluetooth support, and the camera clock controller is described.

Additionally, a number of fixes are introduced in a variety of platforms and boards, to align with Devicetree bindings.

* tag 'qcom-arm64-for-6.7' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux: (148 commits)
  arm64: dts: qcom: apq8016-sbc: Add missing ADV7533 regulators
  ARM: dts: qcom: sdx65-mtp: Specify PM7250B SID to use
  arm64: dts: qcom: apq8016-sbc: Add overlay for usb host mode
  arm64: dts: qcom: qcm6490: Add device-tree for Fairphone 5
  dt-bindings: arm: qcom: Add QCM6490 Fairphone 5
  arm64: dts: qcom: pm8350c: Add flash led node
  arm64: dts: qcom: pm7250b: make SID configurable
  arm64: dts: qcom: sc7280: Mark some nodes as 'reserved'
  arm64: dts: qcom: msm8939: Fix iommu local address range
  arm64: dts: qcom: ipq5018: indicate that SDI should be disabled
  arm64: dts: qcom: msm8976: Fix ipc bit shifts
  arm64: dts: qcom: msm8976: Split lpass region
  arm64: dts: qcom: pm8150l: Add wled node
  arm64: dts: qcom: sa8775p: enable the inline crypto engine
  arm64: dts: qcom: msm8916/39: Fix venus memory size
  arm64: dts: qcom: msm8916/39: Move mpss_mem size to boards
  arm64: dts: qcom: msm8916/39: Disable unneeded firmware reservations
  arm64: dts: qcom: msm8939: Reserve firmware memory dynamically
  arm64: dts: qcom: msm8916: Reserve MBA memory dynamically
  arm64: dts: qcom: msm8916: Reserve firmware memory dynamically
  ...

Link: https://lore.kernel.org/r/20231015191107.854658-1-andersson@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-16  arm64: Remove cpus_have_const_cap()  (Mark Rutland)

There are no longer any users of cpus_have_const_cap(), and therefore it can be removed.

Remove cpus_have_const_cap(). At the same time, remove __cpus_have_const_cap(), as this is a trivial wrapper of alternative_has_cap_unlikely(), which can be used directly instead.

The comment for __system_matches_cap() is updated to no longer refer to cpus_have_const_cap(). As we have a number of ways to check the cpucaps, the specific suggestions are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_REPEAT_TLBI  (Mark Rutland)

In arch_tlbbatch_should_defer() we use cpus_have_const_cap() to check for ARM64_WORKAROUND_REPEAT_TLBI, but this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The cpus_have_const_cap() check in arch_tlbbatch_should_defer() is an optimization to avoid some redundant work when the ARM64_WORKAROUND_REPEAT_TLBI cpucap is detected and forces the immediate use of TLBI + DSB ISH. In the window between detecting the ARM64_WORKAROUND_REPEAT_TLBI cpucap and patching alternatives this is not a big concern and there's no need to optimize this window at the expense of subsequent usage at runtime.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The ARM64_WORKAROUND_REPEAT_TLBI cpucap is added to cpucap_is_possible() so that code can be elided entirely when this is not possible without requiring ifdeffery or IS_ENABLED() checks at each usage.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
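A sketch of the two-part pattern described above, under simplified assumptions (the cpucap, helper, and config names are those mentioned in the commit; surrounding code is elided):

  /* The erratum check becomes a pure alternative branch: patched to a
   * NOP when the cpucap is absent, with no bitmap test at runtime. */
  static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
  {
          /* Was: cpus_have_const_cap(ARM64_WORKAROUND_REPEAT_TLBI) */
          if (alternative_has_cap_unlikely(ARM64_WORKAROUND_REPEAT_TLBI))
                  return false;   /* erratum forces immediate TLBI + DSB ISH */

          return true;
  }

  /* In <asm/cpucaps.h>, the new case lets the compiler drop such code
   * entirely when the erratum is not configured (other caps elided): */
  static __always_inline bool cpucap_is_possible(const unsigned int cap)
  {
          switch (cap) {
          case ARM64_WORKAROUND_REPEAT_TLBI:
                  return IS_ENABLED(CONFIG_ARM64_WORKAROUND_REPEAT_TLBI);
          }

          return true;
  }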
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_NVIDIA_CARMEL_CNP  (Mark Rutland)

In has_useable_cnp() we use cpus_have_const_cap() to check for ARM64_WORKAROUND_NVIDIA_CARMEL_CNP, but this is not necessary and cpus_have_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

We use has_useable_cnp() to determine whether we have the system-wide ARM64_HAS_CNP cpucap. Due to the structure of the cpufeature code, we call has_useable_cnp() in two distinct cases:

1) When finalizing system capabilities, setup_system_capabilities() will call has_useable_cnp() with SCOPE_SYSTEM to determine whether all CPUs have the feature. This is called after we've detected any local cpucaps including ARM64_WORKAROUND_NVIDIA_CARMEL_CNP, but prior to patching alternatives. If ARM64_WORKAROUND_NVIDIA_CARMEL_CNP was detected, we will not detect ARM64_HAS_CNP.

2) After finalizing system capabilities, verify_local_cpu_capabilities() will call has_useable_cnp() with SCOPE_LOCAL_CPU to verify that CPUs have CNP if we previously detected it. Note that if ARM64_WORKAROUND_NVIDIA_CARMEL_CNP was detected, we will not have detected ARM64_HAS_CNP.

For case 1 we must check the system_cpucaps bitmap as this occurs prior to patching the alternatives. For case 2 we'll only call has_useable_cnp() once per subsequent onlining of a CPU, and as this isn't a fast path it's not necessary to optimize for this case.

This patch replaces the use of cpus_have_const_cap() with cpus_have_cap(), which will only generate the bitmap test and avoid generating an alternative sequence, resulting in slightly simpler and smaller code being generated. The ARM64_WORKAROUND_NVIDIA_CARMEL_CNP cpucap is added to cpucap_is_possible() so that code can be elided entirely when this is not possible.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
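A sketch of the resulting check (body simplified; has_useable_cnp(), cpus_have_cap() and has_cpuid_feature() are named in the commit):

  static bool has_useable_cnp(const struct arm64_cpu_capabilities *entry,
                              int scope)
  {
          /* A plain bitmap test works both before alternatives are
           * patched (case 1) and on the slow CPU-online path (case 2). */
          if (cpus_have_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
                  return false;

          return has_cpuid_feature(entry, scope);
  }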
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_CAVIUM_23154  (Mark Rutland)

In gic_read_iar() we use cpus_have_const_cap() to check for ARM64_WORKAROUND_CAVIUM_23154, but this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The ARM64_WORKAROUND_CAVIUM_23154 cpucap is detected and patched early on the boot CPU before the GICv3 driver is initialized and hence before gic_read_iar() is ever called. Thus it is not necessary to use cpus_have_const_cap(), and alternative_has_cap() is equivalent.

In addition, arm64's gic_read_iar() lives in irq-gic-v3.c purely for historical reasons. It was originally added prior to 32-bit arm support in commit:

  6d4e11c5e2e8cd54 ("irqchip/gicv3: Workaround for Cavium ThunderX erratum 23154")

When support for 32-bit arm was added, 32-bit arm's gic_read_iar() implementation was placed in <asm/arch_gicv3.h>, but the arm64 version was kept within irq-gic-v3.c as it depended on a static key local to irq-gic-v3.c and it was easier to add ifdeffery, which is what we did in commit:

  7936e914f7b0827c ("irqchip/gic-v3: Refactor the arm64 specific parts")

Subsequently the static key was replaced with a cpucap in commit:

  a4023f682739439b ("arm64: Add hypervisor safe helper for checking constant capabilities")

Since that commit there has been no need to keep arm64's gic_read_iar() in irq-gic-v3.c.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. For consistency, move the arm64-specific gic_read_iar() implementation over to arm64's <asm/arch_gicv3.h>. The ARM64_WORKAROUND_CAVIUM_23154 cpucap is added to cpucap_is_possible() so that code can be elided entirely when this is not possible.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
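After the move, the helper has roughly this shape in arm64's <asm/arch_gicv3.h> (a sketch; the two callees are the existing erratum and common readers named in the driver):

  static __always_inline u64 gic_read_iar(void)
  {
          /* Patched before the GICv3 driver probes, so the alternative
           * is already in its final state whenever this runs. */
          if (alternative_has_cap_unlikely(ARM64_WORKAROUND_CAVIUM_23154))
                  return gic_read_iar_cavium_thunderx();
          else
                  return gic_read_iar_common();
  }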
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_2645198  (Mark Rutland)

We use cpus_have_const_cap() to check for ARM64_WORKAROUND_2645198, but this is not necessary and alternative_has_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The ARM64_WORKAROUND_2645198 cpucap is detected and patched before any userspace translation tables exist, and the workaround is only necessary when manipulating userspace translation tables which are in use. Thus it is not necessary to use cpus_have_const_cap(), and alternative_has_cap() is equivalent.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The ARM64_WORKAROUND_2645198 cpucap is added to cpucap_is_possible() so that code can be elided entirely when this is not possible, and redundant IS_ENABLED() checks are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1742098  (Mark Rutland)

In elf_hwcap_fixup() we use cpus_have_const_cap() to check for ARM64_WORKAROUND_1742098, but this is not necessary and cpus_have_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The ARM64_WORKAROUND_1742098 cpucap is detected and patched before elf_hwcap_fixup() can run, and hence it is not necessary to use cpus_have_const_cap(). We run elf_hwcap_fixup() at most twice: once after finalizing system cpucaps, and potentially once more after detecting mismatched CPUs which support AArch32 at EL0. Due to this, it's not necessary to optimize for many calls to elf_hwcap_fixup(), and it's fine to use cpus_have_cap().

This patch replaces the use of cpus_have_const_cap() with cpus_have_cap(), which will only generate the bitmap test and avoid generating an alternative sequence, resulting in slightly simpler and smaller code being generated. For consistency with other cpucaps, the ARM64_WORKAROUND_1742098 cpucap is added to cpucap_is_possible() so that code can be elided when this is not possible. However, as we only define compat_elf_hwcap2 when CONFIG_COMPAT=y, some ifdeffery is still required within user_feature_fixup() to avoid build errors when CONFIG_COMPAT=n.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1542419  (Mark Rutland)

We use cpus_have_const_cap() to check for ARM64_WORKAROUND_1542419, but this is not necessary and cpus_have_final_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The ARM64_WORKAROUND_1542419 cpucap is detected and patched before any userspace code can run, and both __do_compat_cache_op() and ctr_read_handler() are only reachable from exceptions taken from userspace. Thus it is not necessary for either to use cpus_have_const_cap(), and cpus_have_final_cap() is equivalent.

This patch replaces the use of cpus_have_const_cap() with cpus_have_final_cap(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. Using cpus_have_final_cap() clearly documents that we do not expect this code to run before cpucaps are finalized, and will make it easier to spot issues if code is changed in future to allow these functions to be reached earlier.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_843419  (Mark Rutland)

In count_plts() and is_forbidden_offset_for_adrp() we use cpus_have_const_cap() to check for ARM64_WORKAROUND_843419, but this is not necessary and cpus_have_final_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

It's not possible to load a module in the window between detecting the ARM64_WORKAROUND_843419 cpucap and patching alternatives. The module VA range limits are initialized much later in module_init_limits(), which is a subsys_initcall, and module loading cannot happen before this. Hence it's not necessary for count_plts() or is_forbidden_offset_for_adrp() to use cpus_have_const_cap().

This patch replaces the use of cpus_have_const_cap() with cpus_have_final_cap(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. Using cpus_have_final_cap() clearly documents that we do not expect this code to run before cpucaps are finalized, and will make it easier to spot issues if code is changed in future to allow modules to be loaded earlier. The ARM64_WORKAROUND_843419 cpucap is added to cpucap_is_possible() so that code can be elided entirely when this is not possible, and redundant IS_ENABLED() checks are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
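For example, is_forbidden_offset_for_adrp() ends up roughly as the following sketch, with the former IS_ENABLED() guard subsumed by cpucap_is_possible():

  static bool is_forbidden_offset_for_adrp(void *place)
  {
          /* An ADRP at a page offset of 0xff8 or 0xffc can trigger
           * erratum 843419 on affected Cortex-A53 parts. */
          return cpus_have_final_cap(ARM64_WORKAROUND_843419) &&
                 ((u64)place & 0xfff) >= 0xff8;
  }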
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_UNMAP_KERNEL_AT_EL0  (Mark Rutland)

In arm64_kernel_unmapped_at_el0() we use cpus_have_const_cap() to check for ARM64_UNMAP_KERNEL_AT_EL0, but this is only necessary so that arm64_get_bp_hardening_vector() and this_cpu_set_vectors() can run prior to alternatives being patched. Otherwise this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The ARM64_UNMAP_KERNEL_AT_EL0 cpucap is a system-wide feature that is detected and patched before any translation tables are created for userspace. In the window between detecting the ARM64_UNMAP_KERNEL_AT_EL0 cpucap and patching alternatives, most users of arm64_kernel_unmapped_at_el0() do not need to know that the cpucap has been detected:

* As KVM is initialized after cpucaps are finalized, no use of arm64_kernel_unmapped_at_el0() in the KVM code is reachable during this window.

* The arm64_mm_context_get() function in arch/arm64/mm/context.c is only called after the SMMU driver is brought up after alternatives have been patched. Thus this can safely use cpus_have_final_cap() or alternative_has_cap_*(). Similarly the asids_update_limit() function is called after alternatives have been patched as an arch_initcall, and this can safely use cpus_have_final_cap() or alternative_has_cap_*(). Similarly we do not expect an ASID rollover to occur between cpucaps being detected and patching alternatives. Thus set_reserved_asid_bits() can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The __tlbi_user() and __tlbi_user_level() macros are not used during this window, and only need to invalidate additional entries once userspace translation tables have been active on a CPU. Thus these can safely use alternative_has_cap_*().

* The xen_kernel_unmapped_at_usr() function is not used during this window as it is only used in a late_initcall. Thus this can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The arm64_get_meltdown_state() function is not used during this window. It is only used by cpu_show_meltdown() and KVM code, both of which are only used after cpucaps have been finalized. Thus this can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The tls_thread_switch() function uses arm64_kernel_unmapped_at_el0() as an optimization to avoid zeroing tpidrro_el0 when KPTI is enabled and this will be trampled by the KPTI trampoline. It doesn't matter if this continues to zero the register during the window between detecting the cpucap and patching alternatives, so this can safely use alternative_has_cap_*().

* The sdei_arch_get_entry_point() and do_sdei_event() functions aren't reachable at this time as the SDEI driver is registered later by acpi_init() -> acpi_ghes_init() -> sdei_init(), where acpi_init is a subsys_initcall. Thus these can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The uses under drivers/ aren't reachable at this time as the drivers are registered later:
  - TRBE is registered via module_init()
  - SMMUv3 is registered via module_driver()
  - SPE is registered via module_init()

* The arm64_get_bp_hardening_vector() and this_cpu_set_vectors() functions need to run on boot CPUs prior to patching alternatives. As these are only called during the onlining of a CPU, it's fine to perform a system_cpucaps bitmap test using cpus_have_cap().

This patch modifies this_cpu_set_vectors() to use cpus_have_cap(), and replaces all other use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The ARM64_UNMAP_KERNEL_AT_EL0 cpucap is added to cpucap_is_possible() so that code can be elided entirely when this is not possible.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64}  (Mark Rutland)

In system_supports_{sve,sme,sme2,fa64}() we use cpus_have_const_cap() to check for the relevant cpucaps, but this is only necessary so that sve_setup() and sme_setup() can run prior to alternatives being patched, and otherwise alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

All of system_supports_{sve,sme,sme2,fa64}() will return false prior to system cpucaps being detected. In the window between system cpucaps being detected and patching alternatives, we need system_supports_sve() and system_supports_sme() to run to initialize SVE and SME properties, but all other users of system_supports_{sve,sme,sme2,fa64}() don't depend on the relevant cpucap becoming true until alternatives are patched:

* No KVM code runs until after alternatives are patched, and so this can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The cpuid_cpu_online() callback in arch/arm64/kernel/cpuinfo.c is registered later from cpuinfo_regs_init() as a device_initcall, and so this can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The entry, signal, and ptrace code isn't reachable until userspace has run, and so this can safely use cpus_have_final_cap() or alternative_has_cap_*().

* Currently perf_reg_validate() will un-reserve the PERF_REG_ARM64_VG pseudo-register before alternatives are patched, and before sve_setup() has run. If a sampling event is created early enough, this would allow perf_ext_reg_value() to sample (the as-yet uninitialized) thread_struct::vl[] prior to alternatives being patched. It would be preferable to defer this until alternatives are patched, and this can safely use alternative_has_cap_*().

* The context-switch code will run during this window as part of stop_machine() used during alternatives_patch_all(), and potentially for other work if other kernel threads are created early. No threads require the use of SVE/SME/SME2/FA64 prior to alternatives being patched, and it would be preferable for the related context-switch logic to take effect after alternatives are patched so that this is guaranteed to see a consistent system-wide state (e.g. anything initialized by sve_setup() and sme_setup()). This can safely use alternative_has_cap_*().

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The sve_setup() and sme_setup() functions are modified to use cpus_have_cap() directly so that they can observe the cpucaps being set prior to alternatives being patched.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
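A sketch of the resulting split between the fast-path predicate and the early setup code (bodies simplified; the SVE case shown stands in for SME/SME2/FA64 as well):

  static __always_inline bool system_supports_sve(void)
  {
          /* Pure alternative branch: false until patched, no bitmap test. */
          return alternative_has_cap_unlikely(ARM64_SVE);
  }

  void __init sve_setup(void)
  {
          /* Runs before alternatives are patched, so test the bitmap
           * directly rather than relying on a patched branch. */
          if (!cpus_have_cap(ARM64_SVE))
                  return;

          /* ... initialize SVE vector-length properties ... */
  }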
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2  (Mark Rutland)

In arm64_apply_bp_hardening() we use cpus_have_const_cap() to check for ARM64_SPECTRE_V2, but this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The cpus_have_const_cap() check in arm64_apply_bp_hardening() is intended to avoid the overhead of looking up and invoking a per-cpu function pointer when no branch predictor hardening is required. The arm64_apply_bp_hardening() function itself is called in two distinct flows:

1) When handling certain exceptions taken from EL0, where the PC could be a TTBR1 address and hence might have trained a branch predictor. As cpucaps are detected and alternatives are patched long before it is possible to execute userspace, it is not necessary to use cpus_have_const_cap() for these cases, and cpus_have_final_cap() or alternative_has_cap() would be preferable.

2) When switching between tasks in check_and_switch_context(). This can be called before cpucaps are detected and alternatives are patched, but this is long before the kernel mounts filesystems or accepts any input. At this stage the kernel hasn't loaded any secrets and there is no potential for hostile branch predictor training. Once cpucaps have been finalized and alternatives have been patched, switching tasks will invalidate any prior predictions. Hence it is not necessary to use cpus_have_const_cap() for this case.

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_SSBS  (Mark Rutland)

In ssbs_thread_switch() we use cpus_have_const_cap() to check for ARM64_SSBS, but this is not necessary and alternative_has_cap_*() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The cpus_have_const_cap() check in ssbs_thread_switch() is an optimization to avoid the overhead of spectre_v4_enable_task_mitigation() where all CPUs implement SSBS and naturally preserve the SSBS bit in SPSR_ELx. In the window between detecting the ARM64_SSBS system-wide and patching alternative branches it is benign to continue to call spectre_v4_enable_task_mitigation().

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
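Sketch of the resulting switch path (the kernel-thread early return present in the real function is elided; comments are illustrative):

  static void ssbs_thread_switch(struct task_struct *next)
  {
          /* If all CPUs implement SSBS, PSTATE.SSBS is naturally
           * preserved in SPSR_ELx and no per-task mitigation work
           * is needed on context switch. */
          if (alternative_has_cap_unlikely(ARM64_SSBS))
                  return;

          spectre_v4_enable_task_mitigation(next);
  }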
2023-10-16  arm64: Avoid cpus_have_const_cap() for ARM64_MTE  (Mark Rutland)

In system_supports_mte() we use cpus_have_const_cap() to check for ARM64_MTE, but this is not necessary and cpus_have_final_boot_cap() would be preferable.

For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching.

Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching.

The ARM64_MTE cpucap is a boot cpu feature which is detected and patched early on the boot CPU under smp_prepare_boot_cpu(). In the window between detecting the ARM64_MTE cpucap and patching alternatives, nothing depends on the ARM64_MTE cpucap:

* The kasan_hw_tags_enabled() helper depends upon the kasan_flag_enabled static key, which is initialized later in kasan_init_hw_tags() after alternatives have been applied.

* No KVM code is called during this window, and KVM is not initialized until after system cpucaps have been detected and patched. KVM code can safely use cpus_have_final_cap() or alternative_has_cap_*().

* We don't context-switch prior to patching boot alternatives, and thus mte_thread_switch() is not reachable during this window. Thus, we can safely use cpus_have_final_boot_cap() or alternative_has_cap_*() in the context-switch code.

* IRQ and FIQ are masked during this window, and we can only take SError and Debug exceptions. SError exceptions are fatal at this point in time, and we do not expect to take Debug exceptions, thus:
  - It's fine to leave TCO set for exceptions taken during this window, and mte_disable_tco_entry() doesn't need to do anything.
  - We don't need to detect and report asynchronous tag check faults during this window, and neither mte_check_tfsr_entry() nor mte_check_tfsr_exit() need to do anything.
  Since we want to report any SErrors taken during this window, these cannot safely use cpus_have_final_boot_cap() or cpus_have_final_cap(), but these can safely use alternative_has_cap_*().

* The __set_pte_at() function is not used during this window. It is possible for this to be used on kernel mappings prior to boot cpucaps being finalized, so this cannot safely use cpus_have_final_boot_cap() or cpus_have_final_cap(), but this can safely use alternative_has_cap_*().

* No userspace translation tables have been created yet, and swap has not been initialized yet. Thus swapping is not possible and none of the following are called:
  - arch_thp_swp_supported()
  - arch_prepare_to_swap()
  - arch_swap_invalidate_page()
  - arch_swap_invalidate_area()
  - arch_swap_restore()
  These can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The elfcore functions are only reachable after userspace is brought up, which happens after system cpucaps have been detected and patched. Thus the elfcore code can safely use cpus_have_final_cap() or alternative_has_cap_*().

* Hibernation is only possible after userspace is brought up, which happens after system cpucaps have been detected and patched. Thus the hibernate code can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The set_tagged_addr_ctrl() function is only reachable after userspace is brought up, which happens after system cpucaps have been detected and patched. Thus this can safely use cpus_have_final_cap() or alternative_has_cap_*().

* The copy_user_highpage() and copy_highpage() functions are not used during this window, and can safely use alternative_has_cap_*().

This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_TLB_RANGEMark Rutland
We use cpus_have_const_cap() to check for ARM64_HAS_TLB_RANGE, but this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. In the window between detecting the ARM64_HAS_TLB_RANGE cpucap and patching alternative branches, we do not perform any TLB invalidation, and even if we were to perform TLB invalidation here it would not be functionally necessary to optimize this by using range invalidation. Hence there's no need to use cpus_have_const_cap(), and alternative_has_cap_unlikely() is sufficient. This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
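A sketch of the resulting helper, assuming the usual naming (an approximation, not a verbatim quote of the patch):

  static inline bool system_supports_tlb_range(void)
  {
          return alternative_has_cap_unlikely(ARM64_HAS_TLB_RANGE);
  }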
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_WFXTMark Rutland
In __delay() we use cpus_have_const_cap() to check for ARM64_HAS_WFXT, but this is not necessary and alternative_has_cap() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. The cpus_have_const_cap() check in __delay() is an optimization to use WFIT and WFET in preference to busy-polling the counter and/or using regular WFE and relying upon the architected timer event stream. It is not necessary to apply this optimization in the window between detecting the ARM64_HAS_WFXT cpucap and patching alternatives. This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
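A simplified sketch of the resulting structure of __delay(); the deadline arithmetic and event-stream period handling are elided, and a wfet() helper taking an absolute counter deadline is assumed:

  void __delay(unsigned long cycles)
  {
          cycles_t start = get_cycles();
          u64 end = start + cycles;

          if (alternative_has_cap_unlikely(ARM64_HAS_WFXT)) {
                  /* WFET: sleep until the absolute counter deadline. */
                  while ((get_cycles() - start) < cycles)
                          wfet(end);
          } else if (arch_timer_evtstrm_available()) {
                  /* Regular WFE, woken periodically by the event stream. */
                  while ((get_cycles() - start) < cycles)
                          wfe();
          } else {
                  /* Busy-poll the counter. */
                  while ((get_cycles() - start) < cycles)
                          cpu_relax();
          }
  }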
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_RNGMark Rutland
In __cpu_has_rng() we use cpus_have_const_cap() to check for ARM64_HAS_RNG, but this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. In the window between detecting the ARM64_HAS_RNG cpucap and patching alternative branches, nothing which calls __cpu_has_rng() can run, and hence it's not necessary to use cpus_have_const_cap(). This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
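The resulting helper reduces to roughly the following (a sketch, not a verbatim quote):

  static inline bool __cpu_has_rng(void)
  {
          return alternative_has_cap_unlikely(ARM64_HAS_RNG);
  }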
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_EPANMark Rutland
We use cpus_have_const_cap() to check for ARM64_HAS_EPAN but this is not necessary and alternative_has_cap() or cpus_have_cap() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. The ARM64_HAS_EPAN cpucap is used to affect two things: 1) The permission bits used for userspace executable mappings, which are chosen by adjust_protection_map(), which is an arch_initcall. This is called after the ARM64_HAS_EPAN cpucap has been detected and alternatives have been patched, and before any userspace translation tables exist. 2) The handling of faults taken from (user or kernel) accesses to userspace executable mappings in do_page_fault(). Userspace translation tables are created after adjust_protection_map() is called, and hence after the ARM64_HAS_EPAN cpucap has been detected and alternatives have been patched. Neither of these run until after the ARM64_HAS_EPAN cpucap has been detected and alternatives have been patched, and hence there's no need to use cpus_have_const_cap(). Since adjust_protection_map() is only executed once at boot time it would be best for it to use cpus_have_cap(), and since do_page_fault() is executed frequently it would be best for it to use alternative_has_cap_unlikely(). This patch replaces the uses of cpus_have_const_cap() with cpus_have_cap() and alternative_has_cap_unlikely(), which will avoid generating redundant code, and should be better for all subsequent calls at runtime. The ARM64_HAS_EPAN cpucap is added to cpucap_is_possible() so that code can be elided entirely when this is not possible. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
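A sketch of the two resulting call styles (the adjust_protection_map() body approximates the execute-only adjustment; treat details as assumptions):

  /* Run-once boot path: a direct bitmap test is cheapest. */
  static int __init adjust_protection_map(void)
  {
          if (cpus_have_cap(ARM64_HAS_EPAN)) {
                  protection_map[VM_EXEC] = PAGE_EXECONLY;
                  protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
          }
          return 0;
  }
  arch_initcall(adjust_protection_map);

  /* Hot fault path: a patched branch with no runtime bitmap test. */
  if (alternative_has_cap_unlikely(ARM64_HAS_EPAN))
          /* treat a TTBR0 translation fault as a potential exec-only fault */
          return true;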
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_PANMark Rutland
In system_uses_hw_pan() we use cpus_have_const_cap() to check for ARM64_HAS_PAN, but this is only necessary so that the system_uses_ttbr0_pan() check in setup_cpu_features() can run prior to alternatives being patched, and otherwise this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. The ARM64_HAS_PAN cpucap is used by system_uses_hw_pan() and system_uses_ttbr0_pan() depending on whether CONFIG_ARM64_SW_TTBR0_PAN is selected, and: * We only use system_uses_hw_pan() directly in __sdei_handler(), which isn't reachable until after alternatives have been patched, and for this it is safe to use alternative_has_cap_*(). * We use system_uses_ttbr0_pan() in a few places: - In check_and_switch_context() and cpu_uninstall_idmap(), which will defer installing a translation table into TTBR0 when the ARM64_HAS_PAN cpucap is not detected. Prior to patching alternatives, all CPUs will be using init_mm with the reserved ttbr0 translation tables installed in TTBR0, so these can safely use alternative_has_cap_*(). - In update_saved_ttbr0(), which will only save the active TTBR0 into a per-thread variable when the ARM64_HAS_PAN cpucap is not detected. Prior to patching alternatives, all CPUs will be using init_mm with the reserved ttbr0 translation tables installed in TTBR0, so these can safely use alternative_has_cap_*(). - In efi_set_pgd(), which will handle check_and_switch_context() deferring the installation of TTBR0 when TTBR0 PAN is detected. The EFI runtime services are not initialized until after alternatives have been patched, and so this can safely use alternative_has_cap_*() or cpus_have_final_cap(). - In uaccess_ttbr0_disable() and uaccess_ttbr0_enable(), where we'll avoid installing/uninstalling a translation table in TTBR0 when ARM64_HAS_PAN is detected. Prior to patching alternatives we will not perform any uaccess and will not call uaccess_ttbr0_disable() or uaccess_ttbr0_enable(), and so these can safely use alternative_has_cap_*() or cpus_have_final_cap(). - In is_el1_permission_fault() where we will consider a translation fault on a TTBR0 address to be a permission fault when ARM64_HAS_PAN is not detected *and* we have set the PAN bit in the SPSR (which tells us that in the interrupted context, TTBR0 pointed at the reserved zero ttbr).
In the window between detecting system cpucaps and patching alternatives we should not perform any accesses to TTBR0 addresses, and no userspace translation tables exist until after patching alternatives. Thus it is safe for this to use alternative_has_cap_*(). This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. So that the check for TTBR0 PAN in setup_cpu_features() can run prior to alternatives being patched, the call to system_uses_ttbr0_pan() is replaced with an explicit check of the ARM64_HAS_PAN bit in the system_cpucaps bitmap. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
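A sketch of the resulting helpers, plus the explicit bitmap test used in setup_cpu_features() (the pr_info wording is an assumption):

  static inline bool system_uses_hw_pan(void)
  {
          return alternative_has_cap_unlikely(ARM64_HAS_PAN);
  }

  static inline bool system_uses_ttbr0_pan(void)
  {
          return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
                 !system_uses_hw_pan();
  }

  /* In setup_cpu_features(), before alternatives are patched: */
  if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
      !cpus_have_cap(ARM64_HAS_PAN))
          pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");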
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_GIC_PRIO_MASKINGMark Rutland
In system_uses_irq_prio_masking() we use cpus_have_const_cap() to check for ARM64_HAS_GIC_PRIO_MASKING, but this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. When CONFIG_ARM64_PSEUDO_NMI=y the ARM64_HAS_GIC_PRIO_MASKING cpucap is a strict boot cpu feature which is detected and patched early on the boot cpu, which both happen in smp_prepare_boot_cpu(). In the window between when the ARM64_HAS_GIC_PRIO_MASKING cpucap is detected and when alternatives are patched we don't run any code that depends upon the ARM64_HAS_GIC_PRIO_MASKING cpucap: * We leave DAIF.IF set until after boot alternatives are patched, and interrupts are unmasked later in init_IRQ(), so we cannot reach IRQ/FIQ entry code and will not use irqs_priority_unmasked(). * We don't call any code which uses arm_cpuidle_save_irq_context() and arm_cpuidle_restore_irq_context() during this window. * We don't call start_thread_common() during this window. * The local_irq_*() code in <asm/irqflags.h> depends solely on an alternative branch since commit: a5f61cc636f48bdf ("arm64: irqflags: use alternative branches for pseudo-NMI logic") ... and hence will use the default (DAIF-only) masking behaviour until alternatives are patched. * Secondary CPUs are brought up later after alternatives are patched, and alternatives are patched on the boot CPU immediately prior to calling init_gic_priority_masking(), so we'll correctly initialize interrupt masking regardless. This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. As this makes system_uses_irq_prio_masking() equivalent to __irqflags_uses_pmr(), the latter is removed and replaced with the former for consistency. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Mark Brown <broonie@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
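The resulting helper reduces to roughly the following sketch; the CONFIG_ARM64_PSEUDO_NMI=n case is assumed to be folded away via cpucap_is_possible(), so no explicit IS_ENABLED() test appears:

  static inline bool system_uses_irq_prio_masking(void)
  {
          return alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
  }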
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_DITMark Rutland
In __cpu_suspend_exit() we use cpus_have_const_cap() to check for ARM64_HAS_DIT but this is not necessary and cpus_have_final_cap() or alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. The ARM64_HAS_DIT cpucap is detected and patched (along with all other cpucaps) before __cpu_suspend_exit() can run. We'll only use __cpu_suspend_exit() as part of PSCI cpuidle or hibernation, and both of these are initialized after system cpucaps are detected and patched: the PSCI cpuidle driver is registered with a device_initcall, hibernation restoration occurs in a late_initcall, and hibernation saving is driven by userspace. Therefore it is not necessary to use cpus_have_const_cap(), and using alternative_has_cap_*() or cpus_have_final_cap() is sufficient. This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. To clearly document the ordering relationship between suspend/resume and alternatives patching, an explicit check for system_capabilities_finalized() is added to cpu_suspend() along with a comment block, which will make it easier to spot issues if code is changed in future to allow these functions to be reached earlier. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
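A sketch of the documented ordering check added to cpu_suspend(); the comment wording, error code, and the elided-body stand-in are approximations:

  int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
  {
          /*
           * The resume path in __cpu_suspend_exit() relies on patched
           * alternatives (e.g. for ARM64_HAS_DIT), so suspend must not
           * be reachable before cpucaps are finalized.
           */
          if (WARN_ON(!system_capabilities_finalized()))
                  return -EBUSY;

          /* Elided: save context, call fn(), restore in __cpu_suspend_exit(). */
          return __cpu_suspend_enter_elided(arg, fn);  /* hypothetical stand-in */
  }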
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CNPMark Rutland
In system_supports_cnp() we use cpus_have_const_cap() to check for ARM64_HAS_CNP, but this is only necessary so that the cpu_enable_cnp() callback can run prior to alternatives being patched, and otherwise this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. The cpu_enable_cnp() callback is run immediately after the ARM64_HAS_CNP cpucap is detected system-wide under setup_system_capabilities(), prior to alternatives being patched. During this window cpu_enable_cnp() uses cpu_replace_ttbr1() to set the CNP bit for the swapper_pg_dir in TTBR1. No other users of the ARM64_HAS_CNP cpucap need the up-to-date value during this window: * As KVM isn't initialized yet, kvm_get_vttbr() isn't reachable. * As cpuidle isn't initialized yet, __cpu_suspend_exit() isn't reachable. * At this point all CPUs are using the swapper_pg_dir with a reserved ASID in TTBR1, and the idmap_pg_dir in TTBR0, so neither check_and_switch_context() nor cpu_do_switch_mm() need to do anything special. This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. To allow cpu_enable_cnp() to function prior to alternatives being patched, cpu_replace_ttbr1() is split into cpu_replace_ttbr1() and cpu_enable_swapper_cnp(), with the former only used for early TTBR1 replacement, and the latter used by both cpu_enable_cnp() and __cpu_suspend_exit(). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
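A rough sketch of the split (signatures simplified; treat names other than cpu_enable_swapper_cnp() and cpu_replace_ttbr1() as approximations):

  /* Shared low-level helper: install a pgd into TTBR1, with or without CNP. */
  static inline void __cpu_replace_ttbr1(pgd_t *pgdp, bool cnp)
  {
          /*
           * Elided: switch to the idmap, install pgdp into TTBR1
           * (ORing in TTBR_CNP_BIT when cnp is true), then switch back.
           */
  }

  /* Used by cpu_enable_cnp() and __cpu_suspend_exit(). */
  static inline void cpu_enable_swapper_cnp(void)
  {
          __cpu_replace_ttbr1(lm_alias(swapper_pg_dir), true);
  }

  /* Early TTBR1 replacement only, before CNP has been decided. */
  static inline void cpu_replace_ttbr1(pgd_t *pgdp)
  {
          __cpu_replace_ttbr1(pgdp, false);
  }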
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CACHE_DICMark Rutland
In icache_inval_all_pou() we use cpus_have_const_cap() to check for ARM64_HAS_CACHE_DIC, but this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. The cpus_have_const_cap() check in icache_inval_all_pou() is an optimization to skip a redundant (but benign) IC IALLUIS + DSB ISH sequence when all CPUs in the system have DIC. In the window between detecting the ARM64_HAS_CACHE_DIC cpucap and patching alternative branches there is only a single potential call to icache_inval_all_pou() (in the alternatives patching itself), which there's no need to optimize for at the expense of other callers. This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. This also aligns better with the way we patch the assembly cache maintenance sequences in arch/arm64/mm/cache.S. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
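The resulting helper looks roughly like the following (a sketch, not a verbatim quote of the patch):

  static inline void icache_inval_all_pou(void)
  {
          /* DIC: instruction fetch is coherent to the PoU; nothing to do. */
          if (alternative_has_cap_unlikely(ARM64_HAS_CACHE_DIC))
                  return;

          asm("ic ialluis");
          dsb(ish);
  }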
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_BTIMark Rutland
In system_supports_bti() we use cpus_have_const_cap() to check for ARM64_HAS_BTI, but this is not necessary and alternative_has_cap_*() or cpus_have_final_*cap() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. When CONFIG_ARM64_BTI_KERNEL=y, the ARM64_HAS_BTI cpucap is a strict boot cpu feature which is detected and patched early on the boot cpu. All uses guarded by CONFIG_ARM64_BTI_KERNEL happen after the boot CPU has detected ARM64_HAS_BTI and patched boot alternatives, and hence can safely use alternative_has_cap_*() or cpus_have_final_boot_cap(). Regardless of CONFIG_ARM64_BTI_KERNEL, all other uses of ARM64_HAS_BTI happen after system capabilities have been finalized and alternatives have been patched. Hence these can safely use alternative_has_cap_*() or cpus_have_final_cap(). This patch splits system_supports_bti() into system_supports_bti() and system_supports_bti_kernel(), with the former handling where the cpucap affects userspace functionality, and the latter handling where the cpucap affects kernel functionality. The use of cpus_have_const_cap() is replaced by cpus_have_final_cap() in system_supports_bti(), and cpus_have_final_boot_cap() in system_supports_bti_kernel(). This will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The use of cpus_have_final_cap() and cpus_have_final_boot_cap() will make it easier to spot if code is changed such that these run before the ARM64_HAS_BTI cpucap is guaranteed to have been finalized. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
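A sketch of the resulting pair of helpers, following the split described above (treat the bodies as an approximation):

  static inline bool system_supports_bti(void)
  {
          /* Userspace-facing: valid once system cpucaps are finalized. */
          return cpus_have_final_cap(ARM64_HAS_BTI);
  }

  static inline bool system_supports_bti_kernel(void)
  {
          /* Kernel-facing: valid once boot alternatives are patched. */
          return IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) &&
                 cpus_have_final_boot_cap(ARM64_HAS_BTI);
  }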
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_ARMv8_4_TTLMark Rutland
In __tlbi_level() we use cpus_have_const_cap() to check for ARM64_HAS_ARMv8_4_TTL, but this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. In the window between detecting the ARM64_HAS_ARMv8_4_TTL cpucap and patching alternative branches, we do not perform any TLB invalidation, and even if we were to perform TLB invalidation here it would not be functionally necessary to optimize this by using the TTL hint. Hence there's no need to use cpus_have_const_cap(), and alternative_has_cap_unlikely() is sufficient. This patch replaces the use of cpus_have_const_cap() with alternative_has_cap_unlikely(), which will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
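A sketch of the resulting macro; the mask and helper names follow the existing TLBI code, but treat the details as an approximation:

  #define __tlbi_level(op, addr, level) do {                           \
          u64 arg = addr;                                              \
                                                                       \
          if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) &&   \
              level) {                                                 \
                  u64 ttl = level & 3;                                 \
                                                                       \
                  /* Encode the translation granule and level hint. */ \
                  ttl |= get_trans_granule() << 2;                     \
                  arg &= ~TLBI_TTL_MASK;                               \
                  arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);               \
          }                                                            \
                                                                       \
          __tlbi(op, arg);                                             \
  } while (0)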
2023-10-16arm64: Avoid cpus_have_const_cap() for ARM64_HAS_{ADDRESS,GENERIC}_AUTHMark Rutland
In system_supports_address_auth() and system_supports_generic_auth() we use cpus_have_const_cap() to check for ARM64_HAS_ADDRESS_AUTH and ARM64_HAS_GENERIC_AUTH respectively, but this is not necessary and alternative_has_cap_*() would be preferable. For historical reasons, cpus_have_const_cap() is more complicated than it needs to be. Before cpucaps are finalized, it will perform a bitmap test of the system_cpucaps bitmap, and once cpucaps are finalized it will use an alternative branch. This used to be necessary to handle some race conditions in the window between cpucap detection and the subsequent patching of alternatives and static branches, where different branches could be out-of-sync with one another (or w.r.t. alternative sequences). Now that we use alternative branches instead of static branches, these are all patched atomically w.r.t. one another, and there are only a handful of cases that need special care in the window between cpucap detection and alternative patching. Due to the above, it would be nice to remove cpus_have_const_cap(), and migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(), or cpus_have_cap() depending on their requirements. This will remove redundant instructions and improve code generation, and will make it easier to determine how each callsite will behave before, during, and after alternative patching. The ARM64_HAS_ADDRESS_AUTH cpucap is a boot cpu feature which is detected and patched early on the boot CPU before any pointer authentication keys are enabled via their respective SCTLR_ELx.EN* bits. Nothing which uses system_supports_address_auth() is called before the boot alternatives are patched. Thus it is safe for system_supports_address_auth() to use cpus_have_final_boot_cap() to check for ARM64_HAS_ADDRESS_AUTH. The ARM64_HAS_GENERIC_AUTH cpucap is a system feature which is detected on all CPUs, then finalized and patched under setup_system_capabilities(). We use system_supports_generic_auth() in a few places: * The pac_generic_keys_get() and pac_generic_keys_set() functions are only reachable from system calls once userspace is up and running. As cpucaps are finalized long before userspace runs, these can safely use alternative_has_cap_*() or cpus_have_final_cap(). * The ptrauth_prctl_reset_keys() function is only reachable from system calls once userspace is up and running. As cpucaps are finalized long before userspace runs, this can safely use alternative_has_cap_*() or cpus_have_final_cap(). * The ptrauth_keys_install_user() function is used during context-switch. This is called prior to alternatives being applied, and so cannot use cpus_have_final_cap(), but as this only needs to switch the APGA key for userspace tasks, it's safe to use alternative_has_cap_*(). * The ptrauth_keys_init_user() function is used to initialize userspace keys, and is only reachable after system cpucaps have been finalized and patched. Thus this can safely use alternative_has_cap_*() or cpus_have_final_cap(). * The system_has_full_ptr_auth() helper function is only used by KVM code, which is only reachable after system cpucaps have been finalized and patched. Thus this can safely use alternative_has_cap_*() or cpus_have_final_cap(). This patch modifies system_supports_address_auth() to use cpus_have_final_boot_cap() to check ARM64_HAS_ADDRESS_AUTH, and modifies system_supports_generic_auth() to use alternative_has_cap_unlikely() to check ARM64_HAS_GENERIC_AUTH.
In either case this will avoid generating code to test the system_cpucaps bitmap and should be better for all subsequent calls at runtime. The use of cpus_have_final_boot_cap() will make it easier to spot if code is changed such that these run before the relevant cpucap is guaranteed to have been finalized. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
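A sketch of the two resulting helpers, following the modifications described above (treat the bodies as an approximation):

  static inline bool system_supports_address_auth(void)
  {
          return cpus_have_final_boot_cap(ARM64_HAS_ADDRESS_AUTH);
  }

  static inline bool system_supports_generic_auth(void)
  {
          return alternative_has_cap_unlikely(ARM64_HAS_GENERIC_AUTH);
  }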
2023-10-16arm64: Use a positive cpucap for FP/SIMDMark Rutland
Currently we have a negative cpucap which describes the *absence* of FP/SIMD rather than *presence* of FP/SIMD. This largely works, but is somewhat awkward relative to other cpucaps that describe the presence of a feature, and it would be nicer to have a cpucap which describes the presence of FP/SIMD: * This will allow the cpucap to be treated as a standard ARM64_CPUCAP_SYSTEM_FEATURE, which can be detected with the standard has_cpuid_feature() function and ARM64_CPUID_FIELDS() description. * This ensures that the cpucap will only transition from not-present to present, reducing the risk of unintentional and/or unsafe usage of FP/SIMD before cpucaps are finalized. * This will allow using arm64_cpu_capabilities::cpu_enable() to enable the use of FP/SIMD later, with FP/SIMD being disabled at boot time otherwise. This will ensure that any unintentional and/or unsafe usage of FP/SIMD prior to this is trapped, and will ensure that FP/SIMD is never unintentionally enabled for userspace in mismatched big.LITTLE systems. This patch replaces the negative ARM64_HAS_NO_FPSIMD cpucap with a positive ARM64_HAS_FPSIMD cpucap, making changes as described above. Note that as FP/SIMD will now be trapped when not supported system-wide, do_fpsimd_acc() must handle these traps in the same way as for SVE and SME. The commentary in fpsimd_restore_current_state() is updated to describe the new scheme. No users of system_supports_fpsimd() need to know that FP/SIMD is available prior to alternatives being patched, so this is updated to use alternative_has_cap_likely() to check for the ARM64_HAS_FPSIMD cpucap, without generating code to test the system_cpucaps bitmap. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
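A sketch of the resulting check, per the description above (an approximation, not a verbatim quote):

  static inline bool system_supports_fpsimd(void)
  {
          /* "likely" variant: the no-FP/SIMD case is the rare one. */
          return alternative_has_cap_likely(ARM64_HAS_FPSIMD);
  }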
2023-10-16arm64: Rename SVE/SME cpu_enable functionsMark Rutland
The arm64_cpu_capabilities::cpu_enable() callbacks for SVE, SME, SME2, and FA64 are named with an unusual "${feature}_kernel_enable" pattern rather than the much more common "cpu_enable_${feature}". Now that we only use these as cpu_enable() callbacks, it would be nice to have them match the usual scheme. This patch renames the cpu_enable() callbacks to match this scheme. At the same time, the comment above cpu_enable_sve() is removed for consistency with the other cpu_enable() callbacks. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-10-16arm64: Use build-time assertions for cpucap orderingMark Rutland
Both sme2_kernel_enable() and fa64_kernel_enable() need to run after sme_kernel_enable(). This happens to be true today as ARM64_SME has a lower index than either ARM64_SME2 or ARM64_SME_FA64, and both functions have a comment to this effect. It would be nicer to have a build-time assertion like we have for can_use_gic_priorities() and has_gic_prio_relaxed_sync(), as that way it will be harder to miss any potential breakage. This patch replaces the comments with build-time assertions. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
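Since cpu_enable() callbacks run in cpucap index order, the ordering requirement can be expressed directly against the cpucap numbers, roughly as follows (a sketch of the pattern, not a verbatim quote):

  /* In sme2_kernel_enable(), replacing the ordering comment: */
  BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME);

  /* In fa64_kernel_enable(), likewise: */
  BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME);

A violated assertion then fails the build rather than silently misordering the enables.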
2023-10-16arm64: Explicitly save/restore CPACR when probing SVE and SMEMark Rutland
When a CPU is onlined we first probe for supported features and properties, and then we subsequently enable features that have been detected. This is a little problematic for SVE and SME, as some properties (e.g. vector lengths) cannot be probed while they are disabled. Due to this, the code probing for SVE properties has to enable SVE for EL1 prior to probing, and the code probing for SME properties has to enable SME for EL1 prior to probing. We never disable SVE or SME for EL1 after probing. It would be a little nicer to transiently enable SVE and SME during probing, leaving them both disabled unless explicitly enabled, as this would make it much easier to catch unintentional usage (e.g. when they are not present system-wide). This patch reworks the SVE and SME feature probing code to only transiently enable support at EL1, disabling after probing is complete. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
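A sketch of the save/enable/restore pattern for the SVE case; the helper names, CPACR field names, and the probing callee are plausible but should be read as assumptions:

  static unsigned long cpacr_save_enable_kernel_sve(void)
  {
          unsigned long old = read_sysreg(cpacr_el1);

          /* Transiently enable FP/SIMD and SVE at EL1 for probing. */
          write_sysreg(old | CPACR_EL1_FPEN | CPACR_EL1_ZEN, cpacr_el1);
          isb();
          return old;
  }

  static void cpacr_restore(unsigned long cpacr)
  {
          /* Put the EL1 traps back exactly as we found them. */
          write_sysreg(cpacr, cpacr_el1);
          isb();
  }

  /* Probing then becomes: */
  unsigned long cpacr = cpacr_save_enable_kernel_sve();
  vec_probe_vqs(info, ...);  /* elided: read vector lengths while SVE is usable */
  cpacr_restore(cpacr);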
2023-10-16Merge tag 'imx-dt64-6.7' of ↵Arnd Bergmann
git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into soc/dt i.MX arm64 device tree changes for 6.7 - New board support: TQ-Systems LS1043A/LS1046A and LS1088 based boards, VAR-SOM-MX6 SoM, SolidRun LX2162A SoM & Clearfog, and phyGATE-Tauri i.MX 8M Mini board. - A set of changes from Adam Ford adding audio related devices for i.MX8M SoCs, migrating sound card to simple-audio-card for imx8mm-beacon board, and adding DMIC support i.MX8M Beacon boards. - A series from Alexander Stein to add LVDS overlay support for i.MX8M based MBA8Mx boards. - A couple of changes from Cem Tenruh to add gpio-line-names for i.MX8MP based phycore boards. - A bunch of dt-schema check fixes from Fabio Estevam. - A few changes from Frank Li to add edma devices and enable UART support for i.MX93 and i.MX8 SoCs and related boards. - A series from Marek Vasut to improve various aspects of i.MX8MP based DHCOM boards support. - A series from Teresa Remmet to enable Flexcan, USB and RS232/RS485 support for imx8mp-phyboard-pollux board. - A number of changes from Tim Harvey to add imx219 overlay and TPM device support for Gateworks boards. - Other small and random changes. * tag 'imx-dt64-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux: (101 commits) arm64: dts: imx8mp: Drop i.MX8MP DHCOM rev.100 PHY address workaround from PDK3 DT arm64: dts: imx8mp: Update i.MX8MP DHCOM SoM DT to production rev.200 arm64: dts: imx8mp: Add UART1 and RTC wake up source on DH i.MX8M Plus DHCOM SoM arm64: dts: imx8mp: Switch WiFI enable signal to mmc-pwrseq-simple on i.MX8MP DHCOM SoM arm64: dts: imx8mp: Fix property indent on DH i.MX8M Plus DHCOM PDK3 arm64: dts: imx8mp: Describe VDD_ARM run and standby voltage for DH i.MX8M Plus DHCOM SoM arm64: dts: imx8mp: Describe VDD_ARM run and standby voltage for Data Modul i.MX8M Plus eDM SBC arm64: dts: imx8mp-beacon: Add DMIC support arm64: dts: imx8mn-beacon: Add DMIC support arm64: dts: imx8mm-beacon: Add DMIC support arm64: dts: imx8mm-beacon: Migrate sound card to simple-audio-card arm64: dts: imx8mn-evk: Remove codec clocks/clock-names arm64: dts: imx8mp-beacon: Configure 100MHz PCIe Ref Clk arm64: dts: imx8mn: Add sound-dai-cells to micfil node arm64: dts: imx8mm: Add sound-dai-cells to micfil node arm64: dts: freescale: add initial device tree for TQMLS1088A arm64: dts: freescale: add initial device tree for TQMLS1043A/TQMLS1046A arm64: dts: ls1043a: remove second dspi node arm64: dts: freescale: Add support for LX2162 SoM & Clearfog Board arm64: dts: lx2160a: describe the SerDes block #2 ... Link: https://lore.kernel.org/r/20231015132300.2268016-3-shawnguo@kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-16Merge tag 'imx-dt-6.7' of ↵Arnd Bergmann
git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into soc/dt i.MX ARM device tree changes for 6.7: - New board support: Variscite VAR-SOM-MX6 SoM and Custom board. - A bunch of dt-schema check fixes from Fabio Estevam. - A couple of MBA6ULX changes from Alexander Stein that marks gpio-buttons as wakeup-source and improves gpio-keys button node names. - Add ATM0700D4 panel support for sk-imx53 board. - Correct regulator node name for imx6qdl-nitrogen6 board. - A couple of Gateworks i.MX6QDL board update: adding MDIO nodes and populating Ethernet MAC address. * tag 'imx-dt-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux: (31 commits) ARM: dts: imx7d-pico-pi: Disable USDHC1 ARM: dts: imx28: Fix dcp compatible ARM: dts: imx7s: Remove #power-domain-cells from gpcv2 ARM: dts: imx25: Remove clock-names from the watchdog ARM: dts: imx25: Fix sram node ARM: dts: imx25: Fix dryice node ARM: dts: imx6qdl-gw5904: add dt props for populating eth MAC addrs ARM: dts: vfxxx: Write dmas in a single line ARM: dts: imx27-phytec: Use eeprom as the node name ARM: dts: imx51: Remove invalid sahara compatible ARM: dts: imx53: Adjust the ecspi compatible ARM: dts: imx7ulp: Fix usbphy1 compatible ARM: dts: imx6q-pistachio: Use a valid value for fsl,tx-d-cal ARM: dts: imx6q-b650v3: Fix fsl,tx-cal-45-dn-ohms ARM: dts: imx28-tx28: Move phy_type to USB node ARM: dts: mxs: Switch to #pwm-cells = <3> ARM: dts: imx6q: Add Variscite MX6 Custom board support ARM: dts: imx6qdl: Add Variscite VAR-SOM-MX6 SoM support ARM: dts: mxs: Fix duart clock-names ARM: dts: imx6ull/7d-colibri: Fix compatible ... Link: https://lore.kernel.org/r/20231015132300.2268016-2-shawnguo@kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-16x86/MCE/AMD: Split amd_mce_is_memory_error()Yazen Ghannam
Define helper functions for legacy and SMCA systems in order to reuse individual checks in later changes. Describe what each function is checking for, and correct the XEC bitmask for SMCA. No functional change intended. [ bp: Use "else" in amd_mce_is_memory_error() to make the conditional balanced, for readability. ] Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com> Link: https://lore.kernel.org/r/20230613141142.36801-2-yazen.ghannam@amd.com
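A rough sketch of the split; the helper names follow the commit subject, but the exact extended-error-code masks and bank checks are assumptions, not the patch itself:

  static bool legacy_mce_is_memory_error(struct mce *m)
  {
          /* ErrCodeExt[20:16]: legacy DRAM ECC errors live in bank 4. */
          u8 xec = (m->status >> 16) & 0x1f;

          return m->bank == 4 && xec == 0x8;
  }

  static bool smca_mce_is_memory_error(struct mce *m)
  {
          /* SMCA defines a wider XEC field; exact width assumed here. */
          u8 xec = (m->status >> 16) & 0x3f;

          return smca_get_bank_type(m->extcpu, m->bank) == SMCA_UMC &&
                 xec == 0x0;
  }

  bool amd_mce_is_memory_error(struct mce *m)
  {
          if (mce_flags.smca)
                  return smca_mce_is_memory_error(m);
          else
                  return legacy_mce_is_memory_error(m);
  }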
2023-10-16KVM: s390: add tracepoint in gmap notifierNico Boehr
The gmap notifier is called for changes in table entries with the notifier bit set. To diagnose performance issues, it can be useful to see what causes certain changes in the gmap. Hence, add a tracepoint in the gmap notifier. Signed-off-by: Nico Boehr <nrb@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Acked-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20231009093304.2555344-3-nrb@linux.ibm.com Message-Id: <20231009093304.2555344-3-nrb@linux.ibm.com>
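A hypothetical sketch of such a tracepoint; the event name and fields here are illustrative assumptions, not the actual patch:

  TRACE_EVENT(gmap_notifier,
          TP_PROTO(unsigned long start, unsigned long end),
          TP_ARGS(start, end),
          TP_STRUCT__entry(
                  __field(unsigned long, start)
                  __field(unsigned long, end)
          ),
          TP_fast_assign(
                  __entry->start = start;
                  __entry->end = end;
          ),
          TP_printk("start %lx end %lx", __entry->start, __entry->end)
  );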
2023-10-16KVM: s390: add stat counter for shadow gmap eventsNico Boehr
The shadow gmap tracks memory of nested guests (guest-3). In certain scenarios, the shadow gmap needs to be rebuilt, which is a costly operation since it involves a SIE exit into guest-1 for every entry in the respective shadow level. Add kvm stat counters when new shadow structures are created at various levels. Also add a counter gmap_shadow_create when a completely fresh shadow gmap is created as well as a counter gmap_shadow_reuse when an existing gmap is being reused. Note that when several levels are shadowed at once, counters on all affected levels will be increased. Also note that not all page table levels need to be present and an ASCE can directly point to e.g. a segment table. In this case, a new segment table will always be equivalent to a new shadow gmap and hence will be counted as gmap_shadow_create and not as gmap_shadow_segment. Signed-off-by: Nico Boehr <nrb@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20231009093304.2555344-2-nrb@linux.ibm.com Message-Id: <20231009093304.2555344-2-nrb@linux.ibm.com>
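As a sketch, the counters named in the message would slot into the VM stats roughly as follows; only gmap_shadow_create, gmap_shadow_reuse, and gmap_shadow_segment are taken from the message, and the remaining per-level counter names are assumptions for illustration:

  struct kvm_vm_stat {
          /* ... existing counters ... */
          u64 gmap_shadow_create;    /* completely fresh shadow gmap built */
          u64 gmap_shadow_reuse;     /* existing shadow gmap reused */
          u64 gmap_shadow_segment;   /* new shadow segment table */
          /* plus similar counters for the other shadowed table levels */
  };

Each creation path in the shadow gmap code then bumps the counter for its level, e.g. kvm->stat.gmap_shadow_segment++ when a new shadow segment table is built.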