path: root/arch
2022-05-12  KVM: x86/mmu: Expand and clean up page fault stats  (Sean Christopherson)
Expand and clean up the page fault stats. The current stats are at best incomplete, and at worst misleading. Differentiate between faults that are actually fixed vs those that result in an MMIO SPTE being created, track faults that are spurious, faults that trigger emulation, faults that are fixed in the fast path, and last but not least, track the number of faults that are taken. Note, the number of faults that require emulation for write-protected shadow pages can roughly be calculated by subtracting the number of MMIO SPTEs created from the overall number of faults that trigger emulation. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220423034752.1161007-10-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-12  KVM: x86/mmu: Use IS_ENABLED() to avoid RETPOLINE for TDP page faults  (Sean Christopherson)
Use IS_ENABLED() instead of an #ifdef to activate the anti-RETPOLINE fast path for TDP page faults. The generated code is identical, and the #ifdef makes it dangerously difficult to extend the logic (guess who forgot to add an "else" inside the #ifdef and ran through the page fault handler twice). No functional or binary change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220423034752.1161007-9-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
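A minimal sketch of the pattern the commit describes (illustrative wrapper name; the field and helper names follow KVM's page fault code):

    /* IS_ENABLED() evaluates to a compile-time constant, so the dead
     * branch is eliminated and codegen matches the old #ifdef; but both
     * branches are always parsed and type-checked, and the control flow
     * stays visible when the logic is later extended. */
    static int sketch_page_fault(struct kvm_vcpu *vcpu,
                                 struct kvm_page_fault *fault)
    {
            if (IS_ENABLED(CONFIG_RETPOLINE) && fault->is_tdp)
                    return kvm_tdp_page_fault(vcpu, fault); /* direct call */

            return vcpu->arch.mmu->page_fault(vcpu, fault); /* indirect call */
    }

The old #ifdef version compiled only one branch at a time, which is exactly how a missing "else" could slip through unnoticed.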
2022-05-12  KVM: x86/mmu: Make all page fault handlers internal to the MMU  (Sean Christopherson)
Move kvm_arch_async_page_ready() to mmu.c where it belongs, and move all of the page fault handling collateral that was in mmu.h purely for the async #PF handler into mmu_internal.h, where it belongs. This will allow kvm_mmu_do_page_fault() to act on the RET_PF_* return without having to expose those enums outside of the MMU. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220423034752.1161007-8-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-12  KVM: x86/mmu: Add RET_PF_CONTINUE to eliminate bool+int* "returns"  (Sean Christopherson)
Add RET_PF_CONTINUE and use it in handle_abnormal_pfn() and kvm_faultin_pfn() to signal that the page fault handler should continue doing its thing. Aside from being gross and inefficient, using a boolean return to signal continue vs. stop makes it extremely difficult to add more helpers and/or move existing code to a helper. E.g. hypothetically, if nested MMUs were to gain a separate page fault handler in the future, everything up to the "is self-modifying PTE" check can be shared by all shadow MMUs, but communicating up the stack whether to continue on or stop becomes a nightmare. More concretely, proposed support for private guest memory ran into a similar issue, where it'll be forced to forego a helper in order to yield sane code: https://lore.kernel.org/all/YkJbxiL%2FAz7olWlq@google.com. No functional change intended. Cc: David Matlack <dmatlack@google.com> Cc: Chao Peng <chao.p.peng@linux.intel.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220423034752.1161007-7-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
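Sketched, the before/after calling convention looks like this (signatures abbreviated and hedged; see the patch for the real ones):

    /* Before: a bool return plus an int out-parameter. */
    static bool kvm_faultin_pfn_old(struct kvm_vcpu *vcpu,
                                    struct kvm_page_fault *fault, int *r);

    /* After: a single RET_PF_* value. RET_PF_CONTINUE means "keep
     * going"; anything else is propagated up as the final result. */
    r = kvm_faultin_pfn(vcpu, fault);
    if (r != RET_PF_CONTINUE)
            return r;

    r = handle_abnormal_pfn(vcpu, fault, ACC_ALL);
    if (r != RET_PF_CONTINUE)
            return r;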
2022-05-12  KVM: x86/mmu: Drop exec/NX check from "page fault can be fast"  (Sean Christopherson)
Tweak the "page fault can be fast" logic to explicitly check for !PRESENT faults in the access tracking case, and drop the exec/NX check that becomes redundant as a result. No sane hardware will generate an access that is both an instruct fetch and a write, i.e. it's a waste of cycles. If hardware goes off the rails, or KVM runs under a misguided hypervisor, spuriously running throught fast path is benign (KVM has been uknowingly being doing exactly that for years). Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220423034752.1161007-6-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-12  KVM: x86/mmu: Don't attempt fast page fault just because EPT is in use  (Sean Christopherson)
Check for A/D bits being disabled instead of the access tracking mask being non-zero when deciding whether or not to attempt to fix a page fault via the fast path. Originally, the access tracking mask was non-zero if and only if A/D bits were disabled by _KVM_ (including not being supported by hardware), but that hasn't been true since nVMX was fixed to honor EPTP12's A/D enabling, i.e. since KVM allowed L1 to cause KVM to not use A/D bits while running L2 despite KVM using them while running L1. In other words, don't attempt the fast path just because EPT is enabled.

Note, attempting the fast path for all !PRESENT faults can "fix" a very, _VERY_ tiny percentage of faults out of mmu_lock by detecting that the fault is spurious, i.e. has been fixed by a different vCPU, but again the odds of that happening are vanishingly small. E.g. booting an 8-vCPU VM gets less than 10 successes out of 30k+ faults, and that's likely one of the more favorable scenarios. Disabling dirty logging can likely lead to a rash of collisions between vCPUs for some workloads that operate on a common set of pages, but penalizing _all_ !PRESENT faults for that one case is unlikely to be a net positive, not to mention that that problem is best solved by not zapping in the first place.

The number of spurious faults does scale with the number of vCPUs, e.g. a 255-vCPU VM using TDP "jumps" to ~60 spurious faults detected in the fast path (again out of 30k), but that's all of 0.2% of faults. Using legacy shadow paging does get more spurious faults, and a few more detected out of mmu_lock, but the percentage goes _down_ to 0.08% (and that's ignoring faults that are reflected into the guest), i.e. the extra detections are purely due to the sheer number of faults observed.

On the other hand, getting a "negative" in the fast path takes in the neighborhood of 150-250 cycles. So while it is tempting to keep/extend the current behavior, such a change needs to come with hard numbers showing that it's actually a win in the grand scheme, or any scheme for that matter.

Fixes: 995f00a61958 ("x86: kvm: mmu: use ept a/d in vmcs02 iff used in vmcs12") Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220423034752.1161007-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-12  KVM: VMX: clean up pi_wakeup_handler  (Li RongQing)
Passing per_cpu() to list_for_each_entry() causes the macro to be evaluated N+1 times for N sleeping vCPUs. This is a very small inefficiency, and the code is cleaner if the address of the per-CPU variable is loaded earlier. Do this for both the list and the spinlock. Signed-off-by: Li RongQing <lirongqing@baidu.com> Message-Id: <1649244302-6777-1-git-send-email-lirongqing@baidu.com> Reviewed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
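Roughly, the change has this shape (a sketch; the names follow the posted-interrupt code this patch touches):

    /* Before: per_cpu() is re-evaluated by the macro on every
     * iteration, i.e. N+1 times for N sleeping vCPUs. */
    list_for_each_entry(vcpu, &per_cpu(wakeup_vcpus_on_cpu, cpu),
                        pi_wakeup_list)
            ...

    /* After: load the per-CPU addresses once, up front. */
    struct list_head *wakeup_list = &per_cpu(wakeup_vcpus_on_cpu, cpu);
    raw_spinlock_t *spinlock = &per_cpu(wakeup_vcpus_on_cpu_lock, cpu);

    raw_spin_lock(spinlock);
    list_for_each_entry(vcpu, wakeup_list, pi_wakeup_list)
            ...
    raw_spin_unlock(spinlock);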
2022-05-12  KVM: x86: fix typo in __try_cmpxchg_user causing non-atomicness  (Maxim Levitsky)
This shows up as a TDP MMU leak when running nested. A non-working cmpxchg on L0 makes L1 install two different shadow pages under the same SPTE, and one of them is leaked. Fixes: 1c2361f667f36 ("KVM: x86: Use __try_cmpxchg_user() to emulate atomic accesses") Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Message-Id: <20220512101420.306759-1-mlevitsk@redhat.com> Reviewed-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-05-12  x86/msr-index: Define INTEGRITY_CAPABILITIES MSR  (Tony Luck)
The INTEGRITY_CAPABILITIES MSR is enumerated by bit 2 of the CORE_CAPABILITIES MSR. Add defines for the CORE_CAPS enumeration as well as for the integrity MSR. Reviewed-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20220506225410.1652287-3-tony.luck@intel.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2022-05-12  x86/microcode/intel: Expose collect_cpu_info_early() for IFS  (Jithu Joseph)
IFS is a CPU feature that allows a binary blob, similar to microcode, to be loaded and consumed to perform low level validation of CPU circuitry. In fact, it carries the same Processor Signature (family/model/stepping) details that are contained in Intel microcode blobs. In support of an IFS driver to trigger loading, validation, and running of these test blobs, make the functionality of cpu_signatures_match() and collect_cpu_info_early() available outside of the microcode driver. Add an "intel_" prefix and drop the "_early" suffix from collect_cpu_info_early() and EXPORT_SYMBOL_GPL() it. Add a declaration to x86 <asm/cpu.h>. Make cpu_signatures_match() an inline function in x86 <asm/cpu.h>, and also give it an "intel_" prefix. No functional change intended. Reviewed-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Co-developed-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20220506225410.1652287-2-tony.luck@intel.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2022-05-12  arm64: Enable repeat tlbi workaround on KRYO4XX gold CPUs  (Shreyas K K)
Add KRYO4XX gold/big cores to the list of CPUs that need the repeat TLBI workaround. Apply this to the affected KRYO4XX cores (rcpe to rfpe). The variant and revision bits are implementation defined and are different from their Cortex CPU counterparts, on which they are based, i.e., (r0p0 to r3p0) is equivalent to (rcpe to rfpe). Signed-off-by: Shreyas K K <quic_shrekk@quicinc.com> Reviewed-by: Sai Prakash Ranjan <quic_saipraka@quicinc.com> Link: https://lore.kernel.org/r/20220512110134.12179-1-quic_shrekk@quicinc.com Signed-off-by: Will Deacon <will@kernel.org>
2022-05-12  ARM: at91: pm: add support for sama5d2 secure suspend  (Clément Léger)
When running with OP-TEE, the suspend control is handled securely. Suspend can be entered using PSCI support. Since the sama5d2 supports multiple suspend modes, add a new CONFIG_ATMEL_SECURE_PM which will send an SMC call to select the suspend mode at init time. The "atmel.pm_modes" boot argument is still supported for compatibility purposes, but the standby value is actually ignored since PSCI suspend is used and it only supports one mode (suspend). Signed-off-by: Clément Léger <clement.leger@bootlin.com> Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
2022-05-12  ARM: at91: add code to handle secure calls  (Clément Léger)
Since OP-TEE now has more complete support for sama5d2, add the necessary code to perform SMC calls. The detection of OP-TEE is based on a specific device-tree node path (/firmware/optee), as is done by some other SoCs. A check is added to avoid doing SMC calls without having OP-TEE. Signed-off-by: Clément Léger <clement.leger@bootlin.com> Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
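The detection boils down to a device-tree lookup; a minimal sketch (the helper name is illustrative):

    #include <linux/of.h>

    static bool optee_available(void)
    {
            struct device_node *np = of_find_node_by_path("/firmware/optee");

            if (!np)
                    return false;

            of_node_put(np);        /* drop the reference taken above */
            return true;
    }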
2022-05-12  arm64: cpufeature: remove duplicate ID_AA64ISAR2_EL1 entry  (Kristina Martsenko)
The ID register table should have one entry per ID register but currently has two entries for ID_AA64ISAR2_EL1. Only one entry has an override, and get_arm64_ftr_reg() can end up choosing the other, causing the override to be ignored. Fix this by removing the duplicate entry. While here, also make the check in sort_ftr_regs() more strict so that duplicate entries can't be added in the future. Fixes: def8c222f054 ("arm64: Add support of PAuth QARMA3 architected algorithm") Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20220511162030.1403386-1-kristina.martsenko@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2022-05-12  ARM: at91: Kconfig: implement PIT64B selection  (Claudiu Beznea)
Implement PIT64B selection so that it is available for the necessary targets (at the moment SAM9X60 and SAMA7G5) without having to specify it via defconfig. With this, the current CONFIG_TIMER_OF dependency of the PIT64B driver can be removed. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
2022-05-12  ARM: at91: pm: add quirks for pm  (Claudiu Beznea)
SoCs supporting ULP0 or ULP1 modes and variants of the Cadence Ethernet IP (controlled by the macb driver) may misbehave when Wake-on-Lan (WoL) is configured and a WoL packet is received while in ULP0/ULP1. On some SoCs the Ethernet interface does not work after resume. On other SoCs the CPU aborts on the resume path when switching execution from internal SRAM to DRAM.

For ULP1 + WoL the issue is related to a particular restart sequence of the internal clocks when resuming. These clocks are automatically managed by the PMC, and it may happen that the GMAC peripheral clock is restarted a few clock cycles before the internal clocks, blocking the Ethernet DMA. As a consequence, Ethernet TX transactions are stopped and RX transactions are partially stopped (packets are received by the MAC and the RX counters are incremented, but the data is not transferred to DRAM). The workaround for this is to disable the Ethernet peripheral clock when going to ULP1. The same behavior has been reproduced on ULP0 for some platforms (SAMA5D2, SAMA5D3), and the same workaround solves the issue.

The problem has been solved as a quirk in pm.c to avoid polluting the macb driver with AT91-specific issues, as that driver is generic to multiple vendors. At probe time, pointers to struct device_node are retrieved, and in at91_pm_enter() the quirk specifics are applied: for all Ethernet interfaces that were parsed, the peripheral clocks are disabled (as sketched below). Special handling is done for modes in the dns_modes mask: these are modes that block the system if a WoL packet is received, but for which applying the quirk would prevent waking up on WoL packets. In the situation where an Ethernet interface's suspend mode is in the dns_modes mask and that interface is the only available wakeup source, the suspend is canceled.

Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
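A hedged sketch of the quirk's suspend side; AT91_MAX_ETH and both the array and function names are hypothetical, and the real code also re-enables the clocks on resume and handles the dns_modes bookkeeping:

    #include <linux/clk.h>

    static struct clk *eth_pclks[AT91_MAX_ETH];     /* filled at probe time */

    static void at91_pm_eth_quirk_suspend(void)     /* hypothetical name */
    {
            int i;

            /* Stop the peripheral clocks so a WoL packet arriving
             * while in ULP0/ULP1 cannot block the Ethernet DMA. */
            for (i = 0; i < AT91_MAX_ETH; i++)
                    if (eth_pclks[i])
                            clk_disable_unprepare(eth_pclks[i]);
    }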
2022-05-12  ARM: at91: pm: use kernel documentation style  (Claudiu Beznea)
Use kernel documentation style. Along with it fix the naming of struct at91_pm_sfrbu_regs in documentation. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
2022-05-12  ARM: at91: pm: introduce macros for pm mode replacement  (Claudiu Beznea)
Introduce macros to replace the standby/suspend modes if they depend on controllers that failed to map (or on other errors). The macros keep track of the complementary mode to avoid setting the same AT91 PM mode for both suspend and standby. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
2022-05-12  ARM: at91: pm: keep documentation inline with structure members  (Claudiu Beznea)
Move documentation of bu to keep the same order as in the structure itself. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
2022-05-11  riscv: add memory-type errata for T-Head  (Heiko Stuebner)
Some current cpus based on T-Head cores implement memory types very differently from what the svpbmt spec describes, even going so far as to use PTE bits marked as reserved. Add the T-Head vendor-id and the necessary errata code to replace the affected instructions. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Tested-by: Samuel Holland <samuel@sholland.org> Link: https://lore.kernel.org/r/20220511192921.2223629-13-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  riscv: don't use global static vars to store alternative data  (Heiko Stuebner)
Right now the code uses a global struct to store vendor-ids and another global variable to store the vendor-patch-function. There exist specific cases where we'll need to patch the kernel at an even earlier stage, where trying to write to a static variable might actually result in hangs. Also collecting the vendor-information consists of 3 sbi-ecalls (or csr-reads) which is pretty negligible in the context of booting a kernel. So rework the code to not rely on static variables and instead collect the vendor-information when a round of alternatives is to be applied. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Guo Ren <guoren@kernel.org> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Link: https://lore.kernel.org/r/20220511192921.2223629-12-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  riscv: remove FIXMAP_PAGE_IO and fall back to its default value  (Heiko Stuebner)
If not defined in the arch, FIXMAP_PAGE_IO defaults to PAGE_KERNEL_IO, which we defined when adding the svpbmt implementation. So drop the FIXMAP_PAGE_IO riscv define. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Reviewed-by: Guo Ren <guoren@kernel.org> Link: https://lore.kernel.org/r/20220511192921.2223629-11-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  riscv: add RISC-V Svpbmt extension support  (Heiko Stuebner)
Svpbmt (the S should be capitalized) is the "Supervisor-mode: page-based memory types" extension that specifies attributes for cacheability, idempotency and ordering. The relevant settings are done in special bits in the PTEs.

Here is the Svpbmt PTE format:

 | 63 | 62-61 | 60-8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
   N     MT     RSW    D   A   G   U   X   W   R   V

Of the Reserved bits [63:54] in a leaf PTE, the high bit is already allocated (as the N bit), so bits [62:61] are used as the MT (aka MemType) field. This field specifies one of three memory types that are close equivalents (or equivalent in effect) to the three main x86 and ARMv8 memory types, as shown in the following table.

 RISC-V Encoding
 & MemType        RISC-V Description
 ---------------  ------------------------------------------------
 00 - PMA         Normal Cacheable, No change to implied PMA memory type
 01 - NC          Non-cacheable, idempotent, weakly-ordered Main Memory
 10 - IO          Non-cacheable, non-idempotent, strongly-ordered I/O memory
 11 - Rsvd        Reserved for future standard use

As the extension will not be present on all implementations, implement a method to handle cpufeatures via alternatives, so as not to incur runtime penalties on cpu variants not supporting specific extensions, and patch the relevant code parts at runtime.

Co-developed-by: Wei Fu <wefu@redhat.com>
Signed-off-by: Wei Fu <wefu@redhat.com>
Co-developed-by: Liu Shaohua <liush@allwinnertech.com>
Signed-off-by: Liu Shaohua <liush@allwinnertech.com>
Co-developed-by: Guo Ren <guoren@kernel.org>
Signed-off-by: Guo Ren <guoren@kernel.org>
[moved to use the alternatives mechanism]
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Link: https://lore.kernel.org/r/20220511192921.2223629-10-heiko@sntech.de
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
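Expressed as PTE bit masks, the MT encodings above would look roughly like this (a sketch; the kernel's actual macro names may differ):

    #include <linux/bits.h>

    #define _PAGE_MTMASK    GENMASK_ULL(62, 61)   /* Svpbmt MT field */
    #define _PAGE_PMA       0ULL                  /* 00: implied PMA type */
    #define _PAGE_NOCACHE   BIT_ULL(61)           /* 01: non-cacheable */
    #define _PAGE_IO        BIT_ULL(62)           /* 10: strongly-ordered I/O */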
2022-05-11  riscv: Fix accessing pfn bits in PTEs for non-32bit variants  (Heiko Stuebner)
On rv32 the PFN part of PTEs is defined to use bits [xlen-1:10] while on rv64 it is defined to use bits [53:10], leaving [63:54] as reserved. With upcoming optional extensions like svpbmt these previously reserved bits will get used so simply right-shifting the PTE to get the PFN won't be enough. So introduce a _PAGE_PFN_MASK constant to mask the correct bits for both rv32 and rv64 before shifting. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Link: https://lore.kernel.org/r/20220511192921.2223629-9-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
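The idea, sketched for rv64 (macro and function names approximate):

    #include <linux/bits.h>

    #define _PAGE_PFN_SHIFT 10
    #define _PAGE_PFN_MASK  GENMASK_ULL(53, 10)   /* rv64: PFN is bits [53:10] */

    static inline unsigned long pte_pfn_sketch(unsigned long pteval)
    {
            /* Mask first so bits [63:54] (reserved today, PBMT/N with
             * svpbmt) cannot leak into the PFN. */
            return (pteval & _PAGE_PFN_MASK) >> _PAGE_PFN_SHIFT;
    }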
2022-05-11  riscv: move boot alternatives to after fill_hwcap  (Heiko Stuebner)
Move the application of boot alternatives to after the hw-capabilities are populated. This makes it possible to check for available extensions when determining which alternatives to apply, and also makes it actually work if CONFIG_SMP is disabled for whatever reason. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Reviewed-by: Guo Ren <guoren@kernel.org> Link: https://lore.kernel.org/r/20220511192921.2223629-8-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  riscv: prevent compressed instructions in alternatives  (Heiko Stuebner)
Instructions are opportunistically compressed by the RISC-V assembler when possible, but in alternatives blocks both the old and new content need to be the same size, so having the toolchain do somewhat random optimizations will cause strange side effects like "attempt to move .org backwards" compile-time errors. Even a simple "and" used in alternatives assembly will cause these mismatched code sizes. So prevent compressed instructions from being generated in alternatives code, and use option push and pop to limit this to the relevant code blocks. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Link: https://lore.kernel.org/r/20220511192921.2223629-7-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
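The mechanism amounts to wrapping the alternative bodies like this (illustrative macro name):

    /* Forbid compressed (2-byte) encodings inside the block so the
     * old and new alternative bodies keep matching sizes; .option
     * push/pop restores the assembler's previous setting afterwards. */
    #define ALT_UNCOMPRESSED(insns)             \
            ".option push\n"                    \
            ".option norvc\n"                   \
            insns                               \
            ".option pop\n"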
2022-05-11  riscv: extend concatenated alternatives-lines to the same length  (Heiko Stuebner)
ALT_NEW_CONTENT already uses same-length assembler lines, so extend this to the other elements as well. This makes it more readable when these elements need to be extended in the future. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Link: https://lore.kernel.org/r/20220511192921.2223629-6-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  riscv: implement ALTERNATIVE_2 macro  (Heiko Stuebner)
When the alternatives were added the commit already provided a template on how to implement 2 different alternatives for one piece of code. Make this usable. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Link: https://lore.kernel.org/r/20220511192921.2223629-5-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  riscv: implement module alternatives  (Heiko Stuebner)
This allows alternatives to also be applied when loading modules and follows the implementation of other architectures (e.g. arm64). Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Link: https://lore.kernel.org/r/20220511192921.2223629-4-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  riscv: allow different stages with alternatives  (Heiko Stuebner)
Future features may need to be applied at a different time during boot, so allow defining stages for alternatives and handling them differently depending on the stage. Also make the alternatives-location more flexible so that future stages may provide their own location. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Link: https://lore.kernel.org/r/20220511192921.2223629-3-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  riscv: integrate alternatives better into the main architecture  (Heiko Stuebner)
Right now the alternatives need to be explicitly enabled, and errata are limited to SiFive ones. We want to use alternatives not only for patching SoC errata, but in the future also for handling different behaviour depending on the existence of future extensions. So move the core alternatives over to the kernel subdirectory and make CONFIG_RISCV_ALTERNATIVE a hidden symbol which we expect relevant errata and extensions to just select if needed. Signed-off-by: Heiko Stuebner <heiko@sntech.de> Reviewed-by: Philipp Tomsich <philipp.tomsich@vrull.eu> Link: https://lore.kernel.org/r/20220511192921.2223629-2-heiko@sntech.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-12  openrisc: remove bogus nops and shutdowns  (Jason A. Donenfeld)
Nop 42 is some leftover debugging thing by the looks of it. Nop 1 will shut down the simulator, which isn't what we want, since it makes it impossible to handle errors. Cc: Stafford Horne <shorne@gmail.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Stafford Horne <shorne@gmail.com>
2022-05-12  openrisc: fix typos in comments  (Julia Lawall)
Various spelling mistakes in comments. Detected with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Signed-off-by: Stafford Horne <shorne@gmail.com>
2022-05-11  ptrace: Reimplement PTRACE_KILL by always sending SIGKILL  (Eric W. Biederman)
The current implementation of PTRACE_KILL is buggy and has been for many years, as it assumes its target has stopped in ptrace_stop. At a quick skim it looks like this assumption has existed since ptrace support was added in linux v1.0. While PTRACE_KILL has been deprecated, we can not remove it, as a quick search with google code search reveals many existing programs calling it.

When the ptracee is not stopped at ptrace_stop, some fields would be set that are ignored except in ptrace_stop, making the userspace-visible behavior of PTRACE_KILL a no-op in those cases. As the usual rules are not obeyed, it is not clear what the consequences are of calling PTRACE_KILL on a running process. Presumably userspace does not do this, as it achieves nothing.

Replace the implementation of PTRACE_KILL with a simple send_sig_info(SIGKILL) followed by a return 0. This changes the observable user space behavior only in that PTRACE_KILL on a process not stopped in ptrace_stop will also kill it. As that has always been the intent of the code, this seems like a reasonable change.

Cc: stable@vger.kernel.org Reported-by: Al Viro <viro@zeniv.linux.org.uk> Suggested-by: Al Viro <viro@zeniv.linux.org.uk> Tested-by: Kees Cook <keescook@chromium.org> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Link: https://lkml.kernel.org/r/20220505182645.497868-7-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
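The replacement is essentially the following (a sketch of the shape of the change in ptrace_request(); see the patch for the exact context):

    case PTRACE_KILL:
            /* Deprecated: deliver SIGKILL unconditionally, whether or
             * not the tracee is stopped in ptrace_stop. */
            send_sig_info(SIGKILL, SEND_SIG_NOINFO, child);
            ret = 0;
            break;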
2022-05-11  ptrace: Remove arch_ptrace_attach  (Eric W. Biederman)
The last remaining implementation of arch_ptrace_attach is ia64's ptrace_attach_sync_user_rbs, which was added at the end of 2007 in commit aa91a2e90044 ("[IA64] Synchronize RBS on PTRACE_ATTACH"). Reading the comments and examining the code, ptrace_attach_sync_user_rbs has the sole purpose of saving registers to the stack when ptrace_attach changes TASK_STOPPED to TASK_TRACED. In all other cases arch_ptrace_stop takes care of the register saving.

Commit d79fdd6d96f4 ("ptrace: Clean transitions between TASK_STOPPED and TRACED") modified ptrace_attach to wake up the thread and enter ptrace_stop normally even when the thread starts out stopped. This makes ptrace_attach_sync_user_rbs completely unnecessary. So just remove it.

I read through the code to verify that ptrace_attach_sync_user_rbs is unnecessary. What I found is that the code is quite dead. Reading ptrace_attach_sync_user_rbs, it is easy to see that it does nothing unless __state == TASK_STOPPED. When arch_ptrace_attach (aka ptrace_attach_sync_user_rbs) is called after ptrace_traceme, it is easy to see that, because we are talking about the current process, the value of __state is TASK_RUNNING. Which means ptrace_attach_sync_user_rbs does nothing.

The only other call of arch_ptrace_attach (aka ptrace_attach_sync_user_rbs) is after ptrace_attach. If the task is running (and PTRACE_SEIZE is not specified), a SIGSTOP is sent, which results in do_signal_stop setting JOBCTL_TRAP_STOP on the target task (as it is ptraced) and the target task stopping in ptrace_stop with __state == TASK_TRACED. If the task was already stopped, then ptrace_attach sets JOBCTL_TRAPPING and JOBCTL_TRAP_STOP, wakes it out of __TASK_STOPPED, and waits until the JOBCTL_TRAPPING_BIT is clear. At which point the task stops in ptrace_stop. In both cases there are a couple of exceptions, such as if the traced task receives a SIGCONT or is sent a fatal signal. However in all of those cases the tracee never stops in __state TASK_STOPPED. Which is a long way of saying that ptrace_attach_sync_user_rbs is guaranteed never to do anything.

Cc: linux-ia64@vger.kernel.org Tested-by: Kees Cook <keescook@chromium.org> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Link: https://lkml.kernel.org/r/20220505182645.497868-4-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-05-11  ptrace/xtensa: Replace PT_SINGLESTEP with TIF_SINGLESTEP  (Eric W. Biederman)
xtensa is the last user of the PT_SINGLESTEP flag. Changing tsk->ptrace in user_enable_single_step and user_disable_single_step without locking could potentially cause problems. So use a thread info flag instead of a flag in tsk->ptrace. Use TIF_SINGLESTEP that xtensa already had defined but unused. Remove the definitions of PT_SINGLESTEP and PT_BLOCKSTEP as they have no more users. Cc: stable@vger.kernel.org Acked-by: Max Filippov <jcmvbkbc@gmail.com> Tested-by: Kees Cook <keescook@chromium.org> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Link: https://lkml.kernel.org/r/20220505182645.497868-4-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-05-11  ptrace/um: Replace PT_DTRACE with TIF_SINGLESTEP  (Eric W. Biederman)
User mode linux is the last user of the PT_DTRACE flag. Using the flag to indicate single stepping is a little confusing, and worse, changing tsk->ptrace without locking could potentially cause problems. So use a thread info flag with a better name instead of a flag in tsk->ptrace. Remove the definition of PT_DTRACE as uml is the last user. Cc: stable@vger.kernel.org Acked-by: Johannes Berg <johannes@sipsolutions.net> Tested-by: Kees Cook <keescook@chromium.org> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Link: https://lkml.kernel.org/r/20220505182645.497868-3-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-05-11  Merge tag 'generic-ticket-spinlocks-v6' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/linux into asm-generic  (Arnd Bergmann)
asm-generic: New generic ticket-based spinlock

This contains a new ticket-based spinlock that uses only generic atomics and doesn't require as much from the memory system as qspinlock does in order to be fair. It also includes a bit of documentation about the qspinlock and qrwlock fairness requirements. This will soon be used by a handful of architectures that don't meet the qspinlock requirements.

* tag 'generic-ticket-spinlocks-v6' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/linux:
  csky: Move to generic ticket-spinlock
  RISC-V: Move to queued RW locks
  RISC-V: Move to generic spinlocks
  openrisc: Move to ticket-spinlock
  asm-generic: qrwlock: Document the spinlock fairness requirements
  asm-generic: qspinlock: Indicate the use of mixed-size atomics
  asm-generic: ticket-lock: New generic ticket-based spinlock
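The heart of the new lock is small enough to sketch; this is a simplified rendition of the ticket idea (the real asm-generic version differs in details):

    /* The lock word packs two u16s: the next ticket to hand out in the
     * high half, the currently-served ticket in the low half. */
    static __always_inline void ticket_lock(atomic_t *lock)
    {
            u32 val = atomic_fetch_add(1 << 16, lock); /* take a ticket */
            u16 ticket = val >> 16;

            if (ticket == (u16)val)
                    return; /* uncontended: it is already our turn */

            /* Spin until the served half reaches our ticket number. */
            atomic_cond_read_acquire(lock, ticket == (u16)VAL);
    }

    static __always_inline void ticket_unlock(atomic_t *lock)
    {
            u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
            u32 val = atomic_read(lock);

            smp_store_release(ptr, (u16)val + 1); /* serve the next ticket */
    }

Fairness falls out of the FIFO ticket order, and only generic atomics are required, which is why it suits architectures that cannot guarantee qspinlock's requirements.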
2022-05-11  csky: Move to generic ticket-spinlock  (Guo Ren)
There is no benefit from a custom implementation of ticket-spinlock, so move to the generic ticket-spinlock for easy maintenance. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  RISC-V: Move to queued RW locks  (Palmer Dabbelt)
Now that we have fair spinlocks we can use the generic queued rwlocks, so we might as well do so. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  RISC-V: Move to generic spinlocks  (Palmer Dabbelt)
Our existing spinlocks aren't fair and replacing them has been on the TODO list for a long time. This moves to the recently-introduced ticket spinlocks, which are simple enough that they are likely to be correct and fast on the vast majority of extant implementations. This introduces a horrible hack that allows us to split out the spinlock conversion from the rwlock conversion. We have to do the spinlocks first because qrwlock needs fair spinlocks, but we don't want to pollute the asm-generic code to support the generic spinlocks without qrwlocks. Thus we pollute the RISC-V code, but just until the next commit as it's all going away. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Guo Ren <guoren@kernel.org> Tested-by: Heiko Stuebner <heiko@sntech.de> Tested-by: Conor Dooley <conor.dooley@microchip.com> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  openrisc: Move to ticket-spinlock  (Peter Zijlstra)
We have no indications that openrisc meets the qspinlock requirements, so move to ticket-spinlock as that is more likely to be correct. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Stafford Horne <shorne@gmail.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-11  x86/vsyscall: Remove CONFIG_LEGACY_VSYSCALL_EMULATE  (Andy Lutomirski)
CONFIG_LEGACY_VSYSCALL_EMULATE is, as far as I know, only needed for the combined use of exotic and outdated debugging mechanisms with outdated binaries. At this point, no one should be using it. Eventually, dynamic switching of vsyscalls will be implemented, but this is much more complicated to support in EMULATE mode than XONLY mode. So let's force all the distros off of EMULATE mode. If anyone actually needs it, they can set vsyscall=emulate, and the kernel can then get away with refusing to support newer security models if that option is set. [ bp: Remove "we"s. ] Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Florian Weimer <fweimer@redhat.com> Link: https://lore.kernel.org/r/898932fe61db6a9d61bc2458fa2f6049f1ca9f5c.1652290558.git.luto@kernel.org
2022-05-11  swiotlb-xen: fix DMA_ATTR_NO_KERNEL_MAPPING on arm  (Christoph Hellwig)
swiotlb-xen uses very different ways to allocate coherent memory on x86 vs arm. On the former it allocates memory from the page allocator, while on the latter it reuses the dma-direct allocator that handles the complexities of non-coherent DMA on arm platforms. Unfortunately the complexities of trying to deal with the two cases in the swiotlb-xen.c code lead to a bug in the handling of DMA_ATTR_NO_KERNEL_MAPPING on arm. With the DMA_ATTR_NO_KERNEL_MAPPING flag the coherent memory allocator does not actually allocate coherent memory, but just a DMA handle for some memory that is DMA addressable by the device, but which does not have to have a kernel mapping. Thus dereferencing the return value will lead to kernel crashes and memory corruption.

Fix this by using the dma-direct allocator directly for arm, which works perfectly fine because on arm swiotlb-xen is only used when the domain is 1:1 mapped, and then simplifying the remaining code to only cater to the x86 case with DMA-coherent devices.

Reported-by: Rahul Singh <Rahul.Singh@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Rahul Singh <rahul.singh@arm.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Tested-by: Rahul Singh <rahul.singh@arm.com>
2022-05-11  x86: ACPI: Make mp_config_acpi_gsi() a void function  (Li kunyu)
Because the return value of mp_config_acpi_gsi() is not used, change it into a void function. Signed-off-by: Li kunyu <kunyu@nfschina.com> [ rjw: Subject and changelog rewrite ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-05-11  x86/hyperv: Disable hardlockup detector by default in Hyper-V guests  (Michael Kelley)
In newer versions of Hyper-V, the x86/x64 PMU can be virtualized into guest VMs by explicitly enabling it. Linux kernels are typically built to automatically enable the hardlockup detector if the PMU is found. To prevent the possibility of false positives due to the vagaries of VM scheduling, disable the PMU-based hardlockup detector by default in a VM on Hyper-V. The hardlockup detector can still be enabled by overriding the default with the nmi_watchdog=1 option on the kernel boot line or via sysctl at runtime. This change mimics the approach taken with KVM guests in commit 692297d8f968 ("watchdog: introduce the hardlockup_detector_disable() function"). Linux on ARM64 does not provide a PMU-based hardlockup detector, so there's no corresponding disable in the Hyper-V init code on ARM64. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/1652111063-6535-1-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
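The change itself is a one-liner in the Hyper-V platform init path; roughly (a sketch):

    #include <linux/nmi.h>

    static void __init ms_hyperv_init_platform(void)
    {
            ...
            /* Default the PMU-based hardlockup detector to off; the
             * nmi_watchdog=1 boot option can still re-enable it. */
            hardlockup_detector_disable();
            ...
    }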
2022-05-11  perf/ibs: Fix comment  (Ravi Bangoria)
s/IBS Op Data 2/IBS Op Data 1/ for MSR 0xc0011035. Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20220509044914.1473-9-ravi.bangoria@amd.com
2022-05-11  perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability attribute  (Ravi Bangoria)
A PMU driver can advertise certain features via a capability attribute (the 'caps' sysfs directory) which can be consumed by userspace tools like perf. Add a zen4_ibs_extensions capability attribute for the IBS pmus. This attribute will be enabled when CPUID_Fn8000001B_EAX[11] is set.

With the patch, on Zen4:

  $ ls /sys/bus/event_source/devices/ibs_op/caps
  zen4_ibs_extensions

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20220509044914.1473-5-ravi.bangoria@amd.com
2022-05-11  perf/amd/ibs: Add support for L3 miss filtering  (Ravi Bangoria)
IBS L3 miss filtering works by tagging an instruction on IBS counter overflow and generating an NMI if the tagged instruction causes an L3 miss. Samples without an L3 miss are discarded and the counter is reset with a random value (between 1-15 for the fetch pmu and 1-127 for the op pmu). This helps in reducing sampling overhead when the user is interested only in such samples. One of the use cases of such filtered samples is to feed data to a page-migration daemon in tiered memory systems. Add support for L3 miss filtering in the IBS driver via the new pmu attribute "l3missonly". Example usage:

  # perf record -a -e ibs_op/l3missonly=1/ --raw-samples sleep 5

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20220509044914.1473-4-ravi.bangoria@amd.com
2022-05-11  perf/amd/ibs: Use ->is_visible callback for dynamic attributes  (Ravi Bangoria)
Currently, some attributes are added at build time whereas others at boot time, depending on the IBS pmu capabilities. Instead, we can just add all attribute groups at build time but hide individual groups at boot time using the more appropriate ->is_visible() callback. Also, struct perf_ibs has a bunch of fields for pmu attributes which just pass on the pointer and do not do anything else. Remove them. Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20220509044914.1473-3-ravi.bangoria@amd.com
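The ->is_visible() pattern, sketched (the capability flag and attribute array names are illustrative):

    static umode_t
    zen4_ibs_extensions_is_visible(struct kobject *kobj,
                                   struct attribute *attr, int i)
    {
            /* Hide the whole group unless the CPU advertises the cap. */
            return ibs_caps & IBS_CAPS_ZEN4EXT ? attr->mode : 0;
    }

    static struct attribute_group group_zen4_ibs_extensions = {
            .attrs      = zen4_ibs_extensions_attrs,
            .is_visible = zen4_ibs_extensions_is_visible,
    };

The pmu core evaluates ->is_visible() per attribute when the group is registered, so the sysfs layout adapts to the hardware without any runtime list juggling in the driver.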