path: root/arch/riscv
Age  Commit message  Author
2023-11-30  riscv: dts: sophgo: Separate compatible specific for CV1800B soc  (Inochi Amaoto)
As CV180x and CV181x have identical layouts, it is OK to use the cv1800b basic device tree for the whole series. Move the CV1800B SoC-specific compatibles out of the common file. Signed-off-by: Inochi Amaoto <inochiama@outlook.com> Acked-by: Chen Wang <unicorn_wang@outlook.com> Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
2023-11-26  riscv: dts: microchip: move timebase-frequency to mpfs.dtsi  (Conor Dooley)
The timebase-frequency on PolarFire SoC is not set by an oscillator on the board, but rather by an internal divider, so move the property to mpfs.dtsi. This looks to be copy-pasta from the SiFive Unleashed as the comments in both places were almost identical. In the Unleashed's case this looks to actually be valid, as the clock is provided by a crystal on the PCB. Signed-off-by: Conor Dooley <conor.dooley@microchip.com> --- CC: Conor Dooley <conor.dooley@microchip.com> CC: Daire McNamara <daire.mcnamara@microchip.com> CC: Rob Herring <robh+dt@kernel.org> CC: Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org> CC: Paul Walmsley <paul.walmsley@sifive.com> CC: Palmer Dabbelt <palmer@dabbelt.com> CC: linux-riscv@lists.infradead.org CC: devicetree@vger.kernel.org
2023-11-23  arch: vdso: consolidate gettime prototypes  (Arnd Bergmann)
The VDSO functions are defined as globals in the kernel sources but intended to be called from userspace, so there is no need to declare them in a kernel side header. Without a prototype, this now causes warnings such as:
  arch/mips/vdso/vgettimeofday.c:14:5: error: no previous prototype for '__vdso_clock_gettime' [-Werror=missing-prototypes]
  arch/mips/vdso/vgettimeofday.c:28:5: error: no previous prototype for '__vdso_gettimeofday' [-Werror=missing-prototypes]
  arch/mips/vdso/vgettimeofday.c:36:5: error: no previous prototype for '__vdso_clock_getres' [-Werror=missing-prototypes]
  arch/mips/vdso/vgettimeofday.c:42:5: error: no previous prototype for '__vdso_clock_gettime64' [-Werror=missing-prototypes]
  arch/sparc/vdso/vclock_gettime.c:254:1: error: no previous prototype for '__vdso_clock_gettime' [-Werror=missing-prototypes]
  arch/sparc/vdso/vclock_gettime.c:282:1: error: no previous prototype for '__vdso_clock_gettime_stick' [-Werror=missing-prototypes]
  arch/sparc/vdso/vclock_gettime.c:307:1: error: no previous prototype for '__vdso_gettimeofday' [-Werror=missing-prototypes]
  arch/sparc/vdso/vclock_gettime.c:343:1: error: no previous prototype for '__vdso_gettimeofday_stick' [-Werror=missing-prototypes]
Most architectures have already added workarounds for these by adding declarations somewhere, but since these are all compatible, we should really just have one copy, with an #ifdef check for the 32-bit vs 64-bit variant, and use that everywhere. Unfortunately, the sparc and um versions are currently incompatible since they never added support for __vdso_clock_gettime64() in 32-bit userland. For the moment, I'm leaving this one out, as I can't easily test it and it requires a larger rework. Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
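A minimal sketch of what such a single shared header could look like, assuming a BUILD_VDSO32-style switch selects the 32-bit variants (names here are illustrative, not necessarily the exact header added by this commit):

    /* Shared vDSO time-function prototypes (sketch). */
    #ifndef _SKETCH_VDSO_GETTIME_H
    #define _SKETCH_VDSO_GETTIME_H

    #include <linux/types.h>

    struct __kernel_timespec;
    struct __kernel_old_timeval;
    struct old_timespec32;
    struct timezone;

    #ifdef BUILD_VDSO32
    int __vdso_clock_gettime(clockid_t clock, struct old_timespec32 *ts);
    int __vdso_clock_getres(clockid_t clock, struct old_timespec32 *res);
    #else
    int __vdso_clock_gettime(clockid_t clock, struct __kernel_timespec *ts);
    int __vdso_clock_getres(clockid_t clock, struct __kernel_timespec *res);
    #endif
    int __vdso_clock_gettime64(clockid_t clock, struct __kernel_timespec *ts);
    int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz);

    #endif /* _SKETCH_VDSO_GETTIME_H */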
2023-11-23  arch: consolidate arch_irq_work_raise prototypes  (Arnd Bergmann)
The prototype was hidden in an #ifdef on x86, which causes a warning: kernel/irq_work.c:72:13: error: no previous prototype for 'arch_irq_work_raise' [-Werror=missing-prototypes] Some architectures have a working prototype, while others don't. Fix this by providing it in only one place that is always visible. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Palmer Dabbelt <palmer@rivosinc.com> Acked-by: Guo Ren <guoren@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
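Conceptually the fix is just one always-visible declaration next to the weak default, roughly (sketch, not the exact hunk):

    /* include/linux/irq_work.h (sketch): a single declaration visible to all
     * architectures instead of per-arch copies hidden behind #ifdefs. */
    void arch_irq_work_raise(void);

    /* kernel/irq_work.c keeps providing the fallback for architectures that
     * have no IPI-like mechanism; they pick the work up on the next timer tick. */
    void __weak arch_irq_work_raise(void)
    {
    }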
2023-11-16  riscv: dts: sophgo: remove address-cells from intc node  (Conor Dooley)
A recent submission [1] from Rob has added additionalProperties: false to the interrupt-controller child node of RISC-V cpus, highlighting that the new cv1800b DT has been incorrectly using #address-cells. It has no child nodes, so #address-cells is not needed. Remove it. Link: https://patchwork.kernel.org/project/linux-riscv/patch/20230915201946.4184468-1-robh@kernel.org/ [1] Fixes: c3dffa879cca ("riscv: dts: sophgo: add initial CV1800B SoC device tree") Reviewed-by: Jisheng Zhang <jszhang@kernel.org> Acked-by: Chen Wang <unicorn_wang@outlook.com> Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
2023-11-14  Merge branch 'kvm-guestmemfd' into HEAD  (Paolo Bonzini)
Introduce several new KVM uAPIs to ultimately create a guest-first memory subsystem within KVM, a.k.a. guest_memfd. Guest-first memory allows KVM to provide features, enhancements, and optimizations that are kludgy or outright impossible to implement in a generic memory subsystem.

The core KVM ioctl() for guest_memfd is KVM_CREATE_GUEST_MEMFD, which, similar to the generic memfd_create(), creates an anonymous file and returns a file descriptor that refers to it. Again like "regular" memfd files, guest_memfd files live in RAM, have volatile storage, and are automatically released when the last reference is dropped. The key differences between memfd files (and every other memory subsystem) and guest_memfd files are that guest_memfd files are bound to their owning virtual machine, cannot be mapped, read, or written by userspace, and cannot be resized. guest_memfd files do however support PUNCH_HOLE, which can be used to convert a guest memory area between the shared and guest-private states.

A second KVM ioctl(), KVM_SET_MEMORY_ATTRIBUTES, allows userspace to specify attributes for a given page of guest memory. In the long term, it will likely be extended to allow userspace to specify per-gfn RWX protections, including allowing memory to be writable in the guest without it also being writable in host userspace.

The immediate and driving use case for guest_memfd is Confidential (CoCo) VMs, specifically AMD's SEV-SNP, Intel's TDX, and KVM's own pKVM. For such use cases, being able to map memory into KVM guests without requiring said memory to be mapped into the host is a hard requirement. While SEV+ and TDX prevent untrusted software from reading guest private data by encrypting guest memory, pKVM provides confidentiality and integrity *without* relying on memory encryption. In addition, with SEV-SNP and especially TDX, accessing guest private memory can be fatal to the host, i.e. KVM must prevent host userspace from accessing guest memory irrespective of hardware behavior.

Long term, guest_memfd may be useful for use cases beyond CoCo VMs, for example hardening userspace against unintentional accesses to guest memory. As mentioned earlier, KVM's ABI uses userspace VMA protections to define the allowed guest protections (with an exception granted to mapping guest memory executable), and similarly KVM currently requires the guest mapping size to be a strict subset of the host userspace mapping size. Decoupling the mapping sizes would allow userspace to precisely map only what is needed and with the required permissions, without impacting guest performance.

A guest-first memory subsystem also provides clearer line of sight to things like a dedicated memory pool (for slice-of-hardware VMs) and elimination of "struct page" (for offload setups where userspace _never_ needs to DMA from or into guest memory).

guest_memfd is the result of 3+ years of development and exploration; taking on memory management responsibilities in KVM was not the first, second, or even third choice for supporting CoCo VMs. But after many failed attempts to avoid KVM-specific backing memory, and looking at where things ended up, it is quite clear that of all approaches tried, guest_memfd is the simplest, most robust, and most extensible, and the right thing to do for KVM and the kernel at-large.

The "development cycle" for this version is going to be very short; ideally, next week I will merge it as is in kvm/next, taking this through the KVM tree for 6.8 immediately after the end of the merge window.
The series is still based on 6.6 (plus KVM changes for 6.7) so it will require a small fixup for changes to get_file_rcu() introduced in 6.7 by commit 0ede61d8589c ("file: convert to SLAB_TYPESAFE_BY_RCU"). The fixup will be done as part of the merge commit, and most of the text above will become the commit message for the merge. Pending post-merge work includes: - hugepage support - looking into using the restrictedmem framework for guest memory - introducing a testing mechanism to poison memory, possibly using the same memory attributes introduced here - SNP and TDX support There are two non-KVM patches buried in the middle of this series: fs: Rename anon_inode_getfile_secure() and anon_inode_getfd_secure() mm: Add AS_UNMOVABLE to mark mapping as completely unmovable The first is small and mostly suggested-by Christian Brauner; the second a bit less so but it was written by an mm person (Vlastimil Babka).
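For orientation, here is a hedged userspace sketch of the flow described above; the ioctl, flag, and struct names (KVM_CREATE_GUEST_MEMFD, KVM_SET_USER_MEMORY_REGION2, KVM_MEM_GUEST_MEMFD, KVM_SET_MEMORY_ATTRIBUTES, KVM_MEMORY_ATTRIBUTE_PRIVATE) are as used by the series and require UAPI headers that contain it:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Create a guest_memfd, bind it to memslot 0, and mark the range private.
     * Sketch only: error handling is minimal and a real VMM would also supply
     * userspace_addr for the shared portion of the slot. */
    static int setup_private_region(int vm_fd, __u64 gpa, __u64 size)
    {
    	struct kvm_create_guest_memfd gmem = { .size = size };
    	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

    	if (gmem_fd < 0)
    		return -1;

    	struct kvm_userspace_memory_region2 region = {
    		.slot = 0,
    		.flags = KVM_MEM_GUEST_MEMFD,
    		.guest_phys_addr = gpa,
    		.memory_size = size,
    		.guest_memfd = gmem_fd,
    	};
    	if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region))
    		return -1;

    	struct kvm_memory_attributes attr = {
    		.address = gpa,
    		.size = size,
    		.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
    	};
    	return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);
    }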
2023-11-13  riscv: dts: renesas: Convert isa detection to new properties  (Conor Dooley)
Convert the RZ/Five devicetrees to use the new properties "riscv,isa-base" & "riscv,isa-extensions". For compatibility with other projects, "riscv,isa" remains. Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20231009-smog-gag-3ba67e68126b@wendy Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2023-11-13  KVM: Convert KVM_ARCH_WANT_MMU_NOTIFIER to CONFIG_KVM_GENERIC_MMU_NOTIFIER  (Sean Christopherson)
Convert KVM_ARCH_WANT_MMU_NOTIFIER into a Kconfig and select it where appropriate to effectively maintain existing behavior. Using a proper Kconfig will simplify building more functionality on top of KVM's mmu_notifier infrastructure. Add a forward declaration of kvm_gfn_range to kvm_types.h so that including arch/powerpc/include/asm/kvm_ppc.h's with CONFIG_KVM=n doesn't generate warnings due to kvm_gfn_range being undeclared. PPC defines hooks for PR vs. HV without guarding them via #ifdeffery, e.g. bool (*unmap_gfn_range)(struct kvm *kvm, struct kvm_gfn_range *range); bool (*age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range); bool (*test_age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range); bool (*set_spte_gfn)(struct kvm *kvm, struct kvm_gfn_range *range); Alternatively, PPC could forward declare kvm_gfn_range, but there's no good reason not to define it in common KVM. Acked-by: Anup Patel <anup@brainfault.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Fuad Tabba <tabba@google.com> Tested-by: Fuad Tabba <tabba@google.com> Message-Id: <20231027182217.3615211-8-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-11-10  Merge tag 'riscv-for-linus-6.7-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  (Linus Torvalds)
Pull more RISC-V updates from Palmer Dabbelt:
 - Support for handling misaligned accesses in S-mode
 - Probing for misaligned access support is now properly cached and handled in parallel
 - PTDUMP now reflects the SW reserved bits, as well as the PBMT and NAPOT extensions
 - Performance improvements for TLB flushing
 - Support for many new relocations in the module loader
 - Various bug fixes and cleanups
* tag 'riscv-for-linus-6.7-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (51 commits)
  riscv: Optimize bitops with Zbb extension
  riscv: Rearrange hwcap.h and cpufeature.h
  drivers: perf: Do not broadcast to other cpus when starting a counter
  drivers: perf: Check find_first_bit() return value
  of: property: Add fw_devlink support for msi-parent
  RISC-V: Don't fail in riscv_of_parent_hartid() for disabled HARTs
  riscv: Fix set_memory_XX() and set_direct_map_XX() by splitting huge linear mappings
  riscv: Don't use PGD entries for the linear mapping
  RISC-V: Probe misaligned access speed in parallel
  RISC-V: Remove __init on unaligned_emulation_finish()
  RISC-V: Show accurate per-hart isa in /proc/cpuinfo
  RISC-V: Don't rely on positional structure initialization
  riscv: Add tests for riscv module loading
  riscv: Add remaining module relocations
  riscv: Avoid unaligned access when relocating modules
  riscv: split cache ops out of dma-noncoherent.c
  riscv: Improve flush_tlb_kernel_range()
  riscv: Make __flush_tlb_range() loop over pte instead of flushing the whole tlb
  riscv: Improve flush_tlb_range() for hugetlb pages
  riscv: Improve tlb_flush()
  ...
2023-11-09  riscv: Optimize bitops with Zbb extension  (Xiao Wang)
This patch leverages the alternative mechanism to dynamically optimize bitops (including __ffs, __fls, ffs, fls) with Zbb instructions. When Zbb ext is not supported by the runtime CPU, legacy implementation is used. If Zbb is supported, then the optimized variants will be selected via alternative patching. The legacy bitops support is taken from the generic C implementation as fallback. If the parameter is a build-time constant, we leverage compiler builtin to calculate the result directly, this approach is inspired by x86 bitops implementation. EFI stub runs before the kernel, so alternative mechanism should not be used there, this patch introduces a macro NO_ALTERNATIVE for this purpose. Signed-off-by: Xiao Wang <xiao.w.wang@intel.com> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20231031064553.2319688-3-xiao.w.wang@intel.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
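As a rough illustration of the two paths (compile-time constant folding vs. the Zbb instruction), leaving out the alternative-patching machinery the kernel uses to fall back on non-Zbb CPUs; the inline assembly assumes a toolchain that accepts ".option arch":

    /* Sketch of a Zbb-accelerated count-trailing-zeros, the core of __ffs().
     * 'word' must be non-zero, as for __ffs() itself. */
    static inline unsigned long sketch_ctz(unsigned long word)
    {
    	if (__builtin_constant_p(word))
    		return __builtin_ctzl(word);	/* let the compiler fold constants */

    	asm volatile (".option push\n"
    		      ".option arch, +zbb\n"
    		      "ctz %0, %1\n"
    		      ".option pop\n"
    		      : "=r" (word) : "r" (word));
    	return word;
    }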
2023-11-09  riscv: Rearrange hwcap.h and cpufeature.h  (Xiao Wang)
Now hwcap.h and cpufeature.h are mutually including each other, and most of the variable/API declarations in hwcap.h are implemented in cpufeature.c, so, it's better to move them into cpufeature.h and leave only macros for ISA extension logical IDs in hwcap.h. BTW, the riscv_isa_extension_mask macro is not used now, so this patch removes it. Suggested-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Xiao Wang <xiao.w.wang@intel.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231031064553.2319688-2-xiao.w.wang@intel.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-09  Merge patch "drivers: perf: Do not broadcast to other cpus when starting a counter"  (Palmer Dabbelt)
This is really just a single patch, but since the offending fix hasn't yet made it to my for-next I'm merging it here. Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-08  Merge patch series "Linux RISC-V AIA Preparatory Series"  (Palmer Dabbelt)
These two ended up in the AIA series, but they're really independent improvements. * b4-shazam-merge: of: property: Add fw_devlink support for msi-parent RISC-V: Don't fail in riscv_of_parent_hartid() for disabled HARTs Link: https://lore.kernel.org/r/20231027154254.355853-1-apatel@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-08  RISC-V: Don't fail in riscv_of_parent_hartid() for disabled HARTs  (Anup Patel)
The riscv_of_processor_hartid() used by riscv_of_parent_hartid() fails for HARTs disabled in the DT. This results in the following warning thrown by the RISC-V INTC driver for the E-core on SiFive boards: [ 0.000000] riscv-intc: unable to find hart id for /cpus/cpu@0/interrupt-controller The riscv_of_parent_hartid() is only expected to read the hartid from the DT so we directly call of_get_cpu_hwid() instead of calling riscv_of_processor_hartid(). Fixes: ad635e723e17 ("riscv: cpu: Add 64bit hartid support on RV64") Signed-off-by: Anup Patel <apatel@ventanamicro.com> Reviewed-by: Atish Patra <atishp@rivosinc.com> Link: https://lore.kernel.org/r/20231027154254.355853-2-apatel@ventanamicro.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-08  Merge tag 'riscv-for-linus-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  (Linus Torvalds)
Pull RISC-V updates from Palmer Dabbelt:
 - Support for cbo.zero in userspace
 - Support for CBOs on ACPI-based systems
 - A handful of improvements for the T-Head cache flushing ops
 - Support for software shadow call stacks
 - Various cleanups and fixes
* tag 'riscv-for-linus-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (31 commits)
  RISC-V: hwprobe: Fix vDSO SIGSEGV
  riscv: configs: defconfig: Enable configs required for RZ/Five SoC
  riscv: errata: prefix T-Head mnemonics with th.
  riscv: put interrupt entries into .irqentry.text
  riscv: mm: Update the comment of CONFIG_PAGE_OFFSET
  riscv: Using TOOLCHAIN_HAS_ZIHINTPAUSE marco replace zihintpause
  riscv/mm: Fix the comment for swap pte format
  RISC-V: clarify the QEMU workaround in ISA parser
  riscv: correct pt_level name via pgtable_l5/4_enabled
  RISC-V: Provide pgtable_l5_enabled on rv32
  clocksource: timer-riscv: Increase rating of clock_event_device for Sstc
  clocksource: timer-riscv: Don't enable/disable timer interrupt
  lkdtm: Fix CFI_BACKWARD on RISC-V
  riscv: Use separate IRQ shadow call stacks
  riscv: Implement Shadow Call Stack
  riscv: Move global pointer loading to a macro
  riscv: Deduplicate IRQ stack switching
  riscv: VMAP_STACK overflow detection thread-safe
  RISC-V: cacheflush: Initialize CBO variables on ACPI systems
  RISC-V: ACPI: RHCT: Add function to get CBO block sizes
  ...
2023-11-08  Merge patch series "riscv: Fix set_memory_XX() and set_direct_map_XX()"  (Palmer Dabbelt)
Alexandre Ghiti <alexghiti@rivosinc.com> says: Those 2 patches fix the set_memory_XX() and set_direct_map_XX() APIs, which in turn fix STRICT_KERNEL_RWX and memfd_secret(). Those were broken since the permission changes were not applied to the linear mapping because the linear mapping is mapped using hugepages and walk_page_range_novma() does not split such mappings. To fix that, patch 1 disables PGD mappings in the linear mapping as it is hard to propagate changes at this level in *all* the page tables, this has the downside of disabling PMD mapping for sv32 and PUD (1GB) mapping for sv39 in the linear mapping (for specific kernels, we could add a Kconfig to enable ARCH_HAS_SET_DIRECT_MAP and STRICT_KERNEL_RWX if needed, I'm pretty sure we'll discuss that). patch 2 implements the split of the huge linear mappings so that walk_page_range_novma() can properly apply the permissions. The whole split is protected with mmap_sem in write mode, but I'm wondering if that's enough, any opinion on that is appreciated. * b4-shazam-merge: riscv: Fix set_memory_XX() and set_direct_map_XX() by splitting huge linear mappings riscv: Don't use PGD entries for the linear mapping Link: https://lore.kernel.org/r/20231108075930.7157-1-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-08  riscv: Fix set_memory_XX() and set_direct_map_XX() by splitting huge linear ↵  (Alexandre Ghiti)
mappings When STRICT_KERNEL_RWX is set, any change of permissions on any kernel mapping (vmalloc/modules/kernel text...etc) should be applied on its linear mapping alias. The problem is that the riscv kernel uses huge mappings for the linear mapping and walk_page_range_novma() does not split those huge mappings. So this patchset implements such split in order to apply fine-grained permissions on the linear mapping. Below is the difference before and after (the first PUD mapping is split into PTE/PMD mappings): Before: ---[ Linear mapping ]--- 0xffffaf8000080000-0xffffaf8000200000 0x0000000080080000 1536K PTE D A G . . W R V 0xffffaf8000200000-0xffffaf8077c00000 0x0000000080200000 1914M PMD D A G . . W R V 0xffffaf8077c00000-0xffffaf8078800000 0x00000000f7c00000 12M PMD D A G . . . R V 0xffffaf8078800000-0xffffaf8078c00000 0x00000000f8800000 4M PMD D A G . . W R V 0xffffaf8078c00000-0xffffaf8079200000 0x00000000f8c00000 6M PMD D A G . . . R V 0xffffaf8079200000-0xffffaf807e600000 0x00000000f9200000 84M PMD D A G . . W R V 0xffffaf807e600000-0xffffaf807e716000 0x00000000fe600000 1112K PTE D A G . . W R V 0xffffaf807e717000-0xffffaf807e71a000 0x00000000fe717000 12K PTE D A G . . W R V 0xffffaf807e71d000-0xffffaf807e71e000 0x00000000fe71d000 4K PTE D A G . . W R V 0xffffaf807e722000-0xffffaf807e800000 0x00000000fe722000 888K PTE D A G . . W R V 0xffffaf807e800000-0xffffaf807fe00000 0x00000000fe800000 22M PMD D A G . . W R V 0xffffaf807fe00000-0xffffaf807ff54000 0x00000000ffe00000 1360K PTE D A G . . W R V 0xffffaf807ff55000-0xffffaf8080000000 0x00000000fff55000 684K PTE D A G . . W R V 0xffffaf8080000000-0xffffaf8400000000 0x0000000100000000 14G PUD D A G . . W R V After: ---[ Linear mapping ]--- 0xffffaf8000080000-0xffffaf8000200000 0x0000000080080000 1536K PTE D A G . . W R V 0xffffaf8000200000-0xffffaf8077c00000 0x0000000080200000 1914M PMD D A G . . W R V 0xffffaf8077c00000-0xffffaf8078800000 0x00000000f7c00000 12M PMD D A G . . . R V 0xffffaf8078800000-0xffffaf8078a00000 0x00000000f8800000 2M PMD D A G . . W R V 0xffffaf8078a00000-0xffffaf8078c00000 0x00000000f8a00000 2M PTE D A G . . W R V 0xffffaf8078c00000-0xffffaf8079200000 0x00000000f8c00000 6M PMD D A G . . . R V 0xffffaf8079200000-0xffffaf807e600000 0x00000000f9200000 84M PMD D A G . . W R V 0xffffaf807e600000-0xffffaf807e716000 0x00000000fe600000 1112K PTE D A G . . W R V 0xffffaf807e717000-0xffffaf807e71a000 0x00000000fe717000 12K PTE D A G . . W R V 0xffffaf807e71d000-0xffffaf807e71e000 0x00000000fe71d000 4K PTE D A G . . W R V 0xffffaf807e722000-0xffffaf807e800000 0x00000000fe722000 888K PTE D A G . . W R V 0xffffaf807e800000-0xffffaf807fe00000 0x00000000fe800000 22M PMD D A G . . W R V 0xffffaf807fe00000-0xffffaf807ff54000 0x00000000ffe00000 1360K PTE D A G . . W R V 0xffffaf807ff55000-0xffffaf8080000000 0x00000000fff55000 684K PTE D A G . . W R V 0xffffaf8080000000-0xffffaf8080800000 0x0000000100000000 8M PMD D A G . . W R V 0xffffaf8080800000-0xffffaf8080af6000 0x0000000100800000 3032K PTE D A G . . W R V 0xffffaf8080af6000-0xffffaf8080af8000 0x0000000100af6000 8K PTE D A G . X . R V 0xffffaf8080af8000-0xffffaf8080c00000 0x0000000100af8000 1056K PTE D A G . . W R V 0xffffaf8080c00000-0xffffaf8081a00000 0x0000000100c00000 14M PMD D A G . . W R V 0xffffaf8081a00000-0xffffaf8081a40000 0x0000000101a00000 256K PTE D A G . . W R V 0xffffaf8081a40000-0xffffaf8081a44000 0x0000000101a40000 16K PTE D A G . X . R V 0xffffaf8081a44000-0xffffaf8081a52000 0x0000000101a44000 56K PTE D A G . . 
W R V 0xffffaf8081a52000-0xffffaf8081a54000 0x0000000101a52000 8K PTE D A G . X . R V ... 0xffffaf809e800000-0xffffaf80c0000000 0x000000011e800000 536M PMD D A G . . W R V 0xffffaf80c0000000-0xffffaf8400000000 0x0000000140000000 13G PUD D A G . . W R V Note that this also fixes memfd_secret() syscall which uses set_direct_map_invalid_noflush() and set_direct_map_default_noflush() to remove the pages from the linear mapping. Below is the kernel page table while a memfd_secret() syscall is running, you can see all the !valid page table entries in the linear mapping: ... 0xffffaf8082240000-0xffffaf8082241000 0x0000000102240000 4K PTE D A G . . W R . 0xffffaf8082241000-0xffffaf8082250000 0x0000000102241000 60K PTE D A G . . W R V 0xffffaf8082250000-0xffffaf8082252000 0x0000000102250000 8K PTE D A G . . W R . 0xffffaf8082252000-0xffffaf8082256000 0x0000000102252000 16K PTE D A G . . W R V 0xffffaf8082256000-0xffffaf8082257000 0x0000000102256000 4K PTE D A G . . W R . 0xffffaf8082257000-0xffffaf8082258000 0x0000000102257000 4K PTE D A G . . W R V 0xffffaf8082258000-0xffffaf8082259000 0x0000000102258000 4K PTE D A G . . W R . 0xffffaf8082259000-0xffffaf808225a000 0x0000000102259000 4K PTE D A G . . W R V 0xffffaf808225a000-0xffffaf808225c000 0x000000010225a000 8K PTE D A G . . W R . 0xffffaf808225c000-0xffffaf8082266000 0x000000010225c000 40K PTE D A G . . W R V 0xffffaf8082266000-0xffffaf8082268000 0x0000000102266000 8K PTE D A G . . W R . 0xffffaf8082268000-0xffffaf8082284000 0x0000000102268000 112K PTE D A G . . W R V 0xffffaf8082284000-0xffffaf8082288000 0x0000000102284000 16K PTE D A G . . W R . 0xffffaf8082288000-0xffffaf808229c000 0x0000000102288000 80K PTE D A G . . W R V 0xffffaf808229c000-0xffffaf80822a0000 0x000000010229c000 16K PTE D A G . . W R . 0xffffaf80822a0000-0xffffaf80822a5000 0x00000001022a0000 20K PTE D A G . . W R V 0xffffaf80822a5000-0xffffaf80822a6000 0x00000001022a5000 4K PTE D A G . . . R V 0xffffaf80822a6000-0xffffaf80822ab000 0x00000001022a6000 20K PTE D A G . . W R V ... And when the memfd_secret() fd is released, the linear mapping is correctly reset: ... 0xffffaf8082240000-0xffffaf80822a5000 0x0000000102240000 404K PTE D A G . . W R V 0xffffaf80822a5000-0xffffaf80822a6000 0x00000001022a5000 4K PTE D A G . . . R V 0xffffaf80822a6000-0xffffaf80822af000 0x00000001022a6000 36K PTE D A G . . W R V ... Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20231108075930.7157-3-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
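The memfd_secret() case mentioned above can be exercised from userspace with something like the following sketch (assumes a libc that defines SYS_memfd_secret and a kernel built with CONFIG_SECRETMEM; on some kernels secretmem must also be enabled on the command line):

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
    	/* The backing pages of this fd are removed from the kernel's direct
    	 * (linear) map, which is exactly what set_direct_map_*() implements. */
    	int fd = syscall(SYS_memfd_secret, 0);
    	if (fd < 0)
    		return 1;
    	if (ftruncate(fd, 4096))
    		return 1;

    	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    	if (p == MAP_FAILED)
    		return 1;

    	strcpy(p, "not visible via the linear mapping");
    	munmap(p, 4096);
    	close(fd);
    	return 0;
    }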
2023-11-08  riscv: Don't use PGD entries for the linear mapping  (Alexandre Ghiti)
Propagating changes at this level is cumbersome as we need to go through all the page tables when that happens (either when changing the permissions or when splitting the mapping). Note that this prevents the use of 4MB mapping for sv32 and 1GB mapping for sv39 in the linear mapping. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20231108075930.7157-2-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-07  RISC-V: Probe misaligned access speed in parallel  (Evan Green)
Probing for misaligned access speed takes about 0.06 seconds. On a system with 64 cores, doing this in smp_callin() means it's done serially, extending boot time by 3.8 seconds. That's a lot of boot time. Instead of measuring each CPU serially, let's do the measurements on all CPUs in parallel. If we disable preemption on all CPUs, the jiffies stop ticking, so we can do this in stages of 1) everybody except core 0, then 2) core 0. The allocations are all done outside of on_each_cpu() to avoid calling alloc_pages() with interrupts disabled. For hotplugged CPUs that come in after the boot time measurement, register CPU hotplug callbacks, and do the measurement there. Interrupts are enabled in those callbacks, so they're fine to do alloc_pages() in. Reported-by: Jisheng Zhang <jszhang@kernel.org> Closes: https://lore.kernel.org/all/mhng-9359993d-6872-4134-83ce-c97debe1cf9a@palmer-ri-x1c9/T/#mae9b8f40016f9df428829d33360144dc5026bcbf Fixes: 584ea6564bca ("RISC-V: Probe for unaligned access speed") Signed-off-by: Evan Green <evan@rivosinc.com> Link: https://lore.kernel.org/r/20231106225855.3121724-1-evan@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
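A hedged sketch of the pattern (names are illustrative, not the kernel's actual symbols; the real patch additionally stages CPU 0 separately so jiffies keep ticking, and registers hotplug callbacks for late CPUs):

    #include <linux/cpumask.h>
    #include <linux/gfp.h>
    #include <linux/init.h>
    #include <linux/percpu.h>
    #include <linux/smp.h>

    #define PROBE_ORDER 2	/* illustrative buffer size */

    static DEFINE_PER_CPU(struct page *, probe_page);

    static void measure_one_cpu(void *unused)
    {
    	/* Runs on every CPU with interrupts disabled; the timed misaligned
    	 * copy loop over this CPU's buffer would go here. */
    	struct page *buf = this_cpu_read(probe_page);

    	(void)buf;
    }

    static int __init probe_all_cpus(void)
    {
    	int cpu;

    	/* alloc_pages() may sleep, so allocate everything up front. */
    	for_each_online_cpu(cpu)
    		per_cpu(probe_page, cpu) = alloc_pages(GFP_KERNEL, PROBE_ORDER);

    	on_each_cpu(measure_one_cpu, NULL, 1);	/* all CPUs measure in parallel */
    	return 0;
    }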
2023-11-07  RISC-V: Remove __init on unaligned_emulation_finish()  (Evan Green)
This function shouldn't be __init, since it's called during hotplug. The warning says it well enough: WARNING: modpost: vmlinux: section mismatch in reference: check_unaligned_access_all_cpus+0x13a (section: .text) -> unaligned_emulation_finish (section: .init.text) Signed-off-by: Evan Green <evan@rivosinc.com> Fixes: 71c54b3d169d ("riscv: report misaligned accesses emulation to hwprobe") Link: https://lore.kernel.org/r/20231106231105.3141413-1-evan@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-07  RISC-V: Show accurate per-hart isa in /proc/cpuinfo  (Evan Green)
In /proc/cpuinfo, most of the information we show for each processor is specific to that hart: marchid, mvendorid, mimpid, processor, hart, compatible, and the mmu size. But the ISA string gets filtered through a lowest common denominator mask, so that if one CPU is missing an ISA extension, no CPUs will show it. Now that we track the ISA extensions for each hart, let's report ISA extension info accurately per-hart in /proc/cpuinfo. We cannot change the "isa:" line, as usermode may be relying on that line to show only the common set of extensions supported across all harts. Add a new "hart isa" line instead, which reports the true set of extensions for that hart. Signed-off-by: Evan Green <evan@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20231106232439.3176268-1-evan@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-07  RISC-V: Don't rely on positional structure initialization  (Palmer Dabbelt)
Without this I get a bunch of warnings along the lines of
  arch/riscv/kernel/module.c:535:26: error: positional initialization of field in 'struct' declared with 'designated_init' attribute [-Werror=designated-init]
    535 | [R_RISCV_32] = { apply_r_riscv_32_rela },
This just makes the member initializers explicit instead of positional. I also aligned some of the table, but mostly just to make the batch editing go faster. Fixes: b51fc88cb35e ("Merge patch series "riscv: Add remaining module relocations and tests"") Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20231107155529.8368-1-palmer@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-07  Merge patch series "riscv: Add remaining module relocations and tests"  (Palmer Dabbelt)
Charlie Jenkins <charlie@rivosinc.com> says: A handful of module relocations were missing, this patch includes the remaining ones. I also wrote some test cases to ensure that module loading works properly. Some relocations cannot be supported in the kernel, these include the ones that rely on thread local storage and dynamic linking. This patch also overhauls the implementation of ADD/SUB/SET/ULEB128 relocations to handle overflow. "Overflow" is different for ULEB128 since it is a variable-length encoding that the compiler can be expected to generate enough space for. Instead of overflowing, ULEB128 will expand into the next 8-bit segment of the location. A psABI proposal [1] was merged that mandates that SET_ULEB128 and SUB_ULEB128 are paired, however the discussion following the merging of the pull request revealed that while the pull request was valid, it would be better for linkers to properly handle this overflow. This patch proactively implements this methodology for future compatibility. This can be tested by enabling KUNIT, RUNTIME_KERNEL_TESTING_MENU, and RISCV_MODULE_LINKING_KUNIT. [1] https://github.com/riscv-non-isa/riscv-elf-psabi-doc/pull/403 * b4-shazam-merge: riscv: Add tests for riscv module loading riscv: Add remaining module relocations riscv: Avoid unaligned access when relocating modules Link: https://lore.kernel.org/r/20231101-module_relocations-v9-0-8dfa3483c400@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
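For reference, ULEB128 is the standard variable-length encoding used by DWARF: each byte carries 7 bits of the value and bit 7 says whether another byte follows, which is why a larger-than-expected value simply spills into the next group rather than overflowing. A small self-contained sketch (illustrative, not the kernel's relocation code):

    #include <stddef.h>
    #include <stdint.h>

    /* Encode 'value' as ULEB128 into 'out', returning the number of bytes used. */
    static size_t uleb128_encode(uint64_t value, uint8_t *out)
    {
    	size_t n = 0;

    	do {
    		uint8_t byte = value & 0x7f;

    		value >>= 7;
    		if (value)
    			byte |= 0x80;	/* continuation bit: more groups follow */
    		out[n++] = byte;
    	} while (value);

    	return n;
    }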
2023-11-07  riscv: Add tests for riscv module loading  (Charlie Jenkins)
Add test cases for the two main groups of relocations added: SUB and SET, along with uleb128. Signed-off-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20231101-module_relocations-v9-3-8dfa3483c400@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-07  riscv: Add remaining module relocations  (Charlie Jenkins)
Add all final module relocations and add error logs explaining the ones that are not supported. Implement overflow checks for ADD/SUB/SET/ULEB128 relocations. Signed-off-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20231101-module_relocations-v9-2-8dfa3483c400@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-07  riscv: Avoid unaligned access when relocating modules  (Emil Renner Berthing)
With the C-extension, regular 32-bit instructions are not necessarily aligned on 4-byte boundaries. RISC-V instructions are in fact an ordered list of 16-bit little-endian "parcels", so access the instruction as such. This should also make the code work in case someone builds a big-endian RISC-V machine. Signed-off-by: Emil Renner Berthing <kernel@esmil.dk> Signed-off-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20231101-module_relocations-v9-1-8dfa3483c400@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
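In practice that means the relocation code reads and writes a 32-bit instruction as two little-endian 16-bit halves, roughly as below (sketch using the kernel's endian helpers, not the exact patch):

    #include <asm/byteorder.h>
    #include <linux/types.h>

    static u32 read_insn32(const __le16 *parcel)
    {
    	/* low parcel first, then high parcel */
    	return (u32)le16_to_cpu(parcel[0]) | ((u32)le16_to_cpu(parcel[1]) << 16);
    }

    static void write_insn32(__le16 *parcel, u32 insn)
    {
    	parcel[0] = cpu_to_le16((u16)insn);
    	parcel[1] = cpu_to_le16((u16)(insn >> 16));
    }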
2023-11-07  riscv: split cache ops out of dma-noncoherent.c  (Christoph Hellwig)
The cache ops are also used by the pmem code which is unconditionally built into the kernel. Move them into a separate file that is built based on the correct config option. Fixes: fd962781270e ("riscv: RISCV_NONSTANDARD_CACHE_OPS shouldn't depend on RISCV_DMA_NONCOHERENT") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Tested-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # Link: https://lore.kernel.org/r/20231028155101.1039049-1-hch@lst.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06  riscv: select ARCH_PROC_KCORE_TEXT  (Andreas Schwab)
This adds a separate segment for kernel text in /proc/kcore, which has a different address than the direct linear map. Signed-off-by: Andreas Schwab <schwab@suse.de> Link: https://lore.kernel.org/r/mvmh6m758ao.fsf@suse.de Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06  riscv: kernel: Use correct SYM_DATA_*() macro for data  (Clément Léger)
Some data were incorrectly annotated with SYM_FUNC_*() instead of SYM_DATA_*() ones. Use the correct ones. Signed-off-by: Clément Léger <cleger@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231024132655.730417-4-cleger@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06  riscv: Use SYM_*() assembly macros instead of deprecated ones  (Clément Léger)
ENTRY()/END()/WEAK() macros are deprecated and we should make use of the new SYM_*() macros [1] for better annotation of symbols. Replace the deprecated ones with the new ones and fix wrong usage of END()/ENDPROC() to correctly describe the symbols. [1] https://docs.kernel.org/core-api/asm-annotations.html Signed-off-by: Clément Léger <cleger@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231024132655.730417-3-cleger@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06riscv: use ".L" local labels in assembly when applicableClément Léger
For the sake of coherency, use local labels in assembly when applicable. This also avoids confusing kprobes when applying a kprobe, since the size of a function is computed by checking where the next visible symbol is located. That might end up computing a function size that is way shorter than expected and thus failing to apply kprobes to the specified offset. Signed-off-by: Clément Léger <cleger@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20231024132655.730417-2-cleger@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06  riscv: boot: Fix creation of loader.bin  (Geert Uytterhoeven)
When flashing loader.bin for K210 using kflash:     [ERROR] This is an ELF file and cannot be programmed to flash directly: arch/riscv/boot/loader.bin Before, loader.bin relied on "OBJCOPYFLAGS := -O binary" in the main RISC-V Makefile to create a boot image with the right format. With this removed, the image is now created in the wrong (ELF) format. Fix this by adding an explicit rule. Fixes: 505b02957e74f0c5 ("riscv: Remove duplicate objcopy flag") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Link: https://lore.kernel.org/r/1086025809583809538dfecaa899892218f44e7e.1698159066.git.geert+renesas@glider.be Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06  Merge patch series "riscv: tlb flush improvements"  (Palmer Dabbelt)
Alexandre Ghiti <alexghiti@rivosinc.com> says: This series optimizes the tlb flushes on riscv which used to simply flush the whole tlb whatever the size of the range to flush or the size of the stride. Patch 3 introduces a threshold that is microarchitecture specific and will very likely be modified by vendors, not sure though which mechanism we'll use to do that (dt? alternatives? vendor initialization code?). * b4-shazam-merge: riscv: Improve flush_tlb_kernel_range() riscv: Make __flush_tlb_range() loop over pte instead of flushing the whole tlb riscv: Improve flush_tlb_range() for hugetlb pages riscv: Improve tlb_flush() Link: https://lore.kernel.org/r/20231030133027.19542-1-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06  riscv: Improve flush_tlb_kernel_range()  (Alexandre Ghiti)
This function used to simply flush the whole TLB of all harts; be more subtle and try to flush only the range. The problem is that we can only use PAGE_SIZE as the stride, since we don't know the size of the underlying mapping, so this function is only an improvement when the size of the region to flush is < threshold * PAGE_SIZE. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC Reviewed-by: Samuel Holland <samuel.holland@sifive.com> Tested-by: Samuel Holland <samuel.holland@sifive.com> Link: https://lore.kernel.org/r/20231030133027.19542-5-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06  riscv: Make __flush_tlb_range() loop over pte instead of flushing the whole tlb  (Alexandre Ghiti)
Currently, when the range to flush covers more than one page (a 4K page or a hugepage), __flush_tlb_range() flushes the whole tlb. Flushing the whole tlb comes with a greater cost than flushing a single entry so we should flush single entries up to a certain threshold so that: threshold * cost of flushing a single entry < cost of flushing the whole tlb. Co-developed-by: Mayuresh Chitale <mchitale@ventanamicro.com> Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com> Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC Reviewed-by: Samuel Holland <samuel.holland@sifive.com> Tested-by: Samuel Holland <samuel.holland@sifive.com> Link: https://lore.kernel.org/r/20231030133027.19542-4-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
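The resulting shape of the local flush decision is roughly the following (hedged sketch, not the exact kernel function; the real code also handles ASIDs and remote harts via SBI or IPIs):

    /* Flush page-by-page up to a threshold, otherwise flush the whole TLB. */
    static void sketch_local_flush_range(unsigned long start, unsigned long size,
    				     unsigned long stride, unsigned long threshold)
    {
    	unsigned long addr;

    	if (size / stride > threshold) {
    		asm volatile ("sfence.vma" : : : "memory");	/* full flush */
    		return;
    	}

    	for (addr = start; addr < start + size; addr += stride)
    		asm volatile ("sfence.vma %0" : : "r" (addr) : "memory");
    }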
2023-11-06  riscv: Improve flush_tlb_range() for hugetlb pages  (Alexandre Ghiti)
flush_tlb_range() uses a fixed stride of PAGE_SIZE and in its current form, when a hugetlb mapping needs to be flushed, flush_tlb_range() flushes the whole tlb: so set a stride of the size of the hugetlb mapping in order to only flush the hugetlb mapping. However, if the hugepage is a NAPOT region, all PTEs that constitute this mapping must be invalidated, so the stride size must actually be the size of the PTE. Note that THPs are directly handled by flush_pmd_tlb_range(). Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Samuel Holland <samuel.holland@sifive.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC Link: https://lore.kernel.org/r/20231030133027.19542-3-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-06  riscv: Improve tlb_flush()  (Alexandre Ghiti)
For now, tlb_flush() simply calls flush_tlb_mm() which results in a flush of the whole TLB. So let's use mmu_gather fields to provide a more fine-grained flush of the TLB. Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Samuel Holland <samuel.holland@sifive.com> Tested-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> # On RZ/Five SMARC Link: https://lore.kernel.org/r/20231030133027.19542-2-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  riscv: mm: update T-Head memory type definitions  (Jisheng Zhang)
Update T-Head memory type definitions according to C910 doc [1] For NC and IO, SH property isn't configurable, hardcoded as SH, so set SH for NOCACHE and IO. And also set bit[61](Bufferable) for NOCACHE according to the table 6.1 in the doc [1]. Link: https://github.com/T-head-Semi/openc910 [1] Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Reviewed-by: Guo Ren <guoren@kernel.org> Tested-by: Drew Fustini <dfustini@baylibre.com> Link: https://lore.kernel.org/r/20230912072510.2510-1-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  Merge patch series "riscv: vdso.lds.S: some improvement"  (Palmer Dabbelt)
Jisheng Zhang <jszhang@kernel.org> says:
This series renews one of my RFC patches from last year [1] and tries to improve the vdso layout a bit.
patch1 removes useless symbols
patch2 merges the .data section of the vdso into .rodata, because both are read-only
patch3 is the real renewed patch: it removes the hardcoded 0x800 .text start address. I rewrote the commit msg per Andrew's suggestions and moved .note, .eh_frame_hdr, and .eh_frame between .rodata and .text to keep the actual code well away from the non-instruction data.
* b4-shazam-merge:
  riscv: vdso.lds.S: remove hardcoded 0x800 .text start addr
  riscv: vdso.lds.S: merge .data section into .rodata section
  riscv: vdso.lds.S: drop __alt_start and __alt_end symbols
Link: https://lore.kernel.org/linux-riscv/20221123161805.1579-1-jszhang@kernel.org/ [1]
Link: https://lore.kernel.org/r/20230912072015.2424-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  riscv: vdso.lds.S: remove hardcoded 0x800 .text start addr  (Jisheng Zhang)
I believe the hardcoded 0x800 and the related comments come from the long-standing VDSO_TEXT_OFFSET in the x86 vdso code, but commit 5b9304933730 ("x86 vDSO: generate vdso-syms.lds") and commit f6b46ebf904f ("x86 vDSO: new layout") removed the comment and the hard coding for x86. As with x86 and other architectures, riscv doesn't need the rigid layout using VDSO_TEXT_OFFSET since it "no longer matters to the kernel". So we can remove the hard coding now; doing so yields a smaller vdso.so and aligns riscv with other architectures. Also, having enough separation between data and text is important for the I-cache, so, similar to x86, move .note, .eh_frame_hdr, and .eh_frame between .rodata and .text. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com> Link: https://lore.kernel.org/r/20230912072015.2424-4-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  riscv: vdso.lds.S: merge .data section into .rodata section  (Jisheng Zhang)
The .data section doesn't need to be separate from .rodata section, they are both readonly. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com> Link: https://lore.kernel.org/r/20230912072015.2424-3-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  riscv: vdso.lds.S: drop __alt_start and __alt_end symbols  (Jisheng Zhang)
These two symbols are not used, remove them. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Tested-by: Emil Renner Berthing <emil.renner.berthing@canonical.com> Link: https://lore.kernel.org/r/20230912072015.2424-2-jszhang@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  riscv: add userland instruction dump to RISC-V splats  (Yunhui Cui)
Add userland instruction dump and rename dump_kernel_instr() to dump_instr(). An example: [ 0.822439] Freeing unused kernel image (initmem) memory: 6916K [ 0.823817] Run /init as init process [ 0.839411] init[1]: unhandled signal 4 code 0x1 at 0x000000000005be18 in bb[10000+5fb000] [ 0.840751] CPU: 0 PID: 1 Comm: init Not tainted 5.14.0-rc4-00049-gbd644290aa72-dirty #187 [ 0.841373] Hardware name: , BIOS [ 0.841743] epc : 000000000005be18 ra : 0000000000079e74 sp : 0000003fffcafda0 [ 0.842271] gp : ffffffff816e9dc8 tp : 0000000000000000 t0 : 0000000000000000 [ 0.842947] t1 : 0000003fffc9fdf0 t2 : 0000000000000000 s0 : 0000000000000000 [ 0.843434] s1 : 0000000000000000 a0 : 0000003fffca0190 a1 : 0000003fffcafe18 [ 0.843891] a2 : 0000000000000000 a3 : 0000000000000000 a4 : 0000000000000000 [ 0.844357] a5 : 0000000000000000 a6 : 0000000000000000 a7 : 0000000000000000 [ 0.844803] s2 : 0000000000000000 s3 : 0000000000000000 s4 : 0000000000000000 [ 0.845253] s5 : 0000000000000000 s6 : 0000000000000000 s7 : 0000000000000000 [ 0.845722] s8 : 0000000000000000 s9 : 0000000000000000 s10: 0000000000000000 [ 0.846180] s11: 0000000000d144e0 t3 : 0000000000000000 t4 : 0000000000000000 [ 0.846616] t5 : 0000000000000000 t6 : 0000000000000000 [ 0.847204] status: 0000000200000020 badaddr: 00000000f0028053 cause: 0000000000000002 [ 0.848219] Code: f06f ff5f 3823 fa11 0113 fb01 2e23 0201 0293 0000 (8053) f002 [ 0.851016] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004 Signed-off-by: Yunhui Cui <cuiyunhui@bytedance.com> Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20230912021349.28302-1-cuiyunhui@bytedance.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  riscv: kprobes: allow writing to x0  (Nam Cao)
Instructions can write to x0, so we should simulate these instructions normally. Currently, the kernel hangs if an instruction that writes to x0 is simulated. Fixes: c22b0bcb1dd0 ("riscv: Add kprobes supported") Cc: stable@vger.kernel.org Signed-off-by: Nam Cao <namcaov@gmail.com> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Acked-by: Guo Ren <guoren@kernel.org> Link: https://lore.kernel.org/r/20230829182500.61875-1-namcaov@gmail.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
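The underlying rule is that x0 reads as zero and silently discards writes, so a simulation helper has to special-case rd == 0 instead of treating it as an error; schematically (illustrative helper, not the kernel's exact code):

    /* Write a simulated result back to the saved register file. */
    static void sketch_set_xreg(unsigned long *xregs, unsigned int rd,
    			    unsigned long val)
    {
    	if (rd == 0)
    		return;		/* x0 is hardwired to zero: drop the write */
    	xregs[rd] = val;
    }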
2023-11-05  riscv: provide riscv-specific is_trap_insn()  (Nam Cao)
uprobes expects is_trap_insn() to return true for any trap instruction, not just the one used for installing uprobes. The current default implementation only returns true for the 16-bit c.ebreak if the C extension is enabled. This can confuse uprobes if a 32-bit ebreak generates a trap exception from userspace: uprobes asks is_trap_insn(), which says there is no trap, so uprobes assumes a probe was there before but has been removed, and returns to the trap instruction. This causes an infinite loop of entering and exiting the trap handler. Instead of using the default implementation, implement this function specifically for riscv with checks for both ebreak and c.ebreak. Fixes: 74784081aac8 ("riscv: Add uprobes supported") Signed-off-by: Nam Cao <namcaov@gmail.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> Reviewed-by: Guo Ren <guoren@kernel.org> Link: https://lore.kernel.org/r/20230829083614.117748-1-namcaov@gmail.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
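The riscv-specific check boils down to recognising both encodings; per the ISA, ebreak is 0x00100073 and c.ebreak is 0x9002 (self-contained sketch rather than the kernel's helper macros):

    #include <stdbool.h>
    #include <stdint.h>

    static bool sketch_is_trap_insn(uint32_t insn)
    {
    	if ((insn & 0x3) != 0x3)			/* compressed 16-bit encoding */
    		return (insn & 0xffff) == 0x9002;	/* c.ebreak */
    	return insn == 0x00100073;			/* ebreak */
    }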
2023-11-05  Merge patch series "Improve PTDUMP and introduce new fields"  (Palmer Dabbelt)
Yu Chien Peter Lin <peterlin@andestech.com> says: This patchset enhances PTDUMP by providing additional information from pagetable entries. The first patch fixes the RSW field, while the second and third patches introduce the PBMT and NAPOT fields, respectively, for RV64 systems. * b4-shazam-merge: riscv: Introduce NAPOT field to PTDUMP riscv: Introduce PBMT field to PTDUMP riscv: Improve PTDUMP to show RSW with non-zero value Link: https://lore.kernel.org/r/20230921025022.3989723-1-peterlin@andestech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  riscv: Introduce NAPOT field to PTDUMP  (Yu Chien Peter Lin)
This patch introduces the NAPOT field to PTDUMP, allowing it to display the letter "N" for pages that have the 63rd bit set. Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230921025022.3989723-4-peterlin@andestech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
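For context, Svnapot advertises a naturally aligned power-of-two mapping through bit 63 of the PTE, so the dump only has to test that bit (sketch; the symbolic name is illustrative):

    #define SKETCH_PAGE_NAPOT	(1UL << 63)	/* Svnapot N bit */

    static char napot_flag(unsigned long pteval)
    {
    	return (pteval & SKETCH_PAGE_NAPOT) ? 'N' : '.';
    }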
2023-11-05  riscv: Introduce PBMT field to PTDUMP  (Yu Chien Peter Lin)
This patch introduces the PBMT field to the PTDUMP, so it can display the memory attributes for NC or IO. Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230921025022.3989723-3-peterlin@andestech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  riscv: Improve PTDUMP to show RSW with non-zero value  (Yu Chien Peter Lin)
The RSW field can be used to encode 2 bits of software-defined information. Currently, PTDUMP only prints "RSW" when its value is 1 or 3. To fix this issue and improve the debugging experience with PTDUMP, we redefine _PAGE_SPECIAL to its original value and use _PAGE_SOFT as the RSW mask, allowing it to print the RSW for any non-zero value. This patch also removes the val field from struct prot_bits as it is no longer needed. Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20230921025022.3989723-2-peterlin@andestech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-11-05  RISC-V: capitalise CMO op macros  (Conor Dooley)
The CMO op macros initially used lower case, as the original iteration of the ALT_CMO_OP alternative stringified the first parameter to finalise the assembly for the standard variant. As a knock-on, the T-Head versions of these CMOs had to use mixed-case defines. Commit dd23e9535889 ("RISC-V: replace cbom instructions with an insn-def") removed the asm construction with stringify, replacing it with an insn-def macro and rendering the lower case surplus to requirements. As far as I can tell from a brief check, CBO_zero does not see similar use and didn't require the mixed-case define in the first place. Replace the lower case characters now for consistency with other insn-def macros in the standard and T-Head forms, and adjust the callsites. Suggested-by: Andrew Jones <ajones@ventanamicro.com> Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Link: https://lore.kernel.org/r/20230915-aloe-dollar-994937477776@spud Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>