2023-06-07  objtool: Get rid of reloc->rel[a]  (Josh Poimboeuf)
    Get the relocation entry info from the underlying rsec->data.
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 35.12G
    - After:  peak heap memory consumption: 29.93G
    Link: https://lore.kernel.org/r/2be32323de6d8cc73179ee0ff14b71f4e7cefaa0.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
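For illustration, reading an entry on demand from the reloc section's data can look like the following libelf sketch; the helper name and error convention are assumptions, not objtool's actual code.

    #include <gelf.h>

    /* Illustrative only: fetch the idx'th relocation entry straight from
     * the reloc section's Elf_Data instead of caching a per-reloc copy. */
    static int read_reloc_entry(Elf_Data *rsec_data, int idx, GElf_Rela *rela)
    {
            if (!gelf_getrela(rsec_data, idx, rela))
                    return -1;
            return 0;
    }
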
2023-06-07  objtool: Shrink elf hash nodes  (Josh Poimboeuf)
    Instead of using hlist for the 'struct elf' hashes, use a custom
    single-linked list scheme.
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 36.89G
    - After:  peak heap memory consumption: 35.12G
    Link: https://lore.kernel.org/r/6e8cd305ed22e743c30d6e72cfdc1be20fb94cd4.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
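The shape of such a scheme, sketched generically rather than copied from objtool: one 'next' pointer per node instead of hlist_node's next/pprev pair, roughly halving the per-node list overhead.

    struct elf_hash_node {
            struct elf_hash_node *next;     /* one pointer vs. hlist's two */
    };

    /* Push a node onto the front of a hash bucket's chain. */
    static void elf_hash_add(struct elf_hash_node **bucket,
                             struct elf_hash_node *node)
    {
            node->next = *bucket;
            *bucket = node;
    }
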
2023-06-07  objtool: Shrink reloc->sym_reloc_entry  (Josh Poimboeuf)
    Convert it to a singly-linked list.
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 38.64G
    - After:  peak heap memory consumption: 36.89G
    Link: https://lore.kernel.org/r/a51f0a6f9bbf2494d5a3a449807307e78a940988.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Get rid of reloc->jump_table_start  (Josh Poimboeuf)
    Rework the jump table logic slightly so 'jump_table_start' is no longer
    needed.
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 40.37G
    - After:  peak heap memory consumption: 38.64G
    Link: https://lore.kernel.org/r/e1602ed8a6171ada3cfac0bd8449892ec82bd188.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Get rid of reloc->addend  (Josh Poimboeuf)
    Get the addend from the embedded GElf_Rel[a] struct.
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 42.10G
    - After:  peak heap memory consumption: 40.37G
    Link: https://lore.kernel.org/r/ad2354f95d9ddd86094e3f7687acfa0750657784.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Get rid of reloc->type  (Josh Poimboeuf)
    Get the type from the embedded GElf_Rel[a] struct.
    Link: https://lore.kernel.org/r/d1c1f8da31e4f052a2478aea585fcf355cacc53a.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Get rid of reloc->offset  (Josh Poimboeuf)
    Get the offset from the embedded GElf_Rel[a] struct.
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 43.83G
    - After:  peak heap memory consumption: 42.10G
    Link: https://lore.kernel.org/r/2b9ec01178baa346a99522710bf2e82159412e3a.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
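These three "Get rid of reloc->{addend,type,offset}" changes share one pattern: read the value from the embedded GElf_Rel[a] on demand instead of duplicating it in 'struct reloc'. A hedged sketch of such accessors, assuming the embedded struct is a field named 'rela' (an assumption for illustration):

    static inline GElf_Addr reloc_offset(struct reloc *reloc)
    {
            return reloc->rela.r_offset;
    }

    static inline GElf_Sxword reloc_addend(struct reloc *reloc)
    {
            return reloc->rela.r_addend;
    }

    static inline unsigned int reloc_type(struct reloc *reloc)
    {
            return GELF_R_TYPE(reloc->rela.r_info);
    }
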
2023-06-07  objtool: Get rid of reloc->idx  (Josh Poimboeuf)
    Use the array offset to calculate the reloc index.
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 45.56G
    - After:  peak heap memory consumption: 43.83G
    Link: https://lore.kernel.org/r/7351d2ebad0519027db14a32f6204af84952574a.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
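A sketch of the idea (field names are assumptions): with relocs stored contiguously, the index is just the pointer distance from the array base.

    /* Illustrative only: 'sec' is assumed to point back at the reloc
     * section, and '->relocs' at its backing array. */
    static inline unsigned int reloc_idx(struct reloc *reloc)
    {
            return reloc - reloc->sec->relocs;
    }
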
2023-06-07  objtool: Get rid of reloc->list  (Josh Poimboeuf)
    Now that all relocs are allocated in an array, the linked list is no
    longer needed.
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 49.02G
    - After:  peak heap memory consumption: 45.56G
    Link: https://lore.kernel.org/r/71e7a2c017dbc46bb497857ec97d67214f832d10.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Allocate relocs in advance for new rela sections  (Josh Poimboeuf)
    Similar to read_relocs(), allocate the reloc structs all together in an
    array rather than allocating them one at a time.
    Link: https://lore.kernel.org/r/5332d845c5a2d6c2d052075b381bfba8bcb67ed5.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Add for_each_reloc()  (Josh Poimboeuf)
    Link: https://lore.kernel.org/r/dbfcb1037d8b958e52d097b67829c4c6811c24bb.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
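As a rough illustration of what an array-based iterator like this can look like (the '->relocs' array and '->nr_relocs' count are assumed field names, not necessarily objtool's):

    /* Walk every reloc in a reloc section's backing array. */
    #define for_each_reloc(rsec, reloc)                                  \
            for (reloc = (rsec)->relocs;                                 \
                 reloc < (rsec)->relocs + (rsec)->nr_relocs;             \
                 reloc++)
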
2023-06-07  objtool: Don't free memory in elf_close()  (Josh Poimboeuf)
    It's not necessary, objtool's about to exit anyway.
    Link: https://lore.kernel.org/r/74bdb3058b8f029db8d5b3b5175f2a200804196d.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Keep GElf_Rel[a] structs synced  (Josh Poimboeuf)
    Keep the GElf_Rela structs synced with their 'struct reloc'
    counterparts instead of having to go back and "rebuild" them later.
    Link: https://lore.kernel.org/r/156d8a3e528a11e5c8577cf552890ed1f2b9567b.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Add elf_create_section_pair()  (Josh Poimboeuf)
    When creating an annotation section, allocate the reloc section data
    at the beginning. This simplifies the data model a bit and also saves
    memory due to the removal of malloc() in elf_rebuild_reloc_section().
    With allyesconfig + CONFIG_DEBUG_INFO:
    - Before: peak heap memory consumption: 53.49G
    - After:  peak heap memory consumption: 49.02G
    Link: https://lore.kernel.org/r/048e908f3ede9b66c15e44672b6dda992b1dae3e.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Add mark_sec_changed()  (Josh Poimboeuf)
    Ensure elf->changed always gets set when sec->changed gets set.
    Link: https://lore.kernel.org/r/9a810a8d2e28af6ba07325362d0eb4703bb09d3a.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
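A plausible shape for such a helper, sketched from the one-line description above rather than from the actual patch:

    static inline void mark_sec_changed(struct elf *elf, struct section *sec,
                                        bool changed)
    {
            sec->changed = changed;

            /* Never let a dirty section leave the file-level flag unset. */
            if (changed)
                    elf->changed = true;
    }
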
2023-06-07  objtool: Fix reloc_hash size  (Josh Poimboeuf)
    With CONFIG_DEBUG_INFO, DWARF creates a lot of relocations and
    reloc_hash is woefully undersized, which can affect performance
    significantly. Fix that.
    Link: https://lore.kernel.org/r/38ef60dc8043270bf3b9dfd139ae2a30ca3f75cc.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Consolidate rel/rela handling  (Josh Poimboeuf)
    The GElf_Rel[a] structs have more similarities than differences. It's
    safe to hard-code the assumptions about their shared fields as they
    will never change. Consolidate their handling where possible, getting
    rid of duplicated code.
    Also, at least for now we only ever create rela sections, so simplify
    the relocation creation code to be rela-only.
    Link: https://lore.kernel.org/r/dcabf6df400ca500ea929f1e4284f5e5ec0b27c8.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Improve reloc naming  (Josh Poimboeuf)
    - The term "reloc" is overloaded to mean both "an instance of struct
      reloc" and "a reloc section". Change the latter to "rsec".
    - For variable names, use "sec" for regular sections and "rsec" for
      rela sections to prevent them getting mixed up.
    - For struct reloc variables, use "reloc" instead of "rel" everywhere
      for consistency.
    Link: https://lore.kernel.org/r/8b790e403df46f445c21003e7893b8f53b99a6f3.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Remove flags argument from elf_create_section()  (Josh Poimboeuf)
    Simplify the elf_create_section() interface a bit by removing the
    flags argument. Most callers don't care about changing the section
    header flags. If needed, they can be modified afterwards, just like
    any other section header field.
    Link: https://lore.kernel.org/r/515235d9cf62637a14bee37bfa9169ef20065471.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Tidy elf.h  (Josh Poimboeuf)
    Reorganize elf.h a bit:
    - Move the prototypes higher up so they can be used by the inline
      functions.
    - Move hash-related code to the bottom.
    - Remove the unused ELF_HASH_BITS macro.
    No functional changes.
    Link: https://lore.kernel.org/r/b1490ed85951868219a6ece177a7cd30a6454d66.1685464332.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  drm/vmwgfx: Add unwind hints around RBP clobber  (Josh Poimboeuf)
    VMware high-bandwidth hypercalls take the RBP register as input. This
    breaks basic frame pointer convention, as RBP should never be
    clobbered. So frame pointer unwinding is broken for the instructions
    surrounding the hypercalls.
    Fortunately this doesn't break live patching with CONFIG_FRAME_POINTER,
    as it only unwinds from blocking tasks, and stack traces from preempted
    tasks are already marked unreliable anyway.
    However, for live patching with ORC, this could actually be a
    theoretical problem if vmw_port_hb_{in,out}() were still compiled with
    a frame pointer due to having an aligned stack. In practice that hasn't
    seemed to be an issue since the objtool warnings have only been seen
    with CONFIG_FRAME_POINTER.
    Add unwind hint annotations to tell the ORC unwinder to mark stack
    traces as unreliable. Fixes the following warnings:
      vmlinux.o: warning: objtool: vmw_port_hb_in+0x1df: return with modified stack frame
      vmlinux.o: warning: objtool: vmw_port_hb_out+0x1dd: return with modified stack frame
    Fixes: 89da76fde68d ("drm/vmwgfx: Add VMWare host messaging capability")
    Reported-by: kernel test robot <lkp@intel.com>
    Link: https://lore.kernel.org/oe-kbuild-all/202305160135.97q0Elax-lkp@intel.com/
    Link: https://lore.kernel.org/r/4c795f2d87bc0391cf6543bcb224fa540b55ce4b.1685981486.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  objtool: Allow stack operations in UNWIND_HINT_UNDEFINED regions  (Josh Poimboeuf)
    If the code specified UNWIND_HINT_UNDEFINED, skip the "undefined stack
    state" warning due to a stack operation. Just ignore the stack op and
    continue to propagate the undefined state.
    Link: https://lore.kernel.org/r/820c5b433f17c84e8761fb7465a8d319d706b1cf.1685981486.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  arm64: cpufeature: fold cpus_set_cap() into update_cpu_capabilities()  (Mark Rutland)
    We only use cpus_set_cap() in update_cpu_capabilities(), where we
    open-code an analogous update to boot_cpucaps.
    Due to the way the cpucap_ptrs[] array is initialized, we know that the
    capability number cannot be greater than or equal to ARM64_NCAPS, so
    the warning is superfluous.
    Fold cpus_set_cap() into update_cpu_capabilities(), matching what we do
    for the boot_cpucaps, and making the relationship between the two a bit
    clearer.
    There should be no functional change as a result of this patch.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20230607164846.3967305-5-mark.rutland@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-06-07  arm64: cpufeature: use cpucap naming  (Mark Rutland)
    To more clearly align the various users of the cpucap enumeration, this
    patch changes the cpufeature code to use the term `cpucap` in favour of
    `cpu_hwcap`. This more clearly aligns with other users of the cpucaps,
    and avoids confusion with the ELF hwcaps.
    There should be no functional change as a result of this patch; this is
    purely a renaming exercise.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Mark Brown <broonie@kernel.org>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20230607164846.3967305-4-mark.rutland@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-06-07  arm64: alternatives: use cpucap naming  (Mark Rutland)
    To more clearly align the various users of the cpucap enumeration, this
    patch changes the alternative code to use the term `cpucap` in favour
    of `feature`. The alternative_has_feature_{likely,unlikely}() functions
    are renamed to alternative_has_cap_{likely,unlikely}() to more clearly
    align with the cpus_have_{const_,}cap() helpers.
    At the same time remove the stale comment referring to the "ARM64_CB
    bit", which is evidently a typo for ARM64_CB_PATCH, which was removed
    in commit:
      4c0bd995d73ed889 ("arm64: alternatives: have callbacks take a cap")
    There should be no functional change as a result of this patch; this is
    purely a renaming exercise.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Mark Brown <broonie@kernel.org>
    Cc: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20230607164846.3967305-3-mark.rutland@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-06-07  arm64: standardise cpucap bitmap names  (Mark Rutland)
    The 'cpu_hwcaps' and 'boot_capabilities' bitmaps have the same
    enumerated bits, but are named wildly differently for no good reason.
    The terms 'hwcaps' and 'capabilities' have become ambiguous over time
    (e.g. due to clashes with ELF hwcaps and the structures used to manage
    feature detection), and it would be nicer to use 'cpucaps', matching
    the <asm/cpucaps.h> header the enumerated bit indices are defined in.
    While this isn't a functional problem, it makes the code harder than
    necessary to understand, and hard to extend with related functionality
    (e.g. per-cpu cpucap bitmaps).
    To that end, this patch renames `boot_capabilities` to `boot_cpucaps`
    and `cpu_hwcaps` to `system_cpucaps`. This more clearly indicates the
    relationship between the two and aligns with terminology used elsewhere
    in our feature management code.
    This change was scripted with:
    | find . -type f -name '*.[chS]' -print0 | \
    |   xargs -0 sed -i 's/\<boot_capabilities\>/boot_cpucaps/'
    | find . -type f -name '*.[chS]' -print0 | \
    |   xargs -0 sed -i 's/\<cpu_hwcaps\>/system_cpucaps/'
    ... and the instance of "cpu_hwcap" (without a trailing "s") in
    <asm/mmu_context.h> corrected manually to "system_cpucaps".
    Subsequent patches will adjust the naming of related functions to
    better align with the `cpucap` naming.
    There should be no functional change as a result of this patch; this is
    purely a renaming exercise.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Mark Brown <broonie@kernel.org>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20230607164846.3967305-2-mark.rutland@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2023-06-07  x86/entry: Move thunk restore code into thunk functions  (Josh Poimboeuf)
    There's no need for both thunk functions to jump to the same shared
    thunk restore code which lives outside the thunk function boundaries.
    It disrupts i-cache locality and confuses objtool. Keep it simple by
    keeping each thunk's restore code self-contained within the function.
    Fixes a bunch of false positive "missing __noreturn" warnings like:
      vmlinux.o: warning: objtool: do_arch_prctl_common+0xf4: preempt_schedule_thunk() is missing a __noreturn annotation
    Fixes: fedb724c3db5 ("objtool: Detect missing __noreturn annotations")
    Reported-by: kernel test robot <lkp@intel.com>
    Closes: https://lore.kernel.org/oe-kbuild-all/202305281037.3PaI3tW4-lkp@intel.com/
    Link: https://lore.kernel.org/r/46aa8aeb716f302e22e1673ae15ee6fe050b41f4.1685488050.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  Revert "x86/orc: Make it callthunk aware"  (Josh Poimboeuf)
    Commit 396e0b8e09e8 ("x86/orc: Make it callthunk aware") attempted to
    deal with the fact that function prefix code didn't have ORC coverage.
    However, it didn't work as advertised. Use of the "null" ORC entry just
    caused affected unwinds to end early.
    The root cause has now been fixed with commit 5743654f5e2e ("objtool:
    Generate ORC data for __pfx code").
    Revert most of commit 396e0b8e09e8 ("x86/orc: Make it callthunk aware").
    The is_callthunk() function remains as it's now used by other code.
    Link: https://lore.kernel.org/r/a05b916ef941da872cbece1ab3593eceabd05a79.1684245404.git.jpoimboe@kernel.org
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  lkdtm: Avoid objtool/ibt warning  (Peter Zijlstra)
    For certain configs objtool will complain like:
      vmlinux.o: warning: objtool: lkdtm_UNSET_SMEP+0x1c3: relocation to !ENDBR: native_write_cr4+0x41
    What happens is that GCC optimizes the loop:
      insn = (unsigned char *)native_write_cr4;
      for (i = 0; i < MOV_CR4_DEPTH; i++)
    to read something like:
      for (insn = (unsigned char *)native_write_cr4;
           insn < (unsigned char *)native_write_cr4 + MOV_CR4_DEPTH;
           insn++)
    Which then obviously generates the text reference native_write_cr4+0x41.
    Since none of this is a fast path, simply confuse GCC enough to inhibit
    this optimization.
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/Y3JdgbXRV0MNZ+9h@hirez.programming.kicks-ass.net
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
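A standalone illustration of one common way to inhibit exactly this kind of optimization (the kernel's OPTIMIZER_HIDE_VAR() does the same thing with an empty asm); whether this is the precise mechanism used by the lkdtm fix is an assumption here, and the helper below is purely illustrative:

    #define HIDE_FROM_OPTIMIZER(var) __asm__ ("" : "+r" (var))

    /* Reads 'depth' code bytes from 'func' without letting the compiler
     * strength-reduce the loop into 'func + i' text references. */
    static unsigned char peek_insns(void (*func)(void), int depth)
    {
            const unsigned char *insn = (const unsigned char *)func;
            unsigned char sum = 0;

            HIDE_FROM_OPTIMIZER(insn);
            for (int i = 0; i < depth; i++)
                    sum += insn[i];
            return sum;
    }
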
2023-06-07  tools: Remove unnecessary variables  (Lu Hongfei)
    There are several places where warnings variables are not needed,
    remove them and directly return 0.
    Signed-off-by: Lu Hongfei <luhongfei@vivo.com>
    Link: https://lore.kernel.org/r/20230530075649.21661-1-luhongfei@vivo.com
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-06-07  afs: Fix setting of mtime when creating a file/dir/symlink  (David Howells)
    kafs incorrectly passes a zero mtime (ie. 1st Jan 1970) to the server
    when creating a file, dir or symlink because the mtime recorded in the
    afs_operation struct gets passed to the server by the marshalling
    routines, but the afs_mkdir(), afs_create() and afs_symlink() functions
    don't set it.
    This gets masked if a file or directory is subsequently modified.
    Fix this by filling in op->mtime before calling the create op.
    Fixes: e49c7b2f6de7 ("afs: Build an abstraction around an "operation" concept")
    Signed-off-by: David Howells <dhowells@redhat.com>
    Reviewed-by: Jeffrey Altman <jaltman@auristor.com>
    Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
    cc: linux-afs@lists.infradead.org
    cc: linux-fsdevel@vger.kernel.org
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-06-07  KVM: arm64: Use raw_smp_processor_id() in kvm_pmu_probe_armpmu()  (Oliver Upton)
    Sebastian reports that commit 1c913a1c35aa ("KVM: arm64: Iterate
    arm_pmus list to probe for default PMU") introduced the following
    splat with CONFIG_DEBUG_PREEMPT enabled:
      [70506.110187] BUG: using smp_processor_id() in preemptible [00000000] code: qemu-system-aar/3078242
      [70506.119077] caller is debug_smp_processor_id+0x20/0x30
      [70506.124229] CPU: 129 PID: 3078242 Comm: qemu-system-aar Tainted: G W 6.4.0-rc5 #25
      [70506.133176] Hardware name: GIGABYTE R181-T92-00/MT91-FS4-00, BIOS F34 08/13/2020
      [70506.140559] Call trace:
      [70506.142993]  dump_backtrace+0xa4/0x130
      [70506.146737]  show_stack+0x20/0x38
      [70506.150040]  dump_stack_lvl+0x48/0x60
      [70506.153704]  dump_stack+0x18/0x28
      [70506.157007]  check_preemption_disabled+0xe4/0x108
      [70506.161701]  debug_smp_processor_id+0x20/0x30
      [70506.166046]  kvm_arm_pmu_v3_set_attr+0x460/0x628
      [70506.170662]  kvm_arm_vcpu_arch_set_attr+0x88/0xd8
      [70506.175363]  kvm_arch_vcpu_ioctl+0x258/0x4a8
      [70506.179632]  kvm_vcpu_ioctl+0x32c/0x6b8
      [70506.183465]  __arm64_sys_ioctl+0xb4/0x100
      [70506.187467]  invoke_syscall+0x78/0x108
      [70506.191205]  el0_svc_common.constprop.0+0x4c/0x100
      [70506.195984]  do_el0_svc+0x34/0x50
      [70506.199287]  el0_svc+0x34/0x108
      [70506.202416]  el0t_64_sync_handler+0xf4/0x120
      [70506.206674]  el0t_64_sync+0x194/0x198
    Fix the issue by using the raw variant that bypasses the debug
    assertion. While at it, stick all of the nuance and UAPI baggage into a
    comment for posterity.
    Fixes: 1c913a1c35aa ("KVM: arm64: Iterate arm_pmus list to probe for default PMU")
    Reported-by: Sebastian Ott <sebott@redhat.com>
    Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20230606184814.456743-1-oliver.upton@linux.dev
2023-06-07  KVM: arm64: Restore GICv2-on-GICv3 functionality  (Marc Zyngier)
    When reworking the vgic locking, the vgic distributor registration got
    simplified, which was a very good cleanup. But just a tad too radical,
    as we now register the *native* vgic only, ignoring the GICv2-on-GICv3
    that allows pre-historic VMs (or so I thought) to run.
    As it turns out, QEMU still defaults to GICv2 in some cases, and this
    breaks Nathan's setup!
    Fix it by propagating the *requested* vgic type rather than the host's
    version.
    Fixes: 59112e9c390b ("KVM: arm64: vgic: Fix a circular locking issue")
    Reported-by: Nathan Chancellor <nathan@kernel.org>
    Tested-by: Nathan Chancellor <nathan@kernel.org>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20230606221525.GA2269598@dev-arch.thelio-3990X
2023-06-07  riscv: Check the virtual alignment before choosing a map size  (Alexandre Ghiti)
    We used to only check the alignment of the physical address to decide
    which mapping would fit for a certain region of the linear mapping, but
    it is not enough since the virtual address must also be aligned, so
    check that too.
    Fixes: 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear mapping")
    Reported-by: Song Shuai <songshuaishuai@tinylab.org>
    Link: https://lore.kernel.org/linux-riscv/tencent_7C3B580B47C1B17C16488EC1@qq.com/
    Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
    Link: https://lore.kernel.org/r/20230607125851.63370-1-alexghiti@rivosinc.com
    Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
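As a minimal illustration of the dual check (not the kernel's exact helper), a candidate mapping size is only usable when both the physical and the virtual address are aligned to it:

    /* Illustrative only: pick a page-table level's block size only if
     * both sides of the mapping are aligned to it. */
    static bool map_size_fits(phys_addr_t pa, unsigned long va,
                              unsigned long size)
    {
            return IS_ALIGNED(pa, size) && IS_ALIGNED(va, size);
    }
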
2023-06-07  riscv: Fix kfence now that the linear mapping can be backed by PUD/P4D/PGD  (Alexandre Ghiti)
    The RISC-V kfence implementation used to rely on the fact that the
    linear mapping was backed by at most PMD hugepages, which is not true
    anymore since commit 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for
    the linear mapping").
    Instead of splitting PUD/P4D/PGD mappings afterwards, directly map the
    kfence pool region using PTE mappings by allocating this region before
    setup_vm_final().
    Reported-by: syzbot+a74d57bddabbedd75135@syzkaller.appspotmail.com
    Closes: https://syzkaller.appspot.com/bug?extid=a74d57bddabbedd75135
    Fixes: 3335068f8721 ("riscv: Use PUD/P4D/PGD pages for the linear mapping")
    Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
    Link: https://lore.kernel.org/r/20230606130444.25090-1-alexghiti@rivosinc.com
    Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-06-07  riscv: mm: Ensure prot of VM_WRITE and VM_EXEC must be readable  (Hsieh-Tseng Shen)
    Commit 8aeb7b17f04e ("RISC-V: Make mmap() with PROT_WRITE imply
    PROT_READ") allows riscv to use mmap() with PROT_WRITE only, and
    meanwhile mmap() with w+x is also permitted. However, when userspace
    tries to access such a page with PROT_WRITE|PROT_EXEC, it causes an
    infinite loop of load page faults and triggers a soft lockup.
    According to the riscv privileged spec, "Writable pages must also be
    marked readable". The fix is to drop `PAGE_COPY_READ_EXEC` and use
    `PAGE_COPY_EXEC` instead. This aligns with other arches (e.g. arm64)
    for protection_map.
    Fixes: 8aeb7b17f04e ("RISC-V: Make mmap() with PROT_WRITE imply PROT_READ")
    Signed-off-by: Hsieh-Tseng Shen <woodrow.shen@sifive.com>
    Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
    Link: https://lore.kernel.org/r/20230425102828.1616812-1-woodrow.shen@sifive.com
    Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-06-07  block: fix rootwait=  (Christoph Hellwig)
    Failures to look up the gendisk must return -ENODEV so that rootwait
    retries the lookup, instead of -EINVAL which exits early.
    Fixes: cf056a431215 ("init: improve the name_to_dev_t interface")
    Reported-by: Fabio Estevam <festevam@gmail.com>
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Tested-by: Fabio Estevam <festevam@gmail.com>
    Link: https://lore.kernel.org/r/20230607135746.92995-1-hch@lst.de
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-07  blk-cgroup: Reinit blkg_iostat_set after clearing in blkcg_reset_stats()  (Waiman Long)
    When blkg_alloc() is called to allocate a blkcg_gq structure with the
    associated blkg_iostat_set's, there are 2 fields within blkg_iostat_set
    that require proper initialization - blkg & sync. The former field was
    introduced by commit 3b8cc6298724 ("blk-cgroup: Optimize
    blkcg_rstat_flush()") while the latter was introduced by commit
    f73316482977 ("blk-cgroup: reimplement basic IO stats using cgroup
    rstat").
    Unfortunately those fields in the blkg_iostat_set's are not properly
    re-initialized when they are cleared in v1's blkcg_reset_stats(). This
    can lead to a kernel panic due to NULL pointer access of the blkg
    pointer. The missing initialization of sync is less problematic and can
    be a problem in a debug kernel due to missing lockdep initialization.
    Fix these problems by re-initializing them after memory clearing.
    Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
    Fixes: f73316482977 ("blk-cgroup: reimplement basic IO stats using cgroup rstat")
    Signed-off-by: Waiman Long <longman@redhat.com>
    Reviewed-by: Ming Lei <ming.lei@redhat.com>
    Acked-by: Tejun Heo <tj@kernel.org>
    Link: https://lore.kernel.org/r/20230606180724.2455066-1-longman@redhat.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
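A sketch of the re-initialization pattern the entry describes; the loop shape and exact placement are assumptions, but the two restored fields (blkg and sync) follow the description above:

    for_each_possible_cpu(cpu) {
            struct blkg_iostat_set *bis = per_cpu_ptr(blkg->iostat_cpu, cpu);

            memset(bis, 0, sizeof(*bis));

            /* Clearing wiped more than the counters: restore the state. */
            u64_stats_init(&bis->sync);
            bis->blkg = blkg;
    }
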
2023-06-07  blk-ioc: fix recursive spin_lock/unlock_irq() in ioc_clear_queue()  (Yu Kuai)
    Recursive spin_lock/unlock_irq() is not safe, because spin_unlock_irq()
    will enable irq unconditionally:
      spin_lock_irq queue_lock        -> disable irq
      spin_lock_irq ioc->lock
      spin_unlock_irq ioc->lock       -> enable irq
      /*
       * AA deadlock will be triggered if current context is preempted by irq,
       * and irq tries to hold queue_lock again.
       */
      spin_unlock_irq queue_lock
    Fix this problem by using spin_lock/unlock() directly for 'ioc->lock'.
    Fixes: 5a0ac57c48aa ("blk-ioc: protect ioc_destroy_icq() by 'queue_lock'")
    Signed-off-by: Yu Kuai <yukuai3@huawei.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20230606011438.3743440-1-yukuai1@huaweicloud.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
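The corrected nesting, sketched as a hedged illustration (the icq teardown itself is elided): the inner lock uses the plain spin_lock/spin_unlock variants so that unlocking it does not re-enable interrupts while queue_lock is still held.

    static void clear_queue_nested_locks(struct request_queue *q,
                                         struct io_context *ioc)
    {
            spin_lock_irq(&q->queue_lock);   /* irqs disabled here */
            spin_lock(&ioc->lock);           /* nested lock: plain variant */

            /* ... destroy the icqs ... */

            spin_unlock(&ioc->lock);         /* must not re-enable irqs early */
            spin_unlock_irq(&q->queue_lock); /* irqs re-enabled only here */
    }
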
2023-06-07  nbd: Add the maximum limit of allocated index in nbd_dev_add  (Zhong Jinghua)
    If the index allocated by idr_alloc is greater than MINORMASK >>
    part_shift, the device number will overflow, resulting in failure to
    create a block device.
    Fix it by limiting the size of the max allocation.
    Signed-off-by: Zhong Jinghua <zhongjinghua@huawei.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20230605122159.2134384-1-zhongjinghua@huaweicloud.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
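A hedged sketch of what such a bound looks like; idr_alloc()'s 'end' argument is exclusive, so passing (MINORMASK >> part_shift) + 1 keeps every handed-out index within the minor number space (the surrounding error handling is illustrative):

    /* Never hand out an index whose minor range would overflow. */
    err = idr_alloc(&nbd_index_idr, nbd, 0,
                    (MINORMASK >> part_shift) + 1, GFP_KERNEL);
    if (err < 0)
            goto out_free;
    index = err;
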
2023-06-07  regulator: qcom-rpmh: Fix regulators for PM8550  (Abel Vesa)
    The PM8550 uses only NLDOs 515 and the LDO 6 through 8 are low voltage
    type, so fix accordingly.
    Fixes: e6e3776d682d ("regulator: qcom-rpmh: Add support for PM8550 regulators")
    Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
    Link: https://lore.kernel.org/r/20230605115607.921308-1-abel.vesa@linaro.org
    Signed-off-by: Mark Brown <broonie@kernel.org>
2023-06-07  bpf: Add extra path pointer check to d_path helper  (Jiri Olsa)
    Anastasios reported a crash on a stable 5.15 kernel with the following
    BPF program attached to an lsm hook:
      SEC("lsm.s/bprm_creds_for_exec")
      int BPF_PROG(bprm_creds_for_exec, struct linux_binprm *bprm)
      {
              struct path *path = &bprm->executable->f_path;
              char p[128] = { 0 };

              bpf_d_path(path, p, 128);
              return 0;
      }
    But bprm->executable can be NULL, so the bpf_d_path call will crash:
      BUG: kernel NULL pointer dereference, address: 0000000000000018
      #PF: supervisor read access in kernel mode
      #PF: error_code(0x0000) - not-present page
      PGD 0 P4D 0
      Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC NOPTI
      ...
      RIP: 0010:d_path+0x22/0x280
      ...
      Call Trace:
       <TASK>
       bpf_d_path+0x21/0x60
       bpf_prog_db9cf176e84498d9_bprm_creds_for_exec+0x94/0x99
       bpf_trampoline_6442506293_0+0x55/0x1000
       bpf_lsm_bprm_creds_for_exec+0x5/0x10
       security_bprm_creds_for_exec+0x29/0x40
       bprm_execve+0x1c1/0x900
       do_execveat_common.isra.0+0x1af/0x260
       __x64_sys_execve+0x32/0x40
    It's a problem for all stable trees with the bpf_d_path helper, which
    was added in 5.9.
    This issue is fixed in current bpf code, where we identify and mark
    trusted pointers, so the above code would fail even to load.
    For the sake of the stable trees and to workaround potentially broken
    verifier in the future, adding the code that reads the path object from
    the passed pointer and verifies it's valid in kernel space.
    Fixes: 6e22ab9da793 ("bpf: Add d_path helper")
    Reported-by: Anastasios Papagiannis <tasos.papagiannnis@gmail.com>
    Suggested-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Jiri Olsa <jolsa@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Stanislav Fomichev <sdf@google.com>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20230606181714.532998-1-jolsa@kernel.org
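A sketch of that defensive read (the surrounding helper body is elided, and the error value shown here is illustrative): pull the path object through a fault-tolerant copy first, so a bad or NULL pointer fails gracefully instead of oopsing inside d_path().

    struct path copy;

    /* Validate the kernel pointer before handing it to d_path(). */
    if (copy_from_kernel_nofault(&copy, path, sizeof(*path)))
            return -EFAULT;     /* illustrative error handling */

    p = d_path(&copy, buf, sz);
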
2023-06-07  MAINTAINERS: add Andy Shevchenko as reviewer for the GPIO subsystem  (Bartosz Golaszewski)
    Andy has been a de-facto reviewer for all things GPIO for a long time
    so let's make it official.
    Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
    Acked-by: Andy Shevchenko <andy@kernel.org>
    Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
2023-06-07  gpio: sim: quietly ignore configured lines outside the bank  (Kent Gibson)
    The user-space policy of the gpio-sim is that configuration for lines
    with offsets outside the bounds of the corresponding bank is ignored,
    but gpio-sim is still using that configuration when constructing the
    sim.
    In the case of named lines this results in temporarily allocating space
    for names that are not used, and for hogs results in errors being
    logged when the gpio-sim attempts to register the out of range hog with
    gpiolib:
      gpiochip_machine_hog: unable to get GPIO desc: -22
    Add checks to filter out any line configuration outside the bounds of
    the bank when constructing the sim.
    Signed-off-by: Kent Gibson <warthog618@gmail.com>
    Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
2023-06-07  debugobjects: Recheck debug_objects_enabled before reporting  (Tetsuo Handa)
    syzbot is reporting a false positive ODEBUG message immediately after
    ODEBUG was disabled due to OOM.
      [ 1062.309646][T22911] ODEBUG: Out of memory. ODEBUG disabled
      [ 1062.886755][ T5171] ------------[ cut here ]------------
      [ 1062.892770][ T5171] ODEBUG: assert_init not available (active state 0) object: ffffc900056afb20 object type: timer_list hint: process_timeout+0x0/0x40
      CPU 0 [ T5171]                          CPU 1 [T22911]
      --------------                          --------------
      debug_object_assert_init() {
        if (!debug_objects_enabled)
          return;
        db = get_bucket(addr);
                                              lookup_object_or_alloc() {
                                                debug_objects_enabled = 0;
                                                return NULL;
                                              }
                                              debug_objects_oom() {
                                                pr_warn("Out of memory. ODEBUG disabled\n");
                                                // all buckets get emptied here, and
                                              }
        lookup_object_or_alloc(addr, db, descr, false, true) {
          // this bucket is already empty.
          return ERR_PTR(-ENOENT);
        }
        // Emits false positive warning.
        debug_print_object(&o, "assert_init");
      }
    Recheck debug_object_enabled in debug_print_object() to avoid that.
    Reported-by: syzbot <syzbot+7937ba6a50bdd00fffdf@syzkaller.appspotmail.com>
    Suggested-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lore.kernel.org/r/492fe2ae-5141-d548-ebd5-62f5fe2e57f7@I-love.SAKURA.ne.jp
    Closes: https://syzkaller.appspot.com/bug?extid=7937ba6a50bdd00fffdf
2023-06-07  net: sched: fix possible refcount leak in tc_chain_tmplt_add()  (Hangyu Hua)
    try_module_get will be called in tcf_proto_lookup_ops. So module_put
    needs to be called to drop the refcount if ops don't implement the
    required function.
    Fixes: 9f407f1768d3 ("net: sched: introduce chain templates")
    Signed-off-by: Hangyu Hua <hbh25y@gmail.com>
    Reviewed-by: Larysa Zaremba <larysa.zaremba@intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
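A hedged sketch of the error-path shape described above; the exact set of callbacks checked is an assumption, the point being the module_put() before bailing out:

    ops = tcf_proto_lookup_ops(name, true, extack);
    if (IS_ERR(ops))
            return PTR_ERR(ops);
    if (!ops->tmplt_create || !ops->tmplt_destroy) {
            /* Drop the module reference taken by the lookup above. */
            module_put(ops->owner);
            return -EOPNOTSUPP;
    }
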
2023-06-07  net: sched: act_police: fix sparse errors in tcf_police_dump()  (Eric Dumazet)
    Fixes following sparse errors:
      net/sched/act_police.c:360:28: warning: dereference of noderef expression
      net/sched/act_police.c:362:45: warning: dereference of noderef expression
      net/sched/act_police.c:362:45: warning: dereference of noderef expression
      net/sched/act_police.c:368:28: warning: dereference of noderef expression
      net/sched/act_police.c:370:45: warning: dereference of noderef expression
      net/sched/act_police.c:370:45: warning: dereference of noderef expression
      net/sched/act_police.c:376:45: warning: dereference of noderef expression
      net/sched/act_police.c:376:45: warning: dereference of noderef expression
    Fixes: d1967e495a8d ("net_sched: act_police: add 2 new attributes to support police 64bit rate and peakrate")
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Reviewed-by: Simon Horman <simon.horman@corigine.com>
    Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-07  net: openvswitch: fix upcall counter access before allocation  (Eelco Chaudron)
    Currently, the per cpu upcall counters are allocated after the vport is
    created and inserted into the system. This could lead to the datapath
    accessing the counters before they are allocated resulting in a kernel
    Oops. Here is an example:
      PID: 59693 TASK: ffff0005f4f51500 CPU: 0 COMMAND: "ovs-vswitchd"
       #0 [ffff80000a39b5b0] __switch_to at ffffb70f0629f2f4
       #1 [ffff80000a39b5d0] __schedule at ffffb70f0629f5cc
       #2 [ffff80000a39b650] preempt_schedule_common at ffffb70f0629fa60
       #3 [ffff80000a39b670] dynamic_might_resched at ffffb70f0629fb58
       #4 [ffff80000a39b680] mutex_lock_killable at ffffb70f062a1388
       #5 [ffff80000a39b6a0] pcpu_alloc at ffffb70f0594460c
       #6 [ffff80000a39b750] __alloc_percpu_gfp at ffffb70f05944e68
       #7 [ffff80000a39b760] ovs_vport_cmd_new at ffffb70ee6961b90 [openvswitch]
       ...
      PID: 58682 TASK: ffff0005b2f0bf00 CPU: 0 COMMAND: "kworker/0:3"
       #0 [ffff80000a5d2f40] machine_kexec at ffffb70f056a0758
       #1 [ffff80000a5d2f70] __crash_kexec at ffffb70f057e2994
       #2 [ffff80000a5d3100] crash_kexec at ffffb70f057e2ad8
       #3 [ffff80000a5d3120] die at ffffb70f0628234c
       #4 [ffff80000a5d31e0] die_kernel_fault at ffffb70f062828a8
       #5 [ffff80000a5d3210] __do_kernel_fault at ffffb70f056a31f4
       #6 [ffff80000a5d3240] do_bad_area at ffffb70f056a32a4
       #7 [ffff80000a5d3260] do_translation_fault at ffffb70f062a9710
       #8 [ffff80000a5d3270] do_mem_abort at ffffb70f056a2f74
       #9 [ffff80000a5d32a0] el1_abort at ffffb70f06297dac
      #10 [ffff80000a5d32d0] el1h_64_sync_handler at ffffb70f06299b24
      #11 [ffff80000a5d3410] el1h_64_sync at ffffb70f056812dc
      #12 [ffff80000a5d3430] ovs_dp_upcall at ffffb70ee6963c84 [openvswitch]
      #13 [ffff80000a5d3470] ovs_dp_process_packet at ffffb70ee6963fdc [openvswitch]
      #14 [ffff80000a5d34f0] ovs_vport_receive at ffffb70ee6972c78 [openvswitch]
      #15 [ffff80000a5d36f0] netdev_port_receive at ffffb70ee6973948 [openvswitch]
      #16 [ffff80000a5d3720] netdev_frame_hook at ffffb70ee6973a28 [openvswitch]
      #17 [ffff80000a5d3730] __netif_receive_skb_core.constprop.0 at ffffb70f06079f90
    We moved the per cpu upcall counter allocation to the existing vport
    alloc and free functions to solve this.
    Fixes: 95637d91fefd ("net: openvswitch: release vport resources on failure")
    Fixes: 1933ea365aa7 ("net: openvswitch: Add support to count upcall packets")
    Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
    Reviewed-by: Simon Horman <simon.horman@corigine.com>
    Acked-by: Aaron Conole <aconole@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
2023-06-07  net: sched: move rtm_tca_policy declaration to include file  (Eric Dumazet)
    rtm_tca_policy is used from net/sched/sch_api.c and net/sched/cls_api.c,
    thus should be declared in an include file.
    This fixes the following sparse warning:
      net/sched/sch_api.c:1434:25: warning: symbol 'rtm_tca_policy' was not declared. Should it be static?
    Fixes: e331473fee3d ("net/sched: cls_api: add missing validation of netlink attributes")
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
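A sketch of the declaration move; which header was actually chosen is an assumption here, the point is a single extern shared by both translation units:

    /* In a header included by both sch_api.c and cls_api.c
     * (e.g. include/net/pkt_sched.h -- an assumption): */
    extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
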
2023-06-07  Merge tag 'amdtee-fix-for-v6.5' of https://git.linaro.org/people/jens.wiklander/linux-tee into arm/fixes  (Arnd Bergmann)
    AMDTEE add return origin to load TA command
    * tag 'amdtee-fix-for-v6.5' of https://git.linaro.org/people/jens.wiklander/linux-tee:
      tee: amdtee: Add return_origin to 'struct tee_cmd_load_ta'
    Link: https://lore.kernel.org/r/20230606075843.GA2792442@rayden
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>