path: root/arch/arm64
Age  Commit message  Author
2021-04-15  Merge branch 'for-next/mte-async-kernel-mode' into for-next/core  (Catalin Marinas)
* for-next/mte-async-kernel-mode:
  : Add MTE asynchronous kernel mode support
  kasan, arm64: test support for HW_TAGS async mode
  arm64: mte: Report async tag faults before suspend
  arm64: mte: Enable async tag check fault
  arm64: mte: Conditionally compile mte_enable_kernel_*()
  arm64: mte: Enable TCO in functions that can read beyond buffer limits
  kasan: Add report for async mode
  arm64: mte: Drop arch_enable_tagging()
  kasan: Add KASAN mode kernel parameter
  arm64: mte: Add asynchronous mode support
2021-04-15  Merge branches 'for-next/misc', 'for-next/kselftest', 'for-next/xntable', 'for-next/vdso', 'for-next/fiq', 'for-next/epan', 'for-next/kasan-vmalloc', 'for-next/fgt-boot-init', 'for-next/vhe-only' and 'for-next/neon-softirqs-disabled', remote-tracking branch 'arm64/for-next/perf' into for-next/core  (Catalin Marinas)
* for-next/misc:
  : Miscellaneous patches
  arm64/sve: Add compile time checks for SVE hooks in generic functions
  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.
  arm64/sve: Remove redundant system_supports_sve() tests
  arm64: mte: Remove unused mte_assign_mem_tag_range()
  arm64: Add __init section marker to some functions
  arm64/sve: Rework SVE access trap to convert state in registers
  docs: arm64: Fix a grammar error
  arm64: smp: Add missing prototype for some smp.c functions
  arm64: setup: name `tcr` register
  arm64: setup: name `mair` register
  arm64: stacktrace: Move start_backtrace() out of the header
  arm64: barrier: Remove spec_bar() macro
  arm64: entry: remove test_irqs_unmasked macro
  ARM64: enable GENERIC_FIND_FIRST_BIT
  arm64: defconfig: Use DEBUG_INFO_REDUCED

* for-next/kselftest:
  : Various kselftests for arm64
  kselftest: arm64: Add BTI tests
  kselftest/arm64: mte: Report filename on failing temp file creation
  kselftest/arm64: mte: Fix clang warning
  kselftest/arm64: mte: Makefile: Fix clang compilation
  kselftest/arm64: mte: Output warning about failing compiler
  kselftest/arm64: mte: Use cross-compiler if specified
  kselftest/arm64: mte: Fix MTE feature detection
  kselftest/arm64: mte: common: Fix write() warnings
  kselftest/arm64: mte: user_mem: Fix write() warning
  kselftest/arm64: mte: ksm_options: Fix fscanf warning
  kselftest/arm64: mte: Fix pthread linking
  kselftest/arm64: mte: Fix compilation with native compiler

* for-next/xntable:
  : Add hierarchical XN permissions for all page tables
  arm64: mm: use XN table mapping attributes for user/kernel mappings
  arm64: mm: use XN table mapping attributes for the linear region
  arm64: mm: add missing P4D definitions and use them consistently

* for-next/vdso:
  : Minor improvements to the compat vdso and sigpage
  arm64: compat: Poison the compat sigpage
  arm64: vdso: Avoid ISB after reading from cntvct_el0
  arm64: compat: Allow signal page to be remapped
  arm64: vdso: Remove redundant calls to flush_dcache_page()
  arm64: vdso: Use GFP_KERNEL for allocating compat vdso and signal pages

* for-next/fiq:
  : Support arm64 FIQ controller registration
  arm64: irq: allow FIQs to be handled
  arm64: Always keep DAIF.[IF] in sync
  arm64: entry: factor irq triage logic into macros
  arm64: irq: rework root IRQ handler registration
  arm64: don't use GENERIC_IRQ_MULTI_HANDLER
  genirq: Allow architectures to override set_handle_irq() fallback

* for-next/epan:
  : Support for Enhanced PAN (execute-only permissions)
  arm64: Support execute-only permissions with Enhanced PAN

* for-next/kasan-vmalloc:
  : Support CONFIG_KASAN_VMALLOC on arm64
  arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled
  arm64: kaslr: support randomized module area with KASAN_VMALLOC
  arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  arm64: kasan: abstract _text and _end to KERNEL_START/END
  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC

* for-next/fgt-boot-init:
  : Booting clarifications and fine grained traps setup
  arm64: Require that system registers at all visible ELs be initialized
  arm64: Disable fine grained traps on boot
  arm64: Document requirements for fine grained traps at boot

* for-next/vhe-only:
  : Dealing with VHE-only CPUs (a.k.a. M1)
  arm64: Get rid of CONFIG_ARM64_VHE
  arm64: Cope with CPUs stuck in VHE mode
  arm64: cpufeature: Allow early filtering of feature override

* arm64/for-next/perf:
  arm64: perf: Remove redundant initialization in perf_event.c
  perf/arm_pmu_platform: Clean up with dev_printk
  perf/arm_pmu_platform: Fix error handling
  perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors
  docs: perf: Address some html build warnings
  docs: perf: Add new description on HiSilicon uncore PMU v2
  drivers/perf: hisi: Add support for HiSilicon PA PMU driver
  drivers/perf: hisi: Add support for HiSilicon SLLC PMU driver
  drivers/perf: hisi: Update DDRC PMU for programmable counter
  drivers/perf: hisi: Add new functions for HHA PMU
  drivers/perf: hisi: Add new functions for L3C PMU
  drivers/perf: hisi: Add PMU version for uncore PMU drivers.
  drivers/perf: hisi: Refactor code for more uncore PMUs
  drivers/perf: hisi: Remove unnecessary check of counter index
  drivers/perf: Simplify the SMMUv3 PMU event attributes
  drivers/perf: convert sysfs sprintf family to sysfs_emit
  drivers/perf: convert sysfs scnprintf family to sysfs_emit_at() and sysfs_emit()
  drivers/perf: convert sysfs snprintf family to sysfs_emit

* for-next/neon-softirqs-disabled:
  : Run kernel mode SIMD with softirqs disabled
  arm64: fpsimd: run kernel mode NEON with softirqs disabled
  arm64: assembler: introduce wxN aliases for wN registers
  arm64: assembler: remove conditional NEON yield macros
2021-04-15  arm64/sve: Add compile time checks for SVE hooks in generic functions  (Mark Brown)
The FPSIMD code was relying on IS_ENABLED() checks in system_supports_sve() to cause the compiler to delete references to SVE functions in some places; add explicit IS_ENABLED() checks back. Fixes: ef9c5d09797d ("arm64/sve: Remove redundant system_supports_sve() tests") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210415121742.36628-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
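As a rough sketch of the pattern (the callee here is a hypothetical SVE-only hook, not the exact fpsimd.c hunk): IS_ENABLED() evaluates to a compile-time constant, so the compiler can delete the SVE reference entirely when CONFIG_ARM64_SVE=n:

  if (IS_ENABLED(CONFIG_ARM64_SVE) && test_thread_flag(TIF_SVE))
          sve_convert_state();    /* hypothetical SVE-only function */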
2021-04-13  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.  (zhouchuangao)
BUG_ON() expresses the same check more concisely and can be optimized at compile time. Signed-off-by: zhouchuangao <zhouchuangao@vivo.com> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/1617105472-6081-1-git-send-email-zhouchuangao@vivo.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
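Illustrative before/after (the condition is made up for the example):

  /* Before: */
  if (!kcb)
          BUG();

  /* After: the same check, reads as an assertion, and folds away
   * entirely when the condition is provably false at compile time. */
  BUG_ON(!kcb);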
2021-04-13  arm64/sve: Remove redundant system_supports_sve() tests  (Mark Brown)
Currently there are a number of places in the SVE code where we check both system_supports_sve() and TIF_SVE. This is a bit redundant given that we should never get into a situation where TIF_SVE is set without SVE support, and it is not clear that silently ignoring a mistakenly set TIF_SVE flag is the most sensible error handling approach. For now let's just drop the system_supports_sve() checks, since this will at least reduce overhead a little. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210412172320.3315-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
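The shape of the simplification, sketched (sve_user_disable() stands in for any SVE operation guarded this way):

  /* Before: */
  if (system_supports_sve() && test_thread_flag(TIF_SVE))
          sve_user_disable();

  /* After: TIF_SVE can only be set when SVE is supported */
  if (test_thread_flag(TIF_SVE))
          sve_user_disable();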
2021-04-12  arm64: fpsimd: run kernel mode NEON with softirqs disabled  (Ard Biesheuvel)
Kernel mode NEON can be used in task or softirq context, but only in a non-nesting manner, i.e., softirq context is only permitted if the interrupt was not taken at a point where the kernel was using the NEON in task context. This means all users of kernel mode NEON have to be aware of this limitation, and either need to provide scalar fallbacks that may be much slower (up to 20x for AES instructions) and potentially less safe, or use an asynchronous interface that defers processing to a later time when the NEON is guaranteed to be available. Given that grabbing and releasing the NEON is cheap, we can relax this restriction, by increasing the granularity of kernel mode NEON code, and always disabling softirq processing while the NEON is being used in task context. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
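A minimal sketch of the usage model (kernel_neon_begin()/kernel_neon_end() and may_use_simd() are the real entry points; the block helpers are hypothetical):

  #include <asm/neon.h>
  #include <asm/simd.h>

  static void do_blocks(u8 *dst, const u8 *src, int blocks)
  {
          if (!may_use_simd()) {
                  do_blocks_scalar(dst, src, blocks);     /* hypothetical fallback */
                  return;
          }
          kernel_neon_begin();    /* with this patch, also disables softirqs */
          do_blocks_neon(dst, src, blocks);               /* hypothetical NEON body */
          kernel_neon_end();
  }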
2021-04-12  arm64: assembler: introduce wxN aliases for wN registers  (Ard Biesheuvel)
The AArch64 asm syntax has this slightly tedious property that the names used in mnemonics to refer to registers depend on whether the opcode in question targets the entire 64-bits (xN), or only the least significant 8, 16 or 32 bits (wN). When writing parameterized code such as macros, this can be annoying, as macro arguments don't lend themselves to indexed lookups, and so generating a reference to wN in a macro that receives xN as an argument is problematic. For instance, an upcoming patch that modifies the implementation of the cond_yield macro to be able to refer to 32-bit registers would need to modify invocations such as cond_yield 3f, x8 to cond_yield 3f, 8 so that the second argument can be token pasted after x or w to emit the correct register reference. Unfortunately, this interferes with the self documenting nature of the first example, where the second argument is obviously a register, whereas in the second example, one would need to go and look at the code to find out what '8' means. So let's fix this by defining wxN aliases for all xN registers, which resolve to the 32-bit alias of each respective 64-bit register. This allows the macro implementation to paste the xN reference after a w to obtain the correct register name. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-12  arm64: assembler: remove conditional NEON yield macros  (Ard Biesheuvel)
The users of the conditional NEON yield macros have all been switched to the simplified cond_yield macro, and so the NEON specific ones can be removed. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210302090118.30666-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  kasan, arm64: test support for HW_TAGS async mode  (Andrey Konovalov)
This change adds KASAN-KUnit test support for the async HW_TAGS mode. In async mode, tag faults aren't generated synchronously when a bad access happens, but are instead explicitly checked for by the kernel. As each KASAN-KUnit test expects a fault to happen before the test is over, check for faults as a part of the test handler. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-10-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  arm64: mte: Report async tag faults before suspend  (Vincenzo Frascino)
When MTE async mode is enabled, TFSR_EL1 contains the accumulated asynchronous tag check faults for EL1 and EL0. During suspend/resume operations the firmware might perform actions that change the state of the register, resulting in a spurious tag check fault report. Report asynchronous tag faults before suspend and clear the TFSR_EL1 register after resume to prevent this from happening. Cc: Will Deacon <will@kernel.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-9-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  arm64: mte: Enable async tag check fault  (Vincenzo Frascino)
MTE provides a mode that asynchronously updates the TFSR_EL1 register when a tag check exception is detected. To take advantage of this mode the kernel has to verify the status of the register at:
  1. Context switching
  2. Return to user/EL0 (not required on entry from EL0, since the kernel did not run)
  3. Kernel entry from EL1
  4. Kernel exit to EL1
If the register is non-zero, a fault is reported. Add the required features for EL1 detection and reporting. Note: the ITFSB bit is set in the SCTLR_EL1 register, hence it guarantees that the indirect writes to TFSR_EL1 are synchronized at exception entry to EL1. On the context switch path the synchronization is guaranteed by the dsb() in __switch_to(). The dsb(nsh) in mte_check_tfsr_exit() is provisional, pending confirmation by the architects. Cc: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-8-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
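A hedged sketch of the check performed at those points (not the exact kernel code; SYS_TFSR_EL1 and the TF1 field are the real register/bit names):

  static void check_tfsr_el1(void)
  {
          u64 tfsr = read_sysreg_s(SYS_TFSR_EL1);

          if (unlikely(tfsr & SYS_TFSR_EL1_TF1)) {
                  write_sysreg_s(0, SYS_TFSR_EL1);        /* clear the accumulated fault */
                  kasan_report_async();   /* from "kasan: Add report for async mode" */
          }
  }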
2021-04-11  arm64: mte: Conditionally compile mte_enable_kernel_*()  (Vincenzo Frascino)
mte_enable_kernel_*() are not needed if KASAN_HW is disabled. Add #ifdef guards around the functions to compile them conditionally. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-7-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  arm64: mte: Enable TCO in functions that can read beyond buffer limits  (Vincenzo Frascino)
load_unaligned_zeropad() and the __get/put_kernel_nofault() functions can read past buffer limits, possibly into an MTE granule with a different tag. When MTE async mode is enabled and such a load crosses into a granule with a different tag, the PE sets the TFSR_EL1.TF1 bit as if an asynchronous tag fault had happened. Enable Tag Check Override (TCO) in these functions before the load and disable it afterwards to prevent this from happening. Note: the same condition can be hit in MTE sync mode, but there we deal with it through the exception handling. In the current implementation the mte_async_mode flag is set only at boot time, but in future kasan might acquire runtime features that change the mode dynamically; hence we disable TCO when sync mode is selected, to be future-proof. Cc: Will Deacon <will@kernel.org> Reported-by: Branislav Rankov <Branislav.Rankov@arm.com> Tested-by: Branislav Rankov <Branislav.Rankov@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-6-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
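A minimal sketch of the idea, assuming PSTATE.TCO is toggled directly (the actual patch wraps this in small helpers):

  static inline unsigned long read_word_with_tco(const unsigned long *addr)
  {
          unsigned long val;

          asm volatile(SET_PSTATE_TCO(1));        /* tag check faults ignored */
          val = *addr;                            /* may read past a granule boundary */
          asm volatile(SET_PSTATE_TCO(0));        /* tag checking re-enabled */

          return val;
  }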
2021-04-11  arm64: mte: Drop arch_enable_tagging()  (Vincenzo Frascino)
arch_enable_tagging() was left in memory.h after the introduction of async mode so as not to break the bisectability of the KASAN KUnit tests. Remove the function now that KASAN has been fully converted. Cc: Will Deacon <will@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-4-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-11  arm64: mte: Add asynchronous mode support  (Vincenzo Frascino)
MTE provides an asynchronous mode for detecting tag exceptions. In particular, instead of triggering a fault, the arm64 core updates a register which is checked by the kernel after the asynchronous tag check fault has occurred. Add support for MTE asynchronous mode. The exception handling mechanism will be added in a future patch. Note: KASAN HW activates async mode via the kasan.mode kernel parameter. The default mode is synchronous. The code that verifies the status of TFSR_EL1 will be added in a future patch. Cc: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Acked-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210315132019.33202-2-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
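A hedged sketch of switching EL1 tag check faults to asynchronous accumulation (the SCTLR_ELx_TCF_* field names are real; the function body is illustrative):

  void mte_enable_kernel_async(void)
  {
          sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_ASYNC);
          isb();
  }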
2021-04-08  arm64: Get rid of CONFIG_ARM64_VHE  (Marc Zyngier)
CONFIG_ARM64_VHE was introduced with ARMv8.1 (some 7 years ago), and has been enabled by default for almost all that time. Given that newer systems that are VHE capable are finally becoming available, and that some systems are even incapable of not running VHE, drop the configuration altogether. Anyone willing to stick to non-VHE on VHE hardware for obscure reasons should use the 'kvm-arm.mode=nvhe' command-line option. Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-4-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: Cope with CPUs stuck in VHE mode  (Marc Zyngier)
It seems that the CPUs part of the SoC known as Apple M1 have the terrible habit of being stuck with HCR_EL2.E2H==1, in violation of the architecture. Try and work around this deplorable state of affairs by detecting the stuck bit early and short-circuit the nVHE dance. Additional filtering code ensures that attempts at switching to nVHE from the command-line are also ignored. It is still unknown whether there are many more such nuggets to be found... Reported-by: Hector Martin <marcan@marcan.st> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-3-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: cpufeature: Allow early filtering of feature override  (Marc Zyngier)
Some CPUs are broken enough that some overrides need to be rejected at the earliest opportunity. In some cases, that's right at cpu feature override time. Provide the necessary infrastructure to filter out overrides, and to report such filtered out overrides to the core cpufeature code. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-2-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
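Sketched shape of such a filter (all names here are illustrative, not the exact interface):

  /* Return false to reject the override before it is applied. */
  static bool __init vh_filter(u64 val)
  {
          /* e.g. refuse attempts to disable VHE on CPUs stuck in VHE mode */
          return !cpu_stuck_in_vhe();     /* hypothetical predicate */
  }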
2021-04-08  arm64: Disable fine grained traps on boot  (Mark Brown)
The arm64 FEAT_FGT extension introduces a set of traps to EL2 for accesses to small sets of registers and instructions from EL1 and EL0. Linux currently makes no use of this feature; ensure that it is not active at boot by disabling the traps during EL2 setup. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210401180942.35815-3-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: mte: Remove unused mte_assign_mem_tag_range()  (Vincenzo Frascino)
mte_assign_mem_tag_range() was added in commit 85f49cae4dfc ("arm64: mte: add in-kernel MTE helpers") in 5.11 but moved out of mte.S by commit 2cb34276427a ("arm64: kasan: simplify and inline MTE functions") in 5.12 and renamed to mte_set_mem_tag_range(). 2cb34276427a did not delete the old function prototypes in mte.h. Remove the unused prototype from mte.h. Cc: Will Deacon <will@kernel.org> Reported-by: Derrick McKee <derrick.mckee@gmail.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210407133817.23053-1-vincenzo.frascino@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64: Add __init section marker to some functions  (Jisheng Zhang)
They are not needed after booting, so mark them as __init to move them to the .init section. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Reviewed-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/20210330135449.4dcffd7f@xhacker.debian Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-08  arm64/sve: Rework SVE access trap to convert state in registers  (Mark Brown)
When we enable SVE usage in userspace after taking an SVE access trap, we need to ensure that the portions of the register state that are not shared with the FPSIMD registers are zeroed. Currently we do this by forcing the FPSIMD registers to be saved to the task struct and converting them there. This is wasteful in the common case, where the task state is loaded into the registers and we will immediately return to userspace, since we can initialise the SVE state directly in the registers instead of accessing multiple copies of the register state in memory. In that common case, do the conversion in the registers and update the task metadata so that we can return to userspace without spilling the register state to memory, unless there is some other reason to do so. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210312190313.24598-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-04-01  arm64: perf: Remove redundant initialization in perf_event.c  (Qi Liu)
The initialization of 'value' in armv8pmu_read_hw_counter() and armv8pmu_read_counter() is redundant, as the variable is immediately overwritten. Remove it. Signed-off-by: Qi Liu <liuqi115@huawei.com> Link: https://lore.kernel.org/r/1617275801-1980-1-git-send-email-liuqi115@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
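The shape of the cleanup (illustrative; the exact statements differ):

  /* Before: initialised, then unconditionally overwritten */
  u64 value = 0;
  value = armv8pmu_read_evcntr(idx);

  /* After: */
  u64 value = armv8pmu_read_evcntr(idx);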
2021-03-29  arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled  (Lecopzer Chen)
Before this patch, anyone who wanted to use VMAP_STACK with KASAN_GENERIC enabled had to explicitly select KASAN_VMALLOC. From Will's suggestion [1]: > I would _really_ like to move to VMAP stack unconditionally, and > that would effectively force KASAN_VMALLOC to be set if KASAN is in use Because VMAP_STACK now depends on either HW_TAGS or KASAN_VMALLOC if KASAN is enabled, in order to make VMAP_STACK selectable unconditionally we bind KASAN_GENERIC and KASAN_VMALLOC together. Note that SW_TAGS currently supports neither VMAP_STACK nor KASAN_VMALLOC, so this is the first step towards making VMAP_STACK selected unconditionally. Binding KASAN_GENERIC and KASAN_VMALLOC together is expected to cost more memory at runtime; the alternative is to use SW_TAGS KASAN instead. [1]: https://lore.kernel.org/lkml/20210204150100.GE20815@willie-the-truck/ Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com> Link: https://lore.kernel.org/r/20210324040522.15548-6-lecopzer.chen@mediatek.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-29  arm64: kaslr: support randomized module area with KASAN_VMALLOC  (Lecopzer Chen)
Now that KASAN_VMALLOC works on arm64, we can randomize the module region into the vmalloc area. Test (VMALLOC area ffffffc010000000 - fffffffdf0000000):
  before the patch: module_alloc_base/end ffffffc008b80000 ffffffc010000000
  after the patch:  module_alloc_base/end ffffffdcf4bed000 ffffffc010000000
Loading modules with insmod also works fine. Suggested-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com> Link: https://lore.kernel.org/r/20210324040522.15548-5-lecopzer.chen@mediatek.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-29  arm64: Kconfig: support CONFIG_KASAN_VMALLOC  (Lecopzer Chen)
Now that the vmalloc area is no longer populated with zero shadow at kasan_init(), shadow memory for the vmalloc area can be backed on demand; make KASAN_VMALLOC selectable. Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com> Acked-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210324040522.15548-4-lecopzer.chen@mediatek.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-29  arm64: kasan: abstract _text and _end to KERNEL_START/END  (Lecopzer Chen)
arm64 provides the KERNEL_START and KERNEL_END macros; use these abstractions instead of _text and _end. Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com> Acked-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210324040522.15548-3-lecopzer.chen@mediatek.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-29  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC  (Lecopzer Chen)
Linux has supported KASAN for vmalloc since commit 3c5c3cfb9ef4da9 ("kasan: support backing vmalloc space with real shadow memory"). As is already done for MODULES_VADDR, do not populate early shadow for the region between VMALLOC_START and VMALLOC_END.
Before:
  MODULE_VADDR:  no mapping, no zero shadow at init
  VMALLOC_VADDR: backed with zero shadow at init
After:
  MODULE_VADDR:  no mapping, no zero shadow at init
  VMALLOC_VADDR: no mapping, no zero shadow at init
Thus the mapping will get allocated on demand by the core KASAN_VMALLOC code:

   -----------  vmalloc_shadow_start
  |           |
  |           |
  |           | <= non-mapping
  |           |
  |           |
  |-----------|
  |///////////| <= kimage shadow with page table mapping.
  |-----------|
  |           |
  |           | <= non-mapping
  |           |
   -----------  vmalloc_shadow_end
  |00000000000|
  |00000000000| <= zero shadow
  |00000000000|
   -----------  KASAN_SHADOW_END

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com> Acked-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210324040522.15548-2-lecopzer.chen@mediatek.com [catalin.marinas@arm.com: add a build check on VMALLOC_START != MODULES_END] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-29  arm64: smp: Add missing prototype for some smp.c functions  (Chen Lifu)
In commit eb631bb5bf5b ("arm64: Support arch_irq_work_raise() via self IPIs") a new function arch_irq_work_raise() was added without a prototype, and in commit d914d4d49745 ("arm64: Implement panic_smp_self_stop()") panic_smp_self_stop() was likewise added without a prototype. We get the following warnings with W=1:
  arch/arm64/kernel/smp.c:842:6: warning: no previous prototype for 'arch_irq_work_raise' [-Wmissing-prototypes]
  arch/arm64/kernel/smp.c:862:6: warning: no previous prototype for 'panic_smp_self_stop' [-Wmissing-prototypes]
Fix the warnings by:
  1. Adding the prototype for arch_irq_work_raise() in irq_work.h
  2. Adding the prototype for panic_smp_self_stop() in smp.h
Signed-off-by: Chen Lifu <chenlifu@huawei.com> Link: https://lore.kernel.org/r/20210329034343.183974-1-chenlifu@huawei.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
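The fix in sketch form (header locations as described above):

  /* irq_work.h */
  void arch_irq_work_raise(void);

  /* smp.h */
  void panic_smp_self_stop(void);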
2021-03-28  arm64: setup: name `tcr` register  (Mark Rutland)
In __cpu_setup we conditionally manipulate the TCR_EL1 value in x10 after previously using x10 as a scratch register for unrelated temporary variables. To make this a bit clearer, let's move the TCR_EL1 value into a named register `tcr`. To simplify the register allocation, this is placed in the highest available caller-saved scratch register (aliased as `tcr`). Following the example of `mair`, we initialise the register with the default value prior to any feature discovery, and write it to TCR_EL1 after all feature discovery is complete, which allows us to simplify the feature discovery code. The existing `mte_tcr` register is no longer needed, and is replaced by the use of x10 as a temporary, matching the rest of the MTE feature discovery assembly in __cpu_setup. As x20 is no longer used, the function is now AAPCS compliant, as we've generally aimed for in our assembly functions. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210326180137.43119-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-28  arm64: setup: name `mair` register  (Mark Rutland)
In __cpu_setup we conditionally manipulate the MAIR_EL1 value in x5 before later reusing x5 as a scratch register for unrelated temporary variables. To make this a bit clearer, let's move the MAIR_EL1 value into a named register `mair`. To simplify the register allocation, this is placed in the highest available caller-saved scratch register, x17. As it is no longer clobbered by other usage, we can write the value to MAIR_EL1 at the end of the function as we do for TCR_EL1, rather than part-way through feature discovery. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210326180137.43119-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-28  arm64: stacktrace: Move start_backtrace() out of the header  (Mark Brown)
Currently start_backtrace() is a static inline function in the header. Since it really shouldn't be performance critical enough to need inlining, move it into a C file; this will save anyone else scratching their head about why it is defined in the header. As far as I can see it's only there because it was factored out of the various callers. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210319174022.33051-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-26  arm64: Support execute-only permissions with Enhanced PAN  (Vladimir Murzin)
Enhanced Privileged Access Never (EPAN) allows Privileged Access Never to be used with execute-only mappings. The absence of such support was a reason for commit 24cecc377463 ("arm64: Revert support for execute-only user mappings"). With EPAN, execute-only user mappings can be revisited and re-enabled. Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210312173811.58284-2-vladimir.murzin@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
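With EPAN, an execute-only user mapping like the following (illustrative userspace snippet) can be honoured without implying read permission:

  #include <sys/mman.h>

  void *code = mmap(NULL, 4096, PROT_EXEC,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);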
2021-03-25  arm64: barrier: Remove spec_bar() macro  (Linus Walleij)
The spec_bar() macro was introduced in commit bd4fb6d270bc ("arm64: Add support for SB barrier and patch in over DSB; ISB sequences") as a way for C to insert a speculation barrier and was then used in one single place: set_fs(). Later on commit 3d2403fd10a1 ("arm64: uaccess: remove set_fs()") deleted set_fs() altogether and as noted in the commit on the new path the regular sb() assembly macro will be used. Delete the remnant. Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210325141304.1607595-1-linus.walleij@linaro.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-25  arm64: entry: remove test_irqs_unmasked macro  (Mark Rutland)
We haven't needed the test_irqs_unmasked macro since commit: 105fc3352077bba5 ("arm64: entry: move el1 irq/nmi logic to C") ... and as we convert more of the entry logic to C it is decreasingly likely we'll need it in future, so let's remove the unused macro. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210323181201.18889-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-24  arm64: irq: allow FIQs to be handled  (Mark Rutland)
On contemporary platforms we don't use FIQ, and treat any stray FIQ as a fatal event. However, some platforms have an interrupt controller wired to FIQ, and need to handle FIQ as part of regular operation. So that we can support both cases dynamically, this patch updates the FIQ exception handling code to operate the same way as the IRQ handling code, with its own handle_arch_fiq handler. Where a root FIQ handler is not registered, an unexpected FIQ exception will trigger the default FIQ handler, which will panic() as today. Where a root FIQ handler is registered, handling of the FIQ is deferred to that handler. As el0_fiq_invalid_compat is supplanted by el0_fiq, the former is removed. For !CONFIG_COMPAT builds we never expect to take an exception from AArch32 EL0, so we keep the common el0_fiq_invalid handler. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hector Martin <marcan@marcan.st> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210315115629.57191-7-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
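A hedged sketch of an irqchip driver registering as the root FIQ handler (driver-side names are hypothetical; set_handle_fiq() is the interface this patch introduces):

  static void my_irqchip_handle_fiq(struct pt_regs *regs)
  {
          /* triage and dispatch the FIQ source */
  }

  /* during irqchip initialisation: */
  set_handle_fiq(my_irqchip_handle_fiq);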
2021-03-24  arm64: Always keep DAIF.[IF] in sync  (Hector Martin)
Apple SoCs (A11 and newer) have some interrupt sources hardwired to the FIQ line. We implement support for this by simply treating IRQs and FIQs the same way in the interrupt vectors. To support these systems, the FIQ mask bit needs to be kept in sync with the IRQ mask bit, so both kinds of exceptions are masked together. No other platforms should be delivering FIQ exceptions right now, and we already unmask FIQ in normal process context, so this should not have an effect on other systems - if spurious FIQs were arriving, they would already panic the kernel. Signed-off-by: Hector Martin <marcan@marcan.st> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hector Martin <marcan@marcan.st> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210315115629.57191-6-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-24  arm64: entry: factor irq triage logic into macros  (Marc Zyngier)
In subsequent patches we'll allow an FIQ handler to be registered, and FIQ exceptions will need to be triaged very similarly to IRQ exceptions. So that we can reuse the existing logic, this patch factors the IRQ triage logic out into macros that can be reused for FIQ. The macros are named to follow the elX_foo_handler scheme used by the C exception handlers. For consistency with other top-level exception handlers, the kernel_entry/kernel_exit logic is not moved into the macros. As FIQ will use a different C handler, this handler name is provided as an argument to the macros. There should be no functional change as a result of this patch. Signed-off-by: Marc Zyngier <maz@kernel.org> [Mark: rework macros, commit message, rebase before DAIF rework] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hector Martin <marcan@marcan.st> Cc: James Morse <james.morse@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210315115629.57191-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-24  arm64: irq: rework root IRQ handler registration  (Mark Rutland)
If we accidentally unmask IRQs before we've registered a root IRQ handler, handle_arch_irq will be NULL, and the IRQ exception handler will branch to a bogus address. To make such problems easier to debug, this patch initialises handle_arch_irq to a default handler which will panic(). When we add support for FIQ handlers, we can follow the same approach. Once a root FIQ handler is supported, it's possible to have a root IRQ handler without a root FIQ handler, and in theory the inverse is also possible. To permit this, and to keep the IRQ/FIQ registration logic similar, this patch removes the panic in the absence of a root IRQ controller; instead, set_handle_irq() logs when a handler is registered, which is sufficient for debug purposes (see the sketch below). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hector Martin <marcan@marcan.st> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210315115629.57191-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
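A minimal sketch of the resulting registration logic (close to, but not verbatim, the patch):

  static void default_handle_irq(struct pt_regs *regs)
  {
          panic("IRQ taken without a registered IRQ controller\n");
  }

  void (*handle_arch_irq)(struct pt_regs *) __ro_after_init = default_handle_irq;

  void __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
  {
          if (WARN_ON(handle_arch_irq != default_handle_irq))
                  return;         /* a root handler was already registered */

          handle_arch_irq = handle_irq;
          pr_info("Root IRQ handler: %ps\n", handle_irq);
  }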
2021-03-24  arm64: don't use GENERIC_IRQ_MULTI_HANDLER  (Marc Zyngier)
In subsequent patches we want to allow irqchip drivers to register as FIQ handlers, with a set_handle_fiq() function. To keep the IRQ/FIQ paths similar, we want arm64 to provide both set_handle_irq() and set_handle_fiq(), rather than using GENERIC_IRQ_MULTI_HANDLER for the former. This patch adds an arm64-specific implementation of set_handle_irq(). There should be no functional change as a result of this patch. Signed-off-by: Marc Zyngier <maz@kernel.org> [Mark: use a single handler pointer] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hector Martin <marcan@marcan.st> Cc: James Morse <james.morse@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210315115629.57191-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-24  arm64: compat: Poison the compat sigpage  (Will Deacon)
Commit 9c698bff66ab ("ARM: ensure the signal page contains defined contents") poisoned the unused portions of the signal page for 32-bit Arm. Implement the same poisoning for the compat signal page on arm64 rather than using __GFP_ZERO. Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210318170738.7756-6-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-24  arm64: vdso: Avoid ISB after reading from cntvct_el0  (Will Deacon)
We can avoid the expensive ISB instruction after reading the counter in the vDSO gettime functions by creating a fake address hazard against a dummy stack read, just like we do inside the kernel. Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210318170738.7756-5-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
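The trick creates a data dependency from the counter value to a dummy stack load, giving the required ordering at much lower cost than an ISB; the sketch below mirrors the kernel's arch_counter_enforce_ordering():

  #define counter_enforce_ordering(val) do {                      \
          u64 tmp, _val = (val);                                  \
                                                                  \
          asm volatile(                                           \
          "       eor     %0, %1, %1\n"                           \
          "       add     %0, sp, %0\n"                           \
          "       ldr     xzr, [%0]"                              \
          : "=r" (tmp) : "r" (_val));                             \
  } while (0)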
2021-03-24  arm64: compat: Allow signal page to be remapped  (Will Deacon)
For compatibility with 32-bit Arm, allow the compat signal page to be remapped via mremap(). Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210318170738.7756-4-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-24  arm64: vdso: Remove redundant calls to flush_dcache_page()  (Will Deacon)
flush_dcache_page() ensures that the 'PG_dcache_clean' flag for its 'page' argument is clear so that cache maintenance will be performed if the page is mapped into userspace with execute permissions. Newly allocated pages have this flag clear, so there is no need to call flush_dcache_page() for the compat vdso or signal pages. Remove the redundant calls. Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210318170738.7756-3-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-24  arm64: vdso: Use GFP_KERNEL for allocating compat vdso and signal pages  (Will Deacon)
There's no need to allocate the compat vDSO and signal pages using GFP_ATOMIC allocations, so use GFP_KERNEL instead. Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210318170738.7756-2-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-19  arm64: mm: use XN table mapping attributes for user/kernel mappings  (Ard Biesheuvel)
As the kernel and user space page tables are strictly mutually exclusive when it comes to executable permissions, we can set the UXN table attribute on all table entries that are created while creating kernel mappings in the swapper page tables, and the PXN table attribute on all table entries that are created while creating user space mappings in user space page tables. While at it, get rid of a redundant comment. Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210310104942.174584-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
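A sketch of the idea at one level (PMD_TABLE_PXN/PMD_TABLE_UXN are the new table-level attributes; the surrounding logic is illustrative):

  pmdval_t prot = PMD_TYPE_TABLE;

  if (kernel_mapping)                     /* illustrative flag */
          prot |= PMD_TABLE_UXN;          /* never executable from EL0 */
  else
          prot |= PMD_TABLE_PXN;          /* never executable from EL1 */

  __pmd_populate(pmdp, pte_phys, prot);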
2021-03-19  arm64: mm: use XN table mapping attributes for the linear region  (Ard Biesheuvel)
The way the arm64 kernel virtual address space is constructed guarantees that swapper PGD entries are never shared between the linear region on the one hand, and the vmalloc region on the other, which is where all kernel text, module text and BPF text mappings reside. This means that mappings in the linear region (which never require executable permissions) never share any table entries at any level with mappings that do require executable permissions, and so we can set the table-level PXN attributes for all table entries that are created while setting up mappings in the linear region. Since swapper's PGD level page table is mapped r/o itself, this adds another layer of robustness to the way the kernel manages its own page tables. While at it, set the UXN attribute as well for all kernel mappings created at boot. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20210310104942.174584-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-19  arm64: mm: add missing P4D definitions and use them consistently  (Ard Biesheuvel)
Even though level 0, 1 and 2 descriptors share the same attribute encodings, let's be a bit more consistent about using the right one at the right level. So add new macros for level 0/P4D definitions, and clean up some inconsistencies involving these macros. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210310104942.174584-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-15  ARM64: enable GENERIC_FIND_FIRST_BIT  (Yury Norov)
ARM64 doesn't implement find_first_{zero}_bit in arch code and doesn't enable it in its config. This leads to using find_next_bit(), which is less efficient:

  0000000000000000 <find_first_bit>:
     0:	aa0003e4 	mov	x4, x0
     4:	aa0103e0 	mov	x0, x1
     8:	b4000181 	cbz	x1, 38 <find_first_bit+0x38>
     c:	f9400083 	ldr	x3, [x4]
    10:	d2800802 	mov	x2, #0x40                  	// #64
    14:	91002084 	add	x4, x4, #0x8
    18:	b40000c3 	cbz	x3, 30 <find_first_bit+0x30>
    1c:	14000008 	b	3c <find_first_bit+0x3c>
    20:	f8408483 	ldr	x3, [x4], #8
    24:	91010045 	add	x5, x2, #0x40
    28:	b50000c3 	cbnz	x3, 40 <find_first_bit+0x40>
    2c:	aa0503e2 	mov	x2, x5
    30:	eb02001f 	cmp	x0, x2
    34:	54ffff68 	b.hi	20 <find_first_bit+0x20>  // b.pmore
    38:	d65f03c0 	ret
    3c:	d2800002 	mov	x2, #0x0                   	// #0
    40:	dac00063 	rbit	x3, x3
    44:	dac01063 	clz	x3, x3
    48:	8b020062 	add	x2, x3, x2
    4c:	eb02001f 	cmp	x0, x2
    50:	9a829000 	csel	x0, x0, x2, ls  // ls = plast
    54:	d65f03c0 	ret
    ...
  0000000000000118 <_find_next_bit.constprop.1>:
   118:	eb02007f 	cmp	x3, x2
   11c:	540002e2 	b.cs	178 <_find_next_bit.constprop.1+0x60>  // b.hs, b.nlast
   120:	d346fc66 	lsr	x6, x3, #6
   124:	f8667805 	ldr	x5, [x0, x6, lsl #3]
   128:	b4000061 	cbz	x1, 134 <_find_next_bit.constprop.1+0x1c>
   12c:	f8667826 	ldr	x6, [x1, x6, lsl #3]
   130:	8a0600a5 	and	x5, x5, x6
   134:	ca0400a6 	eor	x6, x5, x4
   138:	92800005 	mov	x5, #0xffffffffffffffff    	// #-1
   13c:	9ac320a5 	lsl	x5, x5, x3
   140:	927ae463 	and	x3, x3, #0xffffffffffffffc0
   144:	ea0600a5 	ands	x5, x5, x6
   148:	54000120 	b.eq	16c <_find_next_bit.constprop.1+0x54>  // b.none
   14c:	1400000e 	b	184 <_find_next_bit.constprop.1+0x6c>
   150:	d346fc66 	lsr	x6, x3, #6
   154:	f8667805 	ldr	x5, [x0, x6, lsl #3]
   158:	b4000061 	cbz	x1, 164 <_find_next_bit.constprop.1+0x4c>
   15c:	f8667826 	ldr	x6, [x1, x6, lsl #3]
   160:	8a0600a5 	and	x5, x5, x6
   164:	eb05009f 	cmp	x4, x5
   168:	540000c1 	b.ne	180 <_find_next_bit.constprop.1+0x68>  // b.any
   16c:	91010063 	add	x3, x3, #0x40
   170:	eb03005f 	cmp	x2, x3
   174:	54fffee8 	b.hi	150 <_find_next_bit.constprop.1+0x38>  // b.pmore
   178:	aa0203e0 	mov	x0, x2
   17c:	d65f03c0 	ret
   180:	ca050085 	eor	x5, x4, x5
   184:	dac000a5 	rbit	x5, x5
   188:	dac010a5 	clz	x5, x5
   18c:	8b0300a3 	add	x3, x5, x3
   190:	eb03005f 	cmp	x2, x3
   194:	9a839042 	csel	x2, x2, x3, ls  // ls = plast
   198:	aa0203e0 	mov	x0, x2
   19c:	d65f03c0 	ret
   ...
  0000000000000238 <find_next_bit>:
   238:	a9bf7bfd 	stp	x29, x30, [sp, #-16]!
   23c:	aa0203e3 	mov	x3, x2
   240:	d2800004 	mov	x4, #0x0                   	// #0
   244:	aa0103e2 	mov	x2, x1
   248:	910003fd 	mov	x29, sp
   24c:	d2800001 	mov	x1, #0x0                   	// #0
   250:	97ffffb2 	bl	118 <_find_next_bit.constprop.1>
   254:	a8c17bfd 	ldp	x29, x30, [sp], #16
   258:	d65f03c0 	ret

Enabling find_{first,next}_bit() would also benefit for_each_{set,clear}_bit(). On A-53, find_first_bit() is almost twice as fast as find_next_bit(), according to lib/find_bit_benchmark (thanks to Alexey for testing):

GENERIC_FIND_FIRST_BIT=n:
  [7126084.948181] find_first_bit: 47389224 ns, 16357 iterations
  [7126085.032315] find_first_bit: 19048193 ns, 655 iterations

GENERIC_FIND_FIRST_BIT=y:
  [   84.158068] find_first_bit: 27193319 ns, 16406 iterations
  [   84.233005] find_first_bit: 11082437 ns, 656 iterations

GENERIC_FIND_FIRST_BIT=n also bloats the kernel, even though it disables generation of find_{first,next}_bit():

  yury:linux$ scripts/bloat-o-meter vmlinux vmlinux.ffb
  add/remove: 4/1 grow/shrink: 19/251 up/down: 564/-1692 (-1128)
  ...

Overall, GENERIC_FIND_FIRST_BIT=n is harmful both in terms of performance and code size, and it's better to have GENERIC_FIND_FIRST_BIT enabled.
Tested-by: Alexey Klimov <aklimov@redhat.com> Signed-off-by: Yury Norov <yury.norov@gmail.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210225135700.1381396-2-yury.norov@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-15  arm64: defconfig: Use DEBUG_INFO_REDUCED  (Mark Brown)
We've had DEBUG_INFO enabled for arm64 defconfigs since the initial commit. This is probably not used all that frequently, but it substantially inflates the size of the build tree and the amount of I/O needed during the build. This was causing issues with storage usage in some automated CI environments, which don't expect defconfigs to be quite this big, and it generally increases the resource consumption for both them and people doing local builds. The main use case for the debug info is decoding things with scripts/faddr2line, but that doesn't need the full DEBUG_INFO; DEBUG_INFO_REDUCED is enough for it, so enable that by default. Without this patch my build tree is 6.8G; with it the size drops to 2G (smaller than the 6.4G for allmodconfig!). Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Guillaume Tucker <guillaume.tucker@collabora.com> Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Kevin Hilman <khilman@baylibre.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210304174407.17537-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>