2021-02-01  arm64: hibernate: add __force attribute to gfp_t casting  (Pavel Tatashin)

Two new warnings are reported by sparse: "sparse warnings: (new ones prefixed by >>)"

  >> arch/arm64/kernel/hibernate.c:181:39: sparse: sparse: cast to restricted gfp_t
  >> arch/arm64/kernel/hibernate.c:202:44: sparse: sparse: cast from restricted gfp_t

gfp_t has the __bitwise type attribute, so __force must be added to the casts in order to avoid these warnings.

Fixes: 50f53fb72181 ("arm64: trans_pgd: make trans_pgd_map_page generic")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210201150306.54099-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
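As a hedged illustration (not the verbatim patch; the helper names are made up), the kind of cast sparse asks for looks like this:

	/* gfp_t is declared __bitwise, so sparse treats it as a restricted
	 * type: converting to or from a plain integer requires __force. */
	static unsigned long gfp_to_flags(gfp_t gfp)
	{
		return (__force unsigned long)gfp;	/* cast from restricted gfp_t */
	}

	static gfp_t flags_to_gfp(unsigned long flags)
	{
		return (__force gfp_t)flags;		/* cast to restricted gfp_t */
	}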
2021-01-28  perf/arm-cmn: Move IRQs when migrating context  (Robin Murphy)

If we migrate the PMU context to another CPU, we need to remember to retarget the IRQs as well.

Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/e080640aea4ed8dfa870b8549dfb31221803eb6b.1611839564.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
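A hedged sketch of the shape of such a fix, assuming the driver keeps one IRQ per DTC (the cmn->dtc/num_dtcs fields here are assumptions, not necessarily the driver's exact names):

	static int arm_cmn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
	{
		struct arm_cmn *cmn = hlist_entry_safe(node, struct arm_cmn, cpuhp_node);
		unsigned int i, target;

		if (cpu != cmn->cpu)
			return 0;

		target = cpumask_any_but(cpu_online_mask, cpu);
		if (target >= nr_cpu_ids)
			return 0;

		perf_pmu_migrate_context(&cmn->pmu, cpu, target);
		/* The fix: make the interrupts follow the migrated context */
		for (i = 0; i < cmn->num_dtcs; i++)
			irq_set_affinity_hint(cmn->dtc[i].irq, cpumask_of(target));
		cmn->cpu = target;
		return 0;
	}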
2021-01-28  perf/arm-cmn: Fix PMU instance naming  (Robin Murphy)

Although it's neat to avoid the suffix for the typical case of a single PMU, it means systems with multiple CMN instances end up with inconsistent naming. I think it also breaks perf tool's "uncore alias" logic if the common instance prefix is also the full name of one. Avoid any surprises by not trying to be clever and simply numbering every instance, even when it might technically prove redundant.

Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/649a2281233f193d59240b13ed91b57337c77b32.1611839564.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
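Sketched under assumptions (the ida and format string are illustrative, not necessarily the driver's), "always number every instance" looks like:

	static DEFINE_IDA(arm_cmn_ida);

	static int arm_cmn_register_pmu(struct arm_cmn *cmn)
	{
		const char *name;
		int id = ida_simple_get(&arm_cmn_ida, 0, 0, GFP_KERNEL);

		if (id < 0)
			return id;

		/* Always append the instance number, even for a single CMN */
		name = devm_kasprintf(cmn->dev, GFP_KERNEL, "arm_cmn_%d", id);
		if (!name)
			return -ENOMEM;

		return perf_pmu_register(&cmn->pmu, name, -1);
	}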
2021-01-28  KVM: arm64: Move __hyp_set_vectors out of .hyp.text  (Quentin Perret)

The .hyp.text section is supposed to be reserved for the nVHE EL2 code. However, there is currently one occurrence of EL1 executing code located in .hyp.text when calling __hyp_{re}set_vectors(), which happen to sit next to the EL2 stub vectors. While not a problem yet, such patterns will cause issues when removing the host kernel from the TCB, so a cleaner split would be preferable.

Fix this by delimiting the end of the .hyp.text section in hyp-stub.S.

Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210128173850.2478161-1-qperret@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-28  mm/nommu: Fix return type of filemap_map_pages()  (Geert Uytterhoeven)

If CONFIG_MMU is not set (e.g. m68k/m5272c3_defconfig):

  mm/nommu.c:1671:6: error: conflicting types for ‘filemap_map_pages’
   1671 | void filemap_map_pages(struct vm_fault *vmf,
        |      ^~~~~~~~~~~~~~~~~
  In file included from mm/nommu.c:20:
  ./include/linux/mm.h:2578:19: note: previous declaration of ‘filemap_map_pages’ was here
   2578 | extern vm_fault_t filemap_map_pages(struct vm_fault *vmf,
        |                   ^~~~~~~~~~~~~~~~~

The signature of filemap_map_pages() was changed, but the nommu implementation wasn't updated.

Reported-by: noreply@ellerman.id.au
Fixes: f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Link: https://lore.kernel.org/r/20210128100626.2257638-1-geert@linux-m68k.org
Signed-off-by: Will Deacon <will@kernel.org>
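The nommu stub just needs to match the new prototype; a minimal sketch of the corrected definition:

	/* mm/nommu.c: match the prototype in <linux/mm.h> */
	vm_fault_t filemap_map_pages(struct vm_fault *vmf,
			pgoff_t start_pgoff, pgoff_t end_pgoff)
	{
		BUG();	/* faultaround is never used without an MMU */
		return 0;
	}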
2021-01-27  arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp  (Pavel Tatashin)

x0 will contain the only argument to arm64_relocate_new_kernel; don't use it as a temporary register. Reassign the registers to free up x0, so we won't need to copy the argument and can use it at the beginning and at the end of the function.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-13-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  arm64: kexec: arm64_relocate_new_kernel clean-ups and optimizations  (Pavel Tatashin)

In preparation for bigger changes to arm64_relocate_new_kernel that will enable this function to do an MMU-backed memory copy, do a few clean-ups and optimizations. These include:

1. Call raw_dcache_line_size() only when relocation is actually going to happen; a kdump-type kexec does not need it.
2. copy_page(dest, src, tmps...) increments dest and src by PAGE_SIZE, so there is no need to store dest prior to calling copy_page and increment it afterwards. Also, src is not used after a copy, so no need to preserve it either.
3. For consistency, put a comment on the same line as an instruction when it describes the instruction itself.
4. Some comment corrections.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-12-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  arm64: kexec: call kexec_image_info only once  (Pavel Tatashin)

Currently, kexec_image_info() is called both at load time and right before the kernel is kexec'ed. There is no need to do both, so call it only once, when the segments are loaded and the physical location of the page with the copy of arm64_relocate_new_kernel is known.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-11-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  arm64: kexec: move relocation function setup  (Pavel Tatashin)

Currently, the kernel relocation function is configured in machine_kexec() at the time of kexec reboot, using control_code_page. This operation, however, is more logical to do during kexec load, so remove it from reboot time and move the setup of this function to the newly added machine_kexec_post_load(). Once the MMU is enabled, the kexec control page will contain not only the relocation code but also the vector table, so add a pointer to the actual function within this page: arch.kern_reloc. Currently it equals the beginning of the page; offsets will be added later, when the vector table is added.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-10-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
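A hedged sketch of what the post-load hook plausibly does (the names follow the commit text, but the cache-flush calls are assumptions based on the usual arm64 pattern):

	int machine_kexec_post_load(struct kimage *kimage)
	{
		void *reloc_code = page_to_virt(kimage->control_code_page);

		memcpy(reloc_code, arm64_relocate_new_kernel,
		       arm64_relocate_new_kernel_size);
		/* For now the entry point is simply the start of the page */
		kimage->arch.kern_reloc = __pa(reloc_code);

		/* Make the copied code visible to the CPU with the MMU off */
		__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
		flush_icache_range((uintptr_t)reloc_code,
				   (uintptr_t)reloc_code +
				   arm64_relocate_new_kernel_size);
		return 0;
	}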
2021-01-27  arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines  (James Morse)

To resume from hibernate, the contents of memory are restored from the swap image. This may overwrite any page, including the running kernel and its page tables.

Hibernate copies the code it uses to do the restore into a single page that it knows won't be overwritten, and maps it with page tables built from pages that won't be overwritten. Today the address it uses for this mapping is arbitrary, but to allow kexec to reuse this code, it needs to be idmapped. To idmap the page we must avoid the kernel helpers that have VA_BITS baked in.

Convert create_single_mapping() to take a single PA, and idmap it. The page tables are built in the reverse order to normal, using pfn_pte() to stir in any bits 52:48. T0SZ is always increased to cover 48 bits, or 52 if the copy code has bits 52:48 in its PA.

Signed-off-by: James Morse <james.morse@arm.com>
[Adapted the original patch from James to the trans_pgd interface, so it can be commonly used by both kexec and hibernate. Some minor clean-ups.]
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/linux-arm-kernel/20200115143322.214247-4-james.morse@arm.com/
Link: https://lore.kernel.org/r/20210125191923.1060122-9-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
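The resulting interface plausibly has this shape in later trans_pgd code (a sketch, not guaranteed verbatim):

	/* Build an idmap for one page, returning the TTBR0 value and the
	 * T0SZ needed to install it. */
	int trans_pgd_idmap_page(struct trans_pgd_info *info,
				 phys_addr_t *trans_ttbr0, unsigned long *t0sz,
				 void *page);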
2021-01-27  arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()  (James Morse)

Because only the idmap sets a non-standard T0SZ, __cpu_set_tcr_t0sz() can check for platforms that need to do this using __cpu_uses_extended_idmap() before doing its work. The idmap is only built with enough levels (and T0SZ bits) to map its single page.

To allow hibernate, and then kexec, to idmap their single-page copy routines, __cpu_set_tcr_t0sz() needs to consider additional users, who may need a different number of levels/T0SZ bits to the idmap (i.e. VA_BITS may be enough for the idmap, but not for hibernate/kexec). Always read TCR_EL1, and check whether any work needs doing for this request. __cpu_uses_extended_idmap() remains, as it is used by KVM, whose idmap is also part of the kernel image.

This mostly affects the cpuidle path, where we now get an extra system register read.

CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-8-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
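A hedged sketch of the always-read variant (the TCR_T0SZ_* macro names are assumed from the usual arm64 headers):

	static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
	{
		unsigned long tcr = read_sysreg(tcr_el1);

		if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
			return;		/* nothing to do for this request */

		tcr &= ~TCR_T0SZ_MASK;
		tcr |= t0sz << TCR_T0SZ_OFFSET;
		write_sysreg(tcr, tcr_el1);
		isb();
	}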
2021-01-27  arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions  (Pavel Tatashin)

trans_pgd_* should be independent of the mm context, because the tables that are created by this code are used when there is no mm context around, as it is between kernels. Simply replace the init_mm arguments with NULL.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-7-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  arm64: trans_pgd: pass allocator to trans_pgd_create_copy  (Pavel Tatashin)

Make trans_pgd_create_copy() and its subroutines use the allocator that is passed as an argument.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-6-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  arm64: trans_pgd: make trans_pgd_map_page generic  (Pavel Tatashin)

kexec is going to use a different allocator, so make trans_pgd_map_page() accept the allocator as an argument; kexec is also going to use a different map protection, so pass that via an argument as well.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-5-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
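The allocator is threaded through a small context struct; a sketch of the resulting interface (close to, but not guaranteed to be, the final header):

	/* trans_pgd page tables are built via a caller-supplied allocator */
	struct trans_pgd_info {
		void *(*trans_alloc_page)(void *arg);	/* allocate one zeroed page */
		void *trans_alloc_arg;			/* cookie for the allocator */
	};

	int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
			       void *page, unsigned long dst_addr,
			       pgprot_t pgprot);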
2021-01-27  arm64: hibernate: move page handling functions to new trans_pgd.c  (Pavel Tatashin)

Now that we have abstracted the required functions, move them to a new home. Later, we will generalize these functions so that they are useful outside of hibernation.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-4-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  arm64: hibernate: variable pudp is used instead of p4dp  (Pavel Tatashin)

p4dp should be used when the p4d page is allocated, not pudp. This is not a functional issue, but it should be fixed for logical correctness.

Fixes: e9f6376858b9 ("arm64: add support for folded p4d page tables")
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-3-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  arm64: kexec: make dtb_mem always enabled  (Pavel Tatashin)

Currently, dtb_mem is enabled only when CONFIG_KEXEC_FILE is enabled, which adds ugly ifdefs to C files. Make dtb_mem always enabled: when it is not used, it is simply zero. Also change dtb_mem to phys_addr_t, as it is a physical address.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
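A sketch of the resulting arch state (abridged; other members of the struct are omitted):

	/* arch/arm64/include/asm/kexec.h (sketch) */
	struct kimage_arch {
		void *dtb;
		phys_addr_t dtb_mem;	/* always present; zero when unused */
	};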
2021-01-27  arm64: Include linux/io.h in mm/mmap.c  (Will Deacon)

Commit 507d664450f8 ("arm64: mm: Remove unused header file") removed a bunch of apparently "unused" header inclusions from our mm/mmap.c implementation, but in doing so introduced the following warning when building with W=1:

  >> arch/arm64/mm/mmap.c:17:5: warning: no previous prototype for 'valid_phys_addr_range' [-Wmissing-prototypes]
     17 | int valid_phys_addr_range(phys_addr_t addr, size_t size)
        |     ^~~~~~~~~~~~~~~~~~~~~
  >> arch/arm64/mm/mmap.c:36:5: warning: no previous prototype for 'valid_mmap_phys_addr_range' [-Wmissing-prototypes]
     36 | int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
        |     ^~~~~~~~~~~~~~~~~~~~~~~~~~

Add back the linux/io.h header inclusion to pull in the missing prototypes.

Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/202101271438.V9TmBC31-lkp@intel.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-26  arm64: cacheflush: Remove stale comment  (Shaokun Zhang)

Remove a stale comment, left behind since commit a7ba121215fa ("arm64: use asm-generic/cacheflush.h").

Cc: Christoph Hellwig <hch@lst.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1611575753-36435-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-26  arm64: mm: Remove unused header file  (Shaokun Zhang)

Several header inclusions are never used, so remove them directly.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1611663884-43329-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-21  arm64/sparsemem: reduce SECTION_SIZE_BITS  (Sudarshan Rajagopalan)

memory_block_size_bytes() determines the memory hotplug granularity, i.e. the amount of memory which can be hot-added or hot-removed from the kernel. The generic value here is MIN_MEMORY_BLOCK_SIZE, i.e. (1UL << SECTION_SIZE_BITS), on platforms like arm64 that do not override it.

The current SECTION_SIZE_BITS is 30, i.e. 1GB, which is large; a reduction here increases memory hotplug granularity, thus improving its agility. A reduced section size also reduces memory wastage in the vmemmap mapping for sections with large memory holes. So we try to set the smallest section size possible.

A section size bits selection must follow:
  (MAX_ORDER - 1 + PAGE_SHIFT) <= SECTION_SIZE_BITS

CONFIG_FORCE_MAX_ZONEORDER is always defined on arm64, so just following it would help achieve the smallest section size:

  SECTION_SIZE_BITS = (CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)

  SECTION_SIZE_BITS = 22 (11 - 1 + 12), i.e.   4MB for  4K pages
  SECTION_SIZE_BITS = 24 (11 - 1 + 14), i.e.  16MB for 16K pages without THP
  SECTION_SIZE_BITS = 25 (12 - 1 + 14), i.e.  32MB for 16K pages with THP
  SECTION_SIZE_BITS = 26 (11 - 1 + 16), i.e.  64MB for 64K pages without THP
  SECTION_SIZE_BITS = 29 (14 - 1 + 16), i.e. 512MB for 64K pages with THP

But there are other problems in reducing SECTION_SIZE_BITS. Reducing it by too much would over-populate /sys/devices/system/memory/ and also consume too many page->flags bits in the !vmemmap case. The section size also needs to be a multiple of 128MB to allow a PMD-based vmemmap mapping with CONFIG_ARM64_4K_PAGES.

Given these constraints, let's just reduce the section size to 128MB for the 4K and 16K base page size configs, and to 512MB for the 64K base page size config.

Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/43843c5e092bfe3ec4c41e3c8c78a7ee35b69bb0.1611206601.git.sudaraja@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
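The end result plausibly reduces to a pair of definitions consistent with the constraints above (2^27 = 128MB, 2^29 = 512MB); a sketch:

	/* arch/arm64/include/asm/sparsemem.h (sketch) */
	#ifdef CONFIG_ARM64_64K_PAGES
	#define SECTION_SIZE_BITS 29	/* 512MB sections */
	#else
	#define SECTION_SIZE_BITS 27	/* 128MB sections for 4K and 16K pages */
	#endif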
2021-01-21  arm64: Add support for SMCCC TRNG entropy source  (Andre Przywara)

The ARM architected TRNG firmware interface, described in ARM spec DEN0098, defines an ARM SMCCC based interface to a true random number generator, provided by firmware. This can be discovered via the SMCCC >=v1.1 interface, and provides up to 192 bits of entropy per call.

Hook this SMC call into arm64's arch_get_random_*() implementation, coming to the rescue when the CPU does not implement the ARM v8.5 RNG system registers.

For the detection, we piggy-back on the PSCI/SMCCC discovery (which gives us the conduit to use: hvc/smc), then try to call the ARM_SMCCC_TRNG_VERSION function, which returns -1 if this interface is not implemented.

Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
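A hedged sketch of the hook-up (the capability check, the cached smccc_trng_available flag and the result registers follow the usual arm64/SMCCC conventions, but treat the details as assumptions):

	static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
	{
		struct arm_smccc_res res;

		/* Prefer the v8.5 RNDR system register when the CPU has it */
		if (cpus_have_const_cap(ARM64_HAS_RNG))
			return __arm64_rndr(v);

		/* Otherwise fall back to the firmware TRNG, if probed */
		if (smccc_trng_available) {
			arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, 64, &res);
			if ((int)res.a0 >= 0) {
				*v = res.a3;	/* lowest 64 entropy bits */
				return true;
			}
		}

		return false;
	}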
2021-01-21  firmware: smccc: Introduce SMCCC TRNG framework  (Andre Przywara)

The ARM DEN0098 document describes an SMCCC based firmware service to deliver hardware generated random numbers. Its existence is advertised according to the SMCCC v1.1 specification.

Add a (dummy) call to probe functions implemented in each architecture (ARM and arm64) to determine the existence of this interface. For now this returns false, but it will be overridden by each architecture's support patch.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
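Once an architecture wires it up, the probe plausibly reduces to a version query (a sketch; per SMCCC, a negative return means "not implemented"):

	bool __init smccc_probe_trng(void)
	{
		struct arm_smccc_res res;

		arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_VERSION, &res);

		return (s32)res.a0 >= 0;	/* negative => no TRNG service */
	}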
2021-01-21  random: avoid arch_get_random_seed_long() when collecting IRQ randomness  (Ard Biesheuvel)

When reseeding the CRNG periodically, arch_get_random_seed_long() is called to obtain entropy from an architecture specific source if one is implemented. In most cases, these are special instructions, but in some cases, such as on ARM, we may want to back this using firmware calls, which are considerably more expensive.

Another call to arch_get_random_seed_long() exists in the CRNG driver, in add_interrupt_randomness(), which collects entropy by capturing inter-interrupt timing and relying on interrupt jitter to provide random bits. This is done by keeping a per-CPU state, and mixing in the IRQ number, the cycle counter and the return address every time an interrupt is taken, and mixing this per-CPU state into the entropy pool every 64 invocations, or at least once per second. The entropy that is gathered this way is credited as 1 bit of entropy. Every time this happens, arch_get_random_seed_long() is invoked, and the result is mixed in as well, and also credited with 1 bit of entropy.

This means that arch_get_random_seed_long() is called at least once per second on every CPU, which seems excessive, and doesn't really scale, especially in a virtualization scenario where CPUs may be oversubscribed: in cases where arch_get_random_seed_long() is backed by an instruction that actually goes back to a shared hardware entropy source (such as RNDRRS on ARM), we will end up hitting it hundreds of times per second.

So let's drop the call to arch_get_random_seed_long() from add_interrupt_randomness(), and instead, rely on crng_reseed() to call the arch hook to get random seed material from the platform.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Link: https://lore.kernel.org/r/20201105152944.16953-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-21  mm: Mark anonymous struct field of 'struct vm_fault' as 'const'  (Will Deacon)

The fields of this struct are only ever read after being initialised, so mark it 'const' before somebody tries to modify it again. GCC will then complain (with an error) about modification of these fields after they have been initialised, although LLVM currently allows them without even a warning:

  https://bugs.llvm.org/show_bug.cgi?id=48755

Hopefully, future versions of LLVM will emit a warning.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-21  mm: Use static initialisers for immutable fields of 'struct vm_fault'  (Will Deacon)

In preparation for const-ifying the anonymous struct field of 'struct vm_fault', ensure that it is initialised using designated initialisers.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
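A representative before/after (an illustrative call site, not one specific hunk from the patch):

	/* Before: fields poked individually after declaration */
	struct vm_fault vmf;

	vmf.vma = vma;
	vmf.address = address & PAGE_MASK;
	vmf.pgoff = linear_page_index(vma, address);

	/* After: designated initialisers, ready for a const-qualified field */
	struct vm_fault vmf = {
		.vma = vma,
		.address = address & PAGE_MASK,
		.pgoff = linear_page_index(vma, address),
	};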
2021-01-21  mm: Avoid modifying vmf.address in __collapse_huge_page_swapin()  (Will Deacon)

In preparation for const-ifying the anonymous struct field of 'struct vm_fault', rework __collapse_huge_page_swapin() to avoid continuously updating vmf.address and instead populate a new 'struct vm_fault' on the stack for each page being processed.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-21  mm: Pass 'address' to map to do_set_pte() and drop FAULT_FLAG_PREFAULT  (Will Deacon)

Rather than modifying the 'address' field of the 'struct vm_fault' passed to do_set_pte(), leave that to identify the real faulting address and pass in the virtual address to be mapped by the new pte as a separate argument.

This makes FAULT_FLAG_PREFAULT redundant, as a prefault entry can be identified simply by comparing the new address parameter with the faulting address, so remove the redundant flag at the same time.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
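The key part of the new shape, sketched and abridged (the real function also handles write/dirty accounting and rmap):

	void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
	{
		/* A prefault is simply any address other than the faulting one */
		bool prefault = vmf->address != addr;
		pte_t entry = mk_pte(page, vmf->vma->vm_page_prot);

		if (prefault && arch_wants_old_prefaulted_pte())
			entry = pte_mkold(entry);	/* don't look hot to vmscan */

		set_pte_at(vmf->vma->vm_mm, addr, vmf->pte, entry);
	}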
2021-01-21  mm: Move immutable fields of 'struct vm_fault' into anonymous struct  (Will Deacon)

'struct vm_fault' contains information about the fault being serviced alongside mutable fields contributing to the state of the fault-handling logic. Unfortunately, the distinction between the two is not clear-cut, and a number of callers end up manipulating the structure temporarily before restoring it when returning.

Try to clean this up by moving the immutable fault information into an anonymous struct, which will later be marked as 'const'. Ideally, the 'flags' field would be part of the new structure too, but it seems as though the ->page_mkwrite() path is not ready for this yet.

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/CAHk-=whYs9XsO88iqJzN6NC=D-dp2m0oYXuOoZ=eWnvv=5OA+w@mail.gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
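The resulting layout, roughly (field set abridged; a sketch of the idea, not the verbatim header):

	struct vm_fault {
		struct {	/* immutable after initialisation; 'const' later */
			struct vm_area_struct *vma;	/* target VMA */
			gfp_t gfp_mask;
			pgoff_t pgoff;			/* logical page offset */
			unsigned long address;		/* faulting virtual address */
		};
		unsigned int flags;	/* not yet ready to be immutable */
		pmd_t *pmd;
		pte_t *pte;
		/* ... further mutable state ... */
	};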
2021-01-20  perf: Constify static struct attribute_group  (Rikard Falkeborn)

The only usage is to put their addresses in an array of pointers to const struct attribute_group. Make them const to allow the compiler to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-5-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
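The pattern, which applies equally to the hisi, imx_ddr and qcom patches below (names here are illustrative):

	/* Before: writable static data, even though it is never modified */
	static struct attribute_group foo_pmu_events_group = {
		.name  = "events",
		.attrs = foo_pmu_events,
	};

	/* After: const, so the compiler may place it in read-only memory */
	static const struct attribute_group foo_pmu_events_group = {
		.name  = "events",
		.attrs = foo_pmu_events,
	};

	/* Unchanged: the only users already store pointers-to-const */
	static const struct attribute_group *foo_pmu_attr_groups[] = {
		&foo_pmu_events_group,
		NULL,
	};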
2021-01-20  perf: hisi: Constify static struct attribute_group  (Rikard Falkeborn)

The only usage is to put their addresses in an array of pointers to const struct attribute_group. Make them const to allow the compiler to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-4-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-20  perf/imx_ddr: Constify static struct attribute_group  (Rikard Falkeborn)

The only usage is to put their addresses in an array of pointers to const struct attribute_group. Make them const to allow the compiler to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-3-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-20  perf: qcom: Constify static struct attribute_group  (Rikard Falkeborn)

The only usage is to put their addresses in an array of pointers to const struct attribute_group. Make them const to allow the compiler to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-2-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-20  drivers/perf: Add support for ARMv8.3-SPE  (Wei Li)

Armv8.3 extends the SPE by adding:

- An Alignment field in the Events packet, and filtering on this event using PMSEVFR_EL1.
- Support for the Scalable Vector Extension (SVE).

The main additions for SVE are:

- Recording the vector length for SVE operations in the Operation Type packet. It is not possible to filter on vector length.
- Incomplete-predicate and empty-predicate fields in the Events packet, and filtering on these events using PMSEVFR_EL1.

Update the pmsevfr checks in the SPE driver for the empty/partial-predicate SVE and alignment events.

Signed-off-by: Wei Li <liwei391@huawei.com>
Link: https://lore.kernel.org/r/20201203141609.14148-1-liwei391@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-20  arm64: mm: Implement arch_wants_old_prefaulted_pte()  (Will Deacon)

On CPUs with hardware AF/DBM, initialising prefaulted PTEs as 'old' improves vmscan behaviour and does not appear to introduce any overhead elsewhere.

Implement arch_wants_old_prefaulted_pte() to return 'true' if we detect hardware access flag support at runtime. This can be extended in future based on MIDR matching if necessary.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
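On arm64 this plausibly keys off the hardware access flag capability (a sketch consistent with the description; cpu_has_hw_af() is the existing runtime check):

	/* arch/arm64/include/asm/pgtable.h (sketch) */
	static inline bool arch_faults_on_old_pte(void)
	{
		WARN_ON(preemptible());

		return !cpu_has_hw_af();	/* no hardware AF => faults on old PTEs */
	}
	#define arch_faults_on_old_pte		arch_faults_on_old_pte

	static inline bool arch_wants_old_prefaulted_pte(void)
	{
		/* Hardware AF makes 'old' prefaulted PTEs essentially free */
		return !arch_faults_on_old_pte();
	}
	#define arch_wants_old_prefaulted_pte	arch_wants_old_prefaulted_pte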
2021-01-20  mm: Allow architectures to request 'old' entries when prefaulting  (Will Deacon)

Commit 5c0a85fad949 ("mm: make faultaround produce old ptes") changed the "faultaround" behaviour to initialise prefaulted PTEs as 'old', since this avoids vmscan wrongly assuming that they are hot, despite having never been explicitly accessed by userspace. The change has been shown to benefit numerous arm64 micro-architectures (with hardware access flag) running Android, where both application launch latency and direct reclaim time are significantly reduced (by 10%+ and ~80% respectively).

Unfortunately, commit 315d09bf30c2 ("Revert "mm: make faultaround produce old ptes"") reverted the change due to it being identified as the cause of a ~6% regression in unixbench on x86. Experiments on a variety of recent arm64 micro-architectures indicate that unixbench is not affected by the original commit, which appears to yield a 0-1% performance improvement.

Since one size does not fit all for the initial state of prefaulted PTEs, introduce arch_wants_old_prefaulted_pte(), which allows an architecture to opt-in to 'old' prefaulted PTEs at runtime based on whatever criteria it may have.

Cc: Jan Kara <jack@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
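The generic default is the conservative one; a sketch of the opt-in hook:

	/* mm/memory.c (sketch): architectures opt in by providing their own */
	#ifndef arch_wants_old_prefaulted_pte
	static inline bool arch_wants_old_prefaulted_pte(void)
	{
		/* Making an 'old' PTE 'young' can be costly without hardware
		 * AF support, so keep prefaulted entries 'young' by default. */
		return false;
	}
	#endif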
2021-01-20  mm: Cleanup faultaround and finish_fault() codepaths  (Kirill A. Shutemov)

alloc_set_pte() has two users with different requirements: in the faultaround code it is called from an atomic context and the PTE page table has to be preallocated; finish_fault() can sleep and allocate the page table as needed. The PTL locking rules are also strange, hard to follow, and overkill for finish_fault().

Let's untangle the mess. alloc_set_pte() has gone now; all locking is explicit. The price is some code duplication to handle huge pages in the faultaround path, but it should be fine, given the overall improvement in readability.

Link: https://lore.kernel.org/r/20201229132819.najtavneutnf7ajp@box
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
[will: s/from from/from/ in comment; spotted by willy]
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-20  arm64: remove EL0 exception frame record  (Mark Rutland)

When entering an exception from EL0, the entry code creates a synthetic frame record with a NULL PC. This was used by the code introduced in commit 7326749801396105 ("arm64: unwind: reference pt_regs via embedded stack frame") to discover exception entries on the stack and dump the associated pt_regs. Since the NULL PC was undesirable for the stacktrace, we added a special case to unwind_frame() to prevent the NULL PC from being logged.

Since commit a25ffd3a6302a678 ("arm64: traps: Don't print stack or raw PC/LR values in backtraces"), we no longer try to dump the pt_regs as part of a stacktrace, and hence no longer need the synthetic exception record.

This patch removes the synthetic exception record and the associated special case in unwind_frame(). Instead, EL0 exceptions set the FP to NULL, as is the case for other terminal records (e.g. when a kernel thread starts). The synthetic record for exceptions from EL1 is retained, as it has useful unwind information for the interrupted context.

To make the terminal case a bit clearer, an explicit check is added to the start of unwind_frame(). This would otherwise be caught implicitly by the on_accessible_stack() checks.

Reported-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210113173155.43063-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
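The explicit terminal-record check is plausibly just this at the top of the unwinder (a sketch; the body after the check is elided):

	int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
	{
		unsigned long fp = frame->fp;

		/* Terminal record: FP was set to NULL, nothing to unwind */
		if (!fp)
			return -ENOENT;

		/* ... existing accessibility and unwinding logic follows ... */
	}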
2021-01-20  arm64: mte: style: Simplify bool comparison  (YANG LI)

Fix the following coccicheck warning:

  ./tools/testing/selftests/arm64/mte/check_buffer_fill.c:84:12-35: WARNING: Comparison to bool

Signed-off-by: YANG LI <abaci-bugfix@linux.alibaba.com>
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://lore.kernel.org/r/1610357737-68678-1-git-send-email-abaci-bugfix@linux.alibaba.com
Signed-off-by: Will Deacon <will@kernel.org>
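The style fix is the usual pattern (an illustrative example, not the actual selftest code):

	/* Before: explicit comparison against a bool constant */
	if (mte_fault_seen == true)
		handle_fault();

	/* After: test the value directly */
	if (mte_fault_seen)
		handle_fault();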
2021-01-20  firmware: smccc: Add SMCCC TRNG function call IDs  (Ard Biesheuvel)

The ARM architected TRNG firmware interface, described in ARM spec DEN0098, defines an ARM SMCCC based interface to a true random number generator, provided by firmware. Add the definitions of the SMCCC functions as defined by the spec.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210106103453.152275-2-andre.przywara@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
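Per DEN0098, the function IDs are plausibly these raw values (a sketch as I read the spec, not the verbatim kernel header):

	/* include/linux/arm-smccc.h (sketch) */
	#define ARM_SMCCC_TRNG_VERSION		0x84000050	/* SMC32 */
	#define ARM_SMCCC_TRNG_FEATURES		0x84000051
	#define ARM_SMCCC_TRNG_GET_UUID		0x84000052
	#define ARM_SMCCC_TRNG_RND32		0x84000053
	#define ARM_SMCCC_TRNG_RND64		0xc4000053	/* SMC64 variant */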
2021-01-20  arm64/mm: Add warning for outside range requests in vmemmap_populate()  (Anshuman Khandual)

vmemmap_populate() does not validate that the requested vmemmap address range is inside the platform assigned space, i.e. [VMEMMAP_START..VMEMMAP_END]. Instead it just goes ahead and creates the mapping, which might then overlap with other sections in the kernel virtual address space.

Just add a warning here for range overrun, which helps detect the problem earlier, before a potential struct page corruption. This also makes vmemmap_populate() symmetrical with vmemmap_free(), which already has a similar warning.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1609845851-25064-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
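The added check is plausibly a one-liner at the top of the function (a sketch; the existing population logic is elided):

	/* arch/arm64/mm/mmu.c (sketch) */
	int __meminit vmemmap_populate(unsigned long start, unsigned long end,
				       int node, struct vmem_altmap *altmap)
	{
		WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));

		/* ... existing pmd/pte population continues unchanged ... */
		return 0;
	}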
2021-01-20  arm64: Drop workaround for broken 'S' constraint with GCC 4.9  (Marc Zyngier)

Since GCC < 5.1 has been shown to be unsuitable for the arm64 kernel, let's drop the workaround for the 'S' asm constraint that GCC 4.9 doesn't always grok. This is effectively a revert of 9fd339a45be5 ("arm64: Work around broken GCC 4.9 handling of "S" constraint").

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210118130129.2875949-1-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-17  Linux 5.11-rc4  [tag: v5.11-rc4]  (Linus Torvalds)
2021-01-17  Merge tag 'perf-tools-fixes-2021-01-17' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux  (Linus Torvalds)

Pull perf tools fixes from Arnaldo Carvalho de Melo:

 - Fix 'CPU too large' error in Intel PT
 - Correct event attribute sizes in 'perf inject'
 - Sync build_bug.h and kvm.h kernel copies
 - Fix bpf.h header include directive in 5sec.c 'perf trace' bpf example
 - libbpf tests fixes
 - Fix shadow stat 'perf test' for non-bash shells
 - Take cgroups into account for shadow stats in 'perf stat'

* tag 'perf-tools-fixes-2021-01-17' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
  perf inject: Correct event attribute sizes
  perf intel-pt: Fix 'CPU too large' error
  perf stat: Take cgroups into account for shadow stats
  perf stat: Introduce struct runtime_stat_data
  libperf tests: Fail when failing to get a tracepoint id
  libperf tests: If a test fails return non-zero
  libperf tests: Avoid uninitialized variable warning
  perf test: Fix shadow stat test for non-bash shells
  tools headers: Syncronize linux/build_bug.h with the kernel sources
  tools headers UAPI: Sync kvm.h headers with the kernel sources
  perf bpf examples: Fix bpf.h header include directive in 5sec.c example
2021-01-17  Merge tag 'powerpc-5.11-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)

Pull powerpc fixes from Michael Ellerman:
 "One fix for a lack of alignment in our linker script, that can lead to crashes depending on configuration etc.

  One fix for the 32-bit VDSO after the C VDSO conversion.

  Thanks to Andreas Schwab, Ariel Marcovitch, and Christophe Leroy"

* tag 'powerpc-5.11-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/vdso: Fix clock_gettime_fallback for vdso32
  powerpc: Fix alignment bug within the init sections
2021-01-17  Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds)

Pull misc vfs fixes from Al Viro:
 "Several assorted fixes. I still think that audit ->d_name race is better fixed this way for the benefit of backports, with any possibly fancier variants done on top of it"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  dump_common_audit_data(): fix racy accesses to ->d_name
  iov_iter: fix the uaccess area in copy_compat_iovec_from_user
  umount(2): move the flag validity checks first
2021-01-17  mm: don't put pinned pages into the swap cache  (Linus Torvalds)

So technically there is nothing wrong with adding a pinned page to the swap cache, but the pinning obviously means that the page can't actually be free'd right now anyway, so it's a bit pointless.

However, the real problem is not with it being a bit pointless: the real issue is that after we've added it to the swap cache, we'll try to unmap the page. That will succeed, because the code in mm/rmap.c doesn't know or care about pinned pages.

Even the unmapping isn't fatal per se, since the page will stay around in memory due to the pinning, and we do hold the connection to it using the swap cache. But when we then touch it next and take a page fault, the logic in do_swap_page() will map it back into the process as a possibly read-only page, and we'll then break the page association on the next COW fault.

Honestly, this issue could have been fixed in any of those other places:

 (a) we could refuse to unmap a pinned page (which makes conceptual sense),

 (b) we could make sure to re-map a pinned page writably in do_swap_page(), or

 (c) we could just make do_wp_page() not COW the pinned page (which was what we historically did before that "mm: do_wp_page() simplification" commit).

But while all of them are equally valid models for breaking this chain, not putting pinned pages into the swap cache in the first place is the simplest one by far.

It's also the safest one: the reason why do_wp_page() was changed in the first place was that getting the "can I re-use this page" wrong is so fraught with errors. If you do it wrong, you end up with an incorrectly shared page.

As a result, using "page_maybe_dma_pinned()" in either do_wp_page() or do_swap_page() would be a serious bug since it is only a (very good) heuristic. Re-using the page requires a hard black-and-white rule with no room for ambiguity.

In contrast, saying "this page is very likely dma pinned, so let's not add it to the swap cache and try to unmap it" is an obviously safe thing to do, and if the heuristic might very rarely be a false positive, no harm is done.

Fixes: 09854ba94c6a ("mm: do_wp_page() simplification")
Reported-and-tested-by: Martin Raiber <martin@urbackup.org>
Cc: Pavel Begunkov <asml.silence@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
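A hedged sketch of where the check lands, per the reasoning above (the exact placement inside mm/vmscan.c's shrink_page_list() loop is an assumption):

	/* mm/vmscan.c, shrink_page_list() (sketch): skip likely-pinned
	 * anonymous pages rather than adding them to the swap cache. */
	if (PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page)) {
		if (page_maybe_dma_pinned(page))
			goto keep_locked;	/* heuristic: safe to skip */
		/* ... proceed with add_to_swap() as before ... */
	}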
2021-01-16  Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi  (Linus Torvalds)

Pull SCSI fixes from James Bottomley:
 "Nine minor fixes, seven in drivers and two in the core SCSI disk driver (sd), which should be harmless, involving removing an unused variable and quietening a spurious warning"

Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: sd: Remove obsolete variable in sd_remove()
  scsi: sd: Suppress spurious errors when WRITE SAME is being disabled
  scsi: scsi_debug: Fix memleak in scsi_debug_init()
  scsi: mpt3sas: Fix spelling mistake in Kconfig "compatiblity" -> "compatibility"
  scsi: qedi: Correct max length of CHAP secret
  scsi: ufs: Correct the LUN used in eh_device_reset_handler() callback
  scsi: ufs: Relocate flush of exceptional event
  scsi: ufs: Relax the condition of UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL
  scsi: ufs: Fix possible power drain during system suspend
2021-01-16  dump_common_audit_data(): fix racy accesses to ->d_name  (Al Viro)

We are not guaranteed the locking environment that would prevent the dentry getting renamed right under us. And it's possible for the old long name to be freed after rename, leading to a UAF here.

Cc: stable@kernel.org # v2.6.2+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
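A sketch of the defensive pattern (the helper name is hypothetical; the point is taking ->d_lock around the ->d_name access):

	static void audit_log_d_name(struct audit_buffer *ab, struct dentry *d)
	{
		/* A rename can free an old external name; ->d_lock keeps
		 * d_name stable while we copy it into the audit record. */
		spin_lock(&d->d_lock);
		audit_log_untrustedstring(ab, d->d_name.name);
		spin_unlock(&d->d_lock);
	}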
2021-01-16  Merge tag 'block-5.11-2021-01-16' of git://git.kernel.dk/linux-block  (Linus Torvalds)

Pull block fixes from Jens Axboe:
 "Just an nvme pull request via Christoph:

   - don't initialize hwmon for discovery controllers (Sagi Grimberg)
   - fix iov_iter handling in nvme-tcp (Sagi Grimberg)
   - fix a preempt warning in nvme-tcp (Sagi Grimberg)
   - fix a possible NULL pointer dereference in nvme (Israel Rukshin)"

* tag 'block-5.11-2021-01-16' of git://git.kernel.dk/linux-block:
  nvme: don't intialize hwmon for discovery controllers
  nvme-tcp: fix possible data corruption with bio merges
  nvme-tcp: Fix warning with CONFIG_DEBUG_PREEMPT
  nvmet-rdma: Fix NULL deref when setting pi_enable and traddr INADDR_ANY