path: root/arch/arm/mm
Age  Commit message  Author
2021-09-03  Merge branch 'akpm' (patches from Andrew)  [Linus Torvalds]
Merge misc updates from Andrew Morton: "173 patches. Subsystems affected by this series: ia64, ocfs2, block, and mm (debug, pagecache, gup, swap, shmem, memcg, selftests, pagemap, mremap, bootmem, sparsemem, vmalloc, kasan, pagealloc, memory-failure, hugetlb, userfaultfd, vmscan, compaction, mempolicy, memblock, oom-kill, migration, ksm, percpu, vmstat, and madvise)" * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (173 commits) mm/madvise: add MADV_WILLNEED to process_madvise() mm/vmstat: remove unneeded return value mm/vmstat: simplify the array size calculation mm/vmstat: correct some wrong comments mm/percpu.c: remove obsolete comments of pcpu_chunk_populated() selftests: vm: add COW time test for KSM pages selftests: vm: add KSM merging time test mm: KSM: fix data type selftests: vm: add KSM merging across nodes test selftests: vm: add KSM zero page merging test selftests: vm: add KSM unmerge test selftests: vm: add KSM merge test mm/migrate: correct kernel-doc notation mm: wire up syscall process_mrelease mm: introduce process_mrelease system call memblock: make memblock_find_in_range method private mm/mempolicy.c: use in_task() in mempolicy_slab_node() mm/mempolicy: unify the create() func for bind/interleave/prefer-many policies mm/mempolicy: advertise new MPOL_PREFERRED_MANY mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY ...
2021-09-03  mm: remove flush_kernel_dcache_page  [Christoph Hellwig]
flush_kernel_dcache_page is a rather confusing interface that implements a subset of flush_dcache_page by not being able to properly handle page cache mapped pages. The only callers left are in the exec code; all other previous callers were incorrect, as they could have dealt with page cache pages. Replace the calls to flush_kernel_dcache_page with calls to flush_dcache_page, which for all architectures either does exactly the same thing, or additionally contains one or more of the following: 1) an optimization to defer the cache flush for page cache pages not mapped into userspace 2) additional flushing for mapped page cache pages if cache aliases are possible Link: https://lkml.kernel.org/r/20210712060928.4161649-7-hch@lst.de Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Cc: Alex Shi <alexs@kernel.org> Cc: Geoff Levand <geoff@infradead.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Guo Ren <guoren@kernel.org> Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Cercueil <paul@crapouillou.net> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Ulf Hansson <ulf.hansson@linaro.org> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Yoshinori Sato <ysato@users.osdn.me> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
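A minimal sketch of the resulting calling pattern (illustrative only, not a hunk from this commit; the helper name and page source are made up):

    /* Write to a page through a kernel mapping, then flush: the old
     * flush_kernel_dcache_page(page) call becomes flush_dcache_page(page). */
    #include <linux/highmem.h>
    #include <linux/string.h>

    static void copy_into_page(struct page *page, const void *src, size_t len)
    {
            void *dst = kmap_local_page(page);

            memcpy(dst, src, len);
            kunmap_local(dst);
            flush_dcache_page(page);    /* was: flush_kernel_dcache_page(page) */
    }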
2021-09-02  Merge tag 'dma-mapping-5.15' of git://git.infradead.org/users/hch/dma-mapping  [Linus Torvalds]
Pull dma-mapping updates from Christoph Hellwig: - fix debugfs initialization order (Anthony Iliopoulos) - use memory_intersects() directly (Kefeng Wang) - allow returning specific errors from ->map_sg (Logan Gunthorpe, Martin Oliveira) - turn the dma_map_sg return value into an unsigned int (me) - provide a common global coherent pool implementation (me) * tag 'dma-mapping-5.15' of git://git.infradead.org/users/hch/dma-mapping: (31 commits) hexagon: use the generic global coherent pool dma-mapping: make the global coherent pool conditional dma-mapping: add a dma_init_global_coherent helper dma-mapping: simplify dma_init_coherent_memory dma-mapping: allow using the global coherent pool for !ARM ARM/nommu: use the generic dma-direct code for non-coherent devices dma-direct: add support for dma_coherent_default_memory dma-mapping: return an unsigned int from dma_map_sg{,_attrs} dma-mapping: disallow .map_sg operations from returning zero on error dma-mapping: return error code from dma_dummy_map_sg() x86/amd_gart: don't set failed sg dma_address to DMA_MAPPING_ERROR x86/amd_gart: return error code from gart_map_sg() xen: swiotlb: return error code from xen_swiotlb_map_sg() parisc: return error code from .map_sg() ops sparc/iommu: don't set failed sg dma_address to DMA_MAPPING_ERROR sparc/iommu: return error codes from .map_sg() ops s390/pci: don't set failed sg dma_address to DMA_MAPPING_ERROR s390/pci: return error code from s390_dma_map_sg() powerpc/iommu: don't set failed sg dma_address to DMA_MAPPING_ERROR powerpc/iommu: return error code from .map_sg() ops ...
2021-08-18  ARM/nommu: use the generic dma-direct code for non-coherent devices  [Christoph Hellwig]
Select the right options to just use the generic dma-direct code instead of reimplementing it. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Dillon Min <dillon.minfei@gmail.com>
2021-08-10  ARM: 9104/2: Fix Keystone 2 kernel mapping regression  [Linus Walleij]
This fixes a Keystone 2 regression discovered as a side effect of defining and passing the physical start/end sections of the kernel to the MMU remapping code. As the Keystone applies an offset to all physical addresses, including those identified and patched by phys2virt, we fail to account for this offset in the kernel_sec_start and kernel_sec_end variables. Further, these offsets can extend into the 64bit range on LPAE systems such as the Keystone 2. Fix it like this: - Extend kernel_sec_start and kernel_sec_end to be 64bit - Add the offset also to kernel_sec_start and kernel_sec_end As passing kernel_sec_start and kernel_sec_end as 64bit invariably incurs BE8 endianness issues, I have attempted to dry-code around these. Tested on the Vexpress QEMU model both with and without LPAE enabled. Fixes: 6e121df14ccd ("ARM: 9090/1: Map the lowmem and kernel separately") Reported-by: Nishanth Menon <nmenon@kernel.org> Suggested-by: Russell King <rmk+kernel@armlinux.org.uk> Tested-by: Grygorii Strashko <grygorii.strashko@ti.com> Tested-by: Nishanth Menon <nmenon@kernel.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-08-09  ARM/dma-mapping: don't set failed sg dma_address to DMA_MAPPING_ERROR  [Logan Gunthorpe]
Setting the ->dma_address to DMA_MAPPING_ERROR is not part of the ->map_sg calling convention, so remove it. Link: https://lore.kernel.org/linux-mips/20210716063241.GC13345@lst.de/ Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
2021-08-09  ARM/dma-mapping: return error code from .map_sg() ops  [Martin Oliveira]
The .map_sg() op now expects an error code instead of zero on failure. In the case of a DMA_MAPPING_ERROR, -EIO is returned. Otherwise, -ENOMEM or -EINVAL is returned depending on the error from __map_sg_chunk(). Signed-off-by: Martin Oliveira <martin.oliveira@eideticom.com> Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
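Hedged sketch of the new convention (the function names example_map_page and example_unmap_partial are illustrative, not the literal arm_iommu_map_sg() diff):

    /* A .map_sg() implementation now returns the number of mapped entries
     * on success and a negative errno on failure, never 0. */
    static int example_map_sg(struct device *dev, struct scatterlist *sgl,
                              int nents, enum dma_data_direction dir,
                              unsigned long attrs)
    {
            struct scatterlist *s;
            int i;

            for_each_sg(sgl, s, nents, i) {
                    sg_dma_address(s) = example_map_page(dev, sg_page(s),
                                                         s->offset, s->length,
                                                         dir, attrs);
                    if (sg_dma_address(s) == DMA_MAPPING_ERROR)
                            goto unmap;    /* was: return 0 */
                    sg_dma_len(s) = s->length;
            }
            return nents;

    unmap:
            example_unmap_partial(dev, sgl, i, dir);
            return -EIO;    /* -ENOMEM/-EINVAL propagate from helpers instead */
    }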
2021-07-10  Merge tag 'fixes-2021-07-09' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock  [Linus Torvalds]
Pull memblock fix from Mike Rapoport: "This is a fix for the rework of ARM's pfn_valid() implementation merged during this merge window. Don't abuse pfn_valid() to check if a pfn is in RAM. The semantics of pfn_valid() is to check presence of the memory map for a PFN, not whether a PFN is in RAM. The memory map may be present for a hole in the physical memory, and if such a hole corresponds to an MMIO range, __arm_ioremap_pfn_caller() will produce a WARN() and fail. Use memblock_is_map_memory() instead of pfn_valid() to check if a PFN is in RAM or not" * tag 'fixes-2021-07-09' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock: arm: ioremap: don't abuse pfn_valid() to check if pfn is in RAM
2021-07-06  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  [Linus Torvalds]
Pull ARM development updates from Russell King: - Make it clear __swp_entry_to_pte() uses PTE_TYPE_FAULT - Updates for setting vmalloc size via command line to resolve an issue with the 8MiB hole not properly being accounted for, and clean up the code. - ftrace support for module PLTs - Spelling fixes - kbuild updates for removing generated files and pattern rules for generating files - Clang/llvm updates - Change the way the kernel is mapped, placing it in vmalloc space instead. - Remove arm_pm_restart from arm and aarch64. * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (29 commits) ARM: 9098/1: ftrace: MODULE_PLT: Fix build problem without DYNAMIC_FTRACE ARM: 9097/1: mmu: Declare section start/end correctly ARM: 9096/1: Remove arm_pm_restart() ARM: 9095/1: ARM64: Remove arm_pm_restart() ARM: 9094/1: Register with kernel restart handler ARM: 9093/1: drivers: firmware: psci: Register with kernel restart handler ARM: 9092/1: xen: Register with kernel restart handler ARM: 9091/1: Revert "mm: qsd8x50: Fix incorrect permission faults" ARM: 9090/1: Map the lowmem and kernel separately ARM: 9089/1: Define kernel physical section start and end ARM: 9088/1: Split KERNEL_OFFSET from PAGE_OFFSET ARM: 9087/1: kprobes: test-thumb: fix for LLVM_IAS=1 ARM: 9086/1: syscalls: use pattern rules to generate syscall headers ARM: 9085/1: remove unneeded abi parameter to syscallnr.sh ARM: 9084/1: simplify the build rule of mach-types.h ARM: 9083/1: uncompress: atags_to_fdt: Spelling s/REturn/Return/ ARM: 9082/1: [v2] mark prepare_page_table as __init ARM: 9079/1: ftrace: Add MODULE_PLTS support ARM: 9078/1: Add warn suppress parameter to arm_gen_branch_link() ARM: 9077/1: PLT: Move struct plt_entries definition to header ...
2021-07-06  arm: ioremap: don't abuse pfn_valid() to check if pfn is in RAM  [Mike Rapoport]
The semantics of pfn_valid() is to check presence of the memory map for a PFN and not whether a PFN is in RAM. The memory map may be present for a hole in the physical memory, and if such a hole corresponds to an MMIO range, __arm_ioremap_pfn_caller() will produce a WARN() and fail:
[ 2.863406] WARNING: CPU: 0 PID: 1 at arch/arm/mm/ioremap.c:287 __arm_ioremap_pfn_caller+0xf0/0x1dc
[ 2.864812] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.13.0-09882-ga180bd1d7e16 #1
[ 2.865263] Hardware name: Generic DT based system
[ 2.865711] Backtrace:
[ 2.866063] [<80b07e58>] (dump_backtrace) from [<80b080ac>] (show_stack+0x20/0x24)
[ 2.866633] r7:00000009 r6:0000011f r5:60000153 r4:80ddd1c0
[ 2.866922] [<80b0808c>] (show_stack) from [<80b18df0>] (dump_stack_lvl+0x58/0x74)
[ 2.867117] [<80b18d98>] (dump_stack_lvl) from [<80b18e20>] (dump_stack+0x14/0x1c)
[ 2.867309] r5:80118cac r4:80dc6774
[ 2.867404] [<80b18e0c>] (dump_stack) from [<80122fcc>] (__warn+0xe4/0x150)
[ 2.867583] [<80122ee8>] (__warn) from [<80b08850>] (warn_slowpath_fmt+0x88/0xc0)
[ 2.867774] r7:0000011f r6:80dc6774 r5:00000000 r4:814c4000
[ 2.867917] [<80b087cc>] (warn_slowpath_fmt) from [<80118cac>] (__arm_ioremap_pfn_caller+0xf0/0x1dc)
[ 2.868158] r9:00000001 r8:9ef00000 r7:80e8b0d4 r6:0009ef00 r5:00000000 r4:00100000
[ 2.868346] [<80118bbc>] (__arm_ioremap_pfn_caller) from [<80118df8>] (__arm_ioremap_caller+0x60/0x68)
[ 2.868581] r9:9ef00000 r8:821b6dc0 r7:00100000 r6:00000000 r5:815d1010 r4:80118d98
[ 2.868761] [<80118d98>] (__arm_ioremap_caller) from [<80118fcc>] (ioremap+0x28/0x30)
[ 2.868958] [<80118fa4>] (ioremap) from [<8062871c>] (__devm_ioremap_resource+0x154/0x1c8)
[ 2.869169] r5:815d1010 r4:814c5d2c
[ 2.869263] [<806285c8>] (__devm_ioremap_resource) from [<8062899c>] (devm_ioremap_resource+0x14/0x18)
[ 2.869495] r9:9e9f57a0 r8:814c4000 r7:815d1000 r6:815d1010 r5:8177c078 r4:815cf400
[ 2.869676] [<80628988>] (devm_ioremap_resource) from [<8091c6e4>] (fsi_master_acf_probe+0x1a8/0x5d8)
[ 2.869909] [<8091c53c>] (fsi_master_acf_probe) from [<80723dbc>] (platform_probe+0x68/0xc8)
[ 2.870124] r9:80e9dadc r8:00000000 r7:815d1010 r6:810c1000 r5:815d1010 r4:00000000
[ 2.870306] [<80723d54>] (platform_probe) from [<80721208>] (really_probe+0x1cc/0x470)
[ 2.870512] r7:815d1010 r6:810c1000 r5:00000000 r4:815d1010
[ 2.870651] [<8072103c>] (really_probe) from [<807215cc>] (__driver_probe_device+0x120/0x1fc)
[ 2.870872] r7:815d1010 r6:810c1000 r5:810c1000 r4:815d1010
[ 2.871013] [<807214ac>] (__driver_probe_device) from [<807216e8>] (driver_probe_device+0x40/0xd8)
[ 2.871244] r9:80e9dadc r8:00000000 r7:815d1010 r6:810c1000 r5:812feaa0 r4:812fe994
[ 2.871428] [<807216a8>] (driver_probe_device) from [<80721a58>] (__driver_attach+0xa8/0x1d4)
[ 2.871647] r9:80e9dadc r8:00000000 r7:00000000 r6:810c1000 r5:815d1054 r4:815d1010
[ 2.871830] [<807219b0>] (__driver_attach) from [<8071ee8c>] (bus_for_each_dev+0x88/0xc8)
[ 2.872040] r7:00000000 r6:814c4000 r5:807219b0 r4:810c1000
[ 2.872194] [<8071ee04>] (bus_for_each_dev) from [<80722208>] (driver_attach+0x28/0x30)
[ 2.872418] r7:810a2aa0 r6:00000000 r5:821b6000 r4:810c1000
[ 2.872570] [<807221e0>] (driver_attach) from [<8071f80c>] (bus_add_driver+0x114/0x200)
[ 2.872788] [<8071f6f8>] (bus_add_driver) from [<80722ec4>] (driver_register+0x98/0x128)
[ 2.873011] r7:81011d0c r6:814c4000 r5:00000000 r4:810c1000
[ 2.873167] [<80722e2c>] (driver_register) from [<80725240>] (__platform_driver_register+0x2c/0x34)
[ 2.873408] r5:814dcb80 r4:80f2a764
[ 2.873513] [<80725214>] (__platform_driver_register) from [<80f2a784>] (fsi_master_acf_init+0x20/0x28)
[ 2.873766] [<80f2a764>] (fsi_master_acf_init) from [<80f014a8>] (do_one_initcall+0x108/0x290)
[ 2.874007] [<80f013a0>] (do_one_initcall) from [<80f01840>] (kernel_init_freeable+0x1ac/0x230)
[ 2.874248] r9:80e9dadc r8:80f3987c r7:80f3985c r6:00000007 r5:814dcb80 r4:80f627a4
[ 2.874456] [<80f01694>] (kernel_init_freeable) from [<80b19f44>] (kernel_init+0x20/0x138)
[ 2.874691] r10:00000000 r9:00000000 r8:00000000 r7:00000000 r6:00000000 r5:80b19f24
[ 2.874894] r4:00000000
[ 2.874977] [<80b19f24>] (kernel_init) from [<80100170>] (ret_from_fork+0x14/0x24)
[ 2.875231] Exception stack(0x814c5fb0 to 0x814c5ff8)
[ 2.875535] 5fa0: 00000000 00000000 00000000 00000000
[ 2.875849] 5fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[ 2.876133] 5fe0: 00000000 00000000 00000000 00000000 00000013 00000000
[ 2.876363] r5:80b19f24 r4:00000000
[ 2.876683] ---[ end trace b2f74b8536829970 ]---
[ 2.876911] fsi-master-acf gpio-fsi: ioremap failed for resource [mem 0x9ef00000-0x9effffff]
[ 2.877492] fsi-master-acf gpio-fsi: Error -12 mapping coldfire memory
[ 2.877689] fsi-master-acf: probe of gpio-fsi failed with error -12
Use memblock_is_map_memory() instead of pfn_valid() to check if a PFN is in RAM or not. Reported-by: Guenter Roeck <linux@roeck-us.net> Fixes: a4d5613c4dc6 ("arm: extend pfn_valid to take into account freed memory map alignment") Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Guenter Roeck <linux@roeck-us.net>
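Reconstructed from the description (the exact context in __arm_ioremap_pfn_caller() may differ), the fixed check looks roughly like:

    /* Don't allow RAM to be remapped with mismatched attributes; test for
     * RAM with memblock_is_map_memory() rather than pfn_valid(), so that
     * memory-map holes covering MMIO ranges are not misclassified as RAM. */
    if (WARN_ON(memblock_is_map_memory(__pfn_to_phys(pfn)) &&
                mtype != MT_MEMORY_RW))
            return NULL;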
2021-07-04  Merge tag 'memblock-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock  [Linus Torvalds]
Pull memblock updates from Mike Rapoport: "Fix arm crashes caused by holes in the memory map. The coordination between freeing of unused memory map, pfn_valid() and core mm assumptions about validity of the memory map in various ranges was not designed for complex layouts of the physical memory with a lot of holes all over the place. Kefeng Wang reported crashes in move_freepages() on a system with the following memory layout [1]:
node 0: [mem 0x0000000080a00000-0x00000000855fffff]
node 0: [mem 0x0000000086a00000-0x0000000087dfffff]
node 0: [mem 0x000000008bd00000-0x000000008c4fffff]
node 0: [mem 0x000000008e300000-0x000000008ecfffff]
node 0: [mem 0x0000000090d00000-0x00000000bfffffff]
node 0: [mem 0x00000000cc000000-0x00000000dc9fffff]
node 0: [mem 0x00000000de700000-0x00000000de9fffff]
node 0: [mem 0x00000000e0800000-0x00000000e0bfffff]
node 0: [mem 0x00000000f4b00000-0x00000000f6ffffff]
node 0: [mem 0x00000000fda00000-0x00000000ffffefff]
These crashes can be mitigated by enabling CONFIG_HOLES_IN_ZONE on ARM, essentially turning pfn_valid_within() into pfn_valid() instead of having it hardwired to 1 on that architecture, but this would require keeping CONFIG_HOLES_IN_ZONE solely for this purpose. A cleaner approach is to update ARM's implementation of pfn_valid() to take into account the rounding of the freed memory map to pageblock boundaries, and make sure it returns true for PFNs that have memory map entries even if there is no physical memory backing those PFNs" Link: https://lore.kernel.org/lkml/2a1592ad-bc9d-4664-fd19-f7448a37edc0@huawei.com [1] * tag 'memblock-v5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock: arm: extend pfn_valid to take into account freed memory map alignment memblock: ensure there is no overflow in memblock_overlaps_region() memblock: align freed memory map on pageblock boundaries with SPARSEMEM memblock: free_unused_memmap: use pageblock units instead of MAX_ORDER
2021-06-30  arm: extend pfn_valid to take into account freed memory map alignment  [Mike Rapoport]
When the unused memory map is freed, the preserved part of the memory map is extended to match pageblock boundaries, because lots of core mm functionality relies on homogeneity of the memory map within pageblock boundaries. Since pfn_valid() is used to check whether there is a valid memory map entry for a PFN, make it return true also for PFNs that have memory map entries even if there is no actual memory populated there. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com> Tested-by: Tony Lindgren <tony@atomide.com>
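The resulting ARM pfn_valid() has roughly this shape (a sketch based on the description above; details may differ from the merged code):

    int pfn_valid(unsigned long pfn)
    {
            phys_addr_t addr = __pfn_to_phys(pfn);
            unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;

            if (__phys_to_pfn(addr) != pfn)
                    return 0;

            /* The freed memory map is rounded to pageblock boundaries, so a
             * map entry exists for any PFN within a pageblock of present
             * memory, even if the PFN itself is not backed by RAM. */
            return memblock_overlaps_region(&memblock.memory,
                                            ALIGN_DOWN(addr, pageblock_size),
                                            pageblock_size);
    }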
2021-06-29  mm: update legacy flush_tlb_* to use vma  [Chen Li]
1. These TLB flush functions switched from taking an mm to taking a vma long ago, but some comments still describe the parameter as an mm. 2. The actual struct we use is vm_area_struct, not vma_struct. 3. Remove the unused flush_kern_tlb_page. Link: https://lkml.kernel.org/r/87k0oaq311.wl-chenli@uniontech.com Signed-off-by: Chen Li <chenli@uniontech.com> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Jonas Bonn <jonas@southpole.se> Cc: Chris Zankel <chris@zankel.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-13  ARM: 9091/1: Revert "mm: qsd8x50: Fix incorrect permission faults"  [Wang Kefeng]
This reverts commit e220ba60223a9d63e70217e5b112160df8c21cea. VERIFY_PERMISSION_FAULT was introduced back in 2009, but nothing has ever used it; revert it and remove the now-unused comment. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-06-13  ARM: 9090/1: Map the lowmem and kernel separately  [Linus Walleij]
Using our knowledge of where the physical kernel sections start and end, we can split the mapping of lowmem and the kernel apart. This is helpful when you want to place the kernel independently from lowmem and not be limited to putting it into lowmem only, but also into places such as the VMALLOC area. We extensively rewrite the lowmem mapping code to account for all cases where the kernel image overlaps with the lowmem in different ways. This is helpful for handling situations which occur when the kernel is loaded in different places, and makes it possible to place the kernel in a more random manner, as is done with e.g. KASLR. We sprinkle some comments with illustrations and pr_debug() over it so it is also very evident to readers what is happening. We now use kernel_sec_start and kernel_sec_end instead of relying on __pa() (virt_to_phys) to provide this. This is helpful if we want to resolve physical-to-virtual and virtual-to-physical mappings at runtime rather than at compile time, especially if we are not using phys-to-virt patching. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
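The shape of the split mapping, sketched under the assumption of a map_region() helper (a hypothetical name standing in for the actual mapping calls):

    extern phys_addr_t kernel_sec_start, kernel_sec_end;

    static void __init map_lowmem_sketch(void)
    {
            phys_addr_t start, end;
            u64 i;

            for_each_mem_range(i, &start, &end) {
                    if (end <= kernel_sec_start || start >= kernel_sec_end) {
                            /* block does not overlap the kernel image */
                            map_region(start, end, MT_MEMORY_RW);
                            continue;
                    }
                    /* block overlaps the kernel: map the lowmem pieces below
                     * and above it, then the kernel sections themselves with
                     * their own (text/rodata) protections */
            }
    }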
2021-06-07  ARM: 9082/1: [v2] mark prepare_page_table as __init  [Arnd Bergmann]
In some configurations when building with gcc-11, prepare_page_table does not get inlined, which causes a build time warning for a section mismatch:
WARNING: modpost: vmlinux.o(.text.unlikely+0xce8): Section mismatch in reference from the function prepare_page_table() to the (unknown reference) .init.data:(unknown)
The function prepare_page_table() references the (unknown reference) __initdata (unknown). This is often because prepare_page_table lacks a __initdata annotation or the annotation of (unknown) is wrong.
Mark the function as __init to avoid the warning regardless of the inlining, and remove the 'inline' keyword. The compiler is free to ignore the 'inline' here, and it doesn't result in better object code or more readable source. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
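The change itself amounts to a one-line signature tweak, sketched:

    static inline void prepare_page_table(void);    /* before: inlining optional */
    static void __init prepare_page_table(void);    /* after: always in .init.text */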
2021-06-07  ARM: use MiB for vmalloc sizes  [Russell King (Oracle)]
Rather than using "m" (which is the unit of metres, or milli), and "MB" in the printk statements, use MiB to make it clear that we are talking about the power-of-2 megabytes, aka mebibytes. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-06-07  ARM: use "* SZ_1M" rather than "<< 20"  [Russell King (Oracle)]
Make the default vmalloc size clearer by using a more natural multiplication by SZ_1M rather than a shift left by 20 bits. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-06-07  ARM: change vmalloc_start to vmalloc_size  [Russell King (Oracle)]
Rather than storing the start of vmalloc space, store the size, and move the calculation into adjust_lowmem_limit(). We now have one single place where this calculation takes place. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-06-07  ARM: change vmalloc_min to vmalloc_start  [Russell King (Oracle)]
Change the current vmalloc_min, which is supposed to be the lowest address of vmalloc space including the VMALLOC_OFFSET, to vmalloc_start which does not include VMALLOC_OFFSET. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-06-07  ARM: use a temporary variable to hold maximum vmalloc size  [Russell King (Oracle)]
We calculate the maximum size of the vmalloc space twice in early_vmalloc(). Use a temporary variable to hold this value. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2021-06-07  ARM: change vmalloc_min to be unsigned long  [Russell King (Oracle)]
vmalloc_min is currently a void pointer, but everywhere it is used contains a cast: either to a void pointer when setting it, or back to an integer type when using it. Eliminate these casts by changing its type to unsigned long. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
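Taken together, the five entries above leave the early parameter handling looking roughly like this (reconstructed from the descriptions; the final code may differ in detail):

    static unsigned long __initdata vmalloc_size = 240 * SZ_1M;

    static int __init early_vmalloc(char *arg)
    {
            unsigned long vmalloc_reserve = memparse(arg, NULL);
            unsigned long vmalloc_max;    /* temporary, computed once */

            if (vmalloc_reserve < SZ_16M) {
                    vmalloc_reserve = SZ_16M;
                    pr_warn("vmalloc area is too small, limiting to %luMiB\n",
                            vmalloc_reserve >> 20);
            }

            vmalloc_max = VMALLOC_END - (PAGE_OFFSET + SZ_32M + VMALLOC_OFFSET);
            if (vmalloc_reserve > vmalloc_max) {
                    vmalloc_reserve = vmalloc_max;
                    pr_warn("vmalloc area is too big, limiting to %luMiB\n",
                            vmalloc_reserve >> 20);
            }

            vmalloc_size = vmalloc_reserve;
            return 0;
    }
    early_param("vmalloc", early_vmalloc);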
2021-05-06  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  [Linus Torvalds]
Pull ARM updates from Russell King: - Fix BSS size calculation for LLVM - Improve robustness of kernel entry around v7_invalidate_l1 - Fix and update kprobes assembly - Correct breakpoint overflow handler check - Pause function graph tracer when suspending a CPU - Switch to generic syscallhdr.sh and syscalltbl.sh - Remove now unused set_kernel_text_r[wo] functions - Updates for ptdump (__init marking and using DEFINE_SHOW_ATTRIBUTE) - Fix for interrupted SMC (secure) calls - Remove Compaq Personal Server platform * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: footbridge: remove personal server platform ARM: 9075/1: kernel: Fix interrupted SMC calls ARM: 9074/1: ptdump: convert to DEFINE_SHOW_ATTRIBUTE ARM: 9073/1: ptdump: add __init section marker to three functions ARM: 9072/1: mm: remove set_kernel_text_r[ow]() ARM: 9067/1: syscalls: switch to generic syscallhdr.sh ARM: 9068/1: syscalls: switch to generic syscalltbl.sh ARM: 9066/1: ftrace: pause/unpause function graph tracer in cpu_suspend() ARM: 9064/1: hw_breakpoint: Do not directly check the event's overflow_handler hook ARM: 9062/1: kprobes: rewrite test-arm.c in UAL ARM: 9061/1: kprobes: fix UNPREDICTABLE warnings ARM: 9060/1: kexec: Remove unused kexec_reinit callback ARM: 9059/1: cache-v7: get rid of mini-stack ARM: 9058/1: cache-v7: refactor v7_invalidate_l1 to avoid clobbering r5/r6 ARM: 9057/1: cache-v7: add missing ISB after cache level selection ARM: 9056/1: decompressor: fix BSS size calculation for LLVM ld.lld
2021-05-04  Merge branch 'stable/for-linus-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb  [Linus Torvalds]
Pull swiotlb updates from Konrad Rzeszutek Wilk: "Christoph Hellwig has taken a cleaver and trimmed off the not-needed code and nicely folded duplicate code in the generic framework. This lays the groundwork for more work to add extra DMA-backend-ish in the future. Along with that some bug-fixes to make this a nice working package" * 'stable/for-linus-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb: swiotlb: don't override user specified size in swiotlb_adjust_size swiotlb: Fix the type of index swiotlb: Make SWIOTLB_NO_FORCE perform no allocation ARM: Qualify enabling of swiotlb_init() swiotlb: remove swiotlb_nr_tbl swiotlb: dynamically allocate io_tlb_default_mem swiotlb: move global variables into a new io_tlb_mem structure xen-swiotlb: remove the unused size argument from xen_swiotlb_fixup xen-swiotlb: split xen_swiotlb_init swiotlb: lift the double initialization protection from xen-swiotlb xen-swiotlb: remove xen_io_tlb_start and xen_io_tlb_nslabs xen-swiotlb: remove xen_set_nslabs xen-swiotlb: use io_tlb_end in xen_swiotlb_dma_supported xen-swiotlb: use is_swiotlb_buffer in is_xen_swiotlb_buffer swiotlb: split swiotlb_tbl_sync_single swiotlb: move orig addr and size validation into swiotlb_bounce swiotlb: remove the alloc_size parameter to swiotlb_tbl_unmap_single powerpc/svm: stop using io_tlb_start
2021-04-30  mm: move mem_init_print_info() into mm_init()  [Kefeng Wang]
mem_init_print_info() is called from mem_init() on each architecture, always with a NULL argument, so make it take no argument and move the call into mm_init(). Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> [x86] Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> [powerpc] Acked-by: David Hildenbrand <david@redhat.com> Tested-by: Anatoly Pugachev <matorola@gmail.com> [sparc64] Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [arm] Acked-by: Mike Rapoport <rppt@linux.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Richard Henderson <rth@twiddle.net> Cc: Guo Ren <guoren@kernel.org> Cc: Yoshinori Sato <ysato@users.osdn.me> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "Peter Zijlstra" <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-04-30  mm: move page_mapping_file to pagemap.h  [Matthew Wilcox (Oracle)]
page_mapping_file() is only used by some architectures, and then it is usually only used in one place. Make it a static inline function so other architectures don't have to carry this dead code. Link: https://lkml.kernel.org/r/20210317123011.350118-1-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Cc: Huang Ying <ying.huang@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
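The helper as it reads after the move (shape per the description; the swap-cache special case is why it cannot simply be page_mapping()):

    static inline struct address_space *page_mapping_file(struct page *page)
    {
            /* Swap cache pages reuse page->mapping for swap bookkeeping,
             * so they have no file mapping to return. */
            if (unlikely(PageSwapCache(page)))
                    return NULL;
            return page_mapping(page);
    }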
2021-04-18  ARM: 9074/1: ptdump: convert to DEFINE_SHOW_ATTRIBUTE  [Jisheng Zhang (syna)]
Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Reviewed-by: Steven Price <steven.price@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
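What the macro buys, sketched (the ptdump_show/ptdump_walk_pgd/ptdump_info names are assumed from arch/arm/mm/dump.c; treat them as illustrative):

    static int ptdump_show(struct seq_file *m, void *v)
    {
            struct ptdump_info *info = m->private;

            ptdump_walk_pgd(m, info);
            return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(ptdump);
    /* expands to the open/read/release boilerplate plus a ptdump_fops,
     * ready for debugfs_create_file(..., &ptdump_fops) */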
2021-04-18  ARM: 9073/1: ptdump: add __init section marker to three functions  [Jisheng Zhang (syna)]
They are not needed after booting, so mark them as __init to move them to the .init section. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Reviewed-by: Steven Price <steven.price@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-04-18  ARM: 9072/1: mm: remove set_kernel_text_r[ow]()  [Jisheng Zhang (syna)]
After commit 5a735583b764 ("arm/ftrace: Use __patch_text()"), the last and only user of these functions has gone, remove them. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-03-25  ARM: 9069/1: NOMMU: Fix conversion for_each_membock() to for_each_mem_range()  [Vladimir Murzin]
for_each_mem_range() uses a loop variable, yet looking into the code it is not just an iteration counter but a more complex entity which encodes information about the memblock. Thus the condition i == 0 looks fragile. Indeed, it broke boot on R-class platforms, since the i == 0 path was never taken (i was set to 1). Fix that by restoring the original flag check. Fixes: b10d6bca8720 ("arch, drivers: replace for_each_membock() with for_each_mem_range()") Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-03-25  ARM: 9063/1: mm: reduce maximum number of CPUs if DEBUG_KMAP_LOCAL is enabled  [Ard Biesheuvel]
The debugging code for kmap_local() doubles the number of per-CPU fixmap slots allocated for kmap_local(), in order to use half of them as guard regions. This causes the fixmap region to grow downwards beyond the start of its reserved window if the supported number of CPUs is large, and collide with the newly added virtual DT mapping right below it, which is obviously not good. One manifestation of this is EFI boot on a kernel built with NR_CPUS=32 and CONFIG_DEBUG_KMAP_LOCAL=y, which may pass the FDT in highmem, resulting in block entries below the fixmap region that the fixmap code misidentifies as fixmap table entries, and subsequently tries to dereference using a phys-to-virt translation that is only valid for lowmem. This results in a cryptic splat such as the one below.
ftrace: allocating 45548 entries in 89 pages
8<--- cut here ---
Unable to handle kernel paging request at virtual address fc6006f0
pgd = (ptrval)
[fc6006f0] *pgd=80000040207003, *pmd=00000000
Internal error: Oops: a06 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 5.11.0+ #382
Hardware name: Generic DT based system
PC is at cpu_ca15_set_pte_ext+0x24/0x30
LR is at __set_fixmap+0xe4/0x118
pc : [<c041ac9c>] lr : [<c04189d8>] psr: 400000d3
sp : c1601ed8 ip : 00400000 fp : 00800000
r10: 0000071f r9 : 00421000 r8 : 00c00000
r7 : 00c00000 r6 : 0000071f r5 : ffade000 r4 : 4040171f
r3 : 00c00000 r2 : 4040171f r1 : c041ac78 r0 : fc6006f0
Flags: nZcv IRQs off FIQs off Mode SVC_32 ISA ARM Segment none
Control: 30c5387d Table: 40203000 DAC: 00000001
Process swapper (pid: 0, stack limit = 0x(ptrval))
So let's limit CONFIG_NR_CPUS to 16 when CONFIG_DEBUG_KMAP_LOCAL=y. Also, fix the BUILD_BUG_ON() check that was supposed to catch this, by checking whether the region grows below the start address rather than above the end address. Fixes: 2a15ba82fa6ca3f3 ("ARM: highmem: Switch to generic kmap atomic") Reported-by: Peter Robinson <pbrobinson@gmail.com> Tested-by: Peter Robinson <pbrobinson@gmail.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-03-19  ARM: Qualify enabling of swiotlb_init()  [Florian Fainelli]
We do not need a SWIOTLB unless we have DRAM that is addressable beyond the arm_dma_limit. Compare max_pfn with arm_dma_pfn_limit to determine whether we do need a SWIOTLB to be initialized. Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
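Roughly the qualifying logic (reconstructed from the description; it lives in ARM's mem_init() and only matters for LPAE builds):

    #ifdef CONFIG_ARM_LPAE
            if (swiotlb_force == SWIOTLB_FORCE ||
                max_pfn > arm_dma_pfn_limit)
                    swiotlb_init(1);                    /* RAM beyond the DMA limit */
            else
                    swiotlb_force = SWIOTLB_NO_FORCE;   /* skip the allocation */
    #endif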
2021-03-09  ARM: 9059/1: cache-v7: get rid of mini-stack  [Ard Biesheuvel]
Now that we have reduced the number of registers that we need to preserve when calling v7_invalidate_l1 from the boot code, we can use scratch registers to preserve the remaining ones, and get rid of the mini stack entirely. This works around any issues regarding cache behavior in relation to the uncached accesses to this memory, which is hard to get right in the general case (i.e., both bare metal and under virtualization). While at it, switch v7_invalidate_l1 to using ip as a scratch register instead of r4. This makes the function AAPCS compliant, and removes the need to stash r4 in ip across the call. Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-03-09  ARM: 9058/1: cache-v7: refactor v7_invalidate_l1 to avoid clobbering r5/r6  [Ard Biesheuvel]
The cache invalidation code in v7_invalidate_l1 can be tweaked to re-read the associativity from CCSIDR, and keep the way identifier component in a single register that is assigned in the outer loop. This way, we need two fewer registers. Given that the number of sets is typically much larger than the associativity, rearrange the code so that the outer loop has the smaller number of iterations, ensuring that the re-read of CCSIDR only occurs a handful of times in practice. Fix the whitespace while at it, and update the comment to indicate that this code is no longer a clone of anything else. Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-03-09  ARM: 9057/1: cache-v7: add missing ISB after cache level selection  [Ard Biesheuvel]
A write to CSSELR needs to complete before its results can be observed via CCSIDR. So add an ISB to ensure that this is the case. Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
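A C rendition of the ordering requirement (the commit itself patches assembly in cache-v7.S; this sketch only illustrates the barrier placement):

    static inline u32 ccsidr_for_level(u32 csselr)
    {
            u32 ccsidr;

            /* select the cache level in CSSELR ... */
            asm volatile("mcr p15, 2, %0, c0, c0, 0" : : "r" (csselr));
            isb();    /* ... and synchronize before CCSIDR can observe it */
            asm volatile("mrc p15, 1, %0, c0, c0, 0" : "=r" (ccsidr));
            return ccsidr;
    }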
2021-02-22  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm  [Linus Torvalds]
Pull ARM updates from Russell King: - Generalise byte swapping assembly - Update debug addresses for STI - Validate start of physical memory with DTB - Do not clear SCTLR.nTLSMD in decompressor - amba/locomo/sa1111 devices remove method return type is void - address markers for KASAN in page table dump * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: 9065/1: OABI compat: fix build when EPOLL is not enabled ARM: 9055/1: mailbox: arm_mhuv2: make remove callback return void amba: Make use of bus_type functions amba: Make the remove callback return void vfio: platform: simplify device removal amba: reorder functions amba: Fix resource leak for drivers without .remove ARM: 9054/1: arch/arm/mm/mmu.c: Remove duplicate header ARM: 9053/1: arm/mm/ptdump:Add address markers for KASAN regions ARM: 9051/1: vdso: remove unneded extra-y addition ARM: 9050/1: Kconfig: Select ARCH_HAVE_NMI_SAFE_CMPXCHG where possible ARM: 9049/1: locomo: make locomo bus's remove callback return void ARM: 9048/1: sa1111: make sa1111 bus's remove callback return void ARM: 9047/1: smp: remove unused variable ARM: 9046/1: decompressor: Do not clear SCTLR.nTLSMD for ARMv7+ cores ARM: 9045/1: uncompress: Validate start of physical memory against passed DTB ARM: 9042/1: debug: no uncompress debugging while semihosting ARM: 9041/1: sti LL_UART: add STiH418 SBC UART0 support ARM: 9040/1: use DEBUG_UART_PHYS and DEBUG_UART_VIRT for sti LL_UART ARM: 9039/1: assembler: generalize byte swapping macro into rev_l
2021-02-01  ARM: 9054/1: arch/arm/mm/mmu.c: Remove duplicate header  [Hailong Liu]
Remove asm/fixmap.h which is included more than once. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Hailong Liu <liu.hailong6@zte.com.cn> Signed-off-by: Hailong Liu <carver4lio@163.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-02-01  ARM: 9053/1: arm/mm/ptdump: Add address markers for KASAN regions  [Hailong Liu]
ARM has recently gained KASAN support, so it is time to add KASAN regions to PTDUMP on ARM. This patch has been tested with QEMU + vexpress-a15, both with and without CONFIG_ARM_LPAE. The result after patching looks like this:
---[ Kasan shadow start ]---
0x6ee00000-0x7af00000 193M RW NX SHD MEM/CACHED/WBWA
0x7b000000-0x7f000000 64M ro NX SHD MEM/CACHED/WBWA
---[ Kasan shadow end ]---
---[ Modules ]---
---[ Kernel Mapping ]---
......
---[ vmalloc() Area ]---
......
---[ vmalloc() End ]---
---[ Fixmap Area ]---
---[ Vectors ]---
......
---[ Vectors End ]---
Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Hailong Liu <liu.hailong6@zte.com.cn> Signed-off-by: Hailong Liu <carver4lio@163.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-01-15  ARM: drop efm32 platform  [Uwe Kleine-König]
I haven't touched this code since it served as the platform to introduce ARMv7-M support to Linux. The only known machine that runs Linux has only 4 MiB of RAM (which originally exists only to hold the display's framebuffer). There are no known users and no further use is foreseeable, so drop the code. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Link: https://lore.kernel.org/r/20210115155130.185010-2-u.kleine-koenig@pengutronix.de Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-12-22  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux  [Linus Torvalds]
Pull ARM updates from Russell King: - Rework phys/virt translation - Add KASan support - Move DT out of linear map region - Use more PC-relative addressing in assembly - Remove FP emulation handling while in kernel mode - Link with '-z norelro' - remove old check for GCC <= 4.2 in ARM unwinder code - disable big endian if using clang's linker * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (46 commits) ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section ARM: 9038/1: Link with '-z norelro' ARM: 9037/1: uncompress: Add OF_DT_MAGIC macro ARM: 9036/1: uncompress: Fix dbgadtb size parameter name ARM: 9035/1: uncompress: Add be32tocpu macro ARM: 9033/1: arm/smp: Drop the macro S(x,s) ARM: 9032/1: arm/mm: Convert PUD level pgtable helper macros into functions ARM: 9031/1: hyp-stub: remove unused .L__boot_cpu_mode_offset symbol ARM: 9044/1: vfp: use undef hook for VFP support detection ARM: 9034/1: __div64_32(): straighten up inline asm constraints ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode ARM: 9029/1: Make iwmmxt.S support Clang's integrated assembler ARM: 9028/1: disable KASAN in call stack capturing routines ARM: 9026/1: unwind: remove old check for GCC <= 4.2 ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN depends on !LD_IS_LLD ARM: 9024/1: Drop useless cast of "u64" to "long long" ARM: 9023/1: Spelling s/mmeory/memory/ ARM: 9022/1: Change arch/arm/lib/mem*.S to use WEAK instead of .weak ARM: kvm: replace open coded VA->PA calculations with adr_l call ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET ...
2020-12-18  Merge tag 'riscv-for-linus-5.11-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  [Linus Torvalds]
Pull RISC-V updates from Palmer Dabbelt: "We have a handful of new kernel features for 5.11: - Support for the contiguous memory allocator. - Support for IRQ Time Accounting - Support for stack tracing - Support for strict /dev/mem - Support for kernel section protection I'm being a bit conservative on the cutoff for this round due to the timing, so this is all the new development I'm going to take for this cycle (even if some of it probably normally would have been OK). There are, however, some fixes on the list that I will likely be sending along either later this week or early next week. There is one issue in here: one of my test configurations (PREEMPT{,_DEBUG}=y) fails to boot on QEMU 5.0.0 (from April) as of the .text.init alignment patch. With any luck we'll sort out the issue, but given how many bugs get fixed all over the place and how unrelated those features seem my guess is that we're just running into something that's been lurking for a while and has already been fixed in the newer QEMU (though I wouldn't be surprised if it's one of these implicit assumptions we have in the boot flow). If it was hardware I'd be strongly inclined to look more closely, but given that users can upgrade their simulators I'm less worried about it" * tag 'riscv-for-linus-5.11-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: arm64: Use the generic devmem_is_allowed() arm: Use the generic devmem_is_allowed() RISC-V: Use the new generic devmem_is_allowed() lib: Add a generic version of devmem_is_allowed() riscv: Fixed kernel test robot warning riscv: kernel: Drop unused clean rule riscv: provide memmove implementation RISC-V: Move dynamic relocation section under __init RISC-V: Protect all kernel sections including init early RISC-V: Align the .init.text section RISC-V: Initialize SBI early riscv: Enable ARCH_STACKWALK riscv: Make stack walk callback consistent with generic code riscv: Cleanup stacktrace riscv: Add HAVE_IRQ_TIME_ACCOUNTING riscv: Enable CMA support riscv: Ignore Image.* and loader.bin riscv: Clean up boot dir riscv: Fix compressed Image formats build RISC-V: Add kernel image sections to the resource tree
2020-12-15  Merge branch 'akpm' (patches from Andrew)  [Linus Torvalds]
Merge misc updates from Andrew Morton: - a few random little subsystems - almost all of the MM patches which are staged ahead of linux-next material. I'll trickle to post-linux-next work in as the dependents get merged up. Subsystems affected by this patch series: kthread, kbuild, ide, ntfs, ocfs2, arch, and mm (slab-generic, slab, slub, dax, debug, pagecache, gup, swap, shmem, memcg, pagemap, mremap, hmm, vmalloc, documentation, kasan, pagealloc, memory-failure, hugetlb, vmscan, z3fold, compaction, oom-kill, migration, cma, page-poison, userfaultfd, zswap, zsmalloc, uaccess, zram, and cleanups). * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (200 commits) mm: cleanup kstrto*() usage mm: fix fall-through warnings for Clang mm: slub: convert sysfs sprintf family to sysfs_emit/sysfs_emit_at mm: shmem: convert shmem_enabled_show to use sysfs_emit_at mm:backing-dev: use sysfs_emit in macro defining functions mm: huge_memory: convert remaining use of sprintf to sysfs_emit and neatening mm: use sysfs_emit for struct kobject * uses mm: fix kernel-doc markups zram: break the strict dependency from lzo zram: add stat to gather incompressible pages since zram set up zram: support page writeback mm/process_vm_access: remove redundant initialization of iov_r mm/zsmalloc.c: rework the list_add code in insert_zspage() mm/zswap: move to use crypto_acomp API for hardware acceleration mm/zswap: fix passing zero to 'PTR_ERR' warning mm/zswap: make struct kernel_param_ops definitions const userfaultfd/selftests: hint the test runner on required privilege userfaultfd/selftests: fix retval check for userfaultfd_open() userfaultfd/selftests: always dump something in modes userfaultfd: selftests: make __{s,u}64 format specifiers portable ...
2020-12-15  arm, arm64: move free_unused_memmap() to generic mm  [Mike Rapoport]
ARM and ARM64 free unused parts of the memory map just before the initialization of the page allocator. To allow holes in the memory map both architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID. Allowing holes in the memory map for FLATMEM may be useful for small machines, such as ARC and m68k and will enable those architectures to cease using DISCONTIGMEM and still support more than one memory bank. Move the functions that free unused memory map to generic mm and enable them in case HAVE_ARCH_PFN_VALID=y. Link: https://lkml.kernel.org/r/20201101170454.9567-10-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Matt Turner <mattst88@gmail.com> Cc: Meelis Roos <mroos@linux.ee> Cc: Michael Schmitz <schmitzmic@gmail.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-14  Merge tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds]
Pull kmap updates from Thomas Gleixner: "The new preemptible kmap_local() implementation: - Consolidate all kmap_atomic() internals into a generic implementation which builds the base for the kmap_local() API and make the kmap_atomic() interface wrappers which handle the disabling/enabling of preemption and pagefaults. - Switch the storage from per-CPU to per task and provide scheduler support for clearing mapping when scheduling out and restoring them when scheduling back in. - Merge the migrate_disable/enable() code, which is also part of the scheduler pull request. This was required to make the kmap_local() interface available which does not disable preemption when a mapping is established. It has to disable migration instead to guarantee that the virtual address of the mapped slot is the same across preemption. - Provide better debug facilities: guard pages and enforced utilization of the mapping mechanics on 64bit systems when the architecture allows it. - Provide the new kmap_local() API which can now be used to cleanup the kmap_atomic() usage sites all over the place. Most of the usage sites do not require the implicit disabling of preemption and pagefaults so the penalty on 64bit and 32bit non-highmem systems is removed and quite some of the code can be simplified. A wholesale conversion is not possible because some usage depends on the implicit side effects and some need to be cleaned up because they work around these side effects. The migrate disable side effect is only effective on highmem systems and when enforced debugging is enabled. On 64bit and 32bit non-highmem systems the overhead is completely avoided" * tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits) ARM: highmem: Fix cache_is_vivt() reference x86/crashdump/32: Simplify copy_oldmem_page() io-mapping: Provide iomap_local variant mm/highmem: Provide kmap_local* sched: highmem: Store local kmaps in task struct x86: Support kmap_local() forced debugging mm/highmem: Provide CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP mm/highmem: Provide and use CONFIG_DEBUG_KMAP_LOCAL microblaze/mm/highmem: Add dropped #ifdef back xtensa/mm/highmem: Make generic kmap_atomic() work correctly mm/highmem: Take kmap_high_get() properly into account highmem: High implementation details and document API Documentation/io-mapping: Remove outdated blurb io-mapping: Cleanup atomic iomap mm/highmem: Remove the old kmap_atomic cruft highmem: Get rid of kmap_types.h xtensa/mm/highmem: Switch to generic kmap atomic sparc/mm/highmem: Switch to generic kmap atomic powerpc/mm/highmem: Switch to generic kmap atomic nds32/mm/highmem: Switch to generic kmap atomic ...
2020-12-11  Add and use a generic version of devmem_is_allowed()  [Palmer Dabbelt]
As part of adding STRICT_DEVMEM support to the RISC-V port, Zong provided an implementation of devmem_is_allowed() that's exactly the same as the version in a handful of other ports. Rather than duplicate code, I've put a generic version of this in lib/ and used it for the RISC-V port. * palmer/generic-devmem: arm64: Use the generic devmem_is_allowed() arm: Use the generic devmem_is_allowed() RISC-V: Use the new generic devmem_is_allowed() lib: Add a generic version of devmem_is_allowed()
2020-12-11  arm: Use the generic devmem_is_allowed()  [Palmer Dabbelt]
This is exactly the same as the arm64 version, which I recently copied into lib/ for use by the RISC-V port. Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
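The generic version reads roughly as follows (shape per the description of the arm64 original it was copied from):

    /* /dev/mem may map pages that are not system RAM, and refuses RAM or
     * any range a driver holds exclusively. */
    int devmem_is_allowed(unsigned long pfn)
    {
            if (iomem_is_exclusive(PFN_PHYS(pfn)))
                    return 0;
            if (!page_is_ram(pfn))
                    return 1;
            return 0;
    }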
2020-12-08  ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN depends on !LD_IS_LLD  [Nick Desaulniers]
LLD does not yet support any big endian architectures. Make this config non-selectable when using LLD until LLD is fixed. Link: https://github.com/ClangBuiltLinux/linux/issues/965 Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-11-06  ARM: highmem: Switch to generic kmap atomic  [Thomas Gleixner]
No reason having the same code in every architecture. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20201103095857.582196476@linutronix.de
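With the generic implementation, the arch no longer provides kmap_atomic(); the common wrapper is essentially this (a sketch, simplified from the generic highmem headers):

    static inline void *kmap_atomic(struct page *page)
    {
            preempt_disable();        /* the historical side effects ... */
            pagefault_disable();
            return __kmap_local_page_prot(page, kmap_prot);  /* ... on top of kmap_local */
    }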
2020-11-04  ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations  [Ard Biesheuvel]
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages") it rounded beginning of each region upwards and end of each region downwards. However, after that commit free_highmem() rounds the beginning and end of each region downwards, and we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. Restore the original rounding of the region boundaries to avoid freeing reserved pages. Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages") Link: https://lore.kernel.org/r/20201029110334.4118-1-ardb@kernel.org/ Link: https://lore.kernel.org/r/20201031094345.6984-1-rppt@kernel.org Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Co-developed-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Max Filippov <jcmvbkbc@gmail.com>
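The restored rounding, sketched (an illustrative loop; the real free_highpages() also clips each range against the lowmem boundary):

    phys_addr_t range_start, range_end;
    u64 i;

    for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
                            &range_start, &range_end, NULL) {
            unsigned long start = PFN_UP(range_start);   /* round start up */
            unsigned long end = PFN_DOWN(range_end);     /* round end down */

            /* only whole pages strictly inside the free region are freed,
             * so a partial page holding a memblock reservation is skipped */
            for (; start < end; start++)
                    free_highmem_page(pfn_to_page(start));
    }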
2020-10-27  ARM: 9016/2: Initialize the mapping of KASan shadow memory  [Linus Walleij]
This patch initializes the KASan shadow region's page table and memory. There are two stages to KASan initialization:
1. At the early boot stage the whole shadow region is mapped to just one physical page (kasan_zero_page). This is done by the function kasan_early_init, which is called by __mmap_switched (arch/arm/kernel/head-common.S).
2. After the call to paging_init, we use kasan_zero_page as the zero shadow for memory that KASan does not need to track, and we allocate new shadow space for the memory that KASan does need to track. This is done by the function kasan_init, which is called by setup_arch.
When using KASan we also need to increase THREAD_SIZE_ORDER from 1 to 2, as the extra calls for shadow memory use quite a bit of stack. As we need to make a temporary copy of the PGD when setting up shadow memory, we create a helpful PGD_SIZE definition for both LPAE and non-LPAE setups. The KASan core code unconditionally calls pud_populate(), so this needs to be changed from BUG() to do {} while (0) when building with KASan enabled. After the initial development by Andrey Ryabinin, several modifications have been made to this code:
Abbott Liu <liuwenliang@huawei.com>:
- Add support for ARM LPAE: if LPAE is enabled, the KASan shadow region's mapping table needs to be copied in the pgd_alloc() function.
- Move kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate and kasan_pgd_populate from the .meminit.text section to the .init.text section. Reported by Florian Fainelli <f.fainelli@gmail.com>.
Linus Walleij <linus.walleij@linaro.org>:
- Drop the custom manipulation of TTBR0 and just use cpu_switch_mm() to switch the pgd table.
- Adapt to handle 4th-level page table folding.
- Rewrite the entire page directory and page entry initialization sequence to be recursive, based on arm64's kasan_init.c.
Ard Biesheuvel <ardb@kernel.org>:
- Necessary underlying fixes.
- Crucial bug fixes to the memory set-up code.
Co-developed-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Co-developed-by: Abbott Liu <liuwenliang@huawei.com> Co-developed-by: Ard Biesheuvel <ardb@kernel.org> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: kasan-dev@googlegroups.com Cc: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q Reported-by: Russell King - ARM Linux <rmk+kernel@armlinux.org.uk> Reported-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: Abbott Liu <liuwenliang@huawei.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
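One of the small enablers mentioned above, sketched for the non-LPAE (2-level) configuration: the KASan core calls pud_populate() unconditionally, so with KASan enabled the stub must compile to nothing rather than BUG():

    /* folded pud level: populating it is a no-op by construction */
    #define pud_populate(mm, pud, pmd)      do { } while (0)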