path: root/arch
2020-12-09  ARM: zynq: Convert at25 binding to new description on zc770-xm013  (Michal Simek)
The commit f8f79fa6bb25 ("dt-bindings: at25: convert the binding document to yaml") converted the binding to YAML, and warnings about 3 deprecated properties popped up. The patch fixes these warnings by converting them to the new binding:
  .../zynq-zc770-xm013.dt.yaml: eeprom@2: 'pagesize' is a required property
  .../zynq-zc770-xm013.dt.yaml: eeprom@2: 'size' is a required property
  .../zynq-zc770-xm013.dt.yaml: eeprom@2: 'address-width' is a required property
  From schema: .../Documentation/devicetree/bindings/eeprom/at25.yaml
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Link: https://lore.kernel.org/r/be2c1125d98386033e182012eb08986924707a76.1606397101.git.michal.simek@xilinx.com
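For context, a hedged sketch of such a conversion on an eeprom@2 node; the numeric values below are illustrative assumptions, not copied from the board file:

        /* Before: deprecated at25-specific properties */
        eeprom@2 {
                compatible = "atmel,at25";
                reg = <2>;
                spi-max-frequency = <1000000>;
                at25,byte-len = <8192>;
                at25,addr-mode = <2>;
                at25,page-size = <32>;
        };

        /* After: generic properties required by at25.yaml */
        eeprom@2 {
                compatible = "atmel,at25";
                reg = <2>;
                spi-max-frequency = <1000000>;
                size = <8192>;
                pagesize = <32>;
                address-width = <16>;
        };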
2020-12-09  ARM: zynq: Fix OCM mapping to be aligned with binding on zc702  (Michal Simek)
The commit f69629919942 ("dt-bindings: sram: Convert SRAM bindings to json-schema") converted the binding to YAML, and some missing required properties started to be reported. Align the node with the binding. The patch fixes these warnings:
  .../zynq-zc702.dt.yaml: sram@fffc0000: '#address-cells' is a required property
  .../zynq-zc702.dt.yaml: sram@fffc0000: '#size-cells' is a required property
  .../zynq-zc702.dt.yaml: sram@fffc0000: 'ranges' is a required property
  From schema: .../Documentation/devicetree/bindings/sram/sram.yaml
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Link: https://lore.kernel.org/r/87c02786ccd8d7827827a9d95a8737bb300caeb0.1606397101.git.michal.simek@xilinx.com
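A minimal sketch of an aligned node; the 64 KiB size and the identity ranges mapping are assumptions for illustration:

        sram@fffc0000 {
                compatible = "mmio-sram";
                reg = <0xfffc0000 0x10000>;
                #address-cells = <1>;
                #size-cells = <1>;
                ranges = <0 0xfffc0000 0x10000>;
        };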
2020-12-09  ARM: zynq: Fix leds subnode name for zc702/zybo-z7  (Michal Simek)
Fix the leds subnode names to match (^led-[0-9a-f]$|led). A similar change has also been done by commit 9a19a39ee48b ("arm64: dts: zynqmp: Fix leds subnode name for zcu100/ultra96 v1"). The patch fixes these warnings:
  .../zynq-zc702.dt.yaml: leds: 'ds23' does not match any of the regexes: '(^led-[0-9a-f]$|led)', 'pinctrl-[0-9]+'
  From schema: .../Documentation/devicetree/bindings/leds/leds-gpio.yaml
  .../zynq-zybo-z7.dt.yaml: gpio-leds: 'ld4' does not match any of the regexes: '(^led-[0-9a-f]$|led)', 'pinctrl-[0-9]+'
  From schema: .../Documentation/devicetree/bindings/leds/leds-gpio.yaml
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Link: https://lore.kernel.org/r/607a66783b129294364abf09a6fc8abd241ff4ee.1606397101.git.michal.simek@xilinx.com
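A hedged before/after sketch for one node; the GPIO phandle and flags are illustrative:

        /* Before: 'ld4' matches neither regex */
        gpio-leds {
                compatible = "gpio-leds";
                ld4 {
                        gpios = <&gpio0 7 0>;
                };
        };

        /* After: any name containing "led" satisfies the schema */
        gpio-leds {
                compatible = "gpio-leds";
                led-ld4 {
                        gpios = <&gpio0 7 0>;
                };
        };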
2020-12-09  ARM: zynq: Rename bus to be aligned with simple-bus yaml  (Michal Simek)
Rename the amba bus node to axi. Based on the Xilinx Zynq TRM (Chapter 5), the chip uses "AXI point-to-point channels for communicating addresses, data, and response transactions between master and slave clients. This ARM AMBA 3.0...". The issue is reported as:
  .. amba: $nodename:0: 'amba' does not match '^([a-z][a-z0-9\\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$'
  From schema: ../github.com/devicetree-org/dt-schema/dtschema/schemas/simple-bus.yaml
A similar change has been done for the Xilinx ZynqMP SoC.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Link: https://lore.kernel.org/r/8a4bc80debfbb79c296e76fc1e4c173e62657286.1606397101.git.michal.simek@xilinx.com
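Conceptually the change is just a node rename; a sketch with the children elided:

        /* Before */
        amba {
                compatible = "simple-bus";
                #address-cells = <1>;
                #size-cells = <1>;
                ranges;
                /* peripheral nodes unchanged */
        };

        /* After: node name now matches the simple-bus.yaml pattern */
        axi {
                compatible = "simple-bus";
                #address-cells = <1>;
                #size-cells = <1>;
                ranges;
                /* peripheral nodes unchanged */
        };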
2020-12-09  ARM: zynq: Fix compatible string for adi,adxl345 chip  (Michal Simek)
The commit e359a29225dd ("dt-bindings: iio: accel: adxl345: switch to YAML bindings") switched the binding to YAML and the following errors popped up:
  ../zynq-zturn-v5.dt.yaml: accelerometer@53: compatible: 'oneOf' conditional failed, one must be fixed:
  ['adi,adxl345', 'adxl345', 'adi,adxl34x', 'adxl34x'] is too long
  Additional items are not allowed ('adi,adxl34x', 'adxl34x' were unexpected)
  Additional items are not allowed ('adxl345', 'adi,adxl34x', 'adxl34x' were unexpected)
  'adi,adxl346' was expected
  'adi,adxl345' was expected
Use only one compatible string to be aligned with the binding.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Link: https://lore.kernel.org/r/a9075ab54df13461380e46d3002302d3672325b5.1606397101.git.michal.simek@xilinx.com
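The fix boils down to a one-property change; a sketch (reg taken from the node name, other properties elided):

        accelerometer@53 {
                /* was: compatible = "adi,adxl345", "adxl345", "adi,adxl34x", "adxl34x"; */
                compatible = "adi,adxl345";
                reg = <0x53>;
        };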
2020-12-09  powerpc/64: irq replay remove decrementer overflow check  (Nicholas Piggin)
This check is a way to catch some cases of decrementer overflow, when the decrementer has underflowed an odd number of times while MSR[EE] was disabled. With a typical small decrementer, a timer that fires when MSR[EE] is disabled will be "lost" if MSR[EE] remains disabled for between 4.3 and 8.6 seconds after the timer expires. In any case, the decrementer interrupt would be taken at 8.6 seconds and the timer would be found at that point. So this check is for catching extreme latency events, and it prevents those latencies from being a further few seconds long. It's not obvious this is a good tradeoff. This is already a watchdog-magnitude event, and that situation is not improved significantly by this check. For large decrementers, it's useless. Therefore remove this check, which avoids a mftb when enabling hard-disabled interrupts (e.g., when enabling after coming from hardware interrupt handlers). Perhaps more importantly, it also removes the clunky MSR[EE] vs PACA_IRQ_HARD_DIS incoherency in soft-interrupt replay, which simplifies the code.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201107014336.2337337-1-npiggin@gmail.com
2020-12-09  powerpc/64s: Remove MSR[ISF] bit  (Nicholas Piggin)
No supported processor implements this mode. Setting the bit in MSR values can be a bit confusing (and would prevent the bit from ever being reused). Remove it.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201106045340.1935841-1-npiggin@gmail.com
2020-12-09  powerpc/64s/iommu: Don't use atomic_ function on atomic64_t type  (Nicholas Piggin)
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201111110723.3148665-3-npiggin@gmail.com
2020-12-09  powerpc/32s: Cleanup around PTE_FLAGS_OFFSET in hash_low.S  (Christophe Leroy)
PTE_FLAGS_OFFSET is defined in asm/page_32.h and used only in hash_low.S, and whether PTE_FLAGS_OFFSET is zero depends on CONFIG_PTE_64BIT. Instead of tests like #if (PTE_FLAGS_OFFSET != 0), use code conditional on CONFIG_PTE_64BIT. Also move the definition of PTE_FLAGS_OFFSET into hash_low.S directly, which improves readability.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f5bc21db7a33dab55924734e6060c2e9daed562e.1606247495.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: In add_hash_page(), calculate VSID later  (Christophe Leroy)
VSID is only needed by create_hpte(). When _PAGE_HASHPTE is already set, add_hash_page() bails out without calling create_hpte() and doesn't need the value of VSID.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3907199974c89b85a3441cf3f528751173b7649c.1606247495.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Remove unused counters incremented by create_hpte()  (Christophe Leroy)
primary_pteg_full and htab_hash_searches are not used. Remove them.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6470ab99e58c84a5445af43ce4d1d772b0dc3e93.1606247495.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/mm: Refactor the floor/ceiling check in hugetlb range freeing functions  (Christophe Leroy)
All hugetlb range freeing functions have a verification like the following, which only differs by the mask used, depending on the page table level:

        start &= MASK;
        if (start < floor)
                return;
        if (ceiling) {
                ceiling &= MASK;
                if (!ceiling)
                        return;
        }
        if (end - 1 > ceiling - 1)
                return;

Refactor that into a helper function which takes the mask as an argument, returning true when [start; end[ is not fully contained inside [floor; ceiling[.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/16a571bb32eb6e8cd44bda484c8d81cd8a25e6d7.1604668827.git.christophe.leroy@csgroup.eu
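A minimal sketch of such a helper, assuming a name like range_is_outside() (the name and exact signature in the actual patch may differ):

        /*
         * Return true when [start; end[ is not fully contained inside
         * [floor; ceiling[ at the granularity given by mask.
         */
        static bool range_is_outside(unsigned long start, unsigned long end,
                                     unsigned long floor, unsigned long ceiling,
                                     unsigned long mask)
        {
                start &= mask;
                if (start < floor)
                        return true;
                if (ceiling) {
                        ceiling &= mask;
                        if (!ceiling)
                                return true;
                }
                return end - 1 > ceiling - 1;
        }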
2020-12-09  powerpc/fault: Perform exception fixup in do_page_fault()  (Christophe Leroy)
Exception fixup doesn't require the heavy full regs saving, so do it from do_page_fault() directly. For that, split bad_page_fault() in two parts. As bad_page_fault() can also be called from other places than handle_page_fault(), it will still perform exception fixup and fall back on __bad_page_fault(). handle_page_fault() directly calls __bad_page_fault(), as the exception fixup will now be done by do_page_fault().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bd07d6fef9237614cd6d318d8f19faeeadaa816b.1607491748.git.christophe.leroy@csgroup.eu
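A hedged sketch of the split (signatures simplified; the exact prototypes in the patch may differ):

        /* Inner part: no fixup attempt, go straight to the error path */
        void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);

        /* Outer part: kept for callers other than handle_page_fault() */
        void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
        {
                const struct exception_table_entry *entry;

                /* Are we prepared to handle this fault? */
                entry = search_exception_tables(regs->nip);
                if (entry) {
                        regs->nip = extable_fixup(entry);
                        return;
                }
                __bad_page_fault(regs, address, sig);
        }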
2020-12-09  powerpc/fault: Avoid heavy search_exception_tables() verification  (Christophe Leroy)
search_exception_tables() is a heavy operation, we have to avoid it. When KUAP is selected, we'll know the fault has been blocked by KUAP. When it is blocked by KUAP, check whether we are in an expected userspace access place. If so, emit a warning to spot that something is going wrong. Otherwise, just remain silent; it will likely Oops soon. When KUAP is not selected, it behaves just as if the address was already in the TLBs and no fault was generated.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9870f01e293a5a76c4f4e4ddd4a6b0f63038c591.1607491748.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/mm: Move the WARN() out of bad_kuap_fault()  (Christophe Leroy)
In order to prepare the removal of calls to search_exception_tables() on the fast path, move the WARN() out of bad_kuap_fault().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9501311014bd6507e04b27a0c3035186ccf65cd5.1607491748.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/fault: Unnest definition of page_fault_is_write() and page_fault_is_bad()  (Christophe Leroy)
To make it more readable, separate page_fault_is_write() and page_fault_is_bad() to avoid several levels of #ifdefs.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6afaac2495248d68f94c438c5ec36b6010931de5.1607491748.git.christophe.leroy@csgroup.eu
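A hedged sketch of the flattened form, where each macro gets its own flat #if ladder; the per-platform bit names here are assumptions illustrating the shape, not copied from the patch:

        #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
        #define page_fault_is_write(__err)      ((__err) & ESR_DST)
        #else
        #define page_fault_is_write(__err)      ((__err) & DSISR_ISSTORE)
        #endif

        #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
        #define page_fault_is_bad(__err)        (0)
        #elif defined(CONFIG_PPC_8xx)
        #define page_fault_is_bad(__err)        ((__err) & DSISR_NOEXEC_OR_G)
        #else
        #define page_fault_is_bad(__err)        ((__err) & DSISR_BAD_FAULT_32S)
        #endif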
2020-12-09  powerpc/mm: sanity_check_fault() should work for all, not only BOOK3S  (Christophe Leroy)
The verification and message introduced by commit 374f3f5979f9 ("powerpc/mm/hash: Handle user access of kernel address gracefully") apply to all platforms; they should not be limited to BOOK3S. Make the BOOK3S version of sanity_check_fault() the one for all, and bail out earlier if not BOOK3S.
Fixes: 374f3f5979f9 ("powerpc/mm/hash: Handle user access of kernel address gracefully")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fe199d5af3578d3bf80035d203a94d742a7a28af.1607491748.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/ppc-opcode: Add PPC_RAW_MFSPR()  (Christophe Leroy)
Add PPC_RAW_MFSPR() to replace open coding done in 8xx-pmu.c.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e281e3a611eead8817c49cf06a60072a021af823.1606231483.git.christophe.leroy@csgroup.eu
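A hedged sketch of what such a raw-encoding macro looks like in ppc-opcode.h style, assuming the existing ___PPC_RT()/__PPC_SPR() field-packing helpers; the actual definition may differ:

        /* mfspr rD, SPR: base opcode with the RT and SPR fields OR-ed in */
        #define PPC_RAW_MFSPR(d, spr)   (0x7c0002a6 | ___PPC_RT(d) | __PPC_SPR(spr))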
2020-12-09  powerpc/8xx: Use SPRN_SPRG_SCRATCH2 in DTLB miss exception  (Christophe Leroy)
Use SPRN_SPRG_SCRATCH2 in DTLB miss exception instead of DAR in order to be similar to ITLB miss exception. This also simplifies mpc8xx_pmu_del().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e3cc8f023ef40e1e8ae144e4dd1330a5ff022528.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/8xx: Use SPRN_SPRG_SCRATCH2 in ITLB miss exception  (Christophe Leroy)
In order to re-enable MMU earlier, ensure ITLB miss exception cannot clobber SPRN_SPRG_SCRATCH0 and SPRN_SPRG_SCRATCH1. Do so by using SPRN_SPRG_SCRATCH2 and SPRN_M_TW instead, like the DTLB miss exception.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/abc78e8e9577d473691ebb9996c6413b37bfd9ca.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/8xx: Simplify INVALIDATE_ADJACENT_PAGES_CPU15  (Christophe Leroy)
We now have r11 available as a scratch register so INVALIDATE_ADJACENT_PAGES_CPU15() can be simplified.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bdafd651b4ac3a851fd09249f5f3699c50da29f2.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/8xx: Always pin kernel text TLB  (Christophe Leroy)
There is no big point in not pinning kernel text anymore, as we can now keep pinned TLBs even with things like DEBUG_PAGEALLOC. Remove CONFIG_PIN_TLB_TEXT, making text pinning unconditional.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Drop ifdef around mmu_pin_tlb() to fix build errors]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/203b89de491e1379f1677a2685211b7c32adfff0.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  Merge remote-tracking branch 'origin/kvm-arm64/psci-relay' into kvmarm-master/next  (Marc Zyngier)
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-12-09  x86/membarrier: Get rid of a dubious optimization  (Andy Lutomirski)
sync_core_before_usermode() had an incorrect optimization. If the kernel returns from an interrupt, it can get to usermode without IRET. It just has to schedule to a different task in the same mm and do SYSRET. Fortunately, there were no callers of sync_core_before_usermode() that could have had in_irq() or in_nmi() equal to true, because it's only ever called from the scheduler. While at it, clarify a related comment.
Fixes: 70216e18e519 ("membarrier: Provide core serializing command, *_SYNC_CORE")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/5afc7632be1422f91eaf7611aaaa1b5b8580a086.1607058304.git.luto@kernel.org
2020-12-09  efi: stub: get rid of efi_get_max_fdt_addr()  (Ard Biesheuvel)
Now that ARM started following the example of arm64 and RISC-V, and no longer imposes any restrictions on the placement of the FDT in memory at boot, we no longer need per-arch implementations of efi_get_max_fdt_addr() to factor out the differences. So get rid of it.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Atish Patra <atish.patra@wdc.com>
Link: https://lore.kernel.org/r/20201029134901.9773-1-ardb@kernel.org
2020-12-09  efi: arm: reduce minimum alignment of uncompressed kernel  (Ard Biesheuvel)
Now that we reduced the minimum relative alignment between PHYS_OFFSET and PAGE_OFFSET to 2 MiB, we can take this into account when allocating memory for the decompressed kernel when booting via EFI. This minimizes the amount of unusable memory we may end up with due to the base of DRAM being occupied by firmware.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-12-09  efi: capsule: clean scatter-gather entries from the D-cache  (Ard Biesheuvel)
Scatter-gather lists passed to UpdateCapsule() should be cleaned from the D-cache to ensure that they are visible to the CPU after a warm reboot before the MMU is enabled. On ARM and arm64 systems, this implies a D-cache clean by virtual address to the point of coherency. However, due to the fact that the firmware itself is not able to map physical addresses back to virtual addresses when running under the OS, this must be done by the caller.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-12-09  powerpc/8xx: DEBUG_PAGEALLOC doesn't require an ITLB miss exception handler  (Christophe Leroy)
Since commit e611939fc8ec ("powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs"), pinned TLBs are no longer invalidated by __kernel_map_pages() when CONFIG_DEBUG_PAGEALLOC is selected. Remove the dependency on CONFIG_DEBUG_PAGEALLOC.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e796c5fcb5898de827c803cf1ab8ba1d7a5d4b76.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/process: Remove target specific __set_dabr()  (Christophe Leroy)
The __set_dabr() variants are simple functions that can be inlined directly inside set_dabr(), using IS_ENABLED() instead of #ifdef.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c10b263668e137236c71d76648b03cf2cd1ee66f.1607076733.git.christophe.leroy@csgroup.eu
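A simplified sketch of the IS_ENABLED() pattern; the SPR writes shown are illustrative of the per-target bodies, not copied from the patch:

        static inline int set_dabr(unsigned long dabr, unsigned long dabrx)
        {
                if (ppc_md.set_dabr)
                        return ppc_md.set_dabr(dabr, dabrx);

                if (IS_ENABLED(CONFIG_PPC_ADV_DEBUG_REGS))
                        mtspr(SPRN_DAC1, dabr);         /* BookE-style debug regs */
                else if (IS_ENABLED(CONFIG_PPC_BOOK3S))
                        mtspr(SPRN_DABR, dabr);         /* classic DABR */
                return 0;
        }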
2020-12-09  powerpc/8xx: Fix early debug when SMC1 is relocated  (Christophe Leroy)
When SMC1 is relocated and early debug is selected, the board hangs in ppc_md.setup_arch(). This is because once the microcode has been loaded and SMC1 relocated, early debug writes go into the weeds. To allow smooth continuation, the SMC1 parameter RAM set up by the bootloader has to be copied into the new location.
Fixes: 43db76f41824 ("powerpc/8xx: Add microcode patch to move SMC parameter RAM.")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b2f71f39eca543f1e4ec06596f09a8b12235c701.1607076683.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Handle PROTFAULT in hash_page() also for CONFIG_PPC_KUAP  (Christophe Leroy)
On 32-bit hash, handling minor protection faults like unsetting the dirty flag is heavy if done from the normal page_fault processing, because it implies a hash table software lookup for flushing the entry, and then a DSI is taken anyway to add the entry back. When KUAP was implemented, as explained in commit a68c31fc01ef ("powerpc/32s: Implement Kernel Userspace Access Protection"), protection faults were diverted from hash_page() because hash_page() was not able to identify a KUAP fault. Implement KUAP verification in hash_page() by clearing write permission when the access is a kernel access and Ks is 1. This works regardless of the address, because kernel segments always have Ks set to 0, while user segments have Ks set to 0 only when kernel write to userspace is granted. Then protection faults can be handled by hash_page() even for KUAP.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8a4ffe4798e9ea32aaaccdf85e411bb1beed3500.1605542955.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Make support for 603 and 604+ selectable  (Christophe Leroy)
book3s/32 has two main families:
  - CPUs with 603 cores that don't have a HASH PTE table and perform SW TLB loading.
  - Other CPUs, based on 604+ cores, that have a HASH PTE table.
This leads to some complex logic and additional code to support both. This makes sense for distribution kernels that aim at running on any CPU, but when you are fine-tuning a kernel for an embedded 603-based board you don't need all the HASH logic. Allow selection of support for each family, in order to opt out unneeded parts of code. At least one must be selected. Note that some of the CPUs supporting HASH also support SW TLB loading; however, it is not supported by the Linux kernel for the time being, because they do not have alternate registers in the TLB miss exception handlers.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8dde0cdb629a71abc29b0d85a52a86e920376cb6.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Regroup 603 based CPUs in cputable  (Christophe Leroy)
In order to selectively build the kernel for 603 SW TLB handling, regroup all 603 based CPUs together.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/45065263fdb9f5cc2a2d210ec2a762ac8bf5b2bc.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Remove CONFIG_PPC_BOOK3S_6xx  (Christophe Leroy)
As the 601 is gone, CONFIG_PPC_BOOK3S_6xx and CONFIG_PPC_BOOK3S_32 are redundant. Remove CONFIG_PPC_BOOK3S_6xx.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f18c16af37f6f77b577bed8d9e12831b695617ae.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Move early_mmu_init() into mmu.c  (Christophe Leroy)
early_mmu_init() is independent of MMU type and not directly linked to tlb handling. In a following patch, tlb.c will be restricted to HASH mmu. Move early_mmu_init() into mmu.c, which is common.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e51b5e2fe6bca623b33116403043d3a1b5eaf826.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Inline flush_hash_entry()  (Christophe Leroy)
flush_hash_entry() is a simple function calling flush_hash_pages() if it's a hash MMU, or doing nothing otherwise. Inline it, and also use it in __ptep_test_and_clear_young().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9af895be7d4b404d40e749a2659552fd138e62c4.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Inline tlb_flush()  (Christophe Leroy)
On book3s/32, tlb_flush() does nothing when the CPU has a hash table; it calls _tlbia() otherwise. Inline it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ebc933d1c530a19ef3cf7983f6ae94814f6e92ac.1603348103.git.christophe.leroy@csgroup.eu
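A minimal sketch of the resulting inline, assuming the usual mmu_gather signature:

        static inline void tlb_flush(struct mmu_gather *tlb)
        {
                /*
                 * Hash CPUs flush entries as PTEs are cleared, so only
                 * 603-style cores (no hash table) need a full flush here.
                 */
                if (!mmu_has_feature(MMU_FTR_HPTE_TABLE))
                        _tlbia();
        }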
2020-12-09  powerpc/32s: Split and inline flush_range()  (Christophe Leroy)
flush_range() handles both the MMU_FTR_HPTE_TABLE case and the other case. The non-MMU_FTR_HPTE_TABLE case is trivial, as it is only a call to _tlbie()/_tlbia(), which is not worth a dedicated function. Make flush_range() hash specific and call it from tlbflush.h based on mmu_has_feature(MMU_FTR_HPTE_TABLE).
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/132ab19aae52abc8e06ab524ec86d4229b5b9c3d.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Inline flush_tlb_range() and flush_tlb_kernel_range()  (Christophe Leroy)
flush_tlb_range() and flush_tlb_kernel_range() are trivial calls to flush_range(). Make flush_range() global and inline flush_tlb_range() and flush_tlb_kernel_range().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c7029a78e78709ad9272d7a44260e06b649169b2.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Split and inline flush_tlb_mm() and flush_tlb_page()  (Christophe Leroy)
flush_tlb_mm() and flush_tlb_page() handle both the MMU_FTR_HPTE_TABLE case and the other case. The non-MMU_FTR_HPTE_TABLE case is trivial, as it is only a call to _tlbie()/_tlbia(), which is not worth a dedicated function. Make flush_tlb_mm() and flush_tlb_page() hash specific and call them from tlbflush.h based on mmu_has_feature(MMU_FTR_HPTE_TABLE).
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/11e932ded41ba6d9b251d89b7afa33cc060d3aa4.1603348103.git.christophe.leroy@csgroup.eu
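A hedged sketch of the tlbflush.h dispatch; the hash__ prefix for the hash-specific variant is an assumption:

        static inline void flush_tlb_page(struct vm_area_struct *vma,
                                          unsigned long vmaddr)
        {
                if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
                        hash__flush_tlb_page(vma, vmaddr);      /* hash PTE flush */
                else
                        _tlbie(vmaddr);                         /* 603: single tlbie */
        }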
2020-12-09  powerpc/32s: Move _tlbie() and _tlbia() in a new file  (Christophe Leroy)
_tlbie() and _tlbia() are used only on 603 cores, while the other functions are used only on cores having a hash table. Move them into a new file named nohash_low.S. As the mmu_hash_lock var is used by both, it needs to go in a common file.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9a265b1b17a64153463d361280cb4b43eb1266a4.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Inline _tlbie() on non SMP  (Christophe Leroy)
On non-SMP, _tlbie() is just a tlbie plus a sync instruction. Make it static inline.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/475136425541db5c7c8a0395d19d400525b251bc.1603348103.git.christophe.leroy@csgroup.eu
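A minimal sketch of the non-SMP inline, directly following the description above (the exact asm constraints in the patch may differ):

        static inline void _tlbie(unsigned long addr)
        {
                /* invalidate the TLB entry for addr, then order the update */
                asm volatile ("tlbie %0; sync" : : "r" (addr) : "memory");
        }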
2020-12-09  powerpc/32s: Move _tlbie() and _tlbia() prototypes to tlbflush.h  (Christophe Leroy)
In order to use _tlbie() and _tlbia() directly from asm/book3s/32/tlbflush.h, move their prototypes from mm/mmu_decl.h to there.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/867587af929973ad65f8ef6972f2474a80c1737a.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Declare Hash related vars as __initdata  (Christophe Leroy)
Hash related vars are used at init only. Declare them as __initdata.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3878ea30706839fcff9196790ff3f99c128c3f6a.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Make Hash var static  (Christophe Leroy)
The Hash var is now used only locally in mmu.c, so there is no need to set it in head_32.S anymore. Make it static and initialise it to the early hash table.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/786c82a89cdfdaabb32b72a44f7c312fa81d192b.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Use mmu_has_feature(MMU_FTR_HPTE_TABLE) instead of checking Hash var  (Christophe Leroy)
We now have an early hash table on hash MMU, so there is no need to check the Hash var to know whether the hash table is set up or not. Use mmu_has_feature(MMU_FTR_HPTE_TABLE) instead. This will allow optimisation via jump_label.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f1766631a9e014b6433f1a3c12c726ddfce34220.1603348103.git.christophe.leroy@csgroup.eu
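A before/after sketch of a typical call site:

        /* Before: bail out when no hash table has been set up */
        if (!Hash)
                return;

        /* After: same test, but patchable via jump_label */
        if (!mmu_has_feature(MMU_FTR_HPTE_TABLE))
                return;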
2020-12-09  powerpc/32s: Make bat_addrs[] static  (Christophe Leroy)
This table is used only locally. Declare it static.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/054fec0c139fc4c0a306360b360784733c0a6e65.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/mm: Remove flush_tlb_page_nohash() prototype.  (Christophe Leroy)
flush_tlb_page_nohash() was removed by commit 703b41ad1a87 ("powerpc/mm: remove flush_tlb_page_nohash"). Remove the stale prototype and comment.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4a58831da6d6ba4fe309b94aa1dd8f02982d46b2.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/mm: Add mask of always present MMU features  (Christophe Leroy)
On the same principle as commit 773edeadf672 ("powerpc/mm: Add mask of possible MMU features"), add a mask for MMU features that are always present, in order to optimise out dead branches.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4943775fbe91885eb3e09133b093aaf62e55c715.1603348103.git.christophe.leroy@csgroup.eu
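A hedged sketch of how an always-present mask short-circuits the feature test; MMU_FTRS_ALWAYS is the assumed name, mirroring MMU_FTRS_POSSIBLE from commit 773edeadf672, and the real helper may differ:

        static inline bool mmu_has_feature(unsigned long feature)
        {
                /* Always-present features constant-fold to true */
                if (MMU_FTRS_ALWAYS & feature)
                        return true;
                /* Never-possible features constant-fold to false */
                if (!(MMU_FTRS_POSSIBLE & feature))
                        return false;
                return !!(cur_cpu_spec->mmu_features & feature);
        }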
2020-12-09  powerpc/rtas: Fix typo of ibm,open-errinjct in RTAS filter  (Tyrel Datwyler)
Commit bd59380c5ba4 ("powerpc/rtas: Restrict RTAS requests from userspace") introduced the following error when invoking the errinjct userspace tool:

        [root@ltcalpine2-lp5 librtas]# errinjct open
        [327884.071171] sys_rtas: RTAS call blocked - exploit attempt?
        [327884.071186] sys_rtas: token=0x26, nargs=0 (called by errinjct)
        errinjct: Could not open RTAS error injection facility
        errinjct: librtas: open: Unexpected I/O error

The entry for ibm,open-errinjct in the rtas_filter array has a typo where the "j" is omitted in the RTAS call name. After fixing this typo, the errinjct tool functions again as expected:

        [root@ltcalpine2-lp5 linux]# errinjct open
        RTAS error injection facility open, token = 1

Fixes: bd59380c5ba4 ("powerpc/rtas: Restrict RTAS requests from userspace")
Cc: stable@vger.kernel.org
Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201208195434.8289-1-tyreld@linux.ibm.com
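The fix itself is a single missing character in the filter table; a sketch in which the struct layout and the -1 sentinel fields are assumptions, only the name string matters:

        static struct rtas_filter rtas_filters[] __ro_after_init = {
                /* ... */
                /* before: { "ibm,open-errinct", -1, -1, -1, -1 },  note the missing 'j' */
                { "ibm,open-errinjct", -1, -1, -1, -1 },
                /* ... */
        };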