2020-02-01  Merge branch 'topic/user-access-begin' into next  (Michael Ellerman)

Merge the user_access_begin() series from Christophe. This is based on a commit from Linus that went into v5.5-rc7.

2020-01-31  powerpc: configs: Cleanup old Kconfig options  (Krzysztof Kozlowski)

CONFIG_ENABLE_WARN_DEPRECATED is gone since commit 771c035372a0 ("deprecate the '__deprecated' attribute warnings entirely and for good"). CONFIG_IOSCHED_DEADLINE and CONFIG_IOSCHED_CFQ are gone since commit f382fb0bcef4 ("block: remove legacy IO schedulers"). IOSCHED_DEADLINE was replaced by MQ_IOSCHED_DEADLINE, which will now be enabled by default (along with MQ_IOSCHED_KYBER).

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200130195223.3843-1-krzk@kernel.org

2020-01-31  powerpc/configs/skiroot: Enable some more hardening options  (Michael Ellerman)

Enable more hardening options. Note BUG_ON_DATA_CORRUPTION selects DEBUG_LIST and is essentially just a synonym for it.

DEBUG_SG, DEBUG_NOTIFIERS, DEBUG_LIST, DEBUG_CREDENTIALS and SCHED_STACK_END_CHECK should all be low overhead and just add a few extra checks.

SLAB_FREELIST_RANDOM and SLUB_DEBUG_ON will add some overhead to the SLAB allocator, but nothing that should be meaningful for skiroot.

Unselecting SLAB_MERGE_DEFAULT causes the SLAB to use more memory, but the skiroot kernel shouldn't be memory constrained on any of our systems; all it does is run a small bootloader. Disabling merging has some security/robustness benefit, as it means a use-after-free or overflow will be limited to the objects in that slab, rather than potentially affecting objects from unrelated slabs that have been merged. Note also that slab merging is disabled anyway by enabling SLUB_DEBUG_ON, because of the SLAB_NEVER_MERGE mask.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-9-mpe@ellerman.id.au

2020-01-31  powerpc/configs/skiroot: Disable xmon default & enable reboot on panic  (Michael Ellerman)

If the skiroot kernel crashes we don't want it sitting at an xmon prompt forever. Instead it's more helpful to reboot and bring the boot loader back up, and if the crash was transient we can then boot successfully. Similarly if we panic we should reboot, with a short timeout in case someone is watching the console.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-8-mpe@ellerman.id.au

2020-01-31  powerpc/configs/skiroot: Enable security features  (Joel Stanley)

This turns on HARDENED_USERCOPY with HARDENED_USERCOPY_PAGESPAN, and FORTIFY_SOURCE. It also enables SECURITY_LOCKDOWN_LSM with the _EARLY and LOCK_DOWN_KERNEL_FORCE_INTEGRITY options enabled. This still allows xmon to be used in read-only mode. MODULE_SIG is selected by lockdown, so it is still enabled. Because we're setting LOCK_DOWN_KERNEL_FORCE_INTEGRITY=y we also need to enable KEXEC_FILE=y so that kexec continues to work.

Signed-off-by: Joel Stanley <joel@jms.id.au>
[mpe: Switch to lockdown integrity mode per oohal, enable KEXEC_FILE as reported by jms]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-7-mpe@ellerman.id.au

2020-01-31  powerpc/configs/skiroot: Update for symbol movement only  (Michael Ellerman)

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-6-mpe@ellerman.id.au

2020-01-31  powerpc/configs/skiroot: Drop default n CONFIG_CRYPTO_ECHAINIV  (Michael Ellerman)

It's default n so we don't need to disable it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-5-mpe@ellerman.id.au

2020-01-31  powerpc/configs/skiroot: Drop HID_LOGITECH  (Michael Ellerman)

Commit bdd08fff4915 ("HID: logitech: Add depends on LEDS_CLASS to Logitech Kconfig entry") made HID_LOGITECH depend on LEDS_CLASS, which we do not enable, meaning we are not actually enabling those drivers any more. The Kconfig help text suggests USB HID compliant Logitech devices will continue to work without HID_LOGITECH, so just drop it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-4-mpe@ellerman.id.au

2020-01-31  powerpc/configs: Drop NET_VENDOR_HP which moved to staging  (Michael Ellerman)

The HP network driver moved to staging in commit 52340b82cf1a ("hp100: Move 100BaseVG AnyLAN driver to staging") meaning we don't need to disable it any more in our defconfigs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-3-mpe@ellerman.id.au

2020-01-31  powerpc/configs: NET_CADENCE became NET_VENDOR_CADENCE  (Michael Ellerman)

The NET_CADENCE symbol was renamed to NET_VENDOR_CADENCE, so we don't need to disable the former, see commit 0df5f81c481e ("net: ethernet: Add missing VENDOR to Cadence and Packet Engines symbols").

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-2-mpe@ellerman.id.au

2020-01-31  powerpc/configs: Drop CONFIG_QLGE which moved to staging  (Michael Ellerman)

The QLGE driver moved to staging in commit 955315b0dc8c ("qlge: Move drivers/net/ethernet/qlogic/qlge/ to drivers/staging/qlge/"), meaning our defconfigs that enable it have no effect as we don't enable CONFIG_STAGING. It sounds like the device is obsolete, so drop the driver.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Joel Stanley <joel@jms.id.au>
Link: https://lore.kernel.org/r/20200121043000.16212-1-mpe@ellerman.id.au

2020-01-31  powerpc: Do not consider weak unresolved symbol relocations as bad  (Alexandre Ghiti)

Commit 8580ac9404f6 ("bpf: Process in-kernel BTF") introduced two weak symbols that may be unresolved at link time, which results in an absolute relocation to 0. relocs_check.sh emits the following warning:

WARNING: 2 bad relocations
c000000001a41478 R_PPC64_ADDR64 _binary__btf_vmlinux_bin_start
c000000001a41480 R_PPC64_ADDR64 _binary__btf_vmlinux_bin_end

whereas those relocations are legitimate even for a relocatable kernel compiled with the -pie option.

relocs_check.sh already excluded some weak unresolved symbols explicitly: remove those hardcoded symbols and add some logic that parses the symbols using nm, retrieves all the weak unresolved symbols, and excludes those from the list of potential bad relocations.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200118170335.21440-1-alex@ghiti.fr

2020-01-30  powerpc/32s: Fix kasan_early_hash_table() for CONFIG_VMAP_STACK  (Christophe Leroy)

On book3s/32 CPUs that handle the MMU through a hash table, the MMU_init_hw() function was adapted for VMAP_STACK in order to handle virtual addresses instead of physical addresses in the low level hash functions. When using KASAN, the same adaptations are required for the early hash table set up by the kasan_early_hash_table() function.

Fixes: cd08f109e262 ("powerpc/32s: Enable CONFIG_VMAP_STACK")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fc8390a33c2a470105f01abbcbdc7916c30c0a54.1580301269.git.christophe.leroy@c-s.fr

2020-01-30  powerpc: indent to improve Kconfig readability  (Randy Dunlap)

Indent a Kconfig continuation line to improve readability.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ff8729c1-3a4b-c720-48ba-a1a42b0ef892@infradead.org

2020-01-29  powerpc: Provide initial documentation for PAPR hcalls  (Vaibhav Jain)

This doc patch provides an initial description of the hcall op-codes that are used by a Linux kernel running as a guest (LPAR) on top of PowerVM or any other sPAPR compliant hypervisor (e.g. qemu). Apart from documenting the hcalls, the patch also provides a rudimentary overview of the hcall ABI, how hcalls are issued from within the Linux kernel, and how information/control flows between the guest and hypervisor.

Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Add SPDX tag, add it to index.rst]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190828082729.16695-1-vaibhav@linux.ibm.com

2020-01-28  powerpc: Implement user_access_save() and user_access_restore()  (Christophe Leroy)

Implement user_access_save() and user_access_restore().

On 8xx and radix:
- On save, get the value of the associated special register then prevent user access.
- On restore, set back the saved value to the associated special register.

On book3s/32:
- On save, get the value stored in current->thread.kuap and prevent user access.
- On restore, regenerate the address range from the stored value and reopen read/write access for that range.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/54f2f74938006b33c55a416674807b42ef222068.1579866752.git.christophe.leroy@c-s.fr

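As a rough illustration, this is how the save/restore pair is typically used by generic code (a hedged sketch, assuming kernel context via <linux/uaccess.h>; do_nested_work() is a hypothetical placeholder, not code from the patch):

#include <linux/uaccess.h>

extern void do_nested_work(void);	/* hypothetical */

static void fixup_with_user_access_closed(void)
{
	unsigned long flags;

	flags = user_access_save();	/* close any open user window, remember its state */
	do_nested_work();		/* work that must not run with user access open */
	user_access_restore(flags);	/* reopen exactly what was open before */
}
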
2020-01-28  powerpc: Implement user_access_begin and friends  (Christophe Leroy)

Today, when a function like strncpy_from_user() is called, the userspace access protection is de-activated and re-activated for every word read. By implementing user_access_begin and friends, the protection is de-activated at the beginning of the copy and re-activated at the end.

Implement user_access_begin(), user_access_end(), unsafe_get_user(), unsafe_put_user() and unsafe_copy_to_user(). For the time being, we keep user_access_save() and user_access_restore() as nops.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/36d4fbf9e56a75994aca4ee2214c77b26a5a8d35.1579866752.git.christophe.leroy@c-s.fr

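To make the batching concrete, here is a hedged sketch of the calling convention these primitives enable (kernel context assumed; put_two_words() is a hypothetical helper, not code from this patch):

#include <linux/uaccess.h>

/* Two user stores under a single open/close pair instead of two. */
static long put_two_words(u64 __user *p, u64 a, u64 b)
{
	if (!user_access_begin(p, 2 * sizeof(u64)))
		return -EFAULT;
	unsafe_put_user(a, p, efault);		/* no per-access open/close */
	unsafe_put_user(b, p + 1, efault);	/* faults jump to the label */
	user_access_end();
	return 0;
efault:
	user_access_end();
	return -EFAULT;
}
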
2020-01-28  powerpc/32s: Prepare prevent_user_access() for user_access_end()  (Christophe Leroy)

In preparation of implementing user_access_begin and friends on powerpc, the book3s/32 version of prevent_user_access() needs to be prepared for user_access_end().

user_access_end() doesn't provide the address and size which were passed to user_access_begin(), required by prevent_user_access() to know which segment to modify.

The list of segments which were unprotected by allow_user_access() is available in current->kuap. But we don't want prevent_user_access() to read this all the time, especially every time it is 0 (for instance because the access was not a write access).

Implement a special direction named KUAP_CURRENT. In this case only, the addr and end are retrieved from current->kuap.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/55bcc1f25d8200892a31f67a0b024ff3b816c3cc.1579866752.git.christophe.leroy@c-s.fr

2020-01-28  powerpc/32s: Drop NULL addr verification  (Christophe Leroy)

A NULL addr is a user address. Don't waste time checking it. If someone tries to access it, it will segfault the same way as for address 1, so there is no need to make it special.

The special case is when not doing a write; in that case we want to drop the entire function. This is now handled by the 'dir' param and not by the nullity of 'to' anymore.

Also make the beginning of prevent_user_access() similar to the beginning of allow_user_access(), and tell the compiler that writing in kernel space or with a 0 length is unlikely.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/85e971223dfe6ace734637db1841678939a76155.1579866752.git.christophe.leroy@c-s.fr

2020-01-28  powerpc/kuap: Fix set direction in allow/prevent_user_access()  (Christophe Leroy)

__builtin_constant_p() always returns 0 for pointers, so on RADIX we always end up opening both directions (by writing 0 in SPR29):

0000000000000170 <._copy_to_user>:
...
 1b0: 4c 00 01 2c  isync
 1b4: 39 20 00 00  li r9,0
 1b8: 7d 3d 03 a6  mtspr 29,r9
 1bc: 4c 00 01 2c  isync
 1c0: 48 00 00 01  bl 1c0 <._copy_to_user+0x50>
      1c0: R_PPC64_REL24 .__copy_tofrom_user
...

0000000000000220 <._copy_from_user>:
...
 2ac: 4c 00 01 2c  isync
 2b0: 39 20 00 00  li r9,0
 2b4: 7d 3d 03 a6  mtspr 29,r9
 2b8: 4c 00 01 2c  isync
 2bc: 7f c5 f3 78  mr r5,r30
 2c0: 7f 83 e3 78  mr r3,r28
 2c4: 48 00 00 01  bl 2c4 <._copy_from_user+0xa4>
      2c4: R_PPC64_REL24 .__copy_tofrom_user
...

Use an explicit parameter for direction selection, so that GCC is able to see it is a constant:

00000000000001b0 <._copy_to_user>:
...
 1f0: 4c 00 01 2c  isync
 1f4: 3d 20 40 00  lis r9,16384
 1f8: 79 29 07 c6  rldicr r9,r9,32,31
 1fc: 7d 3d 03 a6  mtspr 29,r9
 200: 4c 00 01 2c  isync
 204: 48 00 00 01  bl 204 <._copy_to_user+0x54>
      204: R_PPC64_REL24 .__copy_tofrom_user
...

0000000000000260 <._copy_from_user>:
...
 2ec: 4c 00 01 2c  isync
 2f0: 39 20 ff ff  li r9,-1
 2f4: 79 29 00 04  rldicr r9,r9,0,0
 2f8: 7d 3d 03 a6  mtspr 29,r9
 2fc: 4c 00 01 2c  isync
 300: 7f c5 f3 78  mr r5,r30
 304: 7f 83 e3 78  mr r3,r28
 308: 48 00 00 01  bl 308 <._copy_from_user+0xa8>
      308: R_PPC64_REL24 .__copy_tofrom_user
...

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Spell out the directions, s/KUAP_R/KUAP_READ/ etc.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f4e88ec4941d5facb35ce75026b0112f980086c3.1579866752.git.christophe.leroy@c-s.fr

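The compiler behaviour behind this is easy to demonstrate in isolation. A hedged user-space sketch (the AMR values and names are made up for illustration; this is not the kernel implementation):

#define AMR_UNLOCK_BOTH	0x0UL	/* made-up values, illustrative only */
#define AMR_UNLOCK_READ	0x1UL

enum kuap_dir { KUAP_READ, KUAP_WRITE, KUAP_READ_WRITE };

static inline unsigned long amr_from_ptr(const void *to)
{
	/* 'to' is a runtime pointer: __builtin_constant_p(to) folds to 0,
	 * so the cheaper read-only case below is never selected. */
	if (__builtin_constant_p(to) && !to)
		return AMR_UNLOCK_READ;
	return AMR_UNLOCK_BOTH;
}

static inline unsigned long amr_from_dir(enum kuap_dir dir)
{
	/* callers pass a literal, so the compiler folds this branch away */
	return dir == KUAP_READ ? AMR_UNLOCK_READ : AMR_UNLOCK_BOTH;
}
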
2020-01-28  powerpc/32s: Fix bad_kuap_fault()  (Christophe Leroy)

At the moment, bad_kuap_fault() reports a fault only if a bad access to userspace occurred while access to userspace was not granted. But if a fault occurs for a write outside the allowed userspace segment(s) that have been unlocked, bad_kuap_fault() fails to detect it and the kernel loops forever in do_page_fault(). Fix it by checking that the accessed address is within the allowed range.

Fixes: a68c31fc01ef ("powerpc/32s: Implement Kernel Userspace Access Protection")
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f48244e9485ada0a304ed33ccbb8da271180c80d.1579866752.git.christophe.leroy@c-s.fr

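A hedged sketch of the shape of the added check (illustrative only; the real book3s/32 code derives the range from the packed current->thread.kuap value rather than taking it as parameters):

/* Report a bad write only when the address falls outside the
 * user range that allow_user_access() actually unlocked. */
static bool kuap_fault_is_bad(unsigned long addr, bool is_write,
			      unsigned long begin, unsigned long end)
{
	if (!is_write)
		return false;
	return addr < begin || addr >= end;	/* outside the unlocked range */
}
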
2020-01-28  powerpc/32s: Fix CPU wake-up from sleep mode  (Christophe Leroy)

Commit f7354ccac844 ("powerpc/32: Remove CURRENT_THREAD_INFO and rename TI_CPU") broke the CPU wake-up from sleep mode (i.e. when _TLF_SLEEPING is set) by delaying the tovirt(r2, r2). This is because r2 is not restored by fast_exception_return. It used to work (by chance?) because the CPU wake-up interrupt never comes from user, so r2 is expected to point to 'current' on return.

Commit e2fb9f544431 ("powerpc/32: Prepare for Kernel Userspace Access Protection") broke it even more by clobbering r0, which is not restored by fast_exception_return either.

Use r6 instead of r0. This is possible because r3-r6 are restored by fast_exception_return and only r3-r5 are used for exception arguments.

For r2 it could be converted back to a virtual address, but stay on the safe side and restore it from the stack instead. It should be live in the cache at that moment, so loading from the stack should make no difference compared to converting it from phys to virt.

Fixes: f7354ccac844 ("powerpc/32: Remove CURRENT_THREAD_INFO and rename TI_CPU")
Fixes: e2fb9f544431 ("powerpc/32: Prepare for Kernel Userspace Access Protection")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6d02c3ae6ad77af34392e98117e44c2bf6d13ba1.1580121710.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/32: Reuse orphaned memblocks in kasan_init_shadow_page_tables()  (Christophe Leroy)

If concurrent PMD population has happened, re-use orphaned memblocks.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b29ffffb9206dc14541fa420c17604240728041b.1579024426.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/32: Simplify KASAN init  (Christophe Leroy)

Since kasan_init_region() is not used anymore for modules, KASAN init is done while slab_is_available() is false.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/84b27bf08b41c8343efd88e10f2eccd8e9f85593.1579024426.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/32: Force KASAN_VMALLOC for modules  (Christophe Leroy)

Unloading/reloading of modules seems to fail without KASAN_VMALLOC but works properly with it. Force selection of KASAN_VMALLOC when MODULES are selected, and drop module_alloc() which was dedicated to KASAN for modules.

Reported-by: <erhard_f@mailbox.org>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=205283
Link: https://lore.kernel.org/r/f909da11aecb59ab7f32ba01fae6f356eaa4d7bc.1579024426.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/kconfig: Move CONFIG_PPC32 into Kconfig.cputype  (Christophe Leroy)

Move CONFIG_PPC32 to the same place as CONFIG_PPC64 for consistency.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6f28085c2a1aa987093d50db17586633bbf8e206.1579024426.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/32: Add support of KASAN_VMALLOC  (Christophe Leroy)

Add support of KASAN_VMALLOC on PPC32. To allow this, the early shadow covering the VMALLOC space needs to be removed once the high_memory var is set and before freeing memblock. And the VMALLOC area needs to be aligned such that boundaries are covered by a full shadow page.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/031dec5487bde9b2181c8b3c9800e1879cf98c1a.1579024426.git.christophe.leroy@c-s.fr

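The alignment constraint comes from the shadow scale: one shadow byte covers (1 << KASAN_SHADOW_SCALE_SHIFT) = 8 bytes, so one full shadow page covers PAGE_SIZE << 3 bytes of address space. A hedged sketch of the arithmetic (standard kernel macros assumed; not the actual patch code):

/* Round the VMALLOC boundaries so each one is covered by a whole
 * shadow page, letting the early shadow be replaced page by page. */
#define KASAN_SHADOW_GRANULE	(PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)

unsigned long vstart = ALIGN_DOWN(VMALLOC_START, KASAN_SHADOW_GRANULE);
unsigned long vend   = ALIGN(VMALLOC_END, KASAN_SHADOW_GRANULE);
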
2020-01-27  powerpc/mm: Don't log user reads to 0xffffffff  (Christophe Leroy)

Running vdsotest leaves the following log message many times:

[   79.629901] vdsotest[396]: User access of kernel address (ffffffff) - exploit attempt? (uid: 0)

A pointer set to (-1) is likely a programming error similar to a NULL pointer and is not worth logging as an exploit attempt. Don't log user accesses to 0xffffffff.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0728849e826ba16f1fbd6fa7f5c6cc87bd64e097.1577087627.git.christophe.leroy@c-s.fr

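A hedged sketch of the filter this describes (the real check lives in the powerpc fault path; the helper name here is illustrative):

/* An all-ones pointer is a programming error akin to NULL:
 * a read from it is not worth reporting as an exploit attempt. */
static bool worth_logging(unsigned long address, bool is_write)
{
	return is_write || address != (unsigned long)-1;
}
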
2020-01-27  powerpc/32s: Enable CONFIG_VMAP_STACK  (Christophe Leroy)

A few changes to retrieve DAR and DSISR from struct regs instead of retrieving them directly, as they may have changed due to a TLB miss. Also modifies hash_page() and friends to work with virtual data addresses instead of physical ones. The same is done for load_up_fpu() and load_up_altivec().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Fix tovirt_vmstack call in head_32.S to fix CHRP build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2e2509a242fd5f3e23df4a06530c18060c4d321e.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/32s: Avoid crossing page boundary while changing SRR0/1.  (Christophe Leroy)

Trying VMAP_STACK with KVM, vmlinux was not starting. This was due to SRR0 and SRR1 being clobbered by an ISI, the rfi being in a different page than the mtsrr0/1:

c0003fe0 <mmu_off>:
 c0003fe0: 38 83 00 54  addi r4,r3,84
 c0003fe4: 7c 60 00 a6  mfmsr r3
 c0003fe8: 70 60 00 30  andi. r0,r3,48
 c0003fec: 4d 82 00 20  beqlr
 c0003ff0: 7c 63 00 78  andc r3,r3,r0
 c0003ff4: 7c 9a 03 a6  mtsrr0 r4
 c0003ff8: 7c 7b 03 a6  mtsrr1 r3
 c0003ffc: 7c 00 04 ac  hwsync
 c0004000: 4c 00 00 64  rfi

Align the 4 instruction block used to deactivate the MMU to order 4, so that the block never crosses a page boundary.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/30d2cda111b7977227fff067fa7e358440e2b3a4.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/32s: Reorganise DSI handler.  (Christophe Leroy)

The part dedicated to handling hash_page() is fully unneeded for processors not having real hash pages, like the 603. Let's enlarge the content of the feature fixup, and provide an alternative which jumps directly instead of getting NIPs.

Also, in preparation for VMAP stacks, the end of the DSI handler has moved to later in the code, as it won't fit anymore once VMAP stacks are there.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c31b22c91af8b011d0a4fd9e52ad6afb4b593f71.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/8xx: Enable CONFIG_VMAP_STACK  (Christophe Leroy)

This patch enables CONFIG_VMAP_STACK. For that, a few changes are done in head_8xx.S.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d7ba1e34e80898310d6a314cbebe48baa32894ef.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/8xx: Move tail of alignment exception out of line  (Michael Ellerman)

When we enable VMAP_STACK there will not be enough room for the alignment handler at 0x600 in head_8xx.S. For now move the tail of the alignment handler out of line, and branch to it.

Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

2020-01-27  powerpc/8xx: Split breakpoint exception  (Christophe Leroy)

The breakpoint exception is big. Split it to support future growth of the exception prolog.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1dda3293d86d0f715b13b2633c95d2188a42a02c.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/8xx: Move DataStoreTLBMiss perf handler  (Christophe Leroy)

Move the DataStoreTLBMiss perf handler in order to cope with the future growth of the exception prolog.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/75dd28b04efd2cbdbf01153173d99c11cdff2f08.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/8xx: Drop exception entries for non-existing exceptions  (Christophe Leroy)

head_8xx.S has entries for all exceptions from 0x100 to 0x1f00. Several of them do not exist and are never generated by the 8xx, in accordance with the documentation. Remove those entry points to make some room for future growing exception code.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/66f92866fe9524cf0f056016921c7d53adaef3a0.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/8xx: Use alternative scratch registers in DTLB miss handler  (Christophe Leroy)

In preparation for handling CONFIG_VMAP_STACK, the DTLB miss handler needs to use different scratch registers than other exception handlers in order to not jeopardise exception entry on stack DTLB misses.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c5287ea59ae9630f505019b309bf94029241635f.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/32: Use vmapped stacks for interrupts  (Christophe Leroy)

In order to also catch overflows on IRQ stacks, use vmapped stacks.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d33ad1b36ddff4dcc19f96c592c12a61cf85d406.1576916812.git.christophe.leroy@c-s.fr

2020-01-27  powerpc/32: Add early stack overflow detection with VMAP stack.  (Christophe Leroy)

To avoid recursive faults, stack overflow detection has to be performed before writing to the stack in exception prologs. Do it by checking the alignment. If the stack pointer alignment is wrong, it means it is pointing to the following or preceding page.

Without VMAP stack, a stack overflow is catastrophic. With VMAP stack, a stack overflow isn't destructive, so don't panic. Kill the task with SIGSEGV instead. A dedicated overflow stack is set up for each CPU.

lkdtm: Performing direct entry EXHAUST_STACK
lkdtm: Calling function with 512 frame size to depth 32 ...
lkdtm: loop 32/32 ...
lkdtm: loop 31/32 ...
lkdtm: loop 30/32 ...
lkdtm: loop 29/32 ...
lkdtm: loop 28/32 ...
lkdtm: loop 27/32 ...
lkdtm: loop 26/32 ...
lkdtm: loop 25/32 ...
lkdtm: loop 24/32 ...
lkdtm: loop 23/32 ...
lkdtm: loop 22/32 ...
lkdtm: loop 21/32 ...
lkdtm: loop 20/32 ...
Kernel stack overflow in process test[359], r1=c900c008
Oops: Kernel stack overflow, sig: 6 [#1]
BE PAGE_SIZE=4K MMU=Hash PowerMac
Modules linked in:
CPU: 0 PID: 359 Comm: test Not tainted 5.3.0-rc7+ #2225
NIP:  c0622060 LR: c0626710 CTR: 00000000
REGS: c0895f48 TRAP: 0000   Not tainted  (5.3.0-rc7+)
MSR:  00001032 <ME,IR,DR,RI>  CR: 28004224  XER: 00000000
GPR00: c0626ca4 c900c008 c783c000 c07335cc c900c010 c07335cc c900c0f0 c07335cc
GPR08: c900c0f0 00000001 00000000 00000000 28008222 00000000 00000000 00000000
GPR16: 00000000 00000000 10010128 10010000 b799c245 10010158 c07335cc 00000025
GPR24: c0690000 c08b91d4 c068f688 00000020 c900c0f0 c068f668 c08b95b4 c08b91d4
NIP [c0622060] format_decode+0x0/0x4d4
LR [c0626710] vsnprintf+0x80/0x5fc
Call Trace:
[c900c068] [c0626ca4] vscnprintf+0x18/0x48
[c900c078] [c007b944] vprintk_store+0x40/0x214
[c900c0b8] [c007bf50] vprintk_emit+0x90/0x1dc
[c900c0e8] [c007c5cc] printk+0x50/0x60
[c900c128] [c03da5b0] recursive_loop+0x44/0x6c
[c900c338] [c03da5c4] recursive_loop+0x58/0x6c
[c900c548] [c03da5c4] recursive_loop+0x58/0x6c
[c900c758] [c03da5c4] recursive_loop+0x58/0x6c
[c900c968] [c03da5c4] recursive_loop+0x58/0x6c
[c900cb78] [c03da5c4] recursive_loop+0x58/0x6c
[c900cd88] [c03da5c4] recursive_loop+0x58/0x6c
[c900cf98] [c03da5c4] recursive_loop+0x58/0x6c
[c900d1a8] [c03da5c4] recursive_loop+0x58/0x6c
[c900d3b8] [c03da5c4] recursive_loop+0x58/0x6c
[c900d5c8] [c03da5c4] recursive_loop+0x58/0x6c
[c900d7d8] [c03da5c4] recursive_loop+0x58/0x6c
[c900d9e8] [c03da5c4] recursive_loop+0x58/0x6c
[c900dbf8] [c03da5c4] recursive_loop+0x58/0x6c
[c900de08] [c03da67c] lkdtm_EXHAUST_STACK+0x30/0x4c
[c900de18] [c03da3e8] direct_entry+0xc8/0x140
[c900de48] [c029fb40] full_proxy_write+0x64/0xcc
[c900de68] [c01500f8] __vfs_write+0x30/0x1d0
[c900dee8] [c0152cb8] vfs_write+0xb8/0x1d4
[c900df08] [c0152f7c] ksys_write+0x58/0xe8
[c900df38] [c0014208] ret_from_syscall+0x0/0x34
--- interrupt: c01 at 0xf806664
    LR = 0x1000c868
Instruction dump:
4bffff91 80010014 7c832378 7c0803a6 38210010 4e800020 3d20c08a 3ca0c089
8089a0cc 38a58f0c 38600001 4ba2d494 <9421ffe0> 7c0802a6 bfc10018 7c9f2378

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1b89c121b4070c7ee99e4f22cc178f15a736b07b.1576916812.git.christophe.leroy@c-s.fr

2020-01-26  powerpc: align stack to 2 * THREAD_SIZE with VMAP_STACK  (Christophe Leroy)

In order to ease stack overflow detection, align the stack to 2 * THREAD_SIZE when using VMAP_STACK. This allows overflow detection using a single bit check.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/60e9ae86b7d2cdcf21468787076d345663648f46.1576916812.git.christophe.leroy@c-s.fr

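Why the alignment buys a single-bit check, in a hedged C rendering (the real test lives in the assembly prolog): with a THREAD_SIZE stack whose base is 2 * THREAD_SIZE aligned, every valid stack pointer has the THREAD_SIZE bit clear, so a set bit means the pointer has left its stack into the adjacent page.

static inline bool stack_overflowed(unsigned long sp)
{
	/* bit set => sp is outside its own THREAD_SIZE-sized stack */
	return sp & THREAD_SIZE;
}
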
2020-01-26  powerpc/32: prepare for CONFIG_VMAP_STACK  (Christophe Leroy)

To support CONFIG_VMAP_STACK, the kernel has to activate Data MMU Translation for accessing the stack. Before doing that it must save SRR0, SRR1 and also DAR and DSISR when relevant, in order to not lose them in case there is a Data TLB Miss once the translation is reactivated.

This patch adds fields in the thread struct for saving those registers. It prepares entry_32.S to handle exception entry with Data MMU Translation enabled and alters the EXCEPTION_PROLOG macros to save SRR0, SRR1, DAR and DSISR then reenable Data MMU.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a775a1fea60f190e0f63503463fb775310a2009b.1576916812.git.christophe.leroy@c-s.fr

2020-01-26  powerpc/32: add a macro to get and/or save DAR and DSISR on stack.  (Christophe Leroy)

Refactor reading and saving of DAR and DSISR in exception vectors. This will ease the implementation of VMAP stack.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1286b3e51b07727c6b4b05f2df9af3f9b1717fb5.1576916812.git.christophe.leroy@c-s.fr

2020-01-26  powerpc/32: move MSR_PR test into EXCEPTION_PROLOG_0  (Christophe Leroy)

In order to simplify the VMAP stack implementation, move the MSR_PR test into EXCEPTION_PROLOG_0. This requires not modifying cr0 between EXCEPTION_PROLOG_0 and EXCEPTION_PROLOG_1.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5c8b5bba692b92654dbd363a229a1ba91db725bb.1576916812.git.christophe.leroy@c-s.fr

2020-01-26  powerpc/32: save DEAR/DAR before calling handle_page_fault  (Christophe Leroy)

handle_page_fault() is the only function that saves DAR/DEAR itself. Save DAR/DEAR before calling handle_page_fault() to prepare for VMAP stack, which will require saving them even earlier.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3a4d58d378091086f00fde42b59610c80289e120.1576916812.git.christophe.leroy@c-s.fr

2020-01-26  powerpc/32: Add EXCEPTION_PROLOG_0 in head_32.h  (Christophe Leroy)

This patch creates a macro for the very first part of the exception prolog; this will help when implementing CONFIG_VMAP_STACK.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2249fe62f481121a180e9655ad2b998093f318f3.1576916812.git.christophe.leroy@c-s.fr

2020-01-26  powerpc/32: replace MTMSRD() by mtmsr  (Christophe Leroy)

On PPC32, MTMSRD() is simply defined as mtmsr. Replace MTMSRD(reg) by mtmsr reg in files dedicated to PPC32; this makes the code less obscure.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/22469e78230edea3dbd0c79a555d73124f6c6d93.1576916812.git.christophe.leroy@c-s.fr

2020-01-26  selftests/eeh: Bump EEH wait time to 60s  (Oliver O'Halloran)

Some newer cards supported by aacraid can take up to 40s to recover after an EEH event. This causes spurious failures in the basic EEH self-test since the current maximum timeout is only 30s.

Fix the immediate issue by bumping the timeout to a default of 60s, and allow the wait time to be specified via an environment variable (EEH_MAX_WAIT).

Reported-by: Steve Best <sbest@redhat.com>
Suggested-by: Douglas Miller <dougmill@us.ibm.com>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200122031125.25991-1-oohall@gmail.com

2020-01-26  powerpc/pseries/lparcfg: Fix display of Maximum Memory  (Michael Bringmann)

Correct overflow problem in calculation and display of Maximum Memory value to syscfg.

Signed-off-by: Michael Bringmann <mwb@linux.ibm.com>
[mpe: Only n_lmbs needs casting to unsigned long]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5577aef8-1d5a-ca95-ff0a-9c7b5977e5bf@linux.ibm.com

2020-01-26  powerpc/mm: Remove kvm radix prefetch workaround for Power9 DD2.2  (Jordan Niethe)

Commit a25bd72badfa ("powerpc/mm/radix: Workaround prefetch issue with KVM") introduced a number of workarounds, as coming out of a guest with the MMU enabled would cause the CPU to start running in hypervisor state with the PID value from the guest. The CPU will then start prefetching for the hypervisor with that PID value.

In Power9 DD2.2 the CPU behaviour was modified to fix this: when accessing Quadrant 0 in hypervisor mode with LPID != 0, prefetching will not be performed. This means that we can get rid of the workarounds for Power9 DD2.2 and later revisions. Add a new CPU feature CPU_FTR_P9_RADIX_PREFETCH_BUG to indicate whether the workarounds are needed.

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20191206031722.25781-1-jniethe5@gmail.com

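A hedged sketch of how such a feature bit is typically consumed (cpu_has_feature() is the standard powerpc test; the workaround body itself is elided, not shown in the commit):

#include <asm/cpu_has_feature.h>

static void maybe_apply_prefetch_workaround(void)
{
	if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG))
		return;		/* DD2.2 and later: hardware is fixed, skip it */
	/* pre-DD2.2 Power9: apply the guest-exit PID workaround here */
}
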
2020-01-26  powerpc/papr_scm: Fix leaking 'bus_desc.provider_name' in some paths  (Vaibhav Jain)

The string 'bus_desc.provider_name' allocated inside papr_scm_nvdimm_init() leaks in case the call to nvdimm_bus_register() fails or when papr_scm_remove() is called. This minor patch ensures that 'bus_desc.provider_name' is freed in the error path for nvdimm_bus_register() as well as in papr_scm_remove().

Fixes: b5beae5e224f ("powerpc/pseries: Add driver for PAPR SCM regions")
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200122155140.120429-1-vaibhav@linux.ibm.com

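A hedged sketch of the shape of the fix (an abridged fragment: 'p' stands for the driver's private struct and 'pdev' for its platform device; bus_desc/provider_name and nvdimm_bus_register() are the libnvdimm names the commit refers to):

/* Probe error path: don't leak the name if registration fails. */
p->bus_desc.provider_name = kstrdup(pdev->name, GFP_KERNEL);
p->bus = nvdimm_bus_register(&pdev->dev, &p->bus_desc);
if (!p->bus) {
	kfree(p->bus_desc.provider_name);	/* was leaked before the fix */
	return -ENXIO;
}

/* ... and likewise kfree(p->bus_desc.provider_name) in papr_scm_remove(). */
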