path: root/arch/powerpc/kernel
Age  Commit message  Author
2020-12-11  powerpc/smp: Parse ibm,thread-groups with multiple properties  (Gautham R. Shenoy)
The "ibm,thread-groups" device-tree property is an array used to indicate whether groups of threads within a core share certain properties. It provides details of which property is shared by which groups of threads, and it can encode information about multiple properties being shared by different thread-groups within the core.

Example: suppose "ibm,thread-groups" = [1,2,4,8,10,12,14,9,11,13,15,2,2,4,8,10,12,14,9,11,13,15]

This can be decomposed into two consecutive arrays:

a) [1,2,4,8,10,12,14,9,11,13,15]
b) [2,2,4,8,10,12,14,9,11,13,15]

where a) provides information about Property "1" being shared by "2" groups of "4" threads each. The "ibm,ppc-interrupt-server#s" of the first group is {8,10,12,14} and that of the second group is {9,11,13,15}. Property "1" indicates that the threads in each group share the L1 cache, the translation cache and the instruction data flow.

b) provides information about Property "2" being shared by "2" groups of "4" threads each, with the same "ibm,ppc-interrupt-server#s" per group. Property "2" indicates that the threads in each group share the L2 cache.

The existing code assumes that "ibm,thread-groups" encodes information about only one property. Hence, even on platforms that encode multiple properties shared by the corresponding groups of threads, the current code will only pick the first one (in the above example, it will only consider [1,2,4,8,10,12,14,9,11,13,15] but not [2,2,4,8,10,12,14,9,11,13,15]).

This patch extends the parsing to support platforms that encode information about multiple properties being shared by the corresponding groups of threads.

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1607596739-32439-2-git-send-email-ego@linux.vnet.ibm.com
2020-12-11  powerpc/watchpoint: Workaround P10 DD1 issue with VSX-32 byte instructions  (Ravi Bangoria)
POWER10 DD1 has an issue where it generates watchpoint exceptions when it shouldn't. The conditions where this occurs are:

 - octword op
 - ending address of DAWR range is less than starting address of op
 - those addresses need to be in the same or in two consecutive 512B blocks
 - 'op address + 64B' generates an address that has a carry into bit 52 (crosses 2K boundary)

Handle such spurious exceptions by considering them extraneous and emulating/single-stepping the instruction without generating an event.

[ravi: Fixed build warning reported by lkp@intel.com]
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201106045650.278987-1-ravi.bangoria@linux.ibm.com
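Restating those conditions as a C predicate may make them easier to follow. This is a sketch with assumed names and address interpretations (the second block-adjacency condition in particular is paraphrased); it is not the handler's actual code:

	#include <linux/types.h>

	/* True if a 32-byte op at 'op' could trip the P10 DD1 erratum. */
	static bool p10_dd1_spurious_dawr(unsigned long op,
					  unsigned long dawr_start,
					  unsigned long dawr_end)
	{
		/* DAWR range must end below the op's starting address. */
		if (dawr_end >= op)
			return false;

		/* Same or two consecutive 512B blocks. */
		if ((op >> 9) - (dawr_start >> 9) > 1)
			return false;

		/* op + 64B must carry into bit 52, i.e. cross a 2K boundary. */
		return ((op + 64) & ~0x7ffUL) != (op & ~0x7ffUL);
	}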
2020-12-10  powerpc/64s: Remove idle workaround code from restore_cpu_cpufeatures  (Nicholas Piggin)
Idle code no longer uses the .cpu_restore CPU operation to restore SPRs, so this workaround is no longer required.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190711022404.18132-2-npiggin@gmail.com
2020-12-09  powerpc/64: irq replay remove decrementer overflow check  (Nicholas Piggin)
This is a way to catch some cases of decrementer overflow, when the decrementer has underflowed an odd number of times while MSR[EE] was disabled.

With a typical small decrementer, a timer that fires when MSR[EE] is disabled will be "lost" if MSR[EE] remains disabled for between 4.3 and 8.6 seconds after the timer expires. In any case, the decrementer interrupt would be taken at 8.6 seconds and the timer would be found at that point.

So this check is for catching extreme latency events, and it prevents those latencies from being a further few seconds long. It's not obvious this is a good tradeoff: this is already a watchdog-magnitude event, and that situation is not significantly improved by this check. For large decrementers, it's useless.

Therefore remove this check, which avoids a mftb when enabling hard-disabled interrupts (e.g., when enabling after coming from hardware interrupt handlers). Perhaps more importantly, it also removes the clunky MSR[EE] vs PACA_IRQ_HARD_DIS incoherency in soft-interrupt replay, which simplifies the code.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201107014336.2337337-1-npiggin@gmail.com
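The 4.3/8.6-second figures follow from the decrementer width and the timebase frequency. A back-of-envelope check, assuming a 32-bit decrementer driven by a ~500 MHz timebase (both assumptions for illustration):

	#include <stdio.h>

	int main(void)
	{
		const double tb_hz = 500e6;	/* assumed timebase frequency */

		/* A timer is "lost" once the decrementer has wrapped past it. */
		printf("half wrap: %.1f s\n", (1ULL << 31) / tb_hz);	/* ~4.3 s */
		printf("full wrap: %.1f s\n", (1ULL << 32) / tb_hz);	/* ~8.6 s */
		return 0;
	}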
2020-12-09  powerpc/64s: Remove MSR[ISF] bit  (Nicholas Piggin)
No supported processor implements this mode. Setting the bit in MSR values can be a bit confusing (and would prevent the bit from ever being reused). Remove it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201106045340.1935841-1-npiggin@gmail.com
2020-12-09  powerpc/fault: Perform exception fixup in do_page_fault()  (Christophe Leroy)
Exception fixup doesn't require the heavy full regs saving, so do it from do_page_fault() directly. For that, split bad_page_fault() in two parts.

As bad_page_fault() can also be called from places other than handle_page_fault(), it will still perform exception fixup and fall back on __bad_page_fault(). handle_page_fault() directly calls __bad_page_fault(), as the exception fixup will now be done by do_page_fault().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bd07d6fef9237614cd6d318d8f19faeeadaa816b.1607491748.git.christophe.leroy@csgroup.eu
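A rough sketch of the resulting split (signatures simplified; shown only to make the call graph concrete, not the function verbatim):

	#include <linux/extable.h>
	#include <asm/ptrace.h>

	/* The new fixup-free half: report the bad access and die. */
	void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);

	void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
	{
		const struct exception_table_entry *entry;

		/* Are we prepared to handle this fault (kernel exception fixup)? */
		entry = search_exception_tables(regs->nip);
		if (entry) {
			instruction_pointer_set(regs, extable_fixup(entry));
			return;
		}

		/* No fixup entry: genuinely bad kernel access. */
		__bad_page_fault(regs, address, sig);
	}

do_page_fault() performs the same search_exception_tables() fixup itself, so handle_page_fault() can jump straight to __bad_page_fault() without the full register save.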
2020-12-09  powerpc/8xx: Use SPRN_SPRG_SCRATCH2 in DTLB miss exception  (Christophe Leroy)
Use SPRN_SPRG_SCRATCH2 in the DTLB miss exception instead of DAR, in order to be similar to the ITLB miss exception. This also simplifies mpc8xx_pmu_del().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e3cc8f023ef40e1e8ae144e4dd1330a5ff022528.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/8xx: Use SPRN_SPRG_SCRATCH2 in ITLB miss exception  (Christophe Leroy)
In order to re-enable the MMU earlier, ensure the ITLB miss exception cannot clobber SPRN_SPRG_SCRATCH0 and SPRN_SPRG_SCRATCH1. Do so by using SPRN_SPRG_SCRATCH2 and SPRN_M_TW instead, like the DTLB miss exception.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/abc78e8e9577d473691ebb9996c6413b37bfd9ca.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/8xx: Simplify INVALIDATE_ADJACENT_PAGES_CPU15  (Christophe Leroy)
We now have r11 available as a scratch register, so INVALIDATE_ADJACENT_PAGES_CPU15() can be simplified.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bdafd651b4ac3a851fd09249f5f3699c50da29f2.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/8xx: Always pin kernel text TLB  (Christophe Leroy)
There is no big point in not pinning kernel text anymore, as we can now keep pinned TLB entries even with things like DEBUG_PAGEALLOC. Remove CONFIG_PIN_TLB_TEXT, making kernel text pinning unconditional.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Drop ifdef around mmu_pin_tlb() to fix build errors]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/203b89de491e1379f1677a2685211b7c32adfff0.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/8xx: DEBUG_PAGEALLOC doesn't require an ITLB miss exception handler  (Christophe Leroy)
Since commit e611939fc8ec ("powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs"), pinned TLBs are no longer invalidated by __kernel_map_pages() when CONFIG_DEBUG_PAGEALLOC is selected. Remove the dependency on CONFIG_DEBUG_PAGEALLOC.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e796c5fcb5898de827c803cf1ab8ba1d7a5d4b76.1606231483.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/process: Remove target specific __set_dabr()  (Christophe Leroy)
The target-specific __set_dabr() variants are simple functions that can be inlined directly inside set_dabr(), using IS_ENABLED() instead of #ifdef.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c10b263668e137236c71d76648b03cf2cd1ee66f.1607076733.git.christophe.leroy@csgroup.eu
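The shape of the change, as a simplified sketch (DABRX handling and the 47x isync detail are elided; this is not the kernel function verbatim). IS_ENABLED() keeps both branches visible to the compiler, which then dead-code-eliminates the unused one:

	#include <linux/kernel.h>
	#include <asm/hw_breakpoint.h>
	#include <asm/machdep.h>
	#include <asm/reg.h>

	static int set_dabr(struct arch_hw_breakpoint *brk)
	{
		unsigned long dabr = brk->address | (brk->type & HW_BRK_TYPE_DABR);

		if (ppc_md.set_dabr)
			return ppc_md.set_dabr(dabr, 0);	/* dabrx elided here */

		/* Both branches compile; only the configured one survives. */
		if (IS_ENABLED(CONFIG_PPC_ADV_DEBUG_REGS))
			mtspr(SPRN_DAC1, dabr);		/* BookE debug registers */
		else if (IS_ENABLED(CONFIG_PPC_BOOK3S))
			mtspr(SPRN_DABR, dabr);		/* classic DABR */

		return 0;
	}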
2020-12-09  powerpc/32s: Handle PROTFAULT in hash_page() also for CONFIG_PPC_KUAP  (Christophe Leroy)
On hash 32 bits, handling minor protection faults like clearing the dirty flag is heavy if done from the normal page-fault processing, because it implies a hash table software lookup to flush the entry, and then a DSI is taken anyway to add the entry back.

When KUAP was implemented, as explained in commit a68c31fc01ef ("powerpc/32s: Implement Kernel Userspace Access Protection"), protection faults were diverted from hash_page() because hash_page() was not able to identify a KUAP fault.

Implement KUAP verification in hash_page() by clearing write permission when the access is a kernel access and Ks is 1. This works regardless of the address, because kernel segments always have Ks set to 0 while user segments have Ks set to 0 only when kernel write to userspace is granted. Then protection faults can be handled by hash_page() even for KUAP.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8a4ffe4798e9ea32aaaccdf85e411bb1beed3500.1605542955.git.christophe.leroy@csgroup.eu
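Expressed in C for clarity (the real hash_page() is assembly, and the names and constants below are illustrative): on a kernel access, a set Ks bit in the segment register means the KUAP lock is engaged, so write permission is dropped before the PTE permission check, and a blocked write falls through to the protection-fault path.

	#include <linux/types.h>

	#define SR_KS		0x40000000	/* segment register Ks bit (illustrative) */
	#define _PAGE_RW	0x400		/* book3s/32 PTE write bit (illustrative) */

	static u32 kuap_adjust_access(u32 access, bool kernel_access, u32 sr)
	{
		/*
		 * Kernel segments always have Ks = 0; user segments have
		 * Ks = 0 only while kernel writes to userspace are granted.
		 * So Ks = 1 on a kernel access means the write must fault.
		 */
		if (kernel_access && (sr & SR_KS))
			access &= ~_PAGE_RW;
		return access;
	}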
2020-12-09  powerpc/32s: Make support for 603 and 604+ selectable  (Christophe Leroy)
book3s/32 has two main families:

 - CPUs with 603 cores that don't have a HASH PTE table and perform SW TLB loading.
 - Other CPUs, based on 604+ cores, that have a HASH PTE table.

This leads to some complex logic and additional code to support both. That makes sense for distribution kernels that aim at running on any CPU, but when you are fine-tuning a kernel for an embedded 603-based board you don't need all the HASH logic.

Allow selection of support for each family, in order to opt out of unneeded parts of code. At least one must be selected.

Note that some of the CPUs supporting HASH also support SW TLB loading; however, that is not supported by the Linux kernel for the time being, because they do not have alternate registers in the TLB miss exception handlers.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8dde0cdb629a71abc29b0d85a52a86e920376cb6.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Regroup 603 based CPUs in cputable  (Christophe Leroy)
In order to selectively build the kernel for 603 SW TLB handling, regroup all 603 based CPUs together.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/45065263fdb9f5cc2a2d210ec2a762ac8bf5b2bc.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Remove CONFIG_PPC_BOOK3S_6xx  (Christophe Leroy)
As the 601 is gone, CONFIG_PPC_BOOK3S_6xx and CONFIG_PPC_BOOK3S_32 are redundant. Remove CONFIG_PPC_BOOK3S_6xx.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f18c16af37f6f77b577bed8d9e12831b695617ae.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/32s: Make Hash var static  (Christophe Leroy)
The Hash variable is now used only locally in mmu.c, so there is no need to set it in head_32.S anymore. Make it static and initialise it to the early hash table.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/786c82a89cdfdaabb32b72a44f7c312fa81d192b.1603348103.git.christophe.leroy@csgroup.eu
2020-12-09  powerpc/rtas: Fix typo of ibm,open-errinjct in RTAS filter  (Tyrel Datwyler)
Commit bd59380c5ba4 ("powerpc/rtas: Restrict RTAS requests from userspace") introduced the following error when invoking the errinjct userspace tool:

	[root@ltcalpine2-lp5 librtas]# errinjct open
	[327884.071171] sys_rtas: RTAS call blocked - exploit attempt?
	[327884.071186] sys_rtas: token=0x26, nargs=0 (called by errinjct)
	errinjct: Could not open RTAS error injection facility
	errinjct: librtas: open: Unexpected I/O error

The entry for ibm,open-errinjct in the rtas_filter array has a typo where the "j" is omitted in the RTAS call name. After fixing this typo, the errinjct tool functions again as expected:

	[root@ltcalpine2-lp5 linux]# errinjct open
	RTAS error injection facility open, token = 1

Fixes: bd59380c5ba4 ("powerpc/rtas: Restrict RTAS requests from userspace")
Cc: stable@vger.kernel.org
Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201208195434.8289-1-tyreld@linux.ibm.com
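The fix itself is a one-character change in the filter table in arch/powerpc/kernel/rtas.c. Roughly, with the struct layout abridged and surrounding entries elided (field order and attributes here are an approximation, not the file verbatim):

	struct rtas_filter {
		const char *name;
		int token;
		/* indexes into the RTAS args buffer, -1 if unused */
		int buf_idx1, size_idx1, buf_idx2, size_idx2;
	};

	static struct rtas_filter rtas_filters[] = {
		/* ... other entries ... */
		/* was "ibm,open-errinct", which never matched the real call name */
		{ "ibm,open-errinjct", -1, -1, -1, -1, -1 },
		/* ... other entries ... */
	};

Because the misspelled name could never match, the filter treated every ibm,open-errinjct request as unknown and blocked it.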
2020-12-08  powerpc/rtas: remove unused rtas_suspend_last_cpu()  (Nathan Lynch)
rtas_suspend_last_cpu() is now unused; remove it, along with __rtas_suspend_last_cpu(), which also becomes unused.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201207215200.1785968-24-nathanl@linux.ibm.com
2020-12-08  powerpc/rtas: remove rtas_suspend_cpu()  (Nathan Lynch)
rtas_suspend_cpu() no longer has users; remove it and __rtas_suspend_cpu(), which now becomes unused as well.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201207215200.1785968-22-nathanl@linux.ibm.com
2020-12-08  powerpc/rtas: remove rtas_ibm_suspend_me_unsafe()  (Nathan Lynch)
rtas_ibm_suspend_me_unsafe() is now unused; remove it and rtas_percpu_suspend_me(), which becomes unused as a result.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201207215200.1785968-17-nathanl@linux.ibm.com
2020-12-08  powerpc/rtas: dispatch partition migration requests to pseries  (Nathan Lynch)
sys_rtas() cannot call ibm,suspend-me directly in the same way it handles other inputs. Instead it must dispatch the request to code that can first perform the H_JOIN sequence before any call to ibm,suspend-me can succeed. Over time kernel/rtas.c has accreted a fair amount of platform-specific code to implement this.

Since a different, more robust implementation of the suspend sequence is now in the pseries platform code, we want to dispatch the request there.

Note that invoking ibm,suspend-me via the RTAS syscall is all but deprecated; this change preserves ABI compatibility for old programs while providing to them the benefit of the new partition suspend implementation. This is a behavior change in that the kernel performs the device tree update and firmware activation before returning, but experimentation indicates this is tolerated fine by legacy user space.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201207215200.1785968-16-nathanl@linux.ibm.com
2020-12-08  powerpc/rtas: add rtas_activate_firmware()  (Nathan Lynch)
Provide a documented wrapper function for the ibm,activate-firmware service, which must be called after a partition migration or hibernation. If the function is absent or the call fails, the OS will continue to run normally with the current firmware, so there is no need to perform any recovery. Just log it and continue.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201207215200.1785968-6-nathanl@linux.ibm.com
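A sketch consistent with that description (simplified; shown to illustrate the "log and continue" contract, not as the exact upstream function): call the service if present, retry while RTAS reports a busy status, and merely log a failure, since the OS keeps running on the current firmware either way.

	#include <linux/printk.h>
	#include <asm/rtas.h>

	static void activate_firmware(void)
	{
		s32 token = rtas_token("ibm,activate-firmware");
		int fwrc;

		if (token == RTAS_UNKNOWN_SERVICE) {
			pr_notice("ibm,activate-firmware method unavailable\n");
			return;
		}

		do {
			fwrc = rtas_call(token, 0, 1, NULL);
		} while (rtas_busy_delay(fwrc));	/* sleep/retry on busy statuses */

		if (fwrc)
			pr_err("ibm,activate-firmware failed (%i)\n", fwrc);
	}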
2020-12-08  powerpc/rtas: add rtas_ibm_suspend_me()  (Nathan Lynch)
Now that the name is available, provide a simple wrapper for ibm,suspend-me which returns both a Linux errno and, optionally, the actual RTAS status to the caller.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201207215200.1785968-5-nathanl@linux.ibm.com
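A sketch of that contract (the status-to-errno mapping here is abridged; the real wrapper distinguishes more RTAS statuses): the return value is a Linux errno, while the raw firmware status is passed back through an optional out-parameter.

	#include <linux/errno.h>
	#include <asm/rtas.h>

	static int ibm_suspend_me(int *fw_status)
	{
		int fwrc = rtas_call(rtas_token("ibm,suspend-me"), 0, 1, NULL);
		int ret;

		switch (fwrc) {
		case 0:
			ret = 0;
			break;
		case RTAS_BUSY:
			ret = -EAGAIN;
			break;
		default:
			ret = -EIO;	/* finer-grained mapping elided */
			break;
		}

		if (fw_status)
			*fw_status = fwrc;	/* raw RTAS status, if requested */

		return ret;
	}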
2020-12-08  powerpc/rtas: rtas_ibm_suspend_me -> rtas_ibm_suspend_me_unsafe  (Nathan Lynch)
The pseries partition suspend sequence requires that all active CPUs call H_JOIN, which suspends all but one of them with interrupts disabled. The "chosen" CPU is then to call ibm,suspend-me to complete the suspend. Upon returning from ibm,suspend-me, the chosen CPU is to use H_PROD to wake the joined CPUs.

Using on_each_cpu() for this, as rtas_ibm_suspend_me() does to implement partition migration, is susceptible to deadlock with other users of on_each_cpu() and with users of stop_machine APIs. The callback passed to on_each_cpu() is not allowed to synchronize with other CPUs in the way it is used here.

Complicating the fix is the fact that rtas_ibm_suspend_me() also occupies the function name that should be used to provide a more conventional wrapper for ibm,suspend-me. Rename rtas_ibm_suspend_me() to rtas_ibm_suspend_me_unsafe() to free up the name and indicate that it should not gain users.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201207215200.1785968-4-nathanl@linux.ibm.com
2020-12-08  powerpc/rtas: prevent suspend-related sys_rtas use on LE  (Nathan Lynch)
While drmgr has had work in some areas to make its RTAS syscall interactions endian-neutral, its code for performing partition migration via the syscall has never worked on LE. While it is able to complete ibm,suspend-me successfully, it crashes when attempting the subsequent ibm,update-nodes call.

drmgr is the only known (or plausible) user of ibm,suspend-me, ibm,update-nodes, and ibm,update-properties, so allow them only in big-endian configurations.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201207215200.1785968-2-nathanl@linux.ibm.com
2020-12-05  powerpc: Remove ucache_bsize  (Christophe Leroy)
ppc601 and e200 were the only users of ucache_bsize; both are now gone. Remove ucache_bsize.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/288b6048597c0fdc495b203fda57a223d89499d2.1605589460.git.christophe.leroy@csgroup.eu
2020-12-05  powerpc: Retire e200 core (mpc555x processor)  (Christophe Leroy)
There is no defconfig selecting CONFIG_E200, and no platform. e200 is an earlier version of booke, a predecessor of e500, with some particularities like a unified cache instead of separate instruction and data caches. Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Scott Wood <oss@buserror.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/34ebc3ba2c768d97f363bd5f2deea2356e9ae127.1605589460.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/pci: Remove LSI mappings on device teardown  (Oliver O'Halloran)
When a passthrough IO adapter is removed from a pseries machine using the hash MMU and the XIVE interrupt mode, the POWER hypervisor expects the guest OS to clear all page table entries related to the adapter. If some are still present, the RTAS call which isolates the PCI slot returns error 9001 "valid outstanding translations" and the removal of the IO adapter fails.

This is because when the PHBs are scanned, Linux automatically maps the INTx interrupts into the Linux interrupt number space, but these mappings are never removed. This problem can be fixed by adding the corresponding unmap operation when the device is removed. There's no pcibios_* hook for the remove case, but the same effect can be achieved using a bus notifier.

Because INTx are shared among PHBs (and potentially across the system), this adds tracking of virqs so as to unmap them only when the last user is gone.

[aik: added refcounter]
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Tested-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.ibm.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201202005222.5477-1-aik@ozlabs.ru
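A sketch of the bus-notifier approach (names are illustrative, and the real code refcounts shared virqs before disposing of a mapping, which is omitted here):

	#include <linux/init.h>
	#include <linux/irqdomain.h>
	#include <linux/notifier.h>
	#include <linux/pci.h>

	static int pci_irq_notify(struct notifier_block *nb,
				  unsigned long action, void *data)
	{
		struct pci_dev *pdev = to_pci_dev(data);

		/* On device removal, drop the INTx virq mapping (if any). */
		if (action == BUS_NOTIFY_DEL_DEVICE && pdev->irq)
			irq_dispose_mapping(pdev->irq);

		return NOTIFY_DONE;
	}

	static struct notifier_block pci_irq_nb = {
		.notifier_call = pci_irq_notify,
	};

	static int __init pci_irq_notifier_init(void)
	{
		return bus_register_notifier(&pci_bus_type, &pci_irq_nb);
	}
	arch_initcall(pci_irq_notifier_init);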
2020-12-04  powerpc/44x: Don't support 47x code and non 47x code at the same time  (Christophe Leroy)
440/460 variants and 470 variants are not compatible, so there is no need to build code supporting both and selecting between them via MMU features. Just use CONFIG_PPC_47x to decide what to build.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c3e64da3d5d068c69a201e03bbae7da055761e5b.1603041883.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/44x: Don't support 440 when CONFIG_PPC_47x is set  (Christophe Leroy)
As stated in platform/44x/Kconfig, CONFIG_PPC_47x is not compatible with the 440 and 460 variants. This is confirmed in asm/cache.h, as L1_CACHE_SHIFT is different for 47x, meaning a kernel built for 47x will not run correctly on a 440.

In cputable, opt out all 440 and 460 variants when CONFIG_PPC_47x is set. Also add a default match dedicated to 470.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/822833ce3dc10634339818f7d1ab616edf63b0c6.1603041883.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/feature: Remove CPU_FTR_NODSISRALIGN  (Christophe Leroy)
CPU_FTR_NODSISRALIGN has not been used since commit 31bfdb036f12 ("powerpc: Use instruction emulation infrastructure to handle alignment faults"). Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/05d98136b24bbf11525445414bb18cffe2724f48.1602587470.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/32: Use SPRN_SPRG_SCRATCH2 in exception prologs  (Christophe Leroy)
Use SPRN_SPRG_SCRATCH2 as a third scratch register in exception prologs, in order to simplify them and avoid data going back and forth from/to CR.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6f5c8a7faa8cc54acb89c55c20aa579a2f30a4e9.1606285014.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/32s: Use SPRN_SPRG_SCRATCH2 in DSI prolog  (Christophe Leroy)
Use SPRN_SPRG_SCRATCH2 as an alternative scratch register in the early part of the DSI prolog, in order to avoid clobbering SPRN_SPRG_SCRATCH0/1 used by other prologs.

The 603 doesn't like a jump from DataLoadTLBMiss to the 10 nops that are now at the beginning of the DSI exception as a result of the feature section. To work around this, add a jump as an alternative. This also avoids fetching 10 nops for nothing.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f9f8df2a2be93568768ef1ac793639f7914cf103.1606285014.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/32: Simplify EXCEPTION_PROLOG_1 macro  (Christophe Leroy)
Make the code more readable with a clear CONFIG_VMAP_STACK section and a clear non-CONFIG_VMAP_STACK section.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c0f16cf432d22fc80097264d94649460d3dd761d.1606285014.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/603: Use SPRN_SDR1 to store the pgdir phys address  (Christophe Leroy)
On the 603, SDR1 is not used. In order to free SPRN_SPRG2, use SPRN_SDR1 to store the pgdir phys addr. But only some bits of SDR1 can be used (0xffff01ff). As the pgdir is 4k aligned, rotate it by 4 bits to the left.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7370574b49d8476878ce5480726197993cb76108.1606285014.git.christophe.leroy@csgroup.eu
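The bit arithmetic, illustrated in C (the real code lives in the 603 TLB-miss assembly; the helper names here are made up): a 4 KB-aligned physical pgdir address has its low 12 bits clear, so rotating left by 4 maps its bits 31..12 onto bits 31..16 and 3..0, entirely within SDR1's writable mask 0xffff01ff, and a rotate right undoes it.

	#include <linux/bitops.h>
	#include <asm/reg.h>

	static void store_603_pgdir(u32 pgdir_pa)
	{
		/* pgdir_pa is 4 KB aligned: low 12 bits are zero. */
		mtspr(SPRN_SDR1, rol32(pgdir_pa, 4));
	}

	static u32 load_603_pgdir(void)
	{
		return ror32(mfspr(SPRN_SDR1), 4);
	}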
2020-12-04  powerpc/32s: Fix an FTR_SECTION_ELSE  (Christophe Leroy)
An FTR_SECTION_ELSE is in the middle of a BEGIN_MMU_FTR_SECTION/ALT_MMU_FTR_SECTION_END_IFSET pair. Change it to MMU_FTR_SECTION_ELSE.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/61790f1a91692950a6bb5bb53d6d514d9bcdad74.1606285014.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/32s: Always map kernel text and rodata with BATs  (Christophe Leroy)
Since commit 2b279c0348af ("powerpc/32s: Allow mapping with BATs with DEBUG_PAGEALLOC"), there is no real situation where mapping without BATs is required.

In order to simplify memory handling, always map kernel text and rodata with BATs, even when the "nobats" kernel parameter is set. Also fix the 603 TLB miss exceptions, which no longer require the kernel page table with DEBUG_PAGEALLOC.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/da51f7ec632825a4ce43290a904aad61648408c0.1606285013.git.christophe.leroy@csgroup.eu
2020-12-04  powerpc/perf: MMCR0 control for PMU registers under PMCC=00  (Athira Rajeev)
PowerISA v3.1 introduces a new control bit (PMCCEXT) for restricting access to group B PMU registers in problem state when MMCR0 PMCC=0b00. In problem state and when MMCR0 PMCC=0b00, setting Monitor Mode Control Register bit 54 (MMCR0 PMCCEXT) will restrict read permission on the Group B Performance Monitor Registers (SIER, SIAR, SDAR and MMCR1). When this bit is set to zero, the group B registers remain readable. On other platforms (like power9), the older behaviour is retained, where group B PMU SPRs are readable.

This patch adds support for the MMCR0 PMCCEXT bit on power10 by enabling it during boot and in the PMU event enable/disable callback functions.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1606409684-1589-8-git-send-email-atrajeev@linux.vnet.ibm.com
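The gist of the change, sketched (the real code sets the bit in the MMCR image used by the enable/disable callbacks as well as at boot; MMCR0_PMCCEXT is the bit mask this series introduces):

	#include <asm/cputable.h>
	#include <asm/reg.h>

	/* On ISA v3.1 CPUs, keep MMCR0[PMCCEXT] set so problem state
	 * cannot read SIER/SIAR/SDAR/MMCR1 while PMCC=0b00. */
	static void pmu_restrict_group_b(void)
	{
		if (cpu_has_feature(CPU_FTR_ARCH_31))
			mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) | MMCR0_PMCCEXT);
	}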
2020-12-04  powerpc/book3s64/pkeys: Optimize KUAP and KUEP feature disabled case  (Aneesh Kumar K.V)
If FTR_BOOK3S_KUAP is disabled, the kernel will continue to run with the same AMR value with which it was entered. Hence there is a high chance that we can return without restoring the AMR value. This also helps the case when applications are not using the pkey feature: in this case, different applications will have the same AMR values and hence we can avoid restoring AMR in this case too. Also avoid isync() if not really needed. Do the same for IAMR.

null-syscall benchmark results:

With smap/smep disabled:
  Without patch: 957.95 ns, 2778.17 cycles
  With patch:    858.38 ns, 2489.30 cycles

With smap/smep enabled:
  Without patch: 1017.26 ns, 2950.36 cycles
  With patch:    1021.51 ns, 2962.44 cycles

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201127044424.40686-23-aneesh.kumar@linux.ibm.com
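The shape of the fast path, sketched (simplified from the interrupt-exit code; regs->amr exists only in configurations with the pkey/KUAP support this series adds):

	#include <asm/mmu.h>
	#include <asm/ptrace.h>
	#include <asm/reg.h>

	static void restore_user_amr(struct pt_regs *regs)
	{
		if (!mmu_has_feature(MMU_FTR_PKEY))
			return;

		/* Most tasks share the default AMR: skip the mtspr and the
		 * isync entirely when the live value already matches. */
		if (mfspr(SPRN_AMR) == regs->amr)
			return;

		isync();
		mtspr(SPRN_AMR, regs->amr);
		/* The context-synchronizing return to userspace orders this. */
	}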
2020-12-04  powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.  (Aneesh Kumar K.V)
Now that the kernel correctly stores/restores userspace AMR/IAMR values, avoid manipulating AMR and IAMR from the kernel on behalf of userspace.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201127044424.40686-15-aneesh.kumar@linux.ibm.com
2020-12-04  powerpc/ptrace-view: Use pt_regs values instead of thread_struct based one.  (Aneesh Kumar K.V)
We will remove thread.amr/iamr/uamor in a later patch.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201127044424.40686-14-aneesh.kumar@linux.ibm.com
2020-12-04  powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec  (Aneesh Kumar K.V)
On fork, we inherit from the parent; on exec, we should switch to the default_amr values. Also, avoid changing the AMR register value within the kernel. The kernel now runs with different AMR values.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201127044424.40686-13-aneesh.kumar@linux.ibm.com
2020-12-04  powerpc/book3s64/pkeys: Inherit correctly on fork.  (Aneesh Kumar K.V)
The child's thread.kuap value is inherited from the parent in copy_thread_tls. We still need to make sure that when the child returns from a fork in the kernel, it starts with the kernel default AMR value.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201127044424.40686-12-aneesh.kumar@linux.ibm.com
2020-12-04  powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel  (Aneesh Kumar K.V)
This prepares the kernel to operate with a different AMR/IAMR value than userspace. For this, AMR/IAMR need to be saved and restored on entry and return from the kernel.

With KUAP, we modify the kernel AMR when accessing user addresses from the kernel via the copy_to/from_user interfaces. We don't need to modify the IAMR value in a similar fashion.

If MMU_FTR_PKEY is enabled, we need to save AMR/IAMR in pt_regs on entering the kernel from userspace. If not, we can assume that AMR/IAMR is not modified from userspace.

We need to save the AMR if we have the MMU_FTR_BOOK3S_KUAP feature enabled and we are interrupted within the kernel. This is required so that if we get interrupted within copy_to/from_user we continue with the right AMR value. If we have MMU_FTR_BOOK3S_KUEP enabled, we need to restore the IAMR on return to userspace because the kernel will be running with a different IAMR value.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201127044424.40686-11-aneesh.kumar@linux.ibm.com
2020-12-04  powerpc/exec: Set thread.regs early during exec  (Aneesh Kumar K.V)
In later patches, during exec, we would like to access the default regs.amr to control access to the user mapping. Having thread.regs set early makes the code changes simpler.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201127044424.40686-10-aneesh.kumar@linux.ibm.com
2020-12-04  powerpc/book3s64/kuap/kuep: Add PPC_PKEY config on book3s64  (Aneesh Kumar K.V)
The config CONFIG_PPC_PKEY is used to select the base support that is required for PPC_MEM_KEYS, KUAP, and KUEP. Adding this dependency reduces the code complexity (in terms of #ifdefs) and enables us to move some of the initialization code to pkeys.c.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201127044424.40686-4-aneesh.kumar@linux.ibm.com
2020-12-04  powerpc/64s: Remove "Host" from MCE logging  (Nicholas Piggin)
A "Host"-caused machine check is printed when the kernel sees an MCE hit in this kernel or userspace, and "Guest" if it hit one of its guests. This is confusing: when a guest kernel handles a hypervisor-delivered MCE, it also prints "Host".

Just remove "Host". "Guest" is adequate to make the distinction.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201128070728.825934-8-npiggin@gmail.com
2020-12-04  powerpc/64s/pseries: Add ERAT specific machine check handler  (Nicholas Piggin)
Don't treat ERAT MCEs as SLB MCEs: don't save the SLB, and use a specific ERAT flush to recover.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201128070728.825934-7-npiggin@gmail.com
2020-12-04  powerpc/64s/powernv: Allow KVM to handle guest machine check details  (Nicholas Piggin)
KVM has strategies to perform machine check recovery. If an MCE hits in a guest, have the low-level handler just decode and save the MCE, but not try to recover anything, so KVM can deal with it.

The host does not own the SLBs and does not need to report the SLB state in case of a multi-hit, for example, or know about the virtual memory map of the guest.

UE and memory poisoning of guest pages in the host is one thing that is possibly not completely robust at the moment, but this too needs to go via KVM (possibly via the guest and back out to the host via hcall) rather than being handled at a low level in the host handler.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201128070728.825934-3-npiggin@gmail.com