2020-12-04  powerpc/perf: Invoke per-CPU variable access with disabled interrupts  (Athira Rajeev)

The power_pmu_event_init() callback accesses the per-cpu variable (cpu_hw_events) to check for event constraints and Branch Stack (BHRB). The current code disables preemption while accessing the per-cpu variable, but that does not prevent a timer callback from interrupting event_init. Fix this by using local_irq_save/restore to make sure the code path runs with interrupts disabled.

This change was tested in the mambo simulator to ensure that, if a timer interrupt comes in during the per-cpu access in event_init, it is soft masked and replayed later. For testing purposes, a udelay() was introduced in power_pmu_event_init() to make sure a timer interrupt arrives while in the per-cpu variable access code between local_irq_save/restore. As expected, the timer interrupt was replayed later during the local_irq_restore() called from power_pmu_event_init(). This was confirmed by setting a breakpoint in mambo and checking the backtrace when timer_interrupt was hit.

Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1606814880-1720-1-git-send-email-atrajeev@linux.vnet.ibm.com

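The pattern is the standard one for per-cpu data that interrupt context can also touch; a minimal sketch of the shape of the fix (simplified, the constraint-check body is illustrative, not the actual power_pmu_event_init()):

	static int power_pmu_event_init(struct perf_event *event)
	{
		struct cpu_hw_events *cpuhw;
		unsigned long irq_flags;
		int err = 0;

		/*
		 * Interrupts off: a timer interrupt can no longer run in
		 * the middle of the constraint/BHRB checks on this CPU.
		 */
		local_irq_save(irq_flags);
		cpuhw = this_cpu_ptr(&cpu_hw_events);

		/* ... check event constraints and BHRB state in cpuhw ... */

		local_irq_restore(irq_flags);
		return err;
	}

Disabling preemption only pins the task to the CPU; it does not stop an interrupt handler on that same CPU from touching cpu_hw_events, which is why local_irq_save() is required.
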
2020-12-04  selftests/powerpc: Fix uninitialized variable warning  (Harish)

Fix an uninitialized variable warning in the bad_accesses test which causes the selftests build to fail on older distributions:

  bad_accesses.c: In function ‘bad_access’:
  bad_accesses.c:52:9: error: ‘x’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
     printf("Bad - no SEGV! (%c)\n", x);
         ^
  cc1: all warnings being treated as errors

Signed-off-by: Harish <harish@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201201092403.238182-1-harish@linux.ibm.com

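The usual fix for this class of maybe-uninitialized warning is to give the variable a benign initial value; presumably the change in bad_accesses.c is along these lines (condensed from the test, the initializer value is an assumption):

	static int bad_access(char *p)
	{
		char x = 0;	/* initialized so older GCCs don't warn */

		x = *p;		/* expected to SEGV before x is ever read */
		printf("Bad - no SEGV! (%c)\n", x);
		return 1;
	}
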
2020-12-04  selftests/powerpc: update .gitignore  (Daniel Axtens)

I did an in-place build of the selftests and found that it left the tree dirty. Add the missed test binaries to .gitignore.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201201144427.1228745-1-dja@axtens.net

2020-12-04  powerpc/feature-fixups: use a semicolon rather than a comma  (Daniel Axtens)

In a bunch of our security flushes, we use a comma rather than a semicolon to 'terminate' an assignment. Nothing breaks, but checkpatch picks it up if you copy it into another flush. Switch to semicolons for ending statements.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201201144344.1228421-1-dja@axtens.net

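For illustration, the pattern looks like this (array name and values are made up, not the actual fixup code):

	/* Before: the comma chains both assignments into one statement */
	instrs[0] = 0x60000000,	/* legal C, but checkpatch complains */
	instrs[1] = 0x60000000;

	/* After: each assignment is its own statement */
	instrs[0] = 0x60000000;
	instrs[1] = 0x60000000;
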
2020-12-04  powerpc/pseries: Define PCI bus speed for Gen4 and Gen5  (Frederic Barrat)

Update the bus speed definitions for PCI Gen4 and Gen5.

Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201130152949.26467-1-fbarrat@linux.ibm.com

2020-12-04  powerpc: Allow relative pointers in bug table entries  (Jordan Niethe)

This enables GENERIC_BUG_RELATIVE_POINTERS on Power so that 32-bit offsets are stored in the bug entries rather than 64-bit pointers. While this doesn't save space on 32-bit machines, use it anyway so there is only one code path.

Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201201005203.15210-1-jniethe5@gmail.com

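The mechanics, sketched along the lines of the generic implementation in include/asm-generic/bug.h and lib/bug.c (condensed; treat the exact field layout as an assumption):

	struct bug_entry {
	#ifndef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
		unsigned long	bug_addr;	/* absolute: 8 bytes on ppc64 */
	#else
		signed int	bug_addr_disp;	/* offset from the entry: 4 bytes */
	#endif
		/* file/line/flags follow */
	};

	/* recovering the trap address from an entry */
	static unsigned long bug_addr(const struct bug_entry *bug)
	{
	#ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
		return (unsigned long)bug + bug->bug_addr_disp;
	#else
		return bug->bug_addr;
	#endif
	}
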
2020-12-04  powerpc/xmon: Fix build failure for 8xx  (Ravi Bangoria)

With CONFIG_PPC_8xx and CONFIG_XMON set, the kernel build fails with:

  arch/powerpc/xmon/xmon.c:1379:12: error: 'find_free_data_bpt' defined but not used [-Werror=unused-function]

Fix it by enclosing find_free_data_bpt() inside #ifndef CONFIG_PPC_8xx.

Fixes: 30df74d67d48 ("powerpc/watchpoint/xmon: Support 2nd DAWR")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201130034406.288047-1-ravi.bangoria@linux.ibm.com

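The shape of the fix, sketched (the function body here is a placeholder, only the guard is the point):

	#ifndef CONFIG_PPC_8xx
	/* only referenced by breakpoint code that isn't built for 8xx */
	static int find_free_data_bpt(void)
	{
		/* ... scan the data breakpoint slots for a free one ... */
		return -1;	/* placeholder body for this sketch */
	}
	#endif
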
2020-12-04  powerpc: Use common STABS_DEBUG and DWARF_DEBUG and ELF_DETAILS macro  (Youling Tang)

Use the common STABS_DEBUG, DWARF_DEBUG and ELF_DETAILS macro rules for the linker script.

Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1606460857-2723-1-git-send-email-tangyouling@loongson.cn

2020-12-04  powerpc/64: Fix an EMIT_BUG_ENTRY in head_64.S  (Jordan Niethe)

Commit 63ce271b5e37 ("powerpc/prom: convert PROM_BUG() to standard trap") added an EMIT_BUG_ENTRY for the trap after the branch to start_kernel(). The EMIT_BUG_ENTRY was for the address "0b", however the trap was not labelled with "0". Hence the address used for the bug entry is in relative_toc(), where the previous "0" label is. Label the trap as "0" so the correct address is used.

Fixes: 63ce271b5e37 ("powerpc/prom: convert PROM_BUG() to standard trap")
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201130004404.30953-1-jniethe5@gmail.com

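In assembler local-label terms, "0b" binds to the nearest preceding "0:", so the fixed sequence presumably looks like this (surrounding head_64.S code elided):

	0:	trap
		EMIT_BUG_ENTRY 0b, __FILE__, __LINE__, 0

Without the "0:" directly on the trap, "0b" resolved to an earlier "0:" back in relative_toc(), so the bug table entry pointed at the wrong instruction.
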
2020-12-04  powerpc/vdso: Cleanup vdso.h  (Christophe Leroy)

Rename the guard define to _ASM_POWERPC_VDSO_H and remove the useless #ifdef __KERNEL__.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9902590d410cd1c2afa48b83b277faf0711f07b2.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove VDSO32_LBASE and VDSO64_LBASE  (Christophe Leroy)

VDSO32_LBASE and VDSO64_LBASE are 0. Remove them to simplify the code.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6c4d6570d886bbe1cc471e8ca01602e4b4d9beb5.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove DBG()  (Christophe Leroy)

DBG() is not used anymore. Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e11a9b50e709f197bb3aa2ed1d80d2dee8714afc.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove vdso_ready  (Christophe Leroy)

There is no way to get out of vdso_init() prematurely anymore. Remove vdso_ready as it will always be 1.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0e1e18c6329b848aa3edeeba76509b4d76182e7d.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove vdso_setup()  (Christophe Leroy)

vdso_fixup_features() cannot fail anymore and is the only function called by vdso_setup(). vdso_setup() has become trivial and can be removed.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/11522eec6140f510a8c89c63cbb739277d097fdc.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove lib32_elfinfo and lib64_elfinfo  (Christophe Leroy)

lib32_elfinfo and lib64_elfinfo are not used anymore, remove them. Also remove vdso32_kbase and vdso64_kbase while removing their last use.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/01ac65abf22f0428f8f764525a7d84459c54d806.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove symbol section information in struct lib32/64_elfinfo  (Christophe Leroy)

The members related to the symbol section in struct lib32_elfinfo and struct lib64_elfinfo are not used anymore, remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b779e5b7cc0354e2f87fd407fe5b02f4a8a73825.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove unused text member in struct lib32/64_elfinfo  (Christophe Leroy)

The text member in struct lib32_elfinfo and struct lib64_elfinfo is not used, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f53dcc9bb1946a7854d15b34d03d3d2e2003848c.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove vdso_patches[] and associated functions  (Christophe Leroy)

vdso_patches[] is now empty; remove it and remove all functions that depend on it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/27d75debd6e4ddeaffe1d66ffed1e7526684a004.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove runtime generated sigtramp offsets  (Christophe Leroy)

Signal trampoline offsets are now generated at buildtime. Runtime generated offsets are not used anymore, remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7c192d35a437151837cf4c48aeccb42380d6daac.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove __kernel_datapage_offset  (Christophe Leroy)

__kernel_datapage_offset is not used anymore, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ddb5c746bec4e1a026d7c85243213a1876ef844f.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove vdso32_pages and vdso64_pages  (Christophe Leroy)

vdso32_pages and vdso64_pages are not used anymore. Remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bce021f616cbaf39dfb5766cf7ef114adcb918d9.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Merge __kernel_sync_dicache_p5() into __kernel_sync_dicache()  (Christophe Leroy)

__kernel_sync_dicache_p5() is an alternative to __kernel_sync_dicache() for when the CPU has CPU_FTR_COHERENT_ICACHE. Remove this alternative function and merge __kernel_sync_dicache_p5() into __kernel_sync_dicache() using a standard CPU feature fixup.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4c7dcc6544882761b2b0249d7a8ec2c3a8088cb5.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Use builtin symbols to locate fixup section  (Christophe Leroy)

Add builtin symbols to locate the fixup section and use them instead of locating sections through ELF headers at runtime.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2954526981859ca1ccfcfc7a7c4263920e9ddfcb.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Retrieve sigtramp offsets at buildtime  (Christophe Leroy)

This is copied from arm64. Instead of using runtime generated signal trampoline offsets, get the offsets at buildtime. If the said trampoline doesn't exist, the build will fail, so there is no need to check in the VDSO whether the trampoline exists or not.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f8bfd6812c3e3678b1cdb4d55a52f9eb022b40d3.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove unused \tmp param in __get_datapage()  (Christophe Leroy)

The \tmp param is not used anymore, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4b13f897dcccce8ae03c031a4598cf26b32e2f1c.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Simplify __get_datapage()  (Christophe Leroy)

The VDSO datapage and the text pages are always located immediately next to each other, so the offset can be hardcoded without an indirection through __kernel_datapage_offset.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b08f5ef99d64cfc38f79b7ad5310d9b4d2479eeb.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Move vdso datapage up front  (Christophe Leroy)

Move the vdso datapage in front of the VDSO area, before the VDSO text. This will allow removing the __kernel_datapage_offset symbol and simplifying __get_datapage() in following patches.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b68c99b6e8ee0b1d99bfa4c7e34c359fc1bc1000.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Replace vdso_base by vdso  (Christophe Leroy)

All other architectures but s390 use a void pointer named 'vdso' to reference the VDSO mapping. In a following patch, the VDSO data page will be put in front of the text, so vdso_base will no longer point to the VDSO text. To avoid confusion between vdso_base and the VDSO text, rename vdso_base to vdso and make it a void __user *.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8e6cefe474aa4ceba028abb729485cd46c140990.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Provide vdso_remap()  (Christophe Leroy)

Provide vdso_remap() through _install_special_mapping() and drop arch_remap(). This adds a test of the size and returns -EINVAL if the size is not correct.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/373c66f768fa9cc8890f3b55462209a98c522326.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Move to _install_special_mapping() and remove arch_vma_name()  (Christophe Leroy)

Copied from commit 2fea7f6c98f5 ("arm64: vdso: move to _install_special_mapping and remove arch_vma_name"). Use the new _install_special_mapping() API added by commit a62c34bd2a8a ("x86, mm: Improve _install_special_mapping and fix x86 vdso naming"), which obsoletes install_special_mapping(), and remove arch_vma_name() as the name is handled by the new API.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: kernel test robot <lkp@intel.com>
[mpe: Squash fix to use PTR_ERR_OR_ZERO() from lkp]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e7e5dfe0f93234e31051f2a610b4b07f50b0082f.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Simplify arch_setup_additional_pages() exit  (Christophe Leroy)

To simplify arch_setup_additional_pages() exit, rename it __arch_setup_additional_pages() and create a caller arch_setup_additional_pages() which does the locking.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/603c1d039d3f928ee95e547fcd2219fcf4c3b514.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Use VDSO size in arch_setup_additional_pages()  (Christophe Leroy)

In arch_setup_additional_pages(), instead of using the number of VDSO pages and recalculating the VDSO size, directly use the VDSO size. As vdso_ready is set, vdso_pages can't be 0, so just remove the test.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4edfa548c3885a430b765335dc720105716e273f.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove unnecessary ifdefs in vdso_pagelist initialization  (Christophe Leroy)

No need for all those #ifdefs around the pagelist initialisation; use IS_ENABLED(), and GCC will kick out the unused static variables.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f9333432e329b1fcbbbf846cb1cd4a1c4127a60b.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Refactor 32 bits and 64 bits pages setup  (Christophe Leroy)

The setup of VDSO pages is identical for the 32-bit VDSO and the 64-bit VDSO. Refactor that setup, and use &vdsoXX_start, which is a synonym of vdsoXX_kbase.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/269ffb54c37fc1d46128f77d7a39f88ef4a9957d.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove NULL termination element in vdso_pagelist  (Christophe Leroy)

No need for a NULL last element in the pagelists; install_special_mapping() knows how long the list is. Remove that element.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e58d95ab859e3cbc9bae3c9ce2959e17d2864f5d.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Remove get_page() in vdso_pagelist initialization  (Christophe Leroy)

Partly copied from commit 16fb1a9bec61 ("arm64: vdso: clean up vdso_pagelist initialization"). No need to get_page() the vdso text/data; these are part of the kernel image.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9d14540bd10832b6c9519d74fb5728fdc4974b36.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Rename syscall_map_32/64 to simplify vdso_setup_syscall_map()  (Christophe Leroy)

Today the vdso_data structure has:
- syscall_map_32[] and syscall_map_64[] on PPC64
- syscall_map_32[] on PPC32

On PPC32, syscall_map_32[] is populated using sys_call_table[]. On PPC64, syscall_map_64[] is populated using sys_call_table[] and syscall_map_32[] is populated using compat_sys_call_table[].

To simplify vdso_setup_syscall_map():
- On PPC32, rename syscall_map_32[] to syscall_map[],
- On PPC64, rename syscall_map_64[] to syscall_map[],
- On PPC64, rename syscall_map_32[] to compat_syscall_map[].

That way, syscall_map[] gets populated using sys_call_table[] and compat_syscall_map[] gets populated using compat_sys_call_table[]. Also define an empty compat_syscall_map[] on PPC32 to avoid ifdefs.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/472734be0d9991eee320a06824219a5b2663736b.1601197618.git.christophe.leroy@csgroup.eu

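With that rename, the setup loop can treat both maps uniformly; a plausible sketch of the simplified function, using the names from the commit (the bit layout mirrors the existing 32-bit-word map convention and is an assumption):

	static void __init vdso_setup_syscall_map(void)
	{
		unsigned int i;

		for (i = 0; i < NR_syscalls; i++) {
			if (sys_call_table[i] != (unsigned long)&sys_ni_syscall)
				vdso_data->syscall_map[i >> 5] |= 0x80000000UL >> (i & 0x1f);
			if (IS_ENABLED(CONFIG_COMPAT) &&
			    compat_sys_call_table[i] != (unsigned long)&sys_ni_syscall)
				vdso_data->compat_syscall_map[i >> 5] |= 0x80000000UL >> (i & 0x1f);
		}
	}
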
2020-12-04  powerpc/vdso: Add missing includes and clean vdso_setup_syscall_map()  (Christophe Leroy)

Instead of including extern references locally in vdso_setup_syscall_map(), add the missing headers. sys_ni_syscall() being a function, cast its address to an unsigned long instead of declaring it as a fake unsigned long object. At the same time, remove a comment which paraphrases the function name.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b4afedce748ed2858299ceab5ae29b52109263ef.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/vdso: Stripped VDSO is not needed, don't build it  (Christophe Leroy)

Since commit 24b659a13866 ("powerpc: Use unstripped VDSO image for more accurate profiling data"), only the unstripped VDSO image has been used. Partially revert commit 8150caad0226 ("[POWERPC] powerpc vDSO: install unstripped copies on disk") to avoid building the stripped version. And the unstripped version in $(MODLIB)/vdso/ is not required anymore, as it is the one embedded in the kernel image.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5986ca25be44fe6e9790486304507f240077d8c4.1601197618.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/signal32: Transform save_user_regs() and save_tm_user_regs() in 'unsafe' version  (Christophe Leroy)

Change those two functions to be used within a user access block. For that, change save_general_regs() to an unsafe_save_general_regs(), then replace all user accesses by unsafe_ versions.

This series leads to a reduction from 2.55s to 1.73s of the system CPU time with the following microbench app on an mpc832x with KUAP (approx 32%). Without KUAP, the difference is in the noise.

	void sigusr1(int sig) { }

	int main(int argc, char **argv)
	{
		int i = 100000;

		signal(SIGUSR1, sigusr1);
		for (;i--;)
			raise(SIGUSR1);
		exit(0);
	}

An additional 0.10s reduction is achieved by removing CONFIG_PPC_FPU, as the mpc832x has no FPU.

A bit less spectacular on an 8xx as KUAP is less heavy: prior to the series (with KUAP) it ran in 8.10s. Once the removal of FPU regs handling is applied, we get 7.05s. With the full series, we get 6.9s. If artificially re-activating FPU regs handling with the full series, we get 7.6s. So for the 8xx, the removal of the FPU regs copy is what makes the difference, but the rework of handle_signal also has a benefit. As above, without KUAP the difference is in the noise.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Fixup typo in SPE handling]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c7b37b385ccf9666066452e58f018a86573f83e8.1597770847.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/signal32: Isolate non-copy actions in save_user_regs() and save_tm_user_regs()  (Christophe Leroy)

Reorder actions in save_user_regs() and save_tm_user_regs() to regroup copies together in order to switch to user_access_begin() logic in a later patch. Move non-copy actions into new functions called prepare_save_user_regs() and prepare_save_tm_user_regs().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f6eac65781b4a57220477c8864bca2b57f29a5d5.1597770847.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/signal: Create 'unsafe' versions of copy_[ck][fpr/vsx]_to_user()  (Christophe Leroy)

For the non-VSX version, that's trivial: just use unsafe_copy_to_user() instead of __copy_to_user(). For the VSX version, remove the intermediate step through a buffer and use unsafe_put_user() directly. This generates far smaller code which is acceptable to inline, see below.

Standard VSX version:

	0000000000000000 <.copy_fpr_to_user>:
	   0:	7c 08 02 a6 	mflr    r0
	   4:	fb e1 ff f8 	std     r31,-8(r1)
	   8:	39 00 00 20 	li      r8,32
	   c:	39 24 0b 80 	addi    r9,r4,2944
	  10:	7d 09 03 a6 	mtctr   r8
	  14:	f8 01 00 10 	std     r0,16(r1)
	  18:	f8 21 fe 71 	stdu    r1,-400(r1)
	  1c:	39 41 00 68 	addi    r10,r1,104
	  20:	e9 09 00 00 	ld      r8,0(r9)
	  24:	39 4a 00 08 	addi    r10,r10,8
	  28:	39 29 00 10 	addi    r9,r9,16
	  2c:	f9 0a 00 00 	std     r8,0(r10)
	  30:	42 00 ff f0 	bdnz    20 <.copy_fpr_to_user+0x20>
	  34:	e9 24 0d 80 	ld      r9,3456(r4)
	  38:	3d 42 00 00 	addis   r10,r2,0
				3a: R_PPC64_TOC16_HA	.toc
	  3c:	eb ea 00 00 	ld      r31,0(r10)
				3e: R_PPC64_TOC16_LO_DS	.toc
	  40:	f9 21 01 70 	std     r9,368(r1)
	  44:	e9 3f 00 00 	ld      r9,0(r31)
	  48:	81 29 00 20 	lwz     r9,32(r9)
	  4c:	2f 89 00 00 	cmpwi   cr7,r9,0
	  50:	40 9c 00 18 	bge     cr7,68 <.copy_fpr_to_user+0x68>
	  54:	4c 00 01 2c 	isync
	  58:	3d 20 40 00 	lis     r9,16384
	  5c:	79 29 07 c6 	rldicr  r9,r9,32,31
	  60:	7d 3d 03 a6 	mtspr   29,r9
	  64:	4c 00 01 2c 	isync
	  68:	38 a0 01 08 	li      r5,264
	  6c:	38 81 00 70 	addi    r4,r1,112
	  70:	48 00 00 01 	bl      70 <.copy_fpr_to_user+0x70>
				70: R_PPC64_REL24	.__copy_tofrom_user
	  74:	60 00 00 00 	nop
	  78:	e9 3f 00 00 	ld      r9,0(r31)
	  7c:	81 29 00 20 	lwz     r9,32(r9)
	  80:	2f 89 00 00 	cmpwi   cr7,r9,0
	  84:	40 9c 00 18 	bge     cr7,9c <.copy_fpr_to_user+0x9c>
	  88:	4c 00 01 2c 	isync
	  8c:	39 20 ff ff 	li      r9,-1
	  90:	79 29 00 44 	rldicr  r9,r9,0,1
	  94:	7d 3d 03 a6 	mtspr   29,r9
	  98:	4c 00 01 2c 	isync
	  9c:	38 21 01 90 	addi    r1,r1,400
	  a0:	e8 01 00 10 	ld      r0,16(r1)
	  a4:	eb e1 ff f8 	ld      r31,-8(r1)
	  a8:	7c 08 03 a6 	mtlr    r0
	  ac:	4e 80 00 20 	blr

'unsafe' simulated VSX version (the ... are only nops) using the unsafe_copy_fpr_to_user() macro:

	unsigned long copy_fpr_to_user(void __user *to, struct task_struct *task)
	{
		unsafe_copy_fpr_to_user(to, task, failed);
		return 0;

	failed:
		return 1;
	}

	0000000000000000 <.copy_fpr_to_user>:
	   0:	39 00 00 20 	li      r8,32
	   4:	39 44 0b 80 	addi    r10,r4,2944
	   8:	7d 09 03 a6 	mtctr   r8
	   c:	7c 69 1b 78 	mr      r9,r3
	  ...
	  20:	e9 0a 00 00 	ld      r8,0(r10)
	  24:	f9 09 00 00 	std     r8,0(r9)
	  28:	39 4a 00 10 	addi    r10,r10,16
	  2c:	39 29 00 08 	addi    r9,r9,8
	  30:	42 00 ff f0 	bdnz    20 <.copy_fpr_to_user+0x20>
	  34:	e9 24 0d 80 	ld      r9,3456(r4)
	  38:	f9 23 01 00 	std     r9,256(r3)
	  3c:	38 60 00 00 	li      r3,0
	  40:	4e 80 00 20 	blr
	  ...
	  50:	38 60 00 01 	li      r3,1
	  54:	4e 80 00 20 	blr

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/29f6c4b8e7a5bbc61e6a8801b78bbf493f9f819e.1597770847.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/signal32: Switch swap_context() to user_access_begin() logic  (Christophe Leroy)

As this was the last user of put_sigset_t(), remove it as well.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c3ac4f2d134a3391bb51bdaa2d00e9a409aba9f8.1597770847.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/signal32: Add and use unsafe_put_sigset_t()  (Christophe Leroy)

put_sigset_t() calls copy_to_user() for copying two words, which is terribly inefficient. By switching to unsafe_put_user(), we end up with something as simple as:

	 3cc:	81 3d 00 00 	lwz     r9,0(r29)
	 3d0:	91 26 00 b4 	stw     r9,180(r6)
	 3d4:	81 3d 00 04 	lwz     r9,4(r29)
	 3d8:	91 26 00 b8 	stw     r9,184(r6)

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/06def97e87ac1c4ae8e3197e0982e1fab7b3c8ae.1597770847.git.christophe.leroy@csgroup.eu

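A sketch of what the helper plausibly looks like (assuming it wraps unsafe_copy_to_user(), which for small constant sizes compiles down to direct word stores like the ones shown above; the exact macro shape is an assumption):

	#define unsafe_put_sigset_t(uset, set, label) do {		\
		sigset_t __user *__us = uset;				\
		const sigset_t *__s = set;				\
									\
		unsafe_copy_to_user(__us, __s, sizeof(*__us), label);	\
	} while (0)
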
2020-12-04  signal: Add unsafe_put_compat_sigset()  (Christophe Leroy)

Implement an 'unsafe' version of put_compat_sigset(). For big endian, use unsafe_put_user() directly to avoid an intermediate copy through the stack. For little endian, use a straight unsafe_copy_to_user().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/537c7082ee309a0bb9c67a50c5d9dd929aedb82d.1597770847.git.christophe.leroy@csgroup.eu

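A sketch of the shape, condensed to a single sigset word for brevity (the real helper walks all _NSIG_WORDS; treat the exact field indexing as an assumption):

	#ifdef __BIG_ENDIAN
	#define unsafe_put_compat_sigset(compat, set, label) do {		\
		compat_sigset_t __user *__c = compat;				\
		const sigset_t *__s = set;					\
										\
		/* one 64-bit kernel word feeds two 32-bit compat words */	\
		unsafe_put_user(__s->sig[0] >> 32, &__c->sig[1], label);	\
		unsafe_put_user(__s->sig[0], &__c->sig[0], label);		\
	} while (0)
	#else
	#define unsafe_put_compat_sigset(compat, set, label)			\
		unsafe_copy_to_user(compat, set, sizeof(compat_sigset_t), label)
	#endif
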
2020-12-04  powerpc/signal32: Remove ifdefery in middle of if/else  (Christophe Leroy)

MSR_TM_ACTIVE() is always defined and always returns 0 when CONFIG_PPC_TRANSACTIONAL_MEM is not selected, so the awful ifdefery in the middle of an if/else can be removed.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f3c36d687e4228f58d5c207a4036aa9ddcc7420a.1597770847.git.christophe.leroy@csgroup.eu

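The shape of the cleanup (the branch bodies are placeholders, not the actual signal code):

	/* Before: an #ifdef splices the else branch together */
	#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
		if (MSR_TM_ACTIVE(regs->msr)) {
			/* save transactional state */
		} else
	#endif
		{
			/* save regular state */
		}

	/*
	 * After: without TM, MSR_TM_ACTIVE() folds to 0 at compile
	 * time and the compiler drops the dead branch entirely.
	 */
		if (MSR_TM_ACTIVE(regs->msr)) {
			/* save transactional state */
		} else {
			/* save regular state */
		}
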
2020-12-04  powerpc/signal32: Switch handle_rt_signal32() to user_access_begin() logic  (Christophe Leroy)

In the same way as handle_signal32(), replace all user accesses with equivalent unsafe_ versions, and move the trampoline code icache flush outside the user access block. Functions that have no unsafe_ equivalent also remain outside the access block.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2974314226256f958e2984912b48883ef1754185.1597770847.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/signal32: Switch handle_signal32() to user_access_begin() logic  (Christophe Leroy)

Replace the access_ok() by user_access_begin() and change all user accesses to their unsafe_ versions. Move flush_icache_range() outside the user access block.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a27797f781aa00da96f8284c898173d18e952361.1597770847.git.christophe.leroy@csgroup.eu

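These conversions all follow the same pattern; a minimal sketch, not the actual handle_signal32() body (frame field names are illustrative):

	if (!user_access_begin(frame, sizeof(*frame)))
		goto badframe;
	/* unsafe_* accessors skip the per-access access_ok()/KUAP toggling */
	unsafe_put_user(regs->gpr[1], &frame->sp_slot, failed);
	unsafe_put_user(regs->nip, &frame->nip_slot, failed);
	user_access_end();

	/* work with no unsafe_ equivalent stays outside the block */
	flush_icache_range(tramp, tramp + 2 * sizeof(unsigned long));
	return 0;

	failed:
		user_access_end();
	badframe:
		return 1;

Keeping non-copy work like the icache flush outside the block minimizes the window during which user access (KUAP) is open.
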
2020-12-04  powerpc/signal32: Move signal trampoline setup to handle_[rt_]signal32  (Christophe Leroy)

Move the signal trampoline setup into handle_signal32() and handle_rt_signal32(). At the same time, remove the define which hides the mc_pad field used for the trampoline.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e439cc0fa35aa45da6776520777a61848b92fd4b.1597770847.git.christophe.leroy@csgroup.eu

2020-12-04  powerpc/signal32: Misc changes to make handle_[rt_]_signal32() more similar  (Christophe Leroy)

Miscellaneous changes to clean up handle_signal32() and handle_rt_signal32() and make them even more similar.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/df0bc8c3b8fa96390c46f611df79b2a94ac21844.1597770847.git.christophe.leroy@csgroup.eu