2022-05-19ASoC: SOF: ipc3-dtrace: Move host ops wrappers from generic header to privatePeter Ujfalusi
Move the snd_sof_dma_trace_* ops wrappers from ops.h to ipc3-priv.h since they are not used outside of IPC3 code. While moving, rename them to sof_dtrace_host_* Signed-off-by: Peter Ujfalusi <peter.ujfalusi@linux.intel.com> Reviewed-by: Paul Olaru <paul.olaru@oss.nxp.com> Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Reviewed-by: Bard Liao <yung-chuan.liao@linux.intel.com> Reviewed-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> Link: https://lore.kernel.org/r/20220516104711.26115-6-peter.ujfalusi@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2022-05-19ASoC: SOF: Switch to IPC generic firmware tracingPeter Ujfalusi
Introduce new, generic API for firmware tracing with sof_fw_trace_ prefix and switch to use it. At the same time the old IPC3 code can be dropped from trace.c, which is now a generic wrapper for the firmware tracing ops. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@linux.intel.com> Reviewed-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> Reviewed-by: Bard Liao <yung-chuan.liao@linux.intel.com> Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Link: https://lore.kernel.org/r/20220516104711.26115-5-peter.ujfalusi@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2022-05-19ASoC: SOF: Clone the trace code to ipc3-dtrace as fw_tracing implementationPeter Ujfalusi
The existing trace.c file is implementing the IPC3 dma-trace support. Clone the existing code with prefix fixes as ipc3 fw_tracing implementation to be used when the core is converted to use generic ops for firmware tracing. Drop the dual licensing of the content as the implementation is based on debugfs. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@linux.intel.com> Reviewed-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> Reviewed-by: Bard Liao <yung-chuan.liao@linux.intel.com> Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Link: https://lore.kernel.org/r/20220516104711.26115-4-peter.ujfalusi@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2022-05-19ASoC: SOF: Rename dtrace_is_supported flag to fw_trace_is_supportedPeter Ujfalusi
Rename the internal flag so that its use is not limited to the dma-trace, but covers generic firmware tracing functionality. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@linux.intel.com> Reviewed-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> Reviewed-by: Bard Liao <yung-chuan.liao@linux.intel.com> Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Link: https://lore.kernel.org/r/20220516104711.26115-3-peter.ujfalusi@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2022-05-19ASoC: SOF: Introduce IPC independent ops for firmware tracing supportPeter Ujfalusi
The current (dma-)trace is only supported with IPC3; it is not available when IPC4 is used. Introduce IPC-independent ops for firmware tracing so that each IPC version can provide its own implementation. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@linux.intel.com> Reviewed-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> Reviewed-by: Bard Liao <yung-chuan.liao@linux.intel.com> Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Link: https://lore.kernel.org/r/20220516104711.26115-2-peter.ujfalusi@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
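A rough picture of what such an IPC-independent ops table could look like; the struct and member names below are illustrative assumptions, not the actual SOF driver definitions.

struct snd_sof_dev;

/* Hypothetical IPC-agnostic firmware-tracing ops; names are assumptions. */
struct sof_fw_tracing_ops {
        int (*init)(struct snd_sof_dev *sdev);         /* allocate buffers, start tracing */
        int (*free)(struct snd_sof_dev *sdev);         /* tear down on removal */
        int (*fw_crashed)(struct snd_sof_dev *sdev);   /* dump whatever the DSP left behind */
        int (*resume)(struct snd_sof_dev *sdev);
};

/* The core then calls through small generic wrappers instead of
 * IPC3-specific functions, e.g.: */
static int sof_fw_trace_init(struct snd_sof_dev *sdev,
                             const struct sof_fw_tracing_ops *ops)
{
        return ops && ops->init ? ops->init(sdev) : 0;
}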
2022-05-19random: unify batched entropy implementationsJason A. Donenfeld
There are currently two separate batched entropy implementations, for u32 and u64, with nearly identical code, with the goal of avoiding unaligned memory accesses and letting the buffers be used more efficiently. Having to maintain these two functions independently is a bit of a hassle though, considering that they always need to be kept in sync. This commit factors them out into a type-generic macro, so that the expansion produces the same code as before, such that diffing the assembly shows no differences. This will also make it easier in the future to add u16 and u8 batches. This was initially tested using an always_inline function and letting gcc constant fold the type size in, but the code gen was less efficient, and in general it was more verbose and harder to follow. So this patch goes with the boring macro solution, similar to what's already done for the _wait functions in random.h. Cc: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
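As an illustration of the type-generic macro pattern described above, here is a minimal, self-contained userspace sketch; the buffer size, locking, and refill path of the real random.c are deliberately omitted and fill_random() is only a stand-in.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void fill_random(void *buf, size_t len)
{
        unsigned char *p = buf;
        while (len--)
                *p++ = rand() & 0xff;   /* placeholder entropy source */
}

#define DEFINE_BATCHED_ENTROPY(type)                                    \
static struct {                                                         \
        type entropy[64 / sizeof(type)];                                \
        unsigned int position;                                          \
} batch_##type = { .position = 64 / sizeof(type) };                     \
                                                                        \
static type get_random_##type(void)                                     \
{                                                                       \
        if (batch_##type.position >= 64 / sizeof(type)) {               \
                fill_random(batch_##type.entropy,                       \
                            sizeof(batch_##type.entropy));              \
                batch_##type.position = 0;                              \
        }                                                               \
        return batch_##type.entropy[batch_##type.position++];          \
}

DEFINE_BATCHED_ENTROPY(uint32_t)        /* emits get_random_uint32_t() */
DEFINE_BATCHED_ENTROPY(uint64_t)        /* emits get_random_uint64_t() */

int main(void)
{
        printf("%u %llu\n", (unsigned)get_random_uint32_t(),
               (unsigned long long)get_random_uint64_t());
        return 0;
}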
2022-05-19random: move randomize_page() into mm where it belongsJason A. Donenfeld
randomize_page is an mm function. It is documented like one. It contains the history of one. It has the naming convention of one. It looks just like another very similar function in mm, randomize_stack_top(). And it has always been maintained and updated by mm people. There is no need for it to be in random.c. In the "which shape does not look like the other ones" test, pointing to randomize_page() is correct. So move randomize_page() into mm/util.c, right next to the similar randomize_stack_top() function. This commit contains no actual code changes. Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-19random: remove mostly unused async readiness notifierJason A. Donenfeld
The register_random_ready_notifier() notifier is somewhat complicated, and was already recently rewritten to use notifier blocks. It is only used now by one consumer in the kernel, vsprintf.c, for which the async mechanism is really overly complex for what it actually needs. This commit removes register_random_ready_notifier() and unregister_random_ready_notifier(), because it just adds complication with little utility, and changes vsprintf.c to just check on `!rng_is_initialized() && !rng_has_arch_random()`, which will eventually be true. Performance-wise, that code was already using a static branch, so there's basically no overhead at all to this change. Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Sergey Senozhatsky <senozhatsky@chromium.org> Acked-by: Petr Mladek <pmladek@suse.com> # for vsprintf.c Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-19random: remove get_random_bytes_arch() and add rng_has_arch_random()Jason A. Donenfeld
The RNG incorporates RDRAND into its state at boot and every time it reseeds, so there's no reason for callers to use it directly. The hashing that the RNG does on it is preferable to using the bytes raw. The only current use case of get_random_bytes_arch() is vsprintf's siphash key for pointer hashing, which uses it to initialize the pointer secret earlier than usual if RDRAND is available. In order to replace this narrow use case, just expose whether RDRAND is mixed into the RNG, with a new function called rng_has_arch_random(). With that taken care of, there are no users of get_random_bytes_arch() left, so it can be removed. Later, if trust_cpu gets turned on by default (as most distros are doing), this one use of rng_has_arch_random() can probably go away as well. Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Sergey Senozhatsky <senozhatsky@chromium.org> Acked-by: Petr Mladek <pmladek@suse.com> # for vsprintf.c Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-19random: move initialization functions out of hot pagesJason A. Donenfeld
Much of random.c is devoted to initializing the rng and accounting for when a sufficient amount of entropy has been added. In a perfect world, this would all happen during init, and so we could mark these functions as __init. But in reality, this isn't the case: sometimes the rng only finishes initializing some seconds after system init is finished. For this reason, at the moment, a whole host of functions that are only used relatively close to system init and then never again are intermixed with functions that are used in hot code all the time. This creates more cache misses than necessary. In order to pack the hot code closer together, this commit moves the initialization functions that can't be marked as __init into .text.unlikely by way of the __cold attribute. Of particular note is moving credit_init_bits() into a macro wrapper that inlines the crng_ready() static branch check. This avoids a function call to a nop+ret, and most notably prevents extra entropy arithmetic from being computed in mix_interrupt_randomness(). Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
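A minimal sketch of the macro-wrapper idea, assuming a crng_ready() predicate and a cold _credit_init_bits() worker as described above; this is not the literal random.c code.

#include <linux/compiler.h>
#include <linux/types.h>

bool crng_ready(void);                  /* a static branch in the real code */
void __cold _credit_init_bits(size_t bits);

/* Hot path only pays for the inlined ready check; the cold function is
 * never called once the RNG is initialized. */
#define credit_init_bits(bits)                                          \
        do {                                                            \
                if (!crng_ready())                                      \
                        _credit_init_bits(bits);                        \
        } while (0)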
2022-05-19random: make consistent use of buf and lenJason A. Donenfeld
The current code was a mix of "nbytes", "count", "size", "buffer", "in", and so forth. Instead, let's clean this up by naming input parameters "buf" (or "ubuf") and "len", so that you always understand that you're reading this variety of function argument. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-19random: use proper return types on get_random_{int,long}_wait()Jason A. Donenfeld
Before, these were returning signed values, but the API is intended to be used with unsigned values. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
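A minimal sketch of the header-side change, assuming the wait helpers are generated by a declare_get_random_var_wait()-style macro that now also takes the unsigned return type; the real random.h may differ in detail.

/* Inside include/linux/random.h (sketch): parameterize the *_wait
 * helpers on both the name and the unsigned return type. */
#define declare_get_random_var_wait(name, ret_type)                     \
        static inline int get_random_##name##_wait(ret_type *out)      \
        {                                                               \
                int ret = wait_for_random_bytes();                      \
                if (unlikely(ret))                                      \
                        return ret;                                     \
                *out = get_random_##name();                             \
                return 0;                                               \
        }

declare_get_random_var_wait(u32, u32)
declare_get_random_var_wait(long, unsigned long)  /* previously returned long */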
2022-05-19random: remove extern from functions in headerJason A. Donenfeld
According to the kernel style guide, having `extern` on functions in headers is old school and deprecated, and doesn't add anything. So remove them from random.h, and tidy up the file a little bit too. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-19random: use static branch for crng_ready()Jason A. Donenfeld
Since crng_ready() is only false briefly during initialization and then forever after becomes true, we don't need to evaluate it after, making it a prime candidate for a static branch. One complication, however, is that it changes state in a particular call to credit_init_bits(), which might be made from atomic context, which means we must kick off a workqueue to change the static key. Further complicating things, credit_init_bits() may be called sufficiently early on in system initialization such that system_wq is NULL. Fortunately, there exists the nice function execute_in_process_context(), which will immediately execute the function if !in_interrupt(), and otherwise defer it to a workqueue. During early init, before workqueues are available, in_interrupt() is always false, because interrupts haven't even been enabled yet, which means the function in that case executes immediately. Later on, after workqueues are available, in_interrupt() might be true, but in that case, the work is queued in system_wq and all goes well. Cc: Theodore Ts'o <tytso@mit.edu> Cc: Sultan Alsawaf <sultan@kerneltoast.com> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
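The mechanism reads roughly as below; this is a sketch close to, but not literally, the random.c code.

#include <linux/jump_label.h>
#include <linux/workqueue.h>

static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
#define crng_ready() static_branch_likely(&crng_is_ready)

static void crng_set_ready(struct work_struct *work)
{
        static_branch_enable(&crng_is_ready);
}

static struct execute_work set_ready;

static void note_rng_initialized(void)
{
        /* Runs crng_set_ready() immediately when !in_interrupt(),
         * otherwise defers it to system_wq; works even before
         * workqueues exist, since interrupts are still disabled then. */
        execute_in_process_context(crng_set_ready, &set_ready);
}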
2022-05-19mmc: core: Fix busy polling for MMC_SEND_OP_COND againUlf Hansson
It turned out that the polling period for MMC_SEND_OP_COND, currently set to 1ms, still isn't sufficient. In particular a Micron eMMC on a Beaglebone platform is reported to sometimes fail to initialize. Additional testing shows that extending the period to 4ms works fine, so let's make that change. Reported-by: Jean Rene Dawin <jdawin@math.uni-bielefeld.de> Tested-by: Jean Rene Dawin <jdawin@math.uni-bielefeld.de> Fixes: 1760fdb6fe9f ("mmc: core: Restore (almost) the busy polling for MMC_SEND_OP_COND") Fixes: 76bfc7ccc2fa ("mmc: core: adjust polling interval for CMD1") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Link: https://lore.kernel.org/r/20220517101046.27512-1-ulf.hansson@linaro.org
2022-05-19dt-bindings: pinctrl: qcom: Drop 'maxItems' on 'wakeup-parent'Rob Herring
'wakeup-parent' is a single phandle and not an array, so 'maxItems' is wrong. Drop it. Signed-off-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20220519011210.170022-1-robh@kernel.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2022-05-19pinctrl: starfive: Make the irqchip immutableGeert Uytterhoeven
Commit 6c846d026d49 ("gpio: Don't fiddle with irqchips marked as immutable") added a warning to indicate if the gpiolib is altering the internals of irqchips. Following this change the following warning is now observed for the starfive driver: gpio gpiochip0: (11910000.pinctrl): not an immutable chip, please consider fixing it! Fix this by making the irqchip in the starfive driver immutable. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/5eb66be34356afd5eb0ea9027329e0939d03d3a0.1652884852.git.geert+renesas@glider.be Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
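The immutable-irqchip conversion generally looks like the following sketch; the handler names and the probe-time call are placeholders rather than the actual starfive callbacks.

#include <linux/gpio/driver.h>
#include <linux/irq.h>

static void example_irq_mask(struct irq_data *d)
{
        struct gpio_chip *gc = irq_data_get_irq_chip_data(d);

        /* ... hardware-specific masking ... */
        gpiochip_disable_irq(gc, irqd_to_hwirq(d));
}

static void example_irq_unmask(struct irq_data *d)
{
        struct gpio_chip *gc = irq_data_get_irq_chip_data(d);

        gpiochip_enable_irq(gc, irqd_to_hwirq(d));
        /* ... hardware-specific unmasking ... */
}

static const struct irq_chip example_irq_chip = {
        .name           = "example-gpio",
        .irq_mask       = example_irq_mask,
        .irq_unmask     = example_irq_unmask,
        .flags          = IRQCHIP_IMMUTABLE,
        GPIOCHIP_IRQ_RESOURCE_HELPERS,
};

/* At probe time, instead of assigning girq->chip directly:
 * gpio_irq_chip_set_chip(&gc->irq, &example_irq_chip);
 */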
2022-05-19powerpc/ftrace: Use PPC_RAW_xxx() macros instead of opencoding.Christophe Leroy
PPC_RAW_xxx() macros are self explanatory and less error prone than open coding. Use them in ftrace.c Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9292094c9a69cef6d29ee83f435a557b59c45065.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/ftrace: Use BRANCH_SET_LINK instead of value 1Christophe Leroy
To make it explicit, use BRANCH_SET_LINK instead of value 1 when calling create_branch(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d57847063ac93660a5af620d4df1847f10edf61a.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/ftrace: Remove ftrace_plt_tramps[]Christophe Leroy
ftrace_plt_tramps table is never filled so it is useless. Remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/daeeb618a6619e3a7e3f82f1bd83ca7c25af6330.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/ftrace: Use CONFIG_FUNCTION_TRACER instead of CONFIG_DYNAMIC_FTRACEChristophe Leroy
Since commit 0c0c52306f47 ("powerpc: Only support DYNAMIC_FTRACE not static"), CONFIG_DYNAMIC_FTRACE is always selected when CONFIG_FUNCTION_TRACER is selected. To avoid confusion and stop the reader wondering what happens when CONFIG_FUNCTION_TRACER is selected but CONFIG_DYNAMIC_FTRACE is not, use CONFIG_FUNCTION_TRACER in ifdefs instead of CONFIG_DYNAMIC_FTRACE. As CONFIG_FUNCTION_GRAPH_TRACER depends on CONFIG_FUNCTION_TRACER, ftrace.o doesn't need to appear for both symbols in the Makefile. Then, as ftrace.o is built only when CONFIG_FUNCTION_TRACER is selected, the CONFIG_FUNCTION_TRACER ifdef is not needed in ftrace.c; and since it implies CONFIG_DYNAMIC_FTRACE, the CONFIG_DYNAMIC_FTRACE ifdef is not needed there either. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/628d357503eb90b4a034f99b7df516caaff4d279.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/ftrace: Don't include ftrace.o for CONFIG_FTRACE_SYSCALLSChristophe Leroy
Since commit 7bea7ac0ca01 ("powerpc/syscalls: Fix syscall tracing") ftrace.o is not needed anymore for CONFIG_FTRACE_SYSCALLS. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/275932a5d61543b825ff9a64f61abed6da5d4a2a.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/ftrace: Make __ftrace_make_{nop/call}() common to PPC32 and PPC64Christophe Leroy
Since c93d4f6ecf4b ("powerpc/ftrace: Add module_trampoline_target() for PPC32"), __ftrace_make_nop() for PPC32 is very similar to the one for PPC64. Same for __ftrace_make_call(). Make them common. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/96f53c237316dab4b1b8c682685266faa92da816.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc: Finalise cleanup around ABI useChristophe Leroy
Now that we have CONFIG_PPC64_ELF_ABI_V1 and CONFIG_PPC64_ELF_ABI_V2, get rid of all indirect detection of ABI version. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/709d9d69523c14c8a9fba4486395dca0f2d675b1.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc: Replace PPC64_ELF_ABI_v{1/2} by CONFIG_PPC64_ELF_ABI_V{1/2}Christophe Leroy
Replace all uses of PPC64_ELF_ABI_v1 and PPC64_ELF_ABI_v2 by resp CONFIG_PPC64_ELF_ABI_V1 and CONFIG_PPC64_ELF_ABI_V2. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ba13d59e8c50bc9aa6328f1c7f0c0d0278e0a3a7.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc: Add CONFIG_PPC64_ELF_ABI_V1 and CONFIG_PPC64_ELF_ABI_V2Christophe Leroy
Currently, we use CONFIG_CPU_LITTLE_ENDIAN and CONFIG_CPU_BIG_ENDIAN to pass -mabi=elfv1 or elfv2 to the compiler, then define a PPC64_ELF_ABI_v1 or PPC64_ELF_ABI_v2 macro in asm/types.h based on the _CALL_ELF define set by the compiler. Make it more straightforward with a CONFIG option that is directly usable. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1eca1addbc550167da9841c7340a010d0c4b2200.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/ftrace: Use patch_instruction() return directlyChristophe Leroy
Instead of returning -EPERM when patch_instruction() fails, just return what patch_instruction() returns. That simplifies ftrace_modify_code():

   0:	94 21 ff c0	stwu    r1,-64(r1)
   4:	93 e1 00 3c	stw     r31,60(r1)
   8:	7c 7f 1b 79	mr.     r31,r3
   c:	40 80 00 30	bge     3c <ftrace_modify_code+0x3c>
  10:	93 c1 00 38	stw     r30,56(r1)
  14:	7c 9e 23 78	mr      r30,r4
  18:	7c a4 2b 78	mr      r4,r5
  1c:	80 bf 00 00	lwz     r5,0(r31)
  20:	7c 1e 28 40	cmplw   r30,r5
  24:	40 82 00 34	bne     58 <ftrace_modify_code+0x58>
  28:	83 c1 00 38	lwz     r30,56(r1)
  2c:	7f e3 fb 78	mr      r3,r31
  30:	83 e1 00 3c	lwz     r31,60(r1)
  34:	38 21 00 40	addi    r1,r1,64
  38:	48 00 00 00	b       38 <ftrace_modify_code+0x38>
			38: R_PPC_REL24	patch_instruction

Before:

   0:	94 21 ff c0	stwu    r1,-64(r1)
   4:	93 e1 00 3c	stw     r31,60(r1)
   8:	7c 7f 1b 79	mr.     r31,r3
   c:	40 80 00 4c	bge     58 <ftrace_modify_code+0x58>
  10:	93 c1 00 38	stw     r30,56(r1)
  14:	7c 9e 23 78	mr      r30,r4
  18:	7c a4 2b 78	mr      r4,r5
  1c:	80 bf 00 00	lwz     r5,0(r31)
  20:	7c 08 02 a6	mflr    r0
  24:	90 01 00 44	stw     r0,68(r1)
  28:	7c 1e 28 40	cmplw   r30,r5
  2c:	40 82 00 48	bne     74 <ftrace_modify_code+0x74>
  30:	7f e3 fb 78	mr      r3,r31
  34:	48 00 00 01	bl      34 <ftrace_modify_code+0x34>
			34: R_PPC_REL24	patch_instruction
  38:	80 01 00 44	lwz     r0,68(r1)
  3c:	20 63 00 00	subfic  r3,r3,0
  40:	83 c1 00 38	lwz     r30,56(r1)
  44:	7c 63 19 10	subfe   r3,r3,r3
  48:	7c 08 03 a6	mtlr    r0
  4c:	83 e1 00 3c	lwz     r31,60(r1)
  50:	38 21 00 40	addi    r1,r1,64
  54:	4e 80 00 20	blr

It improves ftrace activation/deactivation duration by about 3%. Modify patch_instruction()'s return on failure to -EPERM in order to match ftrace's expectations. Other users of patch_instruction() do not care about the exact error value returned. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/49a8597230713e2633e7d9d7b56140787c4a7e20.1652074503.git.christophe.leroy@csgroup.eu
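The resulting function looks roughly like this simplified sketch (error reporting omitted); the point is that patch_instruction()'s return value is passed straight through.

#include <linux/errno.h>
#include <asm/code-patching.h>
#include <asm/inst.h>

static int ftrace_modify_code(unsigned long ip, ppc_inst_t old, ppc_inst_t new)
{
        ppc_inst_t replaced;

        /* read the instruction we are about to replace */
        if (copy_inst_from_kernel_nofault(&replaced, (void *)ip))
                return -EFAULT;

        /* make sure it is what we expect it to be */
        if (!ppc_inst_equal(replaced, old))
                return -EINVAL;

        /* replace the text; patch_instruction() itself returns -EPERM on failure */
        return patch_instruction((u32 *)ip, new);
}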
2022-05-19powerpc/ftrace: Inline ftrace_modify_code()Christophe Leroy
Inlining ftrace_modify_code() increases the size of the ftrace code a bit, but brings a 5% improvement in ftrace activation. Usually in C files we let gcc decide what to do, but here it really helps to 'help' gcc decide to inline, though we don't want to force it with an __always_inline, which would be too much for CONFIG_CC_OPTIMIZE_FOR_SIZE. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1597a06d57cfc80e6853838c4066e799bf6c7977.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/code-patching: Inline create_branch()Christophe Leroy
create_branch() is a good candidate for inlining because: - Flags can be folded in. - Range tests are likely to be already done. This reduces create_branch() to only a small set of instructions, so inline it (see the sketch below). It improves ftrace activation by 10%. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/69851cc9a7bf8f03d025e6d29e165f2d0bd3bb6e.1652074503.git.christophe.leroy@csgroup.eu
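For reference, the inlined helper in asm/code-patching.h ends up roughly as follows; treat the encoding details as illustrative rather than the exact upstream text.

static inline int create_branch(ppc_inst_t *instr, const u32 *addr,
                                unsigned long target, int flags)
{
        long offset;

        *instr = ppc_inst(0);
        offset = target;
        if (!(flags & BRANCH_ABSOLUTE))
                offset = offset - (unsigned long)addr;

        /* Check we can represent the target in the instruction format */
        if (!is_offset_in_branch_range(offset))
                return 1;

        /* Mask out the flags and target, so they don't step on each other. */
        *instr = ppc_inst(0x48000000 | (flags & 0x3) | (offset & 0x03FFFFFC));

        return 0;
}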
2022-05-19powerpc/ftrace: Use is_offset_in_branch_range()Christophe Leroy
Use is_offset_in_branch_range() instead of create_branch() to check if a target is within branch range. This patch, together with the previous one, improves ftrace activation time by 7%. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/912ae51782f5a53c44e435497c8c3fb5cc632387.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/code-patching: Inline is_offset_in_{cond}_branch_range()Christophe Leroy
The tests in is_offset_in_branch_range() and is_offset_in_cond_branch_range() are simple and worth inlining. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a05be0ccb7373e6a9789a1988fcd0c810f5f9269.1652074503.git.christophe.leroy@csgroup.eu
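The helpers in question are essentially just range and alignment checks; the bounds below reflect the standard PowerPC branch encodings and should be treated as illustrative.

#include <linux/types.h>

static inline bool is_offset_in_branch_range(long offset)
{
        /* "b": 26-bit signed, word-aligned displacement */
        return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
}

static inline bool is_offset_in_cond_branch_range(long offset)
{
        /* conditional branch: 16-bit signed, word-aligned displacement */
        return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
}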
2022-05-19powerpc/ftrace: Remove redundant create_branch() callsChristophe Leroy
Since commit d5937db114e4 ("powerpc/code-patching: Fix patch_branch() return on out-of-range failure") patch_branch() fails with -ERANGE when trying to branch out of range. No need to perform the test twice. Remove redundant create_branch() calls. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/aa45fbad0b4b7493080835d8276c0cb4ce146503.1652074503.git.christophe.leroy@csgroup.eu
2022-05-19powerpc/ftrace: Refactor prepare_ftrace_return()Christophe Leroy
When we have CONFIG_DYNAMIC_FTRACE_WITH_ARGS, prepare_ftrace_return() is called by ftrace_graph_func(); otherwise it is called from assembly. Refactor prepare_ftrace_return() into a static __prepare_ftrace_return() that will be called by both prepare_ftrace_return() and ftrace_graph_func(). It will allow GCC to fold __prepare_ftrace_return() inside ftrace_graph_func(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0d42deafe353980c66cf19d3132638c05ba9f4a9.1652074503.git.christophe.leroy@csgroup.eu
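A simplified sketch of the refactor described above; the body of __prepare_ftrace_return() is elided and the signatures are trimmed to the essentials.

#include <linux/ftrace.h>

static unsigned long
__prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp)
{
        /* ... function_graph return-address hooking elided ... */
        return parent;
}

/* Called from assembly when CONFIG_DYNAMIC_FTRACE_WITH_ARGS is not set. */
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
                                    unsigned long sp)
{
        return __prepare_ftrace_return(parent, ip, sp);
}

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
                       struct ftrace_ops *op, struct ftrace_regs *fregs)
{
        fregs->regs.link = __prepare_ftrace_return(parent_ip, ip,
                                                   fregs->regs.gpr[1]);
}
#endif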
2022-05-19powerpc/rtas: Ensure rtas_call is called with MMU enabledNicholas Piggin
rtas_call must not be called with the MMU disabled because, in case of an RTAS error, log_error() is called, which requires the MMU to be enabled. Add a test and warning for this. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-14-npiggin@gmail.com
2022-05-19powerpc/rtas: Leave MSR[RI] enabled over RTAS callNicholas Piggin
PAPR specifies that RTAS may be called with MSR[RI] enabled if the calling context is recoverable, and RTAS will manage RI as necessary. Call the rtas entry point with RI enabled, and add a check to ensure the caller has RI enabled. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-10-npiggin@gmail.com
2022-05-19powerpc/rtas: PACA can be restored directly from SPRGNicholas Piggin
On 64-bit, the PACA is saved in an SPRG, so it does not need to be saved on the stack. We also don't need to mask off the top bits for real mode addresses because the architecture does this for us. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-8-npiggin@gmail.com
2022-05-19powerpc/rtas: Call enter_rtas with MSR[EE] disabledNicholas Piggin
Disable MSR[EE] in C code rather than asm. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-5-npiggin@gmail.com
2022-05-19powerpc/rtas: Fix whitespace in rtas_entry.SNicholas Piggin
The code was moved verbatim including whitespace cruft. Fix that. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-4-npiggin@gmail.com
2022-05-19powerpc/rtas: Make enter_rtas a nokprobe symbol on 64-bitNicholas Piggin
This symbol is marked nokprobe on 32-bit but not on 64-bit; add it there too. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-3-npiggin@gmail.com
2022-05-19powerpc/rtas: Move rtas entry assembly into its own fileNicholas Piggin
This makes working on the code a bit easier. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220308135047.478297-2-npiggin@gmail.com
2022-05-19powerpc/signal: Report minimum signal frame size to userspace via AT_MINSIGSTKSZNicholas Piggin
Implement the AT_MINSIGSTKSZ AUXV entry, allowing userspace to dynamically size stack allocations in a manner forward-compatible with new processor state saved in the signal frame. For now these statically find the maximum signal frame size rather than doing any runtime testing of features to minimise the size. glibc 2.34 will take advantage of this, as will applications that use _SC_MINSIGSTKSZ and _SC_SIGSTKSZ. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> References: 94b07c1f8c39 ("arm64: signal: Report signal frame size to userspace via auxv") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220307182734.289289-2-npiggin@gmail.com
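From userspace, the new auxv entry can be consumed roughly as in this hedged example; the AT_MINSIGSTKSZ fallback define is only needed with older libc headers, and the 4x headroom mirrors the SIGSTKSZ convention.

#include <elf.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/auxv.h>

#ifndef AT_MINSIGSTKSZ
#define AT_MINSIGSTKSZ 51       /* not present in older libc headers */
#endif

int setup_sigaltstack(void)
{
        unsigned long min = getauxval(AT_MINSIGSTKSZ);
        stack_t ss = { 0 };

        if (min == 0)                   /* kernel without the auxv entry */
                min = MINSIGSTKSZ;

        ss.ss_size = 4 * min;           /* headroom over the reported minimum */
        ss.ss_sp = malloc(ss.ss_size);
        if (!ss.ss_sp)
                return -1;

        return sigaltstack(&ss, NULL);
}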
2022-05-19powerpc/64: Bump SIGSTKSZ and MINSIGSTKSZNicholas Piggin
The sad tale of SIGSTKSZ and MINSIGSTKSZ is documented in glibc.git commit f7c399cff5bd ("PowerPC SIGSTKSZ"), which explains why glibc does not use the kernel defines for these constants. Since then in fact there has been a further expansion of the signal stack frame size on little-endian with linux commit 573ebfa6601f ("powerpc: Increase stack redzone for 64-bit userspace to 512 bytes"), which has caused it to exceed even the glibc defines. See kernel commit 63dee5df43a3 ("powerpc: Allow 4224 bytes of stack expansion for the signal frame") for more details of the history of the expansion. Increase MINSIGSTKSZ to 8192 which is double the current glibc value and fits the current stack frame with room to grow. SIGSTKSZ is set to 4x the minimum as convention. glibc will have to be updated as well. Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220307182734.289289-1-npiggin@gmail.com
2022-05-19powerpc/vdso: Link with ld.lld when requestedNathan Chancellor
The PowerPC vDSO uses $(CC) to link, which differs from the rest of the kernel, which uses $(LD) directly. As a result, the default linker of the compiler is used, which may differ from the linker requested by the builder. For example: $ make ARCH=powerpc LLVM=1 mrproper defconfig arch/powerpc/kernel/vdso/ ... $ llvm-readelf -p .comment arch/powerpc/kernel/vdso/vdso{32,64}.so.dbg File: arch/powerpc/kernel/vdso/vdso32.so.dbg String dump of section '.comment': [ 0] clang version 14.0.0 (Fedora 14.0.0-1.fc37) File: arch/powerpc/kernel/vdso/vdso64.so.dbg String dump of section '.comment': [ 0] clang version 14.0.0 (Fedora 14.0.0-1.fc37) LLVM=1 sets LD=ld.lld but ld.lld is not used to link the vDSO; GNU ld is because "ld" is the default linker for clang on most Linux platforms. This is a problem for Clang's Link Time Optimization as implemented in the kernel because use of GNU ld with LTO requires the LLVMgold plugin, which is not technically supported for ld.bfd per https://llvm.org/docs/GoldPlugin.html. Furthermore, if LLVMgold.so is missing from a user's system, the build will fail, even though LTO as it is implemented in the kernel requires ld.lld to avoid this dependency in the first place. Ultimately, the PowerPC vDSO should be converted to compiling and linking with $(CC) and $(LD) respectively but there were issues last time this was tried, potentially due to older but supported tool versions. To avoid regressing GCC + binutils, use the compiler option '-fuse-ld', which tells the compiler which linker to use when it is invoked as both the compiler and linker. Use '-fuse-ld=lld' when LD=ld.lld has been specified (CONFIG_LD_IS_LLD) so that the vDSO is linked with the same linker as the rest of the kernel. $ llvm-readelf -p .comment arch/powerpc/kernel/vdso/vdso{32,64}.so.dbg File: arch/powerpc/kernel/vdso/vdso32.so.dbg String dump of section '.comment': [ 0] Linker: LLD 14.0.0 [ 14] clang version 14.0.0 (Fedora 14.0.0-1.fc37) File: arch/powerpc/kernel/vdso/vdso64.so.dbg String dump of section '.comment': [ 0] Linker: LLD 14.0.0 [ 14] clang version 14.0.0 (Fedora 14.0.0-1.fc37) LD can be a full path to ld.lld, which will not be handled properly by '-fuse-ld=lld' if the full path to ld.lld is outside of the compiler's search path. '-fuse-ld' can take a path to the linker but it is deprecated in clang 12.0.0; '--ld-path' is preferred for this scenario. Use '--ld-path' if it is supported, as it will handle a full path or just 'ld.lld' properly. See the LLVM commit below for the full details of '--ld-path'. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/774 Link: https://github.com/llvm/llvm-project/commit/1bc5c84710a8c73ef21295e63c19d10a8c71f2f5 Link: https://lore.kernel.org/r/20220511185001.3269404-3-nathan@kernel.org
2022-05-19powerpc/vdso: Remove unused ENTRY in linker scriptsNathan Chancellor
When linking vdso{32,64}.so.dbg with ld.lld, there is a warning about not finding _start for the starting address: ld.lld: warning: cannot find entry symbol _start; not setting start address ld.lld: warning: cannot find entry symbol _start; not setting start address Looking at GCC + GNU ld, the entry point address is 0x0: $ llvm-readelf -h vdso{32,64}.so.dbg &| rg "(File|Entry point address):" File: vdso32.so.dbg Entry point address: 0x0 File: vdso64.so.dbg Entry point address: 0x0 This matches what ld.lld emits: $ powerpc64le-linux-gnu-readelf -p .comment vdso{32,64}.so.dbg File: vdso32.so.dbg String dump of section '.comment': [ 0] Linker: LLD 14.0.0 [ 14] clang version 14.0.0 (Fedora 14.0.0-1.fc37) File: vdso64.so.dbg String dump of section '.comment': [ 0] Linker: LLD 14.0.0 [ 14] clang version 14.0.0 (Fedora 14.0.0-1.fc37) $ llvm-readelf -h vdso{32,64}.so.dbg &| rg "(File|Entry point address):" File: vdso32.so.dbg Entry point address: 0x0 File: vdso64.so.dbg Entry point address: 0x0 Remove ENTRY to remove the warning, as it is unnecessary for the vDSO to function correctly. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220511185001.3269404-2-nathan@kernel.org
2022-05-19powerpc: Export mmu_feature_keys[] as non-GPLKevin Hao
When mmu_feature_keys[] was introduced in commit c12e6f24d413 ("powerpc: Add option to use jump label for mmu_has_feature()"), it seemed unlikely that it would be used either directly or indirectly in out-of-tree modules, so it was exported as GPL-only. But with the evolution of the code, especially the PPC_KUAP support, it may be indirectly referenced by primitive macros or inline functions such as get_user() or __copy_from_user_inatomic(), which makes it impossible to build many non-GPL modules (such as ZFS) on the ppc architecture. Fix this by exposing mmu_feature_keys[] to non-GPL modules too. Fixes: 7613f5a66bec ("powerpc/64s/kuap: Use mmu_has_feature()") Reported-by: Nathaniel Filardo <nwfilardo@gmail.com> Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220329085709.4132729-1-haokexin@gmail.com
2022-05-19powerpc/setup: Refactor/untangle panic notifiersGuilherme G. Piccoli
The panic notifiers infrastructure is a bit limited in the scope of the callbacks - basically every kind of functionality is dropped in a list that runs at the same point during the kernel panic path. This is not really on par with the complexities and particularities of architecture / hypervisors' needs, and a refactor is ongoing. As part of this refactor, it was observed that powerpc has 2 notifiers, with mixed goals: one is just a KASLR offset dumper, whereas the other aims to hard-disable IRQs (necessary on the panic path), warn firmware of the panic event (fadump) and run low-level platform-specific machinery that might stop kernel execution and never come back. Clearly, the 2nd notifier has opposing goals: disabling IRQs / fadump should run earlier, while low-level platform actions should run late since they might not even return. Hence, this patch decouples the notifiers, splitting them in three: - The first one is responsible for hard-disabling IRQs and fadump, and should run early; - The kernel KASLR offset dumper is really an informative notifier, harmless and may run at any moment in the panic path; - The last notifier should run last, since it aims to perform low-level actions for specific platforms, and might never return. It is also only registered for 2 platforms, pseries and ps3. The patch better documents the notifiers and cleans up the code too, also removing a useless header. Currently no functionality change should be observed, but after the planned panic refactor we should expect more panic reliability with this patch. Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Reviewed-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220427224924.592546-9-gpiccoli@igalia.com
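For illustration only, the general shape of one of the split notifiers might be as below; the callback body, priority, and initcall level are placeholders, not the actual arch/powerpc code.

#include <linux/init.h>
#include <linux/limits.h>
#include <linux/notifier.h>
#include <linux/panic_notifier.h>

static int panic_irq_fadump_notify(struct notifier_block *nb,
                                   unsigned long action, void *data)
{
        /* hard-disable IRQs and let firmware know about the crash */
        return NOTIFY_DONE;
}

static struct notifier_block panic_irq_fadump_block = {
        .notifier_call  = panic_irq_fadump_notify,
        .priority       = INT_MAX,      /* placeholder for "should run early" */
};

static int __init register_panic_notifiers(void)
{
        atomic_notifier_chain_register(&panic_notifier_list,
                                       &panic_irq_fadump_block);
        return 0;
}
early_initcall(register_panic_notifiers);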
2022-05-19pinctrl: mediatek: Add pinctrl driver for MT6795 Helio X10AngeloGioacchino Del Regno
Add support for the MediaTek Helio X10 (MT6795) SoC's GPIO/pinmux controller. Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://lore.kernel.org/r/20220517083957.11816-3-angelogioacchino.delregno@collabora.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2022-05-19dt-bindings: pinctrl: Add MediaTek MT6795 pinctrl bindingsAngeloGioacchino Del Regno
Add devicetree and pinfunc bindings for MediaTek Helio X10 MT6795. Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20220517083957.11816-2-angelogioacchino.delregno@collabora.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2022-05-19Merge branch 'topic/ppc-kvm' into nextMichael Ellerman
Merge our KVM topic branch.
2022-05-19pinctrl: freescale: Add i.MXRT1170 pinctrl driver supportJesse Taube
Add the pinctrl driver support for i.MXRT1170. Cc: Giulio Benetti <giulio.benetti@benettiengineering.com> Signed-off-by: Jesse Taube <Mr.Bossman075@gmail.com> Reviewed-by: Dong Aisheng <aisheng.dong@nxp.com> Link: https://lore.kernel.org/r/20220517032802.451743-11-Mr.Bossman075@gmail.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>