Age | Commit message | Author
2022-09-26 | objtool: Preserve special st_shndx indexes in elf_update_symbol | Sami Tolvanen
elf_update_symbol fails to preserve the special st_shndx values between [SHN_LORESERVE, SHN_HIRESERVE], which results in it converting SHN_ABS entries into SHN_UNDEF, for example. Explicitly check for the special indexes and ensure these symbols are not marked undefined. Fixes: ead165fa1042 ("objtool: Fix symbol creation") Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-17-samitolvanen@google.com
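As a rough sketch of the idea (illustrative only, not the exact objtool code), the update path simply has to leave reserved section indexes alone instead of rewriting them:

        /* sketch: preserve reserved st_shndx values such as SHN_ABS */
        static bool st_shndx_is_reserved(unsigned int shndx)
        {
                return shndx >= SHN_LORESERVE && shndx <= SHN_HIRESERVE;
        }
        ...
        if (!st_shndx_is_reserved(sym->sym.st_shndx))
                sym->sym.st_shndx = new_shndx;  /* otherwise keep SHN_ABS etc. */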
2022-09-26 | treewide: Drop __cficanonical | Sami Tolvanen
CONFIG_CFI_CLANG doesn't use a jump table anymore and therefore, won't change function references to point elsewhere. Remove the __cficanonical attribute and all uses of it. Note that the Clang definition of the attribute was removed earlier, just clean up the no-op definition and users. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-16-samitolvanen@google.com
2022-09-26 | treewide: Drop WARN_ON_FUNCTION_MISMATCH | Sami Tolvanen
CONFIG_CFI_CLANG no longer breaks cross-module function address equality, which makes WARN_ON_FUNCTION_MISMATCH unnecessary. Remove the definition and switch back to WARN_ON_ONCE. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-15-samitolvanen@google.com
2022-09-26 | treewide: Drop function_nocfi | Sami Tolvanen
With -fsanitize=kcfi, we no longer need function_nocfi() as the compiler won't change function references to point to a jump table. Remove all implementations and uses of the macro. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-14-samitolvanen@google.com
2022-09-26 | init: Drop __nocfi from __init | Sami Tolvanen
It's no longer necessary to disable CFI checking for all __init functions. Drop the __nocfi attribute from __init. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-13-samitolvanen@google.com
2022-09-26 | arm64: Drop unneeded __nocfi attributes | Sami Tolvanen
With -fsanitize=kcfi, CONFIG_CFI_CLANG no longer has issues with address space confusion in functions that switch to linear mapping. Now that the indirectly called assembly functions have type annotations, drop the __nocfi attributes. Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-12-samitolvanen@google.com
2022-09-26 | arm64: Add CFI error handling | Sami Tolvanen
With -fsanitize=kcfi, CFI always traps. Add arm64 support for handling CFI failures. The registers containing the target address and the expected type are encoded in the first ten bits of the ESR as follows: - 0-4: n, where the register Xn contains the target address - 5-9: m, where the register Wm contains the type hash This produces the following oops on CFI failure (generated using lkdtm): [ 21.885179] CFI failure at lkdtm_indirect_call+0x2c/0x44 [lkdtm] (target: lkdtm_increment_int+0x0/0x1c [lkdtm]; expected type: 0x7e0c52a) [ 21.886593] Internal error: Oops - CFI: 0 [#1] PREEMPT SMP [ 21.891060] Modules linked in: lkdtm [ 21.893363] CPU: 0 PID: 151 Comm: sh Not tainted 5.19.0-rc1-00021-g852f4e48dbab #1 [ 21.895560] Hardware name: linux,dummy-virt (DT) [ 21.896543] pstate: 80400009 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 21.897583] pc : lkdtm_indirect_call+0x2c/0x44 [lkdtm] [ 21.898551] lr : lkdtm_CFI_FORWARD_PROTO+0x3c/0x6c [lkdtm] [ 21.899520] sp : ffff8000083a3c50 [ 21.900191] x29: ffff8000083a3c50 x28: ffff0000027e0ec0 x27: 0000000000000000 [ 21.902453] x26: 0000000000000000 x25: ffffc2aa3d07e7b0 x24: 0000000000000002 [ 21.903736] x23: ffffc2aa3d079088 x22: ffffc2aa3d07e7b0 x21: ffff000003379000 [ 21.905062] x20: ffff8000083a3dc0 x19: 0000000000000012 x18: 0000000000000000 [ 21.906371] x17: 000000007e0c52a5 x16: 000000003ad55aca x15: ffffc2aa60d92138 [ 21.907662] x14: ffffffffffffffff x13: 2e2e2e2065707974 x12: 0000000000000018 [ 21.909775] x11: ffffc2aa62322b88 x10: ffffc2aa62322aa0 x9 : c7e305fb5195d200 [ 21.911898] x8 : ffffc2aa3d077e20 x7 : 6d20676e696c6c61 x6 : 43203a6d74646b6c [ 21.913108] x5 : ffffc2aa6266c9df x4 : ffffc2aa6266c9e1 x3 : ffff8000083a3968 [ 21.914358] x2 : 80000000fffff122 x1 : 00000000fffff122 x0 : ffffc2aa3d07e8f8 [ 21.915827] Call trace: [ 21.916375] lkdtm_indirect_call+0x2c/0x44 [lkdtm] [ 21.918060] lkdtm_CFI_FORWARD_PROTO+0x3c/0x6c [lkdtm] [ 21.919030] lkdtm_do_action+0x34/0x4c [lkdtm] [ 21.919920] direct_entry+0x170/0x1ac [lkdtm] [ 21.920772] full_proxy_write+0x84/0x104 [ 21.921759] vfs_write+0x188/0x3d8 [ 21.922387] ksys_write+0x78/0xe8 [ 21.922986] __arm64_sys_write+0x1c/0x2c [ 21.923696] invoke_syscall+0x58/0x134 [ 21.924554] el0_svc_common+0xb4/0xf4 [ 21.925603] do_el0_svc+0x2c/0xb4 [ 21.926563] el0_svc+0x2c/0x7c [ 21.927147] el0t_64_sync_handler+0x84/0xf0 [ 21.927985] el0t_64_sync+0x18c/0x190 [ 21.929133] Code: 728a54b1 72afc191 6b11021f 54000040 (d4304500) [ 21.930690] ---[ end trace 0000000000000000 ]--- [ 21.930971] Kernel panic - not syncing: Oops - CFI: Fatal exception Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-11-samitolvanen@google.com
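A minimal sketch of how the handler can pull the two register indexes out of the ESR, assuming the bit layout described above (variable names are illustrative, not the exact arm64 code):

        /* bits [4:0]: index of Xn holding the target address,
         * bits [9:5]: index of Wm holding the expected type hash */
        unsigned long target = regs->regs[esr & 0x1f];
        u32 expected_type = (u32)regs->regs[(esr >> 5) & 0x1f];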
2022-09-26 | arm64: Add types to indirect called assembly functions | Sami Tolvanen
With CONFIG_CFI_CLANG, assembly functions indirectly called from C code must be annotated with type identifiers to pass CFI checking. Use SYM_TYPED_FUNC_START for the indirectly called functions, and ensure we emit `bti c` also with SYM_TYPED_FUNC_START. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-10-samitolvanen@google.com
2022-09-26 | psci: Fix the function type for psci_initcall_t | Sami Tolvanen
Functions called through a psci_initcall_t pointer all have non-const arguments. Fix the type definition to avoid tripping indirect call checks with CFI_CLANG. Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-9-samitolvanen@google.com
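A sketch of the kind of change involved (the exact prototypes here are assumptions): the typedef has to match the callees exactly, because KCFI compares the full function type at the indirect call site.

        /* before: const parameter, while the actual init functions take a
         * non-const node, so calls through the pointer trip the CFI check */
        typedef int (*psci_initcall_t)(const struct device_node *);

        /* after: matches the real signatures of the callees */
        typedef int (*psci_initcall_t)(struct device_node *);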
2022-09-26 | lkdtm: Emit an indirect call for CFI tests | Sami Tolvanen
Clang can convert the indirect calls in lkdtm_CFI_FORWARD_PROTO into direct calls. Move the call into a noinline function that accepts the target address as an argument to ensure the compiler actually emits an indirect call instead. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Kees Cook <keescook@chromium.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-8-samitolvanen@google.com
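The pattern looks roughly like this (a sketch, not the verbatim lkdtm code; 'called_count' is an illustrative variable): keeping the call in a separate noinline helper with the target passed as an argument stops Clang from folding it into a direct call.

        /* the compiler cannot see the target here, so it must emit a
         * genuine indirect call that KCFI will instrument */
        static noinline void lkdtm_indirect_call(void (*func)(int *))
        {
                func(&called_count);
        }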
2022-09-26 | cfi: Add type helper macros | Sami Tolvanen
With CONFIG_CFI_CLANG, assembly functions called indirectly from C code must be annotated with type identifiers to pass CFI checking. In order to make this easier, the compiler emits a __kcfi_typeid_<function> symbol for each address-taken function declaration in C, which contains the expected type identifier that we can refer to in assembly code. Add a typed version of SYM_FUNC_START, which emits the type identifier before the function. Architectures that support KCFI can define their own __CFI_TYPE macro to override the default preamble format. As an example, for the x86_64 blowfish_dec_blk function, the compiler emits the following type symbol: $ readelf -sW vmlinux | grep __kcfi_typeid_blowfish_dec_blk 120204: 00000000ef478db5 0 NOTYPE WEAK DEFAULT ABS __kcfi_typeid_blowfish_dec_blk And SYM_TYPED_FUNC_START will generate the following preamble based on the __CFI_TYPE definition for the architecture: $ objdump -dr arch/x86/crypto/blowfish-x86_64-asm_64.o ... 0000000000000400 <__cfi_blowfish_dec_blk>: ... 40b: b8 00 00 00 00 mov $0x0,%eax 40c: R_X86_64_32 __kcfi_typeid_blowfish_dec_blk 0000000000000410 <blowfish_dec_blk>: ... Note that the address of all assembly functions annotated with SYM_TYPED_FUNC_START must be taken in C code that's linked into the binary or the missing __kcfi_typeid_ symbol will result in a linker error with CONFIG_CFI_CLANG. If the code that contains the indirect call is not always compiled in, __ADDRESSABLE(functionname) can be used to ensure that the __kcfi_typeid_ symbol is emitted. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-7-samitolvanen@google.com
2022-09-26 | cfi: Switch to -fsanitize=kcfi | Sami Tolvanen
Switch from Clang's original forward-edge control-flow integrity implementation to -fsanitize=kcfi, which is better suited for the kernel, as it doesn't require LTO, doesn't use a jump table that requires altering function references, and won't break cross-module function address equality. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-6-samitolvanen@google.com
2022-09-26 | cfi: Drop __CFI_ADDRESSABLE | Sami Tolvanen
The __CFI_ADDRESSABLE macro is used for init_module and cleanup_module to ensure we have the address of the CFI jump table, and with CONFIG_X86_KERNEL_IBT to ensure LTO won't optimize away the symbols. As __CFI_ADDRESSABLE is no longer necessary with -fsanitize=kcfi, add a more flexible version of the __ADDRESSABLE macro and always ensure these symbols won't be dropped. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-5-samitolvanen@google.com
2022-09-26 | cfi: Remove CONFIG_CFI_CLANG_SHADOW | Sami Tolvanen
In preparation to switching to -fsanitize=kcfi, remove support for the CFI module shadow that will no longer be needed. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-4-samitolvanen@google.com
2022-09-26 | scripts/kallsyms: Ignore __kcfi_typeid_ | Sami Tolvanen
The compiler generates __kcfi_typeid_ symbols for annotating assembly functions with type information. These are constants that can be referenced in assembly code and are resolved by the linker. Ignore them in kallsyms. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-3-samitolvanen@google.com
2022-09-26 | treewide: Filter out CC_FLAGS_CFI | Sami Tolvanen
In preparation for removing CC_FLAGS_CFI from CC_FLAGS_LTO, explicitly filter out CC_FLAGS_CFI in all the makefiles where we currently filter out CC_FLAGS_LTO. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-2-samitolvanen@google.com
2022-09-26Revert "net: mvpp2: debugfs: fix memory leak when using debugfs_lookup()"Sasha Levin
This reverts commit fe2c9c61f668cde28dac2b188028c5299cedcc1e. On Tue, Sep 13, 2022 at 05:48:58PM +0100, Russell King (Oracle) wrote:
> What happens if this is built as a module, and the module is loaded, binds (and creates the directory), then is removed, and then re-inserted? Nothing removes the old directory, so doesn't debugfs_create_dir() fail, resulting in subsequent failure to add any subsequent debugfs entries?
>
> I don't think this patch should be backported to stable trees until this point is addressed.
Revert until a proper fix is available, as the original behavior was better. Cc: Marcin Wojtas <mw@semihalf.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Paolo Abeni <pabeni@redhat.com> Cc: stable@kernel.org Reported-by: Russell King <linux@armlinux.org.uk> Fixes: fe2c9c61f668 ("net: mvpp2: debugfs: fix memory leak when using debugfs_lookup()") Signed-off-by: Sasha Levin <sashal@kernel.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20220923234736.657413-1-sashal@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-26 | tracepoint: Optimize the critical region of mutex_lock in tracepoint_module_coming() | Zhen Lei
The memory allocation of 'tp_mod' does not require mutex_lock() protection, so move it out of the critical section. Link: https://lkml.kernel.org/r/20220914061416.1630-1-thunder.leizhen@huawei.com Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
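A sketch of the reshuffle (the lock and list names below are assumptions, not necessarily the exact tracepoint.c identifiers): only the list update needs to be serialized, so the allocation can happen before the mutex is taken.

        /* allocate outside the critical section */
        tp_mod = kmalloc(sizeof(*tp_mod), GFP_KERNEL);
        if (!tp_mod)
                return -ENOMEM;
        tp_mod->mod = mod;

        mutex_lock(&tracepoint_module_list_mutex);
        list_add_tail(&tp_mod->list, &tracepoint_module_list);
        /* ... notify coming-module callbacks ... */
        mutex_unlock(&tracepoint_module_list_mutex);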
2022-09-26 | tracing/filter: Call filter predicate functions directly via a switch statement | Steven Rostedt (Google)
Due to retpolines, indirect calls are much more expensive than direct calls. The filters have a select set of functions it uses for the predicates. Instead of using function pointers to call them, create a filter_pred_fn_call() function that uses a switch statement to call the predicate functions directly. This gives almost a 10% speedup to the filter logic. Using the histogram benchmark: Before: # event histogram # # trigger info: hist:keys=delta:vals=hitcount:sort=delta:size=2048 if delta > 0 [active] # { delta: 113 } hitcount: 272 { delta: 114 } hitcount: 840 { delta: 118 } hitcount: 344 { delta: 119 } hitcount: 25428 { delta: 120 } hitcount: 350590 { delta: 121 } hitcount: 1892484 { delta: 122 } hitcount: 6205004 { delta: 123 } hitcount: 11583521 { delta: 124 } hitcount: 37590979 { delta: 125 } hitcount: 108308504 { delta: 126 } hitcount: 131672461 { delta: 127 } hitcount: 88700598 { delta: 128 } hitcount: 65939870 { delta: 129 } hitcount: 45055004 { delta: 130 } hitcount: 33174464 { delta: 131 } hitcount: 31813493 { delta: 132 } hitcount: 29011676 { delta: 133 } hitcount: 22798782 { delta: 134 } hitcount: 22072486 { delta: 135 } hitcount: 17034113 { delta: 136 } hitcount: 8982490 { delta: 137 } hitcount: 2865908 { delta: 138 } hitcount: 980382 { delta: 139 } hitcount: 1651944 { delta: 140 } hitcount: 4112073 { delta: 141 } hitcount: 3963269 { delta: 142 } hitcount: 1712508 { delta: 143 } hitcount: 575941 After: # event histogram # # trigger info: hist:keys=delta:vals=hitcount:sort=delta:size=2048 if delta > 0 [active] # { delta: 103 } hitcount: 60 { delta: 104 } hitcount: 16966 { delta: 105 } hitcount: 396625 { delta: 106 } hitcount: 3223400 { delta: 107 } hitcount: 12053754 { delta: 108 } hitcount: 20241711 { delta: 109 } hitcount: 14850200 { delta: 110 } hitcount: 4946599 { delta: 111 } hitcount: 3479315 { delta: 112 } hitcount: 18698299 { delta: 113 } hitcount: 62388733 { delta: 114 } hitcount: 95803834 { delta: 115 } hitcount: 58278130 { delta: 116 } hitcount: 15364800 { delta: 117 } hitcount: 5586866 { delta: 118 } hitcount: 2346880 { delta: 119 } hitcount: 1131091 { delta: 120 } hitcount: 620896 { delta: 121 } hitcount: 236652 { delta: 122 } hitcount: 105957 { delta: 123 } hitcount: 119107 { delta: 124 } hitcount: 54494 { delta: 125 } hitcount: 63856 { delta: 126 } hitcount: 64454 { delta: 127 } hitcount: 34818 { delta: 128 } hitcount: 41446 { delta: 129 } hitcount: 51242 { delta: 130 } hitcount: 28361 { delta: 131 } hitcount: 23926 The peak before was 126ns per event, after the peak is 114ns, and the fastest time went from 113ns to 103ns. Link: https://lkml.kernel.org/r/20220906225529.781407172@goodmis.org Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Tom Zanussi <zanussi@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
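The shape of the change is roughly the following (enum and helper names are illustrative, not the exact kernel definitions): the chosen predicate is recorded as a small enum value and dispatched through a switch, so no retpoline-penalized indirect call is needed.

        static int filter_pred_fn_call(struct filter_pred *pred, void *event)
        {
                switch (pred->fn_num) {
                case FILTER_PRED_FN_64:
                        return filter_pred_64(pred, event);
                case FILTER_PRED_FN_STRING:
                        return filter_pred_string(pred, event);
                case FILTER_PRED_FN_NOP:
                        return 0;
                default:
                        WARN_ON_ONCE(1);
                        return 0;
                }
        }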
2022-09-26 | tracing: Move struct filter_pred into trace_events_filter.c | Steven Rostedt (Google)
The structure filter_pred and the typedef of the function used are only referenced by trace_events_filter.c. There's no reason to have it in an external header file. Move them into the only file they are used in. Link: https://lkml.kernel.org/r/20220906225529.598047132@goodmis.org Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Tom Zanussi <zanussi@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-09-26 | tracing/hist: Call hist functions directly via a switch statement | Steven Rostedt (Google)
Due to retpolines, indirect calls are much more expensive than direct calls. The histograms have a select set of functions it uses for the histograms, instead of using function pointers to call them, create a hist_fn_call() function that uses a switch statement to call the histogram functions directly. This gives a 13% speedup to the histogram logic. Using the histogram benchmark: Before: # event histogram # # trigger info: hist:keys=delta:vals=hitcount:sort=delta:size=2048 if delta > 0 [active] # { delta: 129 } hitcount: 2213 { delta: 130 } hitcount: 285965 { delta: 131 } hitcount: 1146545 { delta: 132 } hitcount: 5185432 { delta: 133 } hitcount: 19896215 { delta: 134 } hitcount: 53118616 { delta: 135 } hitcount: 83816709 { delta: 136 } hitcount: 68329562 { delta: 137 } hitcount: 41859349 { delta: 138 } hitcount: 46257797 { delta: 139 } hitcount: 54400831 { delta: 140 } hitcount: 72875007 { delta: 141 } hitcount: 76193272 { delta: 142 } hitcount: 49504263 { delta: 143 } hitcount: 38821072 { delta: 144 } hitcount: 47702679 { delta: 145 } hitcount: 41357297 { delta: 146 } hitcount: 22058238 { delta: 147 } hitcount: 9720002 { delta: 148 } hitcount: 3193542 { delta: 149 } hitcount: 927030 { delta: 150 } hitcount: 850772 { delta: 151 } hitcount: 1477380 { delta: 152 } hitcount: 2687977 { delta: 153 } hitcount: 2865985 { delta: 154 } hitcount: 1977492 { delta: 155 } hitcount: 2475607 { delta: 156 } hitcount: 3403612 After: # event histogram # # trigger info: hist:keys=delta:vals=hitcount:sort=delta:size=2048 if delta > 0 [active] # { delta: 113 } hitcount: 272 { delta: 114 } hitcount: 840 { delta: 118 } hitcount: 344 { delta: 119 } hitcount: 25428 { delta: 120 } hitcount: 350590 { delta: 121 } hitcount: 1892484 { delta: 122 } hitcount: 6205004 { delta: 123 } hitcount: 11583521 { delta: 124 } hitcount: 37590979 { delta: 125 } hitcount: 108308504 { delta: 126 } hitcount: 131672461 { delta: 127 } hitcount: 88700598 { delta: 128 } hitcount: 65939870 { delta: 129 } hitcount: 45055004 { delta: 130 } hitcount: 33174464 { delta: 131 } hitcount: 31813493 { delta: 132 } hitcount: 29011676 { delta: 133 } hitcount: 22798782 { delta: 134 } hitcount: 22072486 { delta: 135 } hitcount: 17034113 { delta: 136 } hitcount: 8982490 { delta: 137 } hitcount: 2865908 { delta: 138 } hitcount: 980382 { delta: 139 } hitcount: 1651944 { delta: 140 } hitcount: 4112073 { delta: 141 } hitcount: 3963269 { delta: 142 } hitcount: 1712508 { delta: 143 } hitcount: 575941 { delta: 144 } hitcount: 351427 { delta: 145 } hitcount: 218077 { delta: 146 } hitcount: 167297 { delta: 147 } hitcount: 146198 { delta: 148 } hitcount: 116122 { delta: 149 } hitcount: 58993 { delta: 150 } hitcount: 40228 The delta above is in nanoseconds. It brings the fastest time down from 129ns to 113ns, and the peak from 141ns to 126ns. Link: https://lkml.kernel.org/r/20220906225529.411545333@goodmis.org Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Tom Zanussi <zanussi@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-09-26 | tracing: Add numeric delta time to the trace event benchmark | Steven Rostedt (Google)
In order to test filtering and histograms via the trace event benchmark, record the delta time of the last event as a numeric value (currently, it is only saved within the string) so that filters and histograms can use it. Link: https://lkml.kernel.org/r/20220906225529.213677569@goodmis.org Cc: Ingo Molnar <mingo@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Tom Zanussi <zanussi@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-09-26 | rv/dot2K: add 'static' qualifier for local variable | Zeng Heng
Following Daniel's suggestion, fix the same warning in the template files, which prevents newly generated monitors from triggering it. Link: https://lkml.kernel.org/r/20220824034357.2014202-3-zengheng4@huawei.com Cc: <mingo@redhat.com> Fixes: 24bce201d798 ("tools/rv: Add dot2k") Suggested-by: Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by: Zeng Heng <zengheng4@huawei.com> Acked-by: Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-09-26 | rv/monitors: add 'static' qualifier for local symbols | Zeng Heng
The sparse tool complains as follows: kernel/trace/rv/monitors/wwnr/wwnr.c:18:19: warning: symbol 'rv_wwnr' was not declared. Should it be static? The `rv_wwnr` symbol is not referenced by any other file, so add the static qualifier to it. Do the same for the wip module. Link: https://lkml.kernel.org/r/20220824034357.2014202-2-zengheng4@huawei.com Cc: <mingo@redhat.com> Fixes: ccc319dcb450 ("rv/monitor: Add the wwnr monitor") Fixes: 8812d21219b9 ("rv/monitor: Add the wip monitor skeleton created by dot2k") Signed-off-by: Zeng Heng <zengheng4@huawei.com> Acked-by: Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-09-26 | selftests/ftrace: Add eprobe syntax error testcase | Masami Hiramatsu (Google)
Add a syntax error test case for eprobe as same as kprobes. Link: https://lkml.kernel.org/r/165932115471.2850673.8014722990775242727.stgit@devnote2 Cc: Tzvetomir Stoyanov <tz.stoyanov@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-09-26 | tracing/eprobe: Add eprobe filter support | Masami Hiramatsu (Google)
Add the filter option to the event probe. This is useful if the user wants to derive a new event based on a condition of the original event. E.g.
  echo 'e:egroup/stat_runtime_4core sched/sched_stat_runtime \
        runtime=$runtime:u32 if cpu < 4' >> ../dynamic_events
Then it can filter the events to only the first 4 cores. Note that the fields used in the 'if' clause must be fields of the original event, not of the eprobe event. Link: https://lkml.kernel.org/r/165932114513.2850673.2592206685744598080.stgit@devnote2 Cc: Tzvetomir Stoyanov <tz.stoyanov@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2022-09-26 | a.out: restore CMAGIC | наб
Part of UAPI and the on-disk format: this means that it's not a magic number per magic-number.rst, and it's best to leave it untouched to avoid breaking userspace and suffering the same fate as a.out in general. Fixes: 53c2bd679017 ("a.out: remove define-only CMAGIC, previously magic number") Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz> Link: https://lore.kernel.org/r/20220926151554.7gxd6unp5727vw3c@tarta.nabijaczleweli.xyz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-26 | macsec: don't free NULL metadata_dst | Sabrina Dubroca
Commit 0a28bfd4971f added a metadata_dst to each tx_sc, but that's only allocated when macsec_add_dev has run, which happens after device registration. If the requested or computed SCI already exists, or if linking to the lower device fails, we will panic because metadata_dst_free can't handle NULL. Reproducer: ip link add link $lower type macsec ip link add link $lower type macsec Fixes: 0a28bfd4971f ("net/macsec: Add MACsec skb_metadata_dst Tx Data path support") Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Acked-by: Raed Salem <raeds@nvidia.com> Link: https://lore.kernel.org/r/60f2a1965fe553e2cade9472407d0fafff8de8ce.1663923580.git.sd@queasysnail.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
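One plausible shape of the fix (a sketch; the field name and surrounding code are assumptions based on the description above): free the metadata_dst only if it was actually allocated, so the early error paths survive.

        /* tolerate a tx_sc whose metadata_dst was never allocated,
         * e.g. when macsec_add_dev() has not run yet */
        if (tx_sc->md_dst)
                metadata_dst_free(tx_sc->md_dst);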
2022-09-26 | KVM: remove KVM_REQ_UNHALT | Paolo Bonzini
KVM_REQ_UNHALT is now unnecessary because it is replaced by the return value of kvm_vcpu_block/kvm_vcpu_halt. Remove it. No functional change intended. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Message-Id: <20220921003201.1441511-13-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: mips, x86: do not rely on KVM_REQ_UNHALT | Paolo Bonzini
KVM_REQ_UNHALT is a weird request that simply reports the value of kvm_arch_vcpu_runnable() on exit from kvm_vcpu_halt(). Only MIPS and x86 are looking at it; the others just clear it. Check the state of the vCPU directly so that the request is handled as a nop on all architectures. No functional change intended, except for corner cases where an event arrives immediately after a signal becomes pending or after another similar host-side event. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20220921003201.1441511-12-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: x86: never write to memory from kvm_vcpu_check_block() | Paolo Bonzini
kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore it cannot sleep. Writing to guest memory is therefore forbidden, but it can happen on AMD processors if kvm_check_nested_events() causes a vmexit. Fortunately, all events that are caught by kvm_check_nested_events() are also recognized by kvm_vcpu_has_events() through vendor callbacks such as kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so remove the call and postpone the actual processing to vcpu_block(). Opportunistically honor the return of kvm_check_nested_events(). KVM punted on the check in kvm_vcpu_running() because the only error path is if vmx_complete_nested_posted_interrupt() fails, in which case KVM exits to userspace with "internal error" i.e. the VM is likely dead anyways so it wasn't worth overloading the return of kvm_vcpu_running(). Add the check mostly so that KVM is consistent with itself; the return of the call via kvm_apic_accept_events()=>kvm_check_nested_events() that immediately follows _is_ checked. Reported-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [sean: check and handle return of kvm_check_nested_events()] Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-11-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: x86: Don't snapshot pending INIT/SIPI prior to checking nested events | Sean Christopherson
Don't snapshot pending INIT/SIPI events prior to checking nested events; architecturally there's nothing wrong with KVM processing (dropping) a SIPI that is received immediately after synthesizing a VM-Exit. Taking and consuming the snapshot makes the flow way more subtle than it needs to be, e.g. nVMX consumes/clears events that trigger VM-Exit (INIT/SIPI), and so at first glance it appears that KVM is double-dipping on pending INITs and SIPIs. But that's not the case because INIT is blocked unconditionally in VMX root mode and the CPU cannot be in wait-for-SIPI after VM-Exit, i.e. the paths that truly consume the snapshot are unreachable if apic->pending_events is modified by kvm_check_nested_events(). nSVM is a similar story as GIF is cleared by the CPU on VM-Exit; INIT is blocked regardless of whether or not it was pending prior to VM-Exit. Drop the snapshot logic so that a future fix doesn't create weirdness when kvm_vcpu_running()'s call to kvm_check_nested_events() is moved to vcpu_block(). In that case, kvm_check_nested_events() will be called immediately before kvm_apic_accept_events(), which raises the obvious question of why that change doesn't break the snapshot logic. Note, there is a subtle functional change. Previously, KVM would clear pending SIPIs if and only if SIPI was pending prior to VM-Exit, whereas now KVM clears pending SIPI unconditionally if INIT+SIPI are blocked. The latter is architecturally allowed, as SIPI is ignored if the CPU is not in wait-for-SIPI mode (arguably, KVM should be even more aggressive in dropping SIPIs). It is software's responsibility to ensure the SIPI is delivered, i.e. software shouldn't be firing INIT-SIPI at a CPU until it knows with 100% certainty that the target CPU isn't in VMX root mode. Furthermore, the existing code is extra weird as SIPIs that arrive after VM-Exit _are_ dropped if there also happened to be a pending SIPI before VM-Exit. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-10-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: nVMX: Make event request on VMXOFF iff INIT/SIPI is pending | Sean Christopherson
Explicitly check for a pending INIT/SIPI event when emulating VMXOFF instead of blindly making an event request. There's obviously no need to evaluate events if none are pending. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-9-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: nVMX: Make an event request if INIT or SIPI is pending on VM-Enter | Sean Christopherson
Evaluate interrupts, i.e. set KVM_REQ_EVENT, if INIT or SIPI is pending when emulating nested VM-Enter. INIT is blocked while the CPU is in VMX root mode, but not in VMX non-root, i.e. becomes unblocked on VM-Enter. This bug has been masked by KVM calling ->check_nested_events() in the core run loop, but that hack will be fixed in the near future. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-8-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: SVM: Make an event request if INIT or SIPI is pending when GIF is set | Sean Christopherson
Set KVM_REQ_EVENT if INIT or SIPI is pending when the guest enables GIF. INIT in particular is blocked when GIF=0 and needs to be processed when GIF is toggled to '1'. This bug has been masked by (a) KVM calling ->check_nested_events() in the core run loop and (b) hypervisors toggling GIF from 0=>1 only when entering guest mode (L1 entering L2). Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-7-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: x86: lapic does not have to process INIT if it is blocked | Paolo Bonzini
Do not return true from kvm_vcpu_has_events() if the vCPU isn't going to immediately process a pending INIT/SIPI. INIT/SIPI shouldn't be treated as wake events if they are blocked. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [sean: rebase onto refactored INIT/SIPI helpers, massage changelog] Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-6-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: x86: Rename kvm_apic_has_events() to make it INIT/SIPI specific | Sean Christopherson
Rename kvm_apic_has_events() to kvm_apic_has_pending_init_or_sipi() so that it's more obvious that "events" really just means "INIT or SIPI". Opportunistically clean up a weirdly worded comment that referenced kvm_apic_has_events() instead of kvm_apic_accept_events(). No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: x86: Rename and expose helper to detect if INIT/SIPI are allowed | Sean Christopherson
Rename and invert kvm_vcpu_latch_init() to kvm_apic_init_sipi_allowed() so as to match the behavior of {interrupt,nmi,smi}_allowed(), and expose the helper so that it can be used by kvm_vcpu_has_events() to determine whether or not an INIT or SIPI is pending _and_ can be taken immediately. Opportunistically replaced usage of the "latch" terminology with "blocked" and/or "allowed", again to align with KVM's terminology used for all other event types. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-4-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: nVMX: Make an event request when pending an MTF nested VM-Exit | Sean Christopherson
Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run through inject_pending_event() and thus vmx_check_nested_events() prior to re-entering the guest. MTF currently works by virtue of KVM's hack that calls kvm_check_nested_events() from kvm_vcpu_running(), but that hack will be removed in the near future. Until that call is removed, the patch introduces no real functional change. Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation") Cc: stable@vger.kernel.org Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | KVM: x86: make vendor code check for all nested events | Paolo Bonzini
Interrupts, NMIs etc. sent while in guest mode are already handled properly by the *_interrupt_allowed callbacks, but other events can cause a vCPU to be runnable that are specific to guest mode. In the case of VMX there are two, the preemption timer and the monitor trap. The VMX preemption timer is already special cased via the hv_timer_pending callback, but the purpose of the callback can be easily extended to MTF or in fact any other event that can occur only in guest mode. Rename the callback and add an MTF check; kvm_arch_vcpu_runnable() now can return true if an MTF is pending, without relying on kvm_vcpu_running()'s call to kvm_check_nested_events(). Until that call is removed, however, the patch introduces no functional change. Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220921003201.1441511-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26 | clk: qcom: lcc-ipq806x: use ARRAY_SIZE for num_parents | Christian Marangi
Use ARRAY_SIZE for num_parents instead of raw number to prevent any confusion/mistake. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220724182329.9891-4-ansuelsmth@gmail.com
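An illustrative example of the pattern (the table name is hypothetical): sizing num_parents from the parent table keeps the count and the table from drifting apart.

        static const char * const lcc_pxo_parents[] = { "pxo" };
        ...
        .parent_names = lcc_pxo_parents,
        .num_parents = ARRAY_SIZE(lcc_pxo_parents),  /* instead of a raw "1" */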
2022-09-26 | clk: qcom: lcc-ipq806x: convert to parent data | Christian Marangi
Convert lcc-ipq806x driver to parent_data API. Change parent_name for pll4 to pxo_board to prepare the future to eventually drop the double pxo board clk. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220724182329.9891-3-ansuelsmth@gmail.com
2022-09-26 | clk: qcom: lcc-ipq806x: add reset definition | Christian Marangi
Add reset definition for lcc-ipq806x. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220724182329.9891-2-ansuelsmth@gmail.com
2022-09-26 | dt-bindings: clock: add pcm reset for ipq806x lcc | Christian Marangi
Add pcm reset define for ipq806x lcc. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220724182329.9891-1-ansuelsmth@gmail.com
2022-09-26 | clk: qcom: cpu-8996: use constant mask for pmux | Dmitry Baryshkov
Both pmux instances share the same width and shift. Specify the mask at compile time to simplify functions. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220714100351.1834711-7-dmitry.baryshkov@linaro.org
2022-09-26 | clk: qcom: cpu-8996: don't store parents in clk_cpu_8996_pmux | Dmitry Baryshkov
Don't store pointers to parents in struct clk_cpu_8996_pmux. Instead use clk_hw_get_parent_by_index to fetch them. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220714100351.1834711-6-dmitry.baryshkov@linaro.org
2022-09-26 | clk: qcom: cpu-8996: move ACD logic to clk_cpu_8996_pmux_determine_rate | Dmitry Baryshkov
Rather than telling everybody that we are using the PLL as a parent (while actually using the ACD clock instead), properly select ACD as the pmux parent clock. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220714100351.1834711-5-dmitry.baryshkov@linaro.org
2022-09-26 | clk: qcom: cpu-8996: declare ACD clocks | Dmitry Baryshkov
To simplify the code, define 1:1 fixed factor clocks to represent the ACD pmux parent. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220714100351.1834711-4-dmitry.baryshkov@linaro.org
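A sketch of such a 1:1 fixed-factor clock (the names are illustrative): it gives the ACD output a real clk_hw that the pmux can list as a parent.

        static struct clk_fixed_factor pwrcl_acd = {
                .mult = 1,
                .div = 1,
                .hw.init = &(struct clk_init_data){
                        .name = "pwrcl_acd",
                        .parent_names = (const char *[]){ "pwrcl_pll" },
                        .num_parents = 1,
                        .ops = &clk_fixed_factor_ops,
                },
        };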
2022-09-26 | clk: qcom: cpu-8996: switch to devm_clk_notifier_register | Dmitry Baryshkov
Switch to using devres-managed version of clk_notifier_register(). This allows us to drop driver's remove() callback. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220714100351.1834711-3-dmitry.baryshkov@linaro.org
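A sketch of the devres-managed registration ('dev', 'clk' and 'nb' stand in for the driver's own objects): once registered this way, the notifier is removed automatically on driver detach, so the remove() callback can go away.

        ret = devm_clk_notifier_register(dev, clk, &nb);
        if (ret)
                return ret;
        /* no clk_notifier_unregister() call in remove() is needed anymore */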
2022-09-26 | clk: qcom: msm8996-cpu: Use parent_data/_hws for all clocks | Yassine Oudjana
Replace parent_names in PLLs, secondary muxes and primary muxes with parent_data. For primary muxes there were never any *cl_pll_acd clocks, so instead of adding them, put the primary PLLs in both PLL_INDEX and ACD_INDEX, then make sure ACD_INDEX is always picked over PLL_INDEX when setting parent since we always want ACD when using the primary PLLs. Signed-off-by: Yassine Oudjana <y.oudjana@protonmail.com> [DB: switch to parent_hws for pmux clocks] Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20220714100351.1834711-2-dmitry.baryshkov@linaro.org