path: root/kernel
2023-01-13bpf: Fix pointer-leak due to insufficient speculative store bypass mitigationLuis Gerhorst
To mitigate Spectre v4, 2039f26f3aca ("bpf: Fix leakage due to insufficient speculative store bypass mitigation") inserts lfence instructions after 1) initializing a stack slot and 2) spilling a pointer to the stack. However, this does not cover cases where a stack slot is first initialized with a pointer (subject to sanitization) but then overwritten with a scalar (not subject to sanitization because the slot was already initialized). In this case, the second write may be subject to speculative store bypass (SSB) creating a speculative pointer-as-scalar type confusion. This allows the program to subsequently leak the numerical pointer value using, for example, a branch-based cache side channel. To fix this, also sanitize scalars if they write a stack slot that previously contained a pointer. Assuming that pointer-spills are only generated by LLVM under register pressure, the performance impact on most real-world BPF programs should be small. The following unprivileged BPF bytecode drafts a minimal exploit and the mitigation: [...] // r6 = 0 or 1 (scalar, unknown user input) // r7 = accessible ptr for side channel // r10 = frame pointer (fp), to be leaked // r9 = r10 # fp alias to encourage ssb *(u64 *)(r9 - 8) = r10 // fp[-8] = ptr, to be leaked // lfence added here because of pointer spill to stack. // // Omitted: Dummy bpf_ringbuf_output() here to train alias predictor // for no r9-r10 dependency. // *(u64 *)(r10 - 8) = r6 // fp[-8] = scalar, overwrites ptr // 2039f26f3aca: no lfence added because stack slot was not STACK_INVALID, // store may be subject to SSB // // fix: also add an lfence when the slot contained a ptr // r8 = *(u64 *)(r9 - 8) // r8 = architecturally a scalar, speculatively a ptr // // leak ptr using branch-based cache side channel: r8 &= 1 // choose bit to leak if r8 == 0 goto SLOW // no mispredict // architecturally dead code if input r6 is 0, // only executes speculatively iff ptr bit is 1 r8 = *(u64 *)(r7 + 0) # encode bit in cache (0: slow, 1: fast) SLOW: [...] After running this, the program can time the access to *(r7 + 0) to determine whether the chosen pointer bit was 0 or 1. Repeat this 64 times to recover the whole address on amd64. In summary, sanitization can only be skipped if one scalar is overwritten with another scalar. Scalar-confusion due to speculative store bypass cannot lead to invalid accesses because the pointer bounds deduced during verification are enforced using branchless logic. See 979d63d50c0c ("bpf: prevent out of bounds speculation on pointer arithmetic") for details. Do not make the mitigation depend on !env->allow_{uninit_stack,ptr_leaks} because speculative leaks are likely unexpected if these were enabled. For example, leaking the address to a protected log file may be acceptable while disabling the mitigation might unintentionally leak the address into the cached-state of a map that is accessible to unprivileged processes. Fixes: 2039f26f3aca ("bpf: Fix leakage due to insufficient speculative store bypass mitigation") Signed-off-by: Luis Gerhorst <gerhorst@cs.fau.de> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Henriette Hofmeier <henriette.hofmeier@rub.de> Link: https://lore.kernel.org/bpf/edc95bad-aada-9cfc-ffe2-fa9bb206583c@cs.fau.de Link: https://lore.kernel.org/bpf/20230109150544.41465-1-gerhorst@cs.fau.de
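To make the new rule concrete, here is a stand-alone sketch of when a stack store now needs Spectre-v4 sanitization. This is not the verifier's actual code; the enum and function names are simplified assumptions.

    /* Simplified illustration, not kernel code. */
    enum slot_state { SLOT_UNINIT, SLOT_SPILLED_PTR, SLOT_SCALAR };

    static int store_needs_nospec(enum slot_state old, int storing_scalar)
    {
            if (old == SLOT_UNINIT)
                    return 1;   /* already handled by 2039f26f3aca */
            if (storing_scalar && old == SLOT_SPILLED_PTR)
                    return 1;   /* new case: scalar overwrites a spilled ptr */
            return 0;           /* scalar over scalar: sanitization skipped */
    }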
2023-01-13context_tracking: Fix noinstr vs KASANPeter Zijlstra
Low level noinstr context-tracking code is calling out to instrumented code on KASAN: vmlinux.o: warning: objtool: __ct_user_enter+0x72: call to __kasan_check_write() leaves .noinstr.text section vmlinux.o: warning: objtool: __ct_user_exit+0x47: call to __kasan_check_write() leaves .noinstr.text section Use even lower level atomic methods to avoid the instrumentation. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20230112195542.458034262@infradead.org
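As a rough illustration (an assumption about the wrapper layout, not a quote of the kernel headers), the instrumented atomic wrappers add KASAN checks on top of the raw arch_ operations, which is why noinstr code has to call the lower-level variants directly:

    /* Sketch of an instrumented wrapper (simplified): */
    static __always_inline int atomic_add_return(int i, atomic_t *v)
    {
            kasan_check_write(v, sizeof(*v));       /* leaves .noinstr.text */
            return arch_atomic_add_return(i, v);    /* raw, uninstrumented op */
    }
    /* noinstr context-tracking code therefore uses the raw arch_atomic_*()
     * operations directly to avoid the KASAN calls. */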
2023-01-13tracing, hardirq: No moar _rcuidle() tracingPeter Zijlstra
Robot reported that trace_hardirqs_{on,off}() tickle the forbidden _rcuidle() tracepoint through local_irq_{en,dis}able(). For 'sane' configs, these calls will only happen with RCU enabled and as such can use the regular tracepoint. This also means it's possible to trace them from NMI context again. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20230112195541.477416709@infradead.org
2023-01-13tracing: WARN on rcuidlePeter Zijlstra
ARCH_WANTS_NO_INSTR (a superset of CONFIG_GENERIC_ENTRY) disallows any and all tracing when RCU isn't enabled. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195541.416110581@infradead.org
2023-01-13tracing: Remove trace_hardirqs_{on,off}_caller()Peter Zijlstra
Per commit 56e62a737028 ("s390: convert to generic entry") the last and only callers of trace_hardirqs_{on,off}_caller() went away, clean up. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20230112195541.355283994@infradead.org
2023-01-13time/tick-broadcast: Remove RCU_NONIDLE() usagePeter Zijlstra
No callers left that have already disabled RCU. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195540.927904612@infradead.org
2023-01-13printk: Remove trace_.*_rcuidle() usagePeter Zijlstra
The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console tracepoint"), was printk usage from the cpuidle path where RCU was already disabled. Per the patches earlier in this series, this is no longer the case. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Acked-by: Petr Mladek <pmladek@suse.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195540.865735001@infradead.org
2023-01-13arch/idle: Change arch_cpu_idle() behavior: always exit with IRQs disabledPeter Zijlstra
Current arch_cpu_idle() is called with IRQs disabled, but will return with IRQs enabled. However, the very first thing the generic code does after calling arch_cpu_idle() is raw_local_irq_disable(). This means that architectures that can idle with IRQs disabled end up doing a pointless 'enable-disable' dance. Therefore, push this IRQ disabling into the idle function, meaning that those architectures can avoid the pointless IRQ state flipping. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64] Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Guo Ren <guoren@kernel.org> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195540.618076436@infradead.org
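A minimal sketch of the calling-convention change described above (illustrative pseudo-flow, not the exact generic idle loop):

    /* Before: arch_cpu_idle() re-enabled IRQs on its way out ... */
    arch_cpu_idle();            /* returns with IRQs enabled       */
    raw_local_irq_disable();    /* ... only to disable them again  */

    /* After: arch_cpu_idle() must return with IRQs disabled, so an
     * architecture that can idle with IRQs off does nothing extra: */
    arch_cpu_idle();            /* returns with IRQs still disabled */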
2023-01-13cpuidle, cpu_pm: Remove RCU fiddling from cpu_pm_{enter,exit}()Peter Zijlstra
All callers should still have RCU enabled. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195540.190860672@infradead.org
2023-01-13cpuidle: Fix ct_idle_*() usagePeter Zijlstra
The whole disable-RCU, enable-IRQS dance is very intricate since changing IRQ state is traced, which depends on RCU. Add two helpers for the cpuidle case that mirror the entry code: ct_cpuidle_enter() ct_cpuidle_exit() And fix all the cases where the enter/exit dance was buggy. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Tony Lindgren <tony@atomide.com> Tested-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://lore.kernel.org/r/20230112195540.130014793@infradead.org
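An assumed usage sketch of the new helpers in a cpuidle ->enter() callback; platform_do_idle() is a hypothetical placeholder for the platform's low-level idle primitive:

    static int my_idle_enter(struct cpuidle_device *dev,
                             struct cpuidle_driver *drv, int index)
    {
            ct_cpuidle_enter();      /* RCU/context tracking: going idle      */
            platform_do_idle();      /* hypothetical low-level idle, IRQs off */
            ct_cpuidle_exit();       /* RCU is watching again before tracing  */
            return index;
    }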
2023-01-13sched/fair: Fixes for capacity inversion detectionQais Yousef
Traversing the Perf Domains requires rcu_read_lock() to be held and is conditional on sched_energy_enabled(). Ensure the right protections are applied. Also skip capacity inversion detection for our own pd, which was an error. Fixes: 44c7b80bffc3 ("sched/fair: Detect capacity inversion") Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lore.kernel.org/r/20230112122708.330667-3-qyousef@layalina.io
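A sketch of the intended protection (simplified; pd_includes_this_cpu() is a hypothetical helper standing in for the actual cpumask test):

    if (sched_energy_enabled()) {
            rcu_read_lock();
            for (pd = rcu_dereference(rd->pd); pd; pd = pd->next) {
                    if (pd_includes_this_cpu(pd))   /* hypothetical helper */
                            continue;               /* skip our own pd     */
                    /* ... compare capacities to detect inversion ... */
            }
            rcu_read_unlock();
    }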
2023-01-13sched/uclamp: Fix a uninitialized variable warningsQais Yousef
Addresses the following warnings: > config: riscv-randconfig-m031-20221111 > compiler: riscv64-linux-gcc (GCC) 12.1.0 > > smatch warnings: > kernel/sched/fair.c:7263 find_energy_efficient_cpu() error: uninitialized symbol 'util_min'. > kernel/sched/fair.c:7263 find_energy_efficient_cpu() error: uninitialized symbol 'util_max'. Fixes: 244226035a1f ("sched/uclamp: Fix fits_capacity() check in feec()") Reported-by: kernel test robot <lkp@intel.com> Reported-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lore.kernel.org/r/20230112122708.330667-2-qyousef@layalina.io
2023-01-12Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski
drivers/net/usb/r8152.c be53771c87f4 ("r8152: add vendor/device ID pair for Microsoft Devkit") ec51fbd1b8a2 ("r8152: add USB device driver for config selection") https://lore.kernel.org/all/20230113113339.658c4723@canb.auug.org.au/ Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-01-12bpf: hash map, avoid deadlock with suitable hash maskTonghao Zhang
The deadlock may still occur when the map is accessed in both NMI and non-NMI context, because in NMI we may access the same bucket but with a different map_locked index. For example, on the same CPU with .max_entries = 2, we update the hash map with key = 4 while a bpf prog running in NMI nmi_handle() updates the hash map with key = 20; the two updates hit the same bucket index but different map_locked indexes. To fix this issue, use the min mask to hash again. Fixes: 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked") Signed-off-by: Tonghao Zhang <tong@infragraf.org> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Martin KaFai Lau <martin.lau@linux.dev> Cc: Song Liu <song@kernel.org> Cc: Yonghong Song <yhs@fb.com> Cc: John Fastabend <john.fastabend@gmail.com> Cc: KP Singh <kpsingh@kernel.org> Cc: Stanislav Fomichev <sdf@google.com> Cc: Hao Luo <haoluo@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Hou Tao <houtao1@huawei.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20230111092903.92389-1-tong@infragraf.org Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
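A sketch of the idea (simplified, not the exact kernel function): derive the map_locked index with the smaller of the two masks, so NMI and non-NMI updates that hit the same bucket also pick the same lock slot.

    static inline u32 map_locked_index(u32 hash, u32 n_buckets)
    {
            /* HASHTAB_MAP_LOCK_MASK is the per-map lock-array mask. */
            u32 mask = min(n_buckets - 1, (u32)HASHTAB_MAP_LOCK_MASK);

            return hash & mask;
    }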
2023-01-12Merge tag 'timers-urgent-2023-01-12' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull timer doc fixes from Ingo Molnar: - Fix various DocBook formatting errors in kernel/time/ that generated (justified) warnings during a kernel-doc build. * tag 'timers-urgent-2023-01-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: time: Fix various kernel-doc problems
2023-01-12Merge tag 'sched-urgent-2023-01-12' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull scheduler fixes from Ingo Molnar: - Fix scheduler frequency invariance bug related to overly long tickless periods triggering an integer overflow and disabling the feature. - Fix use-after-free bug in dup_user_cpus_ptr(). - Fix do_set_cpus_allowed() deadlock scenarios related to calling kfree() with the pi_lock held. NOTE: the rcu_free() is the 'lazy' solution here - we looked at patches to free the structure after the pi_lock got dropped, but that looked quite a bit messier - and none of this is truly performance critical. We can revisit this if it's too lazy of a solution ... * tag 'sched-urgent-2023-01-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: sched/core: Use kfree_rcu() in do_set_cpus_allowed() sched/core: Fix use-after-free bug in dup_user_cpus_ptr() sched/core: Fix arch_scale_freq_tick() on tickless systems
2023-01-12rcu: Remove redundant call to rcu_boost_kthread_setaffinity()Zqiang
The rcu_boost_kthread_setaffinity() function is invoked at rcutree_online_cpu() and rcutree_offline_cpu() time, early in the online timeline and late in the offline timeline, respectively. It is also invoked from rcutree_dead_cpu(); however, in the absence of userspace manipulations (for which userspace must take responsibility), this call is redundant with that from rcutree_offline_cpu(). This redundancy can be demonstrated by printing out the relevant cpumasks. This commit therefore removes the call to rcu_boost_kthread_setaffinity() from rcutree_dead_cpu(). Signed-off-by: Zqiang <qiang1.zhang@intel.com> Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
2023-01-12workqueue: Unbind kworkers before sending them to exit()Valentin Schneider
It has been reported that isolated CPUs can suffer from interference due to per-CPU kworkers waking up just to die. A surge of workqueue activity during initial setup of a latency-sensitive application (refresh_vm_stats() being one of the culprits) can cause extra per-CPU kworkers to be spawned. Then, said latency-sensitive task can be running merrily on an isolated CPU only to be interrupted sometime later by a kworker marked for death (cf. IDLE_WORKER_TIMEOUT, 5 minutes after last kworker activity). Prevent this by affining kworkers to the wq_unbound_cpumask (which doesn't contain isolated CPUs, cf. HK_TYPE_WQ) before waking them up after marking them with WORKER_DIE. Changing the affinity does require a sleepable context; leverage the newly introduced pool->idle_cull_work to get that. Remove dying workers from pool->workers and keep track of them in a separate list. This intentionally prevents for_each_loop_worker() from iterating over workers that are marked for death. Rename destroy_worker() to set_worker_dying() to better reflect its effects and relationship with wake_dying_workers(). Signed-off-by: Valentin Schneider <vschneid@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2023-01-12workqueue: Don't hold any lock while rcuwait'ing for !POOL_MANAGER_ACTIVEValentin Schneider
put_unbound_pool() currently passes wq_manager_inactive() as exit condition to rcuwait_wait_event(), which grabs pool->lock to check for pool->flags & POOL_MANAGER_ACTIVE. A later patch will require destroy_worker() to be invoked with wq_pool_attach_mutex held, which needs to be acquired before pool->lock. A mutex cannot be acquired within rcuwait_wait_event(), as it could clobber the task state set by rcuwait_wait_event(). Instead, restructure the waiting logic to acquire any necessary lock outside of rcuwait_wait_event(). Since further work cannot be inserted into unbound pwqs that have reached ->refcnt==0, this is bound to make forward progress as eventually the worklist will be drained and need_more_worker(pool) will remain false, preventing any worker from stealing the manager position from us. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Valentin Schneider <vschneid@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2023-01-12workqueue: Convert the idle_timer to a timer + work_structValentin Schneider
A later patch will require a sleepable context in the idle worker timeout function. Converting worker_pool.idle_timer to a delayed_work gives us just that, however this would imply turning all idle_timer expiries into scheduler events (waking up a worker to handle the dwork). Instead, implement a "custom dwork" where the timer callback does some extra checks before queuing the associated work. No change in functionality intended. Signed-off-by: Valentin Schneider <vschneid@redhat.com> Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
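A sketch of the "custom dwork" pattern described above (struct my_pool, too_many_idle_workers() and IDLE_TIMEOUT are hypothetical names): the timer callback only queues the sleepable work when there is actually something to cull.

    static void idle_timer_fn(struct timer_list *t)
    {
            struct my_pool *pool = from_timer(pool, t, idle_timer);

            if (!too_many_idle_workers(pool)) {
                    /* nothing to cull: just re-arm, no scheduler wakeup */
                    mod_timer(&pool->idle_timer, jiffies + IDLE_TIMEOUT);
                    return;
            }
            queue_work(system_unbound_wq, &pool->idle_cull_work);
    }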
2023-01-12workqueue: Factorize unbind/rebind_workers() logicValentin Schneider
Later patches will reuse this code, move it into reusable functions. Signed-off-by: Valentin Schneider <vschneid@redhat.com> Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2023-01-12workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutexLai Jiangshan
When unbind_workers() reads wq_unbound_cpumask to set the affinity of freshly-unbound kworkers, it only holds wq_pool_attach_mutex. This isn't sufficient as wq_unbound_cpumask is only protected by wq_pool_mutex. Make wq_unbound_cpumask protected with wq_pool_attach_mutex and also remove the need of temporary saved_cpumask. Fixes: 10a5a651e3af ("workqueue: Restrict kworker in the offline CPU pool running on housekeeping CPUs") Reported-by: Valentin Schneider <vschneid@redhat.com> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2023-01-11irq/s390: Add arch_is_isolated_msi() for s390Jason Gunthorpe
s390 doesn't use irq_domains, so it has no place to set IRQ_DOMAIN_FLAG_ISOLATED_MSI. Instead of continuing to abuse the iommu subsystem to convey this information, add a simple define which s390 can make statically true. The define will cause msi_device_has_isolated_msi() to return true. Remove IOMMU_CAP_INTR_REMAP from the s390 iommu driver. Link: https://lore.kernel.org/r/8-v3-3313bb5dd3a3+10f11-secure_msi_jgg@nvidia.com Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Tested-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-01-11genirq/msi: Rename IRQ_DOMAIN_MSI_REMAP to IRQ_DOMAIN_ISOLATED_MSIJason Gunthorpe
What x86 calls "interrupt remapping" is one way to achieve isolated MSI, make it clear this is talking about isolated MSI, no matter how it is achieved. This matches the new driver facing API name of msi_device_has_isolated_msi() No functional change. Link: https://lore.kernel.org/r/6-v3-3313bb5dd3a3+10f11-secure_msi_jgg@nvidia.com Tested-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-01-11genirq/irqdomain: Remove unused irq_domain_check_msi_remap() codeJason Gunthorpe
After converting the users of irq_domain_check_msi_remap() it and the helpers are no longer needed. The new version does not require all the #ifdef helpers and inlines because CONFIG_GENERIC_MSI_IRQ always requires CONFIG_IRQ_DOMAIN and IRQ_DOMAIN_HIERARCHY. Link: https://lore.kernel.org/r/5-v3-3313bb5dd3a3+10f11-secure_msi_jgg@nvidia.com Tested-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-01-11genirq/msi: Add msi_device_has_isolated_msi()Jason Gunthorpe
This will replace irq_domain_check_msi_remap() in the following patches. The new API makes it clearer what "msi_remap" actually means from a functional perspective instead of identifying an implementation-specific HW feature. Isolated MSI means that HW modeled by an irq_domain on the path from the initiating device to the CPU will validate that the MSI message specifies an interrupt number that the device is authorized to trigger. This must block devices from triggering interrupts they are not authorized to trigger. Currently authorization means the MSI vector is one assigned to the device. This is interesting for securing VFIO use cases where a rogue MSI (e.g. created by abusing a normal PCI MemWr DMA) must not allow the VFIO userspace to impact outside its security domain, e.g. userspace triggering interrupts on kernel drivers, a VM triggering interrupts on the hypervisor, or a VM triggering interrupts on another VM. As this is actually modeled as a per-irq_domain property, not a global platform property, correct the interface to accept the device parameter and scan through only the part of the irq_domains hierarchy originating from the source device. Locate the new code in msi.c as it naturally only works with CONFIG_GENERIC_MSI_IRQ, which also requires CONFIG_IRQ_DOMAIN and IRQ_DOMAIN_HIERARCHY. Link: https://lore.kernel.org/r/1-v3-3313bb5dd3a3+10f11-secure_msi_jgg@nvidia.com Tested-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
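An assumed usage sketch of the new device-scoped query (the surrounding policy helper is illustrative, not VFIO's actual code):

    static bool msi_is_trusted(struct device *dev, bool allow_unsafe_interrupts)
    {
            if (msi_device_has_isolated_msi(dev))
                    return true;             /* HW validates the MSI message    */
            return allow_unsafe_interrupts;  /* otherwise require explicit opt-in */
    }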
2023-01-11genirq: Add might_sleep() to disable_irq()Manfred Spraul
With the introduction of threaded interrupt handlers, it is virtually never safe to call disable_irq() from non-preemptible context. Thus: update the documentation and add an explicit might_sleep() to catch any offenders. This is more obvious and straightforward than the implicit might_sleep() check deeper down in the disable_irq() call chain. Fixes: 3aa551c9b4c4 ("genirq: add threaded interrupt handler support") Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20221216150441.200533-3-manfred@colorfullife.com
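A sketch of the resulting shape of disable_irq() (simplified):

    void disable_irq(unsigned int irq)
    {
            might_sleep();                  /* new: catches atomic-context callers */
            if (!__disable_irq_nosync(irq))
                    synchronize_irq(irq);   /* may sleep on threaded handlers */
    }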
2023-01-11timers: Prevent union confusion from unexpected restart_syscall()Jann Horn
The nanosleep syscalls use the restart_block mechanism, with a quirk: The `type` and `rmtp`/`compat_rmtp` fields are set up unconditionally on syscall entry, while the rest of the restart_block is only set up in the unlikely case that the syscall is actually interrupted by a signal (or pseudo-signal) that doesn't have a signal handler. If the restart_block was set up by a previous syscall (futex(..., FUTEX_WAIT, ...) or poll()) and hasn't been invalidated somehow since then, this will clobber some of the union fields used by futex_wait_restart() and do_restart_poll(). If userspace afterwards wrongly calls the restart_syscall syscall, futex_wait_restart()/do_restart_poll() will read struct fields that have been clobbered. This doesn't actually lead to anything particularly interesting because none of the union fields contain trusted kernel data, and futex(..., FUTEX_WAIT, ...) and poll() aren't syscalls where it makes much sense to apply seccomp filters to their arguments. So the current consequences are just of the "if userspace does bad stuff, it can damage itself, and that's not a problem" flavor. But still, it seems like a hazard for future developers, so invalidate the restart_block when partly setting it up in the nanosleep syscalls. Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230105134403.754986-1-jannh@google.com
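One way to express the invalidation (illustrative sketch only; the actual patch may structure this differently):

    struct restart_block *restart = &current->restart_block;

    /* Drop any stale futex()/poll() state before the partial setup, so a
     * spurious restart_syscall() cannot consume clobbered union fields. */
    restart->fn = do_no_restart_syscall;
    restart->nanosleep.type = TT_NATIVE;   /* partial setup done on entry */
    restart->nanosleep.rmtp = rmtp;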
2023-01-11printk: adjust string limit macrosJohn Ogness
The various internal size limit macros have names and/or values that do not fit well to their current usage. Rename the macros so that their purpose is clear and, if needed, provide a more appropriate value. In general, the new macros and values will lead to less memory usage. The new macros are... PRINTK_MESSAGE_MAX: This is the maximum size for a formatted message on a console, devkmsg, or syslog. It does not matter which format the message has (normal or extended). It replaces the use of CONSOLE_EXT_LOG_MAX for console and devkmsg. It replaces the use of CONSOLE_LOG_MAX for syslog. Historically, normal messages have been allowed to print up to 1kB, whereas extended messages have been allowed to print up to 8kB. However, the difference in lengths of these message types is not significant and in multi-line records, normal messages are probably larger. Also, because 1kB is only slightly above the allowed record size, multi-line normal messages could be easily truncated during formatting. This new macro should be significantly larger than the allowed record size to allow sufficient space for extended or multi-line prefix text. A value of 2kB should be plenty of space. For normal messages this represents a doubling of the historically allowed amount. For extended messages it reduces the excessive 8kB size, thus reducing memory usage needed for message formatting. PRINTK_PREFIX_MAX: This is the maximum size allowed for a record prefix (used by console and syslog). It replaces PREFIX_MAX. The value is left unchanged. PRINTKRB_RECORD_MAX: This is the maximum size allowed to be reserved for a record in the ringbuffer. It is used by all readers and writers with the printk ringbuffer. It replaces LOG_LINE_MAX. Previously this was set to "1kB - PREFIX_MAX", which makes some sense if 1kB is the limit for normal message output and prefixes are enabled. However, with the allowance of larger output and the existence of multi-line records, the value is rather bizarre. Round the value up to 1kB. Signed-off-by: John Ogness <john.ogness@linutronix.de> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230109100800.1085541-9-john.ogness@linutronix.de
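The resulting limits, per the description above (values as stated in the text; the exact definitions in internal.h may be expressed differently):

    #define PRINTK_MESSAGE_MAX  2048   /* formatted console/devkmsg/syslog output */
    #define PRINTKRB_RECORD_MAX 1024   /* max reservation for a ringbuffer record */
    /* PRINTK_PREFIX_MAX keeps the previous PREFIX_MAX value unchanged. */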
2023-01-11printk: use printk_buffers for devkmsgJohn Ogness
Replace the buffers in struct devkmsg_user with a struct printk_buffers. This reduces the number of buffers to keep track of. As a side-effect, @text_buf was 8kB large, even though it only needed to be the max size of a ringbuffer record. By switching to struct printk_buffers, ~7kB less memory is allocated when opening /dev/kmsg. And since struct printk_buffers will be used now, reduce duplicate code by calling printk_get_next_message() to handle the record reading and formatting. Note that since /dev/kmsg never suppresses records based on loglevel, printk_get_next_message() is extended with an extra bool argument to specify if suppression is allowed. Signed-off-by: John Ogness <john.ogness@linutronix.de> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230109100800.1085541-8-john.ogness@linutronix.de
2023-01-11printk: introduce console_prepend_dropped() for dropped messagesJohn Ogness
Currently "dropped messages" are separately printed immediately before printing the printk message. Since normal consoles are now using an output buffer that is much larger than previously, the "dropped message" could be prepended to the printk message and then output everything in a single write() call. Introduce a helper function console_prepend_dropped() to prepend an existing message with a "dropped message". This simplifies the code by allowing all message formatting to be handled together and then only requires a single write() call to output the full message. And since this helper does not require any locking, it can be used in the future for other console printing contexts as well. Note that console_prepend_dropped() is defined as a NOP for !CONFIG_PRINTK. Although the function will never be called for !CONFIG_PRINTK, compiling the function can lead to warnings of "always true" conditionals due to the size macro values used in !CONFIG_PRINTK. Signed-off-by: John Ogness <john.ogness@linutronix.de> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230109100800.1085541-7-john.ogness@linutronix.de
2023-01-11printk: introduce printk_get_next_message() and printk_messageJohn Ogness
Code for performing the console output is intermixed with code that is formatting the output for that console. Introduce a new helper function printk_get_next_message() to handle the reading and formatting of the printk text. The helper does not require any locking so that in the future it can be used for other printing contexts as well. This also introduces a new struct printk_message to wrap the struct printk_buffers, adding metadata about its contents. This allows users of printk_get_next_message() to receive all relevant information about the message that was read and formatted. Why is struct printk_message a wrapper struct? It is intentional that a wrapper struct is introduced instead of adding the metadata directly to struct printk_buffers. The upcoming atomic consoles support multiple printing contexts per CPU. This means that while a CPU is formatting a message, it can be interrupted and the interrupting context may also format a (possibly different) message. Since the printk buffers are rather large, there will only be one struct printk_buffers per CPU and it must be shared by the possible contexts of that CPU. If the metadata was part of struct printk_buffers, interrupting contexts would clobber the metadata being prepared by the interrupted context. This could be handled by robustifying the message formatting functions to cope with metadata unexpectedly changing. However, this would require significant amounts of extra data copying, also adding significant complexity to the code. Instead, the metadata can live on the stack of the formatting context and the message formatting functions do not need to be concerned about the metadata changing underneath them. Note that the message formatting functions can handle unexpected text buffer changes. So it is perfectly OK if a shared text buffer is clobbered by an interrupting context. The atomic console implementation will recognize the interruption and avoid printing the (probably garbage) text buffer. Signed-off-by: John Ogness <john.ogness@linutronix.de> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230109100800.1085541-6-john.ogness@linutronix.de
2023-01-11printk: introduce struct printk_buffersJohn Ogness
Introduce a new struct printk_buffers to contain all the buffers needed to read and format a printk message for output. Putting the buffers inside a struct reduces the number of buffer pointers that need to be tracked. Also, it allows usage of the sizeof() macro for the buffer sizes, rather than expecting certain sized buffers being passed in. Note that since the output buffer for normal consoles is now CONSOLE_EXT_LOG_MAX instead of CONSOLE_LOG_MAX, multi-line messages that may have been previously truncated will now be printed in full. This should be considered a feature and not a bug since the CONSOLE_LOG_MAX restriction was about limiting static buffer usage rather than limiting printed text. Signed-off-by: John Ogness <john.ogness@linutronix.de> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230109100800.1085541-5-john.ogness@linutronix.de
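An assumed shape of the new struct, based on the description above (sketch, not a verbatim copy):

    struct printk_buffers {
            char outbuf[CONSOLE_EXT_LOG_MAX];  /* formatted text written to the console     */
            char scratchbuf[LOG_LINE_MAX];     /* raw record text read from the ringbuffer  */
    };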
2023-01-11printk: move size limit macros into internal.hJohn Ogness
The size limit macros are located further down in printk.c and behind ifdef conditionals. This complicates their usage for upcoming changes. Move the macros into internal.h so that they are still invisible outside of printk, but easily accessible for printk. Also, the maximum size of formatted extended messages does not need to be known by any code outside of printk, so move it to internal.h as well. And like CONSOLE_LOG_MAX, for !CONFIG_PRINTK set CONSOLE_EXT_LOG_MAX to 0 to reduce the static memory footprint. Signed-off-by: John Ogness <john.ogness@linutronix.de> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20230109100800.1085541-2-john.ogness@linutronix.de
2023-01-10bpf: btf: limit logging of ignored BTF mismatchesConnor O'Brien
Enabling CONFIG_MODULE_ALLOW_BTF_MISMATCH is an indication that BTF mismatches are expected and module loading should proceed anyway. Logging with pr_warn() on every one of these "benign" mismatches creates unnecessary noise when many such modules are loaded. Instead, handle this case with a single log warning that BTF info may be unavailable. Mismatches also result in calls to __btf_verifier_log() via __btf_verifier_log_type() or btf_verifier_log_member(), adding several additional lines of logging per mismatched module. Add checks to these paths to skip logging for module BTF mismatches in the "allow mismatch" case. All existing logging behavior is preserved in the default CONFIG_MODULE_ALLOW_BTF_MISMATCH=n case. Signed-off-by: Connor O'Brien <connoro@google.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20230107025331.3240536-1-connoro@google.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-01-10cgroup/cpuset: fix a few kernel-doc warnings & coding styleRandy Dunlap
Fix kernel-doc notation warnings: kernel/cgroup/cpuset.c:1309: warning: Excess function parameter 'cpuset' description in 'update_parent_subparts_cpumask' kernel/cgroup/cpuset.c:3909: warning: expecting prototype for cpuset_mem_spread_node(). Prototype was for cpuset_spread_node() instead Also drop a blank line before EXPORT_SYMBOL_GPL() to be consistent with kernel coding style. Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Waiman Long <longman@redhat.com> Cc: Zefan Li <lizefan.x@bytedance.com> Cc: Tejun Heo <tj@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: cgroups@vger.kernel.org Signed-off-by: Tejun Heo <tj@kernel.org>
2023-01-10Merge tag 'xtensa-20230110' of https://github.com/jcmvbkbc/linux-xtensaLinus Torvalds
Pull xtensa fixes from Max Filippov: - fix xtensa allmodconfig build broken by the kcsan test - drop unused members of struct thread_struct * tag 'xtensa-20230110' of https://github.com/jcmvbkbc/linux-xtensa: xtensa: drop unused members of struct thread_struct kcsan: test: don't put the expect array on the stack
2023-01-09bpf: remove the do_idr_lock parameter from bpf_prog_free_id()Paul Moore
It was determined that the do_idr_lock parameter to bpf_prog_free_id() was not necessary as it should always be true. Suggested-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Paul Moore <paul@paul-moore.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/20230106154400.74211-2-paul@paul-moore.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-01-09bpf: restore the ebpf program ID for BPF_AUDIT_UNLOAD and ↵Paul Moore
PERF_BPF_EVENT_PROG_UNLOAD When changing the ebpf program put() routines to support being called from within IRQ context, the program ID was reset to zero prior to calling the perf event and audit UNLOAD record generators, which resulted in problems as the ebpf program ID was bogus (always zero). This patch addresses this problem by removing an unnecessary call to bpf_prog_free_id() in __bpf_prog_offload_destroy() and adjusting __bpf_prog_put() to only call bpf_prog_free_id() after audit and perf have finished their bpf program unload tasks in bpf_prog_put_deferred(). For the record, no one can determine, or remember, why it was necessary to free the program ID, and remove it from the IDR, prior to executing bpf_prog_put_deferred(); regardless, both Stanislav and Alexei agree that the approach in this patch should be safe. It is worth noting that when moving the bpf_prog_free_id() call, the do_idr_lock parameter was forced to true as the ebpf devs determined this was correct, since do_idr_lock should always be true. The do_idr_lock parameter will be removed in a follow-up patch, but it was kept here to keep the patch small in an effort to ease any stable backports. I also modified the bpf_audit_prog() logic used to associate the AUDIT_BPF record with other associated records, e.g. @ctx != NULL. Instead of keying off the operation, it now keys off the execution context, e.g. '!in_irq && !irqs_disabled()', which is much more appropriate and should help better connect the UNLOAD operations with the associated audit state (other audit records). Cc: stable@vger.kernel.org Fixes: d809e134be7a ("bpf: Prepare bpf_prog_put() to be called from irq context.") Reported-by: Burn Alting <burn.alting@iinet.net.au> Reported-by: Jiri Olsa <olsajiri@gmail.com> Suggested-by: Stanislav Fomichev <sdf@google.com> Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> Signed-off-by: Paul Moore <paul@paul-moore.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/20230106154400.74211-1-paul@paul-moore.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-01-09rcu: Allow up to five minutes expedited RCU CPU stall-warning timeoutsPaul E. McKenney
The maximum value of RCU CPU stall-warning timeouts has historically been five minutes (300 seconds). However, the recently introduced expedited RCU CPU stall-warning timeout is instead limited to 21 seconds. This causes problems for CI/fuzzing services such as syzkaller by obscuring the issue in question with expedited RCU CPU stall-warning timeout splats. This commit therefore sets the RCU_EXP_CPU_STALL_TIMEOUT Kconfig options upper bound to 300000 milliseconds, which is 300 seconds (AKA 5 minutes). [ paulmck: Apply feedback from Hillf Danton. ] [ paulmck: Apply feedback from Geert Uytterhoeven. ] Reported-by: Dave Chinner <david@fromorbit.com> Reported-by: Dmitry Vyukov <dvyukov@google.com> Tested-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-01-09sched/core: Use kfree_rcu() in do_set_cpus_allowed()Waiman Long
Commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()") may call kfree() if user_cpus_ptr was previously set. Unfortunately, some of the callers of do_set_cpus_allowed() may have pi_lock held when calling it. So the following splats may be printed especially when running with a PREEMPT_RT kernel: WARNING: possible circular locking dependency detected BUG: sleeping function called from invalid context To avoid these problems, kfree_rcu() is used instead. An internal cpumask_rcuhead union is created for the sole purpose of facilitating the use of kfree_rcu() to free the cpumask. Since user_cpus_ptr is not being used in non-SMP configs, the newly introduced alloc_user_cpus_ptr() helper will return NULL in this case and sched_setaffinity() is modified to handle this special case. Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()") Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20221231041120.440785-3-longman@redhat.com
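A sketch of the approach (the union's member types are assumptions; the real code may differ in detail):

    union cpumask_rcuhead {
            cpumask_t cpumask;
            struct rcu_head rcu;
    };

    /* Called with pi_lock potentially held: defer the free to RCU instead of
     * calling kfree() synchronously, which can take sleeping locks on RT. */
    kfree_rcu((union cpumask_rcuhead *)p->user_cpus_ptr, rcu);
    p->user_cpus_ptr = NULL;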
2023-01-09sched/core: Fix use-after-free bug in dup_user_cpus_ptr()Waiman Long
Since commit 07ec77a1d4e8 ("sched: Allow task CPU affinity to be restricted on asymmetric systems"), the setting and clearing of user_cpus_ptr are done under pi_lock for the arm64 architecture. However, dup_user_cpus_ptr() accesses user_cpus_ptr without any lock protection. Since sched_setaffinity() can be invoked from another process, the process being modified may be undergoing fork() at the same time. When racing with the clearing of user_cpus_ptr in __set_cpus_allowed_ptr_locked(), it can lead to use-after-free and possibly double-free in the arm64 kernel. Commit 8f9ea86fdf99 ("sched: Always preserve the user requested cpumask") fixes this problem as user_cpus_ptr, once set, will never be cleared in a task's lifetime. However, this bug was re-introduced in commit 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()") which allows the clearing of user_cpus_ptr in do_set_cpus_allowed(). This time, it will affect all arches. Fix this bug by always clearing the user_cpus_ptr of the newly cloned/forked task before the copying process starts and checking the user_cpus_ptr state of the source task under pi_lock. Note to stable: this patch won't be applicable to stable releases. Just copy the new dup_user_cpus_ptr() function over. Fixes: 07ec77a1d4e8 ("sched: Allow task CPU affinity to be restricted on asymmetric systems") Fixes: 851a723e45d1 ("sched: Always clear user_cpus_ptr in do_set_cpus_allowed()") Reported-by: David Wang 王标 <wangbiao3@xiaomi.com> Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20221231041120.440785-2-longman@redhat.com
2023-01-07sched/core: Fix arch_scale_freq_tick() on tickless systemsYair Podemsky
In order for the scheduler to be frequency invariant we measure the ratio between the maximum CPU frequency and the actual CPU frequency. During long tickless periods of time the calculations that keep track of that might overflow, in the function scale_freq_tick(): if (check_shl_overflow(acnt, 2*SCHED_CAPACITY_SHIFT, &acnt)) goto error; eventually forcing the kernel to disable the feature for all CPUs, and show the warning message: "Scheduler frequency invariance went wobbly, disabling!". Let's avoid that by limiting the frequency invariant calculations to CPUs with regular tick. Fixes: e2b0d619b400 ("x86, sched: check for counters overflow in frequency invariant accounting") Suggested-by: "Peter Zijlstra (Intel)" <peterz@infradead.org> Signed-off-by: Yair Podemsky <ypodemsk@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Valentin Schneider <vschneid@redhat.com> Acked-by: Giovanni Gherdovich <ggherdovich@suse.cz> Link: https://lore.kernel.org/r/20221130125121.34407-1-ypodemsk@redhat.com
2023-01-07sched/membarrier: Introduce MEMBARRIER_CMD_GET_REGISTRATIONSMichal Clapinski
Provide a method to query previously issued registrations. Signed-off-by: Michal Clapinski <mclapinski@google.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Acked-by: Paul E. McKenney <paulmck@kernel.org> Link: https://lore.kernel.org/r/20221207164338.1535591-2-mclapinski@google.com
2023-01-07cpufreq, sched/util: Optimize operations with single CPU capacity lookupLukasz Luba
The max CPU capacity is the same for all CPUs sharing a frequency domain. There is a way to avoid heavy operations in a loop for each CPU by leveraging this knowledge. Thus, simplify the looping code in sugov_next_freq_shared() and drop heavy multiplications. Instead, use a simple max() to get the highest utilization from these CPUs. This is useful for platforms with many (4 or 6) little CPUs. We avoid heavy 2*PD_CPU_NUM multiplications in that loop, which is called billions of times, since it's not limited by the schedutil time delta filter in sugov_should_update_freq(). When there was no need to change frequency, the code bailed out, not updating the sg_policy::last_freq_update_time. Then every visit after delta_ns time longer than the sg_policy::freq_update_delay_ns goes through and triggers the next frequency calculation code. However, if the next frequency computed there turns out to be the same as the current frequency, we won't update the sg_policy::last_freq_update_time and the story will be repeated (in a very short period, sometimes a few microseconds). The max CPU capacity must be fetched every time we are called, due to difficulties during the policy setup, where we are not able to get the normalized CPU capacity at the right time. The fetched CPU capacity value is then used in sugov_iowait_apply() to calculate the right boost. This required a few changes in the local functions and arguments. The capacity value should hopefully be fetched once when needed and then passed over CPU registers to those functions. Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20221208160256.859-2-lukasz.luba@arm.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: Patrick Bellasi <patrick.bellasi@arm.com> Cc: Vincent Guittot <vincent.guittot@linaro.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: Viresh Kumar <viresh.kumar@linaro.org>
2023-01-07sched/core: Reorganize ttwu_do_wakeup() and ttwu_do_activate()Chengming Zhou
ttwu_do_activate() is used for a complete wakeup, in which we will activate_task() and use ttwu_do_wakeup() to mark the task runnable and perform wakeup-preemption, also call class->task_woken() callback and update the rq->idle_stamp. Since ttwu_runnable() is not a complete wakeup, don't need all those done in ttwu_do_wakeup(), so we can move those to ttwu_do_activate() to simplify ttwu_do_wakeup(), making it only mark the task runnable to be reused in ttwu_runnable() and try_to_wake_up(). This patch should not have any functional changes. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20221223103257.4962-2-zhouchengming@bytedance.com
2023-01-07sched/core: Micro-optimize ttwu_runnable()Chengming Zhou
ttwu_runnable() is used as a fast wakeup path when the wakee task is running on CPU or runnable on RQ, in both cases we can just set its state to TASK_RUNNING to prevent a sleep. If the wakee task is on_cpu running, we don't need to update_rq_clock() or check_preempt_curr(). But if the wakee task is on_rq && !on_cpu (e.g. an IRQ hit before the task got to schedule() and the task been preempted), we should check_preempt_curr() to see if it can preempt the current running. This also removes the class->task_woken() callback from ttwu_runnable(), which wasn't required per the RT/DL implementations: any required push operation would have been queued during class->set_next_task() when p got preempted. ttwu_runnable() also loses the update to rq->idle_stamp, as by definition the rq cannot be idle in this scenario. Suggested-by: Valentin Schneider <vschneid@redhat.com> Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Valentin Schneider <vschneid@redhat.com> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Link: https://lore.kernel.org/r/20221223103257.4962-1-zhouchengming@bytedance.com
2023-01-06workqueue: Make show_pwq() use run-length encodingPaul E. McKenney
The show_pwq() function dumps out a pool_workqueue structure's activity, including the pending work-queue handlers: Showing busy workqueues and worker pools: workqueue events: flags=0x0 pwq 0: cpus=0 node=0 flags=0x1 nice=0 active=10/256 refcnt=11 in-flight: 7:test_work_func, 64:test_work_func, 249:test_work_func pending: test_work_func, test_work_func, test_work_func1, test_work_func1, test_work_func1, test_work_func1, test_work_func1 When large systems are facing certain types of hang conditions, it is not unusual for this "pending" list to contain runs of hundreds of identical function names. This "wall of text" is difficult to read, and worse yet, it can be interleaved with other output such as stack traces. Therefore, make show_pwq() use run-length encoding so that the above printout instead looks like this: Showing busy workqueues and worker pools: workqueue events: flags=0x0 pwq 0: cpus=0 node=0 flags=0x1 nice=0 active=10/256 refcnt=11 in-flight: 7:test_work_func, 64:test_work_func, 249:test_work_func pending: 2*test_work_func, 5*test_work_func1 When no comma would be printed, including the WORK_STRUCT_LINKED case, a new run is started unconditionally. This output is more readable, places less stress on the hardware, firmware, and software on the console-log path, and reduces interference with other output. Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Dave Jones <davej@codemonkey.org.uk> Cc: Rik van Riel <riel@surriel.com> Signed-off-by: Tejun Heo <tj@kernel.org>
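A stand-alone sketch of the run-length encoding idea (user-space demo, not the kernel's show_pwq() code): consecutive identical names collapse into "N*name", while singletons print unchanged.

    #include <stdio.h>
    #include <string.h>

    static void print_pending_rle(const char *names[], int n)
    {
            for (int i = 0; i < n; ) {
                    int run = 1;

                    while (i + run < n && !strcmp(names[i], names[i + run]))
                            run++;
                    if (run > 1)
                            printf("%s%d*%s", i ? ", " : "pending: ", run, names[i]);
                    else
                            printf("%s%s", i ? ", " : "pending: ", names[i]);
                    i += run;
            }
            printf("\n");
    }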
2023-01-06bpf: Skip task with pid=1 in send_signal_common()Hao Sun
The following kernel panic can be triggered when a task with pid=1 attaches a prog that attempts to send killing signal to itself, also see [1] for more details: Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b CPU: 3 PID: 1 Comm: systemd Not tainted 6.1.0-09652-g59fe41b5255f #148 Call Trace: <TASK> __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0x100/0x178 lib/dump_stack.c:106 panic+0x2c4/0x60f kernel/panic.c:275 do_exit.cold+0x63/0xe4 kernel/exit.c:789 do_group_exit+0xd4/0x2a0 kernel/exit.c:950 get_signal+0x2460/0x2600 kernel/signal.c:2858 arch_do_signal_or_restart+0x78/0x5d0 arch/x86/kernel/signal.c:306 exit_to_user_mode_loop kernel/entry/common.c:168 [inline] exit_to_user_mode_prepare+0x15f/0x250 kernel/entry/common.c:203 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline] syscall_exit_to_user_mode+0x1d/0x50 kernel/entry/common.c:296 do_syscall_64+0x44/0xb0 arch/x86/entry/common.c:86 entry_SYSCALL_64_after_hwframe+0x63/0xcd So skip task with pid=1 in bpf_send_signal_common() to avoid the panic. [1] https://lore.kernel.org/bpf/20221222043507.33037-1-sunhao.th@gmail.com Signed-off-by: Hao Sun <sunhao.th@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20230106084838.12690-1-sunhao.th@gmail.com
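A sketch of the guard described above (simplified; only the new check is shown, the rest of the function is elided):

    static int bpf_send_signal_common(u32 sig /* , ... */)
    {
            /* Never let a BPF program deliver a fatal signal to init (pid 1). */
            if (unlikely(is_global_init(current)))
                    return -EPERM;

            /* ... existing validity checks and signal delivery ... */
            return 0;
    }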
2023-01-06bpf: Skip invalid kfunc call in backtrack_insnHao Sun
The verifier skips invalid kfunc call in check_kfunc_call(), which would be captured in fixup_kfunc_call() if such insn is not eliminated by dead code elimination. However, this can lead to the following warning in backtrack_insn(), also see [1]: ------------[ cut here ]------------ verifier backtracking bug WARNING: CPU: 6 PID: 8646 at kernel/bpf/verifier.c:2756 backtrack_insn kernel/bpf/verifier.c:2756 __mark_chain_precision kernel/bpf/verifier.c:3065 mark_chain_precision kernel/bpf/verifier.c:3165 adjust_reg_min_max_vals kernel/bpf/verifier.c:10715 check_alu_op kernel/bpf/verifier.c:10928 do_check kernel/bpf/verifier.c:13821 [inline] do_check_common kernel/bpf/verifier.c:16289 [...] So make backtracking conservative with this by returning ENOTSUPP. [1] https://lore.kernel.org/bpf/CACkBjsaXNceR8ZjkLG=dT3P=4A8SBsg0Z5h5PWLryF5=ghKq=g@mail.gmail.com/ Reported-by: syzbot+4da3ff23081bafe74fc2@syzkaller.appspotmail.com Signed-off-by: Hao Sun <sunhao.th@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20230104014709.9375-1-sunhao.th@gmail.com
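A sketch of the conservative bail-out described above (condensed; surrounding verifier context omitted):

    /* In backtrack_insn(): an invalid kfunc call (imm == 0) survived dead
     * code elimination; fixup_kfunc_call() will reject it later, so just
     * give up on precision backtracking instead of hitting the WARN. */
    if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL && insn->imm == 0)
            return -ENOTSUPP;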