path: root/arch/powerpc/include/asm/paravirt.h
2023-12-15  powerpc/sched: Cleanup vcpu_is_preempted()  (Aneesh Kumar K.V)
No functional change in this patch. A helper is added to determine whether a vCPU is dispatched by the hypervisor; use that instead of open-coding the check. Also clarify some of the comments.

Signed-off-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20231114071219.198222-1-aneesh.kumar@linux.ibm.com
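A minimal sketch of the kind of helper this commit describes (the name and exact shape are assumptions, not copied from the patch; yield_count_of() reads the lppaca yield count, see the 2020-07-26 entry below). Per PAPR, the yield count is even while the virtual processor is dispatched on a physical processor and odd once it has yielded or been preempted:

/* Sketch only: helper name assumed from the commit description. */
static inline bool vcpu_is_dispatched(int vcpu)
{
	/*
	 * Yield count even: the hypervisor currently has this vCPU
	 * running on a physical processor. Odd: it has ceded,
	 * conferred, or been preempted.
	 */
	return !(yield_count_of(vcpu) & 1);
}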
2023-10-19  powerpc/paravirt: Improve vcpu_is_preempted  (Srikar Dronamraju)
The PowerVM hypervisor dispatches on a whole-core basis. In a shared LPAR, a CPU from a core that is CEDED or preempted may see higher latency, and in such a scenario it's preferable to choose a different CPU to run on. If one of the CPUs in the core is active, i.e. neither CEDED nor preempted, then consider this CPU as not preempted. Conversely, if any of the CPUs in the core has yielded but the OS has not requested CEDE or CONFER, then consider this CPU to be preempted. Correct detection of preempted CPUs is important for detecting idle CPUs/cores in the task scheduler.

Tested-by: Aboorva Devarajan <aboorvad@linux.vnet.ibm.com>
Reviewed-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20231019091452.95260-1-srikar@linux.vnet.ibm.com
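A sketch of the core-wide check this commit describes, assuming the vcpu_is_dispatched() helper above, an is_vcpu_idle() accessor for the lppaca's OS-idle hint, and the usual cpu_first_thread_sibling()/threads_per_core topology helpers. Illustrative only, not the verbatim patch:

/* Sketch: infer this CPU's preemption state from its core siblings. */
static inline bool core_vcpu_is_preempted(int cpu)	/* name assumed */
{
	int first_cpu = cpu_first_thread_sibling(cpu);
	int i;

	for (i = first_cpu; i < first_cpu + threads_per_core; i++) {
		if (i == cpu)
			continue;
		/*
		 * PowerVM dispatches whole cores, so an actively
		 * dispatched sibling means this CPU is not preempted.
		 */
		if (vcpu_is_dispatched(i))
			return false;
		/*
		 * A sibling has yielded without the OS requesting CEDE
		 * or CONFER: treat the core, and this CPU, as preempted.
		 */
		if (!is_vcpu_idle(i))
			return true;
	}
	return true;
}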
2023-08-24  powerpc: Don't include lppaca.h in paca.h  (Michael Ellerman)
By adding a forward declaration for struct lppaca we can untangle paca.h and lppaca.h. Also move get_lppaca() into lppaca.h for consistency. Add includes of lppaca.h to some files that need it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230823055317.751786-3-mpe@ellerman.id.au
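The untangling pattern in play, sketched in simplified form (paca_struct details elided):

/* asm/paca.h: a forward declaration suffices for pointer members. */
struct lppaca;

struct paca_struct {
	struct lppaca *lppaca_ptr;	/* full type not needed here */
	/* ... */
};

/* asm/lppaca.h: the accessor lives next to the type it returns. */
static inline struct lppaca *get_lppaca(void)
{
	return local_paca->lppaca_ptr;
}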
2022-09-05  powerpc/pseries: Implement CONFIG_PARAVIRT_TIME_ACCOUNTING  (Nicholas Piggin)
CONFIG_VIRT_CPU_ACCOUNTING_GEN under pseries does not provide stolen time accounting unless CONFIG_PARAVIRT_TIME_ACCOUNTING is enabled. Implement this using the VPA accumulated wait counters.

Note this will not work on current KVM hosts because KVM does not implement the VPA dispatch counters (yet). It could be implemented with the dispatch trace log, as it is for VIRT_CPU_ACCOUNTING_NATIVE, but that is not necessary for the more limited accounting provided by PARAVIRT_TIME_ACCOUNTING, and it is more expensive and complex and has downsides like potential log wrap.

From Shrikanth:

[...] it was tested on a Power10 PowerVM Shared LPAR. The system has two LPARs, called LPAR1 and LPAR2 below. The test was carried out with SMT=1; a similar observation was seen with SMT=8 as well. The LPAR config header from each LPAR is below. LPAR1 is twice as big as LPAR2. Since both share the same underlying hardware, work stealing happens when both LPARs contend for the same resource.

LPAR1: type=Shared mode=Uncapped smt=Off lcpu=40 cpus=40 ent=20.00
LPAR2: type=Shared mode=Uncapped smt=Off lcpu=20 cpus=40 ent=10.00

mpstat was used to check the utilization and stress-ng as the workload. A few cases were tested. When both LPARs are idle there is no steal time. When LPAR1 starts running at 100%, consuming all of the physical resource, steal time starts to get accounted. With LPAR1 running at 100% and LPAR2 starting to run, steal time keeps increasing. This is as expected. When the LPAR2 load is increased further, steal time increases further.

Case 1: 0% LPAR1; 0% LPAR2
%usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
0.00 0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00 99.95

Case 2: 100% LPAR1; 0% LPAR2
%usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
97.68 0.00 0.00 0.00 0.00 0.00 2.32 0.00 0.00 0.00

Case 3: 100% LPAR1; 50% LPAR2
%usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
86.34 0.00 0.10 0.00 0.00 0.03 13.54 0.00 0.00 0.00

Case 4: 100% LPAR1; 100% LPAR2
%usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
78.54 0.00 0.07 0.00 0.00 0.02 21.36 0.00 0.00 0.00

Case 5: 50% LPAR1; 100% LPAR2
%usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
49.37 0.00 0.00 0.00 0.00 0.00 1.17 0.00 0.00 49.47

The patch accounts for the steal time, and the basic tests hold good.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Shrikanth Hegde <sshegde@linux.ibm.com>
[mpe: Add SPDX tag to new paravirt_api_clock.h]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220902085316.2071519-3-npiggin@gmail.com
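A sketch of how the steal clock can be wired to the VPA accumulated wait counters the commit describes. The lppaca field names here are assumptions based on the VPA layout, not confirmed by this log:

/* asm/paravirt.h: hook for the generic paravirt steal-time code. */
static inline u64 paravirt_steal_clock(int cpu)
{
	return pseries_paravirt_steal_clock(cpu);
}

/*
 * pseries (sketch): stolen time is the accumulated time the vCPU
 * spent preempted or enqueued waiting to be dispatched.
 */
u64 pseries_paravirt_steal_clock(int cpu)
{
	struct lppaca *lppaca = &lppaca_of(cpu);	/* field names assumed */

	return be64_to_cpu(READ_ONCE(lppaca->enqueue_dispatch_tb)) +
	       be64_to_cpu(READ_ONCE(lppaca->ready_enqueue_tb));
}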
2021-10-09  powerpc/paravirt: correct preempt debug splat in vcpu_is_preempted()  (Nathan Lynch)
vcpu_is_preempted() can be used outside of preempt-disabled critical sections, yielding warnings such as:

BUG: using smp_processor_id() in preemptible [00000000] code: systemd-udevd/185
caller is rwsem_spin_on_owner+0x1cc/0x2d0
CPU: 1 PID: 185 Comm: systemd-udevd Not tainted 5.15.0-rc2+ #33
Call Trace:
[c000000012907ac0] [c000000000aa30a8] dump_stack_lvl+0xac/0x108 (unreliable)
[c000000012907b00] [c000000001371f70] check_preemption_disabled+0x150/0x160
[c000000012907b90] [c0000000001e0e8c] rwsem_spin_on_owner+0x1cc/0x2d0
[c000000012907be0] [c0000000001e1408] rwsem_down_write_slowpath+0x478/0x9a0
[c000000012907ca0] [c000000000576cf4] filename_create+0x94/0x1e0
[c000000012907d10] [c00000000057ac08] do_symlinkat+0x68/0x1a0
[c000000012907d70] [c00000000057ae18] sys_symlink+0x58/0x70
[c000000012907da0] [c00000000002e448] system_call_exception+0x198/0x3c0
[c000000012907e10] [c00000000000c54c] system_call_common+0xec/0x250

The result of vcpu_is_preempted() is always used speculatively, and the function does not access per-cpu resources in a (Linux) preempt-unsafe way. Use raw_smp_processor_id() to avoid such warnings, adding explanatory comments.

Fixes: ca3f969dcb11 ("powerpc/paravirt: Use is_kvm_guest() in vcpu_is_preempted()")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210928214147.312412-3-nathanl@linux.ibm.com
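The essence of the change, as a fragment (context simplified): the CPU number is sampled with the raw accessor, with a comment explaining why that is safe here:

	/*
	 * The result is only used speculatively, and no per-cpu data
	 * is accessed in a preempt-unsafe way, so a stale CPU number
	 * merely costs a little accuracy. raw_smp_processor_id()
	 * avoids the preempt-debug warning that smp_processor_id()
	 * triggers in preemptible context.
	 */
	first_cpu = cpu_first_thread_sibling(raw_smp_processor_id());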
2021-10-09  powerpc/paravirt: vcpu_is_preempted() commentary  (Nathan Lynch)
Add comments more clearly documenting that this function determines whether hypervisor-level preemption of the VM has occurred.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210928214147.312412-2-nathanl@linux.ibm.com
2021-05-12  powerpc/pseries: Fix hcall tracing recursion in pv queued spinlocks  (Nicholas Piggin)
The paravirt queued spinlock slow path adds itself to the queue, then calls pv_wait to wait for the lock to become free. This is implemented by calling H_CONFER to donate cycles.

When hcall tracing is enabled, this H_CONFER call can lead to a spin lock being taken in the tracing code, which results in the lock being taken again. This also goes to the slow path because it queues behind itself, and so it can never make progress.

An example trace of a deadlock:

__pv_queued_spin_lock_slowpath
trace_clock_global
ring_buffer_lock_reserve
trace_event_buffer_lock_reserve
trace_event_buffer_reserve
trace_event_raw_event_hcall_exit
__trace_hcall_exit
plpar_hcall_norets_trace
__pv_queued_spin_lock_slowpath
trace_clock_global
ring_buffer_lock_reserve
trace_event_buffer_lock_reserve
trace_event_buffer_reserve
trace_event_raw_event_rcu_dyntick
rcu_irq_exit
irq_exit
__do_irq
call_do_irq
do_IRQ
hardware_interrupt_common_virt

Fix this by introducing plpar_hcall_norets_notrace(), and using that for the SPLPAR virtual processor dispatching hcalls made by the paravirt spinlock code.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210508101455.1578318-2-npiggin@gmail.com
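A sketch of the shape of the fix: the dispatch hcalls issued from the pv spinlock slow path go through an entry point that fires no tracepoints, so tracing can never re-enter the lock being traced:

/*
 * Like plpar_hcall_norets(), minus the hcall entry/exit tracepoints
 * (interface sketch only).
 */
long plpar_hcall_norets_notrace(unsigned long opcode, ...);

static inline void yield_to_preempted(int cpu, u32 yield_count)
{
	/* No tracepoint fires here, so no recursion from the slow path. */
	plpar_hcall_norets_notrace(H_CONFER,
				   get_hard_smp_processor_id(cpu),
				   yield_count);
}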
2021-01-31  powerpc: Fix build error in paravirt.h  (Michal Suchanek)
./arch/powerpc/include/asm/paravirt.h:83:44: error: implicit declaration of function 'smp_processor_id'; did you mean 'raw_smp_processor_id'?

smp_processor_id() is defined in linux/smp.h, but paravirt.h does not include it. The build error only surfaces when the patch is applied to a 5.3 kernel; in mainline it compiles only by chance.

Fixes: ca3f969dcb11 ("powerpc/paravirt: Use is_kvm_guest() in vcpu_is_preempted()")
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210120132838.15589-1-msuchanek@suse.de
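The fix is the obvious one-liner; per the message above, the missing declaration lives in linux/smp.h:

#include <linux/smp.h>	/* smp_processor_id() / raw_smp_processor_id() */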
2020-12-04  powerpc/paravirt: Use is_kvm_guest() in vcpu_is_preempted()  (Srikar Dronamraju)
If it's a shared LPAR but not a KVM guest, then check whether the vCPU shares a core with the calling vCPU. On PowerVM, only whole cores can be preempted, so if one vCPU is in a non-preempted state, we can infer that all other vCPUs sharing the same core are in a non-preempted state as well.

Performance results:

$ perf stat -r 5 -a perf bench sched pipe -l 10000000 (lesser time is better)

powerpc/next

     35,107,951.20 msec cpu-clock               #  255.898 CPUs utilized            ( +-  0.31% )
        23,655,348      context-switches        #    0.674 K/sec                    ( +-  3.72% )
            14,465      cpu-migrations          #    0.000 K/sec                    ( +-  5.37% )
            82,463      page-faults             #    0.002 K/sec                    ( +-  8.40% )
 1,127,182,328,206      cycles                  #    0.032 GHz                      ( +-  1.60% )  (66.67%)
    78,587,300,622      stalled-cycles-frontend #    6.97% frontend cycles idle     ( +-  0.08% )  (50.01%)
   654,124,218,432      stalled-cycles-backend  #   58.03% backend cycles idle      ( +-  1.74% )  (50.01%)
   834,013,059,242      instructions            #    0.74  insn per cycle
                                                #    0.78  stalled cycles per insn  ( +-  0.73% )  (66.67%)
   132,911,454,387      branches                #    3.786 M/sec                    ( +-  0.59% )  (50.00%)
     2,890,882,143      branch-misses           #    2.18% of all branches          ( +-  0.46% )  (50.00%)

           137.195 +- 0.419 seconds time elapsed  ( +-  0.31% )

powerpc/next + patchset

     29,981,702.64 msec cpu-clock               #  255.881 CPUs utilized            ( +-  1.30% )
        40,162,456      context-switches        #    0.001 M/sec                    ( +-  0.01% )
             1,110      cpu-migrations          #    0.000 K/sec                    ( +-  5.20% )
            62,616      page-faults             #    0.002 K/sec                    ( +-  3.93% )
 1,430,030,626,037      cycles                  #    0.048 GHz                      ( +-  1.41% )  (66.67%)
    83,202,707,288      stalled-cycles-frontend #    5.82% frontend cycles idle     ( +-  0.75% )  (50.01%)
   744,556,088,520      stalled-cycles-backend  #   52.07% backend cycles idle      ( +-  1.39% )  (50.01%)
   940,138,418,674      instructions            #    0.66  insn per cycle
                                                #    0.79  stalled cycles per insn  ( +-  0.51% )  (66.67%)
   146,452,852,283      branches                #    4.885 M/sec                    ( +-  0.80% )  (50.00%)
     3,237,743,996      branch-misses           #    2.21% of all branches          ( +-  1.18% )  (50.01%)

            117.17 +- 1.52 seconds time elapsed  ( +-  1.30% )

This is around a 14.6% improvement in performance.

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Waiman Long <longman@redhat.com>
[mpe: Fold in performance results from cover letter]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201202050456.164005-5-srikar@linux.vnet.ibm.com
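A simplified sketch of the reasoning this commit adds (control flow assumed, not the verbatim patch): on a shared LPAR that is not a KVM guest, a target CPU on the same core as the caller cannot be preempted, because PowerVM preempts whole cores:

static inline bool vcpu_is_preempted(int cpu)	/* simplified sketch */
{
	/* Dedicated processor mode: hypervisor preemption cannot occur. */
	if (!is_shared_processor())
		return false;

	if (!is_kvm_guest()) {
		int first_cpu = cpu_first_thread_sibling(smp_processor_id());

		/*
		 * The caller is certainly running, and PowerVM preempts
		 * whole cores, so every vCPU on the caller's core is
		 * dispatched too, including the target.
		 */
		if (cpu_first_thread_sibling(cpu) == first_cpu)
			return false;
	}

	/* Fall back to the yield count: odd means yielded or preempted. */
	return !!(yield_count_of(cpu) & 1);
}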
2020-07-27  powerpc/pseries: Implement paravirt qspinlocks for SPLPAR  (Nicholas Piggin)
This implements the generic paravirt qspinlocks using H_PROD and H_CONFER to kick and wait. It uses an un-directed yield to any CPU, rather than the directed yield to a pre-empted lock holder that paravirtualised simple spinlocks use and which requires no kick hcall. This is something that could be investigated and improved in the future.

Performance results can be found in the commit which added queued spinlocks.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131423.1362108-5-npiggin@gmail.com
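A sketch of how the PAPR primitives can back the generic pv qspinlock hooks this commit describes (hook shapes assumed from the generic pv qspinlock interface, not copied from the patch):

static inline void yield_to_any(void)
{
	/* Un-directed yield: donate our cycles to any runnable vCPU. */
	plpar_hcall_norets(H_CONFER, -1, 0);
}

static inline void prod_cpu(int cpu)
{
	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
}

/* pv_wait: sleep via H_CONFER while the lock word still holds val. */
static __always_inline void pv_wait(u8 *ptr, u8 val)
{
	if (*ptr != val)
		return;
	yield_to_any();
}

/* pv_kick: H_PROD wakes the target vCPU out of its confer. */
static __always_inline void pv_kick(int cpu)
{
	prod_cpu(cpu);
}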
2020-07-26  powerpc/pseries: Move some PAPR paravirt functions to their own file  (Nicholas Piggin)
These functions will be used by the queued spinlock implementation, and may be useful elsewhere too, so move them out of spinlock.h.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Waiman Long <longman@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200724131423.1362108-2-npiggin@gmail.com
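The flavour of what moved, sketched from the surrounding commits (simplified): small static inlines over the lppaca and the shared-processor static key that both the spinlock and qspinlock code want:

DECLARE_STATIC_KEY_FALSE(shared_processor);

static inline bool is_shared_processor(void)
{
	return static_branch_unlikely(&shared_processor);
}

/*
 * Yield count low bit set: the vCPU has ceded, conferred, or been
 * preempted by the hypervisor.
 */
static inline u32 yield_count_of(int cpu)
{
	__be32 yield_count = READ_ONCE(lppaca_of(cpu).yield_count);
	return be32_to_cpu(yield_count);
}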