2022-09-26  KVM: nVMX: Refactor unsupported eVMCS controls logic to use 2-d array  (Vitaly Kuznetsov)
Refactor the handling of unsupported eVMCS to use a 2-d array to store the set of unsupported controls.

KVM's handling of eVMCS is completely broken as there is no way for userspace to query which features are unsupported, nor does KVM prevent userspace from attempting to enable unsupported features. A future commit will remedy that by filtering and enforcing unsupported features when eVMCS is enabled, but that needs to be opt-in from userspace to avoid breakage, i.e. KVM needs to maintain its legacy behavior by snapshotting the exact set of controls that are currently (un)supported by eVMCS.

No functional change intended.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
[sean: split to standalone patch, write changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-8-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
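In sketch form, the idea is a table indexed by control family and eVMCS revision; the identifiers below are illustrative assumptions, not necessarily the exact upstream names:

	/* Sketch only: snapshot of controls each eVMCS revision does NOT support. */
	enum evmcs_ctrl_type {
		EVMCS_EXIT_CTRLS,
		EVMCS_ENTRY_CTRLS,
		EVMCS_EXEC_CTRLS,
		NR_EVMCS_CTRLS,
	};

	enum evmcs_revision {
		EVMCSv1_LEGACY,
		NR_EVMCS_REVISIONS,
	};

	static const u32 evmcs_unsupported_ctrls[NR_EVMCS_CTRLS][NR_EVMCS_REVISIONS] = {
		[EVMCS_EXIT_CTRLS]  = { [EVMCSv1_LEGACY] = EVMCS1_UNSUPPORTED_VMEXIT_CTRL },
		[EVMCS_ENTRY_CTRLS] = { [EVMCSv1_LEGACY] = EVMCS1_UNSUPPORTED_VMENTRY_CTRL },
		[EVMCS_EXEC_CTRLS]  = { [EVMCSv1_LEGACY] = EVMCS1_UNSUPPORTED_2NDEXEC },
	};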
2022-09-26  KVM: nVMX: Treat eVMCS as enabled for guest iff Hyper-V is also enabled  (Sean Christopherson)
When querying whether or not eVMCS is enabled on behalf of the guest, treat eVMCS as enabled if and only if Hyper-V is enabled/exposed to the guest.

Note, flows that come from the host, e.g. KVM_SET_NESTED_STATE, must NOT check for Hyper-V being enabled as KVM doesn't require guest CPUID to be set before most ioctls().

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-7-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
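A sketch of the guest-facing check this describes; the helper and field names are assumptions drawn from the changelog:

	/* Sketch: eVMCS counts as enabled for the guest only if Hyper-V is too. */
	static bool guest_cpuid_has_evmcs(struct kvm_vcpu *vcpu)
	{
		return vcpu->arch.hyperv_enabled &&
		       to_vmx(vcpu)->nested.enlightened_vmcs_enabled;
	}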
2022-09-26  KVM: x86: Report error when setting CPUID if Hyper-V allocation fails  (Sean Christopherson)
Return -ENOMEM back to userspace if allocating the Hyper-V vCPU struct fails when enabling Hyper-V in guest CPUID. Silently ignoring failure means that KVM will not have an up-to-date CPUID cache if allocating the struct succeeds later on, e.g. when activating SynIC.

Rejecting the CPUID operation also guarantees that vcpu->arch.hyperv is non-NULL if hyperv_enabled is true, which will allow for additional cleanup, e.g. in the eVMCS code.

Note, the initialization needs to be done before CPUID is set, and more subtly before kvm_check_cpuid(), which potentially enables dynamic XFEATURES. Sadly, there's no easy way to avoid exposing Hyper-V details to CPUID or vice versa. Expose kvm_hv_vcpu_init() and the Hyper-V CPUID signature to CPUID instead of exposing cpuid_entry2_find() outside of CPUID code. It's hard to envision kvm_hv_vcpu_init() being misused, whereas cpuid_entry2_find() absolutely shouldn't be used outside of core CPUID code.

Fixes: 10d7bf1e46dc ("KVM: x86: hyper-v: Cache guest CPUID leaves determining features availability")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-6-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
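Roughly, in hedged sketch form (the CPUID-scanning helper name is an assumption):

	/* Sketch: fail the CPUID update instead of silently ignoring -ENOMEM. */
	if (kvm_cpuid_has_hyperv(entries, nent)) {
		r = kvm_hv_vcpu_init(vcpu);
		if (r)
			return r;	/* propagate -ENOMEM to userspace */
	}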
2022-09-26  KVM: x86: Check for existing Hyper-V vCPU in kvm_hv_vcpu_init()  (Sean Christopherson)
When potentially allocating/initializing the Hyper-V vCPU struct, check for an existing instance in kvm_hv_vcpu_init() instead of requiring callers to perform the check. Relying on callers to do the check is risky as it's all too easy for KVM to overwrite vcpu->arch.hyperv and leak memory, and it adds additional burden on callers without much benefit.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Wei Liu <wei.liu@kernel.org>
Link: https://lore.kernel.org/r/20220830133737.1539624-5-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
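As a trimmed sketch (the real function also performs further per-vCPU init, elided here):

	static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
	{
		struct kvm_vcpu_hv *hv_vcpu;

		if (vcpu->arch.hyperv)		/* already initialized, nothing to do */
			return 0;

		hv_vcpu = kzalloc(sizeof(*hv_vcpu), GFP_KERNEL_ACCOUNT);
		if (!hv_vcpu)
			return -ENOMEM;

		vcpu->arch.hyperv = hv_vcpu;
		return 0;
	}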
2022-09-26  KVM: x86: Zero out entire Hyper-V CPUID cache before processing entries  (Vitaly Kuznetsov)
Wipe the whole 'hv_vcpu->cpuid_cache' with memset() instead of having to zero each particular member when the corresponding CPUID entry was not found.

No functional change intended.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
[sean: split to separate patch]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Wei Liu <wei.liu@kernel.org>
Link: https://lore.kernel.org/r/20220830133737.1539624-4-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  x86/hyperv: Update 'struct hv_enlightened_vmcs' definition  (Vitaly Kuznetsov)
The updated Hyper-V Enlightened VMCS specification lists several new fields for the following features:

- PerfGlobalCtrl
- EnclsExitingBitmap
- Tsc Scaling
- GuestLbrCtl
- CET
- SSP

Update the definition.

Note, the updated spec also provides an additional CPUID feature flag, CPUID.0x4000000A.EBX BIT(0), for PerfGlobalCtrl to workaround a Windows 11 quirk. Despite what the TLFS says:

  Indicates support for the GuestPerfGlobalCtrl and HostPerfGlobalCtrl fields in the enlightened VMCS.

guests can safely use the fields if they are enumerated in the architectural VMX MSRs. I.e. KVM-on-HyperV doesn't need to check the CPUID bit, but KVM-as-HyperV must ensure the bit is set if PerfGlobalCtrl fields are exposed to L1.

https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/tlfs/tlfs

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
[sean: tweak CPUID name to make it PerfGlobalCtrl only]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Acked-by: Wei Liu <wei.liu@kernel.org>
Link: https://lore.kernel.org/r/20220830133737.1539624-3-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  x86/hyperv: Fix 'struct hv_enlightened_vmcs' definition  (Vitaly Kuznetsov)
Section 1.9 of TLFS v6.0b says: "All structures are padded in such a way that fields are aligned naturally (that is, an 8-byte field is aligned to an offset of 8 bytes and so on)". 'struct enlightened_vmcs' has a glitch:

	...
	struct {
		u32 nested_flush_hypercall:1; /*   836: 0  4 */
		u32 msr_bitmap:1;             /*   836: 1  4 */
		u32 reserved:30;              /*   836: 2  4 */
	} hv_enlightenments_control;          /*   836     4 */
	u32 hv_vp_id;                         /*   840     4 */
	u64 hv_vm_id;                         /*   844     8 */
	u64 partition_assist_page;            /*   852     8 */
	...

And the observed values in 'partition_assist_page' make no sense at all. Fix the layout by padding the structure properly.

Fixes: 68d1eb72ee99 ("x86/hyper-v: define struct hv_enlightened_vmcs and clean field bits")
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-2-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
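Because the structure is declared packed (the compiler inserts no padding on its own), alignment has to be restored with an explicit pad field. A sketch of the idea, with the pad field's name being an assumption:

	struct {
		u32 nested_flush_hypercall:1;
		u32 msr_bitmap:1;
		u32 reserved:30;
	} hv_enlightenments_control;	/* 836 */
	u32 hv_vp_id;			/* 840 */
	u32 padding32_2;		/* 844: realigns the next field */
	u64 hv_vm_id;			/* 848: now naturally aligned */
	u64 partition_assist_page;	/* 856 */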
2022-09-26  KVM: selftests: Require DISABLE_NX_HUGE_PAGES cap for NX hugepage test  (Oliver Upton)
Require KVM_CAP_VM_DISABLE_NX_HUGE_PAGES for the entire NX hugepage test instead of skipping the "disable" subtest if the capability isn't supported by the host kernel. While the "enable" subtest does provide value when the capability isn't supported, silently providing only half the promised coverage is undesirable, i.e. it's better to skip the test so that the user knows something is amiss.

Alternatively, the test could print something to alert the user instead of silently skipping the subtest, but that would encourage other tests to follow suit, and it's not clear that it's desirable to take selftests in that direction. And if selftests do head down the path of skipping subtests, such behavior needs first-class support in the framework.

Opportunistically convert other test preconditions to TEST_REQUIRE().

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: David Matlack <dmatlack@google.com>
Link: https://lore.kernel.org/r/20220812175301.3915004-1-oliver.upton@linux.dev
[sean: rewrote changelog to capture discussion about skipping the test]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: VMX: Do not declare vmread_error() asmlinkage  (Uros Bizjak)
There is no need to declare vmread_error() asmlinkage; its arguments can be passed via registers for both 32-bit and 64-bit targets. Function argument registers are considered call-clobbered registers; they are saved in the trampoline just before the function call and restored afterwards.

Dropping "asmlinkage" unifies trampoline function argument handling between 32-bit and 64-bit targets and improves generated code for 32-bit targets.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Link: https://lore.kernel.org/r/20220817144045.3206-1-ubizjak@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: x86: Clean up ModR/M "reg" initialization in reg op decoding  (Liam Ni)
Refactor decode_register_operand() to get the ModR/M register if and only if the instruction uses a ModR/M encoding to make it more obvious how the register operand is retrieved.

Signed-off-by: Liam Ni <zhiguangni01@gmail.com>
Link: https://lore.kernel.org/r/20220908141210.1375828-1-zhiguangni01@zhaoxin.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: x86: Print guest pgd in kvm_nested_vmenter()  (Mingwei Zhang)
Print the guest pgd in kvm_nested_vmenter() to enrich the information for tracing. When TDP is enabled, print the value of the TDP page table (EPT/NPT); when TDP is disabled, print the value of the non-nested CR3.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Link: https://lore.kernel.org/r/20220825225755.907001-4-mizhang@google.com
[sean: print nested_cr3 vs. nested_eptp vs. guest_cr3]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: nVMX: Add tracepoint for nested VM-Enter  (David Matlack)
Call trace_kvm_nested_vmenter() during nested VMLAUNCH/VMRESUME to bring parity with nSVM's usage of the tracepoint during nested VMRUN. Attempt to use VMCS fields analogous to the VMCB fields that are reported in the SVM case:

"int_ctl": 32-bit field of the VMCB that the CPU uses to deliver virtual interrupts. The analogous VMCS field is the 16-bit "guest interrupt status".

"event_inj": 32-bit field of the VMCB that is used to inject events (exceptions and interrupts) into the guest. The analogous VMCS field is the "VM-entry interruption-information field".

"npt_enabled": 1 when the vCPU has enabled nested paging. The analogous VMCS field is the enable-EPT execution control.

"npt_addr": 64-bit field when the vCPU has enabled nested paging. The analogous VMCS field is the ept_pointer.

Signed-off-by: David Matlack <dmatlack@google.com>
[move the code into the nested_vmx_enter_non_root_mode().]
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Link: https://lore.kernel.org/r/20220825225755.907001-3-mizhang@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
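A hedged sketch of what the nVMX call site plausibly looks like, mapping those VMCS fields onto the SVM-named tracepoint arguments (argument order and exact names are assumptions):

	trace_kvm_nested_vmenter(kvm_rip_read(vcpu),
				 vmx->nested.current_vmptr,
				 vmcs12->guest_rip,
				 vmcs12->guest_intr_status,		/* ~int_ctl */
				 vmcs12->vm_entry_intr_info_field,	/* ~event_inj */
				 nested_cpu_has_ept(vmcs12),		/* ~npt_enabled */
				 vmcs12->ept_pointer,			/* ~npt_addr */
				 vmcs12->guest_cr3,
				 KVM_ISA_VMX);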
2022-09-26  KVM: x86: Update trace function for nested VM entry to support VMX  (Mingwei Zhang)
Update the trace function for nested VM entry to support VMX. The existing trace function only supports nested SVM, and the information printed out is AMD specific. So, rename trace_kvm_nested_vmrun() to trace_kvm_nested_vmenter(), since 'vmenter' is generic. Add a new field 'isa' to distinguish Intel and AMD, and update the output to print out VMX/SVM related naming respectively, e.g., vmcb vs. vmcs; npt vs. ept.

Opportunistically update the call site of trace_kvm_nested_vmenter() to make one line per parameter.

Signed-off-by: Mingwei Zhang <mizhang@google.com>
Link: https://lore.kernel.org/r/20220825225755.907001-2-mizhang@google.com
[sean: align indentation, s/update/rename in changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: x86: Use u64 for address and error code in page fault tracepoint  (Sean Christopherson)
Track the address and error code as 64-bit values in the page fault tracepoint. When TDP is enabled, the address is a GPA and thus can be a 64-bit value even on 32-bit hosts. And SVM's #NPF generates 64-bit error codes.

Opportunistically clean up the formatting.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: Add extra information in kvm_page_fault trace point  (Wonhyuk Yang)
Currently, the kvm_page_fault trace point provides the fault_address and error code. However, that is not enough to find out which cpu and instruction caused the kvm_page_faults. So add the vcpu id and instruction pointer to the kvm_page_fault trace point.

Cc: Baik Song An <bsahn@etri.re.kr>
Cc: Hong Yeon Kim <kimhy@etri.re.kr>
Cc: Taeung Song <taeung@reallinux.co.kr>
Cc: linuxgeek@linuxgeek.io
Signed-off-by: Wonhyuk Yang <vvghjk1234@gmail.com>
Link: https://lore.kernel.org/r/20220510071001.87169-1-vvghjk1234@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: x86: Delete duplicate documentation for KVM_X86_SET_MSR_FILTER  (Aaron Lewis)
Two copies of KVM_X86_SET_MSR_FILTER somehow managed to make their way into the documentation. Remove one copy and merge the difference from the removed copy into the copy that's being kept.

Fixes: fd49e8ee70b3 ("Merge branch 'kvm-sev-cgroup' into HEAD")
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Link: https://lore.kernel.org/r/20220712001045.2364298-2-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: SVM: remove unnecessary check on INIT intercept  (Paolo Bonzini)
Since svm_check_nested_events() is now handling INIT signals, there is no need to latch it until the VMEXIT is injected. The only condition under which INIT signals are latched is GIF=0.

Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220819165643.83692-1-pbonzini@redhat.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
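The resulting check plausibly reduces to GIF alone; a sketch:

	static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu)
	{
		/* INIT signals are latched only while GIF is clear. */
		return !gif_set(to_svm(vcpu));
	}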
2022-09-26  KVM/VMX: Avoid stack engine synchronization uop in __vmx_vcpu_run  (Uros Bizjak)
Avoid instructions with explicit uses of the stack pointer between instructions that implicitly refer to it. The sequence of POP %reg; ADD $x, %RSP; POP %reg forces emission of a synchronization uop to synchronize the value of the stack pointer between the stack engine and the out-of-order core. Using POP with a dummy register instead of ADD $x, %RSP results in smaller code size and faster code.

The patch also fixes the reference to the wrong register in the nearby comment.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Link: https://lore.kernel.org/r/20220816211010.25693-1-ubizjak@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2022-09-26  KVM: fix memory leak in kvm_init()  (Miaohe Lin)
When alloc_cpumask_var_node() fails for a certain cpu, there might be some already-allocated cpumasks for the percpu cpu_kick_mask. We should free these cpumasks, or a memory leak will occur.

Fixes: baff59ccdc65 ("KVM: Pre-allocate cpumasks for kvm_make_all_cpus_request_except()")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Link: https://lore.kernel.org/r/20220823063414.59778-1-linmiaohe@huawei.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
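The fix pattern, sketched (the error label name is an assumption):

	for_each_possible_cpu(cpu) {
		if (!alloc_cpumask_var_node(&per_cpu(cpu_kick_mask, cpu),
					    GFP_KERNEL, cpu_to_node(cpu))) {
			r = -ENOMEM;
			goto err_free_kick_masks;
		}
	}
	...
	err_free_kick_masks:
		/* Unwind every mask allocated so far; freeing NULL is safe. */
		for_each_possible_cpu(cpu)
			free_cpumask_var(per_cpu(cpu_kick_mask, cpu));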
2022-09-26  erofs: introduce partial-referenced pclusters  (Gao Xiang)
Due to deduplication for compressed data, pclusters can be partially referenced with their prefixes. Together with the user-space implementation, it enables EROFS variable-length global compressed data deduplication with rolling hash.

Link: https://lore.kernel.org/r/20220923014915.4362-1-hsiangkao@linux.alibaba.com
Reviewed-by: Yue Hu <huyue2@coolpad.com>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2022-09-26  erofs: support on-disk compressed fragments data  (Yue Hu)
Introduce the on-disk compressed fragments data feature. This approach adds a new field called `h_fragmentoff' in the per-file compression header to indicate the fragment offset of each tail pcluster or the whole file in the special packed inode.

Similar to ztailpacking, it will also find and record the 'headlcn' of the tail pcluster when initializing the per-inode zmap, to make follow-on requests easier.

Signed-off-by: Yue Hu <huyue2@coolpad.com>
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/YzHKxcFTlHGgXeH9@B-P7TQMD6M-0146.local
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2022-09-26  skmsg: Schedule psock work if the cached skb exists on the psock  (Liu Jian)
In the sk_psock_backlog() function, for an ingress-direction skb, if no new data packet arrives after the skb is cached, the cached skb does not get a chance to be added to the receive queue of the psock. As a result, the cached skb cannot be received by the upper-layer application. Fix this by rescheduling the psock work to dispose of the cached skb in the sk_msg_recvmsg() function.

Fixes: 604326b41a6f ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220907071311.60534-1-liujian56@huawei.com
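Sketched at the end of sk_msg_recvmsg() (the exact condition is an assumption based on the changelog):

	/* If an skb is still cached on the psock, kick the backlog worker
	 * so it gets a chance to move the skb to the receive queue.
	 */
	if (unlikely(psock->work_state.skb) && copied > 0)
		schedule_work(&psock->work);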
2022-09-26  selftests/bpf: Add wait send memory test for sockmap redirect  (Liu Jian)
Add a test for sockmap redirection that waits on the redirected socket's send memory.

Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20220823133755.314697-3-liujian56@huawei.com
2022-09-26  net: If sock is dead don't access sock's sk_wq in sk_stream_wait_memory  (Liu Jian)
Fixes the below NULL pointer dereference:

[...]
[   14.471200] Call Trace:
[   14.471562]  <TASK>
[   14.471882]  lock_acquire+0x245/0x2e0
[   14.472416]  ? remove_wait_queue+0x12/0x50
[   14.473014]  ? _raw_spin_lock_irqsave+0x17/0x50
[   14.473681]  _raw_spin_lock_irqsave+0x3d/0x50
[   14.474318]  ? remove_wait_queue+0x12/0x50
[   14.474907]  remove_wait_queue+0x12/0x50
[   14.475480]  sk_stream_wait_memory+0x20d/0x340
[   14.476127]  ? do_wait_intr_irq+0x80/0x80
[   14.476704]  do_tcp_sendpages+0x287/0x600
[   14.477283]  tcp_bpf_push+0xab/0x260
[   14.477817]  tcp_bpf_sendmsg_redir+0x297/0x500
[   14.478461]  ? __local_bh_enable_ip+0x77/0xe0
[   14.479096]  tcp_bpf_send_verdict+0x105/0x470
[   14.479729]  tcp_bpf_sendmsg+0x318/0x4f0
[   14.480311]  sock_sendmsg+0x2d/0x40
[   14.480822]  ____sys_sendmsg+0x1b4/0x1c0
[   14.481390]  ? copy_msghdr_from_user+0x62/0x80
[   14.482048]  ___sys_sendmsg+0x78/0xb0
[   14.482580]  ? vmf_insert_pfn_prot+0x91/0x150
[   14.483215]  ? __do_fault+0x2a/0x1a0
[   14.483738]  ? do_fault+0x15e/0x5d0
[   14.484246]  ? __handle_mm_fault+0x56b/0x1040
[   14.484874]  ? lock_is_held_type+0xdf/0x130
[   14.485474]  ? find_held_lock+0x2d/0x90
[   14.486046]  ? __sys_sendmsg+0x41/0x70
[   14.486587]  __sys_sendmsg+0x41/0x70
[   14.487105]  ? intel_pmu_drain_pebs_core+0x350/0x350
[   14.487822]  do_syscall_64+0x34/0x80
[   14.488345]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
[...]

The test scenario has the following flow:

thread1                                    thread2
-----------                                ---------------
tcp_bpf_sendmsg
 tcp_bpf_send_verdict
  tcp_bpf_sendmsg_redir                    sock_close
   tcp_bpf_push_locked                      __sock_release
    tcp_bpf_push                            //inet_release
     do_tcp_sendpages                       sock->ops->release
      sk_stream_wait_memory                 // tcp_close
       sk_wait_event                        sk->sk_prot->close
        release_sock(__sk);
        ***                                 lock_sock(sk);
                                             __tcp_close
                                              sock_orphan(sk)
                                               sk->sk_wq = NULL
                                            release_sock
        ****
        lock_sock(__sk);
       remove_wait_queue(sk_sleep(sk), &wait);
        sk_sleep(sk)
        //NULL pointer dereference
        &rcu_dereference_raw(sk->sk_wq)->wait

While waiting for memory in thread1, the socket is released with its wait queue because thread2 has closed it. This is caused by tcp_bpf_send_verdict() not increasing the f_count of psock->sk_redir->sk_socket->file in thread1. We should check whether the SOCK_DEAD flag is set on wakeup in sk_stream_wait_memory() before accessing the wait queue.

Suggested-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/bpf/20220823133755.314697-2-liujian56@huawei.com
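The guard, sketched in sk_stream_wait_memory() (exact placement is an assumption from the changelog):

	/* If the socket was orphaned while we slept, sk->sk_wq is gone;
	 * don't dereference it via sk_sleep().
	 */
	if (!sock_flag(sk, SOCK_DEAD))
		remove_wait_queue(sk_sleep(sk), &wait);
	return err;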
2022-09-26  spi: spi-fsl-qspi: Use devm_platform_ioremap_resource_byname()  (Yang Yingliang)
Use the devm_platform_ioremap_resource_byname() helper instead of calling platform_get_resource_byname() and devm_ioremap_resource() separately.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Link: https://lore.kernel.org/r/20220924131854.964923-3-yangyingliang@huawei.com
Signed-off-by: Mark Brown <broonie@kernel.org>
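The conversion collapses the two-step lookup-and-map into one helper; a sketch (the resource name "QuadSPI" is an assumption for illustration):

	struct resource *res;
	void __iomem *base;

	/* Before: fetch the resource, then map it. */
	res  = platform_get_resource_byname(pdev, IORESOURCE_MEM, "QuadSPI");
	base = devm_ioremap_resource(&pdev->dev, res);

	/* After: one call does both and returns the mapping (or an ERR_PTR). */
	base = devm_platform_ioremap_resource_byname(pdev, "QuadSPI");

The two commits below apply the same simplification with devm_platform_get_and_ioremap_resource() for index-based resources.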
2022-09-26  spi: spi-fsl-lpspi: Use devm_platform_get_and_ioremap_resource()  (Yang Yingliang)
Use the devm_platform_get_and_ioremap_resource() helper instead of calling platform_get_resource() and devm_ioremap_resource() separately.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Link: https://lore.kernel.org/r/20220924131854.964923-2-yangyingliang@huawei.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  spi: spi-fsl-dspi: Use devm_platform_get_and_ioremap_resource()  (Yang Yingliang)
Use the devm_platform_get_and_ioremap_resource() helper instead of calling platform_get_resource() and devm_ioremap_resource() separately.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Link: https://lore.kernel.org/r/20220924131854.964923-1-yangyingliang@huawei.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  spi/omap100k: Fix PM disable depth imbalance in omap1_spi100k_probe  (Zhang Qilong)
pm_runtime_enable() increases the power disable depth. Thus a pairing decrement is needed on the error handling path to keep it balanced.

Fixes: db91841b58f9a ("spi/omap100k: Convert to runtime PM")
Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
Link: https://lore.kernel.org/r/20220924121310.78331-4-zhangqilong3@huawei.com
Signed-off-by: Mark Brown <broonie@kernel.org>
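The pattern, sketched against a generic probe function (the label and the failing call are illustrative); the two fixes below apply the same shape:

	pm_runtime_enable(&pdev->dev);

	status = devm_spi_register_controller(&pdev->dev, ctlr); /* any late failure */
	if (status < 0)
		goto err_pm_disable;

	return 0;

err_pm_disable:
	/* Pair the earlier enable so the disable depth stays balanced. */
	pm_runtime_disable(&pdev->dev);
	return status;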
2022-09-26  spi: dw: Fix PM disable depth imbalance in dw_spi_bt1_probe  (Zhang Qilong)
pm_runtime_enable() increases the power disable depth. Thus a pairing decrement is needed on the error handling path to keep it balanced.

Fixes: abf00907538e2 ("spi: dw: Add Baikal-T1 SPI Controller glue driver")
Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
Link: https://lore.kernel.org/r/20220924121310.78331-3-zhangqilong3@huawei.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  spi: cadence-quadspi: Fix PM disable depth imbalance in cqspi_probe  (Zhang Qilong)
pm_runtime_enable() increases the power disable depth. Thus a pairing decrement is needed on the error handling path to keep it balanced.

Fixes: 73d5fe0462702 ("spi: cadence-quadspi: Remove spi_master_put() in probe failure path")
Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
Link: https://lore.kernel.org/r/20220924121310.78331-2-zhangqilong3@huawei.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  regulator: tps65219: Fix is_enabled checking in tps65219_set_bypass  (Axel Lin)
Testing .enable cannot tell whether a regulator is enabled or not; check the return value of .is_enabled() instead. Also remove the unneeded ret variable.

Signed-off-by: Axel Lin <axel.lin@ingics.com>
Link: https://lore.kernel.org/r/20220919122353.384171-1-axel.lin@ingics.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  ASoC: MAINTAINERS: add bindings and APR to Qualcomm Audio entry  (Krzysztof Kozlowski)
Extend the Qualcomm Audio maintainer entry to include sound related bindings and the Qualcomm APR/GPR (Asynchronous/Generic Packet Router) IPC driver, which is tightly related to the Audio DSP.

Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Acked-by: Bjorn Andersson <andersson@kernel.org>
Link: https://lore.kernel.org/r/20220923203140.514730-1-krzysztof.kozlowski@linaro.org
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  ASoC: codecs: wcd934x: Fix Kconfig dependency  (Ren Zhijie)
If CONFIG_REGMAP_SLIMBUS is not set, building with make ARCH=x86_64 CROSS_COMPILE=x86_64-linux-gnu- will fail like this:

  sound/soc/codecs/wcd934x.o: In function `wcd934x_codec_probe':
  wcd934x.c:(.text+0x3310): undefined reference to `__regmap_init_slimbus'
  make: *** [vmlinux] Error 1

Add select REGMAP_SLIMBUS to config SND_SOC_WCD934X.

Fixes: a61f3b4f476e ("ASoC: wcd934x: add support to wcd9340/wcd9341 codec")
Signed-off-by: Ren Zhijie <renzhijie2@huawei.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20220926074042.13297-1-renzhijie2@huawei.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  ASoC: SOF: mediatek: mt8195: Add pcm_pointer callback  (Chunxu Li)
Add a pcm_pointer callback for mt8195 to support reading the host position from the DSP.

Signed-off-by: Chunxu Li <chunxu.li@mediatek.com>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Link: https://lore.kernel.org/r/20220924033559.26599-3-chunxu.li@mediatek.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  ASoC: SOF: mediatek: mt8195: Add pcm_hw_params callback  (Chunxu Li)
Add a pcm_hw_params callback for mt8195 to support continuously updating the DMA host position.

Signed-off-by: Chunxu Li <chunxu.li@mediatek.com>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Link: https://lore.kernel.org/r/20220924033559.26599-2-chunxu.li@mediatek.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-09-26  x86/cpu: Include the header of init_ia32_feat_ctl()'s prototype  (Luciano Leão)
Include the header containing the prototype of init_ia32_feat_ctl(), solving the following warning:

  $ make W=1 arch/x86/kernel/cpu/feat_ctl.o
  arch/x86/kernel/cpu/feat_ctl.c:112:6: warning: no previous prototype for ‘init_ia32_feat_ctl’ [-Wmissing-prototypes]
    112 | void init_ia32_feat_ctl(struct cpuinfo_x86 *c)

This warning appeared after commit 5d5103595e9e5 ("x86/cpu: Reinitialize IA32_FEAT_CTL MSR on BSP during wakeup") had moved the function init_ia32_feat_ctl()'s prototype from arch/x86/kernel/cpu/cpu.h to arch/x86/include/asm/cpu.h.

Note that, before the commit mentioned above, the header include "cpu.h" (arch/x86/kernel/cpu/cpu.h) was added by commit 0e79ad863df43 ("x86/cpu: Fix a -Wmissing-prototypes warning for init_ia32_feat_ctl()") solely to fix init_ia32_feat_ctl()'s missing prototype. So, the header include "cpu.h" is no longer necessary.

[ bp: Massage commit message. ]

Fixes: 5d5103595e9e5 ("x86/cpu: Reinitialize IA32_FEAT_CTL MSR on BSP during wakeup")
Signed-off-by: Luciano Leão <lucianorsleao@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Nícolas F. R. A. Prado <n@nfraprado.net>
Link: https://lore.kernel.org/r/20220922200053.1357470-1-lucianorsleao@gmail.com
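Per the changelog, the change itself should amount to an include swap in arch/x86/kernel/cpu/feat_ctl.c, sketched:

	#include <asm/cpu.h>	/* init_ia32_feat_ctl()'s prototype lives here now */

with the old #include "cpu.h" dropped, since it is no longer needed.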
2022-09-26  fs: dlm: fix possible use after free if tracing  (Alexander Aring)
This patch fixes a possible use after free if tracing for the specific event is enabled. To avoid the use after free, we introduce an out_put label, as in all other user-lock-specific requests, and save in a boolean whether or not to do a put, which depends on the execution path of dlm_user_request().

Cc: stable@vger.kernel.org
Fixes: 7a3de7324c2b ("fs: dlm: trace user space callbacks")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alexander Aring <aahringo@redhat.com>
Signed-off-by: David Teigland <teigland@redhat.com>
2022-09-26  io_uring/net: fix cleanup double free free_iov init  (Pavel Begunkov)
Having ->async_data doesn't mean it's initialised; previously we were relying on setting F_CLEANUP at the right moment. With zc sendmsg, though, we set F_CLEANUP early in prep when we alloc a notif, and so we may allocate async_data, fail in copy_msg_hdr(), and leave struct io_async_msghdr not initialised correctly but with F_CLEANUP set, which causes a ->free_iov double free and probably other nastiness.

Always initialise ->free_iov. Also, now it might point to fast_iov when the request fails, so avoid freeing it during cleanups.

Reported-by: syzbot+edfd15cd4246a3fc615a@syzkaller.appspotmail.com
Fixes: 493108d95f146 ("io_uring/net: zerocopy sendmsg")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
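The core of the fix, sketched (field names from the changelog; exact call sites are assumptions):

	/* Make cleanup state valid as soon as async_data exists. */
	async_msg->free_iov = NULL;

	/* On a failed header copy the iter may still reference the embedded
	 * fast_iov, which must never be kfree()'d by the cleanup path.
	 */
	if (async_msg->free_iov == async_msg->fast_iov)
		async_msg->free_iov = NULL;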
2022-09-26  drm/panel-edp: Add INX N116BCA-EA2  (Sean Hong)
Add support for the INX - N116BCA-EA2 (HW: C1) panel.

Signed-off-by: Sean Hong <sean.hong@quanta.corp-partner.google.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20220926100839.482804-1-sean.hong@quanta.corp-partner.google.com
2022-09-26  drm/i915/gt: Restrict forced preemption to the active context  (Chris Wilson)
When we submit a new pair of contexts to ELSP for execution, we start a timer by which point we expect the HW to have switched execution to the pending contexts. If the promotion to the new pair of contexts has not occurred, we declare the executing context to have hung and force the preemption to take place by resetting the engine and resubmitting the new contexts.

This can lead to an unfair situation where almost all of the preemption timeout is consumed by the first context, which just switches into the second context immediately prior to the timer firing and triggering the preemption reset (assuming that the timer interrupts before we process the CS events for the context switch). The second context hasn't yet had a chance to yield to the incoming ELSP (and send the ACK for the promotion) and so ends up being blamed for the reset.

If we see that a context switch has occurred since setting the preemption timeout, but have not yet received the ACK for the ELSP promotion, rearm the preemption timer and check again. This is especially significant if the first context was not schedulable and so we used the shortest timer possible, greatly increasing the chance of accidentally blaming the second innocent context.

Fixes: 3a7a92aba8fb ("drm/i915/execlists: Force preemption")
Fixes: d12acee84ffb ("drm/i915/execlists: Cancel banned contexts on schedule-out")
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
Tested-by: Andrzej Hajda <andrzej.hajda@intel.com>
Cc: <stable@vger.kernel.org> # v5.5+
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220921135258.1714873-1-andrzej.hajda@intel.com
(cherry picked from commit 107ba1a2c705f4358f2602ec2f2fd821bb651f42)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2022-09-26  perf tests powerpc: Fix branch stack sampling test to include sanity check for branch filter  (Athira Rajeev)
Commit b55878c90ab92a24 ("perf test: Add test for branch stack sampling") added a test for branch stack sampling. There is a sanity check in the beginning to skip the test if the hardware doesn't support branch stack sampling. Snippet:

<<>>
# skip the test if the hardware doesn't support branch stack sampling
perf record -b -o- -B true > /dev/null 2>&1 || exit 2
<<>>

But the testcase also uses the branch sample types save_type and any. If a platform doesn't support the branch filters used in the test, the testcase will fail. In powerpc, multiple branch filters are currently not supported, hence this test fails on powerpc. Fix the sanity check to look at the support for the branch filters used in this test before proceeding with the test.

Fixes: b55878c90ab92a24 ("perf test: Add test for branch stack sampling")
Reported-by: Disha Goel <disgoel@linux.vnet.ibm.com>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: German Gomez <german.gomez@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nageswara R Sastry <rnsastry@linux.ibm.com>
Link: https://lore.kernel.org/r/20220921145255.20972-2-atrajeev@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-09-26  perf parse-events: Remove "not supported" hybrid cache events  (Zhengjun Xing)
By default, we create two hybrid cache events, one for cpu_core and another for cpu_atom. But some hybrid hardware cache events are only available on one CPU PMU. For example, 'L1-dcache-load-misses' is only available on cpu_core, while 'L1-icache-loads' is only available on cpu_atom. We need to remove the "not supported" hybrid cache events. By extending is_event_supported() to a global API and using it to check whether the hybrid cache events are supported before they are created, we can remove the "not supported" hybrid cache events.

Before:

  # ./perf stat -e L1-dcache-load-misses,L1-icache-loads -a sleep 1

   Performance counter stats for 'system wide':

              52,570      cpu_core/L1-dcache-load-misses/
     <not supported>      cpu_atom/L1-dcache-load-misses/
     <not supported>      cpu_core/L1-icache-loads/
           1,471,817      cpu_atom/L1-icache-loads/

         1.004915229 seconds time elapsed

After:

  # ./perf stat -e L1-dcache-load-misses,L1-icache-loads -a sleep 1

   Performance counter stats for 'system wide':

              54,510      cpu_core/L1-dcache-load-misses/
           1,441,286      cpu_atom/L1-icache-loads/

         1.005114281 seconds time elapsed

Fixes: 30def61f64bac5f5 ("perf parse-events: Create two hybrid cache events")
Reported-by: Yi Ammy <ammy.yi@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220923030013.3726410-2-zhengjun.xing@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-09-26  perf print-events: Fix "perf list" can not display the PMU prefix for some hybrid cache events  (Zhengjun Xing)
Some hybrid hardware cache events are only available on one CPU PMU. For example, 'L1-dcache-load-misses' is only available on cpu_core. "perf list" has supported clearly reporting this info, and the function worked fine before, but recently the "config" argument of the is_event_supported() API was changed from "u64" to "unsigned int", which caused a regression: "perf list" can then not display the PMU prefix for some hybrid cache events.

For hybrid systems, the PMU type ID is stored at config[63:32]; defining config as "unsigned int" loses the PMU type ID information, which is how the regression happened. The config should be defined as "u64".

Before:

  # ./perf list |grep "Hardware cache event"

  L1-dcache-load-misses                      [Hardware cache event]
  L1-dcache-loads                            [Hardware cache event]
  L1-dcache-stores                           [Hardware cache event]
  L1-icache-load-misses                      [Hardware cache event]
  L1-icache-loads                            [Hardware cache event]
  LLC-load-misses                            [Hardware cache event]
  LLC-loads                                  [Hardware cache event]
  LLC-store-misses                           [Hardware cache event]
  LLC-stores                                 [Hardware cache event]
  branch-load-misses                         [Hardware cache event]
  branch-loads                               [Hardware cache event]
  dTLB-load-misses                           [Hardware cache event]
  dTLB-loads                                 [Hardware cache event]
  dTLB-store-misses                          [Hardware cache event]
  dTLB-stores                                [Hardware cache event]
  iTLB-load-misses                           [Hardware cache event]
  node-load-misses                           [Hardware cache event]
  node-loads                                 [Hardware cache event]

After:

  # ./perf list |grep "Hardware cache event"

  L1-dcache-loads                            [Hardware cache event]
  L1-dcache-stores                           [Hardware cache event]
  L1-icache-load-misses                      [Hardware cache event]
  LLC-load-misses                            [Hardware cache event]
  LLC-loads                                  [Hardware cache event]
  LLC-store-misses                           [Hardware cache event]
  LLC-stores                                 [Hardware cache event]
  branch-load-misses                         [Hardware cache event]
  branch-loads                               [Hardware cache event]
  cpu_atom/L1-icache-loads/                  [Hardware cache event]
  cpu_core/L1-dcache-load-misses/            [Hardware cache event]
  cpu_core/node-load-misses/                 [Hardware cache event]
  cpu_core/node-loads/                       [Hardware cache event]
  dTLB-load-misses                           [Hardware cache event]
  dTLB-loads                                 [Hardware cache event]
  dTLB-store-misses                          [Hardware cache event]
  dTLB-stores                                [Hardware cache event]
  iTLB-load-misses                           [Hardware cache event]

Fixes: 9b7c7728f4e4ba8d ("perf parse-events: Break out tracepoint and printing")
Reported-by: Yi Ammy <ammy.yi@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220923030013.3726410-1-zhengjun.xing@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-09-26  perf tools: Get a perf cgroup more portably in BPF  (Namhyung Kim)
The perf_event_cgrp_id can be different on other configurations. To be more portable as CO-RE, it needs to get the cgroup subsys id using the bpf_core_enum_value() helper.

Suggested-by: Ian Rogers <irogers@google.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <songliubraving@fb.com>
Cc: bpf@vger.kernel.org
Link: https://lore.kernel.org/r/20220923063205.772936-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2022-09-26  powerpc: Remove direct call to personality syscall handler  (Rohan McLure)
Syscall handlers should not be invoked internally by their symbol names, as these symbols are defined by the architecture-defined SYSCALL_DEFINE macro. Fortunately, in the case of ppc64_personality, its call to sys_personality can be replaced with an invocation of the equivalent ksys_personality inline helper in <linux/syscalls.h>.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220921065605.1051927-13-rmclure@linux.ibm.com
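Sketched (the real handler also adjusts PER_LINUX32 personality bits around the call, elided here):

	SYSCALL_DEFINE1(ppc64_personality, unsigned long, personality)
	{
		/* was: ret = sys_personality(personality); */
		return ksys_personality(personality);
	}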
2022-09-26  powerpc/32: Remove powerpc select specialisation  (Rohan McLure)
Syscall #82 has been implemented for 32-bit platforms in a unique way on powerpc systems. This hack in effect guesses whether the caller expects new select semantics or old select semantics, based on the first parameter. In new select, this parameter represents the length of a user-memory array of file descriptors; in old select it is a pointer to an arguments structure. The heuristic simply interprets sufficiently large values of the first parameter as being calls to old select.

A discussion on how this syscall should be handled is linked below. As discussed in that thread, the existence of such a hack suggests that whatever powerpc binaries may predate glibc most likely made use of the old select semantics. x86 and arm64 both implement this syscall with oldselect semantics.

Remove the powerpc implementation, and update syscall.tbl to emit a reference to sys_old_select and compat_sys_old_select for 32-bit binaries, in keeping with how other architectures support syscall #82.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/lkml/13737de5-0eb7-e881-9af0-163b0d29a1a0@csgroup.eu/
Link: https://lore.kernel.org/r/20220921065605.1051927-12-rmclure@linux.ibm.com
2022-09-26  powerpc: Use generic fallocate compatibility syscall  (Rohan McLure)
The powerpc fallocate compat syscall handler is identical to the generic implementation provided by commit 59c10c52f573f ("riscv: compat: syscall: Add compat_sys_call_table implementation"), and as such can be removed in favour of the generic implementation.

A future patch series will replace more architecture-defined syscall handlers with generic implementations, dependent on introducing generic implementations that are compatible with powerpc and arm's parameter reorderings.

Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220921065605.1051927-11-rmclure@linux.ibm.com
2022-09-26  asm-generic: compat: Support BE for long long args in 32-bit ABIs  (Rohan McLure)
32-bit ABIs support passing 64-bit integers by registers via argument translation. Commit 59c10c52f573 ("riscv: compat: syscall: Add compat_sys_call_table implementation") implements the compat_arg_u64 macro for efficiently defining little endian compatibility syscalls. Architectures supporting big endianness may benefit from reciprocal argument translation, but are welcome also to implement their own.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220921065605.1051927-10-rmclure@linux.ibm.com
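A sketch of the endian-aware form (macro spellings are assumptions based on the riscv commit referenced above):

	/* A 32-bit ABI passes a u64 argument in two consecutive registers.
	 * Big endian places the high word first; little endian the low word.
	 */
	#ifdef CONFIG_CPU_BIG_ENDIAN
	#define compat_arg_u64(name)		u32 name##_hi, u32 name##_lo
	#else
	#define compat_arg_u64(name)		u32 name##_lo, u32 name##_hi
	#endif

	#define compat_arg_u64_glue(name)	(((u64)name##_hi << 32) | \
						 ((u64)name##_lo & 0xffffffffUL))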
2022-09-26  powerpc: Fix fallocate and fadvise64_64 compat parameter combination  (Rohan McLure)
As reported[1] by Arnd, the arch-specific fadvise64_64 and fallocate compatibility handlers assume parameters are passed with the 32-bit big-endian ABI. This affects the assignment of odd-even parameter pairs to the high or low words of a 64-bit syscall parameter. Fix the fadvise64_64 and fallocate compat handlers to correctly swap the upper/lower 32 bits conditioned on endianness.

A future patch will replace the arch-specific compat fallocate with an asm-generic implementation. This patch is intended for ease of back-port.

[1]: https://lore.kernel.org/all/be29926f-226e-48dc-871a-e29a54e80583@www.fastmail.com/

Fixes: 57f48b4b74e7 ("powerpc/compat_sys: swap hi/lo parts of 64-bit syscall args in LE mode")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220921065605.1051927-9-rmclure@linux.ibm.com
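The swap reduces to an endian-conditional merge; a sketch (powerpc's actual macro may differ in spelling):

	/* Merge an odd/even 32-bit register pair into a u64.  The first
	 * register of the pair carries the high word on BE, the low word on LE.
	 */
	#ifdef __BIG_ENDIAN__
	#define merge_64(reg1, reg2)	(((u64)(reg1) << 32) | (u64)(reg2))
	#else
	#define merge_64(reg1, reg2)	(((u64)(reg2) << 32) | (u64)(reg1))
	#endif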
2022-09-26  powerpc/64s: Fix comment on interrupt handler prologue  (Rohan McLure)
Interrupt handlers on 64s systems will often need to save register state from the interrupted process to make space for loading special purpose registers or for internal state.

Fix a comment documenting a common code path macro in the beginning of interrupt handlers, where r10 is saved to the PACA to afford space for the value of the CFAR. The comment is currently written as if r10-r12 are saved to the PACA, but in fact only r10 is saved, with r11-r12 saved much later. The distance in code between these saves has grown over the many revisions of this macro. Fix this by signalling with a comment where r11-r12 are saved to the PACA.

Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Reported-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220921065605.1051927-8-rmclure@linux.ibm.com