2025-02-26  x86/ibt: Implement FineIBT-BHI mitigation  (Peter Zijlstra)

While WAIT_FOR_ENDBR is specified to be a full speculation stop, it has been shown that some implementations are 'leaky' to such an extent that speculation can escape even the FineIBT preamble.

To deal with this, add additional hardening to the FineIBT preamble. Notably, use a new LLVM feature:

  https://github.com/llvm/llvm-project/commit/e223485c9b38a5579991b8cebb6a200153eee245

which encodes the number of arguments in the kCFI preamble's register. Using this register<->arity mapping, have the FineIBT preamble CALL into a stub clobbering the relevant argument registers in the speculative case.

Scott sayeth thusly:

Microarchitectural attacks such as Branch History Injection (BHI) and Intra-mode Branch Target Injection (IMBTI) [1] can cause an indirect call to mispredict to an adversary-influenced target within the same hardware domain (e.g., within the kernel). Instructions at the mispredicted target may execute speculatively and potentially expose kernel data (e.g., to a user-mode adversary) through a microarchitectural covert channel such as CPU cache state.

CET-IBT [2] is a coarse-grained control-flow integrity (CFI) ISA extension that enforces that each indirect call (or indirect jump) must land on an ENDBR (end branch) instruction, even speculatively*. FineIBT is a software technique that refines CET-IBT by associating each function type with a 32-bit hash and enforcing (at the callee) that the hash of the caller's function pointer type matches the hash of the callee's function type. However, recent research [3] has demonstrated that the conditional branch that enforces FineIBT's hash check can be coerced to mispredict, potentially allowing an adversary to speculatively bypass the hash check:

  __cfi_foo:
     ENDBR64
     SUB R10d, 0x01234567
     JZ foo   # Even if the hash check fails and ZF=0, this branch could still mispredict as taken
     UD2
  foo:
     ...

The techniques demonstrated in [3] require the attacker to be able to control the contents of at least one live register at the mispredicted target. Therefore, this patch set introduces a sequence of CMOV instructions at each indirect-callable target that poisons every live register with data that the attacker cannot control whenever the FineIBT hash check fails, thus mitigating any potential attack.

The security provided by this scheme has been discussed in detail on an earlier thread [4].

[1] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html
[2] Intel Software Developer's Manual, Volume 1, Chapter 18
[3] https://www.vusec.net/projects/native-bhi/
[4] https://lore.kernel.org/lkml/20240927194925.707462984@infradead.org/

*There are some caveats for certain processors, see [1] for more info

Suggested-by: Scott Constable <scott.d.constable@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250224124200.820402212@infradead.org
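To make that concrete, here is a rough sketch of the hardened preamble shape described above (illustrative only; the exact instruction sequence, hash value and stub name in the patch may differ):

  __cfi_foo:
     endbr64
     subl    $0x12345678, %r10d   /* kCFI hash check: ZF=1 on a match */
     call    __bhi_args_3         /* arity 3: stub clobbers RDI/RSI/RDX when ZF=0 */
  foo:
     ...

Architecturally, the CALL changes no registers on a successful hash check; the stub's conditional moves only fire in the speculative window where a failed check was mispredicted (see the BHI stubs commit below).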
2025-02-26  x86/bhi: Add BHI stubs  (Peter Zijlstra)

Add an array of code thunks, to be called from the FineIBT preamble, clobbering the first 'n' argument registers for speculative execution.

Notably, the 0th entry will clobber no argument registers and will never be used; it exists only so the array can be naturally indexed. The 7th entry will clobber all 6 argument registers and also RSP, in order to mess up stack-based arguments.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250224124200.717378681@infradead.org
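A hedged sketch of what one entry in such an array could look like (symbol names, register count and trap placement are illustrative, not copied from the patch). The key property is that CMOVcc is not predicted; it carries a data dependency on the flags, so the poisoning happens even when the preceding hash-check branch was mispredicted:

  __bhi_args_2:
     jne     .Lud_2          /* architectural hash mismatch: trap */
     cmovne  %r10, %rdi      /* ZF=0 under speculation: poison arg register 1 */
     cmovne  %r10, %rsi      /* ZF=0 under speculation: poison arg register 2 */
     ret
  .Lud_2:
     ud2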
2025-02-26  selftests/x86/avx: Add AVX tests  (Chang S. Bae)

Add xstate testing specifically for those vector register states, validating the kernel's context switching and ensuring ABI compliance. Use the established xstate testing framework.

Alternatively, this invocation could be placed directly in xstate.c::main(). However, the current test file naming convention, which clearly specifies the tested area, seems reasonable. Adding avx.c aligns with that convention.

The test output should look like this, with ZMM_Hi256 as an example:

  $ avx_64
  ...
  [RUN]  AVX-512 ZMM_Hi256: check context switches, 10 iterations, 5 threads.
  [OK]   No incorrect case was found.
  [RUN]  AVX-512 ZMM_Hi256: inject xstate via ptrace().
  [OK]   'xfeatures' in SW reserved area was correctly written
  [OK]   xstate was correctly updated.
  [RUN]  AVX-512 ZMM_Hi256: load xstate and raise SIGUSR1
  [OK]   'magic1' is valid
  [OK]   'xfeatures' in SW reserved area is valid
  [OK]   'xfeatures' in XSAVE header is valid
  [OK]   xstate delivery was successful
  [OK]   'magic2' is valid
  [RUN]  AVX-512 ZMM_Hi256: load new xstate from sighandler and check it after sigreturn
  [OK]   xstate was restored correctly

But systems without AVX-512 will look like:

  ...
  The kernel does not support feature number: 5
  The kernel does not support feature number: 6
  The kernel does not support feature number: 7

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226010731.2456-10-chang.seok.bae@intel.com
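The skipped-feature messages above come from a kernel-support probe. A minimal standalone sketch of such a probe (not code from the selftest itself; it assumes the CPU supports XSAVE/OSXSAVE, which a robust test would first verify via CPUID):

  #include <stdint.h>
  #include <stdio.h>

  /* Read the OS-enabled xstate mask (XCR0). */
  static uint64_t xgetbv(uint32_t index)
  {
          uint32_t eax, edx;

          asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (index));
          return eax | ((uint64_t)edx << 32);
  }

  int main(void)
  {
          uint64_t xcr0 = xgetbv(0);
          int feature;

          /* Features 5-7: AVX-512 opmask, ZMM_Hi256, Hi16_ZMM. */
          for (feature = 5; feature <= 7; feature++)
                  if (!(xcr0 & (1ULL << feature)))
                          printf("The kernel does not support feature number: %d\n", feature);
          return 0;
  }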
2025-02-26  selftests/x86/xstate: Clarify supported xstates  (Chang S. Bae)
The established xstate test code is designed to be generic, but certain xstates require special handling and cannot be tested without additional adjustments. Clarify which xstates are currently supported, and enforce testing only for them. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226010731.2456-9-chang.seok.bae@intel.com
2025-02-26  selftests/x86/xstate: Consolidate test invocations into a single entry  (Chang S. Bae)
Currently, each of the three xstate tests runs as a separate invocation, requiring the xstate number to be passed and state information to be reconstructed repeatedly. This approach arose from their individual and isolated development, but now it makes sense to unify them. Introduce a wrapper function that first verifies feature availability from the kernel and constructs the necessary state information once. The wrapper then sequentially invokes all tests to ensure consistent execution. Update the AMX test to use this unified invocation. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226010731.2456-8-chang.seok.bae@intel.com
2025-02-26  selftests/x86/xstate: Introduce signal ABI test  (Chang S. Bae)

With the refactored test cases, another xstate exposure to userspace is through signal delivery. While amx.c includes signal-related scenarios, its primary focus is on xstate permission management, which is largely specific to dynamic states. The remaining gap is testing xstate preservation and restoration across signal delivery.

The kernel defines an ABI for presenting xstate in the signal frame, closely resembling the hardware XSAVE format, where xstate modification is also possible.

Introduce a new test case to verify xstate preservation across signal delivery and return, that is, ensuring ABI compatibility by:

  - Loading xstate before raising a signal.
  - Verifying correct exposure in the signal frame.
  - Modifying xstate in the signal frame before returning.
  - Checking the state restoration upon signal return.

Integrate this test into the AMX test suite as an initial usage site. Expected output:

  $ amx_64
  ...
  [RUN]  AMX Tile data: load xstate and raise SIGUSR1
  [OK]   'magic1' is valid
  [OK]   'xfeatures' in SW reserved area is valid
  [OK]   'xfeatures' in XSAVE header is valid
  [OK]   xstate delivery was successful
  [OK]   'magic2' is valid
  [RUN]  AMX Tile data: load new xstate from sighandler and check it after sigreturn
  [OK]   xstate was restored correctly

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226010731.2456-7-chang.seok.bae@intel.com
2025-02-26  selftests/x86/xstate: Refactor ptrace ABI test  (Chang S. Bae)

Following the refactoring of the context switching test, the ptrace test is another component reusable for other xstate features. As part of this restructuring, add a missing check to validate the user_xstateregs->xstate_fx_sw field in the ABI. Also, replace err() and fatal_error() with ksft_exit_fail_msg() for consistency in error handling.

Expected output:

  $ amx_64
  ...
  [RUN]  AMX Tile data: inject xstate via ptrace().
  [OK]   'xfeatures' in SW reserved area was correctly written
  [OK]   xstate was correctly updated.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226010731.2456-6-chang.seok.bae@intel.com
2025-02-26  selftests/x86/xstate: Refactor context switching test  (Chang S. Bae)

The existing context switching and ptrace tests in amx.c are not specific to dynamic states, making them reusable for general xstate testing. As a first step, move the context switching test to xstate.c. Refactor the test code to allow specifying which xstate component is being tested.

To decouple the test from dynamic states, remove the permission request code. In fact, the permission request inside the test wrapper was redundant. Additionally, replace fatal_error() with ksft_exit_fail_msg() for consistency in error handling.

Expected output:

  $ amx_64
  ...
  [RUN]  AMX Tile data: check context switches, 10 iterations, 5 threads.
  [OK]   No incorrect case was found.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226010731.2456-5-chang.seok.bae@intel.com
2025-02-26  selftests/x86/xstate: Enumerate and name xstate components  (Chang S. Bae)
After moving essential helpers from amx.c, the code remains neutral regarding which xstate components it handles. However, explicitly listing known components helps users identify which features are ready for testing. Enumerate xstate components to facilitate identification. Extend struct xstate_info to include a name field, providing a human-readable identifier. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226010731.2456-4-chang.seok.bae@intel.com
2025-02-26  selftests/x86/xstate: Refactor XSAVE helpers for general use  (Chang S. Bae)

The AMX test introduced several XSAVE-related helper functions, but so far, it has been their only user. These helpers can be generalized for broader testing of multiple xstate features.

Move most XSAVE-related code into xsave.h, making it shareable. The restructuring includes:

  * Establishing low-level XSAVE helpers for saving and restoring register states, as well as handling XSAVE buffers.
  * Generalizing state data manipulations: set_rand_data()
  * Introducing a generic feature query helper: get_xstate_info()

While doing so, remove unused defines in amx.c.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226010731.2456-3-chang.seok.bae@intel.com
2025-02-26  selftests/x86: Consolidate redundant signal helper functions  (Chang S. Bae)

The x86 selftests frequently register and clean up signal handlers, but the sethandler() and clearhandler() functions have been redundantly copied across multiple .c files. Move these functions to helpers.h to enable reuse across tests, eliminating around 250 lines of duplicate code. Converge the error handling by using ksft_exit_fail_msg(), which is functionally equivalent to err() within the selftest framework.

This change is a prerequisite for the upcoming xstate selftest, which requires signal handling for registering and cleaning up handlers.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20250226010731.2456-2-chang.seok.bae@intel.com
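A sketch of how the consolidated helpers might read in helpers.h, based on the shape of the long-standing per-file copies (exact error strings are assumptions; ksft_exit_fail_msg() comes from the kselftest framework headers):

  #include <signal.h>
  #include <string.h>

  static inline void sethandler(int sig,
                                void (*handler)(int, siginfo_t *, void *),
                                int flags)
  {
          struct sigaction sa;

          memset(&sa, 0, sizeof(sa));
          sa.sa_sigaction = handler;
          sa.sa_flags = SA_SIGINFO | flags;
          sigemptyset(&sa.sa_mask);
          if (sigaction(sig, &sa, 0))
                  ksft_exit_fail_msg("sigaction failed");
  }

  static inline void clearhandler(int sig)
  {
          struct sigaction sa;

          memset(&sa, 0, sizeof(sa));
          sa.sa_handler = SIG_DFL;
          sigemptyset(&sa.sa_mask);
          if (sigaction(sig, &sa, 0))
                  ksft_exit_fail_msg("sigaction failed");
  }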
2025-02-26  Merge tag 'v6.14-rc4' into x86/fpu, to pick up fixes and refresh the branch  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-02-26  x86/ibt: Add paranoid FineIBT mode  (Peter Zijlstra)

Due to concerns about circumvention attacks against FineIBT on 'naked' ENDBR, add an additional caller-side hash check to FineIBT. This should make it impossible to pivot over such a 'naked' ENDBR instruction, at the cost of an additional load.

The specific pivot reported was against the SYSCALL entry site, and FRED will have all those holes fixed up.

  https://lore.kernel.org/linux-hardening/Z60NwR4w%2F28Z7XUa@ubun/

This specific fineibt_paranoid_start[] sequence was concocted by Scott.

Suggested-by: Scott Constable <scott.d.constable@intel.com>
Reported-by: Jennifer Miller <jmill@asu.edu>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250224124200.598033084@infradead.org
2025-02-26  x86/traps: Decode LOCK Jcc.d8 as #UD  (Peter Zijlstra)
Because overlapping code sequences are all the rage. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kees Cook <kees@kernel.org> Link: https://lore.kernel.org/r/20250224124200.486463917@infradead.org
2025-02-26  x86/ibt: Optimize the FineIBT instruction sequence  (Peter Zijlstra)

Scott notes that non-taken branches are faster. Abuse overlapping code that traps instead of explicit UD2 instructions. And LEA does not modify flags and will have fewer dependencies.

Suggested-by: Scott Constable <scott.d.constable@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250224124200.371942555@infradead.org
2025-02-26  x86/traps: Allow custom fixups in handle_bug()  (Peter Zijlstra)

The normal fixup in handle_bug() is simply continuing at the next instruction. However, upcoming patches make this the wrong thing, so allow handlers (specifically handle_cfi_failure()) to override regs->ip.

The call chain is such that the fixup needs to be done before it is determined whether the exception is fatal; as such, revert any changes in that case. Additionally, have handle_cfi_failure() remember the regs->ip value it starts with, for reporting.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250224124200.275223080@infradead.org
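In outline, the contract this sets up reads roughly like the following (a condensed sketch with placeholder names, not the function verbatim):

  unsigned long orig_ip = regs->ip;

  /* The handler may install a custom fixup by rewriting regs->ip. */
  handled = handle_cfi_failure(regs);

  if (exception_is_fatal)          /* placeholder for the fatality check */
          regs->ip = orig_ip;      /* revert the custom fixup before dying */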
2025-02-26  x86/traps: Decode 0xEA instructions as #UD  (Peter Zijlstra)
FineIBT will start using 0xEA as #UD. Normally '0xEA' is a 'bad', invalid instruction for the CPU. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kees Cook <kees@kernel.org> Link: https://lore.kernel.org/r/20250224124200.166774696@infradead.org
2025-02-26  x86/alternatives: Clean up preprocessor conditional block comments  (Ingo Molnar)

When, in the middle of a kernel source code file, a kernel developer sees a lone #else or #endif:

  ...
  #else
  ...

it's not obvious at a glance what those preprocessor blocks are conditional on, if the starting #ifdef is outside visible range. So apply the standard pattern we use in such cases elsewhere in the kernel for large preprocessor blocks:

  #ifdef CONFIG_XXX
  ...
  ...
  ...
  #endif /* CONFIG_XXX */

  ...

  #ifdef CONFIG_XXX
  ...
  ...
  ...
  #else /* !CONFIG_XXX: */
  ...
  ...
  ...
  #endif /* !CONFIG_XXX */

( Note that in the #else case we use the /* !CONFIG_XXX */ marker in the final #endif, not /* CONFIG_XXX */, which serves as an easy visual marker to differentiate #else or #elif related #endif closures from singular #ifdef/#endif blocks. )

Also clean up the __CFI_DEFAULT definition with a bit more vertical alignment applied, and a pointless tab converted to the standard space we use in such definitions.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
2025-02-26  x86/ibt: Add exact_endbr() helper  (Peter Zijlstra)
For when we want to exactly match ENDBR, and not everything that we can scribble it with. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kees Cook <kees@kernel.org> Link: https://lore.kernel.org/r/20250224124200.059556588@infradead.org
2025-02-26  x86/cfi: Add 'cfi=warn' boot option  (Peter Zijlstra)
Rebuilding with CONFIG_CFI_PERMISSIVE=y enabled is such a pain, esp. since clang is so slow. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kees Cook <kees@kernel.org> Link: https://lore.kernel.org/r/20250224124159.924496481@infradead.org
2025-02-26  ata: ahci: Make ahci_ignore_port() handle empty mask_port_map  (Niklas Cassel)

Commit 8c87215dd3a2 ("ata: libahci_platform: support non-consecutive port numbers") added a skip to ahci_platform_enable_phys() for ports that are not in mask_port_map.

The code in ahci_platform_get_resources() will currently set mask_port_map for each child "port" node it finds in the device tree. However, device trees that do not have any child "port" nodes will not have mask_port_map set, and for non-device-tree platforms mask_port_map will only exist as a quirk for specific PCI device + vendor IDs, or as a kernel module parameter, but will not be set by default.

Therefore, the common case is that mask_port_map is only set if you do not want to use all ports (as defined by Offset 0Ch: PI – Ports Implemented register), but instead only want to use the ports in mask_port_map. If mask_port_map is not set, all ports are available.

Thus, ahci_ignore_port() must be able to handle an empty mask_port_map.

Fixes: 8c87215dd3a2 ("ata: libahci_platform: support non-consecutive port numbers")
Fixes: 2c202e6c4f4d ("ata: libahci_platform: Do not set mask_port_map when not needed")
Fixes: c9b5be909e65 ("ahci: Introduce ahci_ignore_port() helper")
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Closes: https://lore.kernel.org/linux-ide/10b31dd0-d0bb-4f76-9305-2195c3e17670@samsung.com/
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Co-developed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/20250225141612.942170-2-cassel@kernel.org
Signed-off-by: Niklas Cassel <cassel@kernel.org>
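A sketch of the required semantics (argument and field names follow the commit message; the real helper may differ): an empty mask must mean "no restriction", so every port implemented per the PI register stays usable:

  static bool ahci_ignore_port(struct ahci_host_priv *hpriv, unsigned int portid)
  {
          if (!hpriv->mask_port_map)
                  return false;    /* no mask set: keep all implemented ports */
          return !(hpriv->mask_port_map & BIT(portid));
  }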
2025-02-26  drm/imagination: remove unnecessary header include path  (Masahiro Yamada)
drivers/gpu/drm/imagination/ includes local headers with the double-quote form (#include "..."). Hence, the header search path addition is unneeded. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Matt Coster <matt.coster@imgtec.com> Link: https://patchwork.freedesktop.org/patch/msgid/20250210102352.1517115-1-masahiroy@kernel.org Signed-off-by: Matt Coster <matt.coster@imgtec.com>
2025-02-26  KVM: nVMX: Process events on nested VM-Exit if injectable IRQ or NMI is pending  (Sean Christopherson)
Process pending events on nested VM-Exit if the vCPU has an injectable IRQ or NMI, as the event may have become pending while L2 was active, i.e. may not be tracked in the context of vmcs01. E.g. if L1 has passed its APIC through to L2 and an IRQ arrives while L2 is active, then KVM needs to request an IRQ window prior to running L1, otherwise delivery of the IRQ will be delayed until KVM happens to process events for some other reason. The missed failure is detected by vmx_apic_passthrough_tpr_threshold_test in KVM-Unit-Tests, but has effectively been masked due to a flaw in KVM's PIC emulation that causes KVM to make spurious KVM_REQ_EVENT requests (and apparently no one ever ran the test with split IRQ chips). Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20250224235542.2562848-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-26  KVM: x86: Free vCPUs before freeing VM state  (Sean Christopherson)

Free vCPUs before freeing any VM state, as both SVM and VMX may access VM state when "freeing" a vCPU that is currently "in" L2, i.e. that needs to be kicked out of nested guest mode.

Commit 6fcee03df6a1 ("KVM: x86: avoid loading a vCPU after .vm_destroy was called") partially fixed the issue, but for unknown reasons only moved the MMU unloading before VM destruction. Complete the change, and free all vCPU state prior to destroying VM state, as nVMX accesses even more state than nSVM.

In addition to the AVIC, KVM can hit a use-after-free on MSR filters:

  kvm_msr_allowed+0x4c/0xd0
  __kvm_set_msr+0x12d/0x1e0
  kvm_set_msr+0x19/0x40
  load_vmcs12_host_state+0x2d8/0x6e0 [kvm_intel]
  nested_vmx_vmexit+0x715/0xbd0 [kvm_intel]
  nested_vmx_free_vcpu+0x33/0x50 [kvm_intel]
  vmx_free_vcpu+0x54/0xc0 [kvm_intel]
  kvm_arch_vcpu_destroy+0x28/0xf0
  kvm_vcpu_destroy+0x12/0x50
  kvm_arch_destroy_vm+0x12c/0x1c0
  kvm_put_kvm+0x263/0x3c0
  kvm_vm_release+0x21/0x30

and an upcoming fix to process injectable interrupts on nested VM-Exit will access the PIC:

  BUG: kernel NULL pointer dereference, address: 0000000000000090
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  CPU: 23 UID: 1000 PID: 2658 Comm: kvm-nx-lpage-re
  RIP: 0010:kvm_cpu_has_extint+0x2f/0x60 [kvm]
  Call Trace:
   <TASK>
   kvm_cpu_has_injectable_intr+0xe/0x60 [kvm]
   nested_vmx_vmexit+0x2d7/0xdf0 [kvm_intel]
   nested_vmx_free_vcpu+0x40/0x50 [kvm_intel]
   vmx_vcpu_free+0x2d/0x80 [kvm_intel]
   kvm_arch_vcpu_destroy+0x2d/0x130 [kvm]
   kvm_destroy_vcpus+0x8a/0x100 [kvm]
   kvm_arch_destroy_vm+0xa7/0x1d0 [kvm]
   kvm_destroy_vm+0x172/0x300 [kvm]
   kvm_vcpu_release+0x31/0x50 [kvm]

Inarguably, both nSVM and nVMX need to be fixed, but punt on those cleanups for the moment. Conceptually, vCPUs should be freed before VM state. Assets like the I/O APIC and PIC _must_ be allocated before vCPUs are created, so it stands to reason that they must be freed _after_ vCPUs are destroyed.

Reported-by: Aaron Lewis <aaronlewis@google.com>
Closes: https://lore.kernel.org/all/20240703175618.2304869-2-aaronlewis@google.com
Cc: Jim Mattson <jmattson@google.com>
Cc: Yan Zhao <yan.y.zhao@intel.com>
Cc: Rick P Edgecombe <rick.p.edgecombe@intel.com>
Cc: Kai Huang <kai.huang@intel.com>
Cc: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250224235542.2562848-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-26  nfsd: drop fh_update() from S_IFDIR branch of nfsd_create_locked()  (NeilBrown)
nfsd_create_locked() doesn't need to explicitly call fh_update(). On success (which is the only time that fh_update() matters at all), nfsd_create_setattr() will be called and it will call fh_update(). This extra call is not harmful, but is not necessary. Signed-off-by: NeilBrown <neilb@suse.de> Link: https://lore.kernel.org/r/20250226062135.2043651-3-neilb@suse.de Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  nfs/vfs: discard d_exact_alias()  (NeilBrown)

d_exact_alias() is a descendant of d_add_unique(), which was introduced 20 years ago, most likely to work around problems with NFS servers of the time. It is now not used in several situations where it was originally needed, and there have been no reports of problems - presumably the old NFS servers have been improved.

The only place it is now used is in NFSv4 code, and the old problematic servers are thought to have been v2/v3 only.

There is no clear benefit in reusing an unhashed dentry which happens to have the same name as the dentry we are adding. So this patch removes d_exact_alias() and the one place that it is used.

Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Link: https://lore.kernel.org/r/20250226062135.2043651-2-neilb@suse.de
Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  tcp: Defer ts_recent changes until req is owned  (Wang Hai)

Recently a bug was discovered where the server had entered TCP_ESTABLISHED state, but the upper layers were not notified.

Packets for the same 5-tuple may be processed by different CPUs, so two CPUs may receive different ack packets at the same time while the state is TCP_NEW_SYN_RECV. In that case, req->ts_recent in tcp_check_req() may be changed concurrently, which will probably cause the newsk's ts_recent to be incorrectly large, so tcp_validate_incoming() will fail. At this point, newsk will not be able to enter the TCP_ESTABLISHED state.

  cpu1                                         cpu2
  tcp_check_req
                                               tcp_check_req
    req->ts_recent = rcv_tsval = t1
                                                 req->ts_recent = rcv_tsval = t2
  syn_recv_sock
    tcp_sk(child)->rx_opt.ts_recent
      = req->ts_recent = t2       // t1 < t2
  tcp_child_process
    tcp_rcv_state_process
      tcp_validate_incoming
        tcp_paws_check
          if ((s32)(rx_opt->ts_recent - rx_opt->rcv_tsval) <= paws_win)
            // t2 - t1 > paws_win, failed
                                               tcp_v4_do_rcv
                                                 tcp_rcv_state_process
                                                 // TCP_ESTABLISHED

The cpu2's skb, or a newly received skb, will call tcp_v4_do_rcv() to get the newsk into the TCP_ESTABLISHED state, but at this point it is no longer possible to notify the upper layer application. A notification mechanism could be added here, but that fix would be more complex, so the current fix is used.

In tcp_check_req(), req->ts_recent is used to assign a value to tcp_sk(child)->rx_opt.ts_recent, so removing the change of req->ts_recent and changing tcp_sk(child)->rx_opt.ts_recent directly after owning the req fixes this bug.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
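The shape of the fix, per the description above (a hedged sketch; variable names approximate those in tcp_check_req(), and the exact placement in the function may differ): defer the update to the child socket until the request is exclusively owned, instead of writing the shared req->ts_recent from concurrent CPUs:

  /* after ownership of the req is established */
  if (own_req && tmp_opt.saw_tstamp)
          tcp_sk(child)->rx_opt.ts_recent = tmp_opt.rcv_tsval;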
2025-02-26  Merge patch series "iomap: incremental advance conversion -- phase 2"  (Christian Brauner)

Brian Foster <bfoster@redhat.com> says:

Here's phase 2 of the incremental iter advance conversions. This updates all remaining iomap operations to advance the iter within the operation and thus removes the need to advance from the core iomap iterator. Once all operations are switched over, the core advance code is removed and the processed field is renamed to reflect that it is now a pure status code.

For context, this was first introduced in a previous series [1] that focused mainly on the core mechanism and iomap buffered write. This is because the original impetus was to facilitate a folio batch mechanism, where a filesystem can optionally provide a batch of folios to process for a given mapping (i.e. zero range of an unwritten mapping with dirty folios in pagecache). That is still WIP, but the broader point is that this was originally intended as an optional mode, until the consensus that fell out of discussion was that it would be preferable to convert everything over. This presumably facilitates some other future work and simplifies semantics in the core iteration code.

Patches 1-3 convert over iomap buffered read, direct I/O and various other remaining ops (swap, etc.). Patches 4-9 convert over the various DAX iomap operations. Finally, patches 10-12 introduce some cleanups now that all iomap operations have updated iteration semantics.

* patches from https://lore.kernel.org/r/20250224144757.237706-1-bfoster@redhat.com:
  iomap: introduce a full map advance helper
  iomap: rename iomap_iter processed field to status
  iomap: remove unnecessary advance from iomap_iter()
  dax: advance the iomap_iter on pte and pmd faults
  dax: advance the iomap_iter on dedupe range
  dax: advance the iomap_iter on unshare range
  dax: advance the iomap_iter on zero range
  dax: push advance down into dax_iomap_iter() for read and write
  dax: advance the iomap_iter in the read/write path
  iomap: convert misc simple ops to incremental advance
  iomap: advance the iter on direct I/O
  iomap: advance the iter directly on buffered read

Link: https://lore.kernel.org/r/20250224144757.237706-1-bfoster@redhat.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  iomap: introduce a full map advance helper  (Brian Foster)
Various iomap_iter_advance() calls advance by the full mapping length and thus have no need for the current length input or post-advance remaining length output from the standard advance function. Add an iomap_iter_advance_full() helper to clean up these cases. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-13-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
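One plausible shape for the helper, assuming the by-reference length signature that iomap_iter_advance() gained in phase 1 (a sketch, not the patch hunk):

  static inline int iomap_iter_advance_full(struct iomap_iter *iter)
  {
          u64 length = iomap_length(iter);

          return iomap_iter_advance(iter, &length);
  }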
2025-02-26  iomap: rename iomap_iter processed field to status  (Brian Foster)
The iter.processed field name is no longer appropriate now that iomap operations do not return the number of bytes processed. Rename the field to iter.status to reflect that a success or error code is expected. Also change the type to int as there is no longer a need for an s64. This reduces the size of iomap_iter by 8 bytes due to a combination of smaller type and reduction in structure padding. While here, fix up the return types of various _iter() helpers to reflect the type change. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-12-bfoster@redhat.com Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
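The gist of the structure change, per the description above (surrounding fields elided):

  struct iomap_iter {
          /* ... */
          int status;     /* 0 or -errno; formerly "s64 processed" (byte count) */
          /* ... */
  };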
2025-02-26  iomap: remove unnecessary advance from iomap_iter()  (Brian Foster)
At this point, all iomap operations have been updated to advance the iomap_iter directly before returning to iomap_iter(). Therefore, the complexity of handling both the old and new semantics is no longer required and can be removed from iomap_iter(). Update iomap_iter() to expect success or failure status in iter.processed. As a precaution and developer hint to prevent inadvertent use of old semantics, warn on a positive return code and fail the operation. Remove the unnecessary advance and simplify the termination logic. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-11-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  dax: advance the iomap_iter on pte and pmd faults  (Brian Foster)
Advance the iomap_iter on PTE and PMD faults. Each of these operations assign a hardcoded size to iter.processed. Replace those with an advance and status return. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-10-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  dax: advance the iomap_iter on dedupe range  (Brian Foster)
Advance the iter on successful dedupe. Dedupe range uses two iters and iterates so long as both have outstanding work, so correspondingly this needs to advance both on each iteration. Since dax_range_compare_iter() now returns status instead of a byte count, update the variable name in the caller as well. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-9-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  dax: advance the iomap_iter on unshare range  (Brian Foster)
Advance the iter and return 0 or an error code for success or failure. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-8-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  dax: advance the iomap_iter on zero range  (Brian Foster)
Update the DAX zero range iomap iter handler to advance the iter directly. Advance by the full length in the hole/unwritten case, or otherwise advance incrementally in the zeroing loop. In either case, return 0 or an error code for success or failure. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-7-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  dax: push advance down into dax_iomap_iter() for read and write  (Brian Foster)
DAX read and write currently advances the iter after the dax_iomap_iter() returns the number of bytes processed rather than internally within the iter handler itself, as most other iomap operations do. Push the advance down into dax_iomap_iter() and update the function to return op status instead of bytes processed. dax_iomap_iter() shortcuts reads from a hole or unwritten mapping by directly zeroing the iov_iter, so advance the iomap_iter similarly in that case. The DAX processing loop can operate on a range slightly different than defined by the iomap_iter depending on circumstances. For example, a read may be truncated by inode size, a read or write range can be increased due to page alignment, etc. Therefore, this patch aims to retain as much of the existing logic as possible. The loop control logic remains pos based, but is sampled from the iomap_iter on each iteration after the advance instead of being updated manually. Similarly, length is updated based on the output of the advance instead of being updated manually. The advance itself is based on the number of bytes transferred, which was previously used to update the local copies of pos and length. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-6-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  dax: advance the iomap_iter in the read/write path  (Brian Foster)
DAX reads and writes flow through dax_iomap_iter(), which has one or more subtleties in terms of how it processes a range vs. what is specified in the iomap_iter. To keep things simple and remove the dependency on iomap_iter() advances, convert a positive return from dax_iomap_iter() to the new advance and status return semantics. The advance can be pushed further down in future patches. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-5-bfoster@redhat.com Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  iomap: convert misc simple ops to incremental advance  (Brian Foster)
Update several of the remaining iomap operations to advance the iter directly rather than via return value. This includes page faults, fiemap, seek data/hole and swapfile activation. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-4-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  iomap: advance the iter on direct I/O  (Brian Foster)
Update iomap direct I/O to advance the iter directly rather than via iter.processed. Update each mapping type helper to advance based on the amount of data processed and return success or failure. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-3-bfoster@redhat.com Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  iomap: advance the iter directly on buffered read  (Brian Foster)
iomap buffered read advances the iter via iter.processed. To continue separating iter advance from return status, update iomap_readpage_iter() to advance the iter instead of returning the number of bytes processed. In turn, drop the offset parameter and sample the updated iter->pos at the start of the function. Update the callers to loop based on remaining length in the current iteration instead of number of bytes processed. Signed-off-by: Brian Foster <bfoster@redhat.com> Link: https://lore.kernel.org/r/20250224144757.237706-2-bfoster@redhat.com Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: "Darrick J. Wong" <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-02-26  btrfs: replace deprecated strncpy() with strscpy()  (Thorsten Blum)
strncpy() is deprecated for NUL-terminated destination buffers. Use strscpy() instead and don't zero-initialize the param array. Link: https://github.com/KSPP/linux/issues/90 Cc: linux-hardening@vger.kernel.org Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
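As a generic illustration of this class of conversion (not the btrfs hunk itself; 'param' and 'src' are placeholder names):

  /* before: zero-init plus a bounded copy that may not NUL-terminate */
  char param[128] = { 0 };
  strncpy(param, src, sizeof(param) - 1);

  /* after: strscpy() always NUL-terminates, so no pre-zeroing is needed */
  char param[128];
  strscpy(param, src, sizeof(param));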
2025-02-26  btrfs: zoned: fix extent range end unlock in cow_file_range()  (Naohiro Aota)

Running generic/751 on the for-next branch often results in a hang like below. Both tasks are stuck locking an extent. This suggests someone forgot to unlock an extent.

  INFO: task kworker/u128:1:12 blocked for more than 323 seconds.
        Not tainted 6.13.0-BTRFS-ZNS+ #503
  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  task:kworker/u128:1  state:D stack:0  pid:12  tgid:12  ppid:2  flags:0x00004000
  Workqueue: btrfs-fixup btrfs_work_helper [btrfs]
  Call Trace:
   <TASK>
   __schedule+0x534/0xdd0
   schedule+0x39/0x140
   __lock_extent+0x31b/0x380 [btrfs]
   ? __pfx_autoremove_wake_function+0x10/0x10
   btrfs_writepage_fixup_worker+0xf1/0x3a0 [btrfs]
   btrfs_work_helper+0xff/0x480 [btrfs]
   ? lock_release+0x178/0x2c0
   process_one_work+0x1ee/0x570
   ? srso_return_thunk+0x5/0x5f
   worker_thread+0x1d1/0x3b0
   ? __pfx_worker_thread+0x10/0x10
   kthread+0x10b/0x230
   ? __pfx_kthread+0x10/0x10
   ret_from_fork+0x30/0x50
   ? __pfx_kthread+0x10/0x10
   ret_from_fork_asm+0x1a/0x30
   </TASK>

  INFO: task kworker/u134:0:184 blocked for more than 323 seconds.
        Not tainted 6.13.0-BTRFS-ZNS+ #503
  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  task:kworker/u134:0  state:D stack:0  pid:184  tgid:184  ppid:2  flags:0x00004000
  Workqueue: writeback wb_workfn (flush-btrfs-4)
  Call Trace:
   <TASK>
   __schedule+0x534/0xdd0
   schedule+0x39/0x140
   __lock_extent+0x31b/0x380 [btrfs]
   ? __pfx_autoremove_wake_function+0x10/0x10
   find_lock_delalloc_range+0xdb/0x260 [btrfs]
   writepage_delalloc+0x12f/0x500 [btrfs]
   ? srso_return_thunk+0x5/0x5f
   extent_write_cache_pages+0x232/0x840 [btrfs]
   btrfs_writepages+0x72/0x130 [btrfs]
   do_writepages+0xe7/0x260
   ? srso_return_thunk+0x5/0x5f
   ? lock_acquire+0xd2/0x300
   ? srso_return_thunk+0x5/0x5f
   ? find_held_lock+0x2b/0x80
   ? wbc_attach_and_unlock_inode.part.0+0x102/0x250
   ? wbc_attach_and_unlock_inode.part.0+0x102/0x250
   __writeback_single_inode+0x5c/0x4b0
   writeback_sb_inodes+0x22d/0x550
   __writeback_inodes_wb+0x4c/0xe0
   wb_writeback+0x2f6/0x3f0
   wb_workfn+0x32a/0x510
   process_one_work+0x1ee/0x570
   ? srso_return_thunk+0x5/0x5f
   worker_thread+0x1d1/0x3b0
   ? __pfx_worker_thread+0x10/0x10
   kthread+0x10b/0x230
   ? __pfx_kthread+0x10/0x10
   ret_from_fork+0x30/0x50
   ? __pfx_kthread+0x10/0x10
   ret_from_fork_asm+0x1a/0x30
   </TASK>

This happens because we have another success path for the zoned mode. When there is no active zone available, btrfs_reserve_extent() returns -EAGAIN. In this case, we have two reactions.

(1) If the given range is never allocated, we can only wait for someone to finish a zone, so wait on the BTRFS_FS_NEED_ZONE_FINISH bit and retry afterward.

(2) Or, if some allocations are already done, we must bail out and let the caller send IOs for the allocation. This is because these IOs may be necessary to finish a zone.

Commit 06f364284794 ("btrfs: do proper folio cleanup when cow_file_range() failed") moved the unlock code from the inside of the loop to the outside. So, previously, the allocated extents were unlocked just after the allocation, and so before returning from the function. However, they are no longer unlocked in case (2) above. That caused the hang issue.

Fix the issue by setting 'end' to the end of the allocated range. Then, we can exit the loop and the same unlock code can properly handle the case.

Reported-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Tested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Fixes: 06f364284794 ("btrfs: do proper folio cleanup when cow_file_range() failed")
CC: stable@vger.kernel.org
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2025-02-25  Merge tag 'powerpc-6.14-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)

Pull powerpc fix from Madhavan Srinivasan:

 - Fix for cross-reference in documentation and deprecation warning

Thanks to Andrew Donnellan and Bagas Sanjaya.

* tag 'powerpc-6.14-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  cxl: Fix cross-reference in documentation and add deprecation warning
2025-02-25  Merge branch 'net-enetc-fix-some-known-issues'  (Jakub Kicinski)

Wei Fang says:

====================
net: enetc: fix some known issues

There are some issues with the enetc driver, some of which are specific to the LS1028A platform, and some of which were introduced recently when i.MX95 ENETC support was added, so this patch set aims to clean up those issues.

v1: https://lore.kernel.org/20250217093906.506214-1-wei.fang@nxp.com
v2: https://lore.kernel.org/20250219054247.733243-1-wei.fang@nxp.com
====================

Link: https://patch.msgid.link/20250224111251.1061098-1-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-25  net: enetc: fix the off-by-one issue in enetc_map_tx_tso_buffs()  (Wei Fang)

There is an off-by-one issue in the err_chained_bd path: it frees one more tx_swbd than expected. But there is no such issue in the err_map_data path. To fix this off-by-one issue and make the two error paths consistent, keep the increments of 'i' and 'count' in sync and call enetc_unwind_tx_frame() for error handling.

Fixes: fb8629e2cbfc ("net: enetc: add support for software TSO")
Cc: stable@vger.kernel.org
Suggested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Link: https://patch.msgid.link/20250224111251.1061098-9-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-25  net: enetc: remove the mm_lock from the ENETC v4 driver  (Wei Fang)

The upstream ENETC v4 driver does not yet support the MAC merge layer, so mm_lock is neither initialized nor used. Remove it from the driver.

Fixes: 99100d0d9922 ("net: enetc: add preliminary support for i.MX95 ENETC PF")
Cc: stable@vger.kernel.org
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://patch.msgid.link/20250224111251.1061098-8-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-25  net: enetc: add missing enetc4_link_deinit()  (Wei Fang)

enetc4_link_init() is called when the PF driver probes, to create the phylink and MDIO bus, but we forgot to call enetc4_link_deinit() to free the phylink and MDIO bus when the driver is unbound. So add the missing enetc4_link_deinit() call to enetc4_pf_netdev_destroy().

Fixes: 99100d0d9922 ("net: enetc: add preliminary support for i.MX95 ENETC PF")
Cc: stable@vger.kernel.org
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://patch.msgid.link/20250224111251.1061098-7-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-25  net: enetc: update UDP checksum when updating originTimestamp field  (Wei Fang)

There is an issue with one-step timestamping over UDP/IP: the peer discards the sync packet because of a wrong UDP checksum. For ENETC v1, the software needs to update the UDP checksum when updating the originTimestamp field, so that the hardware can correctly update the UDP checksum when it updates the correction field. Otherwise, the UDP checksum in the sync packet will be wrong.

Fixes: 7294380c5211 ("enetc: support PTP Sync packet one-step timestamping")
Cc: stable@vger.kernel.org
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://patch.msgid.link/20250224111251.1061098-6-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
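Conceptually, the software-side fix amounts to an RFC 1624-style incremental checksum update over the rewritten originTimestamp bytes. A hedged sketch using a standard kernel helper (the driver's actual code may differ; 'uh' is the packet's UDP header, and the old/new words are the 32-bit chunks being rewritten):

  /* fold each old->new word delta into the existing UDP checksum */
  csum_replace4(&uh->check, old_ts_word, new_ts_word);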
2025-02-25  net: enetc: VFs do not support HWTSTAMP_TX_ONESTEP_SYNC  (Wei Fang)

ENETC VFs do not actually support HWTSTAMP_TX_ONESTEP_SYNC, because only the ENETC PF can access the PMa_SINGLE_STEP registers. And there will be a crash if VFs are used to test one-step timestamping; the crash log is as follows.

  [  129.110909] Unable to handle kernel paging request at virtual address 00000000000080c0
  [  129.287769] Call trace:
  [  129.290219]  enetc_port_mac_wr+0x30/0xec (P)
  [  129.294504]  enetc_start_xmit+0xda4/0xe74
  [  129.298525]  enetc_xmit+0x70/0xec
  [  129.301848]  dev_hard_start_xmit+0x98/0x118

Fixes: 41514737ecaa ("enetc: add get_ts_info interface for ethtool")
Cc: stable@vger.kernel.org
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://patch.msgid.link/20250224111251.1061098-5-wei.fang@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-02-25  net: enetc: correct the xdp_tx statistics  (Wei Fang)
The 'xdp_tx' is used to count the number of XDP_TX frames sent, not the number of Tx BDs. Fixes: 7ed2bc80074e ("net: enetc: add support for XDP_TX") Cc: stable@vger.kernel.org Signed-off-by: Wei Fang <wei.fang@nxp.com> Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20250224111251.1061098-4-wei.fang@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
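Illustratively, the distinction is the following (a sketch, not the patch hunk; field names approximate the driver's stats):

  /* count once per successfully transmitted XDP_TX frame ... */
  rx_ring->stats.xdp_tx++;

  /* ... not once per Tx BD that the frame was mapped to:
   * rx_ring->stats.xdp_tx += tx_bd_count;  <- wrong
   */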