2024-04-12x86/sev: Take NUMA node into account when allocating memory for per-CPU SEV dataLi RongQing
per-CPU SEV data is dominantly accessed from their own local CPUs, so allocate them node-local to improve performance. Signed-off-by: Li RongQing <lirongqing@baidu.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Nikunj A Dadhania <nikunj@amd.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Link: https://lore.kernel.org/r/20240412030130.49704-1-lirongqing@baidu.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
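A minimal sketch of the allocation pattern described above, assuming a hypothetical per-CPU SEV structure; kzalloc_node() and cpu_to_node() are the real kernel APIs, the function and struct names are illustrative:

	#include <linux/slab.h>
	#include <linux/topology.h>

	/* Allocate the per-CPU data on the CPU's own NUMA node, since it is
	 * dominantly accessed by that CPU.  'struct sev_percpu_data' stands
	 * in for the real per-CPU SEV data structure.
	 */
	static struct sev_percpu_data *alloc_sev_data_for_cpu(int cpu)
	{
		return kzalloc_node(sizeof(struct sev_percpu_data), GFP_KERNEL,
				    cpu_to_node(cpu));
	}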
2024-04-12Merge branch 'x86/urgent' into x86/cpu, to resolve conflictIngo Molnar
There's a new conflict between this commit pending in x86/cpu: 63edbaa48a57 x86/cpu/topology: Add support for the AMD 0x80000026 leaf And these fixes in x86/urgent: c064b536a8f9 x86/cpu/amd: Make the NODEID_MSR union actually work 1b3108f6898e x86/cpu/amd: Make the CPUID 0x80000008 parser correct Resolve them. Conflicts: arch/x86/kernel/cpu/topology_amd.c Signed-off-by: Ingo Molnar <mingo@kernel.org>
2024-04-12iommu/vt-d: Fix WARN_ON in iommu probe pathLu Baolu
Commit 1a75cc710b95 ("iommu/vt-d: Use rbtree to track iommu probed devices") adds all devices probed by the iommu driver in a rbtree indexed by the source ID of each device. It assumes that each device has a unique source ID. This assumption is incorrect and the VT-d spec doesn't state this requirement either. The reason for using a rbtree to track devices is to look up the device with PCI bus and devfunc in the paths of handling ATS invalidation time out error and the PRI I/O page faults. Both are PCI ATS feature related. Only track the devices that have PCI ATS capabilities in the rbtree to avoid unnecessary WARN_ON in the iommu probe path. Otherwise, on some platforms below kernel splat will be displayed and the iommu probe results in failure. WARNING: CPU: 3 PID: 166 at drivers/iommu/intel/iommu.c:158 intel_iommu_probe_device+0x319/0xd90 Call Trace: <TASK> ? __warn+0x7e/0x180 ? intel_iommu_probe_device+0x319/0xd90 ? report_bug+0x1f8/0x200 ? handle_bug+0x3c/0x70 ? exc_invalid_op+0x18/0x70 ? asm_exc_invalid_op+0x1a/0x20 ? intel_iommu_probe_device+0x319/0xd90 ? debug_mutex_init+0x37/0x50 __iommu_probe_device+0xf2/0x4f0 iommu_probe_device+0x22/0x70 iommu_bus_notifier+0x1e/0x40 notifier_call_chain+0x46/0x150 blocking_notifier_call_chain+0x42/0x60 bus_notify+0x2f/0x50 device_add+0x5ed/0x7e0 platform_device_add+0xf5/0x240 mfd_add_devices+0x3f9/0x500 ? preempt_count_add+0x4c/0xa0 ? up_write+0xa2/0x1b0 ? __debugfs_create_file+0xe3/0x150 intel_lpss_probe+0x49f/0x5b0 ? pci_conf1_write+0xa3/0xf0 intel_lpss_pci_probe+0xcf/0x110 [intel_lpss_pci] pci_device_probe+0x95/0x120 really_probe+0xd9/0x370 ? __pfx___driver_attach+0x10/0x10 __driver_probe_device+0x73/0x150 driver_probe_device+0x19/0xa0 __driver_attach+0xb6/0x180 ? __pfx___driver_attach+0x10/0x10 bus_for_each_dev+0x77/0xd0 bus_add_driver+0x114/0x210 driver_register+0x5b/0x110 ? __pfx_intel_lpss_pci_driver_init+0x10/0x10 [intel_lpss_pci] do_one_initcall+0x57/0x2b0 ? kmalloc_trace+0x21e/0x280 ? do_init_module+0x1e/0x210 do_init_module+0x5f/0x210 load_module+0x1d37/0x1fc0 ? init_module_from_file+0x86/0xd0 init_module_from_file+0x86/0xd0 idempotent_init_module+0x17c/0x230 __x64_sys_finit_module+0x56/0xb0 do_syscall_64+0x6e/0x140 entry_SYSCALL_64_after_hwframe+0x71/0x79 Fixes: 1a75cc710b95 ("iommu/vt-d: Use rbtree to track iommu probed devices") Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/10689 Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20240407011429.136282-1-baolu.lu@linux.intel.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
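A hedged sketch of the guard described above; dev_is_pci() and pci_ats_supported() are real kernel helpers, while the wrapper name is purely illustrative:

	#include <linux/pci.h>
	#include <linux/pci-ats.h>

	/* Only ATS-capable PCI devices ever need the source-ID lookup
	 * (ATS invalidation timeouts, PRI page faults), so only those are
	 * worth tracking in the rbtree.
	 */
	static bool needs_source_id_tracking(struct device *dev)
	{
		return dev_is_pci(dev) && pci_ats_supported(to_pci_dev(dev));
	}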
2024-04-12iommu/vt-d: Allocate local memory for page request queueJacob Pan
The page request queue is per IOMMU, its allocation should be made NUMA-aware for performance reasons. Fixes: a222a7f0bb6c ("iommu/vt-d: Implement page request handling") Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240403214007.985600-1-jacob.jun.pan@linux.intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
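A sketch of the NUMA-aware variant of such an allocation; alloc_pages_node() and page_address() are the real page allocator APIs, the wrapper and parameter names are illustrative:

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Back the per-IOMMU page request queue with pages from the IOMMU's
	 * own NUMA node rather than whichever node the CPU doing the setup
	 * happens to run on.  'order' is the queue size in page order.
	 */
	static void *alloc_prq_on_node(int iommu_node, unsigned int order)
	{
		struct page *pages;

		pages = alloc_pages_node(iommu_node, GFP_KERNEL | __GFP_ZERO, order);
		return pages ? page_address(pages) : NULL;
	}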
2024-04-12iommu/vt-d: Fix wrong use of pasid configXuchun Shang
The commit "iommu/vt-d: Add IOMMU perfmon support" introduce IOMMU PMU feature, but use the wrong config when set pasid filter. Fixes: 7232ab8b89e9 ("iommu/vt-d: Add IOMMU perfmon support") Signed-off-by: Xuchun Shang <xuchun.shang@linux.alibaba.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Link: https://lore.kernel.org/r/20240401060753.3321318-1-xuchun.shang@linux.alibaba.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-04-12x86/cpu/amd: Move TOPOEXT enablement into the topology parserThomas Gleixner
The topology rework missed that early_init_amd() tries to re-enable the Topology Extensions when the BIOS disabled them. The new parser is invoked before early_init_amd() so the re-enable attempt happens too late. Move it into the AMD specific topology parser code where it belongs. Fixes: f7fb3b2dd92c ("x86/cpu: Provide an AMD/HYGON specific topology parser") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/878r1j260l.ffs@tglx
2024-04-12x86/cpu/amd: Make the NODEID_MSR union actually workThomas Gleixner
A system with NODEID_MSR was reported to crash during early boot without any output. The reason is that the union which is used for accessing the bitfields in the MSR is written wrongly and the resulting executable code accesses the wrong part of the MSR data. As a consequence a later division by that value results in 0 and that result is used for another division as divisor, which obviously does not work well. The magic world of C, unions and bitfields: union { u64 bita : 3, bitb : 3; u64 all; } x; x.all = foo(); a = x.bita; b = x.bitb; results in the effective executable code of: a = b = x.bita; because bita and bitb are treated as union members and therefore both end up at bit offset 0. Wrapping the bitfield into an anonymous struct: union { struct { u64 bita : 3, bitb : 3; }; u64 all; } x; works like expected. Rework the NODEID_MSR union in exactly that way to cure the problem. Fixes: f7fb3b2dd92c ("x86/cpu: Provide an AMD/HYGON specific topology parser") Reported-by: "kernelci.org bot" <bot@kernelci.org> Reported-by: Laura Nao <laura.nao@collabora.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Laura Nao <laura.nao@collabora.com> Link: https://lore.kernel.org/r/20240410194311.596282919@linutronix.de Closes: https://lore.kernel.org/all/20240322175210.124416-1-laura.nao@collabora.com/
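The pitfall is easy to reproduce in a standalone C program; the names below are taken from the example in the message, not from the kernel:

	#include <stdio.h>
	#include <stdint.h>

	/* Both bitfields are direct union members, so both live at bit 0. */
	union broken {
		uint64_t bita : 3,
			 bitb : 3;
		uint64_t all;
	};

	/* Wrapping them in an anonymous struct lays them out sequentially. */
	union fixed {
		struct {
			uint64_t bita : 3,
				 bitb : 3;
		};
		uint64_t all;
	};

	int main(void)
	{
		union broken b = { .all = 0x2b };	/* 0b101011 */
		union fixed  f = { .all = 0x2b };

		/* prints bita=3 bitb=3: both fields read bits 0-2 */
		printf("broken: bita=%llu bitb=%llu\n",
		       (unsigned long long)b.bita, (unsigned long long)b.bitb);
		/* prints bita=3 bitb=5: bitb now reads bits 3-5 */
		printf("fixed:  bita=%llu bitb=%llu\n",
		       (unsigned long long)f.bita, (unsigned long long)f.bitb);
		return 0;
	}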
2024-04-12x86/cpu/amd: Make the CPUID 0x80000008 parser correctThomas Gleixner
CPUID 0x80000008 ECX.cpu_nthreads describes the number of threads in the package. The parser uses this value to initialize the SMT domain level. That's wrong because cpu_nthreads does not describe the number of threads per physical core. So this needs to set the CORE domain level and let the later parsers set the SMT shift if available. Preset the SMT domain level with the assumption of one thread per core, which is correct if there are no other CPUID leafs to parse, and propagate cpu_nthreads and the core level APIC bitwidth into the CORE domain. Fixes: f7fb3b2dd92c ("x86/cpu: Provide an AMD/HYGON specific topology parser") Reported-by: "kernelci.org bot" <bot@kernelci.org> Reported-by: Laura Nao <laura.nao@collabora.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Laura Nao <laura.nao@collabora.com> Link: https://lore.kernel.org/r/20240410194311.535206450@linutronix.de
2024-04-12x86/bugs: Replace CONFIG_SPECTRE_BHI_{ON,OFF} with CONFIG_MITIGATION_SPECTRE_BHIJosh Poimboeuf
For consistency with the other CONFIG_MITIGATION_* options, replace the CONFIG_SPECTRE_BHI_{ON,OFF} options with a single CONFIG_MITIGATION_SPECTRE_BHI option. [ mingo: Fix ] Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Sean Christopherson <seanjc@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nikolay Borisov <nik.borisov@suse.com> Link: https://lore.kernel.org/r/3833812ea63e7fdbe36bf8b932e63f70d18e2a2a.1712813475.git.jpoimboe@kernel.org
2024-04-12x86/bugs: Remove CONFIG_BHI_MITIGATION_AUTO and spectre_bhi=autoJosh Poimboeuf
Unlike most other mitigations' "auto" options, spectre_bhi=auto only mitigates newer systems, which is confusing and not particularly useful. Remove it. Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com> Cc: Sean Christopherson <seanjc@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/412e9dc87971b622bbbaf64740ebc1f140bff343.1712813475.git.jpoimboe@kernel.org
2024-04-12iommu: mtk: fix module autoloadingKrzysztof Kozlowski
Add MODULE_DEVICE_TABLE(), so modules could be properly autoloaded based on the alias from of_device_id table. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20240410164109.233308-1-krzk@kernel.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
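The general shape of the fix, with placeholder names rather than the actual mediatek entries:

	#include <linux/module.h>
	#include <linux/mod_devicetable.h>

	/* The OF match table already exists for probing; exporting it with
	 * MODULE_DEVICE_TABLE() additionally emits the MODALIAS aliases that
	 * let udev/modprobe autoload the module when a matching DT node
	 * shows up.
	 */
	static const struct of_device_id example_iommu_of_match[] = {
		{ .compatible = "vendor,example-iommu" },
		{ /* sentinel */ }
	};
	MODULE_DEVICE_TABLE(of, example_iommu_of_match);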
2024-04-12iommu/amd: Do not enable SNP when V2 page table is enabledVasant Hegde
DTE[Mode]=0 is not supported when SNP is enabled in the host. That means that to support SNP, the IOMMU must be configured with the V1 page table (see the IOMMU spec [1] for details). If the user passes a kernel command line option to configure IOMMU domains with the v2 page table (amd_iommu=pgtbl_v2), then disable SNP, honoring the user's request rather than forcing the page table back to v1. [1] https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/specifications/48882_IOMMU.pdf Cc: Ashish Kalra <ashish.kalra@amd.com> Cc: Michael Roth <michael.roth@amd.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Vasant Hegde <vasant.hegde@amd.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240410085702.31869-1-vasant.hegde@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-04-12iommu/amd: Fix possible irq lock inversion dependency issueVasant Hegde
LOCKDEP detector reported below warning: ---------------------------------------- [ 23.796949] ======================================================== [ 23.796950] WARNING: possible irq lock inversion dependency detected [ 23.796952] 6.8.0fix+ #811 Not tainted [ 23.796954] -------------------------------------------------------- [ 23.796954] kworker/0:1/8 just changed the state of lock: [ 23.796956] ff365325e084a9b8 (&domain->lock){..-.}-{3:3}, at: amd_iommu_flush_iotlb_all+0x1f/0x50 [ 23.796969] but this lock took another, SOFTIRQ-unsafe lock in the past: [ 23.796970] (pd_bitmap_lock){+.+.}-{3:3} [ 23.796972] and interrupts could create inverse lock ordering between them. [ 23.796973] other info that might help us debug this: [ 23.796974] Chain exists of: &domain->lock --> &dev_data->lock --> pd_bitmap_lock [ 23.796980] Possible interrupt unsafe locking scenario: [ 23.796981] CPU0 CPU1 [ 23.796982] ---- ---- [ 23.796983] lock(pd_bitmap_lock); [ 23.796985] local_irq_disable(); [ 23.796985] lock(&domain->lock); [ 23.796988] lock(&dev_data->lock); [ 23.796990] <Interrupt> [ 23.796991] lock(&domain->lock); Fix this issue by disabling interrupt when acquiring pd_bitmap_lock. Note that this is temporary fix. We have a plan to replace custom bitmap allocator with IDA allocator. Fixes: 87a6f1f22c97 ("iommu/amd: Introduce per-device domain ID to fix potential TLB aliasing issue") Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Signed-off-by: Vasant Hegde <vasant.hegde@amd.com> Link: https://lore.kernel.org/r/20240404102717.6705-1-vasant.hegde@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
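A sketch of the interim fix described above, with the allocator body elided and the function name illustrative:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(pd_bitmap_lock);

	static u16 alloc_pd_id(void)
	{
		unsigned long flags;
		u16 id;

		/* was: spin_lock(&pd_bitmap_lock); acquiring it with
		 * interrupts disabled removes the SOFTIRQ-unsafe link in the
		 * chain lockdep complained about */
		spin_lock_irqsave(&pd_bitmap_lock, flags);
		id = 0;		/* placeholder for the real bitmap search */
		spin_unlock_irqrestore(&pd_bitmap_lock, flags);

		return id;
	}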
2024-04-12perf/bpf: Change the !CONFIG_BPF_SYSCALL stubs to static inlinesIngo Molnar
Otherwise the compiler will be unhappy if they go unused, which they do on allnoconfigs. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Kyle Huey <me@kylehuey.com> Link: https://lore.kernel.org/r/ZhkE9F4dyfR2dH2D@gmail.com
2024-04-12selftest/bpf: Test a perf BPF program that suppresses side effectsKyle Huey
The test sets a hardware breakpoint and uses a BPF program to suppress the side effects of a perf event sample, including I/O availability signals, SIGTRAPs, and decrementing the event counter limit, if the IP matches the expected value. Then the function with the breakpoint is executed multiple times to test that all effects behave as expected. Signed-off-by: Kyle Huey <khuey@kylehuey.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Song Liu <song@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240412015019.7060-8-khuey@kylehuey.com
2024-04-12perf/bpf: Allow a BPF program to suppress all sample side effectsKyle Huey
Returning zero from a BPF program attached to a perf event already suppresses any data output. Return early from __perf_event_overflow() in this case so it will also suppress event_limit accounting, SIGTRAP generation, and F_ASYNC signalling. Signed-off-by: Kyle Huey <khuey@kylehuey.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Song Liu <song@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240412015019.7060-7-khuey@kylehuey.com
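For reference, a minimal libbpf-style program attached to a perf event whose return value drives this behavior; the program name is illustrative:

	#include <linux/bpf.h>
	#include <linux/bpf_perf_event.h>
	#include <bpf/bpf_helpers.h>

	SEC("perf_event")
	int suppress_sample(struct bpf_perf_event_data *ctx)
	{
		/* Returning 0 already suppressed the sample output; with this
		 * change it also skips event_limit accounting, SIGTRAP
		 * generation and F_ASYNC signalling.  Return 1 to let the
		 * overflow proceed normally. */
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";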
2024-04-12perf/bpf: Remove unneeded uses_default_overflow_handler()Kyle Huey
Now that struct perf_event's orig_overflow_handler is gone, there's no need for the functions and macros to support looking past overflow_handler to orig_overflow_handler. This patch is solely a refactoring and results in no behavior change. Signed-off-by: Kyle Huey <khuey@kylehuey.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Will Deacon <will@kernel.org> Acked-by: Song Liu <song@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240412015019.7060-6-khuey@kylehuey.com
2024-04-12perf/bpf: Call BPF handler directly, not through overflow machineryKyle Huey
To ultimately allow BPF programs attached to perf events to completely suppress all of the effects of a perf event overflow (rather than just the sample output, as they do today), call bpf_overflow_handler() from __perf_event_overflow() directly rather than modifying struct perf_event's overflow_handler. Return the BPF program's return value from bpf_overflow_handler() so that __perf_event_overflow() knows how to proceed. Remove the now unnecessary orig_overflow_handler from struct perf_event. This patch is solely a refactoring and results in no behavior change. Suggested-by: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Kyle Huey <khuey@kylehuey.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Song Liu <song@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240412015019.7060-5-khuey@kylehuey.com
2024-04-12perf/bpf: Remove #ifdef CONFIG_BPF_SYSCALL from struct perf_event membersKyle Huey
This will allow __perf_event_overflow() (which is independent of CONFIG_BPF_SYSCALL) to use struct perf_event's prog to decide whether to call bpf_overflow_handler(). Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Kyle Huey <khuey@kylehuey.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240412015019.7060-4-khuey@kylehuey.com
2024-04-12perf/bpf: Create bpf_overflow_handler() stub for !CONFIG_BPF_SYSCALLKyle Huey
This will allow __perf_event_overflow() (which is independent of CONFIG_BPF_SYSCALL) to call bpf_overflow_handler(). Signed-off-by: Kyle Huey <khuey@kylehuey.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240412015019.7060-3-khuey@kylehuey.com
2024-04-12perf/bpf: Reorder bpf_overflow_handler() ahead of __perf_event_overflow()Kyle Huey
This will allow __perf_event_overflow() to call bpf_overflow_handler(). Signed-off-by: Kyle Huey <khuey@kylehuey.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20240412015019.7060-2-khuey@kylehuey.com
2024-04-12locking/pvqspinlock/x86: Remove redundant CMP after CMPXCHG in __raw_callee_save___pv_queued_spin_unlock()Uros Bizjak
x86 CMPXCHG instruction returns success in the ZF flag. Remove redundant CMP instruction after CMPXCHG that performs the same check. Also update the function comment to mention the modern version of the equivalent C code. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20240412083908.282802-1-ubizjak@gmail.com
2024-04-12locking/pvqspinlock: Use try_cmpxchg() in qspinlock_paravirt.hUros Bizjak
Use try_cmpxchg(*ptr, &old, new) instead of cmpxchg(*ptr, old, new) == old in qspinlock_paravirt.h x86 CMPXCHG instruction returns success in ZF flag, so this change saves a compare after cmpxchg. No functional change intended. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Waiman Long <longman@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20240411192317.25432-2-ubizjak@gmail.com
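An illustrative before/after of the pattern, using generic atomics rather than the actual qspinlock code:

	#include <linux/atomic.h>

	/* old style: cmpxchg returns the previous value, requiring an
	 * explicit comparison that compiles to a CMP after the CMPXCHG */
	static bool set_if_equal_old(atomic_t *v, int old, int new)
	{
		return atomic_cmpxchg(v, old, new) == old;
	}

	/* new style: try_cmpxchg returns a bool and updates 'old' on failure,
	 * letting the compiler branch directly on the CMPXCHG's ZF result */
	static bool set_if_equal_new(atomic_t *v, int old, int new)
	{
		return atomic_try_cmpxchg(v, &old, new);
	}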
2024-04-12phy: phy-rockchip-samsung-hdptx: Select CONFIG_RATIONALCristian Ciocaltea
Ensure CONFIG_RATIONAL is selected in order to fix the following link error with some kernel configurations: drivers/phy/rockchip/phy-rockchip-samsung-hdptx.o: in function `rk_hdptx_ropll_tmds_cmn_config': phy-rockchip-samsung-hdptx.c:(.text+0x950): undefined reference to `rational_best_approximation' Fixes: 553be2830c5f ("phy: rockchip: Add Samsung HDMI/eDP Combo PHY driver") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202404090540.2l1TEkDF-lkp@intel.com/ Signed-off-by: Cristian Ciocaltea <cristian.ciocaltea@collabora.com> Reviewed-by: Heiko Stuebner <heiko@sntech.de> Link: https://lore.kernel.org/r/20240408222926.32708-1-cristian.ciocaltea@collabora.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2024-04-12locking/pvqspinlock: Use try_cmpxchg_acquire() in trylock_clear_pending()Uros Bizjak
Replace this pattern in trylock_clear_pending(): cmpxchg_acquire(*ptr, old, new) == old ... with the simpler and faster: try_cmpxchg_acquire(*ptr, &old, new) The x86 CMPXCHG instruction returns success in the ZF flag, so this change saves a compare after the CMPXCHG. Also change the return type of the function to bool and streamline the control flow in the _Q_PENDING_BITS == 8 variant a bit. No functional change intended. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Waiman Long <longman@redhat.com> Reviewed-by: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20240325140943.815051-1-ubizjak@gmail.com
2024-04-12ftrace: Choose RCU Tasks based on TASKS_RCU rather than PREEMPTIONPaul E. McKenney
The advent of CONFIG_PREEMPT_AUTO, AKA lazy preemption, will mean that even kernels built with CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY might see the occasional preemption, and that this preemption just might happen within a trampoline. Therefore, update ftrace_shutdown() to invoke synchronize_rcu_tasks() based on CONFIG_TASKS_RCU instead of CONFIG_PREEMPTION. [ paulmck: Apply Steven Rostedt feedback. ] Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Ankur Arora <ankur.a.arora@oracle.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <linux-trace-kernel@vger.kernel.org> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
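A sketch of the shape of the change; the helper name is illustrative, not the actual ftrace code:

	#include <linux/rcupdate.h>

	/* Wait for all tasks to leave any trampolines.  What matters is
	 * whether RCU Tasks is actually configured, not whether the kernel
	 * is nominally preemptible.
	 */
	static void wait_for_trampoline_exit(void)
	{
		/* was: if (IS_ENABLED(CONFIG_PREEMPTION)) */
		if (IS_ENABLED(CONFIG_TASKS_RCU))
			synchronize_rcu_tasks();
	}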
2024-04-12bpf: Choose RCU Tasks based on TASKS_RCU rather than PREEMPTIONPaul E. McKenney
The advent of CONFIG_PREEMPT_AUTO, AKA lazy preemption, will mean that even kernels built with CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY might see the occasional preemption, and that this preemption just might happen within a trampoline. Therefore, update bpf_tramp_image_put() to choose call_rcu_tasks() based on CONFIG_TASKS_RCU instead of CONFIG_PREEMPTION. This change might enable further simplifications, but the goal of this effort is to make the code safe, not necessarily optimal. Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: John Fastabend <john.fastabend@gmail.com> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Martin KaFai Lau <martin.lau@linux.dev> Cc: Song Liu <song@kernel.org> Cc: Yonghong Song <yonghong.song@linux.dev> Cc: KP Singh <kpsingh@kernel.org> Cc: Stanislav Fomichev <sdf@google.com> Cc: Hao Luo <haoluo@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Ankur Arora <ankur.a.arora@oracle.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <bpf@vger.kernel.org> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
2024-04-12crypto: x86/aes-xts - make non-AVX implementation use new glue codeEric Biggers
Make the non-AVX implementation of AES-XTS (xts-aes-aesni) use the new glue code that was introduced for the AVX implementations of AES-XTS. This reduces code size, and it improves the performance of xts-aes-aesni due to the optimization for messages that don't span page boundaries. This required moving the new glue functions higher up in the file and allowing the IV encryption function to be specified by the caller. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12X.509: Introduce scope-based x509_certificate allocationLukas Wunner
Add a DEFINE_FREE() clause for x509_certificate structs and use it in x509_cert_parse() and x509_key_preparse(). These are the only functions where scope-based x509_certificate allocation currently makes sense. A third user will be introduced with the forthcoming SPDM library (Security Protocol and Data Model) for PCI device authentication. Unlike most other DEFINE_FREE() clauses, this one checks for IS_ERR() instead of NULL before calling x509_free_certificate() at end of scope. That's because the "constructor" of x509_certificate structs, x509_cert_parse(), returns a valid pointer or an ERR_PTR(), but never NULL. Comparing the Assembler output before/after has shown they are identical, save for the fact that gcc-12 always generates two return paths when __cleanup() is used, one for the success case and one for the error case. In x509_cert_parse(), add a hint for the compiler that kzalloc() never returns an ERR_PTR(). Otherwise the compiler adds a gratuitous IS_ERR() check on return. Introduce an assume() macro for this which can be re-used elsewhere in the kernel to provide hints for the compiler. Suggested-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com> Link: https://lore.kernel.org/all/20231003153937.000034ca@Huawei.com/ Link: https://lwn.net/Articles/934679/ Signed-off-by: Lukas Wunner <lukas@wunner.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
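A hedged sketch of the two pieces described above; the exact macro bodies are an assumption based on the text, not copied from the tree:

	#include <linux/cleanup.h>
	#include <linux/err.h>

	/* Free at end of scope, but only for valid pointers: the constructor
	 * returns either a valid pointer or an ERR_PTR(), never NULL. */
	DEFINE_FREE(x509_free_certificate, struct x509_certificate *,
		    if (!IS_ERR(_T)) x509_free_certificate(_T))

	/* Hint for the compiler that a condition holds, e.g.
	 * assume(!IS_ERR(cert)) right after a kzalloc() so no gratuitous
	 * IS_ERR() check is emitted on the return path. */
	#define assume(cond)					\
		do {						\
			if (!(cond))				\
				__builtin_unreachable();	\
		} while (0)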
2024-04-12crypto: hisilicon/qm - Add the err memory release process to qm uninitChenghai Huang
When the qm uninit command is executed, the err data needs to be released to prevent memory leakage. The error information release operation and uacce_remove are integrated in qm_remove_uacce. So add qm_remove_uacce to qm uninit to avoid leaking the err memory. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: hisilicon/debugfs - Resolve the problem of applying for redundant space in sq dumpChenghai Huang
When dumping SQ, only the SQE corresponding to the ID needs to be dumped; there is no need to allocate memory for the entire SQE area, because excessive dump operations can lead to memory resource waste. Therefore, allocate the space corresponding to sqe_id separately to avoid wasting space. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: hisilicon/sec - Fix memory leak for sec resource releaseChenghai Huang
The AIV is one of the SEC resources. When releasing resources, the AIV resources need to be released at the same time; otherwise, memory leakage occurs. Add the AIV resource release to the SEC resource release function. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: hisilicon - Adjust debugfs creation and release orderChenghai Huang
There is a scenario where the file directory is created but the file memory is not set. In this case, if a user accesses the file, an error occurs. So during the creation process of debugfs, memory should be allocated first before creating the directory. In the release process, the directory should be deleted first before releasing the memory to avoid the situation where the memory does not exist when accessing the directory. In addition, the directory released by the debugfs is a global variable. When the debugfs of an accelerator fails to be initialized, releasing the directory of the global variable affects the debugfs initialization of other accelerators. The debugfs root directory released by debugfs init should be a member of qm, not a global variable. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
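A sketch of the ordering rule spelled out above, with illustrative names and a per-device (not global) debugfs root:

	#include <linux/debugfs.h>
	#include <linux/slab.h>

	struct example_qm {
		struct dentry *debug_root;	/* per-device member, not a global */
		void *debug_regs;
	};

	static int example_debugfs_init(struct example_qm *qm)
	{
		/* allocate the backing memory before exposing any files */
		qm->debug_regs = kzalloc(4096, GFP_KERNEL);
		if (!qm->debug_regs)
			return -ENOMEM;

		qm->debug_root = debugfs_create_dir("example", NULL);
		return 0;
	}

	static void example_debugfs_exit(struct example_qm *qm)
	{
		/* remove the directory first so nobody can reach the files,
		 * then free the memory behind them */
		debugfs_remove_recursive(qm->debug_root);
		kfree(qm->debug_regs);
	}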
2024-04-12crypto: hisilicon/qm - Add the default processing branchChenghai Huang
The cmd type can be extended. Currently, only four types of cmd can be processed. Therefore, add a default processing branch to intercept incorrect parameter input. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: hisilicon/debugfs - Fix the processing logic issue in the debugfs creationChenghai Huang
There is a scenario where the file directory is created but the file attribute is not set. In this case, if a user accesses the file, an error occurs. So adjust the processing logic during debugfs creation to prevent the file from being accessed before file attributes such as the index are set. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: hisilicon/sgl - Delete redundant parameter verificationChenghai Huang
The input parameter check in acc_get_sgl is redundant; the caller has already performed it. When the check is repeated multiple times, performance deteriorates. So delete the redundant parameter verification and move the index verification to the module entry function. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: hisilicon/debugfs - Fix debugfs uninit process issueChenghai Huang
During the zip probe process, the debugfs failure does not stop the probe. When debugfs initialization fails, jumping to the error branch will also release regs, in addition to its own rollback operation. As a result, it may be released repeatedly during the regs uninit process. Therefore, the null check needs to be added to the regs uninit process. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: hisilicon/sec - Add the condition for configuring the sriov functionChenghai Huang
When CONFIG_PCI_IOV is disabled, the SRIOV configuration function is not required. An error occurs if this function is incorrectly called. Consistent with other modules, add the condition for configuring the sriov function of sec_pci_driver. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: x86/sha512-avx2 - add missing vzeroupperEric Biggers
Since sha512_transform_rorx() uses ymm registers, execute vzeroupper before returning from it. This is necessary to avoid reducing the performance of SSE code. Fixes: e01d69cb0195 ("crypto: sha512 - Optimized SHA512 x86_64 assembly routine using AVX instructions.") Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: x86/sha256-avx2 - add missing vzeroupperEric Biggers
Since sha256_transform_rorx() uses ymm registers, execute vzeroupper before returning from it. This is necessary to avoid reducing the performance of SSE code. Fixes: d34a460092d8 ("crypto: sha256 - Optimized sha256 x86_64 routine using AVX2's RORX instructions") Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: x86/nh-avx2 - add missing vzeroupperEric Biggers
Since nh_avx2() uses ymm registers, execute vzeroupper before returning from it. This is necessary to avoid reducing the performance of SSE code. Fixes: 0f961f9f670e ("crypto: x86/nhpoly1305 - add AVX2 accelerated NHPoly1305") Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: iaa - Use cpumask_weight() when rebalancingTom Zanussi
If some cpus are offlined, or if the node mask is smaller than expected, the 'nonexistent cpu' warning in rebalance_wq_table() may be erroneously triggered. Use cpumask_weight() to make sure we only iterate over the exact number of cpus in the mask. Also use num_possible_cpus() instead of num_online_cpus() to make sure all slots in the wq table are initialized. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: x509 - Add OID for NIST P521 and extend parser for itStefan Berger
Enable the x509 parser to accept NIST P521 certificates and add the OID for ansip521r1, which is the identifier for NIST P521. Cc: David Howells <dhowells@redhat.com> Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: asymmetric_keys - Adjust signature size calculation for NIST P521Stefan Berger
Adjust the calculation of the maximum signature size for support of NIST P521. While existing curves may prepend a 0 byte to their coordinates (to make the number positive), NIST P521 will not do this since only the first bit in the most significant byte is used. If the encoding of the x & y coordinates requires at least 128 bytes then an additional byte is needed for the encoding of the length. Take this into account when calculating the maximum signature size. Reviewed-by: Lukas Wunner <lukas@wunner.de> Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: ecdsa - Register NIST P521 and extend test suiteStefan Berger
Register NIST P521 as an akcipher and extend the testmgr with NIST P521-specific test vectors. Add a module alias so the module gets automatically loaded by the crypto subsystem when the curve is needed. Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: ecdsa - Rename keylen to bufsize where necessaryStefan Berger
In cases where 'keylen' was referring to the size of the buffer used by a curve's digits, it does not reflect the purpose of the variable anymore once NIST P521 is used. What it refers to then is the size of the buffer, which may be a few bytes larger than the size a coordinate of a key. Therefore, rename keylen to bufsize where appropriate. Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: ecdsa - Replace ndigits with nbits where precision is neededStefan Berger
Replace the usage of ndigits with nbits where precise space calculations are needed, such as in ecdsa_max_size where the length of a coordinate is determined. Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: ecc - Add NIST P521 curve parametersStefan Berger
Add the parameters for the NIST P521 curve and define a new curve ID for it. Make the curve available in ecc_get_curve. Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: ecc - Add special case for NIST P521 in ecc_point_multStefan Berger
In ecc_point_mult use the number of bits of the NIST P521 curve + 2. The change is required specifically for NIST P521 to pass mathematical tests on the public key. Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-04-12crypto: ecc - Implement vli_mmod_fast_521 for NIST p521Stefan Berger
Implement vli_mmod_fast_521 following the description for how to calculate the modulus for NIST P521 in the NIST publication "Recommendations for Discrete Logarithm-Based Cryptography: Elliptic Curve Domain Parameters" section G.1.4. NIST p521 requires 9 64bit digits, so increase the ECC_MAX_DIGITS so that the vli digit array provides enough elements to fit the larger integers required by this curve. Tested-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
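For the digit count mentioned above, the arithmetic is simply ceil(521 / 64) = 9; as a self-describing constant (the macro name here is illustrative):

	#include <linux/math.h>

	/* 521-bit operands split into 64-bit vli digits */
	#define NIST_P521_DIGITS	DIV_ROUND_UP(521, 64)	/* = 9 */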