Age | Commit message | Author |
|
Update the firmware API to take partial decompress as an argument.
Modify the firmware descriptor to support auto-select best and partial
decompress.
Define the maximum auto-select best value.
Define the mask and bit position for the partial decompress field in the
firmware descriptor.
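As a minimal sketch of the pattern (the macro names and the bit position
below are hypothetical, not taken from this patch), a field defined by a
mask and a bit position is typically encoded with the linux/bitfield.h
helpers:

  #include <linux/bitfield.h>
  #include <linux/types.h>

  /* Hypothetical names; the real field location differs */
  #define FW_DESC_PARTIAL_DECOMP_BITPOS   3
  #define FW_DESC_PARTIAL_DECOMP_MASK     GENMASK(3, 3)

  static inline u32 fw_desc_set_partial_decomp(u32 word, bool enable)
  {
      word &= ~FW_DESC_PARTIAL_DECOMP_MASK;     /* clear the field */
      word |= FIELD_PREP(FW_DESC_PARTIAL_DECOMP_MASK, enable);
      return word;
  }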
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Export the function adf_init_admin_pm() as it will be used by the
qat_6xxx driver to send the power management initialization messages
to the firmware.
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The functions related to compression and crypto configurations were
previously declared static, restricting their visibility to the defining
source file. Remove the static qualifier, allowing them to be used in
other files as needed. This is necessary for sharing these configuration
functions with other QAT generations.
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Export the function adf_get_service_mask() as it will be used by the
qat_6xxx driver to configure the device.
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support for the QAT GEN6 devices in the firmware loader.
This includes handling firmware images signed with the RSA 3K and the
XMSS algorithms.
Co-developed-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Signed-off-by: Jack Xu <jack.xu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The current implementation is designed to support only a single FW
signing authentication method.
Refactor the implementation to support other FW signing methods.
This does not include any functional change.
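As an illustrative sketch only (the structure and symbol names are
hypothetical, not the driver's actual ones), supporting several signing
methods usually means dispatching through a small per-method ops table
instead of a single hard-coded authentication path:

  #include <linux/types.h>

  enum fw_signing_type {                        /* hypothetical */
      FW_SIGNING_RSA3K,
      FW_SIGNING_XMSS,
  };

  struct fw_auth_ops {                          /* hypothetical per-method hooks */
      int (*check_image)(const void *image, size_t size);
      int (*do_auth)(const void *image, size_t size);
  };

  extern const struct fw_auth_ops rsa3k_auth_ops;   /* assumed providers */
  extern const struct fw_auth_ops xmss_auth_ops;

  static const struct fw_auth_ops *fw_auth_ops_table[] = {
      [FW_SIGNING_RSA3K] = &rsa3k_auth_ops,
      [FW_SIGNING_XMSS]  = &xmss_auth_ops,
  };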
Co-developed-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Signed-off-by: Jack Xu <jack.xu@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add pr_fmt() to qat uclo.c logging and update the debug and error
messages to use it.
This does not introduce any functional changes.
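A minimal sketch of the pr_fmt() pattern (the prefix string below is an
assumption, not necessarily the one used here); the macro must be defined
before the printk headers are pulled in:

  /* must come before any (indirect) include of <linux/printk.h> */
  #define pr_fmt(fmt) "QAT: " fmt       /* prefix string is an assumption */

  #include <linux/printk.h>

  static void report_bad_object(int idx)
  {
      pr_err("invalid object header %d\n", idx);  /* prints "QAT: invalid ..." */
  }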
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The logic that generates the compression templates, which are used to
submit compression requests to the QAT device, is very similar between
QAT devices and diverges mainly on the HW generation-specific
configuration word.
This makes the logic that generates the compression and decompression
templates common between GEN2 and GEN4 devices and abstracts the
generation-specific logic to the generation-specific implementations.
The adf_gen2_dc.c and adf_gen4_dc.c have been replaced by adf_dc.c, and
the generation-specific logic has been reduced and moved to
adf_gen2_hw_data.c and adf_gen4_hw_data.c.
This does not introduce any functional change.
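An illustrative sketch only (the hook and structure names are
hypothetical): the common template builder can call a per-generation
callback to fill in the hardware configuration word:

  #include <linux/types.h>

  struct adf_dc_ops_sketch {                    /* hypothetical hook table */
      void (*build_cfg_word)(u32 *cfg, bool compress);
  };

  static void build_dc_template(const struct adf_dc_ops_sketch *ops,
                                u32 *cfg, bool compress)
  {
      *cfg = 0;                                 /* generation-agnostic defaults */
      ops->build_cfg_word(cfg, compress);       /* GEN2/GEN4-specific bits */
  }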
Co-developed-by: Vijay Sundar Selvamani <vijay.sundar.selvamani@intel.com>
Signed-off-by: Vijay Sundar Selvamani <vijay.sundar.selvamani@intel.com>
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rename adf_gen4_timer.c to adf_timer.c and adf_gen4_timer.h to
adf_timer.h to make the files generation-agnostic. This includes
renaming the start() and stop() timer APIs and macro definitions
to be generic, allowing for reuse across different device
generations.
This does not introduce any functional changes.
Signed-off-by: George Abraham P <george.abraham.p@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Explicitly include linux/init.h rather than pulling it through
potluck.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit c4741b23059794bd99beef0f700103b0d983b3fd.
Crypto API self-tests no longer run at registration time and now
occur either at late_initcall or upon the first use.
Therefore the premise of the above commit no longer exists. Revert
it and subsequent additions of subsys_initcall and arch_initcall.
Note that lib/crypto calls will stay at subsys_initcall (or rather
downgraded from arch_initcall) because they may need to occur
before Crypto API registration.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As sha512 requires 128-bit counters, extend the hash length counters
to that length. Previously they were just 32 bits, which meant that a
sha256 hash of more than 4 GiB of data would be incorrect.
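A minimal sketch of the idea, with hypothetical names: the byte count is
split into two 64-bit words and a carry is propagated when the low word
wraps:

  #include <linux/types.h>

  struct hash_len {                     /* illustrative, not the patch's struct */
      u64 lo;                           /* low 64 bits of the byte count */
      u64 hi;                           /* high 64 bits; needed for sha384/sha512 */
  };

  static inline void hash_len_add(struct hash_len *c, size_t len)
  {
      c->lo += len;
      if (c->lo < len)                  /* low word wrapped: propagate carry */
          c->hi++;
  }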
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the Crypto API partial block handling.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
To ensure proper functionality, each specific driver needs to access
functions located in the qat_common folder.
Move the include path for qat_common to the top-level Makefile.
This eliminates the need for redundant include directives in the
Makefiles of individual drivers.
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Follow best practices by changing the length parameters to size_t and
explicitly specifying the length of the output digest arrays.
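A hedged sketch of what such prototypes look like (the function names are
placeholders):

  #include <linux/types.h>
  #include <crypto/sha2.h>

  /* placeholder names: length is size_t, output size is part of the prototype */
  void sha256_example(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);
  void sha512_example(const u8 *data, size_t len, u8 out[SHA512_DIGEST_SIZE]);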
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
sha256_base.h is no longer used, so remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
To match sha256_blocks_arch(), change the type of the nblocks parameter
of the assembly functions from int to size_t. The assembly functions
actually already treated it as size_t.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since arch/sparc/crypto/opcodes.h is now needed outside the
arch/sparc/crypto/ directory, move it into arch/sparc/include/asm/ so
that it can be included as <asm/opcodes.h>.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
To match sha256_blocks_arch(), change the type of the nblocks parameter
of the assembly function from int to size_t. The assembly function
actually already treated it as size_t.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
Remove support for SHA-256 finalization from the ARMv8 CE assembly code,
since the library does not yet support architecture-specific overrides
of the finalization. (Support for that has been omitted for now, for
simplicity and because usually it isn't performance-critical.)
To match sha256_blocks_arch(), change the type of the nblocks parameter
of the assembly functions from int or 'unsigned int' to size_t. Update
the ARMv8 CE assembly function accordingly. The scalar and NEON
assembly functions actually already treated it as size_t.
While renaming the assembly files, also fix the naming quirks where
"sha2" meant sha256, and "sha512" meant both sha256 and sha512.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since kernel-mode NEON sections are now preemptible on arm64, there is
no longer any need to limit their length.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
To merge the scalar, NEON, and CE code all into one module cleanly, add
!CPU_V7M as a direct dependency of the CE code. Previously, !CPU_V7M
was only a direct dependency of the scalar and NEON code. The result is
still the same because CPU_V7M implies !KERNEL_MODE_NEON, so !CPU_V7M
was already an indirect dependency of the CE code.
To match sha256_blocks_arch(), change the type of the nblocks parameter
of the assembly functions from int to size_t. The assembly functions
actually already treated it as size_t.
While renaming the assembly files, also fix the naming quirk where
"sha2" meant sha256. (SHA-512 is also part of SHA-2.)
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As has been done for various other algorithms, rework the design of the
SHA-256 library to support arch-optimized implementations, and make
crypto/sha256.c expose both generic and arch-optimized shash algorithms
that wrap the library functions.
This allows users of the SHA-256 library functions to take advantage of
the arch-optimized code, and this makes it much simpler to integrate
SHA-256 for each architecture.
Note that sha256_base.h is not used in the new design. It will be
removed once all the architecture-specific code has been updated.
Move the generic block function into its own module to avoid a circular
dependency from libsha256.ko => sha256-$ARCH.ko => libsha256.ko.
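A rough sketch of the resulting layering, assuming illustrative names and
config symbol (not necessarily the exact ones introduced): the library
hands full blocks to either the arch-optimized or the generic block
function:

  #include <linux/types.h>

  /* assumed names for the generic and arch-optimized block functions */
  void sha256_blocks_generic(u32 state[8], const u8 *data, size_t nblocks);
  void sha256_blocks_arch(u32 state[8], const u8 *data, size_t nblocks);

  static void sha256_blocks(u32 state[8], const u8 *data, size_t nblocks)
  {
  #ifdef CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256     /* assumed config symbol */
      sha256_blocks_arch(state, data, nblocks);
  #else
      sha256_blocks_generic(state, data, nblocks);
  #endif
  }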
Signed-off-by: Eric Biggers <ebiggers@google.com>
Add export and import functions to maintain existing export format.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that every architecture provides a block function, use it
to implement the lib/poly1305 code and remove the old per-arch code.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As there are no in-kernel users of the Crypto API poly1305 left,
remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As poly1305 no longer has any in-kernel users, remove its tests.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since the poly1305 algorithm is fixed, there is no point in going
through the Crypto API for it. Use the lib/crypto poly1305 interface
instead.
For compatibility, keep the poly1305 parameter in the algorithm name.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add block-only interface.
Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add block-only interface.
Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add block-only interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add block-only interface.
Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add block-only interface.
Also remove the unnecessary SIMD fallback path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a block-only interface for poly1305. Implement the generic
code first.
Also use the generic partial block helper.
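A purely conceptual sketch of a block-only interface, with hypothetical
names (the real lib/crypto API may differ): key setup, a function that
consumes whole 16-byte blocks, and a final emit step, with partial-block
buffering left to the caller:

  #include <linux/types.h>

  #define P1305_BLOCK_SIZE 16

  struct p1305_block_state;             /* opaque: accumulator + key */

  /* hypothetical block-only API; partial-block buffering is the caller's job */
  void p1305_block_init(struct p1305_block_state *st, const u8 raw_key[16]);
  void p1305_blocks(struct p1305_block_state *st, const u8 *data,
                    size_t nblocks, u32 padbit);
  void p1305_emit(const struct p1305_block_state *st, u8 digest[16],
                  const u32 nonce[4]);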
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Extract the common partial block handling into a helper macro
that can be reused by other library code.
Also delete the unused sha256_base_do_finalize function.
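A minimal sketch of the pattern such a helper captures (illustrative
only; the actual helper is a macro with different spelling): fill the
internal buffer first, process whole blocks straight from the input, then
stash the tail:

  #include <linux/minmax.h>
  #include <linux/string.h>
  #include <linux/types.h>

  /* block_fn consumes whole blocks only; buf holds at most bsize - 1 bytes */
  static void partial_block_update(u8 *buf, unsigned int *buflen,
                                   unsigned int bsize, const u8 *data,
                                   size_t len,
                                   void (*block_fn)(const u8 *src, size_t n))
  {
      if (*buflen) {
          unsigned int fill = min_t(size_t, len, bsize - *buflen);

          memcpy(buf + *buflen, data, fill);
          *buflen += fill;
          data += fill;
          len -= fill;
          if (*buflen == bsize) {       /* buffer filled: flush it */
              block_fn(buf, 1);
              *buflen = 0;
          }
      }
      if (len >= bsize) {               /* bulk of the input */
          size_t nblocks = len / bsize;

          block_fn(data, nblocks);
          data += nblocks * bsize;
          len -= nblocks * bsize;
      }
      memcpy(buf, data, len);           /* stash the partial tail */
      *buflen += len;
  }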
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Merge mainline to pick up bcachefs poly1305 patch 4bf4b5046de0
("bcachefs: use library APIs for ChaCha20 and Poly1305"). This
is a prerequisite for removing the poly1305 shash algorithm.
|
|
Merge tag 'perf-tools-fixes-for-v6.15-2025-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools
Pull perf tools fixes from Namhyung Kim:
"Just a couple of build fixes on arm64"
* tag 'perf-tools-fixes-for-v6.15-2025-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools:
perf tools: Fix in-source libperf build
perf tools: Fix arm64 build by generating unistd_64.h
|
|
Merge tag 'trace-v6.15-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing fixes from Steven Rostedt:
- Fix read out of bounds bug in tracing_splice_read_pipe()

  The size of the sub buffer being read can now be greater than a page,
  but the buffer used in tracing_splice_read_pipe() only allocates a
  page size. The amount copied to the buffer is the amount of data in
  the sub buffer, which can overflow the buffer.

  Use min((size_t)trace_seq_used(&iter->seq), PAGE_SIZE) to limit the
  amount copied to the buffer to a max of PAGE_SIZE.

- Fix the test for NULL from "!filter_hash" to "!*filter_hash"

  The add_next_hash() function checked for NULL at the wrong pointer
  level.

- Do not use the array in trace_adjust_address() if there are no
  elements

  trace_adjust_address() finds the offset of a module that was stored
  in the persistent buffer when reading the previous boot buffer, to
  see if the address belongs to a module that was loaded in the
  previous boot. An array is created that matches currently loaded
  modules with previously loaded modules, and trace_adjust_address()
  uses that array to find the new offset of an address that's in the
  previous buffer. But if no module was loaded, it ends up reading the
  last element of an array that was never allocated.

  Check if nr_entries is zero and exit out early if it is.

- Remove nested lock of trace_event_sem in print_event_fields()

  The print_event_fields() function iterates over the ftrace_events
  list and takes the trace_event_sem semaphore for read. But this
  function is always called with that semaphore already held for read.

  Remove the taking of the semaphore and replace it with
  lockdep_assert_held_read(&trace_event_sem).
* tag 'trace-v6.15-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
tracing: Do not take trace_event_sem in print_event_fields()
tracing: Fix trace_adjust_address() when there is no modules in scratch area
ftrace: Fix NULL memory allocation check
tracing: Fix oob write in trace_seq_to_buffer()
|
|
Merge tag 'parisc-for-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
Pull parisc fix from Helge Deller:
"Fix a double SIGFPE crash"
* tag 'parisc-for-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
parisc: Fix double SIGFPE crash
|
|
Camm noticed that on parisc a SIGFPE exception will crash an application with
a second SIGFPE in the signal handler. Dave analyzed it, and it happens
because glibc uses a double-word floating-point store to atomically update
function descriptors. As a result of lazy binding, we hit a floating-point
store in fpe_func almost immediately.
When the T bit is set, an assist exception trap occurs when the
co-processor encounters *any* floating-point instruction except for a double
store of register %fr0. The latter cancels all pending traps. Let's fix this
by clearing the Trap (T) bit in the FP status register before returning to the
signal handler in userspace.
The issue can be reproduced with this test program:
root@parisc:~# cat fpe.c
#define _GNU_SOURCE             /* for feenableexcept() */
#include <stdio.h>
#include <signal.h>
#include <fenv.h>

static void fpe_func(int sig, siginfo_t *i, void *v) {
    sigset_t set;

    /* unblock SIGFPE so a second exception is delivered to us again */
    sigemptyset(&set);
    sigaddset(&set, SIGFPE);
    sigprocmask(SIG_UNBLOCK, &set, NULL);
    printf("GOT signal %d with si_code %d\n", sig, i->si_code);
}

int main() {
    struct sigaction action = {
        .sa_sigaction = fpe_func,
        .sa_flags = SA_RESTART|SA_SIGINFO };

    sigaction(SIGFPE, &action, NULL);
    feenableexcept(FE_OVERFLOW);        /* trap on floating-point overflow */
    return printf("%lf\n", 1.7976931348623158E308*1.7976931348623158E308);
}
root@parisc:~# gcc fpe.c -lm
root@parisc:~# ./a.out
Floating point exception
root@parisc:~# strace -f ./a.out
execve("./a.out", ["./a.out"], 0xf9ac7034 /* 20 vars */) = 0
getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0
...
rt_sigaction(SIGFPE, {sa_handler=0x1110a, sa_mask=[], sa_flags=SA_RESTART|SA_SIGINFO}, NULL, 8) = 0
--- SIGFPE {si_signo=SIGFPE, si_code=FPE_FLTOVF, si_addr=0x1078f} ---
--- SIGFPE {si_signo=SIGFPE, si_code=FPE_FLTOVF, si_addr=0xf8f21237} ---
+++ killed by SIGFPE +++
Floating point exception
Signed-off-by: Helge Deller <deller@gmx.de>
Suggested-by: John David Anglin <dave.anglin@bell.net>
Reported-by: Camm Maguire <camm@maguirefamily.org>
Cc: stable@vger.kernel.org
|
|
Merge tag 'edac_urgent_for_v6.15_rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Pull EDAC fixes from Borislav Petkov:
- Test the correct structure member when handling correctable errors
and avoid spurious interrupts, in altera_edac
* tag 'edac_urgent_for_v6.15_rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
EDAC/altera: Set DDR and SDMMC interrupt mask before registration
EDAC/altera: Test the correct error reg offset
|
|
Merge tag 'x86-urgent-2025-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fix from Ingo Molnar:
"Fix SEV-SNP memory acceptance from the EFI stub for guests
running at VMPL >0"
* tag 'x86-urgent-2025-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/boot/sev: Support memory acceptance in the EFI stub under SVSM
|
|
Merge tag 'perf-urgent-2025-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull misc perf fixes from Ingo Molnar:
- Require group events for branch counter groups and
PEBS counter snapshotting groups to be x86 events.
- Fix the handling of counter-snapshotting of non-precise
events, where counter values may move backwards a bit,
temporarily, confusing the code.
- Restrict perf/KVM PEBS to guest-owned events.
* tag 'perf-urgent-2025-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/intel: KVM: Mask PEBS_ENABLE loaded for guest with vCPU's value.
perf/x86/intel/ds: Fix counter backwards of non-precise events counters-snapshotting
perf/x86/intel: Check the X86 leader for pebs_counter_event_group
perf/x86/intel: Only check the group flag for X86 leader
|
|
Merge tag 'irq-urgent-2025-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fixes from Ingo Molnar:
- Prevent NULL pointer dereference in msi_domain_debug_show()
- Fix crash in the qcom-mpm irqchip driver when configuring
interrupts for non-wake GPIOs
* tag 'irq-urgent-2025-05-04' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/qcom-mpm: Prevent crash when trying to handle non-wake GPIOs
genirq/msi: Prevent NULL pointer dereference in msi_domain_debug_show()
|
|
Commit:
d54d610243a4 ("x86/boot/sev: Avoid shared GHCB page for early memory acceptance")
provided a fix for SEV-SNP memory acceptance from the EFI stub when
running at VMPL #0. However, that fix was insufficient for SVSM SEV-SNP
guests running at VMPL >0, as those rely on an SVSM calling area, which
is a shared buffer whose address is programmed into a SEV-SNP MSR, and
the SEV init code that sets up this calling area executes much later
during the boot.
Given that booting via the EFI stub at VMPL >0 implies that the firmware
has configured this calling area already, reuse it for performing memory
acceptance in the EFI stub.
Fixes: fcd042e86422 ("x86/sev: Perform PVALIDATE using the SVSM when not at VMPL0")
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Co-developed-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: <stable@vger.kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: linux-efi@vger.kernel.org
Link: https://lore.kernel.org/r/20250428174322.2780170-2-ardb+git@google.com
|