|
There is a spelling mistake in fs/xfs/xfs_log.c. Fix it.
Signed-off-by: Zhang Xianwei <zhang.xianwei8@zte.com.cn>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Carlos Maiolino <cem@kernel.org>
|
|
When filling the taskfile result for a successful NCQ command, we use
the SDB FIS from the FIS Receive Area, see e.g. ahci_qc_ncq_fill_rtf().
However, the SDB FIS only has fields STATUS and ERROR.
For a successful NCQ command that has sense data, we will have a
successful sense data descriptor, in the Sense Data for Successful NCQ
Commands log.
Since we have access to additional taskfile result fields, fill in these
additional fields in qc->result_tf.
This matches how failing/aborted NCQ commands are handled: there we first
use e.g. ahci_qc_fill_rtf() to fill in some fields, and then, for the
command that actually caused the NCQ error, use ata_eh_read_log_10h(),
which provides additional fields and overrides the qc->result_tf that was
fetched using ahci_qc_fill_rtf().
Fixes: 18bd7718b5c4 ("scsi: ata: libata: Handle completion of CDL commands using policy 0xD")
Signed-off-by: Niklas Cassel <cassel@kernel.org>
Reviewed-by: Igor Pylypiv <ipylypiv@google.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
|
|
The ACPI byte code inside the ACPI control method responsible for
handling the WMI method calls uses a global buffer for constructing
the return value, yet the ACPI control method itself is not marked
as "Serialized".
This means that calling WMI methods on this WMI device is not
thread-safe, as concurrent WMI method calls will corrupt the global
buffer.
Fix this by serializing the WMI method calls using a mutex.
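As a rough sketch of the approach (the identifiers below are placeholders,
not the driver's real ones), the WMI method evaluation is wrapped in a
driver-wide mutex:
  #include <linux/acpi.h>
  #include <linux/mutex.h>
  #include <linux/wmi.h>

  /* Sketch only: "wmi_lock" and msi_wmi_platform_query() are hypothetical names. */
  static DEFINE_MUTEX(wmi_lock);

  static int msi_wmi_platform_query(struct wmi_device *wdev, u32 method_id,
                                    struct acpi_buffer *in, struct acpi_buffer *out)
  {
          acpi_status status;

          mutex_lock(&wmi_lock);          /* serialize use of the global ACPI buffer */
          status = wmidev_evaluate_method(wdev, 0x0, method_id, in, out);
          mutex_unlock(&wmi_lock);

          return ACPI_FAILURE(status) ? -EIO : 0;
  }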
Cc: stable@vger.kernel.org # 6.x.x: 912d614ac99e: platform/x86: msi-wmi-platform: Rename "data" variable
Fixes: 9c0beb6b29e7 ("platform/x86: wmi: Add MSI WMI Platform driver")
Tested-by: Antheas Kapenekakis <lkml@antheas.dev>
Signed-off-by: Armin Wolf <W_Armin@gmx.de>
Link: https://lore.kernel.org/r/20250414140453.7691-2-W_Armin@gmx.de
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
Returning a negative error code in a function with an unsigned
return type is a pretty bad idea. It is probably worse when the
justification for the change is "our static analysis tool found it".
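As a hypothetical illustration of the pitfall (not the driver code itself):
  #include <errno.h>
  #include <stdio.h>

  /* The negative errno silently wraps to a huge positive value because of
   * the unsigned return type, so callers cannot detect the failure. */
  static unsigned int get_rate(int fail)
  {
          if (fail)
                  return -ENODEV;         /* wraps to 4294967277 */
          return 1000;
  }

  int main(void)
  {
          printf("%u\n", get_rate(1));    /* prints 4294967277, not -19 */
          return 0;
  }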
Fixes: cf7de25878a1 ("cppc_cpufreq: Fix possible null pointer dereference")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Lifeng Zheng <zhenglifeng1@huawei.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
These fields are no longer needed, so remove them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The x86 Poly1305 code never falls back to the generic code, so selecting
CRYPTO_LIB_POLY1305_GENERIC is unnecessary.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm
that uses the architecture's Poly1305 library functions, individual
architectures no longer need to do the same. Therefore, remove the
redundant shash algorithm from the arch-specific code and leave just the
library functions there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm
that uses the architecture's Poly1305 library functions, individual
architectures no longer need to do the same. Therefore, remove the
redundant shash algorithm from the arch-specific code and leave just the
library functions there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
arch/mips/crypto/Kconfig is sourced only when CONFIG_MIPS is enabled, so
there is no need for options defined in that file to depend on it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm
that uses the architecture's Poly1305 library functions, individual
architectures no longer need to do the same. Therefore, remove the
redundant shash algorithm from the arch-specific code and leave just the
library functions there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm
that uses the architecture's Poly1305 library functions, individual
architectures no longer need to do the same. Therefore, remove the
redundant shash algorithm from the arch-specific code and leave just the
library functions there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Following the example of the crc32, crc32c, and chacha code, make the
crypto subsystem register both generic and architecture-optimized
poly1305 shash algorithms, both implemented on top of the appropriate
library functions. This eliminates the need for every architecture to
implement the same shash glue code.
Note that the poly1305 shash requires that the key be prepended to the
data, which differs from the library functions where the key is simply a
parameter to poly1305_init(). Previously this was handled at a fairly
low level, polluting the library code with shash-specific code.
Reorganize things so that the shash code handles this quirk itself.
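For reference, a minimal sketch of the library calling convention being
contrasted here (the key is a plain parameter and is never part of the
data stream; the wrapper function name is illustrative):
  #include <crypto/poly1305.h>

  /* Library users hand the key to poly1305_init() and feed only the
   * message through poly1305_update(); the shash instead expects the
   * 32-byte key prepended to the data. */
  static void compute_mac(const u8 key[POLY1305_KEY_SIZE],
                          const u8 *msg, unsigned int len,
                          u8 digest[POLY1305_DIGEST_SIZE])
  {
          struct poly1305_desc_ctx desc;

          poly1305_init(&desc, key);
          poly1305_update(&desc, msg, len);
          poly1305_final(&desc, digest);
  }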
Also, to register the architecture-optimized shashes only when
architecture-optimized code is actually being used, add a function
poly1305_is_arch_optimized() and make each arch implement it. Change
each architecture's Poly1305 module_init function to arch_initcall so
that the CPU feature detection is guaranteed to run before
poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In
cases where poly1305_is_arch_optimized() just returns true
unconditionally, using arch_initcall is not strictly needed, but it's
still good to be consistent across architectures.)
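As a sketch of what the per-architecture side then looks like (the CPU
feature probe and key name are hypothetical, not code from the patch):
  #include <crypto/poly1305.h>
  #include <linux/init.h>
  #include <linux/jump_label.h>
  #include <linux/module.h>

  static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_simd);

  bool poly1305_is_arch_optimized(void)
  {
          return static_key_enabled(&have_simd);
  }
  EXPORT_SYMBOL(poly1305_is_arch_optimized);

  static int __init poly1305_arch_mod_init(void)
  {
          if (cpu_has_feature_xyz())      /* hypothetical CPU feature probe */
                  static_branch_enable(&have_simd);
          return 0;
  }
  arch_initcall(poly1305_arch_mod_init);  /* runs before crypto/poly1305.c checks it */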
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently the Power10 optimized Poly1305 is only wired up to the
crypto_shash API, which makes it unavailable to users of the library
API. The crypto_shash API for Poly1305 is going to change to be
implemented on top of the library API, so the library API needs to be
supported. And of course it's needed anyway to serve the library users.
Therefore, change the Power10 optimized Poly1305 code to implement the
library API instead of the crypto_shash API.
Cc: Danny Tsen <dtsen@linux.ibm.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Ard's recent series of patches removing 'comp' implementations
left behind a bunch of trivial structs; remove them.
These are:
crypto842_ctx - commit 2d985ff0072f ("crypto: 842 - drop obsolete 'comp'
implementation")
lz4_ctx - commit 33335afe33c9 ("crypto: lz4 - drop obsolete 'comp'
implementation")
lz4hc_ctx - commit dbae96559eef ("crypto: lz4hc - drop obsolete
'comp' implementation")
lzo_ctx - commit a3e43a25bad0 ("crypto: lzo - drop obsolete
'comp' implementation")
lzorle_ctx - commit d32da55c5b0c ("crypto: lzo-rle - drop obsolete
'comp' implementation")
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The block size of a hash algorithm is meant to be the number of
bytes its block function can handle. For cbcmac that should be
the block size of the underlying block cipher instead of one.
Set the block size of all cbcmac implementations accordingly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove the duplicate init code and simply call sm3_init.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Export the generic block function so that it can be used by the
Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the sm3 library code into lib/crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The biggest context is not sha3_generic (356), but sha-s390 (360).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Instead of relying on linux/module.h being included through the
header file sha512_base.h, include it directly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The hardware is only capable of one hash at a time, so add a lock
to make sure that it isn't used concurrently.
Fixes: 7ecc3e34474b ("crypto: xilinx - Add Xilinx SHA3 driver")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Allow any ahash to be used with a stack request, with optional
dynamic allocation when async is needed. The intended usage is:
  HASH_REQUEST_ON_STACK(req, tfm);
  ...
  err = crypto_ahash_digest(req);
  /* The request cannot complete synchronously. */
  if (err == -EAGAIN) {
          /* This will not fail. */
          req = HASH_REQUEST_CLONE(req, gfp);
          /* Redo operation. */
          err = crypto_ahash_digest(req);
  }
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As all users of the dynamic descsize have been converted to use
a static one instead, remove support for dynamic descsize.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than setting descsize in init_tfm, make it an algorithm
attribute and set it during instance construction.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than setting descsize in init_tfm, set it statically and
double-check it in init_tfm.
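A rough sketch of the pattern (hypothetical names, not the actual driver
change): .descsize is declared statically on the algorithm, and init_tfm
merely verifies that the fallback's requirement still fits:
  #include <linux/err.h>
  #include <crypto/hash.h>
  #include <crypto/internal/hash.h>

  struct my_tfm_ctx {
          struct crypto_shash *fallback;
  };

  static int my_init_tfm(struct crypto_shash *tfm)
  {
          struct my_tfm_ctx *ctx = crypto_shash_ctx(tfm);

          ctx->fallback = crypto_alloc_shash("sha256", 0, CRYPTO_ALG_NEED_FALLBACK);
          if (IS_ERR(ctx->fallback))
                  return PTR_ERR(ctx->fallback);

          /* The statically declared descsize must still cover the fallback. */
          if (WARN_ON(crypto_shash_descsize(ctx->fallback) >
                      crypto_shash_descsize(tfm))) {
                  crypto_free_shash(ctx->fallback);
                  return -EINVAL;
          }
          return 0;
  }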
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than setting descsize in init_tfm, set it statically and
double-check it in init_tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If the bit CRYPTO_ALG_DUP_FIRST is set, an algorithm will be
duplicated by kmemdup before registration. This is intended for
hardware-based algorithms that may be unplugged at will.
Do not use this if the algorithm data structure is embedded in a
bigger data structure. Perform the duplication in the driver
instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Reduce skcipher_walk's struct size by 8 bytes by realigning its members.
pahole output before:
  /* size: 120, cachelines: 2, members: 13 */
  /* sum members: 108, holes: 2, sum holes: 8 */
  /* padding: 4 */
  /* last cacheline: 56 bytes */
and after:
  /* size: 112, cachelines: 2, members: 13 */
  /* padding: 4 */
  /* last cacheline: 48 bytes */
No functional changes intended.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that the asm/simd.h files have been made safe against double
inclusion, include it directly in internal/simd.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add missing header inclusions and protect against double inclusion.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add missing header inclusions and protect against double inclusion.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add missing header inclusions and protect against double inclusion.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The powerpc aes/ghash code was relying on pagefault_disable() being
pulled in by random header files.
Fix this by explicitly including uaccess.h. Also add other missing
header files to prevent similar problems in future.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The functions currently leave dangling pointers in the passed-in path,
leading to hard-to-debug bugs in the long run. Ensure that the path is
left in a pristine state, just like we do in e.g. path_parentat() and
other helpers.
Link: https://lore.kernel.org/20250414-rennt-wimmeln-f186c3a780f1@brauner
Signed-off-by: Christian Brauner <brauner@kernel.org>
|
|
Add a struct device pointer field to the device's context struct. This
makes using the unsigned long priv pointer in struct hwrng unnecessary, so
remove that one as well.
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a struct device pointer field to the device's context struct. This
makes using the unsigned long priv pointer in struct hwrng unnecessary, so
remove that one as well.
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a struct device pointer field to the device's context struct. This
makes using the unsigned long priv pointer in struct hwrng unnecessary, so
remove that one as well.
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Fix smatch warning:
  drivers/crypto/ccp/sev-dev.c:1755 __sev_snp_shutdown_locked()
  error: uninitialized symbol 'dfflush_error'.
Fixes: 9770b428b1a2 ("crypto: ccp - Move dev_info/err messages for SEV/SNP init and shutdown")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/linux-crypto/d9c2e79c-e26e-47b7-8243-ff6e7b101ec3@stanley.mountain/
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The current algorithm unregistration mechanism originated from
software crypto. The code relies on module reference counts to
stop in-use algorithms from being unregistered. Therefore if
the unregistration function is reached, it is assumed that the
module reference count has hit zero and thus the algorithm reference
count should be exactly 1.
This is completely broken for hardware devices, which can be
unplugged at random.
Fix this by allowing algorithms to be destroyed later if a destroy
callback is provided.
Reported-by: Sean Anderson <sean.anderson@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If the destination buffer has a fixed length, strscpy() automatically
determines its size using sizeof() when the argument is omitted. This
makes the explicit size argument unnecessary - remove it.
No functional changes intended.
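For illustration (the structure below is hypothetical, not the patched
code), the two forms are equivalent when the destination is a fixed-size
array:
  #include <linux/string.h>

  struct my_dev {
          char label[16];
  };

  static void set_label(struct my_dev *dev, const char *src)
  {
          /* Both calls are bounded by sizeof(dev->label); the two-argument
           * form lets strscpy() derive the size itself, so it cannot go
           * stale if the array is ever resized. */
          strscpy(dev->label, src, sizeof(dev->label));   /* explicit size */
          strscpy(dev->label, src);                       /* equivalent */
  }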
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When user space issues a KEYCTL_PKEY_QUERY system call for a NIST P521
key, the key_size is incorrectly reported as 528 bits instead of 521.
That's because the key size obtained through crypto_sig_keysize() is in
bytes and software_key_query() multiplies by 8 to yield the size in bits.
The underlying assumption is that the key size is always a multiple of 8.
With the recent addition of NIST P521, that's no longer the case.
Fix by returning the key_size in bits from crypto_sig_keysize() and
adjusting the calculations in software_key_query().
The ->key_size() callbacks of sig_alg algorithms now return the size in
bits, whereas the ->digest_size() and ->max_size() callbacks return the
size in bytes. This matches with the units in struct keyctl_pkey_query.
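A hypothetical illustration of the rounding at issue (66 bytes is the
smallest whole-byte container for 521 bits):
  #include <stdio.h>

  #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

  int main(void)
  {
          unsigned int key_bits  = 521;
          unsigned int key_bytes = DIV_ROUND_UP(key_bits, 8);    /* 66 */

          /* Reporting bytes * 8 inflates the key size. */
          printf("%u\n", key_bytes * 8);  /* prints 528, not 521 */
          return 0;
  }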
Fixes: a7d45ba77d3d ("crypto: ecdsa - Register NIST P521 and extend test suite")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
KEYCTL_PKEY_QUERY system calls for ecdsa keys return the key size as
max_enc_size and max_dec_size, even though such keys cannot be used for
encryption/decryption. They're exclusively for signature generation or
verification.
Only rsa keys with pkcs1 encoding can also be used for encryption or
decryption.
Return 0 instead for ecdsa keys (as well as ecrdsa keys).
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
On i.MX8QM, caam clocks are turned on automatically and Linux does not have
access to the caam controller's register page, so skip clock
initialization.
Signed-off-by: Thomas Richard <thomas.richard@bootlin.com>
Reviewed-by: Gaurav Jain <gaurav.jain@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than setting up the fallback request by hand, use
ahash_request_set_callback() and ahash_request_set_crypt() API helpers
to properly set up the new request.
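An illustrative sketch of the pattern these helpers enable (the context
struct and field names are hypothetical; it assumes the tfm's reqsize was
sized to hold the fallback request):
  #include <crypto/hash.h>
  #include <crypto/internal/hash.h>

  struct my_ahash_ctx {
          struct crypto_ahash *fallback_tfm;
  };

  /* Forward an ahash request to the fallback transform while preserving
   * the caller's flags and completion callback. */
  static int my_digest_fallback(struct ahash_request *req)
  {
          struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
          struct my_ahash_ctx *ctx = crypto_ahash_ctx(tfm);
          struct ahash_request *subreq = ahash_request_ctx(req);

          ahash_request_set_tfm(subreq, ctx->fallback_tfm);
          ahash_request_set_callback(subreq, req->base.flags,
                                     req->base.complete, req->base.data);
          ahash_request_set_crypt(subreq, req->src, req->result, req->nbytes);

          return crypto_ahash_digest(subreq);
  }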
Signed-off-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than setting up the fallback request by hand, use
ahash_request_set_callback() and ahash_request_set_crypt() API helpers
to properly set up the new request.
This also ensures that the completion callback is properly passed down
to the fallback algorithm, which avoids a crash with async fallbacks.
Signed-off-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Rather than setting up the fallback request by hand, use
ahash_request_set_callback() and ahash_request_set_crypt() API helpers
to properly set up the new request.
This also ensures that the completion callback is properly passed down
to the fallback algorithm, which avoids a crash with async fallbacks.
Signed-off-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the common reqsize field and remove reqsize from ahash_alg.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove the type-specific reqsize field in favour of the common one.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the common reqsize field for acomp algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the common reqsize field for acomp algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|