Age | Commit message | Author |
|
One of the expectations with feature ID registers is that their values
survive a vCPU reset. Start testing that.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20240502233529.1958459-7-oliver.upton@linux.dev
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
Rather than comparing against what is returned by the ioctl, store
expected values for the feature ID registers in a table and compare with
that instead.
This will prove useful for subsequent tests involving vCPU reset.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20240502233529.1958459-6-oliver.upton@linux.dev
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
Prepare for a later change that'll cram in per-vCPU feature ID test
cases by renaming the current test case.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20240502233529.1958459-5-oliver.upton@linux.dev
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
The general expectation with feature ID registers is that they're 'reset'
exactly once by KVM for the lifetime of a vCPU/VM, such that any
userspace changes to the CPU features / identity are honored after a
vCPU gets reset (e.g. PSCI_ON).
KVM handles what it calls VM-scoped feature ID registers correctly, but
feature ID registers local to a vCPU (CLIDR_EL1, MPIDR_EL1) get wiped
after every reset. What's especially concerning is that a
potentially-changing MPIDR_EL1 breaks MPIDR compression for indexing
mpidr_data, as the mask of useful bits to build the index could change.
This is absolutely no good. Avoid resetting vCPU feature ID registers
more than once.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20240502233529.1958459-4-oliver.upton@linux.dev
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
A subsequent change to KVM will expand the range of feature ID registers
that get special treatment at reset. Fold the existing ones back in to
kvm_reset_sys_regs() to avoid the need for an additional table walk.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20240502233529.1958459-3-oliver.upton@linux.dev
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
The naming of some of the feature ID checks is ambiguous. Rephrase the
is_id_reg() helper to make its purpose slightly clearer.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20240502233529.1958459-2-oliver.upton@linux.dev
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull hwmon fixes from Guenter Roeck:
- pmbus/ucd9000: Increase chip access delay to avoid random access
errors
- corsair-cpro: Protect kernel code against parallel hidraw access from
userspace
* tag 'hwmon-for-v6.9-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
hwmon: (pmbus/ucd9000) Increase delay from 250 to 500us
hwmon: (corsair-cpro) Protect ccp->wait_input_report with a spinlock
hwmon: (corsair-cpro) Use complete_all() instead of complete() in ccp_raw_event()
hwmon: (corsair-cpro) Use a separate buffer for sending commands
|
|
Cross-merge networking fixes after downstream PR.
No conflicts.
Adjacent changes:
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
35d92abfbad8 ("net: hns3: fix kernel crash when devlink reload during initialization")
2a1a1a7b5fd7 ("net: hns3: add command queue trace for hns3")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
There are several reports that the DMA sync shortcut broke non-coherent
devices.
dev->dma_need_sync is false after the &device allocation and if a driver
didn't call dma_set_mask*(), it will still be false even if the device
is not DMA-coherent and thus needs synchronizing. For historical
reasons, there are still a lot of drivers not calling it.
Invert the boolean, so that the sync will be performed by default and
the shortcut will be enabled only when calling dma_set_mask*().
Reported-by: Steven Price <steven.price@arm.com>
Closes: https://lore.kernel.org/lkml/010686f5-3049-46a1-8230-7752a1b433ff@arm.com
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Closes: https://lore.kernel.org/lkml/46160534-5003-4809-a408-6b3a3f4921e9@samsung.com
Fixes: f406c8e4b770 ("dma: avoid redundant calls for sync operations")
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
|
This includes zswpin, zswpout and zswpwb.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
Acked-by: Nhat Pham <nphamcs@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Message-ID: <20240502185307.3942173-2-usamaarif642@gmail.com>
|
|
Following the failure observed with a delay of 250us, experiments were
conducted with various delays. It was found that a delay of 350us
effectively mitigated the issue.
To provide a more optimal solution while still allowing a margin for
stability, the delay is being adjusted to 500us.
Signed-off-by: Lakshmi Yadlapati <lakshmiy@us.ibm.com>
Link: https://lore.kernel.org/r/20240507194603.1305750-1-lakshmiy@us.ibm.com
Fixes: 8d655e6523764 ("hwmon: (ucd90320) Add minimum delay between bus accesses")
Reviewed-by: Eddie James <eajames@linux.ibm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
|
|
The function claims to return the bitmap size if the Nth bit doesn't exist.
This rule is violated in the inline case because the fns() that is used
there doesn't know anything about the size of the bitmap.
So, relax this requirement to '>= size', and make the out-of-line
implementation a bit cheaper.
All in-tree kernel users of find_nth_bit() are safe against that.
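A minimal caller-side sketch of the relaxed contract, assuming the usual
<linux/find.h> declaration of find_nth_bit() (the helper itself is
illustrative, not from the patch):

/* Return the index of the Nth set bit, or -ENOENT if it doesn't exist. */
static long nth_set_bit_or_err(const unsigned long *bitmap,
			       unsigned long size, unsigned long n)
{
	unsigned long idx = find_nth_bit(bitmap, size, n);

	/* With the relaxed rule, test ">= size" rather than "== size". */
	return idx >= size ? -ENOENT : (long)idx;
}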
Reported-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Closes: https://lore.kernel.org/all/Zi50cAgR8nZvgLa3@yury-ThinkPad/T/#m6da806a0525e74dcc91f35e5f20766ed4e853e8a
Signed-off-by: Yury Norov <yury.norov@gmail.com>
|
|
The test is currently limited to being built as a module. There's no
technical reason for that. Now that the test includes some performance
benchmarks, it would be reasonable to run it at kernel load time, before
userspace starts, to reduce possible jitter.
Reviewed-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
|
|
The current fns() repeatedly uses __ffs() to find the index of the
least significant bit and then clears the corresponding bit using
__clear_bit(). The method for clearing the least significant bit can be
optimized by using word &= word - 1 instead.
Typically, the execution time of one __ffs() plus one __clear_bit() is
longer than that of a bitwise AND operation and a subtraction. To
improve performance, the loop for clearing the least significant bit
has been replaced with word &= word - 1, followed by a single __ffs()
operation to obtain the answer. This change reduces the number of
__ffs() iterations from n to just one, enhancing overall performance.
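A sketch of the reshaped helper, assuming the in-kernel __ffs() and
BITS_PER_LONG definitions (the exact upstream code may differ slightly):

static inline unsigned long fns(unsigned long word, unsigned int n)
{
	/* Strip the n lowest set bits with word &= word - 1 ... */
	while (word && n--)
		word &= word - 1;

	/* ... then a single __ffs() yields the index of the Nth set bit. */
	return word ? __ffs(word) : BITS_PER_LONG;
}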
This modification significantly accelerates the fns() function in the
test_bitops benchmark, improving its speed by approximately 7.6 times.
Additionally, it enhances the performance of find_nth_bit() in the
find_bit benchmark by approximately 26%.
Before:
test_bitops: fns: 58033164 ns
find_nth_bit: 4254313 ns, 16525 iterations
After:
test_bitops: fns: 7637268 ns
find_nth_bit: 3362863 ns, 16501 iterations
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
|
|
Introduce a benchmark test for fns(). It measures the total time
taken by fns() to process 10,000 test data generated using
get_random_bytes() for each n in the range [0, BITS_PER_LONG).
example:
test_bitops: fns: 7637268 ns
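For reference, a minimal sketch of such a measurement loop (buffer
handling, names and output format are illustrative, not the exact
lib/test_bitops.c code):

static unsigned long fns_buf[10000];
static volatile unsigned long fns_ret __used;

static void __init fns_benchmark(void)
{
	unsigned int i, n;
	ktime_t time;

	get_random_bytes(fns_buf, sizeof(fns_buf));

	time = ktime_get();
	for (n = 0; n < BITS_PER_LONG; n++)
		for (i = 0; i < 10000; i++)
			fns_ret = fns(fns_buf[i], n); /* volatile sink keeps the call alive */
	time = ktime_get() - time;

	pr_info("fns: %18llu ns\n", (unsigned long long)time);
}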
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Rasmus Villemoes <linux@rasmusvillemoes.dk>
CC: David Laight <David.Laight@aculab.com>
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Suggested-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
|
|
In some cases, like performance benchmarking, we need to call a
function but don't need to read the returned value. If the compiler
recognizes the function as pure or const, it can remove the function
invocation, which is not what we want.
To prevent that, the common practice is assigning the return value to
a temporary static volatile variable. From the compiler's point of view,
the variable is unused because it is never read back after being
assigned.
To make sure the variable is always emitted, we provide a __used
attribute. This works with GCC, but clang still emits
-Wunused-but-set-variable. To suppress that warning, we need to teach
clang to do that with the 'unused' attribute.
Nathan Chancellor explained it in detail:
  While having the used and unused attributes together might look
  unusual, reading the GCC attribute manual makes it seem like these
  attributes fulfill similar yet different roles: __unused__ prevents
  any unused warnings while __used__ forces the variable to be emitted.
  A strict reading of that does not make it seem like __used__ implies
  disabling unused warnings.
The compiler documentation makes it clear what happens behind the 'used'
and 'unused' attributes, but the chosen names may confuse readers if
such a combination catches the eye in random code.
This patch adds the __always_used macro, which combines both attributes
and comments on what happens, for those interested in the details.
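For illustration, a sketch of such a combined macro (its exact wording
and placement in the compiler headers are assumptions):

/*
 * __used forces emission of a variable that is only ever written;
 * __maybe_unused (the 'unused' attribute) silences clang's
 * -Wunused-but-set-variable for the same variable.
 */
#define __always_used	__used __maybe_unused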
Suggested-by: Nathan Chancellor <nathan@kernel.org>
Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202405030808.UsoMKFNP-lkp@intel.com/
Acked-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
|
|
Optimize topology_span_sane() by removing duplicate comparisons.
Since topology_span_sane() is called inside of for_each_cpu(), each
previous CPU has already been compared against every other CPU. The
current CPU only needs to be compared against higher-numbered CPUs.
The total number of comparisons is reduced from N * (N - 1) to
N * (N - 1) / 2 on each non-NUMA scheduling domain level.
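A simplified sketch of the reduced loop (the sched_domain_topology_level
plumbing is omitted and mask() stands in for tl->mask(); treat this as
illustrative, not the verbatim kernel code):

static bool span_sane(const struct cpumask *(*mask)(int cpu),
		      const struct cpumask *cpu_map, int cpu)
{
	int i = cpu + 1;

	/* Lower-numbered CPUs were already checked by earlier iterations
	 * of the enclosing for_each_cpu() loop, so start after @cpu. */
	for_each_cpu_from(i, cpu_map) {
		/* Sibling masks must be either identical or disjoint. */
		if (!cpumask_equal(mask(cpu), mask(i)) &&
		    cpumask_intersects(mask(cpu), mask(i)))
			return false;
	}

	return true;
}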
Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
|
|
Add for_each_cpu_from() as a generic cpumask macro.
for_each_cpu_from() is the same as for_each_cpu(), except it starts at
@cpu instead of zero.
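A definition sketch, following the shape of the existing for_each_cpu()
helpers in include/linux/cpumask.h (the exact expansion is an assumption):

/* Iterate over CPUs present in @mask, starting at @cpu. */
#define for_each_cpu_from(cpu, mask)				\
	for_each_set_bit_from(cpu, cpumask_bits(mask), small_cpumask_bits)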
Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
Acked-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
|
|
The *-objs suffix is reserved for (user-space) host programs, while the
*-y suffix is usually used for kernel drivers (although *-objs works
for that purpose for now).
Let's correct the old usages of *-objs in Makefiles.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Message-Id: <20240508152129.1445372-1-andriy.shevchenko@linux.intel.com>
|
|
Zero-length arrays are deprecated and flexible arrays should be
used instead: https://www.kernel.org/doc/html/v6.9-rc7/process/deprecated.html#zero-length-and-one-element-arrays
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@inria.fr>
Closes: https://lore.kernel.org/r/202405051824.AmjAI5Pg-lkp@intel.com/
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240506141917.205714-1-lucas.demarchi@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
(cherry picked from commit ee7284230644e21fef0e38fc5bf8f907b6bb7f7c)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
|
|
System work queues are shared; use a dedicated work queue for G2H
processing to avoid G2H processing getting blocked behind system tasks.
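A sketch of the pattern (the workqueue and work item names are
illustrative, not the exact xe code):

	/* driver init: allocate a dedicated queue instead of relying on system_wq */
	ct->g2h_wq = alloc_workqueue("xe-g2h-wq", 0, 0);
	if (!ct->g2h_wq)
		return -ENOMEM;

	/* G2H handler: was queue_work(system_wq, &ct->g2h_worker); */
	queue_work(ct->g2h_wq, &ct->g2h_worker);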
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: <stable@vger.kernel.org>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240506034758.3697397-1-matthew.brost@intel.com
(cherry picked from commit 50aec9665e0babd62b9eee4e613d9a1ef8d2b7de)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Paolo Abeni:
"Including fixes from bluetooth and IPsec.
The bridge patch is actually a follow-up to a recent fix in the same
area. We have a pending v6.8 AF_UNIX regression; it should be solved
soon, but not in time for this PR.
Current release - regressions:
- eth: ks8851: Queue RX packets in IRQ handler instead of disabling
BHs
- net: bridge: fix corrupted ethernet header on multicast-to-unicast
Current release - new code bugs:
- xfrm: fix possible bad pointer dereferencing in error path
Previous releases - regressions:
- core: fix out-of-bounds access in ops_init
- ipv6:
- fix potential uninit-value access in __ip6_make_skb()
- fib6_rules: avoid possible NULL dereference in fib6_rule_action()
- tcp: use refcount_inc_not_zero() in tcp_twsk_unique().
- rtnetlink: correct nested IFLA_VF_VLAN_LIST attribute validation
- rxrpc: fix congestion control algorithm
- bluetooth:
- l2cap: fix slab-use-after-free in l2cap_connect()
- msft: fix slab-use-after-free in msft_do_close()
- eth: hns3: fix kernel crash when devlink reload during
initialization
- eth: dsa: mv88e6xxx: add phylink_get_caps for the mv88e6320/21
family
Previous releases - always broken:
- xfrm: preserve vlan tags for transport mode software GRO
- tcp: defer shutdown(SEND_SHUTDOWN) for TCP_SYN_RECV sockets
- eth: hns3: keep using user config after hardware reset"
* tag 'net-6.9-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (47 commits)
net: dsa: mv88e6xxx: read cmode on mv88e6320/21 serdes only ports
net: dsa: mv88e6xxx: add phylink_get_caps for the mv88e6320/21 family
net: hns3: fix kernel crash when devlink reload during initialization
net: hns3: fix port vlan filter not disabled issue
net: hns3: use appropriate barrier function after setting a bit value
net: hns3: release PTP resources if pf initialization failed
net: hns3: change type of numa_node_mask as nodemask_t
net: hns3: direct return when receive a unknown mailbox message
net: hns3: using user configure after hardware reset
net/smc: fix neighbour and rtable leak in smc_ib_find_route()
ipv6: prevent NULL dereference in ip6_output()
hsr: Simplify code for announcing HSR nodes timer setup
ipv6: fib6_rules: avoid possible NULL dereference in fib6_rule_action()
dt-bindings: net: mediatek: remove wrongly added clocks and SerDes
rxrpc: Only transmit one ACK per jumbo packet received
rxrpc: Fix congestion control algorithm
selftests: test_bridge_neigh_suppress.sh: Fix failures due to duplicate MAC
ipv6: Fix potential uninit-value access in __ip6_make_skb()
net: phy: marvell-88q2xxx: add support for Rev B1 and B2
appletalk: Improve handling of broadcast packets
...
|
|
Currently, the DesignWare SPI controller driver supports only host mode.
However, spi2 on the Kendryte K210 SoC supports only target mode,
triggering an error message on e.g. SiPEED MAiXBiT since commit
98d75b9ef282f6b9 ("spi: dw: Drop default number of CS setting"):
dw_spi_mmio 50240000.spi: error -22: problem registering spi host
dw_spi_mmio 50240000.spi: probe with driver dw_spi_mmio failed with error -22
As spi2 rightfully has no "num-cs" property, num_chipselect is now zero,
causing spi_alloc_host() to fail to register the controller. Before,
the driver silently registered an SPI host controller with 4 chip
selects.
Reject target mode early on and warn the user, getting rid of the
error message.
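A sketch of such an early bail-out (the property checked and the message
wording are assumptions based on the description above):

	if (device_property_read_bool(dev, "spi-slave")) {
		dev_warn(dev, "SPI target mode is not supported\n");
		return -ENODEV;
	}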
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Link: https://lore.kernel.org/r/7ae28d83bff7351f34782658ae1bb69cc731693e.1715163113.git.geert+renesas@glider.be
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Other cgroup policies, like bfq and iocost, are lazily initialized when
they are configured for the first time for the device, but blk-throttle
is initialized unconditionally from blkcg_init_disk().
Delay initialization of blk-throttle as well, to save some CPU and
memory overhead if it's not configured.
Note that once it's initialized, it can't be destroyed until disk
removal, even if it's disabled.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20240509121107.3195568-3-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
On the one hand, it has been marked EXPERIMENTAL since 2017, and it
looks like there have been no users, no testers and no developers since
then; it's just not active at all.
On the other hand, even if the config is disabled, there are still many
fields in throtl_grp and throtl_data and many functions that are only
used for throtl low.
Finally, blk-throttle is currently initialized during disk
initialization and destroyed during disk removal, and it exposes many
functions to be called directly from the block layer.
Remove throtl low to make the code much cleaner and follow-up work much
easier.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20240509121107.3195568-2-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Pull ARM fix from Russell King:
- clear stale KASan stack poison when a CPU resumes
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rmk/linux:
ARM: 9381/1: kasan: clear stale stack poison
|
|
D1 contains two pairs of LDOs, "analog" LDOs and "system" LDOs. They are
similar and can share a driver, but only the system LDOs have a DT
binding defined so far.
The system LDOs have a single linear range. The voltage step is not an
integer, so a custom .list_voltage is needed to get the rounding right.
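As an illustration of why a custom callback helps with a fractional
step, a hypothetical .list_voltage could compute in finer-than-microvolt
units and round at the end (the base, step and names below are made up,
not the real D1 values):

static int example_ldo_list_voltage(struct regulator_dev *rdev,
				    unsigned int selector)
{
	if (selector >= rdev->desc->n_voltages)
		return -EINVAL;

	/* e.g. 800 mV base, 10.9 mV step, computed in 0.1 uV units */
	return DIV_ROUND_CLOSEST(8000000 + selector * 109000, 10);
}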
Signed-off-by: Samuel Holland <samuel@sholland.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Jernej Skrabec <jernej.skrabec@gmail.com>
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Link: https://lore.kernel.org/r/20240509153107.438220-3-wens@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
The Allwinner D1 SoC contains two pairs of in-package LDOs. The pair of
"system" LDOs is for general purpose use. LDOA generally powers the
board's 1.8 V rail. LDOB powers the in-package DRAM, where applicable.
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Samuel Holland <samuel@sholland.org>
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Link: https://lore.kernel.org/r/20240509153107.438220-2-wens@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
regulator_get() may sometimes be called more than once for the same
consumer device, something which before commit dbe954d8f163 ("regulator:
core: Avoid debugfs: Directory ... already present! error") resulted in
errors being logged.
A couple of recent commits broke the handling of such cases so that
attributes are now erroneously created in the debugfs root directory the
second time a regulator is requested and the log is filled with errors
like:
debugfs: File 'uA_load' in directory '/' already present!
debugfs: File 'min_uV' in directory '/' already present!
debugfs: File 'max_uV' in directory '/' already present!
debugfs: File 'constraint_flags' in directory '/' already present!
on any further calls.
Fixes: 2715bb11cfff ("regulator: core: Fix more error checking for debugfs_create_dir()")
Fixes: 08880713ceec ("regulator: core: Streamline debugfs operations")
Cc: stable@vger.kernel.org
Cc: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
Link: https://lore.kernel.org/r/20240509133304.8883-1-johan+linaro@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
For DSP_A, data is one BCK cycle behind the LRCK trigger edge; for
DSP_B, this delay doesn't exist. Fix the delay configuration to match
the standard.
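A sketch of the corrected mapping (the register field name is an
assumption; the point is that only DSP_A carries the one-BCK delay):

	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
	case SND_SOC_DAIFMT_DSP_A:
		tdm_con |= TDM_DATA_DELAY_1BCK;  /* data one BCK after the LRCK edge */
		break;
	case SND_SOC_DAIFMT_DSP_B:
		tdm_con &= ~TDM_DATA_DELAY_1BCK; /* no delay */
		break;
	}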
Fixes: 52fcd65414abfc ("ASoC: mediatek: mt8192: support tdm in platform driver")
Signed-off-by: Hsin-Te Yuan <yuanhsinte@chromium.org>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Reviewed-by: Chen-Yu Tsai <wenst@chromium.org>
Link: https://lore.kernel.org/r/20240509-8192-tdm-v1-1-530b54645763@chromium.org
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Pull dentry leak fix from Al Viro:
"Dentry leak fix in the qibfs driver that I forgot to send a pull
request for ;-/
My apologies - it actually sat in vfs.git#fixes for more than two
months..."
* tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
qibfs: fix dentry leak
|
|
Update the documentation for trusted and encrypted KEYS with DCP as new
trust source:
- Describe security properties of DCP trust source
- Describe key usage
- Document blob format
Co-developed-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
Co-developed-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Signed-off-by: David Gstir <david@sigma-star.at>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
|
|
Document the kernel parameters trusted.dcp_use_otp_key
and trusted.dcp_skip_zk_test for DCP-backed trusted keys.
Co-developed-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
Co-developed-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Signed-off-by: David Gstir <david@sigma-star.at>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
|
|
This covers trusted keys backed by NXP's DCP (Data Co-Processor) chip
found in smaller i.MX SoCs.
Signed-off-by: David Gstir <david@sigma-star.at>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
|
|
DCP (Data Co-Processor) is the little brother of NXP's CAAM IP.
Beside of accelerated crypto operations, it also offers support for
hardware-bound keys. Using this feature it is possible to implement a blob
mechanism similar to what CAAM offers. Unlike on CAAM, constructing and
parsing the blob has to happen in software (i.e. the kernel).
The software-based blob format used by DCP trusted keys encrypts
the payload using AES-128-GCM with a freshly generated random key and nonce.
The random key itself is AES-128-ECB encrypted using the DCP unique
or OTP key.
The DCP trusted key blob format is:
/*
* struct dcp_blob_fmt - DCP BLOB format.
*
* @fmt_version: Format version, currently being %1
* @blob_key: Random AES 128 key which is used to encrypt @payload,
* @blob_key itself is encrypted with OTP or UNIQUE device key in
* AES-128-ECB mode by DCP.
* @nonce: Random nonce used for @payload encryption.
* @payload_len: Length of the plain text @payload.
* @payload: The payload itself, encrypted using AES-128-GCM and @blob_key,
* GCM auth tag of size AES_BLOCK_SIZE is attached at the end of it.
*
* The total size of a DCP BLOB is sizeof(struct dcp_blob_fmt) + @payload_len +
* AES_BLOCK_SIZE.
*/
struct dcp_blob_fmt {
__u8 fmt_version;
__u8 blob_key[AES_KEYSIZE_128];
__u8 nonce[AES_KEYSIZE_128];
__le32 payload_len;
__u8 payload[];
} __packed;
By default the unique key is used. It is also possible to use the
OTP key. While the unique key should be unique, it is not documented how
this key is derived. Therefore selecting the OTP key is supported as
well via the use_otp_key module parameter.
Co-developed-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
Co-developed-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Signed-off-by: David Gstir <david@sigma-star.at>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
|
|
Enabling trusted keys requires at least one trust source implementation
(currently TPM, TEE or CAAM) to be enabled. Currently, this is
done by checking each trust source's config option individually.
This does not scale when more trust sources like the one for DCP
are added, because the condition will get long and hard to read.
Add config HAVE_TRUSTED_KEYS which is set to true by each trust source
once it is enabled, and adapt the check for having at least one active trust
source to use this option. Whenever a new trust source is added, it now
needs to select HAVE_TRUSTED_KEYS.
Signed-off-by: David Gstir <david@sigma-star.at>
Tested-by: Jarkko Sakkinen <jarkko@kernel.org> # for TRUSTED_KEYS_TPM
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
|
|
DCP (Data Co-Processor) is able to derive private keys from a fused
random seed, which can be referenced by handle but not accessed by
the CPU. Similarly, DCP is able to store arbitrary keys in four
dedicated key slots located in its secure memory area (internal SRAM).
These keys can be used to perform AES encryption.
Expose these derived keys and key slots through the crypto API via their
handle. The main purpose is to add DCP-backed trusted keys. Other
use cases are possible too (see similar existing paes implementations),
but these should carefully be evaluated as e.g. enabling AF_ALG will
give userspace full access to use keys. In scenarios with untrustworthy
userspace, this will enable en-/decryption oracles.
Co-developed-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
Co-developed-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Signed-off-by: David Gstir <david@sigma-star.at>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
|
|
* for-next/tlbi:
arm64: tlb: Allow range operation for MAX_TLBI_RANGE_PAGES
arm64: tlb: Improve __TLBI_VADDR_RANGE()
arm64: tlb: Fix TLBI RANGE operand
|
|
* for-next/selftests:
kselftest: arm64: Add a null pointer check
kselftest/arm64: Remove unused parameters in abi test
|
|
* for-next/perf: (41 commits)
arm64: Add USER_STACKTRACE support
drivers/perf: hisi: hns3: Actually use devm_add_action_or_reset()
drivers/perf: hisi: hns3: Fix out-of-bound access when valid event group
drivers/perf: hisi_pcie: Fix out-of-bound access when valid event group
perf/arm-spe: Assign parents for event_source device
perf/arm-smmuv3: Assign parents for event_source device
perf/arm-dsu: Assign parents for event_source device
perf/arm-dmc620: Assign parents for event_source device
perf/arm-ccn: Assign parents for event_source device
perf/arm-cci: Assign parents for event_source device
perf/alibaba_uncore: Assign parents for event_source device
perf/arm_pmu: Assign parents for event_source devices
perf/imx_ddr: Assign parents for event_source devices
perf/qcom: Assign parents for event_source devices
Documentation: qcom-pmu: Use /sys/bus/event_source/devices paths
perf/riscv: Assign parents for event_source devices
perf/thunderx2: Assign parents for event_source devices
Documentation: thunderx2-pmu: Use /sys/bus/event_source/devices paths
perf/xgene: Assign parents for event_source devices
Documentation: xgene-pmu: Use /sys/bus/event_source/devices paths
...
|
|
* for-next/mm:
arm64/mm: Fix pud_user_accessible_page() for PGTABLE_LEVELS <= 2
arm64/mm: Add uffd write-protect support
arm64/mm: Move PTE_PRESENT_INVALID to overlay PTE_NG
arm64/mm: Remove PTE_PROT_NONE bit
arm64/mm: generalize PMD_PRESENT_INVALID for all levels
arm64: mm: Don't remap pgtables for allocate vs populate
arm64: mm: Batch dsb and isb when populating pgtables
arm64: mm: Don't remap pgtables per-cont(pte|pmd) block
|
|
* for-next/misc:
arm64: simplify arch_static_branch/_jump function
arm64: Add the arm64.no32bit_el0 command line option
arm64: defer clearing DAIF.D
arm64: assembler: update stale comment for disable_step_tsk
arm64/sysreg: Update PIE permission encodings
arm64: Add Neoverse-V2 part
arm64: Remove unnecessary irqflags alternative.h include
|
|
* for-next/kbuild:
arm64: boot: Support Flat Image Tree
arm64: Add BOOT_TARGETS variable
|
|
* for-next/acpi:
arm64: acpi: Honour firmware_signature field of FACS, if it exists
ACPICA: Detect FACS even for hardware reduced platforms
|
|
As 'shrink_type' is exported, the module prefix 'jbd2' is added to
distinguish it from memory reclamation.
Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Link: https://lore.kernel.org/r/20240407065355.1528580-3-yebin10@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|
|
__jbd2_journal_clean_checkpoint_list()
"enum shrink_type" can clearly express the meaning of the parameter of
__jbd2_journal_clean_checkpoint_list(), and there is no need to use the
bool type.
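A sketch of the resulting interface (the enum member names follow the
series description and may not match the final code exactly):

enum shrink_type { SHRINK_DESTROY, SHRINK_BUSY_STOP, SHRINK_BUSY_SKIP };

void __jbd2_journal_clean_checkpoint_list(journal_t *journal,
					  enum shrink_type type);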
Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Link: https://lore.kernel.org/r/20240407065355.1528580-2-yebin10@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|
|
The recent change to use pud_valid() as part of the implementation of
pud_user_accessible_page() fails to build when PGTABLE_LEVELS <= 2
because pud_valid() is not defined in that case.
Fix this by defining pud_valid() to false for this case. This means that
pud_user_accessible_page() will correctly always return false for this
config.
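A sketch of the fallback stub (its exact placement among the existing
PGTABLE_LEVELS <= 2 definitions is an assumption):

#define pud_valid(pud)		false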
Fixes: f0f5863a0fb0 ("arm64/mm: Remove PTE_PROT_NONE bit")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202405082221.43rfWxz5-lkp@intel.com/
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Link: https://lore.kernel.org/r/20240509122844.563320-1-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
System work queues are shared; use a dedicated work queue for G2H
processing to avoid G2H processing getting blocked behind system tasks.
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: <stable@vger.kernel.org>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240506034758.3697397-1-matthew.brost@intel.com
(cherry picked from commit 50aec9665e0babd62b9eee4e613d9a1ef8d2b7de)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
|
|
The initialization via drmm_mutex_init can fail, so we need to check the
return code and escalate the failure.
The mutex initialization has been moved after all the other init steps
that can't fail, so we're always guaranteed to have those done and don't
have to check in the cleanup code.
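A sketch of the resulting pattern (the mutex and surrounding structures
are illustrative):

	/* all init steps that cannot fail have already run at this point */
	err = drmm_mutex_init(&xe->drm, &pc->freq_lock);
	if (err)
		return err;

	return 0;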
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240321195512.274210-1-daniele.ceraolospurio@intel.com
(cherry picked from commit b4abeb5545bb3ddcdda3c19067680ad0b2259be4)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
|
|
In the following concurrency we will access the uninitialized rs->lock:
ext4_fill_super
ext4_register_sysfs
// sysfs registered msg_ratelimit_interval_ms
// Other processes modify rs->interval to
// non-zero via msg_ratelimit_interval_ms
ext4_orphan_cleanup
ext4_msg(sb, KERN_INFO, "Errors on filesystem, "
__ext4_msg
___ratelimit(&(EXT4_SB(sb)->s_msg_ratelimit_state)
if (!rs->interval) // do nothing if interval is 0
return 1;
raw_spin_trylock_irqsave(&rs->lock, flags)
raw_spin_trylock(lock)
_raw_spin_trylock
__raw_spin_trylock
spin_acquire(&lock->dep_map, 0, 1, _RET_IP_)
lock_acquire
__lock_acquire
register_lock_class
assign_lock_key
dump_stack();
ratelimit_state_init(&sbi->s_msg_ratelimit_state, 5 * HZ, 10);
raw_spin_lock_init(&rs->lock);
// init rs->lock here
and get the following dump_stack:
=========================================================
INFO: trying to register non-static key.
The code is fine but needs lockdep annotation, or maybe
you didn't initialize this object before use?
turning off the locking correctness validator.
CPU: 12 PID: 753 Comm: mount Tainted: G E 6.7.0-rc6-next-20231222 #504
[...]
Call Trace:
dump_stack_lvl+0xc5/0x170
dump_stack+0x18/0x30
register_lock_class+0x740/0x7c0
__lock_acquire+0x69/0x13a0
lock_acquire+0x120/0x450
_raw_spin_trylock+0x98/0xd0
___ratelimit+0xf6/0x220
__ext4_msg+0x7f/0x160 [ext4]
ext4_orphan_cleanup+0x665/0x740 [ext4]
__ext4_fill_super+0x21ea/0x2b10 [ext4]
ext4_fill_super+0x14d/0x360 [ext4]
[...]
=========================================================
Normally interval is 0 until s_msg_ratelimit_state is initialized, so
___ratelimit() does nothing. But registering sysfs precedes initializing
rs->lock, so it is possible to change rs->interval to a non-zero value
via the msg_ratelimit_interval_ms interface of sysfs while rs->lock is
uninitialized, and then a call to ext4_msg triggers the problem by
accessing an uninitialized rs->lock. Therefore register sysfs after all
initializations are complete to avoid such problems.
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20240102133730.1098120-1-libaokun1@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
|