2022-01-06  net/mlx5: Move affinity assignment into irq_request  (Shay Drory)
Move affinity binding of the IRQ into the irq_request function in order to bind the IRQ before inserting it into the xarray. After this change, the IRQ is ready for use when inserted into the xarray. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Introduce control IRQ request API  (Shay Drory)
Currently, the IRQ layer has separate flows for ctrl and comp IRQs, and the distinction between ctrl and comp IRQs is made inside the IRQ layer. In order to ease the coding and maintenance of the IRQ layer, introduce a new API for requesting control IRQs - mlx5_ctrl_irq_request(struct mlx5_core_dev *dev). Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: mlx5e_hv_vhca_stats_create return type to void  (Saeed Mahameed)
Callers of this function ignore its return value and, as reported by Wang Qing, one of the return paths returns a positive value. Since the return value is ignored anyway, change the return type of the function to void. Reported-by: Wang Qing <wangqing@vivo.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-07  random: don't reset crng_init_cnt on urandom_read()  (Jann Horn)
At the moment, urandom_read() (used for /dev/urandom) resets crng_init_cnt to zero when it is called at crng_init<2. This is inconsistent: We do it for /dev/urandom reads, but not for the equivalent getrandom(GRND_INSECURE). (And worse, as Jason pointed out, we're only doing this as long as maxwarn>0.) crng_init_cnt is only read in crng_fast_load(); it is relevant at crng_init==0 for determining when to switch to crng_init==1 (and where in the RNG state array to write). As far as I understand: - crng_init==0 means "we have nothing, we might just be returning the same exact numbers on every boot on every machine, we don't even have non-cryptographic randomness; we should shove every bit of entropy we can get into the RNG immediately" - crng_init==1 means "well we have something, it might not be cryptographic, but at least we're not gonna return the same data every time or whatever, it's probably good enough for TCP and ASLR and stuff; we now have time to build up actual cryptographic entropy in the input pool" - crng_init==2 means "this is supposed to be cryptographically secure now, but we'll keep adding more entropy just to be sure". The current code means that if someone is pulling data from /dev/urandom fast enough at crng_init==0, we'll keep resetting crng_init_cnt, and we'll never make forward progress to crng_init==1. It seems to be intended to prevent an attacker from bruteforcing the contents of small individual RNG inputs on the way from crng_init==0 to crng_init==1, but that's misguided; crng_init==1 isn't supposed to provide proper cryptographic security anyway, RNG users who care about getting secure RNG output have to wait until crng_init==2. This code was inconsistent, and it probably made things worse - just get rid of it. Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: avoid superfluous call to RDRAND in CRNG extraction  (Jason A. Donenfeld)
RDRAND is not fast. RDRAND is actually quite slow. We've known this for a while, which is why functions like get_random_u{32,64} were converted to use batching of our ChaCha-based CRNG instead. Yet CRNG extraction still includes a call to RDRAND, in the hot path of every call to get_random_bytes(), /dev/urandom, and getrandom(2). This call to RDRAND here seems quite superfluous. CRNG is already extracting things based on a 256-bit key, based on good entropy, which is then reseeded periodically, updated, backtrack-mutated, and so forth. The CRNG extraction construction is something that we're already relying on to be secure and solid. If it's not, that's a serious problem, and it's unlikely that mixing in a measly 32 bits from RDRAND is going to alleviate things. And in the case where the CRNG doesn't have enough entropy yet, we're already initializing the ChaCha key row with RDRAND in crng_init_try_arch_early(). Removing the call to RDRAND improves performance on an i7-11850H by 370%. In other words, the vast majority of the work done by extract_crng() prior to this commit was devoted to fetching 32 bits of RDRAND. Reviewed-by: Theodore Ts'o <tytso@mit.edu> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: early initialization of ChaCha constants  (Dominik Brodowski)
Previously, the ChaCha constants for the primary pool were only initialized in crng_initialize_primary(), called by rand_initialize(). However, some randomness is actually extracted from the primary pool beforehand, e.g. by kmem_cache_create(). Therefore, statically initialize the ChaCha constants for the primary pool. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: <linux-crypto@vger.kernel.org> Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: use IS_ENABLED(CONFIG_NUMA) instead of ifdefs  (Jason A. Donenfeld)
Rather than an awkward combination of ifdefs and __maybe_unused, we can ensure more source gets parsed, regardless of the configuration, by using IS_ENABLED for the CONFIG_NUMA conditional code. This makes things cleaner and easier to follow. I've confirmed that on !CONFIG_NUMA, we don't wind up with excess code by accident; the generated object file is the same. Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
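For illustration, a minimal sketch of the idiom this commit switches to (the helper and its branch contents are hypothetical, not the actual random.c code): with IS_ENABLED(), the NUMA-only branch is always parsed and type-checked, and the compiler discards it as dead code when CONFIG_NUMA is off, so no #ifdef or __maybe_unused annotations are needed.

static void init_pools_sketch(void)
{
        if (IS_ENABLED(CONFIG_NUMA)) {
                /* per-NUMA-node pool setup would go here; eliminated as
                 * dead code when CONFIG_NUMA is disabled */
        }
        /* common setup continues unconditionally */
}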
2022-01-07  random: harmonize "crng init done" messages  (Dominik Brodowski)
We print out "crng init done" for !TRUST_CPU, so we should also print out the same for TRUST_CPU. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: mix bootloader randomness into pool  (Jason A. Donenfeld)
If we're trusting bootloader randomness, crng_fast_load() is called by add_hwgenerator_randomness(), which sets us to crng_init==1. However, usually it is only called once for an initial 64-byte push, so bootloader entropy will not mix any bytes into the input pool. So it's conceivable that crng_init==1 when crng_initialize_primary() is called later, but then the input pool is empty. When that happens, the crng state key will be overwritten with extracted output from the empty input pool. That's bad. In contrast, if we're not trusting bootloader randomness, we call crng_slow_load() *and* we call mix_pool_bytes(), so that later crng_initialize_primary() isn't drawing on nothing. In order to prevent crng_initialize_primary() from extracting an empty pool, have the trusted bootloader case mirror that of the untrusted bootloader case, mixing the input into the pool. [linux@dominikbrodowski.net: rewrite commit message] Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: do not throw away excess input to crng_fast_load  (Jason A. Donenfeld)
When crng_fast_load() is called by add_hwgenerator_randomness(), we currently will advance to crng_init==1 once we've acquired 64 bytes, and then throw away the rest of the buffer. Usually, that is not a problem: When add_hwgenerator_randomness() gets called via EFI or DT during setup_arch(), there won't be any IRQ randomness. Therefore, the 64 bytes passed by EFI exactly matches what is needed to advance to crng_init==1. Usually, DT seems to pass 64 bytes as well -- with one notable exception being kexec, which hands over 128 bytes of entropy to the kexec'd kernel. In that case, we'll advance to crng_init==1 once 64 of those bytes are consumed by crng_fast_load(), but won't continue onward feeding in bytes to progress to crng_init==2. This commit fixes the issue by feeding any leftover bytes into the next phase in add_hwgenerator_randomness(). [linux@dominikbrodowski.net: rewrite commit message] Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: do not re-init if crng_reseed completes before primary init  (Jason A. Donenfeld)
If the bootloader supplies sufficient material and crng_reseed() is called very early on, but not so early that wqs aren't available yet, then we might transition to crng_init==2 before rand_initialize()'s call to crng_initialize_primary() is made. Then, when crng_initialize_primary() is called, if we're trusting the CPU's RDRAND instructions, we'll needlessly reinitialize the RNG and emit a message about it. This is mostly harmless, as numa_crng_init() will allocate and then free what it just allocated, and excessive calls to invalidate_batched_entropy() aren't so harmful. But it is funky and the extra message is confusing, so avoid the re-initialization altogether by checking for crng_init < 2 in crng_initialize_primary(), just as we already do in crng_reseed(). Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: fix crash on multiple early calls to add_bootloader_randomness()  (Dominik Brodowski)
Currently, if CONFIG_RANDOM_TRUST_BOOTLOADER is enabled, multiple calls to add_bootloader_randomness() are broken and can cause a NULL pointer dereference, as noted by Ivan T. Ivanov. This is not only a hypothetical problem, as qemu on arm64 may provide bootloader entropy via EFI and via devicetree. On the first call to add_hwgenerator_randomness(), crng_fast_load() is executed, and if the seed is long enough, crng_init will be set to 1. On subsequent calls to add_bootloader_randomness() and then to add_hwgenerator_randomness(), crng_fast_load() will be skipped. Instead, wait_event_interruptible() and then credit_entropy_bits() will be called. If the entropy count for that second seed is large enough, that proceeds to crng_reseed(). However, both wait_event_interruptible() and crng_reseed() depend (at least in numa_crng_init()) on workqueues. Therefore, test whether system_wq is already initialized, which is a sufficient indicator that workqueue_init_early() has progressed far enough. If we wind up hitting the !system_wq case, we later want to do what would have been done there when wqs are up, so set a flag, and do that work later from the rand_initialize() call. Reported-by: Ivan T. Ivanov <iivanov@suse.de> Fixes: 18b915ac6b0a ("efi/random: Treat EFI_RNG_PROTOCOL output as bootloader randomness") Cc: stable@vger.kernel.org Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> [Jason: added crng_need_done state and related logic.] Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: do not sign extend bytes for rotation when mixing  (Jason A. Donenfeld)
By using `char` instead of `unsigned char`, certain platforms will sign extend the byte when `w = rol32(*bytes++, input_rotate)` is called, meaning that bit 7 is overrepresented when mixing. This isn't a real problem (unless the mixer itself is already broken) since it's still invertible, but it's not quite correct either. Fix this by using an explicit unsigned type. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
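A minimal sketch of the hazard being fixed (the helper is illustrative, not the actual mixing code): with a plain `char`, a byte of 0x80 or above is sign-extended during integer promotion before rol32() ever sees it.

static u32 mix_byte_sketch(const void *p, int input_rotate)
{
        const char *s = p;              /* plain char: may be signed */
        const unsigned char *u = p;     /* explicit unsigned: correct */
        u32 bad  = rol32(*s, input_rotate);  /* 0x80 promotes to 0xffffff80 */
        u32 good = rol32(*u, input_rotate);  /* 0x80 stays 0x00000080 */

        return bad ^ good;      /* nonzero for any input byte >= 0x80 */
}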
2022-01-07  random: use BLAKE2s instead of SHA1 in extraction  (Jason A. Donenfeld)
This commit addresses one of the lower hanging fruits of the RNG: its usage of SHA1. BLAKE2s is generally faster, and certainly more secure, than SHA1, which has [1] been [2] really [3] very [4] broken [5]. Additionally, the current construction in the RNG doesn't use the full SHA1 function, as specified, and allows overwriting the IV with RDRAND output in an undocumented way, even in the case when RDRAND isn't set to "trusted", which means potential malicious IV choices. And its short length means that keeping only half of it secret when feeding back into the mixer gives us only 2^80 bits of forward secrecy. In other words, not only is the choice of hash function dated, but the use of it isn't really great either. This commit aims to fix both of these issues while also keeping the general structure and semantics as close to the original as possible. Specifically: a) Rather than overwriting the hash IV with RDRAND, we put it into BLAKE2's documented "salt" and "personal" fields, which were specifically created for this type of usage. b) Since this function feeds the full hash result back into the entropy collector, we only return from it half the length of the hash, just as it was done before. This increases the construction's forward secrecy from 2^80 to a much more comfortable 2^128. c) Rather than using the raw "sha1_transform" function alone, we instead use the full proper BLAKE2s function, with finalization. This also has the advantage of supplying 16 bytes at a time rather than SHA1's 10 bytes, which, in addition to having a faster compression function to begin with, means faster extraction in general. On an Intel i7-11850H, this commit makes initial seeding around 131% faster. BLAKE2s itself has the nice property of internally being based on the ChaCha permutation, which the RNG is already using for expansion, so there shouldn't be any issue with newness, funkiness, or surprising CPU behavior, since it's based on something already in use. [1] https://eprint.iacr.org/2005/010.pdf [2] https://www.iacr.org/archive/crypto2005/36210017/36210017.pdf [3] https://eprint.iacr.org/2015/967.pdf [4] https://shattered.io/static/shattered.pdf [5] https://www.usenix.org/system/files/sec20-leurent.pdf Reviewed-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
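As a rough, hedged sketch of the extraction shape described above, using the kernel's one-shot blake2s() helper from <crypto/blake2s.h>; this is not the actual extract_buf() replacement and it omits the salt/personal handling the commit mentions.

static void extract_sketch(const u8 *pool, size_t poolsize,
                           u8 out[BLAKE2S_HASH_SIZE / 2])
{
        u8 digest[BLAKE2S_HASH_SIZE];

        blake2s(digest, pool, NULL, sizeof(digest), poolsize, 0);
        /* the full digest would be fed back into the input pool here,
         * while only half of it is handed out, for ~2^128 forward secrecy */
        memcpy(out, digest, BLAKE2S_HASH_SIZE / 2);
        memzero_explicit(digest, sizeof(digest));
}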
2022-01-07  lib/crypto: blake2s: include as built-in  (Jason A. Donenfeld)
In preparation for using blake2s in the RNG, we change the way that it is wired-in to the build system. Instead of using ifdefs to select the right symbol, we use weak symbols. And because ARM doesn't need the generic implementation, we make the generic one default only if an arch library doesn't need it already, and then have arch libraries that do need it opt-in. So that the arch libraries can remain tristate rather than bool, we then split the shash part from the glue code. Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: linux-kbuild@vger.kernel.org Cc: linux-crypto@vger.kernel.org Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: fix data race on crng init time  (Eric Biggers)
_extract_crng() does plain loads of crng->init_time and crng_global_init_time, which causes undefined behavior if crng_reseed() and RNDRESEEDCRNG modify these concurrently. Use READ_ONCE() and WRITE_ONCE() to make the behavior defined. Don't fix the race on crng->init_time by protecting it with crng->lock, since it's not a problem for duplicate reseedings to occur. I.e., the lockless access with READ_ONCE() is fine. Fixes: d848e5f8e1eb ("random: add new ioctl RNDRESEEDCRNG") Fixes: e192be9d9a30 ("random: replace non-blocking pool with a Chacha20-based CRNG") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
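A hedged sketch of the access pattern after such a fix (field and macro names follow the commit's description of random.c; the helper itself is hypothetical):

static bool reseed_due_sketch(struct crng_state *crng)
{
        /* plain load replaced by READ_ONCE(): crng_reseed() may store to
         * init_time concurrently via WRITE_ONCE() */
        unsigned long init_time = READ_ONCE(crng->init_time);

        return time_after(jiffies, init_time + CRNG_RESEED_INTERVAL);
}

/* writer side, e.g. in crng_reseed() or the RNDRESEEDCRNG ioctl:
 *         WRITE_ONCE(crng->init_time, jiffies);
 */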
2022-01-07  random: fix data race on crng_node_pool  (Eric Biggers)
extract_crng() and crng_backtrack_protect() load crng_node_pool with a plain load, which causes undefined behavior if do_numa_crng_init() modifies it concurrently. Fix this by using READ_ONCE(). Note: as per the previous discussion https://lore.kernel.org/lkml/20211219025139.31085-1-ebiggers@kernel.org/T/#u, READ_ONCE() is believed to be sufficient here, and it was requested that it be used here instead of smp_load_acquire(). Also change do_numa_crng_init() to set crng_node_pool using cmpxchg_release() instead of mb() + cmpxchg(), as the former is sufficient here but is more lightweight. Fixes: 1e7f583af67b ("random: make /dev/urandom scalable for silly userspace programs") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
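A hedged sketch of the publish/consume pattern described above (the helper names are invented; the real code lives in do_numa_crng_init() and extract_crng()):

static void publish_node_pool_sketch(struct crng_state **pool)
{
        /* release semantics order the pool's initialization before the
         * pointer becomes visible, so no separate mb() is required */
        if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL)
                kfree(pool);    /* another CPU won the race */
}

static struct crng_state *select_crng_sketch(int nid)
{
        struct crng_state **pools = READ_ONCE(crng_node_pool);

        return pools ? pools[nid] : &primary_crng;
}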
2022-01-07  irq: remove unused flags argument from __handle_irq_event_percpu()  (Sebastian Andrzej Siewior)
The __IRQF_TIMER bit from the flags argument was used in add_interrupt_randomness() to distinguish the timer interrupt from other interrupts. This is no longer the case. Remove the flags argument from __handle_irq_event_percpu(). Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: remove unused irq_flags argument from add_interrupt_randomness()  (Sebastian Andrzej Siewior)
Since commit ee3e00e9e7101 ("random: use registers from interrupted code for CPU's w/o a cycle counter") the irq_flags argument is no longer used. Remove unused irq_flags. Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Dexuan Cui <decui@microsoft.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Haiyang Zhang <haiyangz@microsoft.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: K. Y. Srinivasan <kys@microsoft.com> Cc: Stephen Hemminger <sthemmin@microsoft.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wei Liu <wei.liu@kernel.org> Cc: linux-hyperv@vger.kernel.org Cc: x86@kernel.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Wei Liu <wei.liu@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  random: document add_hwgenerator_randomness() with other input functions  (Mark Brown)
The section at the top of random.c which documents the input functions available does not document add_hwgenerator_randomness() which might lead a reader to overlook it. Add a brief note about it. Signed-off-by: Mark Brown <broonie@kernel.org> [Jason: reorganize position of function in doc comment and also document add_bootloader_randomness() while we're at it.] Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07  MAINTAINERS: add git tree for random.c  (Jason A. Donenfeld)
This is handy not just for humans, but also so that the 0-day bot can automatically test posted mailing list patches against the right tree. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-06  bpf/selftests: Test bpf_d_path on rdonly_mem.  (Hao Luo)
The second parameter of bpf_d_path() can only accept writable memory. Rdonly_mem obtained from bpf_per_cpu_ptr() cannot be passed into bpf_d_path for modification. This patch adds a selftest to verify this behavior. Signed-off-by: Hao Luo <haoluo@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220106205525.2116218-1-haoluo@google.com
2022-01-06  libbpf: Add documentation for bpf_map batch operations  (Grant Seltzer)
This adds documentation for: - bpf_map_delete_batch() - bpf_map_lookup_batch() - bpf_map_lookup_and_delete_batch() - bpf_map_update_batch() This also updates the public API so that the `keys` parameter of `bpf_map_delete_batch()`, and both the `keys` and `values` parameters of `bpf_map_update_batch()`, are const. Signed-off-by: Grant Seltzer <grantseltzer@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220106201304.112675-1-grantseltzer@gmail.com
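A hedged userspace usage sketch of the batch API being documented; the map fd, the u32 key/value layout, and the element count are assumptions made for the example.

#include <errno.h>
#include <bpf/bpf.h>

int demo_batch_ops(int map_fd)
{
        __u32 keys[64], values[64], count = 64, out_batch = 0;
        LIBBPF_OPTS(bpf_map_batch_opts, opts);
        int err, i;

        for (i = 0; i < 64; i++) {
                keys[i] = i;
                values[i] = i * i;
        }

        /* keys/values are const-qualified in the updated prototypes */
        err = bpf_map_update_batch(map_fd, keys, values, &count, &opts);
        if (err)
                return err;

        /* NULL in_batch starts iteration from the beginning; out_batch is the
         * cursor to pass back in on the next call; ENOENT marks end of map */
        count = 64;
        err = bpf_map_lookup_batch(map_fd, NULL, &out_batch, keys, values,
                                   &count, &opts);
        return (err && errno == ENOENT) ? 0 : err;
}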
2022-01-06  Merge tag 'trace-v5.16-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace  (Linus Torvalds)
Pull tracing fixes from Steven Rostedt: "Three minor tracing fixes: - Fix missing prototypes in sample module for direct functions - Fix check of valid buffer in get_trace_buf() - Fix annotations of percpu pointers" * tag 'trace-v5.16-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: tracing: Tag trace_percpu_buffer as a percpu pointer tracing: Fix check for trace_percpu_buffer validity in get_trace_buf() ftrace/samples: Add missing prototypes direct functions
2022-01-06  selftests/bpf: Don't rely on preserving volatile in PT_REGS macros in loop3  (Andrii Nakryiko)
The PT_REGS*() macros on some architectures force-cast struct pt_regs to other types (user_pt_regs, etc.) and might drop volatile modifiers, if any. Volatile isn't really required, as the pt_regs value isn't supposed to change during the BPF program run, so this is correct behavior. But progs/loop3.c relies on that volatile modifier to ensure that the loop is preserved. Fix loop3.c by declaring the i and sum variables as volatile instead. This preserves the loop and makes the test pass on all architectures (including s390x, which is currently broken). Fixes: 3cc31d794097 ("libbpf: Normalize PT_REGS_xxx() macro definitions") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220106205156.955373-1-andrii@kernel.org
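A hedged BPF-side sketch of the approach (not a verbatim copy of progs/loop3.c; the section name and loop body are illustrative): moving volatile onto the loop-carried locals keeps the loop even if a PT_REGS_*() cast drops the qualifier from the ctx pointer.

#include <linux/types.h>
#include <bpf/bpf_helpers.h>

SEC("raw_tracepoint/consume_skb")
int loop_sketch(void *ctx)
{
        volatile __u64 i, sum = 0;      /* volatile keeps the loop intact */

        for (i = 0; i < 10; i++)
                sum += i;
        return sum;
}

char _license[] SEC("license") = "GPL";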
2022-01-06  selftests: cgroup: Test open-time cgroup namespace usage for migration checks  (Tejun Heo)
When a task is writing to an fd opened by a different task, the perm check should use the cgroup namespace of the latter task. Add a test for it. Tested-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2022-01-06  selftests: cgroup: Test open-time credential usage for migration checks  (Tejun Heo)
When a task is writing to an fd opened by a different task, the perm check should use the credentials of the latter task. Add a test for it. Tested-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2022-01-06  selftests: cgroup: Make cg_create() use 0755 for permission instead of 0644  (Tejun Heo)
0644 is an odd permission for creating a cgroup, which is a directory. Use the regular 0755 instead. This is necessary for the euid switching test case. Reviewed-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2022-01-06  cgroup: Use open-time cgroup namespace for process migration perm checks  (Tejun Heo)
cgroup process migration permission checks are performed at write time as whether a given operation is allowed or not is dependent on the content of the write - the PID. This currently uses current's cgroup namespace, which is a potential security weakness as it may allow scenarios where a less privileged process tricks a more privileged one into writing into a fd that it created. This patch makes cgroup remember the cgroup namespace at the time of open and uses it for migration permission checks instead of current's. Note that this only applies to cgroup2 as cgroup1 doesn't have namespace support. This also fixes a use-after-free bug on cgroupns reported in https://lore.kernel.org/r/00000000000048c15c05d0083397@google.com Note that backporting this fix also requires the preceding patch. Reported-by: "Eric W. Biederman" <ebiederm@xmission.com> Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org> Cc: Michal Koutný <mkoutny@suse.com> Cc: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Michal Koutný <mkoutny@suse.com> Reported-by: syzbot+50f5cf33a284ce738b62@syzkaller.appspotmail.com Link: https://lore.kernel.org/r/00000000000048c15c05d0083397@google.com Fixes: 5136f6365ce3 ("cgroup: implement "nsdelegate" mount option") Signed-off-by: Tejun Heo <tj@kernel.org>
2022-01-06  cgroup: Allocate cgroup_file_ctx for kernfs_open_file->priv  (Tejun Heo)
of->priv is currently used by each interface file implementation to store private information. This patch collects the current two private data usages into struct cgroup_file_ctx, which is allocated and freed by the common path. This allows generic private data which applies to multiple files, which will be used in the following patch. Note that the cgroup_procs iterator is now embedded as procs.iter in the new cgroup_file_ctx so that it doesn't need to be allocated and freed separately. v2: union dropped from cgroup_file_ctx and the procs iterator is embedded in cgroup_file_ctx as suggested by Linus. v3: Michal pointed out that cgroup1's procs pidlist uses of->priv too. Converted. Didn't change to embedded allocation as cgroup1 pidlists get stored for caching. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Michal Koutný <mkoutny@suse.com>
2022-01-06  cgroup: Use open-time credentials for process migration perm checks  (Tejun Heo)
cgroup process migration permission checks are performed at write time as whether a given operation is allowed or not is dependent on the content of the write - the PID. This currently uses current's credentials, which is a potential security weakness as it may allow scenarios where a less privileged process tricks a more privileged one into writing into a fd that it created. This patch makes both the cgroup2 and cgroup1 process migration interfaces use the credentials saved at the time of open (file->f_cred) instead of current's. Reported-by: "Eric W. Biederman" <ebiederm@xmission.com> Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org> Fixes: 187fe84067bd ("cgroup: require write perm on common ancestor when moving processes on the default hierarchy") Reviewed-by: Michal Koutný <mkoutny@suse.com> Signed-off-by: Tejun Heo <tj@kernel.org>
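A hedged sketch of the general idea (a generic write handler, not the actual cgroup code): a permission decision made at write time should consult the credentials captured when the fd was opened.

static ssize_t sketch_write(struct file *file, const char __user *buf,
                            size_t count, loff_t *ppos)
{
        /* saved by the VFS at open time; not affected by who writes later */
        const struct cred *open_cred = file->f_cred;

        /* ... perform the permission check against open_cred rather than
         * current_cred(), then process the write ... */
        return count;
}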
2022-01-07  Merge tag 'amd-drm-fixes-5.16-2021-12-31' of ssh://gitlab.freedesktop.org/agd5f/linux into drm-fixes  (Dave Airlie)
amd-drm-fixes-5.16-2021-12-31: amdgpu: - Suspend/resume fix - Restore runtime pm behavior with efifb Signed-off-by: Dave Airlie <airlied@redhat.com> From: Alex Deucher <alexander.deucher@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211231143825.11479-1-alexander.deucher@amd.com
2022-01-06  ice: Use bitmap_free() to free bitmap  (Christophe JAILLET)
kfree() and bitmap_free() are the same. But using the latter is more consistent when freeing memory allocated with bitmap_zalloc(). Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
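For illustration, a minimal sketch of the pairing this standardizes on (the helper is hypothetical):

static int bitmap_roundtrip_sketch(unsigned int nbits)
{
        unsigned long *mask = bitmap_zalloc(nbits, GFP_KERNEL);

        if (!mask)
                return -ENOMEM;
        /* ... use the bitmap ... */
        bitmap_free(mask);      /* pairs with bitmap_zalloc(), not kfree() */
        return 0;
}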
2022-01-06  ice: Optimize a few bitmap operations  (Christophe JAILLET)
When a bitmap is local to a function, it is safe to use the non-atomic __[set|clear]_bit(). No concurrent accesses can occur. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
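A minimal sketch of the idea (hypothetical helper): a bitmap that lives on the stack cannot be touched by another CPU, so the non-atomic bit helpers avoid needless locked read-modify-write instructions.

static void local_bitmap_sketch(void)
{
        DECLARE_BITMAP(mask, 64);

        bitmap_zero(mask, 64);
        __set_bit(3, mask);     /* non-atomic: no concurrent access possible */
        __clear_bit(3, mask);
}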
2022-01-06  ice: Slightly simplify ice_find_free_recp_res_idx  (Christophe JAILLET)
The 'possible_idx' bitmap is set just after it is zeroed, so we can skip the first step. The 'free_idx' bitmap is used only at the end of the function as the result of a bitmap xor operation, so there is no need to explicitly zero it beforehand. So, slightly simplify the code and remove two useless bitmap_zero() calls. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-06  ice: improve switchdev's slow-path  (Wojciech Drewek)
In the current switchdev implementation, every VF PR is assigned to an individual ring on the switchdev ctrl VSI. For slow-path traffic, a VF->ring mapping is done in software based on the src_vsi value (by calling the ice_eswitch_get_target_netdev function). With this change, a more efficient HW solution is introduced: for each VF, a src MAC (VF's MAC) filter is created, which forwards packets to the corresponding switchdev ctrl VSI queue based on the src MAC address. This filter has to be removed and then replayed in case of resetting one VF. Keep information about this rule in repr->mac_rule, so that we know which rule has to be removed and replayed for a given VF. In case of a CORE/GLOBAL reset all rules are removed automatically, so we have to take care of re-adding them. This is done by ice_replay_vsi_adv_rule. When the driver leaves switchdev mode, remove all advanced rules from the switchdev ctrl VSI. This is done by ice_rem_adv_rule_for_vsi. The flag repr->rule_added is needed because in some cases a reset might be triggered before the VF sends a request to add a MAC. Co-developed-by: Grzegorz Nitka <grzegorz.nitka@intel.com> Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com> Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-06  x86, sched: Fix undefined reference to init_freq_invariance_cppc() build error  (Huang Rui)
The init_freq_invariance_cppc function is implemented in smpboot and depends on CONFIG_SMP. MODPOST vmlinux.symvers MODINFO modules.builtin.modinfo GEN modules.builtin LD .tmp_vmlinux.kallsyms1 ld: drivers/acpi/cppc_acpi.o: in function `acpi_cppc_processor_probe': /home/ray/brahma3/linux/drivers/acpi/cppc_acpi.c:819: undefined reference to `init_freq_invariance_cppc' make: *** [Makefile:1161: vmlinux] Error 1 See https://lore.kernel.org/lkml/484af487-7511-647e-5c5b-33d4429acdec@infradead.org/. Fixes: 41ea667227ba ("x86, sched: Calculate frequency invariance for AMD systems") Reported-by: kernel test robot <lkp@intel.com> Reported-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Huang Rui <ray.huang@amd.com> [ rjw: Subject edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-01-06  cpufreq: amd-pstate: Fix Kconfig dependencies for AMD P-State  (Huang Rui)
The AMD P-State driver is based on the ACPI CPPC function, so ACPI should be a dependency of this driver in the kernel config. In file included from ../drivers/cpufreq/amd-pstate.c:40:0: ../include/acpi/processor.h:226:2: error: unknown type name ‘phys_cpuid_t’ phys_cpuid_t phys_id; /* CPU hardware ID such as APIC ID for x86 */ ^~~~~~~~~~~~ ../include/acpi/processor.h:355:1: error: unknown type name ‘phys_cpuid_t’; did you mean ‘phys_addr_t’? phys_cpuid_t acpi_get_phys_id(acpi_handle, int type, u32 acpi_id); ^~~~~~~~~~~~ phys_addr_t CC drivers/rtc/rtc-rv3029c2.o ../include/acpi/processor.h:356:1: error: unknown type name ‘phys_cpuid_t’; did you mean ‘phys_addr_t’? phys_cpuid_t acpi_map_madt_entry(u32 acpi_id); ^~~~~~~~~~~~ phys_addr_t ../include/acpi/processor.h:357:20: error: unknown type name ‘phys_cpuid_t’; did you mean ‘phys_addr_t’? int acpi_map_cpuid(phys_cpuid_t phys_id, u32 acpi_id); ^~~~~~~~~~~~ phys_addr_t See https://lore.kernel.org/lkml/20e286d4-25d7-fb6e-31a1-4349c805aae3@infradead.org/. Reported-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Huang Rui <ray.huang@amd.com> [ rjw: Subject edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-01-06  cpufreq: amd-pstate: Fix struct amd_cpudata kernel-doc comment  (Yang Li)
Add the description of @req and @boost_supported in the struct amd_cpudata kernel-doc comment to remove warnings found by running scripts/kernel-doc, which is triggered by 'make W=1'. drivers/cpufreq/amd-pstate.c:104: warning: Function parameter or member 'req' not described in 'amd_cpudata' drivers/cpufreq/amd-pstate.c:104: warning: Function parameter or member 'boost_supported' not described in 'amd_cpudata' Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Acked-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2022-01-06  ice: replay advanced rules after reset  (Victor Raj)
ice_replay_vsi_adv_rule will replay advanced rules for a given VSI. Exit this function when the list of rules for the given recipe is empty. Do not add a rule when the given vsi_handle does not match the vsi_handle from the rule info. Use ICE_MAX_NUM_RECIPES instead of ICE_SW_LKUP_LAST in order to find advanced rules as well. Signed-off-by: Victor Raj <victor.raj@intel.com> Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-06  Bluetooth: hci_event: Rework hci_inquiry_result_with_rssi_evt  (Luiz Augusto von Dentz)
This reworks the handling of hci_inquiry_result_with_rssi_evt so that it no longer uses a union to represent the different inquiry responses. Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com> Tested-by: Soenke Huster <soenke.huster@eknoes.de> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2022-01-06  Bluetooth: btbcm: disable read tx power for MacBook Air 8,1 and 8,2  (Aditya Garg)
The MacBook Air 8,1 and 8,2 also need querying of LE Tx power to be disabled for Bluetooth to work. Signed-off-by: Aditya Garg <gargaditya08@live.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org> Cc: stable@vger.kernel.org
2022-01-06  net/smc: Reset conn->lgr when link group registration fails  (Wen Gu)
SMC connections might fail to be registered in a link group when no usable link can be found during its creation. As a result, smc_conn_create() will return a failure and most resources related to the connection, such as conn->abort_work or conn->lnk, won't be applied or initialized. If smc_conn_free() is invoked later, it will try to access those uninitialized resources, thus causing a warning or crash. This patch fixes this by resetting conn->lgr to NULL if an abnormal exit occurs in smc_lgr_register_conn(), thus avoiding the access to uninitialized resources in smc_conn_free(). Meanwhile, the newly created link group should be terminated if SMC connections can't be registered in it, so smc_lgr_cleanup_early() is modified to take care of the link group only and is invoked by smc_conn_create() to terminate the unusable link group. The call to smc_conn_free() is moved out of smc_lgr_cleanup_early() and into smc_conn_abort(). Fixes: 56bc3b2094b4 ("net/smc: assign link to a new connection") Suggested-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: Wen Gu <guwen@linux.alibaba.com> Acked-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-06  Bluetooth: hci_qca: Fix NULL vs IS_ERR_OR_NULL check in qca_serdev_probe  (Miaoqian Lin)
The function devm_gpiod_get_index() returns error pointers on error. Thus devm_gpiod_get_index_optional() can return either NULL or an error pointer, and the same is true of devm_gpiod_get_optional(). Use an IS_ERR_OR_NULL() check to catch both NULL and error pointers. Fixes: 77131dfe ("Bluetooth: hci_qca: Replace devm_gpiod_get() with devm_gpiod_get_optional()") Signed-off-by: Miaoqian Lin <linmq006@gmail.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
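A hedged sketch of the check (generic device and GPIO names, not the hci_qca code): the *_optional() gpiod getters return NULL when the GPIO is simply absent and an ERR_PTR() when the lookup fails, so a plain NULL test misses the error-pointer case.

#include <linux/err.h>
#include <linux/gpio/consumer.h>

static bool get_enable_gpio_sketch(struct device *dev, struct gpio_desc **out)
{
        *out = devm_gpiod_get_optional(dev, "enable", GPIOD_OUT_LOW);

        /* matches the fix: treat both "absent" and "error" as unusable */
        return !IS_ERR_OR_NULL(*out);
}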
2022-01-06  Bluetooth: hci_bcm: Check for error irq  (Jiasheng Jiang)
platform_get_irq() can fail, in which case the returned irq is a negative error number that would eventually make request_irq() fail. In particular, platform_get_irq() can in certain cases return -EPROBE_DEFER, and letting request_irq() effectively convert that into -EINVAL, even at probe time, would defeat probe deferral. So it is better to check the return value right away. Fixes: 0395ffc1ee05 ("Bluetooth: hci_bcm: Add PM for BCM devices") Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
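A hedged sketch of the check in a hypothetical probe function: platform_get_irq() returns a negative errno on failure, including -EPROBE_DEFER, so the value has to be validated before it reaches request_irq().

static int sketch_probe(struct platform_device *pdev)
{
        int irq = platform_get_irq(pdev, 0);

        if (irq < 0)
                return irq;     /* propagates -EPROBE_DEFER unchanged */

        /* 'irq' is now known-valid and may be passed to request_irq() */
        return 0;
}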
2022-01-06  fsl/fman: Check for null pointer after calling devm_ioremap  (Jiasheng Jiang)
Since the allocation may fail, devm_ioremap() can return a NULL pointer. Take tgec_initialization() as an example: if the allocation fails, params->base_addr will be a NULL pointer that gets assigned to tgec->regs in tgec_config(), which then leads to a NULL pointer dereference in set_mac_address(), called from tgec_init(). Therefore, add a sanity check after the call to devm_ioremap(). Fixes: 3933961682a3 ("fsl/fman: Add FMan MAC driver") Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
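A hedged sketch of the added check (generic names, not the fsl/fman code): the mapping must be validated before it is stored and later dereferenced.

static int map_regs_sketch(struct device *dev, struct resource *res,
                           void __iomem **regs)
{
        *regs = devm_ioremap(dev, res->start, resource_size(res));
        if (!*regs)
                return -ENOMEM; /* bail out instead of storing a NULL base */

        return 0;
}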
2022-01-06  veth: Do not record rx queue hint in veth_xmit  (Daniel Borkmann)
Laurent reported that they have seen a significant amount of TCP retransmissions at high throughput from applications residing in network namespaces talking to the outside world via veths. The drops were seen on the qdisc layer (fq_codel, as per systemd default) of the phys device such as ena or virtio_net due to all traffic hitting a _single_ TX queue _despite_ multi-queue device. (Note that the setup was _not_ using XDP on veths as the issue is generic.) More specifically, after edbea9220251 ("veth: Store queue_mapping independently of XDP prog presence") which made it all the way back to v4.19.184+, skb_record_rx_queue() would set skb->queue_mapping to 1 (given 1 RX and 1 TX queue by default for veths) instead of leaving at 0. This is eventually retained and callbacks like ena_select_queue() will also pick single queue via netdev_core_pick_tx()'s ndo_select_queue() once all the traffic is forwarded to that device via upper stack or other means. Similarly, for others not implementing ndo_select_queue() if XPS is disabled, netdev_pick_tx() might call into the skb_tx_hash() and check for prior skb_rx_queue_recorded() as well. In general, it is a _bad_ idea for virtual devices like veth to mess around with queue selection [by default]. Given dev->real_num_tx_queues is by default 1, the skb->queue_mapping was left untouched, and so prior to edbea9220251 the netdev_core_pick_tx() could do its job upon __dev_queue_xmit() on the phys device. Unbreak this and restore prior behavior by removing the skb_record_rx_queue() from veth_xmit() altogether. If the veth peer has an XDP program attached, then it would return the first RX queue index in xdp_md->rx_queue_index (unless configured in non-default manner). However, this is still better than breaking the generic case. Fixes: edbea9220251 ("veth: Store queue_mapping independently of XDP prog presence") Fixes: 638264dc9022 ("veth: Support per queue XDP ring") Reported-by: Laurent Bernaille <laurent.bernaille@datadoghq.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Cc: Toshiaki Makita <toshiaki.makita1@gmail.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Paolo Abeni <pabeni@redhat.com> Cc: John Fastabend <john.fastabend@gmail.com> Cc: Willem de Bruijn <willemb@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: Toshiaki Makita <toshiaki.makita1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-06  ethernet: ibmveth: use default_groups in kobj_type  (Greg Kroah-Hartman)
There are currently 2 ways to create a set of sysfs files for a kobj_type, through the default_attrs field, and the default_groups field. Move the ibmveth sysfs code to use default_groups field which has been the preferred way since aa30f47cf666 ("kobject: Add support for default attribute groups to kobj_type") so that we can soon get rid of the obsolete default_attrs field. Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Cristobal Forno <cforno12@linux.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Cc: linuxppc-dev@lists.ozlabs.org Cc: netdev@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
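A hedged sketch of the conversion pattern (the attribute, ops, and release names are placeholders, not the ibmveth code): ATTRIBUTE_GROUPS() generates a NULL-terminated groups array from the attribute list, which then populates .default_groups.

static struct attribute *pool_attrs[] = {
        /* &pool_active_attr.attr, &pool_size_attr.attr, ... */
        NULL,
};
ATTRIBUTE_GROUPS(pool);         /* defines 'pool_groups' from pool_attrs */

static struct kobj_type pool_ktype = {
        .release        = pool_release,         /* placeholder callback */
        .sysfs_ops      = &pool_sysfs_ops,      /* placeholder ops */
        .default_groups = pool_groups,          /* was: .default_attrs = pool_attrs */
};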
2022-01-06  rocker: fix a sleeping in atomic bug  (Dan Carpenter)
This code is holding the &ofdpa->flow_tbl_lock spinlock so it is not allowed to sleep. That means we have to pass the OFDPA_OP_FLAG_NOWAIT flag to ofdpa_flow_tbl_del(). Fixes: 936bd486564a ("rocker: use FIB notifications instead of switchdev calls") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-01-06  i2c: mpc: Avoid out of bounds memory access  (Chris Packham)
When performing an I2C transfer where the last message was a write KASAN would complain: BUG: KASAN: slab-out-of-bounds in mpc_i2c_do_action+0x154/0x630 Read of size 2 at addr c814e310 by task swapper/2/0 CPU: 2 PID: 0 Comm: swapper/2 Tainted: G B 5.16.0-rc8 #1 Call Trace: [e5ee9d50] [c08418e8] dump_stack_lvl+0x4c/0x6c (unreliable) [e5ee9d70] [c02f8a14] print_address_description.constprop.13+0x64/0x3b0 [e5ee9da0] [c02f9030] kasan_report+0x1f0/0x204 [e5ee9de0] [c0c76ee4] mpc_i2c_do_action+0x154/0x630 [e5ee9e30] [c0c782c4] mpc_i2c_isr+0x164/0x240 [e5ee9e60] [c00f3a04] __handle_irq_event_percpu+0xf4/0x3b0 [e5ee9ec0] [c00f3d40] handle_irq_event_percpu+0x80/0x110 [e5ee9f40] [c00f3e48] handle_irq_event+0x78/0xd0 [e5ee9f60] [c00fcfec] handle_fasteoi_irq+0x19c/0x370 [e5ee9fa0] [c00f1d84] generic_handle_irq+0x54/0x80 [e5ee9fc0] [c0006b54] __do_irq+0x64/0x200 [e5ee9ff0] [c0007958] __do_IRQ+0xe8/0x1c0 [c812dd50] [e3eaab20] 0xe3eaab20 [c812dd90] [c0007a4c] do_IRQ+0x1c/0x30 [c812dda0] [c0000c04] ExternalInput+0x144/0x160 --- interrupt: 500 at arch_cpu_idle+0x34/0x60 NIP: c000b684 LR: c000b684 CTR: c0019688 REGS: c812ddb0 TRAP: 0500 Tainted: G B (5.16.0-rc8) MSR: 00029002 <CE,EE,ME> CR: 22000488 XER: 20000000 GPR00: c10ef7fc c812de90 c80ff200 c2394718 00000001 00000001 c10e3f90 00000003 GPR08: 00000000 c0019688 c2394718 fc7d625b 22000484 00000000 21e17000 c208228c GPR16: e3e99284 00000000 ffffffff c2390000 c001bac0 c2082288 c812df60 c001ba60 GPR24: c23949c0 00000018 00080000 00000004 c80ff200 00000002 c2348ee4 c2394718 NIP [c000b684] arch_cpu_idle+0x34/0x60 LR [c000b684] arch_cpu_idle+0x34/0x60 --- interrupt: 500 [c812de90] [c10e3f90] rcu_eqs_enter.isra.60+0xc0/0x110 (unreliable) [c812deb0] [c10ef7fc] default_idle_call+0xbc/0x230 [c812dee0] [c00af0e8] do_idle+0x1c8/0x200 [c812df10] [c00af3c0] cpu_startup_entry+0x20/0x30 [c812df20] [c001e010] start_secondary+0x5d0/0xba0 [c812dff0] [c00028a0] __secondary_start+0x90/0xdc This happened because we would overrun the i2c->msgs array on the final interrupt for the I2C STOP. This didn't happen if the last message was a read because there is no interrupt in that case. Ensure that we only access the current message if we are not processing a I2C STOP condition. Fixes: 1538d82f4647 ("i2c: mpc: Interrupt driven transfer") Reported-by: Maxime Bizon <mbizon@freebox.fr> Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Signed-off-by: Wolfram Sang <wsa@kernel.org>