path: root/include/linux
Age  Commit message  Author
2023-12-08  Merge tag 'gpio-remove-gpiochip_is_requested-for-v6.8-rc1' into gpio/for-next  (Bartosz Golaszewski)
gpio: remove gpiochip_is_requested()

- provide a safer alternative to gpiochip_is_requested()
- convert all existing users
- remove gpiochip_is_requested()
2023-12-08  gpiolib: remove gpiochip_is_requested()  (Bartosz Golaszewski)
We have no external users of gpiochip_is_requested(). Let's remove it and replace its internal calls with direct testing of the REQUESTED flag. Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Acked-by: Linus Walleij <linus.walleij@linaro.org>
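For illustration, the internal replacement boils down to testing the descriptor flag directly, along these lines (a sketch; FLAG_REQUESTED and struct gpio_desc are gpiolib-private):

```c
#include <linux/bitops.h>

/* Sketch of the internal check that replaces gpiochip_is_requested(). */
static bool example_line_is_requested(struct gpio_desc *desc)
{
	return test_bit(FLAG_REQUESTED, &desc->flags);
}
```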
2023-12-08  gpiolib: use gpiochip_dup_line_label() in for_each helpers  (Bartosz Golaszewski)
Rework for_each_requested_gpio_in_range() to use the new helper to retrieve a dynamically allocated copy of the descriptor label and free it at the end of each iteration. We need to leverage the CLASS() destructor to make sure that the label is freed even when breaking out of the loop. Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Acked-by: Linus Walleij <linus.walleij@linaro.org>
2023-12-08  gpiolib: provide gpiochip_dup_line_label()  (Bartosz Golaszewski)
gpiochip_is_requested() not only has a misleading name but it returns a pointer to a string that is freed when the descriptor is released. Provide a new helper meant to replace it, which returns a copy of the label string instead. Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Acked-by: Linus Walleij <linus.walleij@linaro.org>
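A hedged usage sketch of the new helper: duplicate the label, use it, then free it. The NULL/ERR_PTR conventions follow the "dynamically allocated copy" semantics described above and should be verified against the final gpiolib header:

```c
#include <linux/err.h>
#include <linux/gpio/driver.h>
#include <linux/slab.h>

/* Sketch: print a requested line's label without keeping a pointer that
 * may be freed when the descriptor is released. */
static void example_show_label(struct gpio_chip *gc, unsigned int offset)
{
	char *label = gpiochip_dup_line_label(gc, offset);

	if (IS_ERR(label))
		return;
	if (label)		/* NULL is assumed to mean "not requested" */
		pr_info("gpio %u: %s\n", offset, label);
	kfree(label);
}
```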
2023-12-08  crypto: hisilicon/qm - save capability registers in qm init process  (Zhiqi Song)
In the previous capability register implementation, qm irq related values were read from capability registers dynamically when needed. But in an abnormal scenario, e.g. the core times out and the device needs a soft reset, and the reset fails after disabling the MSE, the device can not be removed normally, causing the following call trace:

| Call trace:
| pci_irq_vector+0xfc/0x140
| hisi_qm_uninit+0x278/0x3b0 [hisi_qm]
| hpre_remove+0x16c/0x1c0 [hisi_hpre]
| pci_device_remove+0x6c/0x264
| device_release_driver_internal+0x1ec/0x3e0
| device_release_driver+0x3c/0x60
| pci_stop_bus_device+0xfc/0x22c
| pci_stop_and_remove_bus_device+0x38/0x70
| pci_iov_remove_virtfn+0x108/0x1c0
| sriov_disable+0x7c/0x1e4
| pci_disable_sriov+0x4c/0x6c
| hisi_qm_sriov_disable+0x90/0x160 [hisi_qm]
| hpre_remove+0x1a8/0x1c0 [hisi_hpre]
| pci_device_remove+0x6c/0x264
| device_release_driver_internal+0x1ec/0x3e0
| driver_detach+0x168/0x2d0
| bus_remove_driver+0xc0/0x230
| driver_unregister+0x58/0xdc
| pci_unregister_driver+0x40/0x220
| hpre_exit+0x34/0x64 [hisi_hpre]
| __arm64_sys_delete_module+0x374/0x620
[...]
| Call trace:
| free_msi_irqs+0x25c/0x300
| pci_disable_msi+0x19c/0x264
| pci_free_irq_vectors+0x4c/0x70
| hisi_qm_pci_uninit+0x44/0x90 [hisi_qm]
| hisi_qm_uninit+0x28c/0x3b0 [hisi_qm]
| hpre_remove+0x16c/0x1c0 [hisi_hpre]
| pci_device_remove+0x6c/0x264
[...]

The reason for this call trace is that when the MSE is disabled, the values of the capability registers in the BAR space become invalid. This makes the subsequent unregister process get the wrong irq vector through the capability registers and the wrong irq number from pci_irq_vector(). So add a capability table structure to pre-store the valid values of the irq information capability registers in the qm init process, avoiding reads of invalid capability register values after the MSE is disabled.

Fixes: 3536cc55cada ("crypto: hisilicon/qm - support get device irq information from hardware registers")
Signed-off-by: Zhiqi Song <songzhiqi1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-08  crypto: hisilicon/qm - add a function to set qm algs  (Wenkai Lin)
Extract a public function to set qm algs and remove the similar code for setting qm algs in each module. Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com> Signed-off-by: Hao Fang <fanghao11@huawei.com> Signed-off-by: Zhiqi Song <songzhiqi1@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR.

Conflicts:

drivers/net/ethernet/stmicro/stmmac/dwmac5.c
drivers/net/ethernet/stmicro/stmmac/dwmac5.h
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
drivers/net/ethernet/stmicro/stmmac/hwif.h
  37e4b8df27bc ("net: stmmac: fix FPE events losing")
  c3f3b97238f6 ("net: stmmac: Refactor EST implementation")
  https://lore.kernel.org/all/20231206110306.01e91114@canb.auug.org.au/

Adjacent changes:

net/ipv4/tcp_ao.c
  9396c4ee93f9 ("net/tcp: Don't store TCP-AO maclen on reqsk")
  7b0f570f879a ("tcp: Move TCP-AO bits from cookie_v[46]_check() to tcp_ao_syncookie().")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-12-07  Merge tag 'net-6.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)
Pull networking fixes from Jakub Kicinski:
 "Including fixes from bpf and netfilter.

  Current release - regressions:
   - veth: fix packet segmentation in veth_convert_skb_to_xdp_buff

  Current release - new code bugs:
   - tcp: assorted fixes to the new Auth Option support

  Older releases - regressions:
   - tcp: fix mid stream window clamp
   - tls: fix incorrect splice handling
   - ipv4: ip_gre: handle skb_pull() failure in ipgre_xmit()
   - dsa: mv88e6xxx: restore USXGMII support for 6393X
   - arcnet: restore support for multiple Sohard Arcnet cards

  Older releases - always broken:
   - tcp: do not accept ACK of bytes we never sent
   - require admin privileges to receive packet traces via netlink
   - packet: move reference count in packet_sock to atomic_long_t
   - bpf:
      - fix incorrect branch offset comparison with cpu=v4
      - fix prog_array_map_poke_run map poke update
   - netfilter:
      - three fixes for crashes on bad admin commands
      - xt_owner: fix race accessing sk->sk_socket, TOCTOU null-deref
      - nf_tables: fix 'exist' matching on bigendian arches
   - leds: netdev: fix RTNL handling to prevent potential deadlock
   - eth: tg3: prevent races in error/reset handling
   - eth: r8169: fix rtl8125b PAUSE storm when suspended
   - eth: r8152: improve reset and surprise removal handling
   - eth: hns: fix race between changing features and sending
   - eth: nfp: fix sleep in atomic for bonding offload"

* tag 'net-6.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (62 commits)
  vsock/virtio: fix "comparison of distinct pointer types lacks a cast" warning
  net/smc: fix missing byte order conversion in CLC handshake
  net: dsa: microchip: provide a list of valid protocols for xmit handler
  drop_monitor: Require 'CAP_SYS_ADMIN' when joining "events" group
  psample: Require 'CAP_NET_ADMIN' when joining "packets" group
  bpf: sockmap, updating the sg structure should also update curr
  net: tls, update curr on splice as well
  nfp: flower: fix for take a mutex lock in soft irq context and rcu lock
  net: dsa: mv88e6xxx: Restore USXGMII support for 6393X
  tcp: do not accept ACK of bytes we never sent
  selftests/bpf: Add test for early update in prog_array_map_poke_run
  bpf: Fix prog_array_map_poke_run map poke update
  netfilter: xt_owner: Fix for unsafe access of sk->sk_socket
  netfilter: nf_tables: validate family when identifying table via handle
  netfilter: nf_tables: bail out on mismatching dynset and set expressions
  netfilter: nf_tables: fix 'exist' matching on bigendian arches
  netfilter: nft_set_pipapo: skip inactive elements during set walk
  netfilter: bpf: fix bad registration on nf_defrag
  leds: trigger: netdev: fix RTNL handling to prevent potential deadlock
  octeontx2-af: Update Tx link register range
  ...
2023-12-07  cgroup: Move rcu_head up near the top of cgroup_root  (Waiman Long)
Commit d23b5c577715 ("cgroup: Make operations on the cgroup root_list RCU safe") adds a new rcu_head to the cgroup_root structure and kvfree_rcu() for freeing the cgroup_root.

The current implementation of kvfree_rcu(), however, has the limitation that the offset of the rcu_head structure within the larger data structure must be less than 4096 or the compilation will fail. See the macro definition of __is_kvfree_rcu_offset() in include/linux/rcupdate.h for more information.

By putting rcu_head below the large cgroup structure, any change to the cgroup structure that makes it larger runs the risk of causing a build failure under certain configurations. Commit 77070eeb8821 ("cgroup: Avoid false cacheline sharing of read mostly rstat_cpu") happens to be the last straw that breaks it. Fix this problem by moving the rcu_head structure up before the cgroup structure.

Fixes: d23b5c577715 ("cgroup: Make operations on the cgroup root_list RCU safe")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Closes: https://lore.kernel.org/lkml/20231207143806.114e0a74@canb.auug.org.au/
Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Yafang Shao <laoar.shao@gmail.com>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
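In struct terms, the fix is roughly the following reordering (a sketch; other cgroup_root members are elided and the exact field list is an assumption):

```c
/* Sketch: keep rcu_head before the large embedded struct cgroup so its
 * offset stays below the 4096-byte limit that __is_kvfree_rcu_offset()
 * imposes on kvfree_rcu(). */
struct cgroup_root {
	struct kernfs_root *kf_root;
	unsigned int subsys_mask;
	int hierarchy_id;

	/* moved up: must sit before the (large, growing) cgrp member */
	struct rcu_head rcu;

	/* the root cgroup; new config options can make this larger */
	struct cgroup cgrp;

	/* ... remaining members elided ... */
};
```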
2023-12-07  spi: Add support for stacked/parallel memories  (Mark Brown)
Merge series from Amit Kumar Mahapatra <amit.kumar-mahapatra@amd.com>: This patch series adds support to the SPI framework for using multiple chip selects.
2023-12-07  Merge tag 'platform-drivers-x86-v6.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86  (Linus Torvalds)
Pull x86 platform driver fixes from Ilpo Järvinen:

 - Fix i8042 filter resource handling, input, and suspend issues in asus-wmi
 - Skip zero instance WMI blocks to avoid issues with some laptops
 - Differentiate dev/production keys in mlxbf-bootctl
 - Correct surface serdev related return value to avoid leaking errno into userspace
 - Error checking fixes

* tag 'platform-drivers-x86-v6.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86:
  platform/mellanox: Check devm_hwmon_device_register_with_groups() return value
  platform/mellanox: Add null pointer checks for devm_kasprintf()
  mlxbf-bootctl: correctly identify secure boot with development keys
  platform/x86: wmi: Skip blocks with zero instances
  platform/surface: aggregator: fix recv_buf() return value
  platform/x86: asus-wmi: disable USB0 hub on ROG Ally before suspend
  platform/x86: asus-wmi: Filter Volume key presses if also reported via atkbd
  platform/x86: asus-wmi: Change q500a_i8042_filter() into a generic i8042-filter
  platform/x86: asus-wmi: Move i8042 filter install to shared asus-wmi code
2023-12-07  Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (Jakub Kicinski)
Daniel Borkmann says:

====================
pull-request: bpf 2023-12-06

We've added 4 non-merge commits during the last 6 day(s) which contain a total of 7 files changed, 185 insertions(+), 55 deletions(-).

The main changes are:

1) Fix race found by syzkaller on prog_array_map_poke_run when a BPF program's kallsym symbols were still missing, from Jiri Olsa.

2) Fix BPF verifier's branch offset comparison for BPF_JMP32 | BPF_JA, from Yonghong Song.

3) Fix xsk's poll handling to only set mask on bound xsk sockets, from Yewon Choi.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  selftests/bpf: Add test for early update in prog_array_map_poke_run
  bpf: Fix prog_array_map_poke_run map poke update
  xsk: Skip polling event check for unbound socket
  bpf: Fix a verifier bug due to incorrect branch offset comparison with cpu=v4
====================

Link: https://lore.kernel.org/r/20231206220528.12093-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-12-07  spi: Add multi-cs memories support in SPI core  (Amit Kumar Mahapatra)
The AMD-Xilinx GQSPI controller has two advanced modes that allow the controller to treat two flashes as one single device.

One of these two modes is the parallel mode, in which each byte of data is stored in both devices: the even bits in the lower flash and the odd bits in the upper flash. The byte split is automatically handled by the QSPI controller.

The other mode is the stacked mode, in which both flashes share the same SPI bus but each device contains half of the data. In this mode, the controller does not follow CS requests but instead internally wires the two CS levels with the value of the most significant address bit.

For supporting both these modes the SPI core needs to be updated to provide multiple CS for a single SPI device. For adding multi CS support the SPI device needs to be aware of all the CS values. So, the "chip_select" member in the spi_device structure is now an array that holds all the CS values.

The spi_device structure now has a "cs_index_mask" member. This acts as an index into the chip_select array: if the nth bit of spi->cs_index_mask is set, the driver will assert spi->chip_select[n]. A minimal sketch follows below.

In parallel mode all the chip selects are asserted/de-asserted simultaneously and each byte of data is stored in both devices, the even bits in one, the odd bits in the other. The split is automatically handled by the GQSPI controller. The GQSPI controller supports a maximum of two flashes connected in parallel mode.

A SPI_CONTROLLER_MULTI_CS flag bit is added to the SPI controller flags; through ctlr->flags the SPI core will make sure that the controller is capable of handling multiple chip selects at once.

For supporting multiple CS via GPIO, the cs_gpiod member of the spi_device structure is now an array that holds the gpio descriptor for each chipselect.

CS GPIO is not tested on our hardware, but it has been tested by @Stefan: https://lore.kernel.org/all/005001da1efc$619ad5a0$24d080e0$@opensource.cirrus.com/

Signed-off-by: Amit Kumar Mahapatra <amit.kumar-mahapatra@amd.com>
Tested-by: Stefan Binding <sbinding@opensource.cirrus.com>
Link: https://lore.kernel.org/r/20231125092137.2948-4-amit.kumar-mahapatra@amd.com
Signed-off-by: Mark Brown <broonie@kernel.org>
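A minimal sketch of walking the per-device chip selects via cs_index_mask. SPI_CS_CNT_MAX is used here as an assumed upper bound on the chip_select[] array introduced by this change; verify against the final spi.h:

```c
#include <linux/bits.h>
#include <linux/spi/spi.h>

/* Hedged sketch: visit every chip select flagged in cs_index_mask. */
static void example_for_each_active_cs(struct spi_device *spi)
{
	u8 idx;

	for (idx = 0; idx < SPI_CS_CNT_MAX; idx++) {
		if (!(spi->cs_index_mask & BIT(idx)))
			continue;
		/* spi_get_chipselect(spi, idx) returns chip_select[idx],
		 * one of the physical CS lines for this device. */
		dev_dbg(&spi->dev, "active CS: %u\n",
			spi_get_chipselect(spi, idx));
	}
}
```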
2023-12-07  mfd: Fix a few spelling mistakes in PMIC header file comments  (Kaihua Zhong)
Fix four comment typos in MFD PMIC header files. Reported-by: k2ci <kernel-bot@kylinos.cn> Signed-off-by: Kaihua Zhong <zhongkaihua@kylinos.cn> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Link: https://lore.kernel.org/r/20231129015526.3302865-1-zhongkaihua@kylinos.cn Signed-off-by: Lee Jones <lee@kernel.org>
2023-12-07  w1: gpio: Don't use platform data for driver data  (Uwe Kleine-König)
struct device's .platform_data isn't for drivers to write to. For driver-specific data there is .driver_data instead. As there is no in-tree platform that provides w1_gpio_platform_data, drop the include file and replace it by a local struct w1_gpio_ddata. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Link: https://lore.kernel.org/r/8f7ebe03ddaa5a5c6e2b36fecdf59da7fc373527.1701727212.git.u.kleine-koenig@pengutronix.de Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
2023-12-07  mmc: core: Remove packed command leftovers  (Avri Altman)
Packed command support was removed a long time ago, but some bits got left behind. Remove them. Signed-off-by: Avri Altman <avri.altman@wdc.com> Link: https://lore.kernel.org/r/20231030062226.1895692-1-avri.altman@wdc.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2023-12-07  net: dsa: microchip: move ksz_chip_id enum to platform include  (Daniel Danzberger)
With the ksz_chip_id enums moved to the platform include file for ksz switches, platform code that instantiates a device can now use these to set ksz_platform_data::chip_id. Signed-off-by: Daniel Danzberger <dd@embedd.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-12-07  net: dsa: microchip: properly support platform_data probing  (Vladimir Oltean)
The ksz driver has bits and pieces of platform_data probing support, but it doesn't work. The conventional thing to do is to have an encapsulating structure for struct dsa_chip_data that gets put into dev->platform_data. This driver expects a struct ksz_platform_data, but that doesn't contain a struct dsa_chip_data as first element, which will obviously not work with dsa_switch_probe() -> dsa_switch_parse(). Pointing dev->platform_data to a struct dsa_chip_data directly is in principle possible, but that doesn't work either. The driver has ksz_switch_detect() to read the device ID from hardware, followed by ksz_check_device_id() to compare it against a predetermined expected value. This protects against early errors in the SPI/I2C communication. With platform_data, the mechanism in ksz_check_device_id() doesn't work and even leads to NULL pointer dereferences, since of_device_get_match_data() doesn't work in that probe path. So obviously, the platform_data support is actually missing, and the existing handling of struct ksz_platform_data is bogus. Complete the support by adding a struct dsa_chip_data as first element, and fixing up ksz_check_device_id() to pick up the platform_data instead of the unavailable of_device_get_match_data(). The early dev->chip_id assignment from ksz_switch_register() is also bogus, because ksz_switch_detect() sets it to an initial value. So remove it. Also, ksz_platform_data :: enabled_ports isn't used anywhere, delete it. Link: https://lore.kernel.org/netdev/20231204154315.3906267-1-dd@embedd.com/ Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Daniel Danzberger <dd@embedd.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
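A hedged sketch of the resulting platform data layout (field names are assumptions drawn from the description above, not the exact upstream header):

```c
#include <net/dsa.h>
#include <linux/types.h>

/*
 * Assumed layout: dsa_switch_parse() reads dev->platform_data as a
 * struct dsa_chip_data, so the encapsulating structure must place it
 * first; chip_id replaces the of_device_get_match_data() lookup.
 */
struct ksz_platform_data {
	struct dsa_chip_data cd;	/* must stay the first member */
	u32 chip_id;			/* one of the ksz_chip_id enum values */
};
```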
2023-12-07  mm, pmem, xfs: Introduce MF_MEM_PRE_REMOVE for unbind  (Shiyang Ruan)
Now, if we suddenly remove a PMEM device (by calling unbind) which contains FSDAX while programs are still accessing data in this device, e.g.:

```
$FSSTRESS_PROG -d $SCRATCH_MNT -n 99999 -p 4 &
# $FSX_PROG -N 1000000 -o 8192 -l 500000 $SCRATCH_MNT/t001 &
echo "pfn1.1" > /sys/bus/nd/drivers/nd_pmem/unbind
```

it could come into an unacceptable state:

1. the device has gone but the mount point still exists, and umount will fail with "target is busy"
2. programs will hang and cannot be killed
3. may crash with NULL pointer dereference

To fix this, we introduce an MF_MEM_PRE_REMOVE flag to let it know that we are going to remove the whole device, and make sure all related processes can be notified so that they can end gracefully.

This patch is inspired by Dan's "mm, dax, pmem: Introduce dev_pagemap_failure()" [1]. With the help of dax_holder and the ->notify_failure() mechanism, the pmem driver is able to ask the filesystem on it to unmap all files in use, and notify processes who are using those files.

Call trace:
trigger unbind
 -> unbind_store()
  -> ... (skip)
   -> devres_release_all()
    -> kill_dax()
     -> dax_holder_notify_failure(dax_dev, 0, U64_MAX, MF_MEM_PRE_REMOVE)
      -> xfs_dax_notify_failure()
       `-> freeze_super()                       // freeze (kernel call)
       `-> do xfs rmap
       `   -> mf_dax_kill_procs()
       `     -> collect_procs_fsdax()           // all associated processes
       `     -> unmap_and_kill()
       `   -> invalidate_inode_pages2_range()   // drop file's cache
       `-> thaw_super()                         // thaw (both kernel & user call)

Introduce MF_MEM_PRE_REMOVE to let the filesystem know this is a remove event. Use the exclusive freeze/thaw [2] to lock the filesystem to prevent new dax mappings from being created. Do not shut down the filesystem directly if the configuration is not supported, or if the failure range includes the metadata area. Make sure all files and processes (not only the current process) are handled correctly. Also drop the cache of associated files before the pmem is removed.

[1]: https://lore.kernel.org/linux-mm/161604050314.1463742.14151665140035795571.stgit@dwillia2-desk3.amr.corp.intel.com/
[2]: https://lore.kernel.org/linux-xfs/169116275623.3187159.16862410128731457358.stg-ugh@frogsfrogsfrogs/

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
2023-12-07  PCI: add INTEL_HDA_ARL to pci_ids.h  (Pierre-Louis Bossart)
The PCI ID insertion follows the increasing order in the table, but this hardware follows MTL (MeteorLake). Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Reviewed-by: Péter Ujfalusi <peter.ujfalusi@linux.intel.com> Reviewed-by: Kai Vehmanen <kai.vehmanen@linux.intel.com> Acked-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20231204212710.185976-2-pierre-louis.bossart@linux.intel.com Signed-off-by: Takashi Iwai <tiwai@suse.de>
2023-12-07  firmware: zynqmp: Add support to handle IPI CRC failure  (Jay Buddhabhatti)
Added new PM error code XST_PM_INVALID_CRC to handle CRC validation failure during IPI communication. Co-developed-by: Naman Trivedi Manojbhai <naman.trivedimanojbhai@amd.com> Signed-off-by: Naman Trivedi Manojbhai <naman.trivedimanojbhai@amd.com> Signed-off-by: Jay Buddhabhatti <jay.buddhabhatti@amd.com> Link: https://lore.kernel.org/r/20231129112713.22718-6-jay.buddhabhatti@amd.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-12-07  drivers: soc: xilinx: Fix error message on SGI registration failure  (Jay Buddhabhatti)
Failure to register an SGI for firmware event notification is a non-fatal error when the feature is not supported by other modules such as Xen and TF-A. Add an _info level log message for this special case. Also add the XST_PM_INVALID_VERSION error code and map it to the -EOPNOTSUPP Linux kernel error code. If the feature is not supported or the EEMI API version mismatches, firmware can return the XST_PM_INVALID_VERSION = 4 or XST_PM_NO_FEATURE = 19 error code.
Co-developed-by: Tanmay Shah <tanmay.shah@amd.com>
Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
Signed-off-by: Jay Buddhabhatti <jay.buddhabhatti@amd.com>
Link: https://lore.kernel.org/r/20231129112713.22718-5-jay.buddhabhatti@amd.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
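A minimal sketch of the kind of mapping described; the numeric values are the ones quoted above, while the function name and surrounding code are illustrative assumptions:

```c
#include <linux/errno.h>

/* Values quoted in the commit message; wrapper is illustrative only. */
#define XST_PM_INVALID_VERSION	4
#define XST_PM_NO_FEATURE	19

static int example_zynqmp_pm_ret_code(u32 ret_status)
{
	switch (ret_status) {
	case XST_PM_INVALID_VERSION:
	case XST_PM_NO_FEATURE:
		return -EOPNOTSUPP;	/* feature/version not supported */
	default:
		return -EINVAL;
	}
}
```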
2023-12-07  firmware: xilinx: Expand feature check to support all PLM modules  (Jay Buddhabhatti)
To support the feature check for all modules, append the module ID of the API that is being checked to the feature check API so it can be routed to the target module for processing. There is no need to check the compatible string because the board information is taken via the firmware interface. Co-developed-by: Saeed Nowshadi <saeed.nowshadi@amd.com> Signed-off-by: Saeed Nowshadi <saeed.nowshadi@amd.com> Signed-off-by: Jay Buddhabhatti <jay.buddhabhatti@amd.com> Link: https://lore.kernel.org/r/20231129112713.22718-3-jay.buddhabhatti@amd.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-12-07  firmware: xilinx: Update firmware call interface to support additional args  (Jay Buddhabhatti)
The system-level platform management layer (do_fw_call()) currently supports a maximum of 5 arguments (1 EEMI API ID + 4 command arguments). In order to support the new EEMI PM_IOCTL IDs (Secure Read/Write), this must be extended by one additional argument, resulting in a configuration of 1 EEMI API ID + 5 command arguments. Update zynqmp_pm_invoke_fn() and do_fw_call() with this new definition containing variable arguments. As a result, update all references to the pm invoke function with the updated definition. Co-developed-by: Izhar Ameer Shaikh <izhar.ameer.shaikh@amd.com> Signed-off-by: Izhar Ameer Shaikh <izhar.ameer.shaikh@amd.com> Signed-off-by: Jay Buddhabhatti <jay.buddhabhatti@amd.com> Link: https://lore.kernel.org/r/20231129112713.22718-2-jay.buddhabhatti@amd.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
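A hedged sketch of the updated call interface implied by the description; the exact prototype and argument packing are assumptions to be checked against include/linux/firmware/xlnx-zynqmp.h:

```c
/*
 * Assumed variadic form: one EEMI API ID plus up to five command
 * arguments, with num_args telling the helper how many follow.
 */
int zynqmp_pm_invoke_fn(u32 pm_api_id, u32 *ret_payload, u32 num_args, ...);

/* e.g. a call passing three command arguments might look like:
 * ret = zynqmp_pm_invoke_fn(PM_IOCTL, NULL, 3, node_id, ioctl_id, value);
 */
```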
2023-12-06  bpf: Use arch_bpf_trampoline_size  (Song Liu)
Instead of blindly allocating PAGE_SIZE for each trampoline, check the size of the trampoline with arch_bpf_trampoline_size(). This size is saved in bpf_tramp_image->size, and used for modmem charge/uncharge. The fallback arch_alloc_bpf_trampoline() still allocates a whole page because we need to use set_memory_* to protect the memory. struct_ops trampoline still uses a whole page for multiple trampolines. With this size check at caller (regular trampoline and struct_ops trampoline), remove arch_bpf_trampoline_size() from arch_prepare_bpf_trampoline() in archs. Also, update bpf_image_ksym_add() to handle symbol of different sizes. Signed-off-by: Song Liu <song@kernel.org> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> # on s390x Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> # on riscv Link: https://lore.kernel.org/r/20231206224054.492250-7-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
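A rough sketch of the caller pattern this enables (argument and helper names follow the descriptions in this series; treat the exact prototypes as assumptions):

```c
#include <linux/bpf.h>

/*
 * Hedged sketch: size the trampoline first, then allocate only what is
 * needed instead of a blind PAGE_SIZE allocation.
 */
static void *example_alloc_trampoline(const struct btf_func_model *m, u32 flags,
				      struct bpf_tramp_links *tlinks,
				      void *func_addr, int *image_size)
{
	int size = arch_bpf_trampoline_size(m, flags, tlinks, func_addr);

	if (size < 0)
		return NULL;

	*image_size = size;	/* saved for modmem charge/uncharge */
	return arch_alloc_bpf_trampoline(size);	/* fallback still rounds up to a page */
}
```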
2023-12-06  bpf: Add arch_bpf_trampoline_size()  (Song Liu)
This helper will be used to calculate the size of the trampoline before allocating the memory. arch_prepare_bpf_trampoline() for arm64 and riscv64 can use arch_bpf_trampoline_size() to check the trampoline fits in the image. OTOH, arch_prepare_bpf_trampoline() for s390 has to call the JIT process twice, so it cannot use arch_bpf_trampoline_size(). Signed-off-by: Song Liu <song@kernel.org> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> # on s390x Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Björn Töpel <bjorn@rivosinc.com> Tested-by: Björn Töpel <bjorn@rivosinc.com> # on riscv Link: https://lore.kernel.org/r/20231206224054.492250-6-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf: Add helpers for trampoline image management  (Song Liu)
As BPF trampolines of different archs move from bpf_jit_[alloc|free]_exec() to bpf_prog_pack_[alloc|free](), we need to use different _alloc, _free for different archs during the transition. Add the following helpers for this transition:

void *arch_alloc_bpf_trampoline(unsigned int size);
void arch_free_bpf_trampoline(void *image, unsigned int size);
void arch_protect_bpf_trampoline(void *image, unsigned int size);
void arch_unprotect_bpf_trampoline(void *image, unsigned int size);

The fallback versions of these helpers require size <= PAGE_SIZE, but they are only called with size == PAGE_SIZE. They will be called with size < PAGE_SIZE when the arch_bpf_trampoline_size() helper is introduced later.

Signed-off-by: Song Liu <song@kernel.org>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> # on s390x
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20231206224054.492250-4-song@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf: Adjust argument names of arch_prepare_bpf_trampoline()  (Song Liu)
We are using "im" for "struct bpf_tramp_image" and "tr" for "struct bpf_trampoline" in most of the code base. The only exception is the prototype and fallback version of arch_prepare_bpf_trampoline(). Update them to match the rest of the code base. We mix "orig_call" and "func_addr" for the argument in different versions of arch_prepare_bpf_trampoline(). s/orig_call/func_addr/g so they match. Signed-off-by: Song Liu <song@kernel.org> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> # on s390x Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20231206224054.492250-3-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf: Let bpf_prog_pack_free handle any pointer  (Song Liu)
Currently, bpf_prog_pack_free() can only free a pointer to a struct bpf_binary_header, which is not flexible. Add a size argument to bpf_prog_pack_free so that it can handle any pointer. Signed-off-by: Song Liu <song@kernel.org> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Tested-by: Ilya Leoshkevich <iii@linux.ibm.com> # on s390x Reviewed-by: Björn Töpel <bjorn@rivosinc.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20231206224054.492250-2-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
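A short sketch of the interface change described above (a sketch of the before/after shape, not the exact upstream diff):

```c
/* Before: tied to the BPF binary header layout. */
void bpf_prog_pack_free(struct bpf_binary_header *hdr);

/* After: any pointer previously handed out by the prog pack allocator,
 * with the caller supplying the size of that allocation. */
void bpf_prog_pack_free(void *ptr, u32 size);
```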
2023-12-06  Merge branch 'master' into mm-hotfixes-stable  (Andrew Morton)
2023-12-06  highmem: fix a memory copy problem in memcpy_from_folio  (Su Hui)
Clang's static checker complains that the value stored to 'from' is never read. As a result, memcpy_from_folio() only copies the last chunk of memory from the folio to the destination. Use 'to += chunk' in place of 'from += chunk' to fix this typo. Link: https://lkml.kernel.org/r/20231130034017.1210429-1-suhui@nfschina.com Fixes: b23d03ef7af5 ("highmem: add memcpy_to_folio() and memcpy_from_folio()") Signed-off-by: Su Hui <suhui@nfschina.com> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Ira Weiny <ira.weiny@intel.com> Cc: Jiaqi Yan <jiaqiyan@google.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Tom Rix <trix@redhat.com> Cc: Tony Luck <tony.luck@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
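For context, a sketch of the affected copy loop, paraphrased from include/linux/highmem.h (treat the exact body as approximate):

```c
/* Sketch of memcpy_from_folio(): copy len bytes starting at offset. */
do {
	const char *from = kmap_local_folio(folio, offset);
	size_t chunk = len;

	if (folio_test_highmem(folio) &&
	    chunk > PAGE_SIZE - offset_in_page(offset))
		chunk = PAGE_SIZE - offset_in_page(offset);
	memcpy(to, from, chunk);
	kunmap_local(from);

	to += chunk;		/* the fix: previously 'from += chunk' */
	offset += chunk;
	len -= chunk;
} while (len > 0);
```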
2023-12-06  units: add missing header  (Andy Shevchenko)
BITS_PER_BYTE is defined in bits.h. Link: https://lkml.kernel.org/r/20231128174404.393393-1-andriy.shevchenko@linux.intel.com Fixes: e8eed5f7366f ("units: Add BYTES_PER_*BIT") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Cc: Damian Muszynski <damian.muszynski@intel.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-12-06  hugetlb: fix null-ptr-deref in hugetlb_vma_lock_write  (Mike Kravetz)
The routine __vma_private_lock tests for the existence of a reserve map associated with a private hugetlb mapping. A pointer to the reserve map is in vma->vm_private_data. __vma_private_lock was checking the pointer for NULL. However, it is possible that the low bits of the pointer could be used as flags. In such instances, vm_private_data is not NULL and not a valid pointer. This results in the null-ptr-deref reported by syzbot:

general protection fault, probably for non-canonical address 0xdffffc000000001d: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x00000000000000e8-0x00000000000000ef]
CPU: 0 PID: 5048 Comm: syz-executor139 Not tainted 6.6.0-rc7-syzkaller-00142-g888cf78c29e2 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
RIP: 0010:__lock_acquire+0x109/0x5de0 kernel/locking/lockdep.c:5004
...
Call Trace:
 <TASK>
 lock_acquire kernel/locking/lockdep.c:5753 [inline]
 lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
 down_write+0x93/0x200 kernel/locking/rwsem.c:1573
 hugetlb_vma_lock_write mm/hugetlb.c:300 [inline]
 hugetlb_vma_lock_write+0xae/0x100 mm/hugetlb.c:291
 __hugetlb_zap_begin+0x1e9/0x2b0 mm/hugetlb.c:5447
 hugetlb_zap_begin include/linux/hugetlb.h:258 [inline]
 unmap_vmas+0x2f4/0x470 mm/memory.c:1733
 exit_mmap+0x1ad/0xa60 mm/mmap.c:3230
 __mmput+0x12a/0x4d0 kernel/fork.c:1349
 mmput+0x62/0x70 kernel/fork.c:1371
 exit_mm kernel/exit.c:567 [inline]
 do_exit+0x9ad/0x2a20 kernel/exit.c:861
 __do_sys_exit kernel/exit.c:991 [inline]
 __se_sys_exit kernel/exit.c:989 [inline]
 __x64_sys_exit+0x42/0x50 kernel/exit.c:989
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Mask off low bit flags before checking for NULL pointer. In addition, the reserve map only 'belongs' to the OWNER (parent in parent/child relationships) so also check for the OWNER flag.

Link: https://lkml.kernel.org/r/20231114012033.259600-1-mike.kravetz@oracle.com
Reported-by: syzbot+6ada951e7c0f7bc8a71e@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-mm/00000000000078d1e00608d7878b@google.com/
Fixes: bf4916922c60 ("hugetlbfs: extend hugetlb_vma_lock to private VMAs")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Rik van Riel <riel@surriel.com>
Cc: Edward Adam Davis <eadavis@qq.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Tom Rix <trix@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
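The fix described amounts to something like the following check (a sketch based on the description; the HPAGE_RESV_* flag names and helpers are mm/hugetlb.c internals and assumed here):

```c
/* Sketch: treat vm_private_data as a pointer plus low-bit flags. */
static bool __vma_private_lock(struct vm_area_struct *vma)
{
	return !(vma->vm_flags & VM_MAYSHARE) &&
	       (get_vma_private_data(vma) & ~HPAGE_RESV_MASK) &&
	       is_vma_resv_set(vma, HPAGE_RESV_OWNER);
}
```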
2023-12-06  bpf: Fix prog_array_map_poke_run map poke update  (Jiri Olsa)
Lee pointed out an issue found by syzkaller [0] hitting a BUG in the prog array map poke update in the prog_array_map_poke_run function, due to an error value returned from the bpf_arch_text_poke function.

There's a race window where bpf_arch_text_poke can fail due to missing bpf program kallsym symbols, which is accounted for with the check for -EINVAL in that BUG_ON call. The problem is that in such a case we won't update the tail call jump, causing an imbalance for the next tail call update check, which will fail with -EBUSY in bpf_arch_text_poke.

I'm hitting the following race during the program load:

  CPU 0                              CPU 1

  bpf_prog_load
    bpf_check
      do_misc_fixups
        prog_array_map_poke_track
                                     map_update_elem
                                       bpf_fd_array_map_update_elem
                                         prog_array_map_poke_run
                                           bpf_arch_text_poke returns -EINVAL
    bpf_prog_kallsyms_add

After bpf_arch_text_poke (CPU 1) fails to update the tail call jump, the next poke update fails on the expected jump instruction check in bpf_arch_text_poke with -EBUSY and triggers the BUG_ON in prog_array_map_poke_run. A similar race exists on the program unload.

Fix this by moving the update to the bpf_arch_poke_desc_update function, which makes sure we call __bpf_arch_text_poke that skips the bpf address check. Each architecture has a slightly different approach wrt looking up the bpf address in bpf_arch_text_poke, so instead of splitting the function or adding a new 'checkip' argument as in the previous version, it seems best to move the whole map_poke_run update into arch specific code.

[0] https://syzkaller.appspot.com/bug?extid=97a4fe20470e9bc30810

Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Reported-by: syzbot+97a4fe20470e9bc30810@syzkaller.appspotmail.com
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Cc: Lee Jones <lee@kernel.org>
Cc: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/bpf/20231206083041.1306660-2-jolsa@kernel.org
2023-12-06  thermal: sysfs: Rework the handling of trip point updates  (Rafael J. Wysocki)
Both trip_point_temp_store() and trip_point_hyst_store() use thermal_zone_set_trip() to update a given trip point, but none of them actually needs to change more than one field in struct thermal_trip representing it. However, each of them effectively calls __thermal_zone_get_trip() twice in a row for the same trip index value, once directly and once via thermal_zone_set_trip(), which is not particularly efficient, and the way in which thermal_zone_set_trip() carries out the update is not particularly straightforward. Moreover, input processing need not be done under the thermal zone lock in any of these functions. Rework trip_point_temp_store() and trip_point_hyst_store() to address the above, move the part of thermal_zone_set_trip() that is still useful to a new function called thermal_zone_trip_updated() and drop the rest of it. While at it, make trip_point_hyst_store() reject negative hysteresis values. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2023-12-06  cgroup/cpuset: Include isolated cpuset CPUs in cpu_is_isolated() check  (Waiman Long)
Currently, the cpu_is_isolated() function checks only the statically isolated CPUs specified via the "isolcpus" and "nohz_full" kernel command line options. This function is used by vmstat and memcg to reduce interference with isolated CPUs by not doing stat flushing or scheduling works on those CPUs. Workloads running on isolated CPUs within isolated cpuset partitions should receive the same treatment to reduce unnecessary interference. This patch introduces a new cpuset_cpu_is_isolated() function to be called by cpu_is_isolated() so that the set of dynamically created cpuset isolated CPUs will be included in the check. Assuming that testing a bit in a cpumask is atomic, no synchronization primitive is currently used to synchronize access to the cpuset's isolated_cpus mask. Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
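A sketch of the resulting check; the composition with housekeeping_test_cpu() and the HK_TYPE_* flags (the existing static "isolcpus"/"nohz_full" checks mentioned above) is assumed:

```c
#include <linux/cpuset.h>
#include <linux/sched/isolation.h>

/* Sketch: static boot-time isolation checks plus dynamic cpuset partitions. */
static inline bool example_cpu_is_isolated(int cpu)
{
	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK)   ||
	       cpuset_cpu_is_isolated(cpu);
}
```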
2023-12-06  bpf,lsm: add BPF token LSM hooks  (Andrii Nakryiko)
Wire up bpf_token_create and bpf_token_free LSM hooks, which allow to allocate LSM security blob (we add `void *security` field to struct bpf_token for that), but also control who can instantiate BPF token. This follows existing pattern for BPF map and BPF prog. Also add security_bpf_token_allow_cmd() and security_bpf_token_capable() LSM hooks that allow LSM implementation to control and negate (if necessary) BPF token's delegation of a specific bpf_cmd and capability, respectively. Acked-by: Paul Moore <paul@paul-moore.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-12-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf,lsm: refactor bpf_map_alloc/bpf_map_free LSM hooks  (Andrii Nakryiko)
Similarly to bpf_prog_alloc LSM hook, rename and extend bpf_map_alloc hook into bpf_map_create, taking not just struct bpf_map, but also bpf_attr and bpf_token, to give a fuller context to LSMs. Unlike bpf_prog_alloc, there is no need to move the hook around, as it currently is firing right before allocating BPF map ID and FD, which seems to be a sweet spot. But like bpf_prog_alloc/bpf_prog_free combo, make sure that bpf_map_free LSM hook is called even if bpf_map_create hook returned error, as if few LSMs are combined together it could be that one LSM successfully allocated security blob for its needs, while subsequent LSM rejected BPF map creation. The former LSM would still need to free up LSM blob, so we need to ensure security_bpf_map_free() is called regardless of the outcome. Acked-by: Paul Moore <paul@paul-moore.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-11-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf,lsm: refactor bpf_prog_alloc/bpf_prog_free LSM hooks  (Andrii Nakryiko)
Based on upstream discussion ([0]), rework existing bpf_prog_alloc_security LSM hook. Rename it to bpf_prog_load and instead of passing bpf_prog_aux, pass proper bpf_prog pointer for a full BPF program struct. Also, we pass bpf_attr union with all the user-provided arguments for BPF_PROG_LOAD command. This will give LSMs as much information as we can basically provide. The hook is also BPF token-aware now, and optional bpf_token struct is passed as a third argument. bpf_prog_load LSM hook is called after a bunch of sanity checks were performed, bpf_prog and bpf_prog_aux were allocated and filled out, but right before performing full-fledged BPF verification step. bpf_prog_free LSM hook is now accepting struct bpf_prog argument, for consistency. SELinux code is adjusted to all new names, types, and signatures. Note, given that bpf_prog_load (previously bpf_prog_alloc) hook can be used by some LSMs to allocate extra security blob, but also by other LSMs to reject BPF program loading, we need to make sure that bpf_prog_free LSM hook is called after bpf_prog_load/bpf_prog_alloc one *even* if the hook itself returned error. If we don't do that, we run the risk of leaking memory. This seems to be possible today when combining SELinux and BPF LSM, as one example, depending on their relative ordering. Also, for BPF LSM setup, add bpf_prog_load and bpf_prog_free to sleepable LSM hooks list, as they are both executed in sleepable context. Also drop bpf_prog_load hook from untrusted, as there is no issue with refcount or anything else anymore, that originally forced us to add it to untrusted list in c0c852dd1876 ("bpf: Do not mark certain LSM hook arguments as trusted"). We now trigger this hook much later and it should not be an issue anymore. [0] https://lore.kernel.org/bpf/9fe88aef7deabbe87d3fc38c4aea3c69.paul@paul-moore.com/ Acked-by: Paul Moore <paul@paul-moore.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-10-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf: consistently use BPF token throughout BPF verifier logic  (Andrii Nakryiko)
Remove remaining direct queries to perfmon_capable() and bpf_capable() in BPF verifier logic and instead use BPF token (if available) to make decisions about privileges. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-9-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf: take into account BPF token when fetching helper protos  (Andrii Nakryiko)
Instead of performing unconditional system-wide bpf_capable() and perfmon_capable() calls inside bpf_base_func_proto() function (and other similar ones) to determine eligibility of a given BPF helper for a given program, use previously recorded BPF token during BPF_PROG_LOAD command handling to inform the decision. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-8-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf: add BPF token support to BPF_PROG_LOAD command  (Andrii Nakryiko)
Add basic support of BPF token to BPF_PROG_LOAD. Wire through a set of allowed BPF program types and attach types, derived from BPF FS at BPF token creation time. Then make sure we perform bpf_token_capable() checks everywhere where it's relevant. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-7-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf: add BPF token support to BPF_MAP_CREATE command  (Andrii Nakryiko)
Allow providing token_fd for BPF_MAP_CREATE command to allow controlled BPF map creation from unprivileged process through delegated BPF token. Wire through a set of allowed BPF map types to BPF token, derived from BPF FS at BPF token creation time. This, in combination with allowed_cmds allows to create a narrowly-focused BPF token (controlled by privileged agent) with a restrictive set of BPF maps that application can attempt to create. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-5-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  bpf: introduce BPF token object  (Andrii Nakryiko)
Add new kind of BPF kernel object, BPF token. BPF token is meant to allow delegating privileged BPF functionality, like loading a BPF program or creating a BPF map, from privileged process to a *trusted* unprivileged process, all while having a good amount of control over which privileged operations could be performed using provided BPF token. This is achieved through mounting BPF FS instance with extra delegation mount options, which determine what operations are delegatable, and also constraining it to the owning user namespace (as mentioned in the previous patch). BPF token itself is just a derivative from BPF FS and can be created through a new bpf() syscall command, BPF_TOKEN_CREATE, which accepts BPF FS FD, which can be attained through open() API by opening BPF FS mount point. Currently, BPF token "inherits" delegated command, map types, prog type, and attach type bit sets from BPF FS as is. In the future, having an BPF token as a separate object with its own FD, we can allow to further restrict BPF token's allowable set of things either at the creation time or after the fact, allowing the process to guard itself further from unintentionally trying to load undesired kind of BPF programs. But for now we keep things simple and just copy bit sets as is. When BPF token is created from BPF FS mount, we take reference to the BPF super block's owning user namespace, and then use that namespace for checking all the {CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, CAP_SYS_ADMIN} capabilities that are normally only checked against init userns (using capable()), but now we check them using ns_capable() instead (if BPF token is provided). See bpf_token_capable() for details. Such setup means that BPF token in itself is not sufficient to grant BPF functionality. User namespaced process has to *also* have necessary combination of capabilities inside that user namespace. So while previously CAP_BPF was useless when granted within user namespace, now it gains a meaning and allows container managers and sys admins to have a flexible control over which processes can and need to use BPF functionality within the user namespace (i.e., container in practice). And BPF FS delegation mount options and derived BPF tokens serve as a per-container "flag" to grant overall ability to use bpf() (plus further restrict on which parts of bpf() syscalls are treated as namespaced). Note also, BPF_TOKEN_CREATE command itself requires ns_capable(CAP_BPF) within the BPF FS owning user namespace, rounding up the ns_capable() story of BPF token. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
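For orientation, a hedged sketch of how a process might derive a token from an already-mounted, delegation-enabled BPF FS. The attr field name follows the description above and should be verified against the final UAPI headers:

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Sketch: open the BPF FS mount point and create a token FD from it. */
static int example_bpf_token_create(const char *bpffs_path)
{
	union bpf_attr attr = {};
	int bpffs_fd, token_fd;

	bpffs_fd = open(bpffs_path, O_RDONLY);
	if (bpffs_fd < 0)
		return -1;

	attr.token_create.bpffs_fd = bpffs_fd;	/* assumed field name */
	token_fd = syscall(__NR_bpf, BPF_TOKEN_CREATE, &attr, sizeof(attr));

	close(bpffs_fd);
	return token_fd;	/* negative on error */
}
```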
2023-12-06  bpf: add BPF token delegation mount options to BPF FS  (Andrii Nakryiko)
Add a few new mount options to BPF FS that allow to specify that a given BPF FS instance allows creation of a BPF token (added in the next patch), and what sort of operations are allowed under BPF token. As such, we get 4 new mount options, each is a bit mask:

- `delegate_cmds` specifies which bpf() syscall commands are allowed with a BPF token derived from this BPF FS instance;
- if the BPF_MAP_CREATE command is allowed, `delegate_maps` specifies a set of allowable BPF map types that could be created with a BPF token;
- if the BPF_PROG_LOAD command is allowed, `delegate_progs` specifies a set of allowable BPF program types that could be loaded with a BPF token;
- if the BPF_PROG_LOAD command is allowed, `delegate_attachs` specifies a set of allowable BPF program attach types that could be loaded with a BPF token.

delegate_progs and delegate_attachs are meant to be used together, as the full BPF program type is, in general, determined through both program type and program attach type.

Currently, these mount options accept the following forms of values:
- a special value "any", that enables all possible values of a given bit set;
- a numeric value (decimal or hexadecimal, determined by the kernel automatically) that specifies a bit mask value directly;
- all the values for a given mount option are combined, if specified multiple times. E.g., `mount -t bpf nodev /path/to/mount -o delegate_maps=0x1 -o delegate_maps=0x2` will result in a combined 0x3 mask.

Ideally, a more convenient (for humans) symbolic form derived from the corresponding UAPI enums would be accepted (e.g., `-o delegate_progs=kprobe|tracepoint`) and I intend to implement this, but it requires a bunch of UAPI header churn, so I postponed it until this feature lands upstream or at least there is a definite consensus that this feature is acceptable and is going to make it, just to minimize the amount of wasted effort and not increase the amount of non-essential code to be reviewed.

An attentive reader will notice that BPF FS is now marked as FS_USERNS_MOUNT, which theoretically makes it mountable inside a non-init user namespace as long as the process has sufficient *namespaced* capabilities within that user namespace. But in reality we still restrict BPF FS to be mountable only by processes with CAP_SYS_ADMIN *in init userns* (extra check in bpf_fill_super()). FS_USERNS_MOUNT is added to allow creating a BPF FS context object (i.e., fsopen("bpf")) from inside an unprivileged process inside a non-init userns, to capture that userns as the owning userns. It will still be required to pass this context object back to a privileged process to instantiate and mount it.

This manipulation is important, because capturing a non-init userns as the owning userns of a BPF FS instance (super block) allows that userns to be used to constrain the BPF token to that userns later on (see next patch). So creating BPF FS with delegation inside an unprivileged userns will restrict derived BPF token objects to only "work" inside that intended userns, making it scoped to the intended "container". Also, setting these delegation options requires capable(CAP_SYS_ADMIN), so an unprivileged process cannot set this up without the involvement of a privileged process.

There is a set of selftests at the end of the patch set that simulates this sequence of steps and validates that everything works as intended. But careful review is requested to make sure there are no missed gaps in the implementation and testing.
This somewhat subtle set of aspects is the result of previous discussions ([0]) about various user namespace implications and interactions with BPF token functionality and is necessary to contain BPF token inside intended user namespace. [0] https://lore.kernel.org/bpf/20230704-hochverdient-lehne-eeb9eeef785e@brauner/ Acked-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20231130185229.2688956-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-12-06  ACPI: bus: update acpi_dev_hid_uid_match() to support multiple types  (Raag Jadav)
Now that we have _UID matching support for both integer and string types, we can support them in the acpi_dev_hid_uid_match() helper as well. Signed-off-by: Raag Jadav <raag.jadav@intel.com> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-12-06  ACPI: bus: update acpi_dev_uid_match() to support multiple types  (Raag Jadav)
According to the ACPI specification, a _UID object can evaluate to either a numeric value or a string. Update acpi_dev_uid_match() to support _UID matching for both integer and string types. Suggested-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Raag Jadav <raag.jadav@intel.com> [ rjw: Rename auxiliary macros, relocate kerneldoc comment ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
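A short usage sketch of what the updated matching allows; this assumes the helper dispatches on the argument type (e.g. as a type-generic macro), which should be checked against the final header:

```c
#include <linux/acpi.h>

/* Sketch: _UID can now be matched either as an integer or as a string. */
static bool example_uid_matches(struct acpi_device *adev)
{
	return acpi_dev_uid_match(adev, 1) ||	/* integer _UID */
	       acpi_dev_uid_match(adev, "1");	/* string _UID  */
}
```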
2023-12-06  Merge tag 'ffa-fixes-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux into arm/fixes  (Arnd Bergmann)
Arm FF-A fixes for v6.7

A bunch of fixes addressing issues around the notification support that was added this cycle. They address an issue in partition ID handling in ffa_notification_info_get(), the notifications cleanup path, and the size of the allocation in ffa_partitions_cleanup(). It also adds a check for the notification enabled state so that drivers registering the callbacks can be rejected if not enabled/supported. It also moves the partitions setup operation after the notification initialisation so that the driver has the correct state for notification enabled/supported before the partitions are initialised/setup.

It also now allows FF-A initialisation to complete successfully even when the notification initialisation fails, as it is optional support in the specification. Initial support allowed this only if the firmware didn't support notifications. Finally, it also adds a fix for a smatch warning by declaring the ffa_bus_type structure in the header.

* tag 'ffa-fixes-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux:
  firmware: arm_ffa: Fix ffa_notification_info_get() IDs handling
  firmware: arm_ffa: Fix the size of the allocation in ffa_partitions_cleanup()
  firmware: arm_ffa: Fix FFA notifications cleanup path
  firmware: arm_ffa: Add checks for the notification enabled state
  firmware: arm_ffa: Setup the partitions after the notification initialisation
  firmware: arm_ffa: Allow FF-A initialisation even when notification fails
  firmware: arm_ffa: Declare ffa_bus_type structure in the header

Link: https://lore.kernel.org/r/20231116191603.929767-1-sudeep.holla@arm.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-12-06  Merge tag 'asahi-soc-mailbox-6.8' of https://github.com/AsahiLinux/linux into soc/drivers  (Arnd Bergmann)
Apple SoC mailbox updates for 6.8

This moves the mailbox driver out of the mailbox subsystem and into SoC, next to its only consumer (RTKit). It has been cooking in linux-next for a long while, so it's time to pull it in.

* tag 'asahi-soc-mailbox-6.8' of https://github.com/AsahiLinux/linux:
  soc: apple: mailbox: Add explicit include of platform_device.h
  soc: apple: mailbox: Rename config symbol to APPLE_MAILBOX
  mailbox: apple: Delete driver
  soc: apple: rtkit: Port to the internal mailbox driver
  soc: apple: mailbox: Add ASC/M3 mailbox driver
  soc: apple: rtkit: Get rid of apple_rtkit_send_message_wait

Link: https://lore.kernel.org/r/6e64472e-c55d-4499-9a61-da59cfd28021@marcan.st
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-12-06  cpu/hotplug: Remove unused CPU hotplug states  (Zenghui Yu)
There are hotplug states which either have never been used or whose usage was removed without removing the state constant. Drop them to reduce the size of the cpuhp_hp_states array. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20231124121615.1604-1-yuzenghui@huawei.com