path: root/include/linux
Age    Commit message    Author
2023-08-02net: add hwtstamping helpers for stackable net devicesMaxim Georgiev
The stackable net devices with hwtstamping support (vlan, macvlan, bonding) only pass the hwtstamping ops to the lower (real) device. These drivers are the first that need to be converted to the new timestamping API, because if they aren't prepared to handle that, then no real device driver can be converted to the new API either. After studying what vlan_dev_ioctl(), macvlan_eth_ioctl() and bond_eth_ioctl() have in common, here we propose two generic implementations of ndo_hwtstamp_get() and ndo_hwtstamp_set() which can be called by those 3 drivers, with "dev" being their lower device. These helpers cover both cases: when the lower driver has been converted to the new API and when it is still unconverted. We need some hacks in case of an unconverted driver, namely to stuff some pointers in struct kernel_hwtstamp_config which shouldn't have been there (since the new API isn't supposed to need them). These will be removed once all drivers have been converted to the new API. Signed-off-by: Maxim Georgiev <glipus@gmail.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20230801142824.1772134-3-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
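As a rough illustration (not the exact helpers added by this patch), delegating a get request to a lower device that already implements the new NDO could look like the sketch below; the function name is hypothetical and the fallback path for unconverted lower drivers is only hinted at:

    static int hwtstamp_get_lower_sketch(struct net_device *lower_dev,
                                         struct kernel_hwtstamp_config *kernel_cfg)
    {
            const struct net_device_ops *ops = lower_dev->netdev_ops;

            /* lower device gone (e.g. being unregistered) */
            if (!netif_device_present(lower_dev))
                    return -ENODEV;

            /* lower driver already converted to the new API: call it directly */
            if (ops->ndo_hwtstamp_get)
                    return ops->ndo_hwtstamp_get(lower_dev, kernel_cfg);

            /* unconverted lower driver: the real helpers fall back to
             * ndo_eth_ioctl() with a translated struct ifreq (omitted here) */
            return -EOPNOTSUPP;
    }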
2023-08-02net: add NDOs for configuring hardware timestampingMaxim Georgiev
Current hardware timestamping API for NICs requires implementing .ndo_eth_ioctl() for SIOCGHWTSTAMP and SIOCSHWTSTAMP. That API has some boilerplate such as request parameter translation between user and kernel address spaces, handling possible translation failures correctly, etc. Since it is the same all across the board, it would be desirable to handle it through generic code. Here we introduce .ndo_hwtstamp_get() and .ndo_hwtstamp_set(), which implement that boilerplate and allow drivers to just act upon requests. Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Maxim Georgiev <glipus@gmail.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Horatiu Vultur <horatiu.vultur@microchip.com> Link: https://lore.kernel.org/r/20230801142824.1772134-2-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
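A minimal sketch of the boilerplate the core can now perform once for every driver is shown below, assuming conversion helpers along the lines of hwtstamp_config_to_kernel()/hwtstamp_config_from_kernel() (names taken from this series); this is illustrative, not the exact dev_ioctl.c code:

    static int dev_hwtstamp_set_sketch(struct net_device *dev, struct ifreq *ifr,
                                       struct netlink_ext_ack *extack)
    {
            struct kernel_hwtstamp_config kernel_cfg = {};
            struct hwtstamp_config cfg;
            int err;

            /* translate the request from user space once, for all drivers */
            if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
                    return -EFAULT;
            hwtstamp_config_to_kernel(&kernel_cfg, &cfg);

            /* the driver only acts upon the already-translated request */
            err = dev->netdev_ops->ndo_hwtstamp_set(dev, &kernel_cfg, extack);
            if (err)
                    return err;

            /* copy the (possibly adjusted) config back on success only */
            hwtstamp_config_from_kernel(&cfg, &kernel_cfg);
            if (copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)))
                    return -EFAULT;
            return 0;
    }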
2023-08-02net/mlx5e: Make TC and IPsec offloads mutually exclusive on a netdevJianbo Liu
For IPsec packet offload mode, the order of TC offload and IPsec offload on the same netdevice is not aligned with the order in the non-offload software. For example, for RX, the software performs TC first and then IPsec transformation, but the implementation for offload does that in the opposite way. To resolve the difference for now, either IPsec offload or TC offload, not both, is allowed for a specific interface. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/8e2e5e3b0984d785066e8663aaf97b3ba1bb873f.1690802064.git.leon@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-02net/mlx5e: Support IPsec packet offload for TX in switchdev modeJianbo Liu
IPsec encryption is done last, so add a new prio for IPsec offload in FDB, and put it just lower than the slow path prio and higher than the per-vport prio. Three levels are added for TX. The first one is for ip xfrm policy. The sa table is created in the second level for ip xfrm state. The status table is created last to count the number of encrypted packets. The rules, which forward packets to uplink, are changed to forward them to the IPsec TX tables first. These rules are restored after those tables are destroyed, which happens immediately when there is no reference to them, just as is done in legacy mode. Support for the slow path is added here by refreshing the uplink's channels, but the handling for the TC fast path, which is more complicated, will be added later. Besides, reg c4 is now used instead to match reqid. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/cfd0e6ffaf0b8c55ebaa9fb0649b7c504b6b8ec6.1690802064.git.leon@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-02net/mlx5e: Handle IPsec offload for RX datapath in switchdev modeJianbo Liu
Reuse the tun opts bits in reg c1 to pass the IPsec obj id to the datapath. As this is only for RX SAs and there are only 11 bits, an xarray is used to map the IPsec obj id to an index between 1 and 0x7ff, and that index, rather than the obj id, is written to reg c1. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/43d60fbcc9cd672a97d7e2a2f7fe6a3d9e9a776d.1690802064.git.leon@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
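A hedged sketch of the mapping technique (names are illustrative, not the mlx5e code): an allocating xarray, assumed to be initialized with XA_FLAGS_ALLOC, keeps the index inside the 11-bit range that fits in reg c1:

    static int ipsec_objid_to_idx_sketch(struct xarray *xa, u32 obj_id, u32 *index)
    {
            /* allocate a free index in [1, 0x7ff] and remember which obj id
             * it stands for; reg c1 then carries the index, not the obj id */
            return xa_alloc(xa, index, xa_mk_value(obj_id),
                            XA_LIMIT(1, 0x7ff), GFP_KERNEL);
    }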
2023-08-02net/mlx5e: Support IPsec packet offload for RX in switchdev modeJianbo Liu
As decryption must be done first, add a new prio for IPsec offload in FDB, and put it just lower than the BYPASS prio and higher than the TC prio. Three levels are added for RX. The first one is for ip xfrm policy. The SA table is created in the second level for ip xfrm state. The status table is created last to check the decryption result. If decryption succeeds, packets continue to the next stage; otherwise they are dropped. For now, the setting of reg c1 is removed for switchdev mode, and the datapath processing will be added in the next patch. Signed-off-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/c91063554cf643fb50b99cf093e8a9bf11729de5.1690802064.git.leon@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-08-02Merge tag 'soc-fixes-6.5-2' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc Pull ARM SoC fixes from Arnd Bergmann: "A couple of platforms get a lone dts fix each: - SoCFPGA: Fix incorrect I2C property for SCL signal - Renesas: Fix interrupt names for MTU3 channels on RZ/G2L and RZ/V2L. - Juno/Vexpress: remove a dangling symlink - at91: sam9x60 SoC detection compatible strings - nspire: Fix arm primecell compatible string On the NXP i.MX platform, there are multiple issues that get addressed: - A couple of ARM DTS fixes for i.MX6SLL usbphy and supported CPU frequency of sk-imx53 board - Add missing pull-up for imx8mn-var-som onboard PHY reset pinmux - A couple of imx8mm-venice fixes from Tim Harvey to disable disp_blk_ctrl - A couple of phycore-imx8mm fixes from Yashwanth Varakala to correct VPU label and gpio-line-names - Fix imx8mp-blk-ctrl driver to register HSIO PLL clock as bus_power_dev child, so that runtime PM can translate into the necessary GPC power domain action On the driver side, there are two fixes for tegra memory controller drivers addressing regressions from the merge window, a couple of minor correctness fixes for SCMI and SMCCC firmware, as well as a build fix for an lcd backlight driver" * tag 'soc-fixes-6.5-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (22 commits) backlight: corgi_lcd: fix missing prototype memory: tegra: make icc_set_bw return zero if BWMGR not supported arm64: dts: renesas: rzg2l: Update overfow/underflow IRQ names for MTU3 channels dt-bindings: serial: atmel,at91-usart: update compatible for sam9x60 ARM: dts: at91: sam9x60: fix the SOC detection ARM: dts: nspire: Fix arm primecell compatible string firmware: arm_scmi: Fix chan_free cleanup on SMC firmware: arm_scmi: Drop OF node reference in the transport channel setup soc: imx: imx8mp-blk-ctrl: register HSIO PLL clock as bus_power_dev child ARM: dts: nxp/imx: limit sk-imx53 supported frequencies firmware: arm_scmi: Fix signed error return values handling firmware: smccc: Fix use of uninitialised results structure arm64: dts: freescale: Fix VPU G2 clock arm64: dts: imx8mn-var-som: add missing pull-up for onboard PHY reset pinmux arm64: dts: phycore-imx8mm: Correction in gpio-line-names arm64: dts: phycore-imx8mm: Label typo-fix of VPU ARM: dts: nxp/imx6sll: fix wrong property name in usbphy node arm64: dts: imx8mm-venice-gw7904: disable disp_blk_ctrl arm64: dts: imx8mm-venice-gw7903: disable disp_blk_ctrl arm64: dts: arm: Remove the dangling vexpress-v2m-rs1.dtsi symlink ...
2023-08-02Merge tag 'bitmap-6.5-rc5' of https://github.com:/norov/linuxLinus Torvalds
Pull bitmap fixes from Yury Norov: - Fix for bitmap documentation - Fix for kernel build under certain configurations * tag 'bitmap-6.5-rc5' of https://github.com:/norov/linux: lib/bitmap: workaround const_eval test build failure cpumask: eliminate kernel-doc warnings
2023-08-02Drivers: hv: vmbus: Remove unused extern declaration vmbus_ontimer()YueHaibing
Since commit 30fbee49b071 ("Staging: hv: vmbus: Get rid of the unused function vmbus_ontimer()") this is not used anymore, so can remove it. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Link: https://lore.kernel.org/r/20230725142108.27280-1-yuehaibing@huawei.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2023-08-02x86/shstk: Move arch detail comment out of core mmRick Edgecombe
The comment around VM_SHADOW_STACK in mm.h refers to a lot of x86 specific details that don't belong in a cross arch file. Move these out of core mm, and just leave the non-arch details. Since the comment includes some useful details that would be good to retain in the source somewhere, put the arch specific parts in arch/x86/shstk.c near alloc_shstk(), where memory of this type is allocated. Include a reference to the existence of the x86 details near the VM_SHADOW_STACK definition in mm.h. Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/all/20230706233248.445713-1-rick.p.edgecombe%40intel.com
2023-08-02x86: Expose thread features in /proc/$PID/statusRick Edgecombe
Applications and loaders can have logic to decide whether to enable shadow stack. They usually don't report whether shadow stack has been enabled or not, so there is no way to verify whether an application actually is protected by shadow stack. Add two lines in /proc/$PID/status to report enabled and locked features. Since this involves referring to arch specific defines in asm/prctl.h, implement an arch breakout to emit the feature lines. [Switched to CET, added to commit log] Co-developed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Mike Rapoport (IBM) <rppt@kernel.org> Tested-by: Pengfei Xu <pengfei.xu@intel.com> Tested-by: John Allen <john.allen@amd.com> Tested-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/all/20230613001108.3040476-37-rick.p.edgecombe%40intel.com
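A small userspace sketch of how a program could check the new lines; the exact field name ("x86_Thread_features") is an assumption based on this series and may differ:

    /* print the x86 thread feature lines from /proc/self/status, if any */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[256];
            FILE *f = fopen("/proc/self/status", "r");

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, "x86_Thread_features", 19))
                            fputs(line, stdout);
            fclose(f);
            return 0;
    }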
2023-08-02x86/shstk: Introduce map_shadow_stack syscallRick Edgecombe
When operating with shadow stacks enabled, the kernel will automatically allocate shadow stacks for new threads; however, in some cases userspace will need additional shadow stacks. The main example of this is the ucontext family of functions, which require userspace to allocate and pivot to userspace managed stacks. Unlike most other user memory permissions, shadow stacks need to be provisioned with special data in order to be useful. They need to be set up with a restore token so that userspace can pivot to them via the RSTORSSP instruction. But the security design of shadow stacks is that they should not be written to except in limited circumstances. This presents a problem for userspace: how can it provision this special data without allowing the shadow stack to be generally writable? Previously, a new PROT_SHADOW_STACK was attempted, which could be mprotect()ed from RW permissions after the data was provisioned. This was found to not be secure enough, as other threads could write to the shadow stack during the writable window. The kernel can use a special instruction, WRUSS, to write directly to userspace shadow stacks. So the solution can be that memory is mapped with shadow stack permissions from the beginning (never generally writable in userspace), and the kernel itself can write the restore token. First, a new madvise() flag was explored, which could operate on the PROT_SHADOW_STACK memory. This had a couple of downsides: 1. Extra checks were needed in mprotect() to prevent writable memory from ever becoming PROT_SHADOW_STACK. 2. Extra checks/vma state were needed in the new madvise() to prevent restore tokens being written into the middle of pre-used shadow stacks. It is ideal to prevent restore tokens being added at arbitrary locations, so the check was to make sure the shadow stack had never been written to. 3. It stood out from the rest of the madvise flags, as more of a direct action than a hint at future desired behavior. So rather than repurpose two existing syscalls (mmap, madvise) that don't quite fit, just implement a new map_shadow_stack syscall to allow userspace to map and set up new shadow stacks in one step. While ucontext is the primary motivator, userspace may have other unforeseen reasons to set up its own shadow stacks using the WRSS instruction. Towards this, provide a flag so that stacks can optionally be set up securely for the common case of ucontext without enabling WRSS. Or potentially have the kernel set up the shadow stack in some new way. The following example demonstrates how to create a new shadow stack with map_shadow_stack: void *shstk = map_shadow_stack(addr, stack_size, SHADOW_STACK_SET_TOKEN); Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Mike Rapoport (IBM) <rppt@kernel.org> Tested-by: Pengfei Xu <pengfei.xu@intel.com> Tested-by: John Allen <john.allen@amd.com> Tested-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/all/20230613001108.3040476-35-rick.p.edgecombe%40intel.com
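Expanding the one-line example above into a standalone userspace sketch that calls the syscall directly; the syscall number (453 on x86-64) and the SHADOW_STACK_SET_TOKEN value (bit 0) are assumptions taken from this series and may differ on other kernels:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef __NR_map_shadow_stack
    #define __NR_map_shadow_stack 453            /* assumption: x86-64 number */
    #endif
    #ifndef SHADOW_STACK_SET_TOKEN
    #define SHADOW_STACK_SET_TOKEN (1UL << 0)    /* assumption: flag value */
    #endif

    int main(void)
    {
            unsigned long stack_size = 4 * 4096;
            /* let the kernel pick the address and write the restore token */
            long ret = syscall(__NR_map_shadow_stack, 0UL, stack_size,
                               SHADOW_STACK_SET_TOKEN);

            if (ret == -1) {
                    perror("map_shadow_stack");
                    return 1;
            }
            printf("shadow stack mapped at %p\n", (void *)ret);
            return 0;
    }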
2023-08-02bpf: fix bpf_probe_read_kernel prototype mismatchArnd Bergmann
bpf_probe_read_kernel() has a __weak definition in core.c and another definition with an incompatible prototype in kernel/trace/bpf_trace.c, when CONFIG_BPF_EVENTS is enabled. Since the two are incompatible, there cannot be a shared declaration in a header file, but the lack of a prototype causes a W=1 warning: kernel/bpf/core.c:1638:12: error: no previous prototype for 'bpf_probe_read_kernel' [-Werror=missing-prototypes] On 32-bit architectures, the local prototype u64 __weak bpf_probe_read_kernel(void *dst, u32 size, const void *unsafe_ptr) passes arguments in different registers than the one in bpf_trace.c BPF_CALL_3(bpf_probe_read_kernel, void *, dst, u32, size, const void *, unsafe_ptr) which uses 64-bit arguments in pairs of registers. As both versions of the function are fairly simple and only really differ in one line, just move them into a header file as an inline function that does not add any overhead for the bpf_trace.c callers and actually avoids a function call for the other one. Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/ac25cb0f-b804-1649-3afb-1dc6138c2716@iogearbox.net/ Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20230801111449.185301-1-arnd@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
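A hedged sketch of what such a shared inline helper can look like (the actual helper name and placement in the fix may differ); both callers only need a nofault copy plus zeroing on failure:

    static inline int bpf_probe_read_kernel_sketch(void *dst, u32 size,
                                                   const void *unsafe_ptr)
    {
            int ret = copy_from_kernel_nofault(dst, unsafe_ptr, size);

            if (unlikely(ret < 0))
                    memset(dst, 0, size);   /* don't leak stale data to the caller */
            return ret;
    }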
2023-08-02fs: add CONFIG_BUFFER_HEADChristoph Hellwig
Add a new config option that controls building the buffer_head code, and select it from all file systems and stacking drivers that need it. For the block device nodes an alternative iomap based buffered I/O path is provided when buffer_head support is not enabled, and iomap needs a small tweak to define the IOMAP_F_BUFFER_HEAD flag as 0 so that it does not call into the buffer_head code when it doesn't exist. Otherwise this is just Kconfig and ifdef changes. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Link: https://lore.kernel.org/r/20230801172201.1923299-7-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-08-02fs: rename and move block_page_mkwrite_returnChristoph Hellwig
block_page_mkwrite_return is neither block nor mkwrite specific, and should not be under CONFIG_BLOCK. Move it to mm.h and rename it to vmf_fs_error. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Christian Brauner <brauner@kernel.org> Link: https://lore.kernel.org/r/20230801172201.1923299-3-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
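For reference, a minimal sketch of the renamed helper's behaviour, mirroring what block_page_mkwrite_return did (not necessarily the verbatim mm.h code): map a filesystem errno to a vm_fault_t.

    static inline vm_fault_t vmf_fs_error_sketch(int err)
    {
            if (err == 0)
                    return VM_FAULT_LOCKED;
            /* -EFAULT/-EAGAIN: let the VM core retry the fault */
            if (err == -EFAULT || err == -EAGAIN)
                    return VM_FAULT_NOPAGE;
            if (err == -ENOMEM)
                    return VM_FAULT_OOM;
            /* -ENOSPC, -EDQUOT, -EIO ... */
            return VM_FAULT_SIGBUS;
    }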
2023-08-02x86/kprobes: Prohibit probing on compiler generated CFI checking codeMasami Hiramatsu
Prohibit probing on the compiler generated CFI typeid checking code because it is used for decoding the typeid when a CFI error happens. The compiler generates the following instruction sequence for indirect call checks on x86:

    movl    -<id>, %r10d        ; 6 bytes
    addl    -4(%reg), %r10d     ; 4 bytes
    je      .Ltmp1              ; 2 bytes
    ud2                         ; <- regs->ip

And handle_cfi_failure() decodes these instructions (movl and addl) for the typeid and the target address. Thus if we put a kprobe on those instructions, the decode will fail and report a wrong typeid and target address. Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/168904025785.116016.12766408611437534723.stgit@devnote2
2023-08-02ata: libata: remove deprecated EH callbacksNiklas Cassel
Now that all libata drivers have migrated to use the error_handler callback, remove the deprecated phy_reset and eng_timeout callbacks. Also remove references to non-existent functions sata_phy_reset and ata_qc_timeout from Documentation/driver-api/libata.rst. Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Reviewed-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata: libata-core: remove ata_bus_probe()Niklas Cassel
Remove ata_bus_probe() as it is unused. Also, remove references to ata_bus_probe and port_disable in Documentation/driver-api/libata.rst, as neither exist anymore. Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata,scsi: remove ata_sas_port_init()Niklas Cassel
ata_sas_port_init() now only contains a single initialization. Move this single initialization to ata_sas_port_alloc(), since: 1) ata_sas_port_alloc() already initializes some of the struct members. 2) ata_sas_port_alloc() is only used by libsas. Suggested-by: John Garry <john.g.garry@oracle.com> Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata,scsi: cleanup __ata_port_probe()Hannes Reinecke
Rename __ata_port_probe() to ata_port_probe() and drop the wrapper ata_sas_async_probe(). Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata: libata-sata: remove ata_sas_sync_probe()Hannes Reinecke
Unused. Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata,scsi: remove ata_sas_port_destroy()Hannes Reinecke
ata_sas_port_destroy() is now a wrapper around kfree(), so call kfree() directly. Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata,scsi: remove ata_sas_port_{start,stop} callbacksHannes Reinecke
Callbacks are empty now, so remove them. Also, remove the call to ap->ops->port_start() in ata_sas_port_init(), as this would otherwise cause a NULL pointer dereference, now that the callback is gone. Signed-off-by: Hannes Reinecke <hare@suse.de> [niklas: remove the call to ap->ops->port_start() in ata_sas_port_init()] Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata: libata: remove references to non-existing error_handler()Hannes Reinecke
With commit 65a15d6560df ("scsi: ipr: Remove SATA support") all libata drivers now have the error_handler() callback provided, so we can stop checking for non-existing error_handler callback. Signed-off-by: Hannes Reinecke <hare@suse.de> [niklas: fixed review comments, rebased, solved conflicts during rebase, fixed bug that unconditionally dumped all QCs, removed the now unused function ata_dump_status(), removed the now unreachable failure paths in atapi_qc_complete(), removed the non-EH function to request ATAPI sense] Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Reviewed-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata: fix debounce timings typeSergey Shtylyov
sata_deb_timing_{hotplug|long|normal}[] store 'unsigned long' debounce timeouts in ms, while sata_link_debounce() eventually uses those timeouts by calling ata_{deadline|msleep}() which take just 'unsigned int'. Change the debounce timeout table element's type to 'unsigned int' -- all these timeouts happily fit into 'unsigned int'... Signed-off-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata: libata-core: fix parameter types of ata_wait_register()Sergey Shtylyov
ata_wait_register() passes its 'unsigned long {interval|timeout}' params verbatim to ata_{msleep|deadline}() that just take 'unsigned int' param for the time intervals in ms -- eliminate unneeded implicit casts... Signed-off-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-02ata: libata: fix parameter type of ata_deadline()Sergey Shtylyov
ata_deadline() passes its 'unsigned long timeout_msecs' parameter verbatim to msecs_to_jiffies() which takes just 'unsigned int' -- eliminate unneeded implicit cast... Signed-off-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2023-08-01Merge tag 'xfs-async-dio.6-2023-08-01' of git://git.kernel.dk/linux into ↵Darrick J. Wong
iomap-6.6-mergeA Improve iomap/xfs async dio write performance iomap always punts async dio write completions to a workqueue, which has a cost in terms of efficiency (now you need an unrelated worker to process it) and latency (now you're bouncing a completion through an async worker, which is a classic slowdown scenario). io_uring handles IRQ completions via task_work, and for writes that don't need to do extra IO at completion time, we can safely complete them inline from that. This patchset adds IOCB_DIO_CALLER_COMP, which an IO issuer can set to inform the completion side that any extra work that needs doing for that completion can be punted to a safe task context. The iomap dio completion will happen in hard/soft irq context, and we need a saner context to process these completions. IOCB_DIO_CALLER_COMP is added, which can be set in a struct kiocb->ki_flags by the issuer. If the completion side of the iocb handling understands this flag, it can choose to set a kiocb->dio_complete() handler and just call ki_complete from IRQ context. The issuer must then ensure that this callback is processed from a task. io_uring punts IRQ completions to task_work already, so it's trivial to wire it up to run more of the completion before posting a CQE. This is good for up to a 37% improvement in throughput/latency for low queue depth IO, patch 5 has the details. If we need to do real work at completion time, iomap will clear the IOMAP_DIO_CALLER_COMP flag. This work came about when Andres tested low queue depth dio writes for postgres and compared it to doing sync dio writes, showing that the async processing slows us down a lot. * tag 'xfs-async-dio.6-2023-08-01' of git://git.kernel.dk/linux: iomap: support IOCB_DIO_CALLER_COMP io_uring/rw: add write support for IOCB_DIO_CALLER_COMP fs: add IOCB flags related to passing back dio completions iomap: add IOMAP_DIO_INLINE_COMP iomap: only set iocb->private for polled bio iomap: treat a write through cache the same as FUA iomap: use an unsigned type for IOMAP_DIO_* defines iomap: cleanup up iomap_dio_bio_end_io() Signed-off-by: Darrick J. Wong <djwong@kernel.org>
2023-08-01fs: add IOCB flags related to passing back dio completionsJens Axboe
Async dio completions generally happen from hard/soft IRQ context, which means that users like iomap may need to defer some of the completion handling to a workqueue. This is less efficient than having the original issuer handle it, like we do for sync IO, and it adds latency to the completions. Add IOCB_DIO_CALLER_COMP, which the issuer can set if it is able to safely punt these completions to a safe context. If the dio handler is aware of this flag, it can assign a callback handler in kiocb->dio_complete and associated data in kiocb->private. The issuer will then call this handler with that data from task context. No functional changes in this patch. Reviewed-by: Darrick J. Wong <djwong@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
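A hedged sketch of how a dio completion path could honour the new flag, stashing a callback for the issuer instead of punting to a workqueue; the fields ki_flags, dio_complete and private follow this series, but the function shape and the fallback are illustrative, not the iomap code:

    static void dio_end_io_sketch(struct kiocb *iocb, void *dio,
                                  ssize_t (*complete)(void *data))
    {
            if (iocb->ki_flags & IOCB_DIO_CALLER_COMP) {
                    /* issuer promised to run dio_complete() from task context */
                    iocb->dio_complete = complete;
                    iocb->private = dio;
                    iocb->ki_complete(iocb, 0);
                    return;
            }
            /* otherwise: defer the remaining completion work to a workqueue */
    }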
2023-08-01fanotify: Remove unused extern declaration fsnotify_get_conn_fsid()YueHaibing
This is never used, so can remove it. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Jan Kara <jack@suse.cz> Message-Id: <20230725135528.25996-1-yuehaibing@huawei.com>
2023-08-01swiotlb: search the software IO TLB only if the device makes use of itPetr Tesarik
Skip searching the software IO TLB if a device has never used it, making sure these devices are not affected by the introduction of multiple IO TLB memory pools. Additional memory barrier is required to ensure that the new value of the flag is visible to other CPUs after mapping a new bounce buffer. For efficiency, the flag check should be inlined, and then the memory barrier must be moved to is_swiotlb_buffer(). However, it can replace the existing barrier in swiotlb_find_pool(), because all callers use is_swiotlb_buffer() first to verify that the buffer address belongs to the software IO TLB. Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
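A minimal sketch of the inlined fast path described above: skip the pool search entirely for devices that never bounced. The flag name (dma_uses_io_tlb) and the exact barrier pairing are assumptions based on this series:

    static inline bool is_swiotlb_buffer_sketch(struct device *dev, phys_addr_t paddr)
    {
            if (!READ_ONCE(dev->dma_uses_io_tlb))
                    return false;
            /* pairs with the write barrier issued after the device first
             * maps a bounce buffer and the flag is set */
            smp_rmb();
            return swiotlb_find_pool(dev, paddr) != NULL;
    }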
2023-08-01swiotlb: allocate a new memory pool when existing pools are fullPetr Tesarik
When swiotlb_find_slots() cannot find suitable slots, schedule the allocation of a new memory pool. It is not possible to allocate the pool immediately, because this code may run in interrupt context, which is not suitable for large memory allocations. This means that the memory pool will be available too late for the currently requested mapping, but the stress on the software IO TLB allocator is likely to continue, and subsequent allocations will benefit from the additional pool eventually. Keep all memory pools for an allocator in an RCU list to avoid locking on the read side. For modifications, add a new spinlock to struct io_tlb_mem. The spinlock also protects updates to the total number of slabs (nslabs in struct io_tlb_mem), but not reads of the value. Readers may therefore encounter a stale value, but this is not an issue: - swiotlb_tbl_map_single() and is_swiotlb_active() only check for non-zero value. This is ensured by the existence of the default memory pool, allocated at boot. - The exact value is used only for non-critical purposes (debugfs, kernel messages). Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
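A hedged sketch of the lockless read side: walk the allocator's pool list under RCU to find the pool containing a bounce-buffer address. Field names (pools, node, start, end) are illustrative, not the exact swiotlb.c layout:

    static struct io_tlb_pool *find_pool_sketch(struct io_tlb_mem *mem,
                                                phys_addr_t paddr)
    {
            struct io_tlb_pool *pool, *found = NULL;

            rcu_read_lock();
            list_for_each_entry_rcu(pool, &mem->pools, node) {
                    if (paddr >= pool->start && paddr < pool->end) {
                            found = pool;
                            break;
                    }
            }
            rcu_read_unlock();
            /* safe to use outside the read section: a pool backing a still
             * mapped buffer is not freed until the buffer is unmapped */
            return found;
    }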
2023-08-01swiotlb: determine potential physical address limitPetr Tesarik
The value returned by default_swiotlb_limit() should be constant, because it is used to decide whether DMA can be used. To allow allocating memory pools on the fly, use the maximum possible physical address rather than the highest address used by the default pool. For swiotlb_init_remap(), this is either an arch-specific limit used by memblock_alloc_low(), or the highest directly mapped physical address if the initialization flags include SWIOTLB_ANY. For swiotlb_init_late(), the highest address is determined by the GFP flags. Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-08-01swiotlb: if swiotlb is full, fall back to a transient memory poolPetr Tesarik
Try to allocate a transient memory pool if no suitable slots can be found and the respective SWIOTLB is allowed to grow. The transient pool is just big enough for this one bounce buffer. It is inserted into a per-device list of transient memory pools, and it is freed again when the bounce buffer is unmapped. Transient memory pools are kept in an RCU list. A memory barrier is required after adding a new entry, because any address within a transient buffer must be immediately recognized as belonging to the SWIOTLB, even if it is passed to another CPU. Deletion does not require any synchronization beyond RCU ordering guarantees. After a buffer is unmapped, its physical addresses may no longer be passed to the DMA API, so the memory range of the corresponding stale entry in the RCU list never matches. If the memory range gets allocated again, then it happens only after an RCU quiescent state. Since bounce buffers can now be allocated from different pools, add a parameter to swiotlb_alloc_pool() to let the caller know which memory pool is used. Add swiotlb_find_pool() to find the memory pool corresponding to an address. This function is now also used by is_swiotlb_buffer(), because a simple boundary check is no longer sufficient. The logic in swiotlb_alloc_tlb() is taken from __dma_direct_alloc_pages(), simplified and enhanced to use coherent memory pools if needed. Note that this is not the most efficient way to provide a bounce buffer, but when a DMA buffer can't be mapped, something may (and will) actually break. At that point it is better to make an allocation, even if it may be an expensive operation. Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-08-01swiotlb: add a flag whether SWIOTLB is allowed to growPetr Tesarik
Add a config option (CONFIG_SWIOTLB_DYNAMIC) to enable or disable dynamic allocation of additional bounce buffers. If this option is set, mark the default SWIOTLB as able to grow and restricted DMA pools as unable. However, if the address of the default memory pool is explicitly queried, make the default SWIOTLB also unable to grow. This is currently used to set up PCI BAR movable regions on some Octeon MIPS boards which may not be able to use a SWIOTLB pool elsewhere in physical memory. See octeon_pci_setup() for more details. If a remap function is specified, it must be also called on any dynamically allocated pools, but there are some issues: - The remap function may block, so it should not be called from an atomic context. - There is no corresponding unremap() function if the memory pool is freed. - The only in-tree implementation (xen_swiotlb_fixup) requires that the number of slots in the memory pool is a multiple of SWIOTLB_SEGSIZE. Keep it simple for now and disable growing the SWIOTLB if a remap function was specified. Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-08-01swiotlb: separate memory pool data from other allocator dataPetr Tesarik
Carve out memory pool specific fields from struct io_tlb_mem. The original struct now contains shared data for the whole allocator, while the new struct io_tlb_pool contains data that is specific to one memory pool of (potentially) many. Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-08-01swiotlb: add documentation and rename swiotlb_do_find_slots()Petr Tesarik
Add some kernel-doc comments and move the existing documentation of struct io_tlb_slot to its correct location. The latter was forgotten in commit 942a8186eb445 ("swiotlb: move struct io_tlb_slot to swiotlb.c"). Use the opportunity to give swiotlb_do_find_slots() a more descriptive name and make it clear how it differs from swiotlb_find_slots(). Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-08-01swiotlb: make io_tlb_default_mem local to swiotlb.cPetr Tesarik
SWIOTLB implementation details should not be exposed to the rest of the kernel. This will allow making changes to the implementation without modifying non-swiotlb code. To avoid breaking existing users, provide helper functions for the few required fields. As a bonus, using a helper function to initialize struct device allows getting rid of an #ifdef in driver core. Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-08-01powercap: intel_rapl: Fix a sparse warning in TPMI interfaceZhang Rui
Depending on the interface used, the RAPL registers can be either MSR indexes or memory mapped IO addresses. Current RAPL common code uses u64 to save both MSR and memory mapped IO registers. With this, when handling a register address with an __iomem annotation, sparse triggers a warning like the one below: sparse warnings: (new ones prefixed by >>) >> drivers/powercap/intel_rapl_tpmi.c:141:41: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected unsigned long long [usertype] *tpmi_rapl_regs @@ got void [noderef] __iomem * @@ drivers/powercap/intel_rapl_tpmi.c:141:41: sparse: expected unsigned long long [usertype] *tpmi_rapl_regs drivers/powercap/intel_rapl_tpmi.c:141:41: sparse: got void [noderef] __iomem * Fix the problem by using a union to save the registers instead. Suggested-by: David Laight <David.Laight@ACULAB.COM> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202307031405.dy3druuy-lkp@intel.com/ Tested-by: Wang Wendy <wendy.wang@intel.com> Signed-off-by: Zhang Rui <rui.zhang@intel.com> [ rjw: Subject and changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
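A hedged sketch of the idea behind the fix (member names in the actual patch may differ): keep the two address spaces in distinct union members so sparse checks each access against the right annotation instead of forcing everything through a plain u64.

    union rapl_reg_sketch {
            void __iomem *mmio;     /* memory mapped IO register (e.g. TPMI) */
            u64 msr;                /* MSR index (MSR interface) */
    };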
2023-08-01ASoC/SOF/Intel/AMD: cleanups for GCC11 -fanalyzerMark Brown
Merge series from Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>: GCC11 provides an '-fanalyzer' static analysis option which does not produce too many false positives. This patch cleans up known problematic code paths to help enable this capability in CI. We've used this for about a month already.
2023-08-01serial: core: Fix serial core port id to not use port->lineTony Lindgren
The serial core port id should be a serial core controller specific port instance, which is not always the port->line index. For example, the 8250 driver maps a number of legacy ports, and when a hardware specific device driver takes over, we typically have one driver instance for each port. Let's instead add port->port_id to keep track of serial ports mapped to each serial core controller instance. Currently this is only a cosmetic issue for the serial core port device names. The issue can be noticed looking at /sys/bus/serial-base/devices for example though. Let's fix the issue to avoid port addressing issues later on. Fixes: 84a9582fd203 ("serial: core: Start managing serial controllers to enable runtime PM") Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Link: https://lore.kernel.org/r/20230725054216.45696-3-tony@atomide.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-01serial: core: Controller id cannot be negativeTony Lindgren
The controller id cannot be negative. Let's fix the ctrl_id in preparation for adding port_id to fix the device name. Fixes: 84a9582fd203 ("serial: core: Start managing serial controllers to enable runtime PM") Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Link: https://lore.kernel.org/r/20230725054216.45696-2-tony@atomide.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-31ASoC: soc-acpi: move link_slaves_found()Pierre-Louis Bossart
Move existing function in common library to make sure the code can be reused by other SoC vendors. No functionality change outside of the move and added prefix. Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Reviewed-by: Rander Wang <rander.wang@intel.com> Reviewed-by: Bard Liao <yung-chuan.liao@linux.intel.com> Link: https://lore.kernel.org/r/20230731213242.434594-3-pierre-louis.bossart@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-07-31ASoC: SOF: mediatek: remove error checks on NULL ipcPierre-Louis Bossart
mtk_adsp_ipc_get_data() can return NULL, but the value is not checked before being used, leading to static analysis warnings. sound/soc/sof/mediatek/mt8195/mt8195.c:90:32: error: dereference of NULL ‘0’ [CWE-476] [-Werror=analyzer-null-dereference] 90 | spin_lock_irqsave(&priv->sdev->ipc_lock, flags); | ~~~~^~~~~~ It appears this is not really a possible problem, so remove those checks. Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Reviewed-by: Rander Wang <rander.wang@intel.com> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Reviewed-by: Yaochun Hung <yc.hung@mediatek.com> Link: https://lore.kernel.org/r/20230731213748.440285-6-pierre-louis.bossart@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-07-31ASoC: SOF: imx: remove error checks on NULL ipcPierre-Louis Bossart
imx_dsp_get_data() can return NULL, but the value is not checked before being used, leading to static analysis warnings. sound/soc/sof/imx/imx8.c:85:32: error: dereference of NULL ‘0’ [CWE-476] [-Werror=analyzer-null-dereference] 85 | spin_lock_irqsave(&priv->sdev->ipc_lock, flags); | ~~~~^~~~~~ It appears this is not really a possible problem, so remove those checks. Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Reviewed-by: Rander Wang <rander.wang@intel.com> Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com> Reviewed-by: Yaochun Hung <yc.hung@mediatek.com> Link: https://lore.kernel.org/r/20230731213748.440285-5-pierre-louis.bossart@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-07-31tcx: Fix splat during dev unregisterMartin KaFai Lau
During unregister_netdevice_many_notify(), the ordering of our concerned function calls is like this:

    unregister_netdevice_many_notify
    dev_shutdown
    qdisc_put
    clsact_destroy
    tcx_uninstall

The syzbot reproducer triggered a case where the qdisc refcnt is not zero during dev_shutdown(). tcx_uninstall() will then WARN_ON_ONCE(tcx_entry(entry)->miniq_active) because the miniq is still active and the entry should not be freed. The latter assumed that qdisc destruction happens before tcx teardown. This fix is to avoid tcx_uninstall() doing tcx_entry_free() when the miniq is still alive and let the clsact_destroy() do the free later, so that we do not assume any specific ordering for either of them. If still active, tcx_uninstall() does clear the entry when flushing out the prog/link. clsact_destroy() will then notice the "!tcx_entry_is_active()" and then does the tcx_entry_free() eventually. Fixes: e420bed02507 ("bpf: Add fd-based tcx multi-prog infra with link support") Reported-by: syzbot+376a289e86a0fd02b9ba@syzkaller.appspotmail.com Reported-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Co-developed-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: syzbot+376a289e86a0fd02b9ba@syzkaller.appspotmail.com Tested-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/222255fe07cb58f15ee662e7ee78328af5b438e4.1690549248.git.daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-07-31fbdev: Align deferred I/O with naming of helpersThomas Zimmermann
Deferred-I/O generator macros generate callbacks for struct fb_ops that operate on memory ranges in I/O address space or system address space. Rename the macros to use the _IOMEM_ and _SYSMEM_ infixes of their underlying helpers. Adapt all users. No functional changes. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Reviewed-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: Helge Deller <deller@gmx.de> Link: https://patchwork.freedesktop.org/patch/msgid/20230729193157.15446-5-tzimmermann@suse.de
2023-07-31fbdev: Use _DMAMEM_ infix for DMA-memory helpersThomas Zimmermann
Change the infix for fbdev's DMA-memory helpers from _DMA_ to _DMAMEM_. The helpers perform operations within DMA-able memory, but they don't perform DMA operations. Naming should make this clear. Adapt all users. No functional changes. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Reviewed-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: Helge Deller <deller@gmx.de> Link: https://patchwork.freedesktop.org/patch/msgid/20230729193157.15446-4-tzimmermann@suse.de
2023-07-31fbdev: Use _SYSMEM_ infix for system-memory helpersThomas Zimmermann
Change the infix for fbdev's system-memory helpers from _SYS_ to _SYSMEM_. The helpers perform operations within system memory, but not on the state of the operating system itself. Naming should make this clear. Adapt all users. No functional changes. Suggested-by: Helge Deller <deller@gmx.de> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Reviewed-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: Helge Deller <deller@gmx.de> Link: https://patchwork.freedesktop.org/patch/msgid/20230729193157.15446-3-tzimmermann@suse.de
2023-07-31fbdev: Use _IOMEM_ infix for I/O-memory helpersThomas Zimmermann
Change the infix for fbdev's I/O-memory helpers from _IO_ to _IOMEM_ to distinguish them from other types of I/O, such as file operations. The helpers operate on memory ranges in the I/O address space and the naming should make this clear. Adapt all users. No functional changes. Suggested-by: Helge Deller <deller@gmx.de> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Reviewed-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: Helge Deller <deller@gmx.de> Link: https://patchwork.freedesktop.org/patch/msgid/20230729193157.15446-2-tzimmermann@suse.de