path: root/include/linux
2021-03-18  thunderbolt: Add tb_property_copy_dir()  (Mika Westerberg)
This function takes a deep copy of the properties. We need this in order to support more dynamic properties per XDomain connection as required by the USB4 inter-domain service spec. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
2021-03-18  thunderbolt: Do not re-establish XDomain DMA paths automatically  (Mika Westerberg)
This step is actually not needed. The service drivers themselves will handle this once they have negotiated the service up and running again with the remote side. Also dropping this makes it easier to add support for multiple DMA tunnels over a single XDomain connection. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
2021-03-18  Merge tag 'v5.12-rc3' into x86/cleanups, to refresh the tree  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-03-18  regmap-irq: Extend sub-irq to support non-fixed reg strides  (Guru Das Srinagesh)
Qualcomm's MFD chips have a top level interrupt status register and sub-irqs (peripherals). When a bit in the main status register goes high, it means that the peripheral corresponding to that bit has an unserviced interrupt; if the bit is not set, it does not. Commit a2d21848d9211d ("regmap: regmap-irq: Add main status register support") introduced the sub-irq logic that is currently applied only when reading status registers, but not for any other functions like acking or masking. Extend the use of sub-irq to all other functions, with two caveats regarding the specification of offsets:
  - Each member of the sub_reg_offsets array should be of length 1
  - The specified offsets should be the unequal strides for each sub-irq device.
In QCOM's case, all the *_base registers are to be configured to the base addresses of the first sub-irq group, with the offsets of each subsequent group calculated as a difference from these addresses. Continuing from the example mentioned in the cover letter:
  /*
   * Address of MISC_INT_MASK       = 0x1011
   * Address of TEMP_ALARM_INT_MASK = 0x2011
   * Address of GPIO01_INT_MASK     = 0x3011
   *
   * Calculate offsets as:
   * offset_0 = 0x1011 - 0x1011 = 0      (to access MISC's registers)
   * offset_1 = 0x2011 - 0x1011 = 0x1000
   * offset_2 = 0x3011 - 0x1011 = 0x2000
   */
  static unsigned int sub_unit0_offsets[] = {0};
  static unsigned int sub_unit1_offsets[] = {0x1000};
  static unsigned int sub_unit2_offsets[] = {0x2000};

  static struct regmap_irq_sub_irq_map chip_sub_irq_offsets[] = {
          REGMAP_IRQ_MAIN_REG_OFFSET(sub_unit0_offsets),
          REGMAP_IRQ_MAIN_REG_OFFSET(sub_unit1_offsets),
          REGMAP_IRQ_MAIN_REG_OFFSET(sub_unit2_offsets),
  };

  static struct regmap_irq_chip chip_irq_chip = {
  --------8<--------
          .not_fixed_stride = true,
          .mask_base        = MISC_INT_MASK,
          .type_base        = MISC_INT_TYPE,
          .ack_base         = MISC_INT_ACK,
          .sub_reg_offsets  = chip_sub_irq_offsets,
  --------8<--------
  };
Signed-off-by: Guru Das Srinagesh <gurus@codeaurora.org> Link: https://lore.kernel.org/r/526562423eaa58b4075362083f561841f1d6956c.1615423027.git.gurus@codeaurora.org Signed-off-by: Mark Brown <broonie@kernel.org>
2021-03-18  reset: Add reset_control_bulk API  (Philipp Zabel)
Follow the clock and regulator subsystems' lead and add a bulk API for reset controls. Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de> Tested-by: Dmitry Osipenko <digetx@gmail.com> Signed-off-by: Dmitry Osipenko <digetx@gmail.com> Link: https://lore.kernel.org/r/20210314154459.15375-5-digetx@gmail.com Signed-off-by: Mark Brown <broonie@kernel.org>
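A minimal consumer sketch of the bulk reset API described above; the reset line names ("ahb", "axi", "apb") are made up for illustration, and the exact signatures should be checked against include/linux/reset.h:
  #include <linux/device.h>
  #include <linux/kernel.h>
  #include <linux/reset.h>

  static struct reset_control_bulk_data my_resets[] = {
          { .id = "ahb" },
          { .id = "axi" },
          { .id = "apb" },
  };

  static int my_driver_init_resets(struct device *dev)
  {
          int ret;

          /* Look up all reset lines in one call. */
          ret = devm_reset_control_bulk_get_exclusive(dev, ARRAY_SIZE(my_resets), my_resets);
          if (ret)
                  return ret;

          /* Assert and de-assert them as a group, mirroring the clk/regulator bulk APIs. */
          ret = reset_control_bulk_assert(ARRAY_SIZE(my_resets), my_resets);
          if (ret)
                  return ret;

          return reset_control_bulk_deassert(ARRAY_SIZE(my_resets), my_resets);
  }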
2021-03-18  remoteproc: Properly deal with the resource table when detaching  (Mathieu Poirier)
If it is possible to detach the remote processor, keep an untouched copy of the resource table. That way we can start from the same resource table without having to worry about original values or what elements the startup code has changed when re-attaching to the remote processor. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Arnaud Pouliquen <arnaud.pouliquen@st.com> Link: https://lore.kernel.org/r/20210312162453.1234145-12-mathieu.poirier@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-18  remoteproc: Introduce function rproc_detach()  (Mathieu Poirier)
Introduce function rproc_detach() to enable the remoteproc core to release the resources associated with a remote processor without stopping its operation. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Arnaud Pouliquen <arnaud.pouliquen@st.com> Link: https://lore.kernel.org/r/20210312162453.1234145-11-mathieu.poirier@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
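A rough sketch of how a caller might use the new function to leave the remote processor running while the core's resources are released; the helper below is hypothetical and only illustrates the call:
  #include <linux/remoteproc.h>

  /* Hypothetical teardown helper: keep the firmware running, but drop the
   * resources the remoteproc core holds for it. */
  static int my_platform_teardown(struct rproc *rproc)
  {
          int ret;

          ret = rproc_detach(rproc);
          if (ret)
                  pr_warn("rproc detach failed: %d\n", ret);

          return ret;
  }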
2021-03-18  remoteproc: Add new detach() remoteproc operation  (Mathieu Poirier)
Add a new detach() operation in order to support scenarios where the remoteproc core is going away but the remote processor is kept operating. This could be the case when the system is rebooted or when the platform driver is removed. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Peng Fan <peng.fan@nxp.com> Reviewed-by: Arnaud Pouliquen <arnaud.pouliquen@st.com> Link: https://lore.kernel.org/r/20210312162453.1234145-9-mathieu.poirier@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-18  remoteproc: Add new get_loaded_rsc_table() to rproc_ops  (Mathieu Poirier)
Add a new get_loaded_rsc_table() operation in order to support scenarios where the remoteproc core has booted a remote processor and detaches from it. When re-attaching to the remote processor, the core needs to know where the resource table has been placed in memory. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Arnaud Pouliquen <arnaud.pouliquen@st.com> Link: https://lore.kernel.org/r/20210312162453.1234145-6-mathieu.poirier@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-18  remoteproc: Properly represent the attached state  (Mathieu Poirier)
There is a need to know when a remote processor has been attached to rather than booted by the remoteproc core. In order to avoid manipulating two variables, i.e. rproc::autonomous and rproc::state, get rid of the former and simply use the newly introduced RPROC_ATTACHED state. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Peng Fan <peng.fan@nxp.com> Reviewed-by: Arnaud Pouliquen <arnaud.pouliquen@st.com> Link: https://lore.kernel.org/r/20210312162453.1234145-5-mathieu.poirier@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-18  remoteproc: Add new RPROC_ATTACHED state  (Mathieu Poirier)
Add a new RPROC_ATTACHED state to take into account scenarios where the remoteproc core needs to attach to a remote processor that is booted by another entity. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by: Peng Fan <peng.fan@nxp.com> Reviewed-by: Arnaud Pouliquen <arnaud.pouliquen@st.com> Link: https://lore.kernel.org/r/20210312162453.1234145-4-mathieu.poirier@linaro.org Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-18  iommu/vt-d: Report more information about invalidation errors  (Lu Baolu)
When the invalidation queue errors are encountered, dump the information logged by the VT-d hardware together with the pending queue invalidation descriptors. Signed-off-by: Ashok Raj <ashok.raj@intel.com> Tested-by: Guo Kaijie <Kaijie.Guo@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20210318005340.187311-1-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-03-18  iommu/dma: Resurrect the "forcedac" option  (Robin Murphy)
In converting intel-iommu over to the common IOMMU DMA ops, it quietly lost the functionality of its "forcedac" option. Since this is a handy thing both for testing and for performance optimisation on certain platforms, reimplement it under the common IOMMU parameter namespace. For the sake of fixing the inadvertent breakage of the Intel-specific parameter, remove the dmar_forcedac remnants and hook it up as an alias while documenting the transition to the new common parameter. Fixes: c588072bba6b ("iommu/vt-d: Convert intel iommu driver to the iommu ops") Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/7eece8e0ea7bfbe2cd0e30789e0d46df573af9b0.1614961776.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-03-17  bpf: net: Emit anonymous enum with BPF_TCP_CLOSE value explicitly  (Yonghong Song)
The selftest failed to compile with clang-built bpf-next. Adding LLVM=1 to your vmlinux and selftest build will use clang. The error message is:
  progs/test_sk_storage_tracing.c:38:18: error: use of undeclared identifier 'BPF_TCP_CLOSE'
          if (newstate == BPF_TCP_CLOSE)
                          ^
  1 error generated.
  make: *** [Makefile:423: /bpf-next/tools/testing/selftests/bpf/test_sk_storage_tracing.o] Error 1
The reason for the failure is that BPF_TCP_CLOSE, a value of an anonymous enum defined in uapi bpf.h, is not defined in vmlinux.h. gcc does not have this problem. Since vmlinux.h is derived from BTF, which is derived from vmlinux DWARF, that means gcc-produced vmlinux DWARF has BPF_TCP_CLOSE while llvm-produced vmlinux DWARF does not. BPF_TCP_CLOSE is referenced in net/ipv4/tcp.c as
  BUILD_BUG_ON((int)BPF_TCP_CLOSE != (int)TCP_CLOSE);
The following test mimics the above BUILD_BUG_ON, preprocessed with the clang compiler, and shows that gcc DWARF contains BPF_TCP_CLOSE while llvm DWARF does not:
  $ cat t.c
  enum { BPF_TCP_ESTABLISHED = 1, BPF_TCP_CLOSE = 7, };
  enum { TCP_ESTABLISHED = 1, TCP_CLOSE = 7, };
  int test() {
          do {
                  extern void __compiletime_assert_767(void);
                  if ((int)BPF_TCP_CLOSE != (int)TCP_CLOSE)
                          __compiletime_assert_767();
          } while (0);
          return 0;
  }
  $ clang t.c -O2 -c -g && llvm-dwarfdump t.o | grep BPF_TCP_CLOSE
  $ gcc t.c -O2 -c -g && llvm-dwarfdump t.o | grep BPF_TCP_CLOSE
          DW_AT_name      ("BPF_TCP_CLOSE")
Further checking of the clang code finds that clang actually tries to evaluate the condition at compile time. If it is definitely true/false, it will perform the optimization and the whole if condition will be removed before generating IR/debuginfo. This patch explicitly adds an expression after the above mentioned BUILD_BUG_ON in net/ipv4/tcp.c, like (void)BPF_TCP_ESTABLISHED, to enable generation of debuginfo for the anonymous enum, which also includes BPF_TCP_CLOSE. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210317174132.589276-1-yhs@fb.com
2021-03-17  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller)
Daniel Borkmann says: ==================== pull-request: bpf 2021-03-18 The following pull-request contains BPF updates for your *net* tree. We've added 10 non-merge commits during the last 4 day(s) which contain a total of 14 files changed, 336 insertions(+), 94 deletions(-). The main changes are:
  1) Fix fexit/fmod_ret trampoline for sleepable programs, and also fix a ftrace splat in modify_ftrace_direct() on address change, from Alexei Starovoitov.
  2) Fix two oob speculation possibilities that allows unprivileged to leak mem via side-channel, from Piotr Krysiuk and Daniel Borkmann.
  3) Fix libbpf's netlink handling wrt SOCK_CLOEXEC, from Kumar Kartikeya Dwivedi.
  4) Fix libbpf's error handling on failure in getting section names, from Namhyung Kim.
  5) Fix tunnel collect_md BPF selftest wrt Geneve option handling, from Hangbin Liu.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-18  bpf: Fix fexit trampoline.  (Alexei Starovoitov)
The fexit/fmod_ret programs can be attached to kernel functions that can sleep. The synchronize_rcu_tasks() will not wait for such tasks to complete. In such a case the trampoline image will be freed and when the task wakes up the return IP will point to freed memory, causing a crash. Solve this by adding percpu_ref_get/put for the duration of the trampoline and by separating the trampoline's lifetime from that of its image. The "half page" optimization has to be removed, since the first_half->second_half->first_half transition cannot be guaranteed to complete in deterministic time. Every trampoline update becomes a new image. The image with fmod_ret or fexit progs will be freed via percpu_ref_kill and call_rcu_tasks. Together they will wait for the original function and trampoline asm to complete. The trampoline is patched from nop to jmp to skip fexit progs. They are freed independently from the trampoline. The image with fentry progs only will be freed via call_rcu_tasks_trace+call_rcu_tasks which will wait for both sleepable and non-sleepable progs to complete. Fixes: fec56f5890d9 ("bpf: Introduce BPF trampoline") Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Paul E. McKenney <paulmck@kernel.org> # for RCU Link: https://lore.kernel.org/bpf/20210316210007.38949-1-alexei.starovoitov@gmail.com
2021-03-17  net: fix race between napi kthread mode and busy poll  (Wei Wang)
Currently, napi_thread_wait() checks for NAPI_STATE_SCHED bit to determine if the kthread owns this napi and could call napi->poll() on it. However, if socket busy poll is enabled, it is possible that the busy poll thread grabs this SCHED bit (after the previous napi->poll() invokes napi_complete_done() and clears SCHED bit) and tries to poll on the same napi. napi_disable() could grab the SCHED bit as well. This patch tries to fix this race by adding a new bit NAPI_STATE_SCHED_THREADED in napi->state. This bit gets set in ____napi_schedule() if the threaded mode is enabled, and gets cleared in napi_complete_done(), and we only poll the napi in kthread if this bit is set. This helps distinguish the ownership of the napi between kthread and other scenarios and fixes the race issue. Fixes: 29863d41bb6e ("net: implement threaded-able napi poll loop support") Reported-by: Martin Zaharinov <micron10@gmail.com> Suggested-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Wei Wang <weiwan@google.com> Cc: Alexander Duyck <alexanderduyck@fb.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-17  usb-storage: Add quirk to defeat Kindle's automatic unload  (Alan Stern)
Matthias reports that the Amazon Kindle automatically removes its emulated media if it doesn't receive another SCSI command within about one second after a SYNCHRONIZE CACHE. It does so even when the host has sent a PREVENT MEDIUM REMOVAL command. The reason for this behavior isn't clear, although it's not hard to make some guesses. At any rate, the results can be unexpected for anyone who tries to access the Kindle in an unusual fashion, and in theory they can lead to data loss (for example, if one file is closed and synchronized while other files are still in the middle of being written). To avoid such problems, this patch creates a new usb-storage quirks flag telling the driver always to issue a REQUEST SENSE following a SYNCHRONIZE CACHE command, and adds an unusual_devs entry for the Kindle with the flag set. This is sufficient to prevent the Kindle from doing its automatic unload, without interfering with proper operation. Another possible way to deal with this would be to increase the frequency of TEST UNIT READY polling that the kernel normally carries out for removable-media storage devices. However that would increase the overall load on the system and it is not as reliable, because the user can override the polling interval. Changing the driver's behavior is safer and has minimal overhead. CC: <stable@vger.kernel.org> Reported-and-tested-by: Matthias Schwarzott <zzam@gentoo.org> Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Link: https://lore.kernel.org/r/20210317190654.GA497856@rowland.harvard.edu Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-17  module: remove never implemented MODULE_SUPPORTED_DEVICE  (Leon Romanovsky)
MODULE_SUPPORTED_DEVICE was added in pre-git era and never was implemented. We can safely remove it, because the kernel has grown to have many more reliable mechanisms to determine if device is supported or not. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-03-17  rpmsg: Move RPMSG_ADDR_ANY in user API  (Arnaud Pouliquen)
As RPMSG_ADDR_ANY is a valid src or dst address that can be set by user applications, migrate its definition to the user API. Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com> Link: https://lore.kernel.org/r/20210311140413.31725-3-arnaud.pouliquen@foss.st.com Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2021-03-17  ethtool: Add common function for filling out strings  (Alexander Duyck)
Add a function to handle the common pattern of printing a string into the ethtool strings interface and incrementing the string pointer by the ETH_GSTRING_LEN. Most of the drivers end up doing this and several have implemented their own versions of this function so it would make sense to consolidate on one implementation. Signed-off-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
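For reference, a hedged sketch of the kind of get_strings handler this enables, assuming the helper added here is ethtool_sprintf(), which formats into the strings buffer and advances it by ETH_GSTRING_LEN; the queue count and stat names below are made up:
  #include <linux/ethtool.h>
  #include <linux/netdevice.h>

  #define MY_NUM_QUEUES 4

  static void my_get_strings(struct net_device *dev, u32 stringset, u8 *data)
  {
          int i;

          if (stringset != ETH_SS_STATS)
                  return;

          for (i = 0; i < MY_NUM_QUEUES; i++) {
                  /* Each call writes one ETH_GSTRING_LEN-sized slot and moves data forward. */
                  ethtool_sprintf(&data, "rx%d_packets", i);
                  ethtool_sprintf(&data, "rx%d_bytes", i);
          }
  }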
2021-03-17  Merge tag 'mlx5-updates-2021-03-16' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  (David S. Miller)
Saeed Mahameed says: ==================== mlx5-updates-2021-03-16 mlx5 uplink representor netdev persistence. Before this patchset we used to have separate netdevs for Native NIC mode and Switchdev mode (uplink representor netdev), meaning that if the user switched modes between Native and Switchdev and vice versa, the driver would clean up the current netdev representor and create a new one for the new mode. Such behavior created an administrative nightmare for users, who need to be aware of the loss of both data path and control path configurations, e.g. netdev attributes and arp/route tables, where the latter is more painful. A simple solution for this is not to replace the netdev in the first place and to use a single netdev to serve the uplink/physical port whether it is in switchdev mode or native mode. We already have different HW profiles for each netdev mode; in this series we just replace the HW profile on the fly and keep the same netdev attached.
Refactoring: some refactoring has been made to overcome technical difficulties:
  1) The netdev is created with the maximum amount of tx/rx queues to serve the two profiles.
  2) Some ndos are not supported in some modes, so we added a mode check for such cases, e.g. legacy sriov ndos must be blocked in switchdev mode.
  3) Some mlx5 netdev private attributes need to be moved out of profiles and kept in a persistent place where the netdev is created, e.g. devlink port and other global HW resources.
  4) The netdev devlink port is now always registered with the switch id.
Implementation: the last three patches implement the mechanism now that the netdev can be shared:
  5) Don't recreate the netdev on switchdev mode changes.
  6) Prevent changing switchdev mode when some netdev operations are active, mostly when TC rules are being processed. This is required since the netdev is kept registered while switchdev mode can be changed.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-17  rcu: Prevent false positive softirq warning on RT  (Thomas Gleixner)
Soft interrupt disabled sections can legitimately be preempted or schedule out when blocking on a lock on RT enabled kernels so the RCU preempt check warning has to be disabled for RT kernels. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Frederic Weisbecker <frederic@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309085727.626304079@linutronix.de
2021-03-17  tick/sched: Prevent false positive softirq pending warnings on RT  (Thomas Gleixner)
On RT a task which has soft interrupts disabled can block on a lock and schedule out to idle while soft interrupts are pending. This triggers the warning in the NOHZ idle code which complains about going idle with pending soft interrupts. But as the task is blocked soft interrupt processing is temporarily blocked as well which means that such a warning is a false positive. To prevent that check the per CPU state which indicates that a scheduled out task has soft interrupts disabled. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Frederic Weisbecker <frederic@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309085727.527563866@linutronix.de
2021-03-17  softirq: Make softirq control and processing RT aware  (Thomas Gleixner)
Provide a local lock based serialization for soft interrupts on RT which allows the local_bh_disabled() sections and servicing soft interrupts to be preemptible. Provide the necessary inline helpers which allow to reuse the bulk of the softirq processing code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Frederic Weisbecker <frederic@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309085727.426370483@linutronix.de
2021-03-17  softirq: Add RT specific softirq accounting  (Thomas Gleixner)
RT requires the softirq processing and local bottom-half disabled regions to be preemptible. Using the normal preempt count based serialization is therefore not possible because this implicitly disables preemption. RT kernels use a per CPU local lock to serialize bottom halves. As local_bh_disable() can nest, the lock can only be acquired on the outermost invocation of local_bh_disable() and released when the nest count becomes zero. Tasks which hold the local lock can be preempted, so it's required to keep track of the nest count per task. Add an RT only counter to task struct and adjust the relevant macros in preempt.h. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Frederic Weisbecker <frederic@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309085726.983627589@linutronix.de
2021-03-17  tasklets: Switch tasklet_disable() to the sleep wait variant  (Thomas Gleixner)
-- NOT FOR IMMEDIATE MERGING -- Now that all users of tasklet_disable() are invoked from sleepable context, convert it to use tasklet_unlock_wait() which might sleep. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309084242.726452321@linutronix.de
2021-03-17  tasklets: Prevent tasklet_unlock_spin_wait() deadlock on RT  (Thomas Gleixner)
tasklet_unlock_spin_wait() spin waits for the TASKLET_STATE_SCHED bit in the tasklet state to be cleared. This works on !RT nicely because the corresponding execution can only happen on a different CPU. On RT softirq processing is preemptible, therefore a task preempting the softirq processing thread can spin forever. Prevent this by invoking local_bh_disable()/enable() inside the loop. In case that the softirq processing thread was preempted by the current task, current will block on the local lock which yields the CPU to the preempted softirq processing thread. If the tasklet is processed on a different CPU then the local_bh_disable()/enable() pair is just a waste of processor cycles. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309084241.988908275@linutronix.de
2021-03-17  tasklets: Replace spin wait in tasklet_unlock_wait()  (Peter Zijlstra)
tasklet_unlock_wait() spin waits for TASKLET_STATE_RUN to be cleared. This is wasting CPU cycles in a tight loop which is especially painful in a guest when the CPU running the tasklet is scheduled out. tasklet_unlock_wait() is invoked from tasklet_kill() which is used in teardown paths and not performance critical at all. Replace the spin wait with wait_var_event(). There are no users of tasklet_unlock_wait() which are invoked from atomic contexts. The usage in tasklet_disable() has been replaced temporarily with the spin waiting variant until the atomic users are fixed up and will be converted to the sleep wait variant later. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309084241.783936921@linutronix.de
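The resulting wait/wake pairing presumably looks roughly like the following; this is a sketch of the pattern described above, not the exact diff:
  /* Waiter side: sleep until TASKLET_STATE_RUN is cleared. */
  void tasklet_unlock_wait(struct tasklet_struct *t)
  {
          wait_var_event(&t->state, !test_bit(TASKLET_STATE_RUN, &t->state));
  }

  /* Unlock side: clear the RUN bit and wake anyone sleeping on the state word. */
  void tasklet_unlock(struct tasklet_struct *t)
  {
          smp_mb__before_atomic();
          clear_bit(TASKLET_STATE_RUN, &t->state);
          smp_mb__after_atomic();
          wake_up_var(&t->state);
  }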
2021-03-17  tasklets: Use spin wait in tasklet_disable() temporarily  (Thomas Gleixner)
To ease the transition use spin waiting in tasklet_disable() until all usage sites from atomic context have been cleaned up. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309084241.685352806@linutronix.de
2021-03-17  tasklets: Provide tasklet_disable_in_atomic()  (Thomas Gleixner)
Replacing the spin wait loops in tasklet_unlock_wait() with wait_var_event() is not possible as a handful of tasklet_disable() invocations are happening in atomic context. All other invocations are in teardown paths which can sleep. Provide tasklet_disable_in_atomic() and tasklet_unlock_spin_wait() to convert the few atomic use cases over, which allows changing tasklet_disable() and tasklet_unlock_wait() in a later step. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309084241.563164193@linutronix.de
2021-03-17  tasklets: Use static inlines for stub implementations  (Thomas Gleixner)
Inlines exist for a reason. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309084241.407702697@linutronix.de
2021-03-17  tasklets: Replace barrier() with cpu_relax() in tasklet_unlock_wait()  (Thomas Gleixner)
A barrier() in a tight loop which waits for something to happen on a remote CPU is a pointless exercise. Replace it with cpu_relax() which allows HT siblings to make progress. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210309084241.249343366@linutronix.de
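Roughly the shape of the change, as a sketch of the busy-wait loop described above:
  /* Before: barrier() only prevents compiler reordering; the loop spins blindly. */
  while (test_bit(TASKLET_STATE_RUN, &t->state))
          barrier();

  /* After: cpu_relax() hints the core that this is a busy-wait, letting an HT sibling progress. */
  while (test_bit(TASKLET_STATE_RUN, &t->state))
          cpu_relax();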
2021-03-17  quota: wire up quotactl_path  (Sascha Hauer)
Wire up the quotactl_path syscall added in the previous patch. Link: https://lore.kernel.org/r/20210304123541.30749-3-s.hauer@pengutronix.de Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jan Kara <jack@suse.cz>
2021-03-17  locking/ww_mutex: Fix acquire/release imbalance in ww_acquire_init()/ww_acquire_fini()  (Waiman Long)
In ww_acquire_init(), mutex_acquire() is gated by CONFIG_DEBUG_LOCK_ALLOC. The dep_map in the ww_acquire_ctx structure is also gated by the same config. However mutex_release() in ww_acquire_fini() is gated by CONFIG_DEBUG_MUTEXES. It is possible to set CONFIG_DEBUG_MUTEXES without setting CONFIG_DEBUG_LOCK_ALLOC though it is an unlikely configuration. That may cause a compilation error as dep_map isn't defined in this case. Fix this potential problem by enclosing mutex_release() inside CONFIG_DEBUG_LOCK_ALLOC. Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210316153119.13802-3-longman@redhat.com
2021-03-17  phy: Add media type and speed serdes configuration interfaces  (Steen Hegelund)
Provide new phy configuration interfaces for media type and speed that allows e.g. PHYs used for ethernet to be configured with this information. Signed-off-by: Lars Povlsen <lars.povlsen@microchip.com> Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Acked-By: Kishon Vijay Abraham I <kishon@ti.com> Link: https://lore.kernel.org/r/20210218161451.3489955-3-steen.hegelund@microchip.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
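If the new interfaces follow the existing generic PHY set-mode pattern, consumer code might look roughly like this; the function names, the PHY_MEDIA_SR value, and the speed constant are assumptions based on the description above and should be checked against include/linux/phy/phy.h:
  #include <linux/ethtool.h>
  #include <linux/phy/phy.h>

  static int my_serdes_config(struct phy *serdes)
  {
          int ret;

          /* Tell the serdes PHY which media it drives and at which speed. */
          ret = phy_set_media(serdes, PHY_MEDIA_SR);
          if (ret)
                  return ret;

          return phy_set_speed(serdes, SPEED_10000);
  }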
2021-03-17  scsi: storvsc: Enable scatterlist entry lengths > 4Kbytes  (Michael Kelley)
storvsc currently sets .dma_boundary to limit scatterlist entries to 4 Kbytes, which is less efficient with huge pages that offer large chunks of contiguous physical memory. Improve the algorithm for creating the Hyper-V guest physical address PFN array so that scatterlist entries with lengths > 4Kbytes are handled. As a result, remove the .dma_boundary setting. The improved algorithm also adds support for scatterlist entries with offsets >= 4Kbytes, which is supported by many other SCSI low-level drivers. And it retains support for architectures where possibly PAGE_SIZE != HV_HYP_PAGE_SIZE (such as ARM64). Link: https://lore.kernel.org/r/1614120294-1930-1-git-send-email-mikelley@microsoft.com Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2021-03-17  swiotlb: split swiotlb_tbl_sync_single  (Christoph Hellwig)
Split swiotlb_tbl_sync_single into two separate funtions for the to device and to cpu synchronization. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
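The split presumably yields one helper per sync direction, along these lines; the prototypes are an assumption based on the description and should be checked against include/linux/swiotlb.h:
  /* Sync a bounce-buffer slot before handing it to the device. */
  void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
                                      size_t size, enum dma_data_direction dir);

  /* Sync a bounce-buffer slot back for CPU access after the device is done. */
  void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
                                   size_t size, enum dma_data_direction dir);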
2021-03-17  swiotlb: remove the alloc_size parameter to swiotlb_tbl_unmap_single  (Christoph Hellwig)
Now that swiotlb remembers the allocation size there is no need to pass it back to swiotlb_tbl_unmap_single. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2021-03-16  net/mlx5e: Do not reload ethernet ports when changing eswitch mode  (Roi Dayan)
When switching modes between legacy and switchdev and back, do not reload ethernet interfaces; just change the profile from the nic profile to the uplink rep profile in switchdev mode. Signed-off-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-16  net/mlx5: Move devlink port from mlx5e priv to mlx5e resources  (Roi Dayan)
We re-use the native NIC port net device instance for the Uplink representor, and the devlink port. When changing profiles we reset the mlx5e priv but we should still use the devlink port so move it to mlx5e resources. Signed-off-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-16  net/mlx5: Move mlx5e hw resources into a sub object  (Roi Dayan)
This is to separate between resources attributes and other attributes we will want to use. Signed-off-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-03-16  net: Change dev parameter to const in netif_device_present()  (Roi Dayan)
Not all ndos check the present bit before calling the ndo, and the driver may want to check it. Sometimes the dev parameter is passed as const, so we would need to pass it to netif_device_present() as const. Since netif_device_present() doesn't modify the dev parameter anyway, declare it as const. Signed-off-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
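A sketch of the constified helper; the body shown reflects the long-standing implementation of this inline, and the change works because test_bit() already accepts a const pointer:
  static inline bool netif_device_present(const struct net_device *dev)
  {
          return test_bit(__LINK_STATE_PRESENT, &dev->state);
  }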
2021-03-16  Revert "net: socket: use BIT() for MSG_*"  (David S. Miller)
This reverts commit 0bb3262c0248d44aea3be31076f44beb82a7b120. Breaks things on mips64/qemu Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-16  net: ocelot: Remove ocelot_xfh_get_cpuq  (Horatiu Vultur)
Now that the cpuq is no longer used when extracting frames from the CPU, remove it. Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-16  net/sched: act_api: fix miss set post_ct for ovs after do conntrack in act_ct  (wenxu)
When openvswitch offloads conntrack with the act_ct action, the first rule does conntrack in act_ct in the tc subsystem. The next rule then misses in tc and falls back to the ovs datapath, but the post_ct flag is not set, which leads to a ct_state key with the -trk flag. Fixes: 7baf2429a1a9 ("net/sched: cls_flower add CT_FLAGS_INVALID flag support") Signed-off-by: wenxu <wenxu@ucloud.cn> Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-03-16  x86: Introduce restart_block->arch_data to remove TS_COMPAT_RESTART  (Oleg Nesterov)
Save the current_thread_info()->status of X86 in the new restart_block->arch_data field so TS_COMPAT_RESTART can be removed again. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20210201174716.GA17898@redhat.com
2021-03-16  kernel, fs: Introduce and use set_restart_fn() and arch_set_restart_data()  (Oleg Nesterov)
Preparation for fixing get_nr_restart_syscall() on X86 for COMPAT. Add a new helper which sets restart_block->fn and calls a dummy arch_set_restart_data() helper. Fixes: 609c19a385c8 ("x86/ptrace: Stop setting TS_COMPAT in ptrace code") Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210201174641.GA17871@redhat.com
2021-03-16  Merge series "spi: Adding support for software nodes" from Heikki Krogerus <heikki.krogerus@linux.intel.com>  (Mark Brown)
Hi, The older API used to supply additional device properties for the devices - so mainly the function device_add_properties() - is going to be removed. The reason why the API will be removed is that it gives a false impression that the properties are assigned directly to the devices, which has actually never been the case - the properties have always been assigned to a software fwnode which was then just directly linked with the device when the old API was used. By only accepting device properties instead of complete software nodes, the subsystems remove any chance of taking advantage of the other features the software nodes have. The change that is required from the spi subsystem and the drivers is trivial. Basically only the "properties" member in struct spi_board_info, which was a pointer to struct property_entry, is replaced with a pointer to a complete software node. thanks,
Heikki Krogerus (4):
  spi: Add support for software nodes
  ARM: pxa: icontrol: Constify the software node
  ARM: pxa: zeus: Constify the software node
  spi: Remove support for dangling device properties

 arch/arm/mach-pxa/icontrol.c | 12 ++++++++----
 arch/arm/mach-pxa/zeus.c     |  6 +++++-
 drivers/spi/spi.c            | 21 ++++++---------------
 include/linux/spi/spi.h      |  7 +++----
 4 files changed, 22 insertions(+), 24 deletions(-)

--
2.30.1

base-commit: a38fd8748464831584a19438cbb3082b5a2dab15
2021-03-17  PCI: Add pci_find_vsec_capability() to find a specific VSEC  (Gustavo Pimentel)
Add pci_find_vsec_capability() to locate a Vendor-Specific Extended Capability with the specified VSEC ID. The Vendor-Specific Extended Capability (VSEC) allows one or more proprietary capabilities defined by the vendor which aren't standard or shared between vendors. Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Link: https://lore.kernel.org/r/d89506834fb11c6fa0bd5d515c0dd55b13ac6958.1613674948.git.gustavo.pimentel@synopsys.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
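A hedged usage sketch of the lookup described above; the vendor constant and VSEC ID below are illustrative only, and the return value is assumed to be the capability's config-space offset (0 if not found):
  #include <linux/pci.h>
  #include <linux/pci_ids.h>

  /* MY_VSEC_ID is a made-up vendor-defined capability ID for illustration. */
  #define MY_VSEC_ID 0x02

  static int my_locate_vsec(struct pci_dev *pdev)
  {
          u16 vsec;

          /* Walk the extended capability list for a VSEC owned by this vendor. */
          vsec = pci_find_vsec_capability(pdev, PCI_VENDOR_ID_SYNOPSYS, MY_VSEC_ID);
          if (!vsec)
                  return -ENODEV;

          /* vsec now holds the config-space offset of the capability header. */
          return 0;
  }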