Age  Commit message  Author
2023-10-24  drivers/perf: hisi_pcie: Check the type first in pmu::event_init()  (Yicong Yang)
Check whether the event type matches the PMU type first in pmu::event_init() before touching the event. Otherwise we may modify events that belong to other PMUs and produce incorrect results. Since perf_init_event() may call every PMU's event_init() in certain cases, we must not modify the event if it is not ours. Fixes: 8404b0fbc7fb ("drivers/perf: hisi: Add driver for HiSilicon PCIe PMU") Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Link: https://lore.kernel.org/r/20231024092954.42297-2-yangyicong@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
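The ordering this fix enforces is a common PMU-driver pattern. Below is a minimal sketch assuming a generic uncore-style driver; everything except the core perf_event fields is illustrative and not the actual hisi_pcie code:

    #include <linux/errno.h>
    #include <linux/perf_event.h>

    /* Reject events that belong to another PMU before touching them:
     * perf_init_event() may offer the same event to several PMUs'
     * event_init() callbacks in turn.
     */
    static int example_pmu_event_init(struct perf_event *event)
    {
            if (event->attr.type != event->pmu->type)
                    return -ENOENT;         /* not ours: leave the event untouched */

            if (event->attr.sample_period)  /* illustrative driver restriction */
                    return -EOPNOTSUPP;

            event->cpu = 0;                 /* only now is it safe to modify the event */
            return 0;
    }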
2023-10-24  netfilter: nf_tables: expose opaque set element as struct nft_elem_priv  (Pablo Neira Ayuso)
Add a placeholder structure and place it at the beginning of each struct nft_*_elem for each existing set backend, instead of exposing elements as void type to the frontend, which defeats compiler type checks. Use a pointer to this new type to replace void *. This patch updates the following set backend API to use this new struct nft_elem_priv placeholder structure:
- update
- deactivate
- flush
- get
as well as the following helper functions:
- nft_set_elem_ext()
- nft_set_elem_init()
- nft_set_elem_destroy()
- nf_tables_set_elem_destroy()
This patch adds nft_elem_priv_cast() to cast struct nft_elem_priv to the native element representation of the corresponding set backend. BUILD_BUG_ON() makes sure this .priv placeholder is always at the top of the opaque set element representation. Suggested-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
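A rough illustration of the placeholder-and-cast idea follows; the backend element type, its fields and the helper name are made up for the example (only struct nft_elem_priv itself and the BUILD_BUG_ON idea come from the description above):

    #include <linux/build_bug.h>
    #include <linux/container_of.h>
    #include <linux/stddef.h>

    struct nft_elem_priv { };               /* opaque placeholder, empty on purpose */

    struct example_backend_elem {
            struct nft_elem_priv priv;      /* must be the first member */
            unsigned int key;
            /* ... backend-specific data ... */
    };

    /* Sketch of what a nft_elem_priv_cast()-style helper does: recover
     * the backend's native element from the opaque frontend pointer.
     */
    static inline struct example_backend_elem *
    example_elem_cast(const struct nft_elem_priv *priv)
    {
            BUILD_BUG_ON(offsetof(struct example_backend_elem, priv) != 0);
            return container_of(priv, struct example_backend_elem, priv);
    }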
2023-10-24  netfilter: nf_tables: set backend .flush always succeeds  (Pablo Neira Ayuso)
.flush is always successful since it just iterates over the set elements to mark each element as inactive in the next generation. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nft_set_pipapo: no need to call pipapo_deactivate() from flush  (Pablo Neira Ayuso)
Use the element object that is already offered instead. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nf_tables: Carry reset boolean in nft_obj_dump_ctx  (Phil Sutter)
Relieve the dump callback from having to inspect nlmsg_type upon each call, just do it once at start of the dump. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nf_tables: nft_obj_filter fits into cb->ctx  (Phil Sutter)
No need to allocate it if one may just use struct netlink_callback's scratch area for it. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nf_tables: Carry s_idx in nft_obj_dump_ctx  (Phil Sutter)
Prep work for moving the context into struct netlink_callback scratch area. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nf_tables: A better name for nft_obj_filter  (Phil Sutter)
Name it for what it is supposed to become, a real nft_obj_dump_ctx. No functional change intended. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nf_tables: Unconditionally allocate nft_obj_filter  (Phil Sutter)
Prep work for moving the filter into struct netlink_callback's scratch area. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nf_tables: Drop pointless memset in nf_tables_dump_obj  (Phil Sutter)
The code does not make use of cb->args fields past the first one, no need to zero them. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: conntrack: switch connlabels to atomic_t  (Florian Westphal)
The spinlock dates back to the days when connlabels did not have a fixed size and reallocation had to be supported. Remove it. This change also allows the helpers to be called from softirq or timer context without deadlocks. Also add WARN()s to catch refcounting imbalances. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
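The general shape of that change, as a sketch with illustrative names rather than the real conntrack fields: a plain atomic_t use counter replaces the spinlock-protected one, and the put side warns on underflow:

    #include <linux/atomic.h>
    #include <linux/bug.h>

    static atomic_t example_labels_used = ATOMIC_INIT(0);

    static void example_labels_get(void)
    {
            atomic_inc(&example_labels_used);       /* no lock needed, softirq-safe */
    }

    static void example_labels_put(void)
    {
            /* a negative result means a put without a matching get */
            WARN_ON_ONCE(atomic_dec_return(&example_labels_used) < 0);
    }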
2023-10-24  br_netfilter: use single forward hook for ip and arp  (Florian Westphal)
br_netfilter registers two forward hooks, one for ip and one for arp. Just use a common function for both and then call the arp/ip helper as needed. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
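The dispatch described above amounts to one hook function that fans out on skb->protocol; a sketch follows (the two helpers are stubs standing in for the existing ip/ipv6 and arp paths, not the bridge code's real function names):

    #include <linux/if_ether.h>
    #include <linux/netfilter.h>
    #include <linux/skbuff.h>

    /* Stubs standing in for the existing ip/ipv6 and arp handling. */
    static unsigned int example_forward_ip(void *priv, struct sk_buff *skb,
                                           const struct nf_hook_state *state)
    {
            return NF_ACCEPT;
    }

    static unsigned int example_forward_arp(void *priv, struct sk_buff *skb,
                                            const struct nf_hook_state *state)
    {
            return NF_ACCEPT;
    }

    /* One NF_BR_FORWARD hook for both protocols (sketch). */
    static unsigned int example_forward(void *priv, struct sk_buff *skb,
                                        const struct nf_hook_state *state)
    {
            switch (skb->protocol) {
            case htons(ETH_P_IP):
            case htons(ETH_P_IPV6):
                    return example_forward_ip(priv, skb, state);
            case htons(ETH_P_ARP):
                    return example_forward_arp(priv, skb, state);
            default:
                    return NF_ACCEPT;
            }
    }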
2023-10-24  netfilter: nf_tables: Add locking for NFT_MSG_GETRULE_RESET requests  (Phil Sutter)
Rule reset is not concurrency-safe per se, so multiple CPUs may reset the same rule at the same time. At least counter and quota expressions will suffer from value underruns in this case. Prevent this by introducing dedicated locking callbacks for nfnetlink and the asynchronous dump handling to serialize access. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nf_tables: Introduce nf_tables_getrule_single()  (Phil Sutter)
Outsource the reply skb preparation for non-dump getrule requests into a distinct function. Prep work for rule reset locking. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nf_tables: Open-code audit log call in nf_tables_getrule()  (Phil Sutter)
The table lookup will be dropped from that function, so remove that dependency from the audit logging code. Using whatever is in nla[NFTA_RULE_TABLE] is sufficient as long as the previous rule info filling succeeded. Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nft_set_rbtree: prefer sync gc to async worker  (Florian Westphal)
There is no need for asynchronous garbage collection, rbtree inserts can only happen from the netlink control plane. We already perform on-demand gc on insertion, in the area of the tree where the insertion takes place, but we don't do a full tree walk there for performance reasons. Do a full gc walk at the end of the transaction instead and remove the async worker. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  netfilter: nft_set_rbtree: rename gc deactivate+erase function  (Florian Westphal)
The next patch adds a caller that doesn't hold the priv->write lock and will need a similar function. Rename the existing function to make it clear that it can only be used for opportunistic gc during insertion. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-10-24  net: sched: sch_qfq: Use non-work-conserving warning handler  (Liu Jian)
A helper function for printing non-work-conserving alarms was added in commit b00355db3f88 ("pkt_sched: sch_hfsc: sch_htb: Add non-work-conserving warning handler."). In this commit, use qdisc_warn_nonwc() instead of WARN_ONCE() to handle the non-work-conserving warning in the qfq Qdisc. Signed-off-by: Liu Jian <liujian56@huawei.com> Link: https://lore.kernel.org/r/20231023064729.370649-1-liujian56@huawei.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
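For reference, qdisc_warn_nonwc() takes the caller's name and the Qdisc and warns once per qdisc instance; the replacement is essentially a call of this shape (a sketch, not the exact hunk from the patch):

    #include <net/pkt_sched.h>

    /* somewhere in qfq's dequeue path, where the non-work-conserving
     * condition used to trigger WARN_ONCE() (sketch)
     */
    static void example_report_nonwc(struct Qdisc *sch)
    {
            qdisc_warn_nonwc("qfq_dequeue", sch);
    }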
2023-10-24  pmdomain: qcom: rpmpd: Add QM215 power domains  (Otto Pflüger)
QM215 is typically paired with a PM8916 PMIC and uses its SMPA1 and LDOA2 regulators in voltage level mode for VDDCX and VDDMX, respectively. Signed-off-by: Otto Pflüger <otto.pflueger@abscue.de> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20231014133823.14088-4-otto.pflueger@abscue.de Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2023-10-24  pmdomain: qcom: rpmpd: Add MSM8917 power domains  (Otto Pflüger)
MSM8917 uses the SMPA2 and LDOA3 regulators provided by the PM8937 PMIC for the VDDCX and VDDMX power domains in voltage level mode, respectively. These definitions should also work on MSM8937. Signed-off-by: Otto Pflüger <otto.pflueger@abscue.de> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20231014133823.14088-3-otto.pflueger@abscue.de Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2023-10-24  pmdomain: Merge branch genpd_dt into next  (Ulf Hansson)
Merge the immutable branch genpd_dt into next, to allow the DT bindings to be tested together with new pmdomain changes that are targeted for v6.7. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2023-10-24  dt-bindings: power: rpmpd: Add MSM8917, MSM8937 and QM215  (Otto Pflüger)
The MSM8917, MSM8937 and QM215 SoCs have VDDCX and VDDMX power domains controlled in voltage level mode. Define the MSM8937 and QM215 power domains as aliases because these SoCs are similar to MSM8917 and may share some parts of the device tree. Also add the compatibles for these SoCs to the documentation, with qcom,msm8937-rpmpd using qcom,msm8917-rpmpd as a fallback compatible because there are no known differences. QM215 is not compatible with these because it uses different regulators. Signed-off-by: Otto Pflüger <otto.pflueger@abscue.de> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20231014133823.14088-2-otto.pflueger@abscue.de Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2023-10-24  pmdomain: bcm: bcm2835-power: check if the ASB register is equal to enable  (Maíra Canal)
The commit c494a447c14e ("soc: bcm: bcm2835-power: Refactor ASB control") refactored the ASB control by using a general function to handle both the enable and disable. But this patch introduced a subtle regression: we need to check if !!(readl(base + reg) & ASB_ACK) == enable, not just check if (readl(base + reg) & ASB_ACK) == true. Currently, this is causing an invalid register state in V3D when unloading and loading the driver, because `bcm2835_asb_disable()` will return -ETIMEDOUT and `bcm2835_asb_power_off()` will fail to disable the ASB slave for V3D. Fixes: c494a447c14e ("soc: bcm: bcm2835-power: Refactor ASB control") Signed-off-by: Maíra Canal <mcanal@igalia.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Reviewed-by: Stefan Wahren <stefan.wahren@i2se.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20231024101251.6357-2-mcanal@igalia.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
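The fix boils down to comparing the acknowledged state with the requested state instead of with 'true'. A sketch of that condition using a generic poll helper (register layout, bit position and timeout values are illustrative, not the actual bcm2835-power code):

    #include <linux/bits.h>
    #include <linux/io.h>
    #include <linux/iopoll.h>

    #define EXAMPLE_ASB_ACK BIT(1)          /* illustrative bit position */

    /* Wait until the ASB ack bit matches the requested state. */
    static int example_asb_control(void __iomem *base, u32 reg, bool enable)
    {
            u32 val;

            /* note: !!(val & ACK) == enable, not (val & ACK) == true */
            return readl_poll_timeout(base + reg, val,
                                      !!(val & EXAMPLE_ASB_ACK) == enable,
                                      1, 1000);
    }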
2023-10-24  perf/core: Fix potential NULL deref  (Peter Zijlstra)
Smatch is awesome. Fixes: 32671e3799ca ("perf: Disallow mis-matched inherited group reads") Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2023-10-24  Merge branch 'gtp-tunnel-driver-fixes'  (Paolo Abeni)
Pablo Neira Ayuso says:
====================
GTP tunnel driver fixes

The following patchset contains two fixes for the GTP tunnel driver:

1) Incorrect GTPA_MAX definition in UAPI headers. This is updating an existing UAPI definition but for a good reason, this is certainly broken. Similar fixes for incorrect _MAX definitions in netlink headers were applied in the past too.

2) Fix GTP driver PMTU with GRO packets, add missing call to skb_gso_validate_network_len() to handle GRO packets.
====================
Link: https://lore.kernel.org/r/20231022202519.659526-1-pablo@netfilter.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-24  gtp: fix fragmentation needed check with gso  (Pablo Neira Ayuso)
Call skb_gso_validate_network_len() to check if packet is over PMTU. Fixes: 459aa660eb1d ("gtp: add initial driver for datapath of GPRS Tunneling Protocol (GTP-U)") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
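The check being added is the usual GSO-aware PMTU test; a sketch of the pattern (the surrounding mtu computation is simplified away):

    #include <linux/skbuff.h>

    /* A GSO skb is longer than the wire MTU, but each resulting segment
     * may still fit; skb_gso_validate_network_len() answers that, so only
     * genuinely oversized traffic triggers "fragmentation needed" handling.
     */
    static bool example_exceeds_pmtu(const struct sk_buff *skb, unsigned int mtu)
    {
            if (skb_is_gso(skb))
                    return !skb_gso_validate_network_len(skb, mtu);

            return skb->len > mtu;
    }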
2023-10-24  gtp: uapi: fix GTPA_MAX  (Pablo Neira Ayuso)
Subtract one from __GTPA_MAX, otherwise GTPA_MAX is off by 2. Fixes: 459aa660eb1d ("gtp: add initial driver for datapath of GPRS Tunneling Protocol (GTP-U)") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
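This is the standard netlink attribute-enum convention; a generic sketch of the pattern (the attribute names below are illustrative, not a verbatim copy of include/uapi/linux/gtp.h):

    /* The public _MAX must name the last valid attribute, i.e. the
     * internal sentinel minus one.  Defining it as __..._MAX + 1 (as the
     * buggy header did) makes it point two slots past the last attribute.
     */
    enum example_attrs {
            EXAMPLE_A_UNSPEC = 0,
            EXAMPLE_A_LINK,
            EXAMPLE_A_VERSION,
            EXAMPLE_A_TID,
            __EXAMPLE_A_MAX,
    };
    #define EXAMPLE_A_MAX (__EXAMPLE_A_MAX - 1)     /* == EXAMPLE_A_TID */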
2023-10-24  xsk: Avoid starving the xsk further down the list  (Albert Huang)
In the previous implementation, when multiple xsk sockets were associated with a single xsk_buff_pool, a situation could arise where the xsk_tx_list maintained data at the front for one xsk socket while starving the xsk sockets at the back of the list. This could result in issues such as the inability to transmit packets, increased latency, and jitter. To address this problem, we introduce a new variable called tx_budget_spent, which limits each xsk to transmit a maximum of MAX_PER_SOCKET_BUDGET tx descriptors. This allocation ensures equitable opportunities for subsequent xsk sockets to send tx descriptors. The value of MAX_PER_SOCKET_BUDGET is set to 32. Signed-off-by: Albert Huang <huangjie.albert@bytedance.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/20231023125732.82261-1-huangjie.albert@bytedance.com
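A much-simplified sketch of the budgeting idea (the structure and list names are stand-ins, not the real xsk_buff_pool layout; the actual patch applies this inside the pool's tx-peek loop):

    #include <linux/list.h>
    #include <linux/types.h>

    #define MAX_PER_SOCKET_BUDGET   32      /* value used by the patch */

    struct example_xsk {
            struct list_head node;
            u32 tx_budget_spent;
    };

    /* Pick the next descriptor-producing socket, skipping any socket that
     * already used its per-pass budget so sockets later in the list are
     * not starved.
     */
    static struct example_xsk *example_pick_tx(struct list_head *tx_list)
    {
            struct example_xsk *xs;

            list_for_each_entry(xs, tx_list, node) {
                    if (xs->tx_budget_spent >= MAX_PER_SOCKET_BUDGET)
                            continue;
                    xs->tx_budget_spent++;
                    return xs;
            }

            /* everyone exhausted the budget: reset and start a new pass */
            list_for_each_entry(xs, tx_list, node)
                    xs->tx_budget_spent = 0;

            return NULL;
    }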
2023-10-24Revert "dt-bindings: usb: Add bindings for multiport properties on DWC3 ↵Greg Kroah-Hartman
controller" This reverts commit eb3f1d9e42b1499152442e97b51bc1bcfee29d71. The patches for the features that these bindings described are still under active development, so odds are these bindings will also have to be changed in the future. As no in-tree code requires these bindings at the moment, revert them. Reported-by: Johan Hovold <johan@kernel.org> Link: https://lore.kernel.org/r/ZTeObdjSSok0tttg@hovoldconsulting.com Cc: Bjorn Andersson <quic_bjorande@quicinc.com> Cc: Krishna Kurapati <quic_kriskura@quicinc.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-24Revert "dt-bindings: usb: qcom,dwc3: Add bindings for SC8280 Multiport"Greg Kroah-Hartman
This reverts commit ca58c4ae75b65e1d78408b134f129ea91e9595b8. The patches for the features that these bindings described are still under active development, so odds are these bindings will also have to be changed in the future. As no in-tree code requires these bindings at the moment, revert them. Reported-by: Johan Hovold <johan@kernel.org> Link: https://lore.kernel.org/r/ZTeObdjSSok0tttg@hovoldconsulting.com Cc: Krishna Kurapati <quic_kriskura@quicinc.com> Cc: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-24  serial: core: Fix runtime PM handling for pending tx  (Tony Lindgren)
Richard reported that a serial port may end up sometimes with tx data pending in the buffer for long periods of time. Turns out we bail out early on any errors from pm_runtime_get(), including -EINPROGRESS. To fix the issue, we need to ignore -EINPROGRESS as we only care about the runtime PM usage count at this point. We check for an active runtime PM state later on for tx. Fixes: 84a9582fd203 ("serial: core: Start managing serial controllers to enable runtime PM") Cc: stable <stable@kernel.org> Reported-by: Richard Purdie <richard.purdie@linuxfoundation.org> Cc: Bruce Ashfield <bruce.ashfield@gmail.com> Cc: Mikko Rapeli <mikko.rapeli@linaro.org> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Randy MacLeod <randy.macleod@windriver.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Tested-by: Richard Purdie <richard.purdie@linuxfoundation.org> Link: https://lore.kernel.org/r/20231023074856.61896-1-tony@atomide.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
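The gist of the fix is how the pm_runtime_get() return value is treated; a minimal sketch under the assumption stated above (only the usage count matters at this point, and -EINPROGRESS is not a failure):

    #include <linux/errno.h>
    #include <linux/pm_runtime.h>

    static int example_tx_runtime_get(struct device *dev)
    {
            int err = pm_runtime_get(dev);  /* async: may return -EINPROGRESS */

            if (err < 0 && err != -EINPROGRESS) {
                    pm_runtime_put_noidle(dev);     /* undo the usage count bump */
                    return err;
            }

            /* the active runtime PM state is checked later, before actual tx */
            return 0;
    }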
2023-10-24  net: ethernet: davinci_emac: Use MAC Address from Device Tree  (Adam Ford)
Currently there is a device tree entry called "local-mac-address" which can be filled by the bootloader or manually set. This is useful when the user does not want to use the MAC address programmed into the SoC. Currently, the davinci_emac reads the MAC from the DT, copies it from pdata->mac_addr to priv->mac_addr, then blindly overwrites it by reading from registers in the SoC, and falls back to a random MAC if it's still not valid. This completely ignores any MAC address in the device tree. In order to use the local-mac-address, check whether the contents of priv->mac_addr are valid before falling back to reading from the SoC when the MAC address is not valid. Signed-off-by: Adam Ford <aford173@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231022151911.4279-1-aford173@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
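The fallback order described above, sketched with the generic etherdevice helpers (the two source-address parameters are illustrative; this is not the driver's actual probe code):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    /* Prefer a valid DT-provided MAC, then the SoC registers, and only
     * then fall back to a random address.
     */
    static void example_set_mac(struct net_device *ndev, const u8 *dt_mac,
                                const u8 *soc_mac)
    {
            if (is_valid_ether_addr(dt_mac))
                    eth_hw_addr_set(ndev, dt_mac);
            else if (is_valid_ether_addr(soc_mac))
                    eth_hw_addr_set(ndev, soc_mac);
            else
                    eth_hw_addr_random(ndev);
    }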
2023-10-24  Documentation: security-bugs.rst: linux-distros relaxed their rules  (Willy Tarreau)
The linux-distros list relaxed their rules to try to adapt better to how the Linux kernel works. Let's update the Coordination part to explain why and when to contact them or not, and how to avoid trouble in the future. Link: https://www.openwall.com/lists/oss-security/2023/09/08/4 Cc: Kees Cook <keescook@chromium.org> Cc: Solar Designer <solar@openwall.com> Cc: Vegard Nossum <vegard.nossum@oracle.com> Acked-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Willy Tarreau <w@1wt.eu> Link: https://lore.kernel.org/r/20231015130959.26242-1-w@1wt.eu Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-24  autofs: fix add autofs_parse_fd()  (Ian Kent)
We are seeing systemd hang on its autofs direct mount at /proc/sys/fs/binfmt_misc. Historically this was due to a mismatch in the communication structure size between a 64 bit kernel and a 32 bit user space and was fixed by making the pipe communication record oriented. During autofs v5 development I decided to stay with the existing usage instead of changing to a packed structure for autofs <=> user space communications, which turned out to be a mistake on my part. Problems arose and they were fixed by allowing for the 64 bit to 32 bit size difference in the automount(8) code. Along the way systemd started to use autofs and eventually encountered this problem too. systemd refused to compensate for the length difference, insisting it be fixed in the kernel. Fortunately Linus implemented the packetized pipe which resolved the problem in a straightforward and simple way. In the autofs mount api conversion series I inadvertently dropped the packetized pipe flag settings when adding the autofs_parse_fd() function. This patch fixes that omission. Fixes: 546694b8f658 ("autofs: add autofs_parse_fd()") Signed-off-by: Ian Kent <raven@themaw.net> Link: https://lore.kernel.org/r/20231023093359.64265-1-raven@themaw.net Tested-by: Anders Roxell <anders.roxell@linaro.org> Cc: Bill O'Donnell <bodonnel@redhat.com> Cc: Christian Brauner <brauner@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Dan Carpenter <dan.carpenter@linaro.org> Cc: Anders Roxell <anders.roxell@linaro.org> Cc: Naresh Kamboju <naresh.kamboju@linaro.org> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Reported-by: Linux Kernel Functional Testing <lkft@linaro.org> Reported-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-24  Fix NULL pointer dereference in cn_filter()  (Anjali Kulkarni)
Check that sk_user_data is not NULL, else return from cn_filter(). Could not reproduce this issue, but Oliver Sang verified it has fixed the "Closes" problem below. Fixes: 2aa1f7a1f47c ("connector/cn_proc: Add filtering to fix some bugs") Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202309201456.84c19e27-oliver.sang@intel.com/ Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com> Link: https://lore.kernel.org/r/20231020234058.2232347-1-anjali.k.kulkarni@oracle.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
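The fix amounts to a guard at the top of the filter callback; a sketch with the same callback shape (the return convention and the state hung off sk_user_data are left to the caller, so everything below the guard is illustrative):

    #include <linux/skbuff.h>
    #include <net/sock.h>

    /* A broadcast filter callback must not assume sk_user_data is set;
     * bail out early if it is NULL instead of dereferencing it.
     */
    static int example_filter(struct sock *dsk, struct sk_buff *skb, void *data)
    {
            if (!dsk || !dsk->sk_user_data || !data)
                    return 0;       /* no per-socket state to inspect */

            /* ... compare data against the state behind sk_user_data ... */
            return 1;
    }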
2023-10-24  vfs: Convert BUG_ON to WARN_ON_ONCE in open_last_lookups  (Bernd Schubert)
The calling code actually handles -ECHILD, so this BUG_ON can be converted to WARN_ON_ONCE. Signed-off-by: Bernd Schubert <bschubert@ddn.com> Link: https://lore.kernel.org/r/20231023184718.11143-1-bschubert@ddn.com Cc: Christian Brauner <brauner@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Amir Goldstein <amir73il@gmail.com> Cc: Dharmendra Singh <dsingh@ddn.com> Cc: Miklos Szeredi <miklos@szeredi.hu> Cc: linux-fsdevel@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-24  sched/fair: Remove SIS_PROP  (Peter Zijlstra)
SIS_UTIL seems to work well, so let's remove the old thing. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20231020134337.GD33965@noisy.programming.kicks-ass.net
2023-10-24  sched/fair: Use candidate prev/recent_used CPU if scanning failed for cluster wakeup  (Yicong Yang)
Chen Yu reports a hackbench regression of cluster wakeup when the number of hackbench threads equals the CPU count [1]. Analysis shows it's because we wake up more on the target CPU even if the prev_cpu is a good wakeup candidate, which leads to a decrease in CPU utilization. Generally if the task's prev_cpu is idle we'll wake up the task on it without scanning. On cluster machines we'll try to wake up the task in the same cluster as the target for better cache affinity, so if the prev_cpu is idle but not sharing the same cluster with the target we'll still try to find an idle CPU within the cluster. This improves the performance at low loads on cluster machines. But in the issue above, if the prev_cpu is idle but not in the cluster with the target CPU, we'll try to scan for an idle one in the cluster. Since the system is busy, we're likely to fail the scanning and use the target instead, even if the prev_cpu is idle, which leads to the regression. This patch solves this in 2 steps:
o record the prev_cpu/recent_used_cpu if they're good wakeup candidates but not sharing the cluster with the target.
o on scanning failure, use the prev_cpu/recent_used_cpu if they're recorded as idle.
[1] https://lore.kernel.org/all/ZGzDLuVaHR1PAYDt@chenyu5-mobl1/ Closes: https://lore.kernel.org/all/ZGsLy83wPIpamy6x@chenyu5-mobl1/ Reported-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Tested-and-reviewed-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20231019033323.54147-4-yangyicong@huawei.com
2023-10-24  sched/fair: Scan cluster before scanning LLC in wake-up path  (Barry Song)
For platforms having clusters like Kunpeng920, CPUs within the same cluster have lower latency when synchronizing and accessing shared resources like cache. Thus, this patch tries to find an idle cpu within the cluster of the target CPU before scanning the whole LLC to gain lower latency. This is implemented in 2 steps in select_idle_sibling():
1. When the prev_cpu/recent_used_cpu are good wakeup candidates, use them if they share a cluster with the target CPU. Otherwise try to scan for an idle CPU in the target's cluster.
2. Scan the cluster prior to the LLC of the target CPU for an idle CPU to wake up on.
Testing has been done on Kunpeng920 by pinning tasks to one NUMA node and to two NUMA nodes. On Kunpeng920, each NUMA node has 8 clusters and each cluster has 4 CPUs. With this patch, we noticed an enhancement on tbench and netperf within one NUMA node or across two NUMA nodes on top of tip-sched-core commit 9b46f1abc6d4 ("sched/debug: Print 'tgid' in sched_show_task()").

tbench results (node 0):
            baseline       patched
  1:        327.2833      372.4623 ( 13.80%)
  4:       1320.5933     1479.8833 ( 12.06%)
  8:       2638.4867     2921.5267 ( 10.73%)
 16:       5282.7133     5891.5633 ( 11.53%)
 32:       9810.6733     9877.3400 (  0.68%)
 64:       7408.9367     7447.9900 (  0.53%)
128:       6203.2600     6191.6500 ( -0.19%)

tbench results (node 0-1):
            baseline       patched
  1:        332.0433      372.7223 ( 12.25%)
  4:       1325.4667     1477.6733 ( 11.48%)
  8:       2622.9433     2897.9967 ( 10.49%)
 16:       5218.6100     5878.2967 ( 12.64%)
 32:      10211.7000    11494.4000 ( 12.56%)
 64:      13313.7333    16740.0333 ( 25.74%)
128:      13959.1000    14533.9000 (  4.12%)

netperf results TCP_RR (node 0):
            baseline       patched
  1:      76546.5033    90649.9867 ( 18.42%)
  4:      77292.4450    90932.7175 ( 17.65%)
  8:      77367.7254    90882.3467 ( 17.47%)
 16:      78519.9048    90938.8344 ( 15.82%)
 32:      72169.5035    72851.6730 (  0.95%)
 64:      25911.2457    25882.2315 ( -0.11%)
128:      10752.6572    10768.6038 (  0.15%)

netperf results TCP_RR (node 0-1):
            baseline       patched
  1:      76857.6667    90892.2767 ( 18.26%)
  4:      78236.6475    90767.3017 ( 16.02%)
  8:      77929.6096    90684.1633 ( 16.37%)
 16:      77438.5873    90502.5787 ( 16.87%)
 32:      74205.6635    88301.5612 ( 19.00%)
 64:      69827.8535    71787.6706 (  2.81%)
128:      25281.4366    25771.3023 (  1.94%)

netperf results UDP_RR (node 0):
            baseline       patched
  1:      96869.8400   110800.8467 ( 14.38%)
  4:      97744.9750   109680.5425 ( 12.21%)
  8:      98783.9863   110409.9637 ( 11.77%)
 16:      99575.0235   110636.2435 ( 11.11%)
 32:      95044.7250    97622.8887 (  2.71%)
 64:      32925.2146    32644.4991 ( -0.85%)
128:      12859.2343    12824.0051 ( -0.27%)

netperf results UDP_RR (node 0-1):
            baseline       patched
  1:      97202.4733   110190.1200 ( 13.36%)
  4:      95954.0558   106245.7258 ( 10.73%)
  8:      96277.1958   105206.5304 (  9.27%)
 16:      97692.7810   107927.2125 ( 10.48%)
 32:      79999.6702   103550.2999 ( 29.44%)
 64:      80592.7413    87284.0856 (  8.30%)
128:      27701.5770    29914.5820 (  7.99%)

Note neither Kunpeng920 nor x86 Jacobsville supports SMT, so the SMT branch in the code has not been tested, but it is supposed to work. Chen Yu also noticed this will improve the performance of tbench and netperf on a 24-CPU Jacobsville machine, where there are 4 CPUs in one cluster sharing the L2 cache. [https://lore.kernel.org/lkml/Ytfjs+m1kUs0ScSn@worktop.programming.kicks-ass.net] Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Reviewed-by: Chen Yu <yu.c.chen@intel.com> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Tested-and-reviewed-by: Chen Yu <yu.c.chen@intel.com> Tested-by: Yicong Yang <yangyicong@hisilicon.com> Link: https://lkml.kernel.org/r/20231019033323.54147-3-yangyicong@huawei.com
2023-10-24  sched: Add cpus_share_resources API  (Barry Song)
Add the cpus_share_resources() API. This is preparation for the optimization of select_idle_cpu() on platforms with a cluster scheduler level. On a machine with clusters, cpus_share_resources() will test whether two cpus are within the same cluster. On a non-cluster machine it will behave the same as cpus_share_cache(). So we use "resources" here for cache resources. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Tested-and-reviewed-by: Chen Yu <yu.c.chen@intel.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Link: https://lkml.kernel.org/r/20231019033323.54147-2-yangyicong@huawei.com
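As described, the new helper narrows "sharing" down to the cluster level. A sketch of how a wakeup path might use it next to the existing cpus_share_cache(), assuming both helpers are visible via <linux/sched/topology.h> (the call site itself is purely illustrative):

    #include <linux/sched/topology.h>

    /* cpus_share_resources() tests the cluster (degrading to the LLC test
     * on machines without clusters); cpus_share_cache() tests the LLC.
     */
    static bool example_prev_cpu_is_close(int target, int prev)
    {
            if (cpus_share_resources(prev, target))
                    return true;                    /* same cluster: best case */

            return cpus_share_cache(prev, target);  /* same LLC is still ok */
    }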
2023-10-24  sched/core: Fix RQCF_ACT_SKIP leak  (Hao Jia)
Igor Raits and Bagas Sanjaya report a RQCF_ACT_SKIP leak warning. This warning may be triggered in the following situation:

  CPU0                                    CPU1

  __schedule()
    *rq->clock_update_flags <<= 1;*       unregister_fair_sched_group()
    pick_next_task_fair+0x4a/0x410          destroy_cfs_bandwidth()
    newidle_balance+0x115/0x3e0               for_each_possible_cpu(i) *i=0*
      rq_unpin_lock(this_rq, rf)                __cfsb_csd_unthrottle()
      raw_spin_rq_unlock(this_rq)                 rq_lock(*CPU0_rq*, &rf)
                                                  rq_clock_start_loop_update()
                                                  rq->clock_update_flags & RQCF_ACT_SKIP <--
      raw_spin_rq_lock(this_rq)

The purpose of RQCF_ACT_SKIP is to skip the rq clock update, but the update happens very early in __schedule() while we clear RQCF_*_SKIP very late, causing the flag to span the gap above and trigger this warning. In __schedule() we can clear the RQCF_*_SKIP flag immediately after update_rq_clock() to avoid this RQCF_ACT_SKIP leak warning. And set rq->clock_update_flags to RQCF_UPDATED to avoid the rq->clock_update_flags < RQCF_ACT_SKIP warning that may be triggered later. Fixes: ebb83d84e49b ("sched/core: Avoid multiple calling update_rq_clock() in __cfsb_csd_unthrottle()") Closes: https://lore.kernel.org/all/20230913082424.73252-1-jiahao.os@bytedance.com Reported-by: Igor Raits <igor.raits@gmail.com> Reported-by: Bagas Sanjaya <bagasdotme@gmail.com> Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Hao Jia <jiahao.os@bytedance.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/a5dd536d-041a-2ce9-f4b7-64d8d85c86dc@gmail.com
2023-10-24  sock: Ignore memcg pressure heuristics when raising allocated  (Abel Wu)
Before sockets became aware of net-memcg's memory pressure (which they have been since commit e1aab161e013 ("socket: initial cgroup code.")), memory usage was allowed to rise if below average even when under the protocol's pressure. This provides fairness among the sockets of the same protocol. That commit changed this, but the heuristic will also be effective when only the memcg is under pressure, which makes no sense. So revert that behavior. After reverting, __sk_mem_raise_allocated() no longer considers memcg's pressure. As memcgs are isolated from each other w.r.t. memory accounting, consuming one's budget won't affect others. So except in the places where buffer sizes need to be tuned, allow workloads to use the memory they are provisioned. Signed-off-by: Abel Wu <wuyun.abel@bytedance.com> Acked-by: Shakeel Butt <shakeelb@google.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20231019120026.42215-3-wuyun.abel@bytedance.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-24  sock: Doc behaviors for pressure heuristics  (Abel Wu)
There are now two accounting infrastructures for skmem, while the heuristics in __sk_mem_raise_allocated() were actually introduced before memcg was born. Add some comments to clarify whether they can be applied to both infrastructures or not. Suggested-by: Shakeel Butt <shakeelb@google.com> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com> Acked-by: Shakeel Butt <shakeelb@google.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20231019120026.42215-2-wuyun.abel@bytedance.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-24  sock: Code cleanup on __sk_mem_raise_allocated()  (Abel Wu)
Code cleanup for both better simplicity and readability. No functional change intended. Signed-off-by: Abel Wu <wuyun.abel@bytedance.com> Acked-by: Shakeel Butt <shakeelb@google.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20231019120026.42215-1-wuyun.abel@bytedance.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-24  net: ti: icssg-prueth: Add phys_port_name support  (Jan Kiszka)
Helps identify the ports in udev rules, for example. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/895ae9c1-b6dd-4a97-be14-6f2b73c7b2b5@siemens.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-24  printk: Remove unnecessary statements 'len = 0;'  (Li kunyu)
In the following two functions, len has already been initialized to 0 when the variable is defined, so remove 'len = 0;'. Signed-off-by: Li kunyu <kunyu@nfschina.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20231023062359.130633-1-kunyu@nfschina.com
2023-10-24  net: microchip: lan743x: improve throughput with rx timestamp config  (Vishvambar Panth S)
Currently all RX frames are timestamped, which results in a performance penalty when timestamping is not needed. The default is now changed to not timestamp any RX frames (HWTSTAMP_FILTER_NONE), but support has been added to allow changing the desired RX timestamping mode using SIOCSHWTSTAMP (both HWTSTAMP_FILTER_ALL, which was the previous setting, and HWTSTAMP_FILTER_PTP_V2_EVENT are now supported). All settings were tested using the hwstamp_ctl application. It is also noted that ptp4l, when started, preconfigures the device to timestamp using HWTSTAMP_FILTER_PTP_V2_EVENT, so this driver continues to work properly "out of the box".

Test setup: x64 PC with LAN7430 ---> x64 PC as partner

iperf3 with "Timestamp all incoming packets":
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bitrate         Retr
[  5] 0.00-5.05 sec   517 MBytes   859 Mbits/sec   0     sender
[  5] 0.00-5.00 sec   515 MBytes   864 Mbits/sec         receiver
iperf Done.

iperf3 with "Timestamp only PTP packets":
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bitrate         Retr
[  5] 0.00-5.04 sec   563 MBytes   937 Mbits/sec   0     sender
[  5] 0.00-5.00 sec   561 MBytes   941 Mbits/sec         receiver

Signed-off-by: Vishvambar Panth S <vishvambarpanth.s@microchip.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231020185801.25649-1-vishvambarpanth.s@microchip.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
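From user space, the mode switch described above goes through the standard SIOCSHWTSTAMP ioctl; a minimal sketch follows (the interface name "eth0" is only an example, and this mirrors what hwstamp_ctl/ptp4l do rather than anything driver-specific):

    #include <linux/net_tstamp.h>
    #include <linux/sockios.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            /* leave tx timestamping off; only the rx filter matters here */
            struct hwtstamp_config cfg = {
                    .tx_type   = HWTSTAMP_TX_OFF,
                    .rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT,
            };
            struct ifreq ifr;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            if (fd < 0)
                    return 1;

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
            ifr.ifr_data = (char *)&cfg;

            if (ioctl(fd, SIOCSHWTSTAMP, &ifr) < 0)
                    perror("SIOCSHWTSTAMP");

            close(fd);
            return 0;
    }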
2023-10-24  vgacon: fix mips/sibyte build regression  (Arnd Bergmann)
The conversion to vgacon_register_screen() was missing an #include statement for the swarm board:

  arch/mips/sibyte/swarm/setup.c:146:9: error: implicit declaration of function 'vgacon_register_screen' [-Werror=implicit-function-declaration]

Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202310240429.UqeQ2Cpr-lkp@intel.com/ Fixes: 555624c0d10b ("vgacon: clean up global screen_info instances") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20231024054412.2291220-1-arnd@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-23  Merge tag 'pull-nfsd-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds)
Pull nfsd fix from Al Viro: "Catch from lock_rename() audit; nfsd_rename() checked that both directories belonged to the same filesystem, but only after having done lock_rename(). Trivial fix, tested and acked by nfs folks"

* tag 'pull-nfsd-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  nfsd: lock_rename() needs both directories to live on the same fs
2023-10-24  fs/9p: Remove unused function declaration v9fs_inode2stat()  (Yue Haibing)
Commit 531b1094b743 ("[PATCH] v9fs: zero copy implementation") declared but never implemented this. Signed-off-by: Yue Haibing <yuehaibing@huawei.com> Message-ID: <20230807141726.38860-1-yuehaibing@huawei.com> Signed-off-by: Dominique Martinet <asmadeus@codewreck.org>