2023-04-21  Merge tag 'mlx5-fixes-2023-04-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  (Jakub Kicinski)
Saeed Mahameed says:

====================
mlx5 fixes 2023-04-19

This series provides bug fixes to mlx5 driver.

* tag 'mlx5-fixes-2023-04-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  Revert "net/mlx5e: Don't use termination table when redundant"
  net/mlx5e: Nullify table pointer when failing to create
  net/mlx5: Use recovery timeout on sync reset flow
  Revert "net/mlx5: Remove "recovery" arg from mlx5_load_one() function"
  net/mlx5e: Fix error flow in representor failing to add vport rx rule
  net/mlx5: Release tunnel device after tc update skb
  net/mlx5: E-switch, Don't destroy indirect table in split rule
  net/mlx5: E-switch, Create per vport table based on devlink encap mode
  net/mlx5e: Release the label when replacing existing ct entry
  net/mlx5e: Don't clone flow post action attributes second time
====================

Link: https://lore.kernel.org/r/20230421015057.355468-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-21  Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Jakub Kicinski)
Daniel Borkmann says:

====================
pull-request: bpf-next 2023-04-21

We've added 71 non-merge commits during the last 8 day(s) which contain a total of 116 files changed, 13397 insertions(+), 8896 deletions(-).

The main changes are:

1) Add a new BPF netfilter program type and minimal support to hook BPF programs to netfilter hooks such as prerouting or forward, from Florian Westphal.
2) Fix race between btf_put and btf_idr walk which caused a deadlock, from Alexei Starovoitov.
3) Second big batch to migrate test_verifier unit tests into test_progs for ease of readability and debugging, from Eduard Zingerman.
4) Add support for refcounted local kptrs to the verifier for allowing shared ownership, useful for adding a node to both the BPF list and rbtree, from Dave Marchevsky.
5) Migrate bpf_for(), bpf_for_each() and bpf_repeat() macros from BPF selftests into libbpf-provided bpf_helpers.h header and improve kfunc handling, from Andrii Nakryiko.
6) Support 64-bit pointers to kfuncs needed for archs like s390x, from Ilya Leoshkevich.
7) Support BPF progs under getsockopt with a NULL optval, from Stanislav Fomichev.
8) Improve verifier u32 scalar equality checking in order to enable LLVM transformations which earlier had to be disabled specifically for BPF backend, from Yonghong Song.
9) Extend bpftool's struct_ops object loading to support links, from Kui-Feng Lee.
10) Add xsk selftest follow-up fixes for hugepage allocated umem, from Magnus Karlsson.
11) Support BPF redirects from tc BPF to ifb devices, from Daniel Borkmann.
12) Add BPF support for integer type when accessing variable length arrays, from Feng Zhou.

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (71 commits)
  selftests/bpf: verifier/value_ptr_arith converted to inline assembly
  selftests/bpf: verifier/value_illegal_alu converted to inline assembly
  selftests/bpf: verifier/unpriv converted to inline assembly
  selftests/bpf: verifier/subreg converted to inline assembly
  selftests/bpf: verifier/spin_lock converted to inline assembly
  selftests/bpf: verifier/sock converted to inline assembly
  selftests/bpf: verifier/search_pruning converted to inline assembly
  selftests/bpf: verifier/runtime_jit converted to inline assembly
  selftests/bpf: verifier/regalloc converted to inline assembly
  selftests/bpf: verifier/ref_tracking converted to inline assembly
  selftests/bpf: verifier/map_ptr_mixing converted to inline assembly
  selftests/bpf: verifier/map_in_map converted to inline assembly
  selftests/bpf: verifier/lwt converted to inline assembly
  selftests/bpf: verifier/loops1 converted to inline assembly
  selftests/bpf: verifier/jeq_infer_not_null converted to inline assembly
  selftests/bpf: verifier/direct_packet_access converted to inline assembly
  selftests/bpf: verifier/d_path converted to inline assembly
  selftests/bpf: verifier/ctx converted to inline assembly
  selftests/bpf: verifier/btf_ctx_access converted to inline assembly
  selftests/bpf: verifier/bpf_get_stack converted to inline assembly
  ...
====================

Link: https://lore.kernel.org/r/20230421211035.9111-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-21  Merge branch '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  (Jakub Kicinski)
Tony Nguyen says:

====================
ixgbe: Multiple RSS bugfixes

Joe Damato says:

This series fixes two bugs I stumbled on with ixgbe:

1. The flow hash cannot be set manually with ethtool at all. Patch 1/2 addresses this by fixing what appears to be a small bug in set_rxfh in ixgbe. See the commit message for more details.

2. Once the above patch is applied and the flow hash can be set, resetting the flow hash to default fails if the number of queues is greater than the number of queues supported by RSS. Other drivers (like i40e) will reset the flow hash to use the maximum number of queues supported by RSS even if the configured queue count is larger. In other words: some queues will not have packets distributed to them by the RSS hash if the queue count is too large. Patch 2/2 allows the user to reset ixgbe to default, and the flow hash is then set correctly to either the maximum number of queues supported by RSS or the configured queue count, whichever is smaller. I believe this is correct and it mimics the behavior of i40e; `ethtool -X $iface default` should probably always succeed even if all the queues cannot be utilized. See the commit message for more details and examples.

I tested these on an ixgbe system I have access to and they appear to work as intended, but I would appreciate a review by the experts on this list :)

* '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
  ixgbe: Enable setting RSS table to default values
  ixgbe: Allow flow hash to be set via ethtool
====================

Link: https://lore.kernel.org/r/20230420235000.2971509-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-21  net: dst: fix missing initialization of rt_uncached  (Maxime Bizon)
xfrm_alloc_dst() followed by xfrm4_dst_destroy(), without a xfrm4_fill_dst() call in between, causes the following BUG:

  BUG: spinlock bad magic on CPU#0, fbxhostapd/732
  lock: 0x890b7668, .magic: 890b7668, .owner: <none>/-1, .owner_cpu: 0
  CPU: 0 PID: 732 Comm: fbxhostapd Not tainted 6.3.0-rc6-next-20230414-00613-ge8de66369925-dirty #9
  Hardware name: Marvell Kirkwood (Flattened Device Tree)
  unwind_backtrace from show_stack+0x10/0x14
  show_stack from dump_stack_lvl+0x28/0x30
  dump_stack_lvl from do_raw_spin_lock+0x20/0x80
  do_raw_spin_lock from rt_del_uncached_list+0x30/0x64
  rt_del_uncached_list from xfrm4_dst_destroy+0x3c/0xbc
  xfrm4_dst_destroy from dst_destroy+0x5c/0xb0
  dst_destroy from rcu_process_callbacks+0xc4/0xec
  rcu_process_callbacks from __do_softirq+0xb4/0x22c
  __do_softirq from call_with_stack+0x1c/0x24
  call_with_stack from do_softirq+0x60/0x6c
  do_softirq from __local_bh_enable_ip+0xa0/0xcc

Patch "net: dst: Prevent false sharing vs. dst_entry::__refcnt" moved the rt_uncached and rt_uncached_list fields from the rtable struct to the dst struct, so they are no longer zeroed by memset_after(xdst, 0, u.dst) in xfrm_alloc_dst().

Note that rt_uncached (a list_head) was never properly initialized at alloc time, but xfrm[46]_dst_destroy() is written in such a way that this was not an issue thanks to the memset:

  if (xdst->u.rt.dst.rt_uncached_list)
          rt_del_uncached_list(&xdst->u.rt);

The route code does it the other way around: rt_uncached_list is assumed to be valid iff the rt_uncached list_head is not empty:

  void rt_del_uncached_list(struct rtable *rt)
  {
          if (!list_empty(&rt->dst.rt_uncached)) {
                  struct uncached_list *ul = rt->dst.rt_uncached_list;

                  spin_lock_bh(&ul->lock);
                  list_del_init(&rt->dst.rt_uncached);
                  spin_unlock_bh(&ul->lock);
          }
  }

This patch adds mandatory rt_uncached list_head initialization in the generic dst_init(), and adapts the xfrm[46]_dst_destroy logic to match the rest of the code.

Fixes: d288a162dd1c ("net: dst: Prevent false sharing vs. dst_entry::__refcnt")
Reported-by: kernel test robot <oliver.sang@intel.com>
Link: https://lore.kernel.org/oe-lkp/202304162125.18b7bcdd-oliver.sang@intel.com
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
CC: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Maxime Bizon <mbizon@freebox.fr>
Link: https://lore.kernel.org/r/20230420182508.2417582-1-mbizon@freebox.fr
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
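A minimal sketch of the shape of this fix (helper name and surrounding context are illustrative, not the verbatim patch):

  /* sketch: called from dst_init() so every dst starts with a valid,
   * empty uncached list head instead of uninitialized memory */
  static void dst_init_rt_uncached(struct dst_entry *dst)
  {
          INIT_LIST_HEAD(&dst->rt_uncached);
  }

  /* sketch: the destroy side then follows the route code's convention */
  static void xfrm4_dst_destroy_uncached(struct xfrm_dst *xdst)
  {
          /* valid iff the list_head is non-empty, as in rt_del_uncached_list() */
          if (!list_empty(&xdst->u.rt.dst.rt_uncached))
                  rt_del_uncached_list(&xdst->u.rt);
  }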
2023-04-21  net: dsa: qca8k: fix LEDS_CLASS dependency  (Arnd Bergmann)
With LEDS_CLASS=m, a built-in qca8k driver fails to link:

  arm-linux-gnueabi-ld: drivers/net/dsa/qca/qca8k-leds.o: in function `qca8k_setup_led_ctrl':
  qca8k-leds.c:(.text+0x1ea): undefined reference to `devm_led_classdev_register_ext'

Change the dependency to avoid the broken configuration.

Fixes: 1e264f9d2918 ("net: dsa: qca8k: add LEDs basic support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Christian Marangi <ansuelsmth@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20230420213639.2243388-1-arnd@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-21  net/sched: cls_api: Initialize miss_cookie_node when action miss is not used  (Ivan Vecera)
Function tcf_exts_init_ex() sets the exts->miss_cookie_node pointer only when use_action_miss is true, so in the other case it assumes that the field was set to NULL by the caller. If it was not, the field contains garbage and a subsequent tcf_exts_destroy() call results in a crash. Ensure that the .miss_cookie_node pointer is NULL when the use_action_miss parameter is false to avoid this potential scenario.

Fixes: 80cd22c35c90 ("net/sched: cls_api: Support hardware miss to tc action")
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Reviewed-by: Pedro Tammela <pctammela@mojatatu.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230420183634.1139391-1-ivecera@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
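A minimal sketch of the intended invariant (names from the commit text; the function's other parameters and the allocation path are elided):

  static int tcf_exts_init_ex(struct tcf_exts *exts, bool use_action_miss)
  {
          /* always NULL the node first so tcf_exts_destroy() can test it
           * safely regardless of which path initialized the exts */
          exts->miss_cookie_node = NULL;

          if (!use_action_miss)
                  return 0;

          /* ... allocate and register the miss cookie node here ... */
          return 0;
  }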
2023-04-21  net/handshake: Fix section mismatch in handshake_exit  (Geert Uytterhoeven)
If CONFIG_NET_NS=n (e.g. m68k/defconfig):

  WARNING: modpost: vmlinux.o: section mismatch in reference: handshake_exit (section: .exit.text) -> handshake_genl_net_ops (section: .init.data)
  ERROR: modpost: Section mismatches detected.

Fix this by dropping the __net_initdata tag from handshake_genl_net_ops.

Fixes: 3b3009ea8abb713b ("net/handshake: Create a NETLINK service for handling handshake requests")
Reported-by: noreply@ellerman.id.au
Closes: http://kisskb.ellerman.id.au/kisskb/buildresult/14912987
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://lore.kernel.org/r/20230420173723.3773434-1-geert@linux-m68k.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
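A before/after sketch of the change described above (the initializer members are illustrative):

  /* before: with CONFIG_NET_NS=n, __net_initdata places the ops in
   * .init.data, which is discarded while .exit.text still references it */
  static struct pernet_operations handshake_genl_net_ops __net_initdata = {
          /* .init/.exit callbacks ... */
  };

  /* after: plain .data, safe to reference from handshake_exit() */
  static struct pernet_operations handshake_genl_net_ops = {
          /* .init/.exit callbacks ... */
  };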
2023-04-21  net/sched: sch_fq: fix integer overflow of "credit"  (Davide Caratti)
If sch_fq is configured with "initial quantum" having values greater than INT_MAX, the first assignment of "credit" does signed integer overflow to a very negative value. In this situation, the syzkaller script provided by Christoph triggers the CPU soft-lockup warning even with few sockets. It's not an infinite loop, but "credit" probably wasn't meant to be minus 2Gb for each new flow. Capping "initial quantum" to INT_MAX proved to fix the issue.

v2: validation of "initial quantum" is done in fq_policy, instead of open coding in fq_change() - suggested by Jakub Kicinski

Reported-by: Christoph Paasch <cpaasch@apple.com>
Link: https://github.com/multipath-tcp/mptcp_net-next/issues/377
Fixes: afe4fd062416 ("pkt_sched: fq: Fair Queue packet scheduler")
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Link: https://lore.kernel.org/r/7b3a3c7e36d03068707a021760a194a8eb5ad41a.1682002300.git.dcaratti@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
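One plausible shape for doing this validation in the netlink policy (a sketch; the exact policy entry is assumed, not quoted from the patch):

  static struct netlink_range_validation iq_range = {
          .max = INT_MAX, /* cap "initial quantum" so credit cannot overflow */
  };

  static const struct nla_policy fq_policy[TCA_FQ_MAX + 1] = {
          [TCA_FQ_INITIAL_QUANTUM] = NLA_POLICY_FULL_RANGE(NLA_U32, &iq_range),
          /* ... other attributes unchanged ... */
  };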
2023-04-21  nfp: fix incorrect pointer dereference when offloading IPsec with bonding  (Huanhuan Wang)
There are two pointers in struct xfrm_dev_offload: *dev and *real_dev. The *dev pointer points to either the bonding interface or the real interface: if bonding IPsec offload is used, it points to the bonding interface; if not, it points to the real interface. *real_dev always points to the real interface, so nfp should always use real_dev instead of dev. Prior to this change the system becomes unresponsive when offloading IPsec for a device which is a lower device to a bonding device.

Fixes: 859a497fe80c ("nfp: implement xfrm callbacks and expose ipsec offload feature to upper layer")
CC: stable@vger.kernel.org
Signed-off-by: Huanhuan Wang <huanhuan.wang@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Link: https://lore.kernel.org/r/20230420140125.38521-1-louis.peens@corigine.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
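A sketch of how a driver callback would pick the device (the function name is illustrative; the xso.dev/xso.real_dev fields are as described above):

  /* sketch: inside the driver's xfrm state add/delete callbacks */
  static int nfp_ipsec_state_add(struct xfrm_state *x)
  {
          /* x->xso.dev may be the bond; the nfp netdev is the real device */
          struct net_device *netdev = x->xso.real_dev;
          struct nfp_net *nn = netdev_priv(netdev);

          /* ... program the SA into nn's hardware table ... */
          return 0;
  }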
2023-04-21  net: dpaa: Fix uninitialized variable in dpaa_stop()  (Dan Carpenter)
The return value is not initialized on the success path. Fixes: 901bdff2f529 ("net: fman: Change return type of disable to void") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com> Reviewed-by: Sean Anderson <sean.anderson@seco.com> Link: https://lore.kernel.org/r/8c9dc377-8495-495f-a4e5-4d2d0ee12f0c@kili.mountain Signed-off-by: Jakub Kicinski <kuba@kernel.org>
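A minimal sketch of the fix's shape (context abridged; only the initialization matters):

  static int dpaa_stop(struct net_device *net_dev)
  {
          int err = 0;    /* previously uninitialized on the success path */

          /* ... disable MAC/ports; fman disable now returns void, so err
           * is only overwritten by calls that can still fail ... */

          return err;
  }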
2023-04-21  net: phy: add basic driver for NXP CBTX PHY  (Vladimir Oltean)
The CBTX PHY is a Fast Ethernet PHY integrated into the SJA1110 A/B/C automotive Ethernet switches. It was hoped it would work with the Generic PHY driver, but alas, it doesn't. The most important reason why is that the PHY is powered down by default, and it needs a vendor register to power it on. It has a linear memory map that is accessed over SPI by the SJA1110 switch driver, which exposes a fake MDIO controller. It has the following (and only the following) standard clause 22 registers:

  0x0: MII_BMCR
  0x1: MII_BMSR
  0x2: MII_PHYSID1
  0x3: MII_PHYSID2
  0x4: MII_ADVERTISE
  0x5: MII_LPA
  0x6: MII_EXPANSION
  0x7: the missing MII_NPAGE for the Next Page Transmit Register

Every other register is vendor-defined. The register map expands the standard clause 22 5-bit address space of 0x20 registers, however the driver does not need to access the extra registers for now (and hopefully never). If it ever needs to do that, it is possible to implement a fake (software) page switching mechanism between the PHY driver and the SJA1110 MDIO controller driver.

Also, Auto-MDIX is turned off by default in hardware; the driver turns it on by default and reports the current status. I've tested this with a VSC8514 link partner and a crossover cable, by forcing the mode on the link partner, and seeing that the CBTX PHY always sees the reverse of the mode forced on the VSC8514 (and that traffic works). The link doesn't come up (as expected) if MDI modes are forced on both ends in the same way (with the cross-over cable, that is).

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20230418190141.1040562-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-04-22  netfilter: nf_tables: allow to create netdev chain without device  (Pablo Neira Ayuso)
Relax netdev chain creation to allow for loading the ruleset first, then adding/deleting devices at a later stage. Hardware offload does not support this feature yet.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: support for deleting devices in an existing netdev chain  (Pablo Neira Ayuso)
This patch allows for deleting devices in an existing netdev chain. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: support for adding new devices to an existing netdev chain  (Pablo Neira Ayuso)
This patch allows users to add devices to an existing netdev chain. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: rename function to destroy hook list  (Pablo Neira Ayuso)
Rename nft_flowtable_hooks_destroy() to nft_hooks_destroy() to prepare for netdev chain device updates.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: do not send complete notification of deletions  (Pablo Neira Ayuso)
In most cases, table, name and handle are sufficient for userspace to identify an object that has been deleted. Skipping unneeded fields in the netlink attributes in the message saves bandwidth (i.e. fewer chances of hitting ENOBUFS).

Rules are an exception: the existing userspace monitor code relies on the rule definition. This exception can be removed by implementing a rule cache in userspace; this is already supported by the tracing infrastructure.

Regarding flowtables, incremental deletion of devices is possible. Skipping a full notification allows userspace to differentiate between flowtable removal and incremental removal of devices.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: extended netlink error reporting for netdevice  (Pablo Neira Ayuso)
Flowtable and netdev chains are bound to one or several netdevices; extend netlink error reporting to specify the netdevice that triggers the error.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  ipvs: Correct spelling in comments  (Simon Horman)
Correct some spelling errors flagged by codespell and found by inspection. Signed-off-by: Simon Horman <horms@kernel.org> Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  ipvs: Remove {Enter,Leave}Function  (Simon Horman)
Remove EnterFunction and LeaveFunction. These debugging macros seem well past their use-by date, and appear to have little value these days. Removing them allows some trivial cleanup of some exit paths for some functions; these cleanups are also included in this patch. There is likely scope for further cleanup of both debugging and unwind paths, but let's leave that for another day.

Only intended to change debug output, and only when CONFIG_IP_VS_DEBUG is enabled. Compile tested only.

Signed-off-by: Simon Horman <horms@kernel.org>
Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  ipvs: Consistently use array_size() in ip_vs_conn_init()  (Simon Horman)
Consistently use array_size() to calculate the size of ip_vs_conn_tab in bytes. Flagged by Coccinelle:

  WARNING: array_size is already used (line 1498) to compute the same size

No functional change intended. Compile tested only.

Signed-off-by: Simon Horman <horms@kernel.org>
Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
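A sketch of the pattern in question (variable names follow the ipvs code; this is illustrative, not the verbatim hunk):

  /* allocate the connection hash table, sizing it the same way the
   * Coccinelle-flagged site already does */
  ip_vs_conn_tab = vmalloc(array_size(ip_vs_conn_tab_size,
                                      sizeof(*ip_vs_conn_tab)));
  if (!ip_vs_conn_tab)
          return -ENOMEM;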
2023-04-22  ipvs: Update width of source for ip_vs_sync_conn_options  (Simon Horman)
In ip_vs_sync_conn_v0() a copy is made to struct ip_vs_sync_conn_options. That structure looks like this:

  struct ip_vs_sync_conn_options {
          struct ip_vs_seq in_seq;
          struct ip_vs_seq out_seq;
  };

The source of the copy is the in_seq field of struct ip_vs_conn, whose type is struct ip_vs_seq. Thus we can see that the source is not as wide as the amount of data copied, which is the width of struct ip_vs_sync_conn_options. The copy is safe because the next field is another struct ip_vs_seq. Make use of struct_group() to annotate this.

Flagged by gcc-13 as:

  In file included from ./include/linux/string.h:254,
                   from ./include/linux/bitmap.h:11,
                   from ./include/linux/cpumask.h:12,
                   from ./arch/x86/include/asm/paravirt.h:17,
                   from ./arch/x86/include/asm/cpuid.h:62,
                   from ./arch/x86/include/asm/processor.h:19,
                   from ./arch/x86/include/asm/timex.h:5,
                   from ./include/linux/timex.h:67,
                   from ./include/linux/time32.h:13,
                   from ./include/linux/time.h:60,
                   from ./include/linux/stat.h:19,
                   from ./include/linux/module.h:13,
                   from net/netfilter/ipvs/ip_vs_sync.c:38:
  In function 'fortify_memcpy_chk',
      inlined from 'ip_vs_sync_conn_v0' at net/netfilter/ipvs/ip_vs_sync.c:606:3:
  ./include/linux/fortify-string.h:529:25: error: call to '__read_overflow2_field' declared with attribute warning: detected read beyond size of field (2nd parameter); maybe use struct_group()? [-Werror=attribute-warning]
    529 |                         __read_overflow2_field(q_size_field, size);
        |

Compile tested only.

Signed-off-by: Simon Horman <horms@kernel.org>
Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
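A sketch of the struct_group() annotation described above (the group name is assumed for illustration):

  /* in struct ip_vs_conn: group the two sequence structs so a single
   * copy spanning both has a correctly-sized source field */
  struct_group(sync_conn_opt,
          struct ip_vs_seq in_seq;        /* incoming seq. struct */
          struct ip_vs_seq out_seq;       /* outgoing seq. struct */
  );

  /* the copy can then name the group instead of overreading in_seq */
  memcpy(opt, &cp->sync_conn_opt, sizeof(*opt));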
2023-04-22  netfilter: nf_tables: do not store rule in traceinfo structure  (Florian Westphal)
Pass it as argument instead. This reduces the size of traceinfo to 16 bytes. Total stack usage:

  nf_tables_core.c:252 nft_do_chain 304 static

While it's possible to also pass basechain as argument, doing so increases nft_do_chain function size. Unlike pktinfo/verdict/rule, the basechain info isn't used in the expression evaluation path. gcc places it on the stack, which results in extra push/pop when it gets passed to the trace helpers as argument rather than as part of the traceinfo structure.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: do not store verdict in traceinfo structure  (Florian Westphal)
Just pass it as argument to nft_trace_notify. Stack is reduced by 8 bytes:

  nf_tables_core.c:256 nft_do_chain 312 static

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: do not store pktinfo in traceinfo structure  (Florian Westphal)
Pass it as argument. No change in object size. Stack usage decreases by 8 bytes:

  nf_tables_core.c:254 nft_do_chain 320 static

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: remove unneeded conditional  (Florian Westphal)
This helper is inlined, so keep it as small as possible. If the static key is true, there is only a very small chance that info->trace is false:

1. tracing was enabled at this very moment, and the static key was updated to active right after nft_do_table was called.
2. tracing was disabled at this very moment. info->trace is already false, and the static key is about to be patched to false soon.

In both cases, no event will be sent because info->trace is false (checked in the noinline slowpath). info->nf_trace is irrelevant. The nf_trace update is redundant in this case, but this will only happen for a short duration, when the static key flips.

        text    data     bss     dec     hex filename
  old:  2980     192      32    3204     c84 nf_tables_core.o
  new:  2964     192      32    3188     c74 nf_tables_core.o

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: make validation state per table  (Florian Westphal)
We only need to validate tables that saw changes in the current transaction. The existing code revalidates all tables, but this isn't needed as cross-table jumps are not allowed (chains have table scope). Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: don't write table validation state without mutex  (Florian Westphal)
The ->cleanup callback needs to be removed: this doesn't work anymore, as the transaction mutex is already released in the ->abort function. Just do it after a successful validation pass; this either happens from the commit or abort phases, where the transaction mutex is held.

Fixes: f102d66b335a ("netfilter: nf_tables: use dedicated mutex to guard transactions")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: don't store chain address on jump  (Florian Westphal)
Now that the rule trailer/end marker and the rcu head reside in the same structure, we no longer need to save/restore the chain pointer when performing/returning from a jump. We can simply let the trace infra walk the evaluated rule until it hits the end marker and then fetch the chain pointer from there. When the rule is NULL (policy tracing), then chain and basechain pointers were already identical, so just use the basechain. This cuts the size of the jumpstack in half, from 256 to 128 bytes on 64bit; scripts/stackusage says:

  nf_tables_core.c:251 nft_do_chain 328 static

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: don't store address of last rule on jump  (Florian Westphal)
Walk the rule headers until the trailer one (last_bit flag set) instead of stopping at the last_rule address. This avoids the need to store the address when jumping to another chain. This cuts the size of the jumpstack array by one third, on 64bit from 384 to 256 bytes. Still, stack usage remains quite large; scripts/stackusage:

  nf_tables_core.c:258 nft_do_chain 496 static

The next patch will also remove the chain pointer.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-22  netfilter: nf_tables: merge nft_rules_old structure and end of ruleblob marker  (Florian Westphal)
In order to free the rules in a chain via call_rcu, the rule array used to stash a rcu_head and space for a pointer at the end of the rule array. When the current nft_rule_dp blob format got added in 2c865a8a28a1 ("netfilter: nf_tables: add rule blob layout"), this results in a double-trailer:

  size (unsigned long)
  struct nft_rule_dp
    struct nft_expr
    ...
  struct nft_rule_dp
    struct nft_expr
    ...
  struct nft_rule_dp (is_last=1)  // Trailer

The trailer, struct nft_rule_dp (is_last=1), is not accounted for in size, so it can be located via start_addr + size. Because the rcu_head is stored after 'start+size' as well, this means the is_last trailer is *aliased* to the rcu_head (struct nft_rules_old).

This is harmless, because at this time the nft_do_chain function never evaluates/accesses the trailer; it only checks the address boundary:

  for (; rule < last_rule; rule = nft_rule_next(rule)) {
  ...

But this way the last_rule address has to be stashed in the jump structure to restore it after returning from a chain. nft_do_chain stack usage has become way too big, so put it on a diet. Without this patch it is impossible to use

  for (; !rule->is_last; rule = nft_rule_next(rule)) {
  ...

because on free, the needed update of the rcu_head will clobber the nft_rule_dp is_last bit.

Furthermore, also stash the chain pointer in the trailer; this allows recovering the original chain structure from the nf_tables_trace infra without a need to place it in the jump struct. After this patch it is trivial to put the jump stack structure on a diet, done in the next two patches.

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2023-04-21  Merge tag 'kvmarm-fixes-6.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  (Paolo Bonzini)
KVM/arm64 fixes for 6.3, part #4

- Plug a buffer overflow due to the use of the user-provided register width for firmware regs. Outright reject accesses where the user register width does not match the kernel representation.
- Protect non-atomic RMW operations on vCPU flags against preemption, as an update to the flags by an intervening preemption could be lost.
2023-04-21  MIPS: Define RUNTIME_DISCARD_EXIT in LD script  (Jiaxun Yang)
MIPS's exit sections are discarded at runtime as well. This fixes the link error:

  `.exit.text' referenced in section `__jump_table' of fs/fuse/inode.o: defined in discarded section `.exit.text' of fs/fuse/inode.o

Fixes: 99cb0d917ffa ("arch: fix broken BuildID for arm64 and riscv")
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2023-04-21  mailmap: add entry for Oleksandr  (Oleksandr Natalenko)
Map my corporate email to my personal one. Link: https://lkml.kernel.org/r/20230419134734.454630-1-oleksandr@natalenko.name Signed-off-by: Oleksandr Natalenko <oleksandr@natalenko.name> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Konrad Dybcio <konrad.dybcio@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-21  ocfs2: reduce ioctl stack usage  (Arnd Bergmann)
On 32-bit architectures with KASAN_STACK enabled, the total stack usage of the ocfs2_ioctl function grows beyond the warning limit:

  fs/ocfs2/ioctl.c: In function 'ocfs2_ioctl':
  fs/ocfs2/ioctl.c:934:1: error: the frame size of 1448 bytes is larger than 1400 bytes [-Werror=frame-larger-than=]

Move each of the variables into a basic block, and mark ocfs2_info_handle() as noinline_for_stack, in order to have the variables share stack slots.

Link: https://lkml.kernel.org/r/20230417205631.1956027-1-arnd@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Changwei Ge <gechangwei@live.cn>
Cc: Gang He <ghe@suse.com>
Cc: Jun Piao <piaojun@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-21  fs/proc: add Kthread flag to /proc/$pid/status  (Chunguang Wu)
The commands `ps -ef` and `top -c` mark kernel threads with '[' and ']', but sometimes the result is not correct. The task->flags value in /proc/$pid/stat is good, but we need to remember that the value of PF_KTHREAD is 0x00200000 and convert decimal to hex. If we have no binary program or shell script that reads /proc/$pid/stat, we can see it directly with `cat /proc/$pid/status`.

Link: https://lkml.kernel.org/r/20230416052404.2920-1-fullspring2018@gmail.com
Signed-off-by: Chunguang Wu <fullspring2018@gmail.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
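A sketch of one way such a flag can be emitted (the exact hook point in fs/proc/array.c is assumed):

  /* print "Kthread: 1" for kernel threads, "Kthread: 0" otherwise */
  static inline void task_kthread(struct seq_file *m, struct task_struct *p)
  {
          seq_printf(m, "Kthread:\t%c\n",
                     p->flags & PF_KTHREAD ? '1' : '0');
  }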
2023-04-21  ia64: fix an addr to taddr in huge_pte_offset()  (Hugh Dickins)
I know nothing of ia64 htlbpage_to_page(), but guess that the p4d line should be using taddr rather than addr, like everywhere else.

Link: https://lkml.kernel.org/r/732eae88-3beb-246-2c72-281de786740@google.com
Fixes: c03ab9e32a2c ("ia64: add support for folded p4d page tables")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
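A sketch of the one-line change being described (surrounding walk abridged; ia64's huge-page path uses the shifted taddr at every level):

  pgd = pgd_offset(mm, taddr);
  if (pgd_present(*pgd)) {
          p4d = p4d_offset(pgd, taddr);   /* was: addr */
          if (p4d_present(*p4d)) {
                  pud = pud_offset(p4d, taddr);
                  /* ... pmd/pte levels continue with taddr ... */
          }
  }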
2023-04-21  sparse: remove unnecessary 0 values from rc  (Li zeming)
rc is assigned before it is first used, so initializing it to 0 is unnecessary.

Link: https://lkml.kernel.org/r/20230421214733.2909-1-zeming@nfschina.com
Signed-off-by: Li zeming <zeming@nfschina.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-21  mm: move 'mmap_min_addr' logic from callers into vm_unmapped_area()  (Linus Torvalds)
Instead of having callers care about the mmap_min_addr logic for the lowest valid mapping address (and some of them getting it wrong), just move the logic into vm_unmapped_area() itself. One less thing for various architecture cases (and generic helpers) to worry about. We should really try to make much more of this be common code, but baby steps.. Without this, vm_unmapped_area() could return an address below mmap_min_addr (because some caller forgot about that). That then causes the mmap machinery to think it has found a workable address, but then later security_mmap_addr(addr) is unhappy about it and the mmap() returns with a nonsensical error (EPERM). The proper action is to either return ENOMEM (if the virtual address space is exhausted), or try to find another address (ie do a bottom-up search for free addresses after the top-down one failed). See commit 2afc745f3e30 ("mm: ensure get_unmapped_area() returns higher address than mmap_min_addr"), which fixed this for one call site (the generic arch_get_unmapped_area_topdown() fallback) but left other cases alone. Link: https://lkml.kernel.org/r/20230418214009.1142926-1-Liam.Howlett@oracle.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Liam Howlett <liam.howlett@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
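A sketch of the clamp moving into the common helper (the structure is assumed; the point is that callers no longer special-case mmap_min_addr):

  unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
  {
          /* enforce the lowest valid mapping address in one place */
          if (info->low_limit < mmap_min_addr)
                  info->low_limit = mmap_min_addr;

          if (info->flags & VM_UNMAPPED_AREA_TOPDOWN)
                  return unmapped_area_topdown(info);
          return unmapped_area(info);
  }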
2023-04-21  hugetlb: pte_alloc_huge() to replace huge pte_alloc_map()  (Hugh Dickins)
Some architectures can have their hugetlb pages down at the lowest PTE level: their huge_pte_alloc() using pte_alloc_map(), but without any following pte_unmap(). Since none of these arches uses CONFIG_HIGHPTE, this is not seen as a problem at present; but would become a problem if forthcoming changes were to add an rcu_read_lock() into pte_offset_map(), with the rcu_read_unlock() expected in pte_unmap(). Similarly in their huge_pte_offset(): pte_offset_kernel() is good enough for that, but it's probably less confusing if we define pte_offset_huge() along with pte_alloc_huge(). Only define them without CONFIG_HIGHPTE: so there would be a build error to signal if ever more work is needed. For ease of development, define these now for 6.4-rc1, ahead of any use: then architectures can integrate patches using them, independent from mm. Link: https://lkml.kernel.org/r/ae9e7d98-8a3a-cfd9-4762-bcddffdf96cf@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com> Cc: Muchun Song <muchun.song@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
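A sketch of the paired helpers as the commit text describes them (only defined without CONFIG_HIGHPTE, so a build error flags any arch that would need more work):

  #ifndef CONFIG_HIGHPTE
  static inline pte_t *pte_offset_huge(pmd_t *pmd, unsigned long address)
  {
          return pte_offset_kernel(pmd, address);
  }
  static inline pte_t *pte_alloc_huge(struct mm_struct *mm, pmd_t *pmd,
                                      unsigned long address)
  {
          return pte_alloc(mm, pmd) ? NULL : pte_offset_huge(pmd, address);
  }
  #endif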
2023-04-21  maple_tree: fix allocation in mas_sparse_area()  (Peng Zhang)
In the case of reverse allocation, mas->index and mas->last do not point to the correct allocation range, which will cause users to get incorrect allocation results, so fix it. This bug is only triggered when the function is used in a specific way; VMA, currently its only user, does not use it that way, but a future user might trigger it. Also re-check whether the size is still satisfied after the lower bound was increased, which is a corner case and was incorrect in previous versions.

Link: https://lkml.kernel.org/r/20230419093625.99201-1-zhangpeng.00@bytedance.com
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Peng Zhang <zhangpeng.00@bytedance.com>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-21  mm: do not increment pgfault stats when page fault handler retries  (Suren Baghdasaryan)
If the page fault handler requests a retry, we will count the fault multiple times. This is a relatively harmless problem as the retry paths are not often requested, and the only user-visible problem is that the fault counter will be slightly higher than it should be. Nevertheless, userspace only took one fault, and should not see the fact that the kernel had to retry the fault multiple times. Move page fault accounting into mm_account_fault() and skip incomplete faults which will be accounted upon completion. Link: https://lkml.kernel.org/r/20230419175836.3857458-1-surenb@google.com Fixes: d065bd810b6d ("mm: retry page fault when blocking on disk transfer") Signed-off-by: Suren Baghdasaryan <surenb@google.com> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Acked-by: Peter Xu <peterx@redhat.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Jan Kara <jack@suse.cz> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Josef Bacik <josef@toxicpanda.com> Cc: Laurent Dufour <ldufour@linux.ibm.com> Cc: Liam R. Howlett <Liam.Howlett@Oracle.com> Cc: Lorenzo Stoakes <lstoakes@gmail.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Michel Lespinasse <michel@lespinasse.org> Cc: Minchan Kim <minchan@google.com> Cc: Punit Agrawal <punit.agrawal@bytedance.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
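A sketch of the accounting skip described above (helper name from the commit text; the rest of the accounting is abridged):

  static void mm_account_fault(struct pt_regs *regs, unsigned long address,
                               unsigned int flags, vm_fault_t ret)
  {
          /* an incomplete fault will be accounted when it completes */
          if (ret & VM_FAULT_RETRY)
                  return;

          count_vm_event(PGFAULT);
          count_memcg_event_mm(current->mm, PGFAULT);
          /* ... major/minor and perf accounting ... */
  }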
2023-04-21  zsmalloc: allow only one active pool compaction context  (Sergey Senozhatsky)
zsmalloc pool can be compacted concurrently by many contexts, e.g.

  cc1 handle_mm_fault()
       do_anonymous_page()
        __alloc_pages_slowpath()
         try_to_free_pages()
          do_try_to_free_pages()
           lru_gen_shrink_node()
            shrink_slab()
             do_shrink_slab()
              zs_shrinker_scan()
               zs_compact()

Pool compaction is currently (basically) single-threaded as it is performed under pool->lock. Having multiple compaction threads results in unnecessary contention, as each thread competes for pool->lock. This, in turn, affects all zsmalloc operations such as zs_malloc(), zs_map_object(), zs_free(), etc. Introduce the pool->compaction_in_progress atomic variable, which ensures that only one compaction context can run at a time. This reduces overall pool->lock contention in (corner) cases when many contexts attempt to shrink zspool simultaneously.

Link: https://lkml.kernel.org/r/20230418074639.1903197-1-senozhatsky@chromium.org
Fixes: c0547d0b6a4b ("zsmalloc: consolidate zs_pool's migrate_lock and size_class's locks")
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
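A minimal sketch of the single-compactor guard (field and flow per the commit text; per-class compaction details elided):

  static unsigned long zs_compact(struct zs_pool *pool)
  {
          unsigned long pages_freed = 0;

          /* only one context may compact a given pool at a time */
          if (atomic_xchg(&pool->compaction_in_progress, 1))
                  return 0;

          /* ... iterate size classes and compact under pool->lock ... */

          atomic_set(&pool->compaction_in_progress, 0);
          return pages_freed;
  }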
2023-04-21  selftests/mm: add new selftests for KSM  (Stefan Roesch)
This adds three new tests to the selftests for KSM. These tests use the new prctl API's to enable and disable KSM.

1) add new prctl flags to prctl header file in tools dir

   This adds the new prctl flags to the include file prctl.h in the tools directory. This makes sure they are available for testing.

2) add KSM prctl merge test to ksm_tests

   This adds the -t option to the ksm_tests program. The -t flag allows to specify if it should use madvise or prctl ksm merging.

3) add two functions for debugging merge outcome for ksm_tests

   This adds two functions to report the metrics in /proc/self/ksm_stat and /sys/kernel/debug/mm/ksm. The debug output is enabled with the -d option.

4) add KSM prctl test to ksm_functional_tests

   This adds a test to the ksm_functional_test that verifies that the prctl system call to enable / disable KSM works.

5) add KSM fork test to ksm_functional_test

   Add fork test to verify that the MMF_VM_MERGE_ANY flag is inherited by the child process.

Link: https://lkml.kernel.org/r/20230418051342.1919757-4-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-21  mm: add new KSM process and sysfs knobs  (Stefan Roesch)
This adds the general_profit KSM sysfs knob and the process profit metric knobs to ksm_stat.

1) expose general_profit metric

   The documentation mentions a general profit metric, however this metric is not calculated. In addition the formula depends on the size of internal structures, which makes it more difficult for an administrator to make the calculation. Adding the metric for a better user experience.

2) document general_profit sysfs knob

3) calculate ksm process profit metric

   The ksm documentation mentions the process profit metric and how to calculate it. This adds the calculation of the metric.

4) mm: expose ksm process profit metric in ksm_stat

   This exposes the ksm process profit metric in /proc/<pid>/ksm_stat. The documentation mentions the formula for the ksm process profit metric, however it does not calculate it. In addition the formula depends on the size of internal structures. So it makes sense to expose it.

5) document new procfs ksm knobs

Link: https://lkml.kernel.org/r/20230418051342.1919757-3-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-21  mm: add new api to enable ksm per process  (Stefan Roesch)
Patch series "mm: process/cgroup ksm support", v9.

So far KSM can only be enabled by calling madvise for memory regions. To be able to use KSM for more workloads, KSM needs to have the ability to be enabled / disabled at the process / cgroup level.

Use case 1: The madvise call is not available in the programming language. An example for this are programs with forked workloads using a garbage collected language without pointers. In such a language madvise cannot be made available. In addition the addresses of objects get moved around as they are garbage collected. KSM sharing needs to be enabled "from the outside" for these types of workloads.

Use case 2: The same interpreter can also be used for workloads where KSM brings no benefit or even has overhead. We'd like to be able to enable KSM on a workload by workload basis.

Use case 3: With the madvise call sharing opportunities are only enabled for the current process: it is a workload-local decision. A considerable number of sharing opportunities may exist across multiple workloads or jobs (if they are part of the same security domain). Only a higher-level entity like a job scheduler or container can know for certain if it's running one or more instances of a job. That job scheduler however doesn't have the necessary internal workload knowledge to make targeted madvise calls.

Security concerns: In previous discussions security concerns have been brought up. The problem is that an individual workload does not have the knowledge about what else is running on a machine. Therefore it has to be very conservative in what memory areas can be shared or not. However, if the system is dedicated to running multiple jobs within the same security domain, it's the job scheduler that has the knowledge that sharing can be safely enabled and is even desirable.

Performance: Experiments with using UKSM have shown a capacity increase of around 20%. Here are the metrics from an instagram workload (taken from a machine with 64GB main memory):

  full_scans: 445
  general_profit: 20158298048
  max_page_sharing: 256
  merge_across_nodes: 1
  pages_shared: 129547
  pages_sharing: 5119146
  pages_to_scan: 4000
  pages_unshared: 1760924
  pages_volatile: 10761341
  run: 1
  sleep_millisecs: 20
  stable_node_chains: 167
  stable_node_chains_prune_millisecs: 2000
  stable_node_dups: 2751
  use_zero_pages: 0
  zero_pages_sharing: 0

After the service is running for 30 minutes to an hour, 4 to 5 million shared pages are common for this workload when using KSM.

Detailed changes:

1. New options for prctl system command

   This patch series adds two new options to the prctl system call. The first one allows to enable KSM at the process level and the second one to query the setting. The setting will be inherited by child processes. With the above setting, KSM can be enabled for the seed process of a cgroup and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing

   When KSM is enabled at the process level, the KSM code will iterate over all the VMA's and enable KSM for the eligible VMA's. When forking a process that has KSM enabled, the setting will be inherited by the new child process.

3. Add general_profit metric

   The general_profit metric of KSM is specified in the documentation, but not calculated. This adds the general profit metric to /sys/kernel/debug/mm/ksm.

4. Add more metrics to ksm_stat

   This adds the process profit metric to /proc/<pid>/ksm_stat.

5. Add more tests to ksm_tests and ksm_functional_tests

   This adds an option to specify the merge type to the ksm_tests. This allows testing madvise and prctl KSM. It also adds two new tests to ksm_functional_tests: one to test the new prctl options, and the other is a fork test to verify that the KSM process setting is inherited by child processes.

This patch (of 3):

So far KSM can only be enabled by calling madvise for memory regions. To be able to use KSM for more workloads, KSM needs to have the ability to be enabled / disabled at the process / cgroup level.

1. New options for prctl system command

   This patch series adds two new options to the prctl system call. The first one allows to enable KSM at the process level and the second one to query the setting. The setting will be inherited by child processes. With the above setting, KSM can be enabled for the seed process of a cgroup and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing

   When KSM is enabled at the process level, the KSM code will iterate over all the VMA's and enable KSM for the eligible VMA's. When forking a process that has KSM enabled, the setting will be inherited by the new child process.

   1) Introduce new MMF_VM_MERGE_ANY flag

      This introduces the new MMF_VM_MERGE_ANY flag. When this flag is set, kernel samepage merging (ksm) gets enabled for all vma's of a process.

   2) Setting VM_MERGEABLE on VMA creation

      When a VMA is created, if the MMF_VM_MERGE_ANY flag is set, the VM_MERGEABLE flag will be set for this VMA.

   3) Support disabling of ksm for a process

      This adds the ability to disable ksm for a process if ksm has been enabled for the process with prctl.

   4) Add new prctl option to get and set ksm for a process

      This adds two new options to the prctl system call:
      - enable ksm for all vmas of a process (if the vmas support it)
      - query if ksm has been enabled for a process

3. Disabling MMF_VM_MERGE_ANY for storage keys in s390

   In the s390 architecture when storage keys are used, MMF_VM_MERGE_ANY will be disabled.

Link: https://lkml.kernel.org/r/20230418051342.1919757-1-shr@devkernel.io
Link: https://lkml.kernel.org/r/20230418051342.1919757-2-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
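A small userspace sketch of how a seed process might use the new interface (assuming the PR_SET_MEMORY_MERGE / PR_GET_MEMORY_MERGE constants from this series are available in the installed uapi headers):

  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
          /* opt the whole process into KSM; forked children inherit it */
          if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0))
                  perror("PR_SET_MEMORY_MERGE");

          printf("ksm enabled: %d\n",
                 prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0));
          return 0;
  }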
2023-04-21  mm: shrinkers: fix debugfs file permissions  (John Keeping)
The permissions for the files here are swapped as "count" is read-only and "scan" is write-only. While this doesn't really matter as these permissions don't stop the files being opened for reading/writing as appropriate, they are shown by "ls -l" and are confusing. Link: https://lkml.kernel.org/r/20230418101906.3131303-1-john@metanate.com Signed-off-by: John Keeping <john@metanate.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-21  mm: don't check VMA write permissions if the PTE/PMD indicates write permissions  (David Hildenbrand)
Staring at the comment "Recheck VMA as permissions can change since migration started" in remove_migration_pte() can result in confusion, because if the source PTE/PMD indicates write permissions, then there should be no need to check VMA write permissions when restoring migration entries or PTE-mapping a PMD. Commit d3cb8bf6081b ("mm: migrate: Close race between migration completion and mprotect") introduced the maybe_mkwrite() handling in remove_migration_pte() in 2014, stating that a race between mprotect() and migration finishing would be possible, and that we could end up with a writable PTE that should be readable. However, mprotect() code first updates vma->vm_flags / vma->vm_page_prot and then walks the page tables to (a) set all present writable PTEs to read-only and (b) convert all writable migration entries to readable migration entries. While walking the page tables and modifying the entries, migration code has to grab the PT locks to synchronize against concurrent page table modifications. Assuming migration would find a writable migration entry (while holding the PT lock) and replace it with a writable present PTE, surely mprotect() code didn't stumble over the writable migration entry yet (converting it into a readable migration entry) and would instead wait for the PT lock to convert the now present writable PTE into a read-only PTE. As mprotect() didn't finish yet, the behavior is just like migration didn't happen: a writable PTE will be converted to a read-only PTE. So it's fine to rely on the writability information in the source PTE/PMD and not recheck against the VMA as long as we're holding the PT lock to synchronize with anyone who concurrently wants to downgrade write permissions (like mprotect()) by first adjusting vma->vm_flags / vma->vm_page_prot to then walk over the page tables to adjust the page table entries. Running test cases that should reveal such races -- mprotect(PROT_READ) racing with page migration or THP splitting -- for multiple hours did not reveal an issue with this cleanup. Link: https://lkml.kernel.org/r/20230418142113.439494-1-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: Alistair Popple <apopple@nvidia.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Peter Xu <peterx@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
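A sketch of the resulting logic in remove_migration_pte() (simplified; softdirty/uffd-wp details elided, helper names as in mainline swapops):

  entry = pte_to_swp_entry(old_pte);
  pte = mk_pte(page, vma->vm_page_prot);

  /* holding the PT lock, the entry's own writability can be trusted;
   * no maybe_mkwrite()/VMA recheck is needed */
  if (is_writable_migration_entry(entry))
          pte = pte_mkwrite(pte);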
2023-04-21  migrate_pages_batch: fix statistics for longterm pin retry  (Huang Ying)
In commit fd4a7ac32918 ("mm: migrate: try again if THP split is failed due to page refcnt"), if the THP splitting fails due to the page reference count, we will retry to improve the migration success rate. But the failed splitting is counted as a migration failure and a migration retry, which causes duplicated failure counting. So, in this patch, this is fixed by undoing the failure counting if we decide to retry. The patch is tested via failure injection.

Link: https://lkml.kernel.org/r/20230416235929.1040194-1-ying.huang@intel.com
Fixes: fd4a7ac32918 ("mm: migrate: try again if THP split is failed due to page refcnt")
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-21  userfaultfd: use helper function range_in_vma()  (ZhangPeng)
We can use range_in_vma() to check if dst_start, dst_start + len are within the dst_vma range. Minor readability improvement. Link: https://lkml.kernel.org/r/20230417003919.930515-1-zhangpeng362@huawei.com Signed-off-by: ZhangPeng <zhangpeng362@huawei.com> Reviewed-by: David Hildenbrand <david@redhat.com> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Nanyong Sun <sunnanyong@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
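For reference, the helper as defined in include/linux/mm.h, plus a sketch of the call-site shape (the userfaultfd variable names are as in the commit text):

  static inline bool range_in_vma(struct vm_area_struct *vma,
                                  unsigned long start, unsigned long end)
  {
          return (vma && vma->vm_start <= start && end <= vma->vm_end);
  }

  /* sketch: replaces two explicit boundary comparisons */
  if (!range_in_vma(dst_vma, dst_start, dst_start + len))
          goto out_unlock;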
2023-04-21  lib/show_mem.c: use for_each_populated_zone() to simplify code  (Yajun Deng)
__show_mem() needs to iterate over all zones that have memory, we can simplify the code by using for_each_populated_zone(). Link: https://lkml.kernel.org/r/20230417035226.4013584-1-yajun.deng@linux.dev Signed-off-by: Yajun Deng <yajun.deng@linux.dev> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
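A sketch of the simplification pattern (the per-zone body is elided):

  struct zone *zone;

  /* iterates only zones with present pages, so no explicit
   * populated_zone() check is needed inside the loop */
  for_each_populated_zone(zone) {
          /* ... accumulate and report this zone's counters ... */
  }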