2024-09-10ASoC: loongson: Add the correct judgement returnTang Bin
Using dev_err_probe() can simplify the code, but the error return should not have been deleted; fix it. Fixes: 3d2528d6c021 ("ASoC: loongson: Simplify with dev_err_probe()") Signed-off-by: Tang Bin <tangbin@cmss.chinamobile.com> Link: https://patch.msgid.link/20240910021104.3400-1-tangbin@cmss.chinamobile.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-09-10ASoC: soc-ac97: Fix the incorrect descriptionTang Bin
In the functions snd_soc_alloc_ac97_component() and snd_soc_new_ac97_component(), the error return is an ERR_PTR, so fix the incorrect description. Fixes: 47e039413cac ("ASoC: Add support for allocating AC'97 device before registering it") Fixes: 7361fbeaeaab ("ASoC: ac97: Add support for resetting device before registration") Signed-off-by: Tang Bin <tangbin@cmss.chinamobile.com> Link: https://patch.msgid.link/20240910013303.2044-1-tangbin@cmss.chinamobile.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-09-10ALSA: memalloc: Move snd_malloc_ops definition into memalloc.c againTakashi Iwai
The definition of struct snd_malloc_ops was moved out to memalloc_local.h since there was other code for S/G buffer allocation referring to the struct. But since the code was changed to use the non-contiguous allocators, the struct is referred to solely in memalloc.c, hence it makes little sense to keep a separate header file. Let's move it back to memalloc.c. Link: https://patch.msgid.link/20240910113141.32618-1-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
2024-09-10net: ftgmac100: Enable TX interrupt to avoid TX timeoutJacky Chou
Currently, the driver only enables the RX interrupt to handle RX packets and TX resources. When there is no RX traffic, the TX resources have to wait for an RX interrupt before they are freed. This situation triggers the TX timeout watchdog when the MAC TX ring has no more resources to transmit packets. Therefore, enable the TX interrupt to release TX resources at any time. While verifying iperf3 over UDP, the network hangs, as in the log below.
root# iperf3 -c 192.168.100.100 -i1 -t10 -u -b0
Connecting to host 192.168.100.100, port 5201
[ 4] local 192.168.100.101 port 35773 connected to 192.168.100.100 port 5201
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-20.42 sec 160 KBytes 64.2 Kbits/sec 20
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
[ 4] 20.42-20.42 sec 0.00 Bytes 0.00 bits/sec 0
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-20.42 sec 160 KBytes 64.2 Kbits/sec 0.000 ms 0/20 (0%)
[ 4] Sent 20 datagrams
iperf3: error - the server has terminated
In the test topology, the FTGMAC is connected directly to a PC. Unlike TCP, UDP does not wait for an ACK, so the FTGMAC needs the TX interrupt to release TX resources instead of waiting for the RX interrupt. Fixes: 10cbd6407609 ("ftgmac100: Rework NAPI & interrupts handling") Signed-off-by: Jacky Chou <jacky_chou@aspeedtech.com> Link: https://patch.msgid.link/20240906062831.2243399-1-jacky_chou@aspeedtech.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-09-10Merge branch 'perf/urgent' into perf/core, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2024-09-10net: mdiobus: Debug print fwnode handle instead of raw pointerAlexander Dahl
The message was slightly misleading before, because the printed pointer is the fwnode, not the phy device, as its placement in the message suggested. Include the header for the dev_dbg() declaration while at it.
Output before:
[ +0.001247] mdio_bus f802c000.ethernet-ffffffff: registered phy 2612f00a fwnode at address 3
Output after:
[ +0.001229] mdio_bus f802c000.ethernet-ffffffff: registered phy fwnode /ahb/apb/ethernet@f802c000/ethernet-phy@3 at address 3
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Alexander Dahl <ada@thorsis.com> Link: https://patch.msgid.link/20240906062256.11289-1-ada@thorsis.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-09-10drm/nouveau/fb: restore init() for ramgp102Ben Skeggs
init() was removed from ramgp102 when reworking the memory detection, as it was thought that the code was only necessary when the driver performs mclk changes, which nouveau doesn't support on pascal. However, it turns out that we still need to execute this on some GPUs to restore settings after DEVINIT, so revert to the original behaviour. v2: fix tags in commit message, cc stable Closes: https://gitlab.freedesktop.org/drm/nouveau/-/issues/319 Fixes: 2c0c15a22fa0 ("drm/nouveau/fb/gp102-ga100: switch to simpler vram size detection method") Cc: stable@vger.kernel.org # 6.6+ Signed-off-by: Ben Skeggs <bskeggs@nvidia.com> Signed-off-by: Danilo Krummrich <dakr@kernel.org> Link: https://patchwork.freedesktop.org/patch/msgid/20240904232418.8590-1-bskeggs@nvidia.com
2024-09-10octeontx2-af: Modify SMQ flush sequence to drop packetsNaveen Mamindlapalli
The current SMQ flush sequence waits for the packets in the TM pipeline to be transmitted out of the link. This sequence does not succeed in HW when there is any issue with the link, such as a lack of link credits, link down, or other traffic fully occupying the link bandwidth (QoS). This patch modifies the SMQ flush sequence to drop the packets after the TL1 level (SQM) instead of polling for the packets to be sent out of the RPM/CGX link. Fixes: 5d9b976d4480 ("octeontx2-af: Support fixed transmit scheduler topology") Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com> Reviewed-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Link: https://patch.msgid.link/20240906045838.1620308-1-naveenm@marvell.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-09-10net/smc: add sysctl for smc_limit_hsD. Wythe
In commit 48b6190a0042 ("net/smc: Limit SMC visits when handshake workqueue congested"), we introduced a mechanism to constrain SMC connection attempts according to the pressure on the SMC handshake process. At that time, we believed that controlling the feature through netlink was sufficient. However, it has become clear that netlink is not convenient in container scenarios, and sysctl is a more suitable approach. In addition, since commit 462791bbfa35 ("net/smc: add sysctl interface for SMC") had introduced smc_sysctl_net_init(), it is reasonable to initialize limit_smc_hs there instead of in smc_pnet_net_init(). Signed-off-by: D. Wythe <alibuda@linux.alibaba.com> Reviewed-by: Wen Gu <guwen@linux.alibaba.com> Reviewed-by: Jan Karcher <jaka@linux.ibm.com> Link: https://patch.msgid.link/1725590135-5631-1-git-send-email-alibuda@linux.alibaba.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-09-10locking/rwsem: Move is_rwsem_reader_owned() and rwsem_owner() under CONFIG_DEBUG_RWSEMSWaiman Long
Both is_rwsem_reader_owned() and rwsem_owner() are currently only used when CONFIG_DEBUG_RWSEMS is defined. This causes a compilation error with clang when `make W=1` and CONFIG_WERROR=y:
kernel/locking/rwsem.c:187:20: error: unused function 'is_rwsem_reader_owned' [-Werror,-Wunused-function]
  187 | static inline bool is_rwsem_reader_owned(struct rw_semaphore *sem)
      |                    ^~~~~~~~~~~~~~~~~~~~~
kernel/locking/rwsem.c:271:35: error: unused function 'rwsem_owner' [-Werror,-Wunused-function]
  271 | static inline struct task_struct *rwsem_owner(struct rw_semaphore *sem)
      |                                   ^~~~~~~~~~~
Fix this by moving these two functions under the CONFIG_DEBUG_RWSEMS define. Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Tested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20240909182905.161156-1-longman@redhat.com
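A self-contained illustration of the pattern (not the actual kernel hunk; the rw_semaphore stand-in and the placeholder logic below are invented for the demo): a helper that is only called from debug code has to live under the same config guard, otherwise clang rejects it as an unused function under W=1 with CONFIG_WERROR=y.

    #include <stdbool.h>
    #include <stdio.h>

    struct rw_semaphore { long count; };          /* stand-in type for the demo */

    #ifdef CONFIG_DEBUG_RWSEMS
    /* Only referenced from debug code, so it sits under the same guard. */
    static bool is_rwsem_reader_owned(struct rw_semaphore *sem)
    {
            return sem->count > 0;                /* placeholder logic */
    }

    static void debug_check(struct rw_semaphore *sem)
    {
            if (!is_rwsem_reader_owned(sem))
                    puts("sem is not reader owned");
    }
    #endif

    int main(void)
    {
            struct rw_semaphore sem = { .count = 1 };
    #ifdef CONFIG_DEBUG_RWSEMS
            debug_check(&sem);
    #else
            (void)sem;                            /* non-debug build: the helpers don't exist */
    #endif
            return 0;
    }

If the helper were defined outside the #ifdef while its only caller stayed inside, building with -Wall -Wunused-function -Werror and without -DCONFIG_DEBUG_RWSEMS would fail the same way the kernel build did.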
2024-09-10perf/x86/intel: Allow to setup LBR for counting event for BPFKan Liang
The BPF subsystem may capture LBR data on a counting event. However, the current implementation assumes that LBR can/should only be used with sampling events. For instance, the retsnoop tool ([0]) makes extensive use of this functionality and sets up the perf event as follows:
struct perf_event_attr attr;
memset(&attr, 0, sizeof(attr));
attr.size = sizeof(attr);
attr.type = PERF_TYPE_HARDWARE;
attr.config = PERF_COUNT_HW_CPU_CYCLES;
attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
Limiting LBR to sampling events was meant to avoid unnecessary branch stack setup for a counting event in the sample read, because the LBR is only read on a sampling event's overflow. Although LBR is used for sampling in most cases, there is no HW limitation that binds LBR to sampling mode. Allow an LBR setup for a counting event unless it is in the sample read mode. Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag") Closes: https://lore.kernel.org/lkml/20240905180055.1221620-1-andrii@kernel.org/ Reported-by: Andrii Nakryiko <andrii.nakryiko@gmail.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: Andrii Nakryiko <andrii@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240909155848.326640-1-kan.liang@linux.intel.com
2024-09-10eth: fbnic: Add devlink firmware version infoLee Trager
This adds support for showing firmware version information for both the stored and the running firmware. The version and the commit are displayed separately to aid monitoring tools which only care about the version. Example output:
# devlink dev info
pci/0000:01:00.0:
  driver fbnic
  serial_number 88-25-08-ff-ff-01-50-92
  versions:
    running:
      fw 24.07.15-017
      fw.commit h999784ae9df0
      fw.bootloader 24.07.10-000
      fw.bootloader.commit hfef3ac835ce7
    stored:
      fw 24.07.24-002
      fw.commit hc9d14a68b3f2
      fw.bootloader 24.07.22-000
      fw.bootloader.commit h922f8493eb96
      fw.undi 01.00.03-000
Signed-off-by: Lee Trager <lee@trager.us> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Link: https://patch.msgid.link/20240905233820.1713043-1-lee@trager.us Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-09-10jump_label: Fix static_key_slow_dec() yet againPeter Zijlstra
While commit 83ab38ef0a0b ("jump_label: Fix concurrency issues in static_key_slow_dec()") fixed one problem, it created yet another; notably, the following interleaving is now possible:
CPU0                                            CPU1
slow_dec
  if (try_dec) // dec_not_one-ish, false
  // enabled == 1
                                                slow_inc
                                                  if (inc_not_disabled) // inc_not_zero-ish
                                                  // enabled == 2
                                                  return
  guard(mutex)(&jump_label_mutex);
  if (atomic_cmpxchg(1,0)==1) // false, we're 2
                                                slow_dec
                                                  if (try-dec) // dec_not_one, true
                                                  // enabled == 1
                                                  return
  else
    try_dec() // dec_not_one, false
    WARN
Use dec_and_test instead of cmpxchg(), like it was prior to 83ab38ef0a0b. Add a few WARNs for the paranoid. Fixes: 83ab38ef0a0b ("jump_label: Fix concurrency issues in static_key_slow_dec()") Reported-by: "Darrick J. Wong" <djwong@kernel.org> Tested-by: Klara Modin <klarasmodin@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2024-09-10iommu/amd: Add kernel parameters to limit V1 page-sizesJoerg Roedel
Add two new kernel command line parameters to limit the page-sizes used for v1 page-tables:
nohugepages     - Limits page-sizes to 4KiB
v2_pgsizes_only - Limits page-sizes to 4KiB/2MiB/1GiB; the same as the sizes used with v2 page-tables
This is needed for multiple scenarios. When assigning devices to SEV-SNP guests, the IOMMU page-sizes need to match the sizes in the RMP table, otherwise the device will not be able to access all shared memory. Also, some ATS devices do not work properly with arbitrary IO page-sizes as supported by AMD-Vi, so limiting the sizes used by the driver is a suitable workaround. All in all, these parameters are only workarounds until the IOMMU core and related APIs gain the ability to negotiate page-sizes in a better way. Signed-off-by: Joerg Roedel <jroedel@suse.de> Reviewed-by: Vasant Hegde <vasant.hegde@amd.com> Link: https://lore.kernel.org/r/20240905072240.253313-1-joro@8bytes.org
2024-09-10dmaengine: idxd: Clean up cpumask and hotplug for perfmonKan Liang
The idxd PMU has system-wide scope, which is now supported by the generic perf_event subsystem. Set the scope for the idxd PMU and remove all the cpumask and hotplug code. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Link: https://lore.kernel.org/r/20240802151643.1691631-6-kan.liang@linux.intel.com
2024-09-10iommu/vt-d: Clean up cpumask and hotplug for perfmonKan Liang
The iommu PMU has system-wide scope, which is now supported by the generic perf_event subsystem. Set the scope for the iommu PMU and remove all the cpumask and hotplug code. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20240802151643.1691631-5-kan.liang@linux.intel.com
2024-09-10perf/x86/intel/cstate: Clean up cpumask and hotplugKan Liang
There are three cstate PMUs with different scopes: core, die and module. These scopes are now supported by the generic perf_event subsystem. Set the scope for each PMU and remove all the cpumask and hotplug code. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20240802151643.1691631-4-kan.liang@linux.intel.com
2024-09-10perf: Add PERF_EV_CAP_READ_SCOPEKan Liang
Usually, an event can be read from any CPU of the scope. It doesn't need to be read from the advertised CPU. Add a new event cap, PERF_EV_CAP_READ_SCOPE. An event of a PMU with scope can be read from any active CPU in the scope. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20240802151643.1691631-3-kan.liang@linux.intel.com
2024-09-10perf: Generic hotplug support for a PMU with a scopeKan Liang
The perf subsystem assumes that the counters of a PMU are per-CPU, so in system-wide mode the user space tool reads a counter from each CPU. However, many PMUs don't have a per-CPU counter; the counter is effective for a scope, e.g., a die or a socket. To address this, a cpumask is exposed by the kernel driver to restrict reads to one CPU that stands for the specific scope. In case that CPU is removed, hotplug support has to be implemented in each such driver. The code to support the cpumask and hotplug is very similar:
- Expose a cpumask into sysfs
- Pick another CPU in the same scope if the given CPU is removed
- Invoke perf_pmu_migrate_context() to migrate to a new CPU
- In event init, always set the CPU in the cpumask to event->cpu
Similar duplicated code is implemented in each such PMU driver. It would be good to introduce a generic infrastructure to avoid such duplication. Five popular scopes are implemented here: core, die, cluster, pkg, and system-wide. The scope can be set when a PMU is registered. If so, a "cpumask" is automatically exposed for the PMU. The "cpumask" is from the perf_online_<scope>_mask, which tracks the active CPUs for each scope. They are set when the first CPU of the scope comes online via the generic perf hotplug support. When a corresponding CPU is removed, the perf_online_<scope>_mask is updated accordingly and the PMU will be moved to a new CPU from the same scope if possible. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20240802151643.1691631-2-kan.liang@linux.intel.com
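A rough sketch of what a driver's registration looks like after this change; the PERF_PMU_SCOPE_DIE constant name and all my_* identifiers are illustrative assumptions rather than code taken from a particular driver.

    static struct pmu my_pmu = {
            .module         = THIS_MODULE,
            .task_ctx_nr    = perf_invalid_context,
            .scope          = PERF_PMU_SCOPE_DIE,   /* cpumask + hotplug handled by the core */
            .event_init     = my_event_init,
            .add            = my_event_add,
            .del            = my_event_del,
            .start          = my_event_start,
            .stop           = my_event_stop,
            .read           = my_event_read,
    };

    static int __init my_pmu_init(void)
    {
            /* No driver-private cpumask attribute and no cpuhp_setup_state()
             * call needed any more; the core exposes "cpumask" for the scope
             * and migrates the perf context on hotplug by itself. */
            return perf_pmu_register(&my_pmu, "my_pmu", -1);
    }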
2024-09-10io_uring: port to struct kmem_cache_argsChristian Brauner
Port req_cachep to struct kmem_cache_args. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: make __kmem_cache_create() static inlineChristian Brauner
Make __kmem_cache_create() a static inline function. Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: make kmem_cache_create_usercopy() static inlineChristian Brauner
Make kmem_cache_create_usercopy() a static inline function. Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: remove kmem_cache_create_rcu()Christian Brauner
Now that we have ported all users of kmem_cache_create_rcu() to struct kmem_cache_args the function is unused and can be removed. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10file: port to struct kmem_cache_argsChristian Brauner
Port filp_cache to struct kmem_cache_args. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: create kmem_cache_create() compatibility layerChristian Brauner
Use _Generic() to create a compatibility layer that type switches on the third argument to either call __kmem_cache_create() or __kmem_cache_create_args(). If NULL is passed for the struct kmem_cache_args argument use default args making porting for callers that don't care about additional arguments easy. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
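The dispatch technique is easy to demonstrate outside the kernel. In the stand-alone sketch below, cache_create(), create_legacy() and create_args() are invented stand-ins, and the real compatibility macro additionally has to treat a literal NULL third argument as "use default args", which is omitted here.

    #include <stdio.h>
    #include <stddef.h>

    struct kmem_cache_args { size_t align; };

    /* Stand-ins for the legacy and the args-based creation paths. */
    static void *create_legacy(const char *name, size_t size, unsigned int align)
    {
            printf("legacy: %s align=%u\n", name, align);
            return NULL;
    }

    static void *create_args(const char *name, size_t size, struct kmem_cache_args *args)
    {
            printf("args:   %s align=%zu\n", name, args->align);
            return NULL;
    }

    /* Type-switch on the third argument, as described above. */
    #define cache_create(name, size, third)                        \
            _Generic((third),                                      \
                     struct kmem_cache_args *: create_args,        \
                     default: create_legacy)(name, size, third)

    int main(void)
    {
            struct kmem_cache_args a = { .align = 64 };

            cache_create("with-args", 128, &a);    /* picks create_args()   */
            cache_create("legacy", 128, 8u);       /* picks create_legacy() */
            return 0;
    }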
2024-09-10slab: port KMEM_CACHE_USERCOPY() to struct kmem_cache_argsChristian Brauner
Make KMEM_CACHE_USERCOPY() use struct kmem_cache_args. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: port KMEM_CACHE() to struct kmem_cache_argsChristian Brauner
Make KMEM_CACHE() use struct kmem_cache_args. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: remove rcu_freeptr_offset from struct kmem_cacheChristian Brauner
Pass down struct kmem_cache_args to calculate_sizes() so we can use args->{use}_freeptr_offset directly. This allows us to remove ->rcu_freeptr_offset from struct kmem_cache. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: pass struct kmem_cache_args to do_kmem_cache_create()Christian Brauner
and initialize most things in do_kmem_cache_create(). In a follow-up patch we'll remove rcu_freeptr_offset from struct kmem_cache. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: pull kmem_cache_open() into do_kmem_cache_create()Christian Brauner
do_kmem_cache_create() is the only caller and we're going to pass down struct kmem_cache_args in a follow-up patch. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: pass struct kmem_cache_args to create_cache()Christian Brauner
Pass struct kmem_cache_args to create_cache() so that we can later simplify further helpers. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: port kmem_cache_create_usercopy() to struct kmem_cache_argsChristian Brauner
Port kmem_cache_create_usercopy() to struct kmem_cache_args and remove the now unused do_kmem_cache_create_usercopy() helper. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: port kmem_cache_create_rcu() to struct kmem_cache_argsChristian Brauner
Port kmem_cache_create_rcu() to struct kmem_cache_args. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: port kmem_cache_create() to struct kmem_cache_argsChristian Brauner
Port kmem_cache_create() to struct kmem_cache_args. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10slab: add struct kmem_cache_argsChristian Brauner
Currently we have multiple kmem_cache_create*() variants that take up to seven separate parameters, with one of the functions having to grow an eighth parameter in the future to handle both usercopy and a custom freelist pointer. Add a struct kmem_cache_args structure and move the less common parameters into it. Core parameters such as name, object size, and flags continue to be passed separately. Add a new function __kmem_cache_create_args() that takes a struct kmem_cache_args pointer, and port do_kmem_cache_create_usercopy() over to it. In follow-up patches we will port the other kmem_cache_create*() variants over to it as well. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
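A sketch of what a caller of the new entry point might look like; __kmem_cache_create_args() comes from the description above, but the exact field names, the argument order and struct my_obj are assumptions made for illustration.

    struct kmem_cache_args args = {
            .align          = __alignof__(struct my_obj),
            .useroffset     = offsetof(struct my_obj, payload),     /* usercopy window */
            .usersize       = sizeof_field(struct my_obj, payload),
    };
    struct kmem_cache *cache;

    cache = __kmem_cache_create_args("my_obj", sizeof(struct my_obj),
                                     &args, SLAB_ACCOUNT);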
2024-09-10slab: s/__kmem_cache_create/do_kmem_cache_create/gChristian Brauner
Free up reusing the double-underscore variant for follow-up patches. Reviewed-by: Kees Cook <kees@kernel.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2024-09-10Merge branch 'vfs.file' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs into slab/for-6.12/kmem_cache_argsVlastimil Babka
Merge prerequisites from the vfs git tree for the following series that introduces kmem_cache_args. The vfs.file branch includes the addition of kmem_cache_create_rcu(), which was needed in vfs for the filp cache optimization. The following series refactors this code.
2024-09-10kbuild: add mod(name,file)_flags to assembler flags for module objectsKris Van Hees
In order to create at build time the file modules.builtin.ranges, which contains the address ranges of all built-in modules, there needs to be a way to identify what code is compiled into modules. To identify what code is compiled into modules during a kernel build, one can look for the presence of the -DKBUILD_MODFILE and -DKBUILD_MODNAME options in the compile command lines. A simple grep in .*.cmd files for those options is sufficient for this. Unfortunately, these options are only passed when compiling C source files. Various modules also include objects built from assembler source, and these options are not passed in that case. Adding $(modfile_flags) to modkern_aflags (similar to modkern_cflags), and adding $(modname_flags) to a_flags (similar to c_flags) makes it possible to identify which objects are compiled into modules for both C and assembler source files. While KBUILD_MODFILE is sufficient to generate the module ranges data, KBUILD_MODNAME is passed as well for consistency with the C source code case. Signed-off-by: Kris Van Hees <kris.van.hees@oracle.com> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Tested-by: Sam James <sam@gentoo.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2024-09-10memcg: add charging of already allocated slab objectsShakeel Butt
At the moment, slab objects are charged to the memcg at allocation time. However, there are cases where slab objects are allocated at a time when the right target memcg to charge is not known. One such case is the network sockets for incoming connections, which are allocated in softirq context. A couple hundred thousand connections are very normal on a large loaded server, and almost all of the sockets underlying those connections get allocated in softirq context and are thus not charged to any memcg. However, later at accept() time we know the right target memcg to charge. Let's add a new API to charge already allocated objects, so we can have better accounting of memory usage. To measure the performance impact of this change, tcp_crr from the neper [1] performance suite is used. Basically it is a network ping-pong test with a new connection for each ping-pong. The server and the client are run inside a 3-level cgroup hierarchy using the following commands:
Server: $ tcp_crr -6
Client: $ tcp_crr -6 -c -H ${server_ip}
If the client and server run on different machines with a 50 Gbps NIC, there is no visible impact from the change. For the same-machine experiment, v6.11-rc5 is used as the base.
             base (throughput)  with-patch
tcp_crr      14545 (+- 80)      14463 (+- 56)
It seems like the performance impact is within the noise. Link: https://github.com/google/neper [1] Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Reviewed-by: Yosry Ahmed <yosryahmed@google.com> Acked-by: Paolo Abeni <pabeni@redhat.com> # net Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
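A sketch of the intended call pattern at accept() time; the helper name kmem_cache_charge() and its (object, gfp) signature are assumptions based on the description above, and newsk stands in for the socket that was allocated, uncharged, in softirq context.

    /* The accepting task's memcg is known only now, so charge the
     * already-allocated object retroactively. */
    if (kmem_cache_charge(newsk, GFP_KERNEL))
            pr_debug("charging accepted socket failed, object stays uncharged\n");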
2024-09-10iomap: remove the iomap_file_buffered_write_punch_delalloc return valueChristoph Hellwig
iomap_file_buffered_write_punch_delalloc can only return errors if either the ->punch callback returned an error, or if someone changed the API of mapping_seek_hole_data to return a negative error code that is not -ENXIO. As the only instance of ->punch never returns an error, and such an error would be fatal anyway, remove the entire error propagation and don't return an error code from iomap_file_buffered_write_punch_delalloc. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20240910043949.3481298-6-hch@lst.de Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2024-09-10iomap: pass the iomap to the punch callbackChristoph Hellwig
XFS will need to look at the flags in the iomap structure, so pass it down all the way to the callback. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20240910043949.3481298-5-hch@lst.de Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2024-09-10iomap: pass flags to iomap_file_buffered_write_punch_delallocChristoph Hellwig
To fix short write error handling, we'll need to figure out what operation iomap_file_buffered_write_punch_delalloc is called for. Pass the flags argument on to it, and reorder the argument list to match that of ->iomap_end so that the compiler only has to add the new punch argument to the end of it instead of reshuffling the registers. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20240910043949.3481298-4-hch@lst.de Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2024-09-10iomap: improve shared block detection in iomap_unshare_iterChristoph Hellwig
Currently iomap_unshare_iter relies on the IOMAP_F_SHARED flag to detect blocks to unshare. This is reasonable, but IOMAP_F_SHARED is also useful for the file system to do internal bookkeeping for out-of-place writes. XFS used to do that, until it was removed in commit 72a048c1056a ("xfs: only set IOMAP_F_SHARED when providing a srcmap to a write") because unshare would incorrectly unshare such blocks. Add an extra safeguard by checking the explicitly provided srcmap for valid data instead of falling back to the iomap, as that easily catches the case where we'd just copy from the same place we'd write to, allowing us to reinstate setting IOMAP_F_SHARED for all XFS writes that go to the COW fork. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20240910043949.3481298-3-hch@lst.de Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2024-09-10iomap: handle a post-direct I/O invalidate race in iomap_write_delalloc_releaseChristoph Hellwig
When a direct I/O completion invalidates the page cache, it holds neither the i_rwsem nor the invalidate_lock, so it can race with iomap_write_delalloc_release. If the search for the end of the region that contains data returns the start offset, we hit such a race and just need to look for the end of the newly created hole instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20240910043949.3481298-2-hch@lst.de Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
2024-09-10drivers: watchdog: marvell_gti: Convert comma to semicolonChen Ni
Replace a comma between expression statements by a semicolon. Signed-off-by: Chen Ni <nichen@iscas.ac.cn> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20240902080419.3824706-1-nichen@iscas.ac.cn Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
2024-09-10watchdog: iTCO_wdt: Convert comma to semicolonChen Ni
Replace a comma between expression statements by a semicolon. Signed-off-by: Chen Ni <nichen@iscas.ac.cn> Reviewed-by: Andy Shevchenko <andy@kernel.org> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20240902081051.3824822-1-nichen@iscas.ac.cn Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
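An illustrative before/after of what this and the previous entry fix (generic, not the actual driver hunks; min_timeout/max_timeout are just example watchdog_device fields): the comma operator glues the two assignments into a single expression statement, which compiles but is not what was meant.

    /* before */
    wdd->min_timeout = 1,           /* comma: both assignments are one statement */
    wdd->max_timeout = 128;

    /* after */
    wdd->min_timeout = 1;
    wdd->max_timeout = 128;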
2024-09-10watchdog: Add Watchdog Timer driver for RZ/V2H(P)Lad Prabhakar
Add Watchdog Timer driver for RZ/V2H(P) SoC. Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20240829193831.80768-3-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
2024-09-10dt-bindings: watchdog: renesas,wdt: Document RZ/V2H(P) SoCLad Prabhakar
Add support for the Watchdog Timer (WDT) hardware found in the Renesas RZ/V2H(P) SoC to the `renesas,wdt` device tree bindings. Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20240829193831.80768-2-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
2024-09-10watchdog: imx_sc_wdt: detect if already runningAlexander Sverdlin
The firmware (SC) WDT can already be enabled by U-Boot. Detect this case and make CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED functional by setting WDOG_HW_RUNNING. Signed-off-by: Alexander Sverdlin <alexander.sverdlin@siemens.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20240828060212.108608-1-alexander.sverdlin@siemens.com Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
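The usual shape of such a fix in a watchdog driver is sketched below; the firmware-status query is a hypothetical placeholder, and only the WDOG_HW_RUNNING handling reflects the standard watchdog-core mechanism.

    /* In probe: if the firmware watchdog is already ticking, tell the core
     * so that CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED keeps it fed until
     * userspace opens the device. */
    if (imx_sc_wdt_is_running(dev))            /* hypothetical query helper */
            set_bit(WDOG_HW_RUNNING, &wdog->status);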
2024-09-10watchdog: imx2_wdt: Remove __maybe_unused notationsFabio Estevam
Use the DEFINE_SIMPLE_DEV_PM_OPS() and pm_sleep_ptr() macros to handle the .suspend/.resume callbacks. These macros allow the suspend and resume functions to be automatically dropped by the compiler when CONFIG_SUSPEND is disabled, without having to use __maybe_unused notation. Signed-off-by: Fabio Estevam <festevam@denx.de> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20240809020822.335682-1-festevam@gmail.com Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
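The resulting pattern looks roughly like the sketch below (callback bodies, the platform driver name and the surrounding driver structure are placeholders). DEFINE_SIMPLE_DEV_PM_OPS() defines the dev_pm_ops, and pm_sleep_ptr() lets the compiler drop the reference when sleep support is not configured, so no __maybe_unused annotation is needed.

    static int imx2_wdt_suspend(struct device *dev)
    {
            /* stop/ping the hardware as before; the body is unchanged */
            return 0;
    }

    static int imx2_wdt_resume(struct device *dev)
    {
            return 0;
    }

    static DEFINE_SIMPLE_DEV_PM_OPS(imx2_wdt_pm_ops,
                                    imx2_wdt_suspend, imx2_wdt_resume);

    static struct platform_driver imx2_wdt_driver = {
            .driver = {
                    .name   = "imx2-wdt",
                    .pm     = pm_sleep_ptr(&imx2_wdt_pm_ops),
            },
            /* .probe etc. unchanged */
    };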