Age  Commit message  Author
2024-03-06  drm/amdkfd: Use SQC when TCP would fail in gfx10.1 context save  Laurent Morichetti
Similarly to gfx9, gfx10.1 drops vector stores when an xnack error is raised. To work around this issue, use scalar stores instead of vector stores when trapsts.xnack_error == 1. Signed-off-by: Laurent Morichetti <laurent.morichetti@amd.com> Reviewed-by: Jay Cornwall <jay.cornwall@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-03-06  drm/amdkfd: Increase the size of the memory reserved for the TBA  Laurent Morichetti
In a future commit, the cwsr trap handler code size for gfx10.1 will increase to slightly above the one page mark. Since the TMA does not need to be page aligned, and only 2 pointers are stored in it, push the TMA offset by 2 KiB and keep the TBA+TMA reserved memory size to two pages. Signed-off-by: Laurent Morichetti <laurent.morichetti@amd.com> Reviewed-by: Felix Kuehling <felix.kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
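A hedged sketch of the layout described above, with illustrative macro names and the +2 KiB offset taken from the text; the real amdkfd definitions may differ:

    /* Sketch: two pages remain reserved for TBA+TMA; the TMA offset is pushed
     * 2 KiB past the first page so the trap handler (TBA) may exceed 4 KiB. */
    #define CWSR_TBA_TMA_RESERVED_SIZE   (2 * PAGE_SIZE)      /* still two pages */
    #define CWSR_TMA_OFFSET              (PAGE_SIZE + 2048)   /* was page aligned */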
2024-03-06  drm/amdgpu: disable ring_muxer if mcbp is off  Pierre-Eric Pelloux-Prayer
Using the ring_muxer without preemption adds overhead for no reason since mcbp cannot be triggered. Moving back to a single queue in this case also helps when high priority apps are used: in this case the gpu_scheduler priority handling will work as expected - much better than ring_muxer with its 2 independent schedulers competing for the same hardware queue. This change requires moving amdgpu_device_set_mcbp above amdgpu_device_ip_early_init because we use adev->gfx.mcbp. Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Acked-by: Jiadong Zhu <Jiadong.Zhu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-03-06  drm/amd/pm: disable pp_dpm_dcefclk node for gfx 11.0.3 sriov  Yang Wang
The DCE DPM is not available on gfx 11.0.3 SRIOV devices. Signed-off-by: Yang Wang <kevinyang.wang@amd.com> Reviewed-by: Kenneth Feng <kenneth.feng@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-03-06  drm/amdgpu: remove unused code  Jesse Zhang
Remove the unused function amdgpu_vm_pt_is_root_clean and remove the impossible condition. v1: entries == 0 is not possible any more, so this condition could probably be removed (Felix) Signed-off-by: Jesse Zhang <jesse.zhang@amd.com> Suggested-by: Felix Kuehling <felix.kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2024-03-06  ieee802154: cfg802154: make wpan_phy_class constant  Ricardo B. Marliere
Since commit 43a7206b0963 ("driver core: class: make class_register() take a const *"), the driver core allows for struct class to be in read-only memory, so move the wpan_phy_class structure to be declared at build time, placing it into read-only memory, instead of having to be dynamically allocated at boot time. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Ricardo B. Marliere <ricardo@marliere.net> Acked-by: Miquel Raynal <miquel.raynal@bootlin.com> Message-ID: <20240305-class_cleanup-wpan-v1-1-376f751fd481@marliere.net> [changed prefix from wifi to ieee802154 by stefan@datenfreihafen.org] Signed-off-by: Stefan Schmidt <stefan@datenfreihafen.org>
2024-03-06  ieee802154: mcr20a: Remove unused of_gpio.h  Andy Shevchenko
of_gpio.h is deprecated and subject to remove. The driver doesn't use it, simply remove the unused header. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Miquel Raynal <miquel.raynal@bootlin.com> Message-ID: <20240304175320.1199496-1-andriy.shevchenko@linux.intel.com> Signed-off-by: Stefan Schmidt <stefan@datenfreihafen.org>
2024-03-06  ieee802154: at86rf230: Replace of_gpio.h by proper one  Andy Shevchenko
of_gpio.h is deprecated and subject to remove. The driver doesn't use it directly, replace it with what is really being used. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Miquel Raynal <miquel.raynal@bootlin.com> Message-ID: <20240304174218.1198411-1-andriy.shevchenko@linux.intel.com> Signed-off-by: Stefan Schmidt <stefan@datenfreihafen.org>
2024-03-06  mac802154: fix llsec key resources release in mac802154_llsec_key_del  Fedor Pchelkin
mac802154_llsec_key_del() can free resources of a key directly without following the RCU rules for waiting before the end of a grace period. This may lead to use-after-free in case llsec_lookup_key() is traversing the list of keys in parallel with a key deletion:

refcount_t: addition on 0; use-after-free.
WARNING: CPU: 4 PID: 16000 at lib/refcount.c:25 refcount_warn_saturate+0x162/0x2a0
Modules linked in:
CPU: 4 PID: 16000 Comm: wpan-ping Not tainted 6.7.0 #19
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:refcount_warn_saturate+0x162/0x2a0
Call Trace:
 <TASK>
 llsec_lookup_key.isra.0+0x890/0x9e0
 mac802154_llsec_encrypt+0x30c/0x9c0
 ieee802154_subif_start_xmit+0x24/0x1e0
 dev_hard_start_xmit+0x13e/0x690
 sch_direct_xmit+0x2ae/0xbc0
 __dev_queue_xmit+0x11dd/0x3c20
 dgram_sendmsg+0x90b/0xd60
 __sys_sendto+0x466/0x4c0
 __x64_sys_sendto+0xe0/0x1c0
 do_syscall_64+0x45/0xf0
 entry_SYSCALL_64_after_hwframe+0x6e/0x76

Also, ieee802154_llsec_key_entry structures are not freed by mac802154_llsec_key_del():

unreferenced object 0xffff8880613b6980 (size 64):
  comm "iwpan", pid 2176, jiffies 4294761134 (age 60.475s)
  hex dump (first 32 bytes):
    78 0d 8f 18 80 88 ff ff 22 01 00 00 00 00 ad de  x.......".......
    00 00 00 00 00 00 00 00 03 00 cd ab 00 00 00 00  ................
  backtrace:
    [<ffffffff81dcfa62>] __kmem_cache_alloc_node+0x1e2/0x2d0
    [<ffffffff81c43865>] kmalloc_trace+0x25/0xc0
    [<ffffffff88968b09>] mac802154_llsec_key_add+0xac9/0xcf0
    [<ffffffff8896e41a>] ieee802154_add_llsec_key+0x5a/0x80
    [<ffffffff8892adc6>] nl802154_add_llsec_key+0x426/0x5b0
    [<ffffffff86ff293e>] genl_family_rcv_msg_doit+0x1fe/0x2f0
    [<ffffffff86ff46d1>] genl_rcv_msg+0x531/0x7d0
    [<ffffffff86fee7a9>] netlink_rcv_skb+0x169/0x440
    [<ffffffff86ff1d88>] genl_rcv+0x28/0x40
    [<ffffffff86fec15c>] netlink_unicast+0x53c/0x820
    [<ffffffff86fecd8b>] netlink_sendmsg+0x93b/0xe60
    [<ffffffff86b91b35>] ____sys_sendmsg+0xac5/0xca0
    [<ffffffff86b9c3dd>] ___sys_sendmsg+0x11d/0x1c0
    [<ffffffff86b9c65a>] __sys_sendmsg+0xfa/0x1d0
    [<ffffffff88eadbf5>] do_syscall_64+0x45/0xf0
    [<ffffffff890000ea>] entry_SYSCALL_64_after_hwframe+0x6e/0x76

Handle the proper resource release in the RCU callback function mac802154_llsec_key_del_rcu(). Note that if llsec_lookup_key() finds a key, it gets a refcount via llsec_key_get() and locally copies key id from key_entry (which is a list element). So it's safe to call llsec_key_put() and free the list entry after the RCU grace period elapses.

Found by Linux Verification Center (linuxtesting.org).

Fixes: 5d637d5aabd8 ("mac802154: add llsec structures and mutators") Cc: stable@vger.kernel.org Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru> Acked-by: Alexander Aring <aahringo@redhat.com> Message-ID: <20240228163840.6667-1-pchelkin@ispras.ru> Signed-off-by: Stefan Schmidt <stefan@datenfreihafen.org>
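A hedged sketch of the RCU-deferred release pattern the fix describes; the struct layout (in particular the rcu_head member) and helper signatures are simplified assumptions, not the exact mac802154 code:

    /* Sketch: unlink the entry immediately, but only free it after a grace
     * period so readers under rcu_read_lock() never touch freed memory. */
    static void llsec_key_del_rcu(struct rcu_head *rcu)
    {
        struct ieee802154_llsec_key_entry *pos =
                container_of(rcu, struct ieee802154_llsec_key_entry, rcu);

        llsec_key_put(pos->key);   /* drop the reference; key freed at zero */
        kfree_sensitive(pos);      /* now the list entry itself can go */
    }

    static void llsec_key_del(struct ieee802154_llsec_key_entry *pos)
    {
        list_del_rcu(&pos->list);               /* unlink now ...              */
        call_rcu(&pos->rcu, llsec_key_del_rcu); /* ... free after grace period */
    }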
2024-03-06  arch: define CONFIG_PAGE_SIZE_*KB on all architectures  Arnd Bergmann
Most architectures only support a single hardcoded page size. In order to ensure that each one of these sets the corresponding Kconfig symbols, change over the PAGE_SHIFT definition to the common one and allow only the hardware page size to be selected. Acked-by: Guo Ren <guoren@kernel.org> Acked-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Stafford Horne <shorne@gmail.com> Acked-by: Johannes Berg <johannes@sipsolutions.net> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
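As a hedged illustration of "changing over the PAGE_SHIFT definition to the common one", the per-architecture constant can simply be derived from the shared Kconfig symbol; the exact header and spelling per architecture are assumptions here:

    /* Sketch: page-size macros driven by the common Kconfig symbol instead of
     * a hardcoded per-architecture value. */
    #define PAGE_SHIFT  CONFIG_PAGE_SHIFT
    #define PAGE_SIZE   (1UL << PAGE_SHIFT)
    #define PAGE_MASK   (~(PAGE_SIZE - 1))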
2024-03-06  arch: simplify architecture specific page size configuration  Arnd Bergmann
arc, arm64, parisc and powerpc all have their own Kconfig symbols in place of the common CONFIG_PAGE_SIZE_4KB symbols. Change these so the common symbols are the ones that are actually used, while leaving the architecture specific ones as the user visible place for configuring it, to avoid breaking user configs. Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> (powerpc32) Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Helge Deller <deller@gmx.de> # parisc Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-03-06  arch: consolidate existing CONFIG_PAGE_SIZE_*KB definitions  Arnd Bergmann
These four architectures define the same Kconfig symbols for configuring the page size. Move the logic into a common place where it can be shared with all other architectures. Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-03-06  tracing: Limit trace_marker writes to just 4K  Steven Rostedt (Google)
Limit the max print event of trace_marker to just a 4K string size. This must also be less than the amount that can be held by a trace_seq along with the text that is before the output (like the task name, PID, CPU, state, etc). As trace_seq is made to handle large events (some greater than 4K), make the max size of a trace_marker write event be 4K, which is guaranteed to fit in the trace_seq buffer. Link: https://lore.kernel.org/linux-trace-kernel/20240304223433.4ba47dff@gandalf.local.home Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
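A minimal sketch of the clamp described above; the macro name matches the commit's intent, but the helper name is hypothetical and the exact definition in the tracing code may differ:

    /* Sketch: cap trace_marker writes so the resulting event always fits in a
     * trace_seq buffer together with the per-event header text. */
    #define TRACE_MARKER_MAX_SIZE  4096

    static size_t clamp_marker_write(size_t cnt)
    {
        return cnt > TRACE_MARKER_MAX_SIZE ? TRACE_MARKER_MAX_SIZE : cnt;
    }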
2024-03-06  tracing: Limit trace_seq size to just 8K and not depend on architecture PAGE_SIZE  Steven Rostedt (Google)
The trace_seq buffer is used to print out entire events. It's typically set to PAGE_SIZE * 2 as there are some events that can be quite large. As a side effect, writes to trace_marker are limited by both the size of the trace_seq buffer as well as the ring buffer's sub-buffer size (which is a power of PAGE_SIZE). By limiting the trace_seq size, it also limits the size of the largest string written to trace_marker. trace_seq does not need to be dependent on PAGE_SIZE like the ring buffer sub-buffers need to be. Hard code it to 8K, which is PAGE_SIZE * 2 on most architectures. This will also limit the size of trace_marker on those architectures with greater than 4K PAGE_SIZE. Link: https://lore.kernel.org/all/20240302111244.3a1674be@gandalf.local.home/ Link: https://lore.kernel.org/linux-trace-kernel/20240304191342.56fb1087@gandalf.local.home Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Sachin Sant <sachinp@linux.ibm.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2024-03-06  tracing: Remove precision vsnprintf() check from print event  Steven Rostedt (Google)
This reverts 60be76eeabb3d ("tracing: Add size check when printing trace_marker output"). The only reason the precision check was added was because of a bug that miscalculated the write size of the string into the ring buffer and truncated it, removing the terminating nul byte. On reading the trace it crashed the kernel. But this was due to a bug in the code that happened during development and should never happen in practice. If anything, the precision can hide bugs where the string in the ring buffer isn't nul terminated and it will not be checked. Link: https://lore.kernel.org/all/C7E7AF1A-D30F-4D18-B8E5-AF1EF58004F5@linux.ibm.com/ Link: https://lore.kernel.org/linux-trace-kernel/20240227125706.04279ac2@gandalf.local.home Link: https://lore.kernel.org/all/20240302111244.3a1674be@gandalf.local.home/ Link: https://lore.kernel.org/linux-trace-kernel/20240304174341.2a561d9f@gandalf.local.home Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Fixes: 60be76eeabb3d ("tracing: Add size check when printing trace_marker output") Reported-by: Sachin Sant <sachinp@linux.ibm.com> Tested-by: Sachin Sant <sachinp@linux.ibm.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
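For illustration only, the reverted change amounts to dropping an explicit precision when printing the marker string; the surrounding function and field names are simplified fragments, not the exact tracing code:

    /* Before the revert (sketch): precision silently truncates and can mask a
     * missing nul terminator instead of exposing the bug. */
    trace_seq_printf(s, ": %.*s", max_len, field->buf);

    /* After the revert (sketch): print the string as recorded and rely on it
     * being properly nul terminated when it was written. */
    trace_seq_printf(s, ": %s", field->buf);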
2024-03-06  mm: Enforce VM_IOREMAP flag and range in ioremap_page_range.  Alexei Starovoitov
There are various users of the get_vm_area() + ioremap_page_range() APIs. Enforce that get_vm_area() was requested as the VM_IOREMAP type and that the range passed to ioremap_page_range() matches the created vm_area, to avoid accidentally ioremap-ing into the wrong address range. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/bpf/20240305030516.41519-2-alexei.starovoitov@gmail.com
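A hedged sketch of the kind of check being enforced; the helper name below is hypothetical, while find_vm_area(), get_vm_area_size() and VM_IOREMAP are the existing vmalloc APIs being checked against:

    /* Sketch: refuse to ioremap into a range that was not reserved as a
     * VM_IOREMAP area, or that spills outside the reserved vm_struct. */
    static int check_ioremap_range(unsigned long addr, unsigned long end)
    {
        struct vm_struct *area = find_vm_area((void *)addr);

        if (!area || !(area->flags & VM_IOREMAP))
            return -EINVAL;
        if (addr < (unsigned long)area->addr ||
            end > (unsigned long)area->addr + get_vm_area_size(area))
            return -ERANGE;
        return 0;
    }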
2024-03-06  Merge tag 'md-6.9-20240306' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-6.9/block  Jens Axboe
Pull MD atomic queue limits changes from Song.

* tag 'md-6.9-20240306' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
  block: remove disk_stack_limits
  md: remove mddev->queue
  md: don't initialize queue limits
  md/raid10: use the atomic queue limit update APIs
  md/raid5: use the atomic queue limit update APIs
  md/raid1: use the atomic queue limit update APIs
  md/raid0: use the atomic queue limit update APIs
  md: add queue limit helpers
  md: add a mddev_is_dm helper
  md: add a mddev_add_trace_msg helper
  md: add a mddev_trace_remap helper
2024-03-06  spi: Extract spi_dev_check_cs() helper  Andy Shevchenko
It seems a few functions implement a similar for-loop to validate chip select pins for uniqueness. Let's deduplicate that code in order to have a single place for it, for better maintenance. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://msgid.link/r/20240306160114.3471398-3-andriy.shevchenko@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-03-06  spi: Extract spi_set_all_cs_unused() helper  Andy Shevchenko
It seems a few functions implement a similar for-loop to mark all chip select pins unused. Let's deduplicate that code in order to have a single place for it, for better maintenance. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://msgid.link/r/20240306160114.3471398-2-andriy.shevchenko@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
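A rough sketch of the extracted helper named in the subject; it mirrors the intent described above, though the actual implementation in drivers/spi/spi.c may differ:

    /* Sketch: mark every chip select of the device as unused in one place
     * instead of open-coding the same loop in several callers. */
    static void spi_set_all_cs_unused(struct spi_device *spi)
    {
        u8 idx;

        for (idx = 0; idx < SPI_CS_CNT_MAX; idx++)
            spi_set_chipselect(spi, idx, -1);
    }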
2024-03-06  spi: cs42l43: Don't limit native CS to the first chip select  Charles Keepax
As the chip selects can be configured through ACPI/OF/swnode, and the set_cs() callback will only be called when a native chip select is being used, there is no reason for the driver to only support the native chip select as the first chip select. Remove the check that introduces this limitation. Fixes: ef75e767167a ("spi: cs42l43: Add SPI controller support") Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com> Link: https://msgid.link/r/20240306161004.2205113-1-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-03-06  ASoC: wm8962: Fix up incorrect error message in wm8962_set_fll  Stuart Henderson
Use source instead of ret, which seems to be unrelated and will always be zero. Signed-off-by: Stuart Henderson <stuarth@opensource.cirrus.com> Link: https://msgid.link/r/20240306161439.1385643-5-stuarth@opensource.cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-03-06  ASoC: wm8962: Enable both SPKOUTR_ENA and SPKOUTL_ENA in mono mode  Stuart Henderson
Signed-off-by: Stuart Henderson <stuarth@opensource.cirrus.com> Link: https://msgid.link/r/20240306161439.1385643-2-stuarth@opensource.cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-03-06  ASoC: wm8962: Enable oscillator if selecting WM8962_FLL_OSC  Stuart Henderson
Signed-off-by: Stuart Henderson <stuarth@opensource.cirrus.com> Link: https://msgid.link/r/20240306161439.1385643-1-stuarth@opensource.cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-03-06  ixgbe: pull out stats update to common routines  Maciej Fijalkowski
Introduce ixgbe_update_{r,t}x_ring_stats() that will be used by both the standard and ZC datapaths. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-06  i40e: remove unnecessary qv_info ptr NULL checks  Alexey Kodanev
The "qv_info" ptr cannot be NULL when it gets the address of an element of the flexible array "qvlist_info->qv_info". Detected using the static analysis tool - Svace. Signed-off-by: Alexey Kodanev <aleksei.kodanev@bell-sw.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-06  iavf: drop duplicate iavf_{add|del}_cloud_filter() calls  Alexey Kodanev
There are currently two pairs of identical checks and calls to iavf_{add|del}_cloud_filter(). Detected using the static analysis tool - Svace. Signed-off-by: Alexey Kodanev <aleksei.kodanev@bell-sw.com> Reviewed-by: Ahmed Zaki <ahmed.zaki@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-06  ice: Implement RSS settings for GTP using ethtool  Takeru Hayasaka
Following the addition of new GTP RSS hash options to ethtool.h, this patch implements the corresponding RSS settings for GTP packets in the Intel ice driver. It enables users to configure RSS for GTP-U and GTP-C traffic over IPv4 and IPv6, utilizing the newly defined hash options. The implementation covers the handling of gtpu(4|6), gtpc(4|6), gtpc(4|6)t, gtpu(4|6)e, gtpu(4|6)u, and gtpu(4|6)d traffic, providing enhanced load distribution for GTP traffic across multiple processing units. Signed-off-by: Takeru Hayasaka <hayatake396@gmail.com> Reviewed-by: Marcin Szycik <marcin.szycik@linux.intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-06  ethtool: Add GTP RSS hash options to ethtool.h  Takeru Hayasaka
This is a patch that enables RSS functionality for GTP packets using ethtool. A user can include the TEID and make RSS work for GTP-U over IPv4 by doing the following: `ethtool -N ens3 rx-flow-hash gtpu4 sde`

In addition to gtpu(4|6), we now support gtpc(4|6), gtpc(4|6)t, gtpu(4|6)e, gtpu(4|6)u, and gtpu(4|6)d.

gtpc(4|6): Used for GTP-C in IPv4 and IPv6, where the GTP header format does not include a TEID.
gtpc(4|6)t: Used for GTP-C in IPv4 and IPv6, with a GTP header format that includes a TEID.
gtpu(4|6): Used for GTP-U in both IPv4 and IPv6 scenarios.
gtpu(4|6)e: Used for GTP-U with extended headers in both IPv4 and IPv6.
gtpu(4|6)u: Used when the PSC (PDU session container) in the GTP-U extended header includes Uplink, applicable to both IPv4 and IPv6.
gtpu(4|6)d: Used when the PSC in the GTP-U extended header includes Downlink, for both IPv4 and IPv6.

GTP generates a flow that includes an ID called TEID to identify the tunnel. This tunnel is created for each UE (User Equipment). By performing RSS based on this flow, it is possible to apply RSS for each communication unit from the UE. Without this, RSS would only be effective within the range of IP addresses. For instance, the PGW can only perform RSS within the IP range of the SGW. This is problematic from a load distribution perspective, especially if there is a bias in the terminals connected to a particular base station. This case can be solved by using this patch.

Signed-off-by: Takeru Hayasaka <hayatake396@gmail.com> Reviewed-by: Marcin Szycik <marcin.szycik@linux.intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-06  dm-integrity: set max_integrity_segments in dm_integrity_io_hints  Christoph Hellwig
Set max_integrity_segments with the other queue limits instead of updating it later. This also uncovered that the driver is trying to set the limit to UINT_MAX while max_integrity_segments is an unsigned short, so fix it up to use USHRT_MAX instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Mike Snitzer <snitzer@kernel.org>
2024-03-06  ice: fix stats being updated by way too large values  Przemek Kitszel
Simplify the stats accumulation logic to fix the case where we don't take the previous stat value into account; we should always respect it. Main netdev stats of our PF (Tx/Rx packets/bytes) were reported orders of magnitude too big during OpenStack reconfiguration events, and possibly other reconfiguration cases too. The regression was reported to be between 6.1 and 6.2, so I was almost certain that one of the two "preserve stats over reset" commits was the culprit. While reading the code, it was found that in some cases we will increase the stats by an arbitrarily large number (thanks to ignoring the "-prev" part of the condition, after zeroing it). Note that this also fixes the case where we were around the limits of u64, but that was not the regression reported. Full disclosure: I remember suggesting this particular piece of code to Ben a few years ago, so blame on me. Fixes: 2fd5e433cd26 ("ice: Accumulate HW and Netdev statistics over reset") Reported-by: Nebojsa Stevanovic <nebojsa.stevanovic@gcore.com> Link: https://lore.kernel.org/intel-wired-lan/VI1PR02MB439744DEDAA7B59B9A2833FE912EA@VI1PR02MB4397.eurprd02.prod.outlook.com Reported-by: Christian Rohmann <christian.rohmann@inovex.de> Link: https://lore.kernel.org/intel-wired-lan/f38a6ca4-af05-48b1-a3e6-17ef2054e525@inovex.de Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
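A simplified sketch of the accumulation rule the fix restores, with hypothetical names; the real driver helpers additionally handle 32/40-bit hardware counters and reset bookkeeping:

    /* Sketch: always add the delta against the previous snapshot; never add
     * the raw hardware value on top of the running total. */
    static void stat_update(u64 hw_now, u64 *prev, u64 *total)
    {
        *total += hw_now - *prev;   /* unsigned math tolerates counter wrap */
        *prev = hw_now;
    }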
2024-03-06  igb: Fix missing time sync events  Vinicius Costa Gomes
Fix "double" clearing of interrupts, which can cause external events or timestamps to be missed. The E1000_TSICR Time Sync Interrupt Cause register can be cleared in two ways, by either reading it or by writing '1' into the specific cause bit. This is documented in section 8.16.1. The following flow was used:
1. read E1000_TSICR into 'tsicr';
2. handle the interrupts present in 'tsicr' and mark them in 'ack';
3. write 'ack' into E1000_TSICR;
As both (1) and (3) will clear the interrupt cause, if the same interrupt happens again between (1) and (3) it will be ignored, causing events to be missed. Remove the extra clear in (3). Fixes: 00c65578b47b ("igb: enable internal PPS for the i210") Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-06  igc: Fix missing time sync events  Vinicius Costa Gomes
Fix "double" clearing of interrupts, which can cause external events or timestamps to be missed. The IGC_TSICR Time Sync Interrupt Cause register can be cleared in two ways, by either reading it or by writing '1' into the specific cause bit. This is documented in section 8.16.1. The following flow was used:
1. read IGC_TSICR into 'tsicr';
2. handle the interrupts present in 'tsicr' and mark them in 'ack';
3. write 'ack' into IGC_TSICR;
As both (1) and (3) will clear the interrupt cause, if the same interrupt happens again between (1) and (3) it will be ignored, causing events to be missed. Remove the extra clear in (3). Fixes: 2c344ae24501 ("igc: Add support for TX timestamping") Reviewed-by: Kurt Kanzenbach <kurt@linutronix.de> Tested-by: Kurt Kanzenbach <kurt@linutronix.de> # Intel i225 Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Tested-by: Naama Meir <naamax.meir@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-06  block: remove disk_stack_limits  Christoph Hellwig
disk_stack_limits is unused now, remove it. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-12-hch@lst.de
2024-03-06  md: remove mddev->queue  Christoph Hellwig
Just use the request_queue from the gendisk pointer in the relatively few places that still need it. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-11-hch@lst.de
2024-03-06  md: don't initialize queue limits  Christoph Hellwig
Initial queue limits are now set from ->run. Remove the superfluous initialization in md_alloc and level_store. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-10-hch@lst.de
2024-03-06  md/raid10: use the atomic queue limit update APIs  Christoph Hellwig
Build the queue limits outside the queue and apply them using queue_limits_set. To make the code more obvious, also split the queue limits handling into separate helpers. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-9-hch@lst.de
2024-03-06  md/raid5: use the atomic queue limit update APIs  Christoph Hellwig
Build the queue limits outside the queue and apply them using queue_limits_set. To make the code more obvious, also split the queue limits handling into separate helpers. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-8-hch@lst.de
2024-03-06  md/raid1: use the atomic queue limit update APIs  Christoph Hellwig
Build the queue limits outside the queue and apply them using queue_limits_set. To make the code more obvious, also split the queue limits handling into a separate helper function. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-7-hch@lst.de
2024-03-06  md/raid0: use the atomic queue limit update APIs  Christoph Hellwig
Build the queue limits outside the queue and apply them using queue_limits_set. To make the code more obvious, also split the queue limits handling into a separate helper function. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-6-hch@lst.de
2024-03-06  md: add queue limit helpers  Christoph Hellwig
Add a few helpers that wrap the block queue limits API for use in MD. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-5-hch@lst.de
2024-03-06  md: add a mddev_is_dm helper  Christoph Hellwig
Add a helper to check for a DM-mapped MD device instead of using the obfuscated ->gendisk or ->queue NULL checks. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-4-hch@lst.de
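A plausible sketch of the helper, under the assumption that a dm-mapped MD device is exactly one without its own gendisk; the real definition lives in drivers/md/md.h and may differ:

    /* Sketch: an MD array stacked below device-mapper has no gendisk of its
     * own, so use that as the "is this dm-mapped?" predicate. */
    static inline bool mddev_is_dm(struct mddev *mddev)
    {
        return !mddev->gendisk;
    }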
2024-03-06  md: add a mddev_add_trace_msg helper  Christoph Hellwig
Add a small wrapper around blk_add_trace_msg that hides some argument dereferences and the check for a DM-mapped MD device. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-3-hch@lst.de
2024-03-06  md: add a mddev_trace_remap helper  Christoph Hellwig
Add a helper to trace bio remapping that hides some argument dereferences and the check for a DM-mapped MD device. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Song Liu <song@kernel.org> Tested-by: Song Liu <song@kernel.org> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240303140150.5435-2-hch@lst.de
2024-03-06  iommu/dma: Document min_align_mask assumption  Robin Murphy
iommu-dma does not explicitly reference min_align_mask since we already assume that will be less than or equal to any typical IOVA granule. We wouldn't realistically expect to see the case where it is larger, and that would be non-trivial to support, however for the sake of reasoning (particularly around the interaction with SWIOTLB), let's clearly enforce the assumption. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Michael Kelley <mhklinux@outlook.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/dbb4d2d8e5d1691ac9a6c67e9758904e6c447ba5.1709553942.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-03-06  iommu/vt-d: Remove scalable mode in domain_context_clear_one()  Lu Baolu
domain_context_clear_one() only handles the context entry teardown in legacy mode. Remove the scalable mode check in it to avoid dead code. Remove an unnecessary check in the code as well. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240305013305.204605-6-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-03-06  iommu/vt-d: Remove scalable mode context entry setup from attach_dev  Lu Baolu
The scalable mode context entry is now set up in the probe_device path, eliminating the need to configure it in the attach_dev path. Remove the redundant code from the attach_dev path to avoid dead code. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240305013305.204605-5-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-03-06  iommu/vt-d: Setup scalable mode context entry in probe path  Lu Baolu
In contrast to legacy mode, the DMA translation table is configured in the PASID table entry instead of the context entry for scalable mode. For this reason, it is more appropriate to set up the scalable mode context entry in the device_probe callback and direct it to the appropriate PASID table. The iommu domain attach/detach operations only affect the PASID table entry. Therefore, there is no need to modify the context entry when configuring the translation type and page table. The only exception is the kdump case, where context entry setup is postponed until the device driver invokes the first DMA interface. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240305013305.204605-4-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-03-06  iommu/vt-d: Fix NULL domain on device release  Lu Baolu
In the kdump kernel, the IOMMU operates in deferred_attach mode. In this mode, info->domain may not yet be assigned by the time the release_device function is called. It leads to the following crash in the crash kernel:

BUG: kernel NULL pointer dereference, address: 000000000000003c
...
RIP: 0010:do_raw_spin_lock+0xa/0xa0
...
 _raw_spin_lock_irqsave+0x1b/0x30
 intel_iommu_release_device+0x96/0x170
 iommu_deinit_device+0x39/0xf0
 __iommu_group_remove_device+0xa0/0xd0
 iommu_bus_notifier+0x55/0xb0
 notifier_call_chain+0x5a/0xd0
 blocking_notifier_call_chain+0x41/0x60
 bus_notify+0x34/0x50
 device_del+0x269/0x3d0
 pci_remove_bus_device+0x77/0x100
 p2sb_bar+0xae/0x1d0
 ...
 i801_probe+0x423/0x740

Use the release_domain mechanism to fix it. The scalable mode context entry, which is not part of the release domain, should be cleared in release_device(). Fixes: 586081d3f6b1 ("iommu/vt-d: Remove DEFER_DEVICE_DOMAIN_INFO") Reported-by: Eric Badger <ebadger@purestorage.com> Closes: https://lore.kernel.org/r/20240113181713.1817855-1-ebadger@purestorage.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240305013305.204605-3-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-03-06  iommu: Add static iommu_ops->release_domain  Lu Baolu
The current device_release callback for individual iommu drivers does the following:
1) Silent IOMMU DMA translation: It detaches any existing domain from the device and puts it into a blocking state (some drivers might use the identity state).
2) Resource release: It releases resources allocated during the device_probe callback and restores the device to its pre-probe state.

Step 1 is challenging for individual iommu drivers because each must check if a domain is already attached to the device. Additionally, if a deferred attach never occurred, the device_release should avoid modifying hardware configuration regardless of the reason for its call.

To simplify this process, introduce a static release_domain within the iommu_ops structure. It can be either a blocking or identity domain depending on the iommu hardware. The iommu core will decide whether to attach this domain before the device_release callback, eliminating the need for repetitive code in various drivers.

Consequently, the device_release callback can focus solely on the opposite operations of device_probe, including releasing all resources allocated during that callback.

Co-developed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240305013305.204605-2-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
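A hedged sketch of how a driver might wire this up; the blocking domain, its ops, and the example_* names are assumptions about a typical driver, not the exact code of any in-tree implementation:

    /* Sketch: a static domain the core attaches before release_device, so the
     * driver callback only has to undo its probe-time allocations. */
    static struct iommu_domain blocking_domain = {
        .type = IOMMU_DOMAIN_BLOCKED,
        .ops  = &blocking_domain_ops,          /* driver-provided attach op */
    };

    static const struct iommu_ops example_iommu_ops = {
        .release_domain = &blocking_domain,    /* new static member */
        .release_device = example_release_device,
        /* ... remaining callbacks ... */
    };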
2024-03-06  iommu/vt-d: Improve ITE fault handling if target device isn't present  Ethan Zhao
Because surprise removal could happen anytime, e.g. a user could request safe removal of an EP (endpoint device) via sysfs while its link is brought down for a surprise removal concurrently, such aggressive cases would cause an ATS invalidation request to be issued to a non-existent target device, followed by a deadly loop retrying that request after an ITE fault is triggered in interrupt context. This patch optimizes the ITE handling by checking the target device's presence state to avoid blindly retrying the timed-out request, thus avoiding a hard lockup or system hang. Device TLBs should only be invalidated when devices are in the iommu->device_rbtree (probed, not released) and present. Fixes: 6ba6c3a4cacf ("VT-d: add device IOTLB invalidation support") Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Ethan Zhao <haifeng.zhao@linux.intel.com> Link: https://lore.kernel.org/r/20240301080727.3529832-4-haifeng.zhao@linux.intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
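As a hedged sketch, the presence check could look roughly like the following before re-issuing a timed-out ATS invalidation; qi_target_is_present() is a hypothetical helper name, while pci_device_is_present() is the generic PCI core API:

    /* Sketch: don't blindly retry an ATS invalidation that timed out; if the
     * endpoint is already gone, retrying only re-arms the ITE fault loop. */
    static bool qi_target_is_present(struct pci_dev *pdev)
    {
        return pdev && pci_device_is_present(pdev);
    }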