In the sync reset flow, the PF checks that only devices with the same
device ID as itself are present on the PCIe bridge, otherwise it NACKs
the reset. Since the PCIe bridge connection to the NIC card has to be
1 to 1, this check is valid.
However, the BlueField device may also expose another sub-device on the
PCI bus, called the management interface, which only provides an
ethernet channel between the host and the smart NIC.
Allow the sync reset flow also when this management interface sub-device
is present among the devices checked on the PCIe bridge.
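As a minimal sketch of the kind of check described (not the actual mlx5
code; the function name and the mgmt_dev_id parameter are illustrative):

  static bool example_bridge_devs_ok(struct pci_dev *pdev, u16 mgmt_dev_id)
  {
      struct pci_dev *sibling;

      /* walk the devices that share the PF's bus under the PCIe bridge */
      list_for_each_entry(sibling, &pdev->bus->devices, bus_list) {
          if (sibling->device != pdev->device &&
              sibling->device != mgmt_dev_id)
              return false;    /* unexpected device: NACK the sync reset */
      }
      return true;
  }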
Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
While collecting crdump as part of the fw_fatal health reporter dump,
the PF may fail to lock the SW reset semaphore. Change the print to
indicate whether that was because another PF had already locked the
semaphore, in which case trying to lock it returns -EBUSY.
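A rough sketch of the message differentiation described (the lock helper
name is a placeholder, not the actual mlx5 function):

  static int example_collect_crdump(struct mlx5_core_dev *dev)
  {
      int err = example_lock_sw_reset_sem(dev);    /* placeholder helper */

      if (err == -EBUSY) {
          mlx5_core_info(dev, "SW reset semaphore already locked by another PF, skipping crdump collection\n");
          return err;
      }
      if (err) {
          mlx5_core_err(dev, "Failed to lock SW reset semaphore, err %d\n", err);
          return err;
      }
      /* ... read the crdump, then release the semaphore ... */
      return 0;
  }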
Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
These HP laptops use Realtek HDA codec combined with 2 or 4 CS35L41
Amplifiers using SPI with Internal Boost.
Signed-off-by: Stefan Binding <sbinding@opensource.cirrus.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20231115162116.494968-3-sbinding@opensource.cirrus.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
These ASUS Zenbook laptops use Realtek HDA codec combined with
2xCS35L41 Amplifiers using SPI or I2C with External Boost or
Internal Boost.
Signed-off-by: Stefan Binding <sbinding@opensource.cirrus.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20231115162116.494968-2-sbinding@opensource.cirrus.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
If we get back ENODEV, don't fail the load. There are nvidia devices
that don't have display blocks, and the driver should work on those.
Fixes: 15740541e8f0 ("drm/nouveau/devinit/tu102-: prepare for GSP-RM")
Link: https://gitlab.freedesktop.org/drm/nouveau/-/issues/270
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231115143933.261287-1-airlied@gmail.com
|
|
The Legion Go has a 2560x1600 portrait screen, with the native "up" facing
the right controller (90° CW from the rest of the device).
Signed-off-by: Brenton Simpson <appsforartists@google.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20231114233859.274189-1-appsforartists@google.com
|
|
Kirill Shutemov reported a significant percpu memory consumption increase after
booting in a 288-cpu VM ([1]) due to commit 41a5db8d8161 ("bpf: Add support for
non-fix-size percpu mem allocation"). The percpu memory consumption is
increased from 111MB to 969MB. The numbers are from /proc/meminfo.
I tried to reproduce the issue with my local VM which supports at most
255 cpus. With 252 cpus, without the above commit, the percpu memory
consumption immediately after boot is 57MB, while with the above commit the
percpu memory consumption is 231MB.
This is not good since so far percpu memory from the bpf memory allocator is
not widely used yet. Let us change pre-allocation in the init stage to
on-demand allocation when the verifier detects there is a need for percpu
memory for a bpf program. With this change, percpu memory consumption after
boot can be reduced significantly.
[1] https://lore.kernel.org/lkml/20231109154934.4saimljtqx625l3v@box.shutemov.name/
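A simplified sketch of the on-demand initialization pattern described above
(the guard variable and function names are illustrative; only
bpf_mem_alloc_init() and bpf_global_percpu_ma are real kernel symbols):

  static DEFINE_MUTEX(example_percpu_ma_lock);
  static bool example_percpu_ma_ready;

  /* called by the verifier only when a program actually needs percpu memory */
  static int example_init_percpu_ma_if_needed(void)
  {
      int err = 0;

      mutex_lock(&example_percpu_ma_lock);
      if (!example_percpu_ma_ready) {
          /* pay the percpu memory cost only once it is really needed */
          err = bpf_mem_alloc_init(&bpf_global_percpu_ma, 0, true);
          if (!err)
              example_percpu_ma_ready = true;
      }
      mutex_unlock(&example_percpu_ma_lock);
      return err;
  }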
Fixes: 41a5db8d8161 ("bpf: Add support for non-fix-size percpu mem allocation")
Reported-and-tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20231111013928.948838-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A previous commit added a trylock for getting the SQPOLL thread info via
fdinfo, but this introduced a regression where we often fail to get it if
the thread is busy. For that case, we end up not printing the current CPU
and PID info.
Rather than rely on this lock, just print the pid we already stored in
the io_sq_data struct, and ensure we update the current CPU every time
we've slept or potentially rescheduled. The latter potentially won't be
100% accurate, but that wasn't the case before either, as the task can
get migrated at any time unless it has been pinned at creation time.
We keep the io_sq_data dereference inside ctx->uring_lock,
as it has always been, as destruction of the thread and data happen below
that. We could make this RCU safe, but there's little point in doing that.
With this, we always print the last valid information we had, rather than
have spurious outputs with missing information.
Fixes: 7644b1a1c9a7 ("io_uring/fdinfo: lock SQ thread while retrieving thread cpu/pid")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
acpi_object *obj is unused in this function, so delete it; also
delete the now-unnecessary kfree(obj).
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Link: https://lore.kernel.org/r/20231113200742.3593548-4-harshit.m.mogalapalli@oracle.com
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
'attr_name_kobj' is allocated using kzalloc, but it is not freed on any
of the error paths, hence we have a memory leak.
Fix the error path before kobject_init_and_add() by adding kfree().
kobject_put() must always be called after passing the object to
kobject_init_and_add(). Currently only the error path immediately after
kobject_init_and_add() calls kobject_put(), and none of the later error
paths do.
Fix the error handling after kobject_init_and_add() by moving the
kobject_put() into the goto label err_other_attr_init that is already
used by all the error paths after kobject_init_and_add().
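A minimal sketch of the resulting error-handling shape (placeholder names
such as example_ktype, example_precondition_check() and
hp_populate_example(); this is not the actual hp_add_other_attributes()
code):

  static int example_add_attribute(struct kobject *parent_kobj)
  {
      struct kobject *attr_name_kobj;
      int ret;

      attr_name_kobj = kzalloc(sizeof(*attr_name_kobj), GFP_KERNEL);
      if (!attr_name_kobj)
          return -ENOMEM;

      /* a failure before kobject_init_and_add() must kfree() directly */
      ret = example_precondition_check();
      if (ret) {
          kfree(attr_name_kobj);
          return ret;
      }

      ret = kobject_init_and_add(attr_name_kobj, &example_ktype,
                                 parent_kobj, "%s", "example");
      if (ret)
          goto err_other_attr_init;

      ret = hp_populate_example(attr_name_kobj);    /* placeholder step */
      if (ret)
          goto err_other_attr_init;

      return 0;

  err_other_attr_init:
      /* after kobject_init_and_add(), every error path must drop the
       * reference with kobject_put(), never kfree() */
      kobject_put(attr_name_kobj);
      return ret;
  }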
Fixes: a34fc329b189 ("platform/x86: hp-bioscfg: bioscfg")
Cc: stable@vger.kernel.org # 6.6.x: c5dbf0416000: platform/x86: hp-bioscfg: Simplify return check in hp_add_other_attributes()
Cc: stable@vger.kernel.org # 6.6.x: 5736aa9537c9: platform/x86: hp-bioscfg: move mutex_lock() down in hp_add_other_attributes()
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <error27@gmail.com>
Closes: https://lore.kernel.org/r/202309201412.on0VXJGo-lkp@intel.com/
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
[ij: Added the stable dep tags]
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Link: https://lore.kernel.org/r/20231113200742.3593548-3-harshit.m.mogalapalli@oracle.com
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
It turns out that even though the comment says the driver can load
fine, that's not really the case and no codecs are detected.
Specifically, for -EPROBE_DEFER, always fail the probe.
This fixes a regression when HDA-intel is loaded before i915.
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Closes: https://lore.kernel.org/r/ZVNUxZzCGcxQzqJX@intel.com
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Tested-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Fixes: e6d0c13e9f46 ("ALSA: hda: i915: Remove extra argument from snd_hdac_i915_init")
Link: https://gitlab.freedesktop.org/drm/intel/-/issues/9671
Link: https://lore.kernel.org/r/20231115123625.74286-1-maarten.lankhorst@linux.intel.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
attr_name_kobj's memory allocation is done with mutex_lock() held, which
is not needed.
Move the allocation outside of the locked region so that no unlock is
needed when the allocation fails.
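A minimal sketch of the reordering, with placeholder names (not the actual
hp-bioscfg code): allocate before taking the mutex so the failure path has
nothing to unlock.

  static int example_add(struct example_priv *priv)
  {
      struct kobject *attr_name_kobj;

      attr_name_kobj = kzalloc(sizeof(*attr_name_kobj), GFP_KERNEL);
      if (!attr_name_kobj)
          return -ENOMEM;    /* no lock held, nothing to unlock */

      mutex_lock(&priv->lock);
      /* ... registration work that actually needs the lock ... */
      mutex_unlock(&priv->lock);

      return 0;
  }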
Suggested-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Link: https://lore.kernel.org/r/20231113200742.3593548-2-harshit.m.mogalapalli@oracle.com
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
All cases in the switch statement have the same goto on error; move the
return check out of the switch. This is a cleanup.
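Schematically (placeholder case labels and helpers, not the actual
hp-bioscfg code):

  /* before: every case repeats the same error check */
  switch (attr_type) {
  case EXAMPLE_TYPE_A:
      ret = example_setup_a();
      if (ret)
          goto err;
      break;
  case EXAMPLE_TYPE_B:
      ret = example_setup_b();
      if (ret)
          goto err;
      break;
  }

  /* after: check once, after the switch */
  switch (attr_type) {
  case EXAMPLE_TYPE_A:
      ret = example_setup_a();
      break;
  case EXAMPLE_TYPE_B:
      ret = example_setup_b();
      break;
  }
  if (ret)
      goto err;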
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Link: https://lore.kernel.org/r/20231113200742.3593548-1-harshit.m.mogalapalli@oracle.com
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
max_brightness is used in ideapad_kbd_bl_brightness_get() before it's set,
causing ideapad_kbd_bl_brightness_get() to return -EINVAL sometimes.
Fixes: ecaa1867b524 ("platform/x86: ideapad-laptop: Add support for keyboard backlights using KBLC ACPI symbol")
Signed-off-by: Stuart Hayhurst <stuart.a.hayhurst@gmail.com>
Cc: stable@vger.kernel.org
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Link: https://lore.kernel.org/r/20231114114055.6220-2-stuart.a.hayhurst@gmail.com
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
Maurice is no longer with Intel and his e-mail address is no longer
active. Remove the stale entry from the Slim boot loader section.
Signed-off-by: Jithu Joseph <jithu.joseph@intel.com>
Link: https://lore.kernel.org/r/20231102195218.143440-1-jithu.joseph@intel.com
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
Cited commit removed the strscpy() call and kept the snprintf() only.
It is common to use 'dev->name' as the format string before a netdev is
registered, this results in 'res' and 'name' pointers being equal.
According to POSIX, if copying takes place between objects that overlap
as a result of a call to sprintf() or snprintf(), the results are
undefined.
Add back the strscpy() and use 'buf' as an intermediate buffer.
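A sketch of the resulting pattern (simplified; not the actual
__dev_alloc_name() logic): format into a local buffer first, then copy into
dev->name, which may alias the format string.

  static int example_format_name(struct net_device *dev, const char *name, int id)
  {
      char buf[IFNAMSIZ];

      /* snprintf() with overlapping source and destination is undefined,
       * so build the candidate name in a scratch buffer ... */
      if (snprintf(buf, IFNAMSIZ, name, id) >= IFNAMSIZ)
          return -EINVAL;    /* truncated; the real code handles this case */

      /* ... and only then copy it into dev->name, which may be the very
       * same memory that 'name' points to */
      strscpy(dev->name, buf, IFNAMSIZ);
      return 0;
  }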
Fixes: 7ad17b04dc7b ("net: trust the bitmap in __dev_alloc_name()")
Cc: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The top syzbot report for networking (#14 for the entire kernel)
is the queue timeout splat. We kept it around for a long time,
because in real life it provides pretty strong signal that
something is wrong with the driver or the device.
Removing it is also likely to break monitoring for those who
track it as a kernel warning.
Nevertheless, WARN()ings are best suited for catching kernel
programming bugs. If a Tx queue gets starved due to a pause
storm, priority configuration, or other weirdness - that's
obviously a problem, but not a problem we can fix at
the kernel level.
Bite the bullet and convert the WARN() to a print.
Before:
NETDEV WATCHDOG: eni1np1 (netdevsim): transmit queue 0 timed out 1975 ms
WARNING: CPU: 0 PID: 0 at net/sched/sch_generic.c:525 dev_watchdog+0x39e/0x3b0
[... completely pointless stack trace of a timer follows ...]
Now:
netdevsim netdevsim1 eni1np1: NETDEV WATCHDOG: CPU: 0: transmit queue 0 timed out 1769 ms
Alternatively we could mark the drivers which syzbot has
learned to abuse as "print-instead-of-WARN" selectively.
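As a sketch of the shape of the change (modeled on the output shown above;
not the exact dev_watchdog() hunk, and 'queue'/'ms' are stand-in
parameters for the values the watchdog computes):

  static void example_report_tx_timeout(struct net_device *dev,
                                        unsigned int queue, unsigned int ms)
  {
      /* previously roughly: WARN_ONCE(1, "NETDEV WATCHDOG: ...", ...); */
      netdev_warn(dev, "NETDEV WATCHDOG: CPU: %d: transmit queue %u timed out %u ms\n",
                  smp_processor_id(), queue, ms);
  }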
Reported-by: syzbot+d55372214aff0faa1f1f@syzkaller.appspotmail.com
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Michael Chan says:
====================
bnxt_en: TX path improvements
All patches in this patchset are related to improving the TX path.
There are 2 areas of improvement:
1. The TX interrupt logic currently counts the number of TX completions
to determine the number of TX SKBs to free. We now change it so that
the TX completion will now contain the hardware consumer index
information. The driver will keep track of the latest hardware
consumer index from the last TX completion and clean up all TX SKBs
up to that index. This scheme aligns better with future chips and
allows the xmit_more code path to be more optimized.
2. The current driver logic requires an additional MSIX for each
additional MQPRIO TX ring. This scheme uses too many MSIX vectors if
the user enables a large number of MQPRIO TCs. We now use a new scheme
that will use the same MSIX for all the MQPRIO TX rings for each
ethtool channel. Each ethtool TX channel can have up to 8 MQPRIO
TX rings and now they all will share the same MSIX.
v2: Rebased
v1 posted on Oct 27 2023 right before the close of net-next:
https://lore.kernel.org/netdev/20231027232252.36111-1-michael.chan@broadcom.com/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that we use the cumulative consumer index scheme for TX completion,
we don't need to have one TX completion per TX packet in the xmit_more
code path. Set the TX_BD_FLAGS_NO_CMPL flag if xmit_more is true.
Fall back to one interrupt per packet if the ring is filled beyond
bp->tx_wake_thresh.
Also, move the wmb() to bnxt_txr_db_kick(). When xmit_more is true,
we'll skip the bnxt_txr_db_kick() call and there is no need to call
wmb() to sync the TX BD data.
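A sketch of the xmit_more handling described above (simplified; the
tx_ring_nearly_full() helper is a placeholder, and this is not the actual
bnxt_start_xmit() code):

  static void example_finish_xmit(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
                                  struct tx_bd *txbd, u16 prod)
  {
      if (netdev_xmit_more() && !tx_ring_nearly_full(txr, bp->tx_wake_thresh)) {
          /* batching: no completion for this packet, no doorbell yet */
          txbd->tx_bd_len_flags_type |= cpu_to_le32(TX_BD_FLAGS_NO_CMPL);
      } else {
          /* request a completion and ring the doorbell; bnxt_txr_db_kick()
           * now does the wmb() before hitting the doorbell */
          bnxt_txr_db_kick(bp, txr, prod);
      }
  }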
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We can now fully support sharing the same MSIX for all mqprio TX rings
belonging to the same ethtool channel with the new infrastructure:
1. Allocate the proper entries for cp_ring_arr in struct bnxt_cp_ring_info
to support the additional TX rings.
2. Populate the tx_ring array in struct bnxt_napi for all TX rings
sharing the same NAPI.
3. bnxt_num_tx_to_cp() returns the proper NQ/completion rings to support
the TX rings in the input.
4. Adjust bnxt_get_num_ring_stats() for the reduced number of ring
counters with the new scheme.
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add 3 macros that handle the conversions between TC numbers and TX
ring numbers. These will help to clarify the existing logic and the
new logic in the next patch.
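A hypothetical sketch of the kind of conversion macros meant here (names
and formulas are illustrative, not the actual bnxt definitions):

  #define EXAMPLE_TC_TO_TXR(bp, tc)       ((tc) * (bp)->tx_nr_rings_per_tc)
  #define EXAMPLE_TXR_TO_TC(bp, txr)      ((txr) / (bp)->tx_nr_rings_per_tc)
  #define EXAMPLE_TXR_TO_TC_OFF(bp, txr)  ((txr) % (bp)->tx_nr_rings_per_tc)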
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Up until now, each TX ring has always required a completion ring/NQ/MSIX.
bnxt_trim_rings() and the assignment of bp->cp_nr_rings always make
this assumption. This will no longer be true in the next patches, so
we refactor and add helper functions to determine the proper relationship
between TX rings and the required completion ring/NQ/MSIX. This patch
does not change the 1:1 relationship yet.
Note that on P5 chips, each RX and TX ring still requires a completion
ring. Only the number of NQs has been reduced. We should no longer call
bnxt_trim_rings() to adjust the RX and TX rings on P5 chips. Replace with
simple logic to check that RX + TX < CP and adjust accordingly.
bnxt_check_rings() should call _bnxt_get_max_rings() to get the raw
number of rings instead of bnxt_get_max_rings(). If we are about to
create TCs, bnxt_get_max_rings() would not be able to calculate the max
rings correctly.
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
For each mqprio TC, we allocate a set of TX rings to map to the new
hardware CoS queue. Expand the tx_ring pointer in struct bnxt_napi
to an array of 8 to support up to 8 TX rings, one for each TC.
Only array entry 0 is used at this time. The rest of the array
entries will be used in later patches.
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add 2 helper functions to set coalescing for the RX and TX rings. This
will make it easier to expand the number of TX rings per MSIX in the
next patches.
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In order to support multiple TX rings on the same MSIX, we'll use the
upper byte of the TX opaque field to store the ring index in the new
tx_napi_idx field. This tx_napi_idx field is currently always 0 until
more infrastructure is added in later patches.
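Illustratively, the encoding could look like this (macro names are
placeholders, not the actual bnxt definitions): the NAPI/ring index lives in
the upper byte and the rest of the opaque value is left for producer
information.

  #define EXAMPLE_TX_OPAQUE(prod_info, napi_idx) \
      (((u32)(napi_idx) << 24) | ((prod_info) & 0x00ffffff))
  #define EXAMPLE_TX_OPAQUE_NAPI_IDX(opaque)  ((u32)(opaque) >> 24)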
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
bnxt_tx_int() processes only the one TX ring from the bnxt_napi pointer.
To prepare for more TX rings associated with the bnxt_napi structure,
add a new __bnxt_tx_int() function that takes the bnxt_tx_ring_info
pointer to process that one TX ring. No functional change.
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
These 2 constants were used for the one RX and one TX completion
ring pointer in the cpr->cp_ring_arr fixed array. Now that we've
changed to allocating the array for the exact number of entries to
support more TX rings, we no longer use these constants.
The array index as well as the type of completion ring (RX/TX) are
now encoded in the handle for the completion ring. This will allow
us to locate the completion ring during NAPI for any number of
completion rings sharing the same MSIX. In the following patches,
we'll be adding support for more TX rings associated with the same
MSIX vector.
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The cp_ring_arr is currently a fixed array of 2 pointers for the
TX and RX completion rings. These pointers are allocated during
ring initialization. Currently, we support up to 2 completion rings
for each MSIX. In order to support more completion rings, we change
this fixed array to a pointer and allocate the required entries
during ring initialization. This patch keeps the current scheme of
allocating only 2 entries when needed. Later patches will expand
and allocate more entries when required.
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
From the TX or RX ring structure, we need to find the corresponding
completion ring during initialization. On P5 chips, we use the MSIX/napi
entry to locate the completion ring because there is only one RX/TX
ring per MSIX. To allow multiple TX rings for each MSIX, we need
to add a direct pointer from the TX ring and RX ring structures.
This also simplifies the existing logic.
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently, the opaque field in the TX BD is only used for debugging.
The TX completion logic relies on getting one TX completion for each
packet and they always complete in order.
Improve this scheme by putting the producer information (ring index plus
number of BDs for the packet) in the opaque field. This way, we can
handle TX completion processing by looking at the last TX completion
instead of counting the number of completions.
Since we no longer need to count the exact number of completions, we can
optimize xmit_more by disabling TX completion when the xmit_more
condition is true. This will be done in later patches.
This patch is only initializing the opaque field in the TX BD and is
not changing the driver's TX completion logic yet.
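As an illustration of the idea (placeholder macros and a toy ring struct;
not the actual bnxt encoding or bnxt_tx_int() code): pack the producer index
and BD count into the opaque value, and on completion clean up everything up
to the index from the last completion instead of counting completions.

  #define EX_TX_OPAQUE(prod, nr_bds)  (((u32)(nr_bds) << 16) | ((prod) & 0xffff))
  #define EX_TX_OPAQUE_PROD(op)       ((op) & 0xffff)
  #define EX_TX_OPAQUE_BDS(op)        (((u32)(op)) >> 16)

  struct example_txr {
      u16 tx_cons;
      u16 ring_mask;    /* ring size - 1, ring size is a power of 2 */
  };

  static void example_tx_int(struct example_txr *txr, u32 last_opaque)
  {
      u16 hw_cons = (EX_TX_OPAQUE_PROD(last_opaque) +
                     EX_TX_OPAQUE_BDS(last_opaque)) & txr->ring_mask;

      while (txr->tx_cons != hw_cons) {
          /* free the SKB and BDs at txr->tx_cons here */
          txr->tx_cons = (txr->tx_cons + 1) & txr->ring_mask;
      }
  }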
Reviewed-by: Andy Gospodarek <gospo@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The usage count of struct dev_pm_info is an implementation detail that
is only available if CONFIG_PM is enabled, so printing it in a debug message
causes a build failure in configurations without PM:
In file included from include/linux/device.h:15,
from include/linux/pci.h:37,
from drivers/accel/ivpu/ivpu_pm.c:8:
drivers/accel/ivpu/ivpu_pm.c: In function 'ivpu_rpm_get_if_active':
drivers/accel/ivpu/ivpu_pm.c:254:51: error: 'struct dev_pm_info' has no member named 'usage_count'
254 | atomic_read(&vdev->drm.dev->power.usage_count));
| ^
include/linux/dev_printk.h:129:48: note: in definition of macro 'dev_printk'
129 | _dev_printk(level, dev, fmt, ##__VA_ARGS__); \
| ^~~~~~~~~~~
drivers/accel/ivpu/ivpu_drv.h:75:17: note: in expansion of macro 'dev_dbg'
75 | dev_dbg((vdev)->drm.dev, "[%s] " fmt, #type, ##args); \
| ^~~~~~~
drivers/accel/ivpu/ivpu_pm.c:253:9: note: in expansion of macro 'ivpu_dbg'
253 | ivpu_dbg(vdev, RPM, "rpm_get_if_active count %d\n",
| ^~~~~~~~
The print message does not seem essential, so the easiest workaround is
to just remove it.
Fixes: c39dc15191c4 ("accel/ivpu: Read clock rate only if device is up")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Signed-off-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231027152633.528490-1-arnd@kernel.org
(cherry picked from commit 1470acbef122c7e2e588f6346ce459c26d0568a2)
Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
|
|
When moving the *-internal-delay-ps properties to only apply to RGMII
interface modes, there was a typo in the text formatting.
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This reverts commit 3780bb29311eccb7a1c9641032a112eed237f7e3.
The cited commit introduced unwanted behavior.
The intent for the commit was to be able to detect carrier loss/gain
for just the NIC connected to the BMC. The unwanted effect is that a
carrier loss on auxiliary paths also causes the BMC to lose
carrier. The BMC never regains carrier despite the secondary NIC
regaining a link.
This change, when merged, needs to be backported to stable kernels.
5.4-stable, 5.10-stable, 5.15-stable, 6.1-stable, 6.5-stable
Fixes: 3780bb29311e ("ncsi: Propagate carrier gain/loss events to the NCSI controller")
CC: stable@vger.kernel.org
Signed-off-by: Johnathan Mantey <johnathanx.mantey@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Instead of the IRQ number, use the struct irq_info pointer as parameter
in the internal pirq related functions. This allows dropping some calls
of info_for_irq().
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
Modify the internal bind- and unbind-interfaces to take a struct
irq_info parameter. When allocating a new IRQ pass the pointer from
the allocating function further up.
This will reduce the number of info_for_irq() calls and make the code
more efficient.
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
Instead of having a common function for allocating a single IRQ or a
consecutive number of IRQs, split up the functionality into the callers
of xen_allocate_irqs_dynamic().
This allows handling any allocation error in xen_irq_init() gracefully
instead of panicking the system. Let xen_irq_init() return the irq_info
pointer or NULL in case of an allocation error.
Additionally set the IRQ into irq_info already at allocation time, as
otherwise the IRQ would be '0' (which is a valid IRQ number) until
being set.
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull hardening fixes from Kees Cook:
- stackleak: add declarations for global functions (Arnd Bergmann)
- gcc-plugins: randstruct: Only warn about true flexible arrays (Kees
Cook)
- gcc-plugins: latent_entropy: Fix description typo (Konstantin Runov)
* tag 'hardening-v6.7-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
gcc-plugins: latent_entropy: Fix typo (args -> argc) in plugin description
gcc-plugins: randstruct: Only warn about true flexible arrays
stackleak: add declarations for global functions
|
|
In percpu reader mode, trylock() for read had a lost wakeup: on failure
to get the lock, we may have caused a writer to fail to get the lock,
because we temporarily elevated the reader count.
We need to check for waiters after decrementing the read count - not
before.
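A heavily simplified sketch of that ordering, using generic kernel
primitives rather than the actual bcachefs lock code (all names here are
illustrative):

  struct example_rwlock {
      unsigned __percpu   *readers;
      atomic_t            writer_blocking;
      wait_queue_head_t   wait;
  };

  static bool example_trylock_read(struct example_rwlock *l)
  {
      this_cpu_inc(*l->readers);
      smp_mb();
      if (!atomic_read(&l->writer_blocking))
          return true;

      this_cpu_dec(*l->readers);
      smp_mb();
      /* our briefly elevated reader count may have made a writer's check
       * fail; only after dropping the count is it safe to wake it */
      wake_up(&l->wait);
      return false;
  }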
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
In no_data_io mode, we expect data checksums to be wrong - don't want to
spew the log with them.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
When searching the link table for the matching inode, we were searching
for a specific - incorrect - snapshot ID as well, causing us to fail to
find the inode.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Running with fewer max btree paths doesn't work anymore when replication
is enabled - as we've added e.g. the freespace and bucket gens btrees,
we naturally end up needing more btree paths.
This is an issue with lockdep: we end up taking more locks than lockdep
will track (the MAX_LOCK_DEPTH constant). But bcachefs as merged does
not yet support lockdep anyways, so we can leave that for later.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
The journal read path had some informational log statements preparatory
for ZNS support - they're not of interest to users, so we can turn them
off.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
In may_delete_deleted_inode(), there's a corner case when a snapshot was
taken while we had an unlinked inode: we don't want to delete the inode
in the internal (shared) snapshot node, since it might have been
reattached in a descendant snapshot.
Instead we propagate the key to any snapshot leaves it doesn't exist in,
so that it can be deleted there if necessary, and then clear the
unlinked flag in the internal node.
But we forgot to commit after clearing the unlinked flag, causing us to
go into an infinite loop.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This fixes a bug discovered by generic/388 where sb->s_fs_info was NULL
while the superblock was still active - the error path was entirely
fubar, and was trying to do something unclear and unnecessary.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
During mount, bcachefs mount option processing may sleep while allocating a string buffer.
Fix this by reference counting in order to take the atomic path.
Signed-off-by: Daniel J Blueman <daniel@quora.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
may_delete_deleted_inode() was returning without exiting a btree
iterator, eventually causing propagate_key_to_snapshot_leaves() to go
into an infinite loop hitting btree_trans_too_many_iters().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This deletes the complicated and somewhat expensive journal
pre-reservation machinery in favor of just using journal watermarks:
when the journal is more than half full, we run journal reclaim more
aggressively, and when the journal is more than 3/4 full we only allow
journal reclaim to get new journal reservations.
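A small sketch of the watermark idea (illustrative names; the thresholds are
taken from the description above, and this is not the actual bcachefs code):

  enum example_journal_watermark {
      EX_WATERMARK_ANY,        /* < 1/2 full: anyone may reserve */
      EX_WATERMARK_AGGRESSIVE, /* > 1/2 full: reclaim runs harder */
      EX_WATERMARK_RECLAIM,    /* > 3/4 full: only reclaim may reserve */
  };

  static enum example_journal_watermark example_watermark(u64 used, u64 total)
  {
      if (used * 4 > total * 3)
          return EX_WATERMARK_RECLAIM;
      if (used * 2 > total)
          return EX_WATERMARK_AGGRESSIVE;
      return EX_WATERMARK_ANY;
  }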
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Pull Zstd fix from Nick Terrell:
"Only a single line change to fix a benign UBSAN warning"
* tag 'zstd-linus-v6.7-rc2' of https://github.com/terrelln/linux:
zstd: Fix array-index-out-of-bounds UBSAN warning
|
|
Matthieu Baerts says:
====================
mptcp: misc. fixes for v6.7
Here are a few fixes related to MPTCP:
- Patch 1 limits GSO max size to ~64K when MPTCP is being used due to a
spec limit. 'gso_max_size' can exceed the max value supported by MPTCP
since v5.19.
- Patch 2 fixes a possible NULL pointer dereference on close that can
happen since v6.7-rc1.
- Patch 3 avoids sending a RM_ADDR when the corresponding address is no
longer tracked locally. A regression for a fix backported to v5.19.
- Patch 4 adds a missing lock when changing the IP TOS with setsockopt().
A fix for v5.17.
- Patch 5 fixes an expectation when running MPTCP Join selftest with the
checksum option (-C). An issue present since v6.1.
====================
Link: https://lore.kernel.org/r/20231114-upstream-net-20231113-mptcp-misc-fixes-6-7-rc2-v1-0-7b9cd6a7b7f4@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|