|
Per-next-PR merge.
Conflicts in net/smc/af_smc.c between:
b5dd4d698171 ("net/smc: llc_conf_mutex refactor, replace it with rw_semaphore")
e40b801b3603 ("net/smc: fix potential panic dues to unprotected smc_llc_srv_add_link()")
https://lore.kernel.org/all/20230221124008.6303c330@canb.auug.org.au/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
sizeof(struct ublksrv_io_cmd) is 16 bytes, which fits in the 64-byte SQE,
so it is not necessary to check IO_URING_F_SQE128.
With this change, we get the chance to halve the SQ ring memory.
Fixes: 71f28f3136af ("ublk_drv: add io_uring based userspace block driver")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20230220041413.1524335-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
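A minimal sketch of the size argument, using a mock 16-byte layout rather
than the real UAPI struct (the field names here are assumptions for
illustration only):

/* Hedged illustration: a 16-byte per-IO command fits in the inline command
 * area of a regular 64-byte SQE, so SQE128 is not required. */
#include <assert.h>
#include <stdint.h>

struct mock_ublksrv_io_cmd {	/* mock layout, 16 bytes total */
	uint16_t q_id;
	uint16_t tag;
	int32_t  result;
	uint64_t addr;
};

static_assert(sizeof(struct mock_ublksrv_io_cmd) == 16,
	      "per-IO command is 16 bytes");

int main(void) { return 0; }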
|
|
There is a spelling mistake in a pr_err message. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Reviewed-by: Frank Rowand <frowand.list@gmail.com>
Link: https://lore.kernel.org/r/20230220144422.873356-1-colin.i.king@gmail.com
Signed-off-by: Rob Herring <robh@kernel.org>
|
|
Merge tag 'edac_updates_for_v6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Pull EDAC updates from Borislav Petkov:
- Add a driver for the RAS functionality on Xilinx's on-chip memory
controller
- Add support for decoding errors from the first and second level
memory on SKL-based hardware
- Add support for the memory controllers in Intel Granite Rapids and
Emerald Rapids machines
- First round of amd64_edac driver simplification and removal of
unneeded functionality
- The usual cleanups and fixes
* tag 'edac_updates_for_v6.3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
EDAC/amd64: Shut up an -Werror,-Wsometimes-uninitialized clang false positive
EDAC/amd64: Remove early_channel_count()
EDAC/amd64: Remove PCI Function 0
EDAC/amd64: Remove PCI Function 6
EDAC/amd64: Remove scrub rate control for Family 17h and later
EDAC/amd64: Don't set up EDAC PCI control on Family 17h+
EDAC/i10nm: Add driver decoder for Sapphire Rapids server
EDAC/i10nm: Add Intel Granite Rapids server support
EDAC/i10nm: Make more configurations CPU model specific
EDAC/i10nm: Add Intel Emerald Rapids server support
EDAC/skx_common: Delete duplicated and unreachable code
EDAC/skx_common: Enable EDAC support for the "near" memory
EDAC/qcom: Add platform_device_id table for module autoloading
EDAC/zynqmp: Add EDAC support for Xilinx ZynqMP OCM
dt-bindings: edac: Add bindings for Xilinx ZynqMP OCM
|
|
Merge tag 'ras_core_for_v6.3_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull RAS updates from Borislav Petkov:
- Add support for reporting more bits of the physical address on error,
on newer AMD CPUs
- Mask out bits which don't belong to the address of the error being
reported
* tag 'ras_core_for_v6.3_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mce: Mask out non-address bits from machine check bank
x86/mce: Add support for Extended Physical Address MCA changes
x86/mce: Define a function to extract ErrorAddr from MCA_ADDR
|
|
Resolve conflicts from the signature change in iommu_map:
- drivers/infiniband/hw/usnic/usnic_uiom.c
Switch iommu_map_atomic() to iommu_map(.., GFP_ATOMIC)
- drivers/vfio/vfio_iommu_type1.c
Following indenting change for GFP_KERNEL
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Move drm_fb_helper_unprepare() from drm_fb_helper_fini() into the
calling fbdev implementation. Avoids a possible stale mutex with
generic fbdev code.
As indicated by its name, drm_fb_helper_prepare() prepares struct
drm_fb_helper before setting up the fbdev support with a call to
drm_fb_helper_init(). In the legacy fbdev emulation, these calls happen
next to each other. If successful, drm_fb_helper_fini() later tears down
the fbdev device and also unprepares it via drm_fb_helper_unprepare().
The generic fbdev emulation prepares struct drm_fb_helper immediately
after allocating the instance. It only calls drm_fb_helper_init()
as part of processing a hotplug event. If the hotplug handling fails,
it runs drm_fb_helper_fini(). This unprepares the fb-helper instance,
and the next hotplug event then runs on stale data.
Solve this by moving drm_fb_helper_unprepare() from drm_fb_helper_fini()
into the fbdev implementations. Call it right before freeing the
fb-helper instance.
Fixes: 643231b28380 ("drm/fbdev-generic: Minimize hotplug error handling")
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Javier Martinez Canillas <javierm@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230216140620.17699-1-tzimmermann@suse.de
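A minimal sketch of the resulting teardown order in an fbdev implementation,
assuming a hypothetical release helper rather than the exact patched code:

/* Hedged sketch: drm_fb_helper_unprepare() is now called by the fbdev
 * implementation right before freeing the fb-helper instance, instead of
 * from drm_fb_helper_fini(), so a failed hotplug can retry on valid data. */
static void example_fbdev_release(struct drm_fb_helper *fb_helper)
{
	drm_fb_helper_unprepare(fb_helper);	/* undo drm_fb_helper_prepare() */
	kfree(fb_helper);			/* free right after unpreparing */
}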
|
|
pci_msix_free_irq() is used to free an interrupt on a PCI/MSI-X interrupt
domain.
The API description specifies that the interrupt to be freed was allocated
via pci_msix_alloc_irq_at(). This description limits the usage of
pci_msix_free_irq(), since it can also be used to free
MSI-X interrupts allocated with, for example, pci_alloc_irq_vectors().
Remove the text stating that the interrupt to be freed had to be allocated
with pci_msix_alloc_irq_at(). The needed struct msi_map need not come from
pci_msix_alloc_irq_at() but can be created from scratch using
pci_irq_vector() to obtain the Linux IRQ number. Highlight that
pci_msix_free_irq() cannot be used to disable MSI-X, to guide users that,
for example, pci_free_irq_vectors() is still needed.
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/lkml/87r0xsd8j4.ffs@tglx
Link: https://lore.kernel.org/r/4c3e7a50d6e70f408812cd7ab199c6b4b326f9de.1676408572.git.reinette.chatre@intel.com
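A small sketch of freeing one vector that did not come from
pci_msix_alloc_irq_at(), building the msi_map from scratch ('pdev' and 'nr'
are assumed to exist in the caller; this is not code from the patch):

/* Hedged sketch: construct a struct msi_map by hand for MSI-X entry 'nr'
 * that was set up via pci_alloc_irq_vectors(), then free that one vector.
 * Disabling MSI-X entirely still needs e.g. pci_free_irq_vectors(). */
struct msi_map map = {
	.index = nr,				/* MSI-X table entry */
	.virq  = pci_irq_vector(pdev, nr),	/* Linux interrupt number */
};

pci_msix_free_irq(pdev, map);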
|
|
Merge tag 'sched-core-2023-02-20' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
- Improve the scalability of the CFS bandwidth unthrottling logic with
a large number of CPUs.
- Fix & rework various cpuidle routines, simplify interaction with the
generic scheduler code. Add __cpuidle methods as noinstr to objtool's
noinstr detection and fix boatloads of cpuidle bugs & quirks.
- Add new ABI: introduce MEMBARRIER_CMD_GET_REGISTRATIONS, to query
previously issued registrations.
- Limit scheduler slice duration to the sysctl_sched_latency period, to
improve scheduling granularity with a large number of SCHED_IDLE
tasks.
- Debuggability enhancement on sys_exit(): warn about disabled IRQs,
but also enable them to prevent a cascade of followup problems and
repeat warnings.
- Fix the rescheduling logic in prio_changed_dl().
- Micro-optimize cpufreq and sched-util methods.
- Micro-optimize ttwu_runnable()
- Micro-optimize the idle-scanning in update_numa_stats(),
select_idle_capacity() and steal_cookie_task().
- Update the RSEQ code & self-tests
- Constify various scheduler methods
- Remove unused methods
- Refine __init tags
- Documentation updates
- Misc other cleanups, fixes
* tag 'sched-core-2023-02-20' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (110 commits)
sched/rt: pick_next_rt_entity(): check list_entry
sched/deadline: Add more reschedule cases to prio_changed_dl()
sched/fair: sanitize vruntime of entity being placed
sched/fair: Remove capacity inversion detection
sched/fair: unlink misfit task from cpu overutilized
objtool: mem*() are not uaccess safe
cpuidle: Fix poll_idle() noinstr annotation
sched/clock: Make local_clock() noinstr
sched/clock/x86: Mark sched_clock() noinstr
x86/pvclock: Improve atomic update of last_value in pvclock_clocksource_read()
x86/atomics: Always inline arch_atomic64*()
cpuidle: tracing, preempt: Squash _rcuidle tracing
cpuidle: tracing: Warn about !rcu_is_watching()
cpuidle: lib/bug: Disable rcu_is_watching() during WARN/BUG
cpuidle: drivers: firmware: psci: Dont instrument suspend code
KVM: selftests: Fix build of rseq test
exit: Detect and fix irq disabled state in oops
cpuidle, arm64: Fix the ARM64 cpuidle logic
cpuidle: mvebu: Fix duplicate flags assignment
sched/fair: Limit sched slice duration
...
|
|
Merge tag 'locking-core-2023-02-20' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
- rwsem micro-optimizations
- spinlock micro-optimizations
- cleanups, simplifications
* tag 'locking-core-2023-02-20' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
vduse: Remove include of rwlock.h
locking/lockdep: Remove lockdep_init_map_crosslock.
x86/ACPI/boot: Use try_cmpxchg() in __acpi_{acquire,release}_global_lock()
x86/PAT: Use try_cmpxchg() in set_page_memtype()
locking/rwsem: Disable preemption in all down_write*() and up_write() code paths
locking/rwsem: Disable preemption in all down_read*() and up_read() code paths
locking/rwsem: Prevent non-first waiter from spinning in down_write() slowpath
locking/qspinlock: Micro-optimize pending state waiting for unlock
|
|
git://anongit.freedesktop.org/drm/drm-intel into drm-next
drm/i915 fixes for the v6.3 merge window:
- Fix eDP+DSI dual panel systems
- Fix system suspend when fbdev isn't initialized
- Fix memory leaks in scatterlist
- Fix some MCR register annotations
- Fix documentation build warnings
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/87v8k0xyx4.fsf@intel.com
|
|
Hardware requires a 64-byte alignment to return ASO data. Missing
this alignment caused unpredictable results when ASO events were
generated.
Fixes: 8518d05b8f9a ("net/mlx5e: Create Advanced Steering Operation object for IPsec")
Reported-by: Emeel Hakim <ehakim@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/de0302c572b90c9224a72868d4e0d657b6313c4b.1676797613.git.leon@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge tag 'mlx5-updates-2023-02-15' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2023-02-15
1) From Gal, Tariq and Parav: a few cleanups for the mlx5 driver.
2) From Vlad: Allow offloading of ct 'new' match based on [1]
[1] https://lore.kernel.org/netdev/20230201163100.1001180-1-vladbu@nvidia.com/
* tag 'mlx5-updates-2023-02-15' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
net/mlx5e: RX, Remove doubtful unlikely call
net/mlx5e: Fix outdated TLS comment
net/mlx5e: Remove unused function mlx5e_sq_xmit_simple
net/mlx5e: Allow offloading of ct 'new' match
net/mlx5e: Implement CT entry update
net/mlx5: Simplify eq list traversal
net/mlx5e: Remove redundant page argument in mlx5e_xdp_handle()
net/mlx5e: Remove redundant page argument in mlx5e_xmit_xdp_buff()
net/mlx5e: Switch to using napi_build_skb()
====================
Link: https://lore.kernel.org/r/20230218090513.284718-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Currently, CT misses restore the missed chain on the tc skb extension so
tc will continue from the relevant chain. Instead, restore the CT action's
miss cookie on the extension, which will instruct tc to continue from this
specific CT action instance on the relevant filter's action list.
Map the CT action's miss_cookie to a new miss object (ACT_MISS), and use
this miss mapping instead of the current chain miss object (CHAIN_MISS)
for CT action misses.
To restore this new miss mapping value, add a RX restore rule for each
such mapping value.
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
This reg usage is always a mapped object, not necessarily
containing chain info.
Rename to properly convey what it stores.
This patch doesn't change any functionality.
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Move tc miss handling code to en_tc.c, and remove
duplicate code.
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Tc skb extension is a basic requirement for using tc
offload to support correct restoration on action miss.
Depend on it.
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
struct tc_action->act_cookie is a user defined cookie,
and the related struct flow_action_entry->act_cookie is
used as a handle similar to struct flow_cls_offload->cookie.
Rename tc_action->act_cookie to user_cookie, and
flow_action_entry->act_cookie to cookie so their names
would better fit their usage.
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Merge tag 'ieee802154-for-net-next-2023-02-20' of git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan-next
Stefan Schmidt says:
====================
pull-request: ieee802154-next 2023-02-20
Miquel Raynal built upon his earlier work and introduced two new
features into the ieee802154 stack: beaconing to announce existing
PANs and passive scanning to discover the beacons and associated
PANs. The matching changes to the userspace configuration tool
have been posted as well and will be released together with the
kernel release.
Arnd Bergmann and Dmitry Torokhov worked on converting the
at86rf230 and cc2520 drivers away from the unused platform_data
usage and towards the new gpiod API. (I had to add a revert as
Dmitry found a regression on an already pushed tree on my side).
Changes since v1 (pull request 2023-02-02)
- Netlink API extack and NLA_POLICY* usage as suggested by Jakub
- Removed always true condition found by kernel test robot
- Simplify device removal with running background job for scanning
- Fix problems with beacon sending in some cases by using the MLME
tx path
* tag 'ieee802154-for-net-next-2023-02-20' of git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan-next:
ieee802154: Drop device trackers
mac802154: Fix an always true condition
mac802154: Send beacons using the MLME Tx path
ieee802154: Change error code on monitor scan netlink request
ieee802154: Convert scan error messages to extack
ieee802154: Use netlink policies when relevant on scan parameters
ieee802154: at86rf230: switch to using gpiod API
ieee802154: at86rf230: drop support for platform data
Revert "at86rf230: convert to gpio descriptors"
cc2520: move to gpio descriptors
mac802154: Avoid superfluous endianness handling
at86rf230: convert to gpio descriptors
mac802154: Handle basic beaconing
ieee802154: Add support for user beaconing requests
mac802154: Handle passive scanning
mac802154: Add MLME Tx locked helpers
mac802154: Prepare forcing specific symbol duration
ieee802154: Introduce a helper to validate a channel
ieee802154: Define a beacon frame header
ieee802154: Add support for user scanning requests
====================
Link: https://lore.kernel.org/r/20230220213749.386451-1-stefan@datenfreihafen.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Add an embarrassingly missed semicolon plus an embarrassingly missed
parenthesis breaking the kernel build when CONFIG_RTC_LIB is not set,
like the one reported with the ia64 config.
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/oe-kbuild-all/202302170047.EjCPizu3-lkp@intel.com/
Fixes: 14743ddd2495 ("sfc: add devlink info support for ef100")
Signed-off-by: Alejandro Lucero <alejandro.lucero-palau@amd.com>
Link: https://lore.kernel.org/r/20230220110133.29645-1-alejandro.lucero-palau@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Fix some inconsistent indenting and remove the warning below:
drivers/net/ethernet/sfc/mae.c:657 efx_mae_enumerate_mports() warn: inconsistent indenting
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=4117
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
Link: https://lore.kernel.org/r/20230220065958.52941-1-yang.lee@linux.alibaba.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The call "skb_copy_from_linear_data(skb, inl + 1, spc)" triggers a FORTIFY
memcpy() warning on ppc64 platform:
In function ‘fortify_memcpy_chk’,
inlined from ‘skb_copy_from_linear_data’ at ./include/linux/skbuff.h:4029:2,
inlined from ‘build_inline_wqe’ at drivers/net/ethernet/mellanox/mlx4/en_tx.c:722:4,
inlined from ‘mlx4_en_xmit’ at drivers/net/ethernet/mellanox/mlx4/en_tx.c:1066:3:
./include/linux/fortify-string.h:513:25: error: call to ‘__write_overflow_field’ declared with
attribute warning: detected write beyond size of field (1st parameter); maybe use struct_group()?
[-Werror=attribute-warning]
513 | __write_overflow_field(p_size_field, size);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can get the same behaviour on x86 if you use "__always_inline" instead
of "inline" for skb_copy_from_linear_data() in skbuff.h.
The call here copies data into an inlined tx descriptor, which has 104
bytes (MAX_INLINE) of space for data payload. In this case "spc" is known
at compile time, but the destination is used with hidden knowledge
(the real structure of the destination is different from what the compiler
can see). That causes the fortify warning because the compiler can check
bounds, but the real bounds are different. "spc" can't be bigger than
64 bytes (MLX4_INLINE_ALIGN), so the data can always fit into the inlined
tx descriptor. The fact that "inl" points into the inlined tx descriptor is
determined earlier in mlx4_en_xmit().
Avoid confusing the compiler with "inl + 1" constructions to get past
the inl header by introducing a flexible array "data" to the struct, so
that the compiler can see that we are not dealing with an array of inl
structs but rather with arbitrary data following the structure. There are
no changes to the structure layout reported by pahole, and the resulting
machine code is actually smaller.
Reported-by: Josef Oskera <joskera@redhat.com>
Link: https://lore.kernel.org/lkml/20230217094541.2362873-1-joskera@redhat.com
Fixes: f68f2ff91512 ("fortify: Detect struct member overflows in memcpy() at compile-time")
Cc: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20230218183842.never.954-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
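A self-contained userspace illustration of the flexible-array approach
described above, using mock types rather than the real mlx4 structures:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Mock of an inline segment: a header followed by arbitrary payload. */
struct inline_seg {
	uint32_t byte_count;
	uint8_t  data[];	/* flexible array: payload follows the header */
};

int main(void)
{
	const char payload[] = "example payload";
	struct inline_seg *inl = malloc(sizeof(*inl) + sizeof(payload));

	if (!inl)
		return 1;
	/* The old "(inl + 1)" style hid the real bounds from the compiler;
	 * copying into the flexible array keeps fortified memcpy() happy. */
	memcpy(inl->data, payload, sizeof(payload));
	printf("%s\n", inl->data);
	free(inl);
	return 0;
}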
|
|
When doing timestamping in lan966x with PROVE_LOCKING
enabled, the following warning is shown.
========================================================
WARNING: possible irq lock inversion dependency detected
6.2.0-rc7-01749-gc54e1f7f7e36 #2786 Tainted: G N
--------------------------------------------------------
swapper/0/0 just changed the state of lock:
c2609f50 (_xmit_ETHER#2){+.-.}-{2:2}, at: sch_direct_xmit+0x16c/0x2e8
but this lock took another, SOFTIRQ-unsafe lock in the past:
(&lan966x->ptp_ts_id_lock){+.+.}-{2:2}
and interrupts could create inverse lock ordering between them.
other info that might help us debug this:
Possible interrupt unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&lan966x->ptp_ts_id_lock);
local_irq_disable();
lock(_xmit_ETHER#2);
lock(&lan966x->ptp_ts_id_lock);
<Interrupt>
lock(_xmit_ETHER#2);
*** DEADLOCK ***
5 locks held by swapper/0/0:
#0: c1001e18 ((&ndev->rs_timer)){+.-.}-{0:0}, at: call_timer_fn+0x0/0x33c
#1: c105e7c4 (rcu_read_lock){....}-{1:2}, at: ndisc_send_skb+0x134/0x81c
#2: c105e7d8 (rcu_read_lock_bh){....}-{1:2}, at: ip6_finish_output2+0x17c/0xc64
#3: c105e7d8 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x4c/0x1224
#4: c3056174 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_queue_xmit+0x354/0x1224
the shortest dependencies between 2nd lock and 1st lock:
-> (&lan966x->ptp_ts_id_lock){+.+.}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire.part.0+0xb0/0x248
_raw_spin_lock+0x38/0x48
lan966x_ptp_irq_handler+0x164/0x2a8
irq_thread_fn+0x1c/0x78
irq_thread+0x130/0x278
kthread+0xec/0x110
ret_from_fork+0x14/0x28
SOFTIRQ-ON-W at:
lock_acquire.part.0+0xb0/0x248
_raw_spin_lock+0x38/0x48
lan966x_ptp_irq_handler+0x164/0x2a8
irq_thread_fn+0x1c/0x78
irq_thread+0x130/0x278
kthread+0xec/0x110
ret_from_fork+0x14/0x28
INITIAL USE at:
lock_acquire.part.0+0xb0/0x248
_raw_spin_lock_irqsave+0x4c/0x68
lan966x_ptp_txtstamp_request+0x128/0x1cc
lan966x_port_xmit+0x224/0x43c
dev_hard_start_xmit+0xa8/0x2f0
sch_direct_xmit+0x108/0x2e8
__dev_queue_xmit+0x41c/0x1224
packet_sendmsg+0xdb4/0x134c
__sys_sendto+0xd0/0x154
sys_send+0x18/0x20
ret_fast_syscall+0x0/0x1c
}
... key at: [<c174ba0c>] __key.2+0x0/0x8
... acquired at:
_raw_spin_lock_irqsave+0x4c/0x68
lan966x_ptp_txtstamp_request+0x128/0x1cc
lan966x_port_xmit+0x224/0x43c
dev_hard_start_xmit+0xa8/0x2f0
sch_direct_xmit+0x108/0x2e8
__dev_queue_xmit+0x41c/0x1224
packet_sendmsg+0xdb4/0x134c
__sys_sendto+0xd0/0x154
sys_send+0x18/0x20
ret_fast_syscall+0x0/0x1c
-> (_xmit_ETHER#2){+.-.}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire.part.0+0xb0/0x248
_raw_spin_lock+0x38/0x48
netif_freeze_queues+0x38/0x68
dev_deactivate_many+0xac/0x388
dev_deactivate+0x38/0x6c
linkwatch_do_dev+0x70/0x8c
__linkwatch_run_queue+0xd4/0x1e8
linkwatch_event+0x24/0x34
process_one_work+0x284/0x744
worker_thread+0x28/0x4bc
kthread+0xec/0x110
ret_from_fork+0x14/0x28
IN-SOFTIRQ-W at:
lock_acquire.part.0+0xb0/0x248
_raw_spin_lock+0x38/0x48
sch_direct_xmit+0x16c/0x2e8
__dev_queue_xmit+0x41c/0x1224
ip6_finish_output2+0x5f4/0xc64
ndisc_send_skb+0x4cc/0x81c
addrconf_rs_timer+0xb0/0x2f8
call_timer_fn+0xb4/0x33c
expire_timers+0xb4/0x10c
run_timer_softirq+0xf8/0x2a8
__do_softirq+0xd4/0x5fc
__irq_exit_rcu+0x138/0x17c
irq_exit+0x8/0x28
__irq_svc+0x90/0xbc
arch_cpu_idle+0x30/0x3c
default_idle_call+0x44/0xac
do_idle+0xc8/0x138
cpu_startup_entry+0x18/0x1c
rest_init+0xcc/0x168
arch_post_acpi_subsys_init+0x0/0x8
INITIAL USE at:
lock_acquire.part.0+0xb0/0x248
_raw_spin_lock+0x38/0x48
netif_freeze_queues+0x38/0x68
dev_deactivate_many+0xac/0x388
dev_deactivate+0x38/0x6c
linkwatch_do_dev+0x70/0x8c
__linkwatch_run_queue+0xd4/0x1e8
linkwatch_event+0x24/0x34
process_one_work+0x284/0x744
worker_thread+0x28/0x4bc
kthread+0xec/0x110
ret_from_fork+0x14/0x28
}
... key at: [<c175974c>] netdev_xmit_lock_key+0x8/0x1c8
... acquired at:
__lock_acquire+0x978/0x2978
lock_acquire.part.0+0xb0/0x248
_raw_spin_lock+0x38/0x48
sch_direct_xmit+0x16c/0x2e8
__dev_queue_xmit+0x41c/0x1224
ip6_finish_output2+0x5f4/0xc64
ndisc_send_skb+0x4cc/0x81c
addrconf_rs_timer+0xb0/0x2f8
call_timer_fn+0xb4/0x33c
expire_timers+0xb4/0x10c
run_timer_softirq+0xf8/0x2a8
__do_softirq+0xd4/0x5fc
__irq_exit_rcu+0x138/0x17c
irq_exit+0x8/0x28
__irq_svc+0x90/0xbc
arch_cpu_idle+0x30/0x3c
default_idle_call+0x44/0xac
do_idle+0xc8/0x138
cpu_startup_entry+0x18/0x1c
rest_init+0xcc/0x168
arch_post_acpi_subsys_init+0x0/0x8
stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Tainted: G N 6.2.0-rc7-01749-gc54e1f7f7e36 #2786
Hardware name: Generic DT based system
unwind_backtrace from show_stack+0x10/0x14
show_stack from dump_stack_lvl+0x58/0x70
dump_stack_lvl from mark_lock.part.0+0x59c/0x93c
mark_lock.part.0 from __lock_acquire+0x978/0x2978
__lock_acquire from lock_acquire.part.0+0xb0/0x248
lock_acquire.part.0 from _raw_spin_lock+0x38/0x48
_raw_spin_lock from sch_direct_xmit+0x16c/0x2e8
sch_direct_xmit from __dev_queue_xmit+0x41c/0x1224
__dev_queue_xmit from ip6_finish_output2+0x5f4/0xc64
ip6_finish_output2 from ndisc_send_skb+0x4cc/0x81c
ndisc_send_skb from addrconf_rs_timer+0xb0/0x2f8
addrconf_rs_timer from call_timer_fn+0xb4/0x33c
call_timer_fn from expire_timers+0xb4/0x10c
expire_timers from run_timer_softirq+0xf8/0x2a8
run_timer_softirq from __do_softirq+0xd4/0x5fc
__do_softirq from __irq_exit_rcu+0x138/0x17c
__irq_exit_rcu from irq_exit+0x8/0x28
irq_exit from __irq_svc+0x90/0xbc
Exception stack(0xc1001f20 to 0xc1001f68)
1f20: ffffffff ffffffff 00000001 c011f840 c100e000 c100e000 c1009314 c1009370
1f40: c10f0c1a c0d5e564 c0f5da8c 00000000 00000000 c1001f70 c010f0bc c010f0c0
1f60: 600f0013 ffffffff
__irq_svc from arch_cpu_idle+0x30/0x3c
arch_cpu_idle from default_idle_call+0x44/0xac
default_idle_call from do_idle+0xc8/0x138
do_idle from cpu_startup_entry+0x18/0x1c
cpu_startup_entry from rest_init+0xcc/0x168
rest_init from arch_post_acpi_subsys_init+0x0/0x8
Fix this by using spin_lock_irqsave()/spin_unlock_irqrestore() also
inside lan966x_ptp_irq_handler().
Fixes: e85a96e48e33 ("net: lan966x: Add support for ptp interrupts")
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Link: https://lore.kernel.org/r/20230217210917.2649365-1-horatiu.vultur@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
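A short sketch of the locking pattern the fix applies inside the interrupt
handler (simplified, not the exact driver code):

/* Hedged sketch: take ptp_ts_id_lock with interrupts disabled so it can no
 * longer invert against locks taken from (soft)irq context on the xmit path. */
unsigned long flags;

spin_lock_irqsave(&lan966x->ptp_ts_id_lock, flags);
/* ... process the pending timestamp entries ... */
spin_unlock_irqrestore(&lan966x->ptp_ts_id_lock, flags);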
|
|
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2023-02-17
We've added 64 non-merge commits during the last 7 day(s) which contain
a total of 158 files changed, 4190 insertions(+), 988 deletions(-).
The main changes are:
1) Add a rbtree data structure following the "next-gen data structure"
precedent set by recently-added linked-list, that is, by using
kfunc + kptr instead of adding a new BPF map type, from Dave Marchevsky.
2) Add a new benchmark for hashmap lookups to BPF selftests,
from Anton Protopopov.
3) Fix bpf_fib_lookup to only return valid neighbors and add an option
to skip the neigh table lookup, from Martin KaFai Lau.
4) Add cgroup.memory=nobpf kernel parameter option to disable BPF memory
accounting for container environments, from Yafang Shao.
5) Batch of ice multi-buffer and driver performance fixes,
from Alexander Lobakin.
6) Fix a bug in determining whether global subprog's argument is
PTR_TO_CTX, which is based on type names which breaks kprobe progs,
from Andrii Nakryiko.
7) Prep work for future -mcpu=v4 LLVM option which includes usage of
BPF_ST insn. Thus improve BPF_ST-related value tracking in verifier,
from Eduard Zingerman.
8) More prep work for later building selftests with Memory Sanitizer
in order to detect usages of undefined memory, from Ilya Leoshkevich.
9) Fix xsk sockets to check IFF_UP earlier to avoid a NULL pointer
dereference via sendmsg(), from Maciej Fijalkowski.
10) Implement BPF trampoline for RV64 JIT compiler, from Pu Lehui.
11) Fix BPF memory allocator in combination with BPF hashtab where it could
corrupt special fields e.g. used in bpf_spin_lock, from Hou Tao.
12) Fix LoongArch BPF JIT to always use 4 instructions for function
address so that instruction sequences don't change between passes,
from Hengqi Chen.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (64 commits)
selftests/bpf: Add bpf_fib_lookup test
bpf: Add BPF_FIB_LOOKUP_SKIP_NEIGH for bpf_fib_lookup
riscv, bpf: Add bpf trampoline support for RV64
riscv, bpf: Add bpf_arch_text_poke support for RV64
riscv, bpf: Factor out emit_call for kernel and bpf context
riscv: Extend patch_text for multiple instructions
Revert "bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES"
selftests/bpf: Add global subprog context passing tests
selftests/bpf: Convert test_global_funcs test to test_loader framework
bpf: Fix global subprog context argument resolution logic
LoongArch, bpf: Use 4 instructions for function address in JIT
bpf: bpf_fib_lookup should not return neigh in NUD_FAILED state
bpf: Disable bh in bpf_test_run for xdp and tc prog
xsk: check IFF_UP earlier in Tx path
Fix typos in selftest/bpf files
selftests/bpf: Use bpf_{btf,link,map,prog}_get_info_by_fd()
samples/bpf: Use bpf_{btf,link,map,prog}_get_info_by_fd()
bpftool: Use bpf_{btf,link,map,prog}_get_info_by_fd()
libbpf: Use bpf_{btf,link,map,prog}_get_info_by_fd()
libbpf: Introduce bpf_{btf,link,map,prog}_get_info_by_fd()
...
====================
Link: https://lore.kernel.org/r/20230217221737.31122-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
This patch implements features provisioning for mlx5_vdpa.
1) Validate that the provisioned features are a subset of the parent
features.
2) Clear features that are not wanted by userspace.
For example:
# vdpa mgmtdev show
pci/0000:41:04.2:
supported_classes net
max_supported_vqs 65
dev_features CSUM GUEST_CSUM MTU MAC HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
1) Provision vDPA device with all features derived from the parent
# vdpa dev add name vdpa1 mgmtdev pci/0000:41:04.2
# vdpa dev config show
vdpa1: mac e4:11:c6:d3:45:f0 link up link_announce false max_vq_pairs 1 mtu 1500
negotiated_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6 STATUS CTRL_VQ CTRL_VLAN MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
2) Provision vDPA device with a subset of parent features
# vdpa dev add name vdpa1 mgmtdev pci/0000:41:04.2 device_features 0x300020000
# vdpa dev config show
vdpa1:
negotiated_features CTRL_VQ VERSION_1 ACCESS_PLATFORM
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Message-Id: <1675725124-7375-7-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
The spec says:
mtu only exists if VIRTIO_NET_F_MTU is set
status only exists if VIRTIO_NET_F_STATUS is set
We should present MTU and STATUS only if the corresponding
feature bits are set.
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Message-Id: <1675725124-7375-6-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Today, when device features are explicitly provisioned, the user-supplied
features may contain device-class-specific features that are
not supported by the parent management device. On the other hand,
when the parent management device supports more than one class, the
device features to provision may be ambiguous if none of the class-specific
attributes is provided at the same time. Validate these
cases and report appropriate errors to the user accordingly.
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Message-Id: <1675725124-7375-5-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
With device feature provisioning, there's a chance for misconfiguration
in which a vdpa attribute supplied in the 'vdpa dev add' command doesn't
get selected in the device_features to be provisioned. For instance, when
a @mac attribute is specified, the corresponding feature bit _F_MAC in
device_features should be set for consistency. If a provisioned feature
conflicts with an attribute, it should be treated as an error and fail
the ambiguous command. Note that the opposite is not necessarily true;
e.g. it's okay to have _F_MAC set in device_features without providing a
corresponding @mac attribute, in which case the vdpa vendor driver could
load a default value for an attribute that is not explicitly specified.
Generalize this check in the vdpa core so that there's no duplicate code
in each vendor driver.
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Message-Id: <1675725124-7375-4-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
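A sketch of the kind of consistency check being generalized, assuming the
netlink config carries a mask of supplied attributes and the provisioned
device_features (the names are illustrative, not the exact vdpa core code):

/* Hedged sketch: a supplied @mac attribute must be matched by _F_MAC in the
 * provisioned device_features, otherwise the ambiguous command is rejected. */
if ((config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR)) &&
    !(config->device_features & BIT_ULL(VIRTIO_NET_F_MAC)))
	return -EINVAL;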
|
|
The spec says:
status only exists if VIRTIO_NET_F_STATUS is set
Similar to MAC and MTU, vdpa_dev_net_config_fill() should read
STATUS conditionally depending on the feature bits.
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Message-Id: <1675725124-7375-3-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
In the example below, before the fix, the mtu attribute is supported
by the parent mgmtdev, but the error message "All
provided attributes are not supported" is just misleading.
$ vdpa mgmtdev show
vdpasim_net:
supported_classes net
max_supported_vqs 3
dev_features MTU MAC CTRL_VQ CTRL_MAC_ADDR ANY_LAYOUT VERSION_1 ACCESS_PLATFORM
$ vdpa dev add mgmtdev vdpasim_net name vdpasim0 mtu 5000 max_vqp 2
Error: vdpa: All provided attributes are not supported.
kernel answers: Operation not supported
After the fix, the relevant error message will look like:
$ vdpa dev add mgmtdev vdpasim_net name vdpasim0 mtu 5000 max_vqp 2
Error: vdpa: Some provided attributes are not supported: 0x1000.
kernel answers: Operation not supported
Fixes: d8ca2fa5be1b ("vdpa: Enable user to set mac and mtu of vdpa device")
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Message-Id: <1675725124-7375-2-git-send-email-si-wei.liu@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Initialize the iotlb spinlock.
Fixes: 5262912ef3cf ("vdpa/mlx5: Add support for control VQ and MAC setting")
Signed-off-by: Eli Cohen <elic@nvidia.com>
Message-Id: <20230206122016.1149373-1-elic@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Clearing the mr struct erases the lock owner and causes warnings to be
emitted. It is not required to clear the mr so remove the memset call.
Fixes: 94abbccdf291 ("vdpa/mlx5: Add shared memory registration code")
Signed-off-by: Eli Cohen <elic@nvidia.com>
Message-Id: <20230206121956.1149356-1-elic@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
When creating a memory key, the key value should be assigned to the
passed pointer and not OR'ed into it.
No functional issue was observed due to this bug.
Fixes: 29064bfdabd5 ("vdpa/mlx5: Add support library for mlx5 VDPA implementation")
Signed-off-by: Eli Cohen <elic@nvidia.com>
Message-Id: <20230205072906.1108194-1-elic@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Add a print explaining why vhost_vdpa_alloc_domain failed if the device
is not IOMMU cache coherent capable.
Without this print, we have no hint why the operation failed.
For example:
$ virsh start <domain>
error: Failed to start domain <domain>
error: Unable to open '/dev/vhost-vdpa-<idx>' for vdpa device:
Unknown error 524
Suggested-by: Eugenio Perez Martin <eperezma@redhat.com>
Signed-off-by: Alvaro Karsz <alvaro.karsz@solid-run.com>
Message-Id: <20230202084212.1328530-1-alvaro.karsz@solid-run.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Eugenio Pérez Martin <eperezma@redhat.com>
|
|
There is no check of the return value of kmalloc() in
virtscsi_rescan_hotunplug(). Add the check to avoid use
of a null pointer 'inq_result' in case kmalloc() fails.
Signed-off-by: Zheng Wang <zyytlz.wz@163.com>
Message-Id: <20230202064124.22277-1-zyytlz.wz@163.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
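A minimal sketch of the added check (simplified; 'inq_result_len' and the
surrounding error handling are assumptions, not the exact driver code):

inq_result = kmalloc(inq_result_len, GFP_KERNEL);
if (!inq_result)
	return;	/* hedged sketch: skip the rescan instead of dereferencing NULL */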
|
|
There are two spelling mistakes in some literal strings. Fix them.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Message-Id: <20230130092644.37002-1-colin.i.king@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
|
|
Add VIRTIO_F_RING_RESET, which indicates that the driver can reset a
queue individually.
VIRTIO_F_RING_RESET feature is added to virtio-spec 1.2. The relevant
information is in
oasis-tcs/virtio-spec#124
oasis-tcs/virtio-spec#139
The implementation only adds the feature bit in supported features. It
does not require any other changes because we reuse the existing vhost
protocol.
The virtqueue reset process can be summarized in two parts:
1. The driver can reset a virtqueue. When this is triggered, we use
set_backend to disable the virtqueue.
2. After the virtqueue is disabled, the driver may optionally re-enable
it. The process is basically similar to when the device is started,
except that the restart process does not need to set features and set
mem table since they do not change. QEMU will send messages containing
size, base, addr, kickfd and callfd of the virtqueue in order.
Specifically, the host kernel will receive these messages in order:
a. VHOST_SET_VRING_NUM
b. VHOST_SET_VRING_BASE
c. VHOST_SET_VRING_ADDR
d. VHOST_SET_VRING_KICK
e. VHOST_SET_VRING_CALL
f. VHOST_NET_SET_BACKEND
Finally, after we use set_backend to attach the virtqueue, the virtqueue
will be enabled and start to work.
Signed-off-by: Kangjie Xu <kangjie.xu@linux.alibaba.com>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Message-Id: <20220825085610.80315-1-kangjie.xu@linux.alibaba.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
|
|
Follow the advice of Documentation/filesystems/sysfs.rst:
show() should only use sysfs_emit() or sysfs_emit_at()
when formatting the value to be returned to user space.
Signed-off-by: Bo Liu <liubo03@inspur.com>
Message-Id: <20230129091145.2837-1-liubo03@inspur.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
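A minimal example of the recommended pattern (generic names, not taken from
this driver):

/* Hedged sketch: a show() callback formatting one value with sysfs_emit(). */
static ssize_t example_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	return sysfs_emit(buf, "%u\n", 1);	/* 1 stands in for the value */
}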
|
|
This patch implements a per-virtqueue DMA device for mlx5_vdpa. This is
needed for virtio_vdpa to work for the CVQ, which is backed by vringh but
not DMA. We simply advertise the vDPA device itself as the DMA device
for the CVQ; then the DMA API can simply use the PA, so the identical mapping
for the CVQ can still be used. Otherwise the identical (1:1) mapping won't
work when the platform IOMMU is enabled, since the IOVA is allocated on
demand and is not necessarily the PA.
This fixes the following crash when mlx5 vDPA device is bound to
virtio-vdpa with platform IOMMU enabled but not in passthrough mode:
BUG: unable to handle page fault for address: ff2fb3063deb1002
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 1393001067 P4D 1393002067 PUD 0
Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 55 PID: 8923 Comm: kworker/u112:3 Kdump: loaded Not tainted 6.1.0+ #7
Hardware name: Dell Inc. PowerEdge R750/0PJ80M, BIOS 1.5.4 12/17/2021
Workqueue: mlx5_vdpa_wq mlx5_cvq_kick_handler [mlx5_vdpa]
RIP: 0010:vringh_getdesc_iotlb+0x93/0x1d0 [vringh]
Code: 14 25 40 ef 01 00 83 82 c0 0a 00 00 01 48 2b 05 93 5a 1b ea 8b 4c 24 14 48 c1 f8 06 48 c1 e0 0c 48 03 05 90 5a 1b ea 48 01 c8 <0f> b7 00 83 aa c0 0a 00 00 01 65 ff 0d bc e4 41 3f 0f 84 05 01 00
RSP: 0018:ff46821ba664fdf8 EFLAGS: 00010282
RAX: ff2fb3063deb1002 RBX: 0000000000000a20 RCX: 0000000000000002
RDX: ff2fb318d2f94380 RSI: 0000000000000002 RDI: 0000000000000001
RBP: ff2fb3065e832410 R08: ff46821ba664fe00 R09: 0000000000000001
R10: 0000000000000000 R11: 000000000000000d R12: ff2fb3065e832488
R13: ff2fb3065e8324a8 R14: ff2fb3065e8324c8 R15: ff2fb3065e8324a8
FS: 0000000000000000(0000) GS:ff2fb3257fac0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ff2fb3063deb1002 CR3: 0000001392010006 CR4: 0000000000771ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
<TASK>
mlx5_cvq_kick_handler+0x89/0x2b0 [mlx5_vdpa]
process_one_work+0x1e2/0x3b0
? rescuer_thread+0x390/0x390
worker_thread+0x50/0x3a0
? rescuer_thread+0x390/0x390
kthread+0xd6/0x100
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x1f/0x30
</TASK>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Tested-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230119061525.75068-6-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
Set the DMA mask for the vDPA device in case there are virtqueues that
are not backed by DMA, so the vDPA device can be advertised as the DMA
device used by the DMA API for software-emulated virtqueues.
Reviewed-by: Eli Cohen <elic@nvidia.com>
Tested-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230119061525.75068-5-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
This patch adds support for a per-vq DMA device for virtio-vDPA. vDPA
parents are then allowed to use different DMA devices. This is useful
for parents that have software or emulated virtqueues.
Reviewed-by: Eli Cohen <elic@nvidia.com>
Tested-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230119061525.75068-4-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
This patch introduces a per-virtqueue DMA device. This will be used
for virtio devices whose virtqueues are backed by different underlying
devices.
One example is vDPA, where the control virtqueue could be
implemented through software mediation.
Some of the work was actually done before via helpers like
vring_dma_device(). The work left is:
- Let vring_dma_device() return the per-virtqueue DMA device instead
of the vdev's parent.
- Allow passing a dma_device when creating the virtqueue through a new
helper; the old vring creation helpers keep using the vdev's parent.
Reviewed-by: Eli Cohen <elic@nvidia.com>
Tested-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230119061525.75068-2-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
"enabled" is defined in vhost_init_device_iotlb,
but it is never used. Let's remove it.
Signed-off-by: Liming Wu <liming.wu@jaguarmicro.com>
Message-Id: <20230110024445.303-1-liming.wu@jaguarmicro.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Eugenio Pérez <eperezma@redhat.com>
|
|
Remove a printk as it is meaningless.
Signed-off-by: Liming Wu <liming.wu@jaguarmicro.com>
Message-Id: <20230105070357.274-1-liming.wu@jaguarmicro.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
|
|
We used to (ab)use the DMA ops for setting up identical mappings in
the IOTLB. This patch tries to get rid of those unnecessary DMA
ops by maintaining a simple identical/passthrough mapping by
default. When bound to the virtio_vdpa driver, the DMA API will simply use
the PA as the IOVA and we will be all fine. When the vDPA bus tries to set up
a customized mapping (e.g. when bound to vhost-vDPA), the
identical/passthrough mapping will be removed.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221223060021.28011-1-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Christoph Hellwig <hch@lst.de>
|
|
This patch adds support for basic vendor stats that include counters
for tx, rx and cvq.
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221223055548.27810-5-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
|
|
This patch adds a new config ops callback to allow individual
simulator to implement the vendor stats callback.
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221223055548.27810-4-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
|
|
Allow individual simulator to customize the allocation size.
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221223055548.27810-3-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
This allows us to control the allocation size of the structure.
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221223055548.27810-2-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
vDPA simulators are software-emulated devices, so let's switch to using
weak barriers to avoid extra overhead in the driver.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20221221062146.15356-1-jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
|