Add a new chunk for expressing a dependency on the start of a job's
execution instead of on its end. This is used for GPU deadlock
prevention when userspace uses mid-IB fences to wait for mid-IB work on
other rings.
v2: Fix typo in AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES
v3: Bump KMS version
v4: put old fence AFTER acquiring the scheduled fence.
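For illustration, a minimal userspace sketch of the new chunk; the
dependency payload reuses the existing chunk_dep layout, and ctx_id /
seq_no are placeholder variables:

  struct drm_amdgpu_cs_chunk_dep dep = {
          .ip_type     = AMDGPU_HW_IP_GFX,
          .ip_instance = 0,
          .ring        = 0,
          .ctx_id      = ctx_id, /* context owning the fence */
          .handle      = seq_no, /* fence sequence number to wait on */
  };
  struct drm_amdgpu_cs_chunk chunk = {
          .chunk_id   = AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES,
          .length_dw  = sizeof(dep) / 4,
          .chunk_data = (uintptr_t)&dep,
  };

The dependency is then satisfied as soon as the referenced job starts
executing, rather than when it completes.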
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Suggested-by: Christian Koenig <Christian.Koenig@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
Since commit 39b50c603878 ("drm/atomic_helper: Stop modesets on
unregistered connectors harder"), we've been failing atomic checks if
they try to enable new displays on unregistered connectors. This is fine
except for the one situation that breaks atomic assumptions:
suspend/resume. If a connector is unregistered before we attempt to
restore the atomic state, we end up failing the atomic check that
happens when trying to restore the state during resume.
Normally this would be OK: we try our best to make sure that the atomic
state pre-suspend can be restored post-suspend, but failures at that
point usually don't cause problems. That is of course, until we
introduced the new atomic MST VCPI helpers:
[drm:drm_atomic_helper_check_modeset [drm_kms_helper]] [CRTC:65:pipe B] active changed
[drm:drm_atomic_helper_check_modeset [drm_kms_helper]] Updating routing for [CONNECTOR:123:DP-5]
[drm:drm_atomic_helper_check_modeset [drm_kms_helper]] Disabling [CONNECTOR:123:DP-5]
[drm:drm_atomic_get_private_obj_state [drm]] Added new private object 0000000025844636 state 000000009fd2899a to 000000003a13d7b8
WARNING: CPU: 6 PID: 1070 at drivers/gpu/drm/drm_dp_mst_topology.c:3153 drm_dp_atomic_release_vcpi_slots+0xb9/0x200 [drm_kms_helper]
Modules linked in: fuse vfat fat snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic joydev iTCO_wdt i915(O) wmi_bmof intel_rapl btusb btrtl x86_pkg_temp_thermal btbcm btintel coretemp i2c_algo_bit drm_kms_helper(O) crc32_pclmul snd_hda_intel syscopyarea sysfillrect snd_hda_codec sysimgblt snd_hda_core bluetooth fb_sys_fops snd_pcm pcspkr drm(O) psmouse snd_timer mei_me ecdh_generic i2c_i801 mei i2c_core ucsi_acpi typec_ucsi typec wmi thinkpad_acpi ledtrig_audio snd soundcore tpm_tis rfkill tpm_tis_core video tpm acpi_pad pcc_cpufreq uas usb_storage crc32c_intel nvme serio_raw xhci_pci nvme_core xhci_hcd
CPU: 6 PID: 1070 Comm: gnome-shell Tainted: G W O 5.0.0-rc2Lyude-Test+ #1
Hardware name: LENOVO 20L8S2N800/20L8S2N800, BIOS N22ET35W (1.12 ) 04/09/2018
RIP: 0010:drm_dp_atomic_release_vcpi_slots+0xb9/0x200 [drm_kms_helper]
Code: 00 4c 39 6d f0 74 49 48 8d 7b 10 48 89 f9 48 c1 e9 03 42 80 3c 21 00 0f 85 d2 00 00 00 48 8b 6b 10 48 8d 5d f0 49 39 ee 75 c5 <0f> 0b 48 c7 c7 c0 78 b3 a0 48 89 c2 4c 89 ee e8 03 6c aa ff b8 ea
RSP: 0018:ffff88841235f268 EFLAGS: 00010246
RAX: ffff88841bf12ab0 RBX: ffff88841bf12aa8 RCX: 1ffff110837e2557
RDX: dffffc0000000000 RSI: 0000000000000000 RDI: ffffed108246bde0
RBP: ffff88841bf12ab8 R08: ffffed1083db3c93 R09: ffffed1083db3c92
R10: ffffed1083db3c92 R11: ffff88841ed9e497 R12: ffff888419555d80
R13: ffff8883bc499100 R14: ffff88841bf12ab8 R15: 0000000000000000
FS: 00007f16fbd4cd00(0000) GS:ffff88841ed80000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1687c9f000 CR3: 00000003ba3cc003 CR4: 00000000003606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
drm_atomic_helper_check_modeset+0xf21/0x2f50 [drm_kms_helper]
? drm_atomic_helper_commit_modeset_enables+0xa90/0xa90 [drm_kms_helper]
? __printk_safe_exit+0x10/0x10
? save_stack+0x8c/0xb0
? vprintk_func+0x96/0x1bf
? __printk_safe_exit+0x10/0x10
intel_atomic_check+0x234/0x4750 [i915]
? printk+0x9f/0xc5
? kmsg_dump_rewind_nolock+0xd9/0xd9
? _raw_spin_lock_irqsave+0xa4/0x140
? drm_atomic_check_only+0xb1/0x28b0 [drm]
? drm_dbg+0x186/0x1b0 [drm]
? drm_dev_dbg+0x200/0x200 [drm]
? intel_link_compute_m_n+0xb0/0xb0 [i915]
? drm_mode_put_tile_group+0x20/0x20 [drm]
? skl_plane_format_mod_supported+0x17f/0x1b0 [i915]
? drm_plane_check_pixel_format+0x14a/0x310 [drm]
drm_atomic_check_only+0x13c4/0x28b0 [drm]
? drm_state_info+0x220/0x220 [drm]
? drm_atomic_helper_disable_plane+0x1d0/0x1d0 [drm_kms_helper]
? pick_single_encoder_for_connector+0xe0/0xe0 [drm_kms_helper]
? kasan_unpoison_shadow+0x35/0x40
drm_atomic_commit+0x3b/0x100 [drm]
drm_atomic_helper_set_config+0xd5/0x100 [drm_kms_helper]
drm_mode_setcrtc+0x636/0x1660 [drm]
? vprintk_func+0x96/0x1bf
? drm_dev_dbg+0x200/0x200 [drm]
? drm_mode_getcrtc+0x790/0x790 [drm]
? printk+0x9f/0xc5
? mutex_unlock+0x1d/0x40
? drm_mode_addfb2+0x2e9/0x3a0 [drm]
? rcu_sync_dtor+0x2e0/0x2e0
? drm_dbg+0x186/0x1b0 [drm]
? set_page_dirty+0x271/0x4d0
drm_ioctl_kernel+0x203/0x290 [drm]
? drm_mode_getcrtc+0x790/0x790 [drm]
? drm_setversion+0x7f0/0x7f0 [drm]
? __switch_to_asm+0x34/0x70
? __switch_to_asm+0x34/0x70
drm_ioctl+0x445/0x950 [drm]
? drm_mode_getcrtc+0x790/0x790 [drm]
? drm_getunique+0x220/0x220 [drm]
? expand_files.part.10+0x920/0x920
do_vfs_ioctl+0x1a1/0x13d0
? ioctl_preallocate+0x2b0/0x2b0
? __fget_light+0x2d6/0x390
? schedule+0xd7/0x2e0
? fget_raw+0x10/0x10
? apic_timer_interrupt+0xa/0x20
? apic_timer_interrupt+0xa/0x20
? rcu_cleanup_dead_rnp+0x2c0/0x2c0
ksys_ioctl+0x60/0x90
__x64_sys_ioctl+0x6f/0xb0
do_syscall_64+0x136/0x440
? syscall_return_slowpath+0x2d0/0x2d0
? do_page_fault+0x89/0x330
? __do_page_fault+0x9c0/0x9c0
? prepare_exit_to_usermode+0x188/0x200
? perf_trace_sys_enter+0x1090/0x1090
? __x64_sys_sigaltstack+0x280/0x280
? __put_user_4+0x1c/0x30
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f16ff89a09b
Code: 0f 1e fa 48 8b 05 ed bd 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d bd bd 0c 00 f7 d8 64 89 01 48
RSP: 002b:00007fff001232b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fff001232f0 RCX: 00007f16ff89a09b
RDX: 00007fff001232f0 RSI: 00000000c06864a2 RDI: 000000000000000b
RBP: 00007fff001232f0 R08: 0000000000000000 R09: 000055a79d484460
R10: 000055a79d44e770 R11: 0000000000000246 R12: 00000000c06864a2
R13: 000000000000000b R14: 0000000000000000 R15: 000055a79d44e770
WARNING: CPU: 6 PID: 1070 at drivers/gpu/drm/drm_dp_mst_topology.c:3153 drm_dp_atomic_release_vcpi_slots+0xb9/0x200 [drm_kms_helper]
---[ end trace d536c05c13c83be2 ]---
[drm:drm_dp_atomic_release_vcpi_slots [drm_kms_helper]] *ERROR* no VCPI for [MST PORT:00000000f9e2b143] found in mst state 000000009fd2899a
This appears to be happening because we destroy the VCPI allocations
when disabling all connected displays while suspending, and those VCPI
allocations don't get restored on resume due to failing to restore the
atomic state.
So, fix this by having drm_atomic_helper_duplicate_state() mark the
duplicated atomic state, indicating that it is being used for suspending
or resuming the system and thus needs to be fixed up by the driver. We
can then use the new state->duplicated flag to tell
update_connector_routing() in drm_atomic_helper_check_modeset() to allow
modesets on unregistered connectors, which allows us to restore atomic
states that contain MST topologies that were removed after the state was
duplicated, thus mostly fixing suspend and resume. This just leaves some
issues introduced with nouveau, which will be addressed next.
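A simplified sketch of the resulting logic (not the verbatim diff):

  /* drm_atomic_helper_duplicate_state() marks the copy: */
  state->duplicated = true;

  /* ...and update_connector_routing() only rejects modesets on
   * unregistered connectors for non-duplicated states: */
  if (!state->duplicated &&
      drm_connector_is_unregistered(connector) &&
      crtc_state->active)
          return -EINVAL;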
Changes since v3:
* Remove ->duplicated hunks that I left in the VCPI helpers by accident.
These don't need to be here, that was the supposed to be the purpose
of the last revision
Changes since v2:
* Remove the changes in this patch to the VCPI helpers, they aren't
needed anymore
Changes since v1:
* Rename suspend_or_resume to duplicated
Signed-off-by: Lyude Paul <lyude@redhat.com>
Fixes: eceae1472467 ("drm/dp_mst: Start tracking per-port VCPI allocations")
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20190202002023.29665-4-lyude@redhat.com
---
This patch integrates TID RDMA WRITE protocol into normal RDMA verbs
framework. The TID RDMA WRITE protocol is an end-to-end protocol
between the hfi1 drivers on two OPA nodes that converts a qualified
RDMA WRITE request into a TID RDMA WRITE request to avoid data copying
on the responder side.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
The s_ack_queue is managed by two pointers into the ring:
r_head_ack_queue and s_tail_ack_queue. r_head_ack_queue is the index of
where the next received request is going to be placed and s_tail_ack_queue
is the entry of the request currently being processed. This works
perfectly fine for normal Verbs as the requests are processed one at a
time and the s_tail_ack_queue is not moved until the request that it
points to is fully completed.
In this fashion, s_tail_ack_queue constantly chases r_head_ack_queue and
the two pointers can easily be used to determine "queue full" and "queue
empty" conditions.
The detection of these two conditions is important in determining when an
old entry can safely be overwritten with a new received request and when
the resources associated with the old request can safely be released.
When pipelined TID RDMA WRITE is introduced into this mix, things look
very different. r_head_ack_queue is still the point at which a newly
received request will be inserted, s_tail_ack_queue is still the
currently processed request. However, with pipelined TID RDMA WRITE
requests, s_tail_ack_queue moves to the next request once all TID RDMA
WRITE responses for that request have been sent. The rest of the protocol
for a particular request is managed by other pointers specific to TID RDMA
- r_tid_tail and r_tid_ack - which point to the entries for which the next
TID RDMA DATA packets are going to arrive and the request for which
the next TID RDMA ACK packets are to be generated, respectively.
What this means is that entries in the ring, which are "behind"
s_tail_ack_queue (entries which s_tail_ack_queue has gone past) are no
longer considered complete. This is where the problem is - a newly
received request could potentially overwrite a still active TID RDMA WRITE
request.
The reason why the TID RDMA pointers trail s_tail_ack_queue is that the
normal Verbs send engine uses s_tail_ack_queue as the pointer for the next
response. Since TID RDMA WRITE responses are processed by the normal Verbs
send engine, s_tail_ack_queue had to be moved to the next entry once all
TID RDMA WRITE response packets were sent to get the desired pipelining
between requests. Doing otherwise would mean that the normal Verbs send
engine would not be able to send the TID RDMA WRITE responses for the next
TID RDMA request until the current one is fully completed.
This patch introduces the s_acked_ack_queue index to point to the next
request to complete on the responder side. For requests other than TID
RDMA WRITE, s_acked_ack_queue should always be kept in sync with
s_tail_ack_queue. For TID RDMA WRITE request, it may fall behind
s_tail_ack_queue.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
This patch adds the functions to build TID RDMA WRITE request.
The work request opcode, packet opcode, and packet formats for TID
RDMA WRITE protocol are also defined in this patch.
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
The RC retry timeout value is based on the estimated time for the
response packet to come back. However, for TID RDMA READ request, due
to the use of header suppression, the driver is normally not notified
for each incoming response packet until the last TID RDMA READ response
packet. Consequently, the retry timeout value should be extended to
cover the transaction time for the entire length of a segment (default
256K) instead of that for a single packet. This patch addresses the
issue by introducing new retry timer functions to account for multiple
packets and wrapper functions for backward compatibility.
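The idea, sketched with hypothetical names:

  /* Scale the per-packet retry timeout so it covers a whole
   * segment's worth of (suppressed) responses. */
  static u32 tid_rdma_read_timeout(u32 packet_timeout,
                                   u32 segment_len, u32 pmtu)
  {
          u32 npkts = DIV_ROUND_UP(segment_len, pmtu);

          return packet_timeout * npkts;
  }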
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
This patch adds the helper functions to build the TID RDMA READ request
on the requester side. The key is to allocate TID resources (TID flow
and TID entries) and send the resource information to the responder side
along with the read request. Since the TID resources are limited, each
TID RDMA READ request has to be split into segments with a default
segment size of 256K. A software flow is allocated to track the data
transaction for each segment. The work request opcode, packet opcode, and
packet formats for TID RDMA READ protocol are also defined in this patch.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
TID entries are used by hfi1 hardware to receive data payload from
incoming packets directly into a user buffer and thus avoid data copying
by software. This patch implements the functions for TID allocation,
freeing, and programming TID RcvArray entries in hardware for kernel
clients. TID entries are managed via lists of TID groups similar to PSM.
Furthermore, to track TID resource allocation for each request, software
flows are also allocated and freed as needed. Since software flows
consume a large amount of memory for tracking TID allocation and freeing,
it is generally desirable to allocate them dynamically in the send queue
and only for TID RDMA requests, but to pre-allocate them for the receive
queue, because the send queue could have thousands of entries while the
receive queue has only a limited number of entries.
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
This patch moves some RC helper functions into a header file so that
they can be called from both RC and TID RDMA functions. In addition,
a common function for rewinding a request is created in rdmavt so that
it can be shared between the qib and hfi1 drivers.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
---
Move the iwpm kdoc comments from the prototype declarations to above
the function bodies. There are no functional changes in this patch.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
Netlink statistics exported by rdma-cm never had any working user space
component published to the mailing list or to any open source
project. Canvassing various proprietary users, and the original requester,
we find that there are no real users of this interface.
This patch simply removes all occurrences of RDMA CM netlink in favour of
the modern nldev implementation, which provides the same information and
is accompanied by a widely used user space component.
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
Add a new clock according to the 3rd ECO design change.
It is the parent clock of the audio clock mux.
Signed-off-by: Weiyi Lu <weiyi.lu@mediatek.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
---
It is normal user behaviour to start, stop, then start a stream
again without closing it. Currently this works for compressed
playback streams but not capture ones.
The states on a compressed capture stream go directly from OPEN to
PREPARED, unlike a playback stream which moves to SETUP and waits
for a write of data before moving to PREPARED. Currently however,
when a stop is sent the state is set to SETUP for both types of
streams. This leaves a capture stream in the situation where a new
start can't be sent as that requires the state to be PREPARED and
a new set_params can't be sent as that requires the state to be
OPEN. The only option being to close the stream, and then reopen.
Correct this issue by allowing snd_compr_drain_notify to set the
state depending on the stream direction, as we already do in
set_params.
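The shape of the fix in snd_compr_drain_notify(), simplified:

  /* pick the post-stop state by direction, as set_params does */
  if (stream->direction == SND_COMPRESS_PLAYBACK)
          stream->runtime->state = SNDRV_PCM_STATE_SETUP;
  else
          stream->runtime->state = SNDRV_PCM_STATE_PREPARED;
  wake_up(&stream->runtime->sleep);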
Fixes: 49bb6402f1aa ("ALSA: compress_core: Add support for capture streams")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
There's no reason to expose struct vring_packed in UAPI - if we do we
won't be able to change or drop it, and it's not part of any interface.
Let's move it to virtio_ring.c
Cc: Tiwei Bie <tiwei.bie@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
There are new types and helpers that are supposed to be used in new code.
As a preparation for getting rid of the legacy types and API functions,
do the conversion here.
Cc: Kirti Wankhede <kwankhede@nvidia.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---
This change adds definitions needed for DP audio compliance testing.
It also adds a missing definition for DP video compliance.
Changes in V2:
-- Delete cover letter for this patch.
-- Move the description from cover letter into patch commit message.
-- Remove DPU from subject prefix
Reviewed-by: Sean Paul <sean@poorly.run>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> # for merging through -msm.
Signed-off-by: Chandan Uddaraju <chandanu@codeaurora.org>
Signed-off-by: Sean Paul <seanpaul@chromium.org>
---
Now that the last user of this hook, denali.c, stopped using it,
we can remove the erase hook from nand_legacy.
I squashed single_erase() because the only difference between
single_erase() and nand_erase_op() is the number of bit shifts.
The status/ret conversion in nand_erase_nand() has been unneeded since
commit eb94555e9e97 ("mtd: nand: use usual return values for the
->erase() hook"). Clean it up now.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Boris Brezillon <bbrezillon@kernel.org>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---
nand_get_device() was complex for apparently no good reason. Let's
replace this locking scheme with 2 mutexes: one attached to the
controller and another one attached to the chip.
Every time the core calls nand_get_device(), it will first lock the
chip and if the chip is not suspended, will then lock the controller.
nand_release_device() will release both locks in the reverse order.
nand_get_device() can sleep, just like the previous implementation,
which means you should never call that from an atomic context.
We also get rid of
- the chip->state field, since all it was used for was flagging the
chip as suspended. We replace it by a field called chip->suspended
and directly set it from nand_suspend/resume()
- the controller->wq and controller->active fields which are no longer
needed since the new controller->lock (now a mutex) guarantees that
all operations are serialized at the controller level
- panic_nand_get_device() which would anyway be a no-op. Talking about
panic write, I keep thinking the rawnand implementation is unsafe
because there's no negotiation with the controller to know when it's
actually done with its previous operation. I don't intend to fix
that here, but that's probably something we should look at, or maybe
we should consider dropping the ->_panic_write() implementation
Last important change to mention: we now return -EBUSY when someone
tries to access a device that has been suspended, and propagate this
error to the upper layer.
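A simplified sketch of the resulting helpers:

  static int nand_get_device(struct nand_chip *chip)
  {
          mutex_lock(&chip->lock);
          if (chip->suspended) {
                  mutex_unlock(&chip->lock);
                  return -EBUSY;
          }
          mutex_lock(&chip->controller->lock);

          return 0;
  }

  static void nand_release_device(struct nand_chip *chip)
  {
          /* release both locks in the reverse order */
          mutex_unlock(&chip->controller->lock);
          mutex_unlock(&chip->lock);
  }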
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---
NMI handling code should be executed between calls to nmi_enter and
nmi_exit.
Add a separate domain handler to properly setup NMI context when handling
an interrupt requested as NMI.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
Provide flow handlers that are NMI safe for interrupts and percpu_devid
interrupts.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
Add support for percpu_devid interrupts treated as NMIs.
Percpu_devid NMIs need to be set up/torn down on each CPU they target.
The same restrictions as for global NMIs still apply for percpu_devid NMIs.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
Add functionality to allocate interrupt lines that will deliver IRQs
as Non-Maskable Interrupts. These allocations are only successful if
the irqchip provides the necessary support and allows NMI delivery for the
interrupt line.
Interrupt lines allocated for NMI delivery must be enabled/disabled through
enable_nmi/disable_nmi_nosync to keep their state consistent.
To treat a PERCPU IRQ as NMI, the interrupt must be neither shared nor threaded,
the irqchip directly managing the IRQ must be the root irqchip and the
irqchip cannot be behind a slow bus.
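A hypothetical usage sketch (the irq number, handler name and device
cookie are placeholders):

  static irqreturn_t my_nmi_handler(int irq, void *dev_id)
  {
          /* NMI context: no sleeping, no regular spinlocks */
          return IRQ_HANDLED;
  }

  static int my_setup_nmi(unsigned int irq, void *dev)
  {
          int err = request_nmi(irq, my_nmi_handler, 0, "my-nmi", dev);

          if (err)
                  return err;
          enable_nmi(irq);
          return 0;
  }

  static void my_teardown_nmi(unsigned int irq, void *dev)
  {
          disable_nmi_nosync(irq);
          free_nmi(irq, dev);
  }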
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
And use kernel style for the remaining comments in the header.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
---
We are using symbols from it, so we should include it directly. Found
after sorting includes in a driver.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
---
This patch adds a new API to provide access to hardware-related data
such as the SoC revision and IDCODE.
Signed-off-by: Nava kishore Manne <nava.manne@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
---
We want to allow userspace to reconfigure the subslice configuration on a
per context basis.
This is required for the functional requirement of shutting down non-VME
enabled sub-slices on Gen11 parts.
To do so, we expose a context parameter to allow adjustment of the RPCS
register stored within the context image (and currently not accessible via
LRI).
If the context is adjusted before first use or whilst idle, the adjustment
is for "free"; otherwise if the context is active we queue a request to do
so (using the kernel context), following all other activity by that
context, which is also marked as a barrier for all following submissions
against the same context.
Since the overhead of device re-configuration during context switching can
be significant, especially in multi-context workloads, we limit this new
uAPI to only support the Gen11 VME use case. In this use case the
device is either fully enabled, or exactly one slice and half of the
subslices are enabled.
Example usage:
  struct drm_i915_gem_context_param_sseu sseu = { };

  struct drm_i915_gem_context_param arg = {
          .param = I915_CONTEXT_PARAM_SSEU,
          .ctx_id = gem_context_create(fd),
          .size = sizeof(sseu),
          .value = to_user_pointer(&sseu)
  };

  /* Query device defaults. */
  gem_context_get_param(fd, &arg);

  /* Set VME configuration on a 1x6x8 part. */
  sseu.slice_mask = 0x1;
  sseu.subslice_mask = 0xe0;

  gem_context_set_param(fd, &arg);
v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu()
(Lionel)
v3: Add ability to program this per engine (Chris)
v4: Move most get_sseu() into i915_gem_context.c (Lionel)
v5: Validate sseu configuration against the device's capabilities (Lionel)
v6: Change context powergating settings through MI_SDM on kernel context
(Chris)
v7: Synchronize the requests following a powergating setting change using
a global dependency (Chris)
Iterate timelines through dev_priv.gt.active_rings (Tvrtko)
Disable RPCS configuration setting for non capable users
(Lionel/Tvrtko)
v8: s/union intel_sseu/struct intel_sseu/ (Lionel)
s/dev_priv/i915/ (Tvrtko)
Change uapi class/instance fields to u16 (Tvrtko)
Bump mask fields to 64bits (Lionel)
Don't return EPERM when dynamic sseu is disabled (Tvrtko)
v9: Import context image into kernel context's ppgtt only when
reconfiguring powergated slice/subslices (Chris)
Use aliasing ppgtt when needed (Michel)
Tvrtko Ursulin:
v10:
* Update for upstream changes.
* Request submit needs a RPM reference.
* Reject on !FULL_PPGTT for simplicity.
* Pull out get/set param to helpers for readability and less indent.
* Use i915_request_await_dma_fence in add_global_barrier to skip waits
on the same timeline and avoid GEM_BUG_ON.
* No need to explicitly assign a NULL pointer to engine in legacy mode.
* No need to move gen8_make_rpcs up.
* Factored out global barrier as prep patch.
* Allow to only CAP_SYS_ADMIN if !Gen11.
v11:
* Remove engine vfunc in favour of local helper. (Chris Wilson)
* Stop retiring requests before updates since it is not needed
(Chris Wilson)
* Implement direct CPU update path for idle contexts. (Chris Wilson)
* Left side dependency needs only be on the same context timeline.
(Chris Wilson)
* It is sufficient to order the timeline. (Chris Wilson)
* Reject !RCS configuration attempts with -ENODEV for now.
v12:
* Rebase for make_rpcs.
v13:
* Centralize SSEU normalization to make_rpcs.
* Type width checking (uAPI <-> implementation).
* Gen11 restrictions uAPI checks.
* Gen11 subslice count differences handling.
Chris Wilson:
* args->size handling fixes.
* Update context image from GGTT.
* Postpone context image update to pinning.
* Use i915_gem_active_raw instead of last_request_on_engine.
v14:
* Add activity tracker on intel_context to fix the lifetime issues
and simplify the code. (Chris Wilson)
v15:
* Fix context pin leak if no space in ring by simplifying the
context pinning sequence.
v16:
* Rebase for context get/set param locking changes.
* Just -ENODEV on !Gen11. (Joonas)
v17:
* Fix one Gen11 subslice enablement rule.
* Handle error from i915_sw_fence_await_sw_fence_gfp. (Chris Wilson)
v18:
* Update commit message. (Joonas)
* Restrict uAPI to VME use case. (Joonas)
v19:
* Rebase.
v20:
* Rebase for ce->active_tracker.
v21:
* Rebase for IS_GEN changes.
v22:
* Reserve uAPI for flags straight away. (Chris Wilson)
v23:
* Rebase for RUNTIME_INFO.
v24:
* Added some headline docs for the uapi usage. (Joonas/Chris)
v25:
* Renamed class/instance to engine_class/engine_instance to avoid clash
with C++ keyword. (Tony Ye)
v26:
* Rebased for runtime pm api changes.
v27:
* Rebased for intel_context_init.
* Wrap commit msg to 75.
v28:
(Chris Wilson)
* Use i915_gem_ggtt.
* Use i915_request_await_dma_fence to show a better example.
v29:
* i915_timeline_set_barrier can now fail. (Chris Wilson)
v30:
* Capture some acks.
v31:
* Drop the WARN_ON from use controllable paths. (Chris Wilson)
* Use overflows_type for all checks.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107634
Issue: https://github.com/intel/media-driver/issues/267
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Zhipeng Gong <zhipeng.gong@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tony Ye <tony.ye@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Timo Aaltonen <timo.aaltonen@canonical.com>
Acked-by: Takashi Iwai <tiwai@suse.de>
Acked-by: Stéphane Marchesin <marcheu@chromium.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190205095032.22673-4-tvrtko.ursulin@linux.intel.com
---
xfrm_state_put() moves struct xfrm_state to the GC list
and schedules the GC work to clean it up. On the net exit call
path, xfrm_state_flush() is called to clean up, and
xfrm_flush_gc() is called to wait for the GC work to complete
before exit.
However, this doesn't work because one of the ->destructor() callbacks,
ipcomp_destroy(), schedules the same GC work again from inside
the GC work. It is hard to wait for such a nested async
callback. This is also why syzbot still reports the following
warning:
WARNING: CPU: 1 PID: 33 at net/ipv6/xfrm6_tunnel.c:351 xfrm6_tunnel_net_exit+0x2cb/0x500 net/ipv6/xfrm6_tunnel.c:351
...
ops_exit_list.isra.0+0xb0/0x160 net/core/net_namespace.c:153
cleanup_net+0x51d/0xb10 net/core/net_namespace.c:551
process_one_work+0xd0c/0x1ce0 kernel/workqueue.c:2153
worker_thread+0x143/0x14a0 kernel/workqueue.c:2296
kthread+0x357/0x430 kernel/kthread.c:246
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352
In fact, it is perfectly fine to bypass the GC and destroy xfrm_state
synchronously on the net exit call path, because it is in process context
and doesn't need a work struct to do any blocking work.
This patch introduces xfrm_state_put_sync(), which simply bypasses
the GC, and lets its callers decide whether to use this synchronous
version. On the net exit path, xfrm_state_fini() and
xfrm6_tunnel_net_exit() use it. And, as ipcomp_destroy() itself is
blocking, it can use xfrm_state_put_sync() directly too.
Also rename xfrm_state_gc_destroy() to ___xfrm_state_destroy() to
reflect this change.
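A simplified sketch of the split:

  static void __xfrm_state_destroy(struct xfrm_state *x, bool sync)
  {
          if (sync) {
                  synchronize_rcu();
                  ___xfrm_state_destroy(x); /* no GC work involved */
          } else {
                  spin_lock_bh(&xfrm_state_gc_lock);
                  hlist_add_head(&x->gclist, &xfrm_state_gc_list);
                  spin_unlock_bh(&xfrm_state_gc_lock);
                  schedule_work(&xfrm_state_gc_work);
          }
  }

  void xfrm_state_put_sync(struct xfrm_state *x)
  {
          if (refcount_dec_and_test(&x->refcnt))
                  __xfrm_state_destroy(x, true);
  }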
Fixes: b48c05ab5d32 ("xfrm: Fix warning in xfrm6_tunnel_net_exit.")
Reported-and-tested-by: syzbot+e9aebef558e3ed673934@syzkaller.appspotmail.com
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
---
xa_erase does not allocate memory and doesn't have a gfp parameter.
Update the descriptions of all four variants to be more useful.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
---
Having both a function and a function pointer member with the same
name (iscsit_release_cmd) is confusing. Hence rename the function pointer
member.
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
---
Change "unsoliticed" into "unsolicited".
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
---
All users of the fixed_phy_add() pass -1 as GPIO number
to the fixed phy driver, and all users of fixed_phy_register()
pass -1 as GPIO number as well, except for the device
tree MDIO bus.
Any new users should create a proper device and pass the
GPIO as a descriptor associated with the device, so delete
the GPIO argument from the calls and drop the code
requesting a GPIO in fixed_phy_add().
In fixed_phy_register(), investigate the "fixed-link"
node and pick the GPIO descriptor from "link-gpios" if
this property exists. Move the corresponding code out
of of_mdio.c as the fixed phy code anyway requires
OF to be in use.
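Illustrative call-site change (status and np are placeholders):

  /* before: all in-tree callers passed -1 as the GPIO number */
  phy = fixed_phy_register(PHY_POLL, &status, -1, np);

  /* after: the descriptor, if any, is looked up internally from
   * the "link-gpios" property of the fixed-link node */
  phy = fixed_phy_register(PHY_POLL, &status, np);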
Tested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
Introduce a function that sends the SCSI status "BUSY" back to the
initiator. The next patch will add a call to this function in the srpt
target driver.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
---
Whether or not a session is being torn down does not affect whether or not
SCSI commands are in the task set. Hence remove the "tearing down" checks
from the TMF code. The TRANSPORT_ISTATE_PROCESSING check is left out
because it is now safe to wait for a command that is in that state. The
CMD_T_PRE_EXECUTE is left out because abort processing is postponed until
after commands have left the pre-execute state since the patch that makes
TMF processing synchronous.
See also commit 1c21a48055a6 ("target: Avoid early CMD_T_PRE_EXECUTE
failures during ABORT_TASK").
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
---
Due to the patch that makes TMF handling synchronous the
write_pending_status() callback function is no longer called. Hence remove
it.
Acked-by: Felipe Balbi <balbi@ti.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Andy Grover <agrover@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Himanshu Madhani <himanshu.madhani@qlogic.com>
Cc: Quinn Tran <quinn.tran@qlogic.com>
Cc: Saurav Kashyap <saurav.kashyap@qlogic.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
---
A soft iwarp driver that uses the host TCP stack via a kernel mode socket
does not need port mapping. In fact, if the port map daemon, iwpmd, is
running, then iwpmd must not try to create/bind a socket to the actual
port for a soft iwarp connection, since the driver already has that socket
bound.
Yet if the soft iwarp driver wants to interoperate with hard iwarp devices
that -are- using port mapping, then the soft iwarp driver's mappings still
need to be maintained and advertised by the iwpm protocol.
This patch enhances the rdma driver<->iwcm interface to allow an iwarp
driver to specify that it does not want port mapping. The iwpm
kernel<->iwpmd interface is also enhanced to pass up this information on
map requests.
Care is taken to interoperate with the current iwpmd version (ABI version
3) and only use the new NL attributes if iwpmd supports ABI version 4.
The ABI version define has also been created in rdma_netlink.h so both
kernel and user code can share it. The iwcm and iwpmd negotiate the ABI
version to use with a new HELLO netlink message.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Reviewed-by: Tatyana Nikolova <Tatyana.E.Nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
In order to add new IWPM_NL attributes, the enums for the IWPM command
attributes are refactored such that a new attribute can be added without
breaking ABI version 3. Instead of sharing nl attribute enums for both
request and response messages, we create separate enums for each IWPM
message request and reply. This allows us to extend any given IWPM
message by adding new attributes for just that message. These new enums
are created, though, in a way to avoid breaking ABI version 3.
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Reviewed-by: Tatyana Nikolova <Tatyana.E.Nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
RDS Service type (TOS) is user-defined and needs to be configured
via the RDS IOCTL interface. It must be set before initiating any
traffic and, once set, the TOS cannot be changed. All out-going
traffic from the socket will be associated with its TOS.
Reviewed-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
[yanjun.zhu@oracle.com: Adapted original patch with ipv6 changes]
Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com>
---
Currently a packet is marked for loopback only if the source and
destination addresses are equal. This is not enough when multiple GIDs
are present in the rxe device's GID table and the traffic is from one
GID to another. Fix it by marking the packet for loopback if the
destination MAC address is equal to the source MAC address.
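Roughly, the decision becomes (simplified, variable names illustrative):

  /* loop the packet back if it targets our own MAC address */
  if (ether_addr_equal(dst_mac, src_mac))
          pkt->mask |= RXE_LOOPBACK_MASK;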
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Tested-by: Yuval Shaia <yuval.shaia@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
If tmpfiles can be made persistent, then newly created tmpfiles need to
be treated like any other new files in policy.
This patch indicates which newly created tmpfiles are in policy, causing
the file hash to be calculated on __fput().
Reported-by: Ignaz Forster <ignaz.forster@gmx.de>
[rgoldwyn@suse.com: Call ima_post_create_tmpfile() in vfs_tmpfile() as
opposed to do_tmpfile(). This will help the case for overlayfs where
copy_up is denied while overwriting a file.]
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
---
This patch allows the kexec_file_load syscall to verify the PE signed
kernel image signature based on the preboot keys stored in the .platform
keyring, as a fallback, if the signature verification failed due to not
finding the public key in the secondary or builtin keyrings.
This commit adds a VERIFY_USE_PLATFORM_KEYRING, similar to the previous
VERIFY_USE_SECONDARY_KEYRING, indicating that verify_pkcs7_signature
should verify the signature using the platform keyring. Also, decrease
the error message log level when verification fails with -ENOKEY, so
that if it is tried multiple times with different keyrings it won't
generate extra noise.
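The fallback order, sketched (simplified from the PE verification path):

  ret = verify_pefile_signature(kernel, kernel_len,
                                VERIFY_USE_SECONDARY_KEYRING,
                                VERIFYING_KEXEC_PE_SIGNATURE);
  if (ret == -ENOKEY && IS_ENABLED(CONFIG_INTEGRITY_PLATFORM_KEYRING))
          ret = verify_pefile_signature(kernel, kernel_len,
                                        VERIFY_USE_PLATFORM_KEYRING,
                                        VERIFYING_KEXEC_PE_SIGNATURE);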
Signed-off-by: Kairui Song <kasong@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com> (for kexec_file_load part)
[zohar@linux.ibm.com: tweaked the first paragraph of the patch description,
and fixed checkpatch warning.]
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
---
commit 9dc92c45177a ("integrity: Define a trusted platform keyring")
introduced a .platform keyring for storing preboot keys, used for
verifying kernel image signatures. Currently only IMA-appraisal is able
to use the keyring to verify kernel images that have their signature
stored in xattr.
This patch exposes the .platform keyring, making it accessible for
verifying PE signed kernel images as well.
Suggested-by: Mimi Zohar <zohar@linux.ibm.com>
Signed-off-by: Kairui Song <kasong@redhat.com>
Cc: David Howells <dhowells@redhat.com>
[zohar@linux.ibm.com: fixed checkpatch errors, squashed with patch fix]
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
---
Linux 5.0-rc5
Needed to merge the include/uapi changes so we have an up-to-date
single tree for these files. Patches already posted are also expected to
need this for dependencies.
---
Keeping single line wrapper functions is not useful. Hence remove the
ib_sg_dma_address() and ib_sg_dma_len() functions. This patch does not
change any functionality.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
Expose XRC ODP capabilities as part of the extended device capabilities.
Signed-off-by: Moni Shoua <monis@mellanox.com>
Reviewed-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
The ODP support matrix is per operation and per transport. The support for
each transport (RC, UD, etc.) is described with a bit field.
ODP for SRQ WQEs is considered a different kind of support from ODP for RQ
WQEs and therefore needs a different capability bit.
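For example, a consumer might test the new bit like this (illustrative;
attr is a previously queried struct ib_device_attr):

  /* does the RC transport support ODP for SRQ receive WQEs? */
  if (attr->odp_caps.per_transport_caps.rc_odp_caps &
      IB_ODP_SUPPORT_SRQ_RECV)
          use_odp_srq = true;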
Signed-off-by: Moni Shoua <monis@mellanox.com>
Reviewed-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
---
From
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
For dependencies needed in the next ODP patches.
* branch 'mlx5-next':
net/mlx5: Set ODP SRQ support in firmware
net/mlx5: Add XRC transport to ODP device capabilities layout
---
Indirect calls are only needed if ipv6 is a module.
Add helpers to abstract the v6ops indirections and use them instead.
fragment, reroute and route_input are kept as indirect calls.
The first two are not used in the hot path and route_input is only
used by bridge netfilter.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
nf_nat_ipv6 calls two ipv6 core functions, so add those to v6ops to avoid
the module dependency.
This is a prerequisite for merging ipv4 and ipv6 nat implementations.
Add wrappers to avoid the indirection if ipv6 is builtin.
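A sketch of the wrapper pattern (simplified and illustrative):

  static inline int nf_ipv6_chk_addr(struct net *net,
                                     const struct in6_addr *addr,
                                     const struct net_device *dev,
                                     int strict)
  {
  #if IS_MODULE(CONFIG_IPV6)
          const struct nf_ipv6_ops *v6ops = nf_get_ipv6_ops();

          return v6ops ? v6ops->chk_addr(net, addr, dev, strict) : 1;
  #else
          return ipv6_chk_addr(net, addr, dev, strict);
  #endif
  }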
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
Anonymous sets that are bound to rules from the same transaction trigger
a kernel splat from the abort path due to double set list removal and
double free.
This patch updates the logic to search for the transaction that is
responsible for creating the set and disables the set list removal and
release, given that the rule is now responsible for this. The lookup is
done in reverse order, since the transaction that adds the set is likely
to be at the tail of the list.
Moreover, this patch adds the unbind step to deliver the event from the
commit path. This should not be done from the worker thread, since we
have no guarantees of in-order delivery to the listener.
This patch removes the assumption that both activate and deactivate
callbacks need to be provided.
Fixes: cd5125d8f518 ("netfilter: nf_tables: split set destruction in deactivate and destroy phase")
Reported-by: Mikhail Morfikov <mmorfikov@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
---
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>