|
ivb+ supports fp16 pixel formats on the sprite planes. Expose
that capability.
On ivb/hsw fp16 scanout is slightly busted. The output from the plane
will have 1/4 the expected value. For the sprite plane we can fix that
up with the plane gamma unit. This was fixed on bdw.
v2: Rebase on top of icl fp16
Split the ivb+ sprite bits into a separate patch
v3: Move ivb_need_sprite_gamma() check one level up so that
we don't waste time programming garbage into the gamma registers
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-10-ville.syrjala@linux.intel.com
|
|
gen4+ supports fp16 pixel formats on the primary planes. Add the
relevant code.
On ivb fp16 scanout is slightly busted. The output from the plane will
have 1/4 the expected value. For the primary plane we would have to
use the pipe gamma or pipe csc to correct that, which would affect all
the other planes as well, hence we simply choose not to expose fp16
on the ivb primary plane. On hsw the primary plane got fixed.
On gmch platforms I observed that the plane width must be below 2k
pixels with fp16 or else we get a corrupted image. This limitation
does not seem to be documented in bspec. I verified the exact limit
using the chv pipe B primary plane since it has windowing capability.
The stride limits are unaffected by fp16.
v2: Rebase on top of icl fp16
Split the gen4+ primary plane bits into a separate patch
Deal with HAS_GMCH()
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-9-ville.syrjala@linux.intel.com
|
|
skl+ supports fp16 pixel formats on all universal planes. Add the
necessary bits to expose that capability. The main difference from
icl is that we can't scale fp16, so we need to add the relevant
checks.
v2: Rebase on top of icl fp16
Split skl+ bits into a separate patch
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-8-ville.syrjala@linux.intel.com
|
|
Now that the planes declare their minimum cdclk requirements properly
we don't need to check the cdclk in skl_max_scale() anymore. Just check
against the maximum downscale ratio, and move the code next to its
only caller.
v2: Add a comment explaining the HQ vs. not thing
Reviewed-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-7-ville.syrjala@linux.intel.com
|
|
The normal cdclk handling now takes care of making sure the
plane's pixel rate doesn't exceed the spec-appointed percentage
of the cdclk frequency. Thus we can nuke
skl_check_pipe_max_pixel_rate().
Reviewed-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-6-ville.syrjala@linux.intel.com
|
|
Various pixel formats and plane scaling impose additional constraints
on the cdclk frequency. Provide a new plane->min_cdclk() hook that
will be used to compute the minimum acceptable cdclk frequency for
each plane.
Annoyingly on some platforms the number of active planes affects
this calculation, so we must also toss more planes into the
state when the number of active planes changes.
The sequence of state computation must also be changed:
1. check_plane() (updates the plane's visibility etc.)
2. figure out if any more planes now require a min_cdclk
   computation
3. calculate the new min_cdclk for each plane in the state
4. if the minimum for any plane now exceeds the current
   logical cdclk we recompute the cdclk
5. during cdclk computation take the planes' min_cdclk into
   account
6. follow the normal cdclk programming to change the
   cdclk frequency. This may now require a modeset (except
   on bxt/glk in some cases), which either succeeds or
   fails depending on whether userspace has given
   us permission to perform a modeset or not.
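Roughly, the new hook and its use in step 3 look as follows (a minimal
sketch with illustrative names, not the exact upstream code):

/* each plane type estimates the cdclk it needs for its current state */
struct intel_plane {
        struct drm_plane base;
        enum plane_id id;

        int (*min_cdclk)(const struct intel_crtc_state *crtc_state,
                         const struct intel_plane_state *plane_state);
};

static void intel_plane_calc_min_cdclk(struct intel_crtc_state *crtc_state,
                                       struct intel_plane *plane,
                                       const struct intel_plane_state *plane_state)
{
        int min_cdclk = 0;

        if (plane_state->base.visible && plane->min_cdclk)
                min_cdclk = plane->min_cdclk(crtc_state, plane_state);

        /* step 4 compares the max of these against the logical cdclk */
        crtc_state->min_cdclk[plane->id] = min_cdclk;
}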
v2: Fix plane id check in intel_crtc_add_planes_to_state()
Only print the debug message when cdclk needs bumping
Use dev_priv->cdclk... as the old state explicitly
Reviewed-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-5-ville.syrjala@linux.intel.com
|
|
check_digital_port_conflicts() is done needlessly late. Move it earlier.
This will be needed as later on we want to set any_ms=true a bit
later for non-modesets too, and we can't call this function without
the connection_mutex held.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-4-ville.syrjala@linux.intel.com
Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
|
|
So far we've sort of protected the global state under dev_priv with
the connection_mutex. I want to change that so that we can change the
cdclk even for pure plane updates. To that end let's formalize the
protection of the global state to follow what I started with the cdclk
code already (though not entirely properly) such that any crtc mutex
will suffice as a read lock, and all crtc mutexes act as the write
lock.
We'll also pimp intel_atomic_state_clear() to clear the entire global
state, so that we don't accidentally leak stale information between
the locking retries.
As a slight optimization we'll only lock the crtc mutexes to protect
the global state, however if and when we actually have to poke the
hw (eg. if the actual cdclk changes) we must serialize commits
across all crtcs so that a parallel nonblocking commit can't get
ahead of the cdclk reprogramming. We do that by adding all crtcs to
the state.
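The rule can be summed up in a sketch like this (helper name and
details illustrative):

static int intel_atomic_lock_global_state(struct intel_atomic_state *state)
{
        struct drm_crtc *crtc;

        /* any single crtc lock already suffices for reading; taking
         * every crtc lock serializes against all readers and thus
         * acts as the write lock */
        drm_for_each_crtc(crtc, state->base.dev) {
                int ret;

                ret = drm_modeset_lock(&crtc->mutex,
                                       state->base.acquire_ctx);
                if (ret)
                        return ret;
        }

        return 0;
}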
TODO: the old global state examined during commit may still
be a problem since it always looks at the _latest_ swapped state
in dev_priv. Need to add proper old/new state for that too I think.
v2: Remember to serialize the commits if necessary
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-3-ville.syrjala@linux.intel.com
Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
|
|
To make the logs a bit less confusing let's toss in some
debug prints to indicate whether the cdclk reprogramming
is going to happen with a single pipe active or whether we
need to turn all pipes off for the duration.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191015193035.25982-2-ville.syrjala@linux.intel.com
Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
|
|
When reprobing an MST topology during resume, we have to account for the
fact that while we were suspended it's possible that mstbs may have been
removed from any ports in the topology. Since iterating downwards in the
topology requires that we hold &mgr->lock, destroying MSTBs from this
context would result in attempting to lock &mgr->lock a second time and
deadlocking.
So, fix this by first moving destruction of MSTBs into
destroy_connector_work, then renaming destroy_connector_work and
friends to reflect that they now destroy both ports and mstbs.
Note that even though this means that MSTBs will still be accessible for
a short period of time between their removal from the topology and
delayed destruction, we are still protected against referencing an MSTB
with a refcount of 0 since we use kref_get_unless_zero() in most places.
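In condensed form the delayed-destroy worker ends up looking roughly
like this (list/lock names follow the renames below, the rest is
illustrative):

static void drm_dp_delayed_destroy_work(struct work_struct *work)
{
        struct drm_dp_mst_topology_mgr *mgr =
                container_of(work, struct drm_dp_mst_topology_mgr,
                             delayed_destroy_work);
        struct drm_dp_mst_branch *mstb, *mstb_tmp;
        struct drm_dp_mst_port *port, *port_tmp;
        LIST_HEAD(mstbs);
        LIST_HEAD(ports);

        /* grab both lists under the lock, destroy outside of it, so
         * this context never has to recurse into &mgr->lock */
        mutex_lock(&mgr->delayed_destroy_lock);
        list_splice_init(&mgr->destroy_branch_device_list, &mstbs);
        list_splice_init(&mgr->destroy_port_list, &ports);
        mutex_unlock(&mgr->delayed_destroy_lock);

        /* two loops: branch devices first, then ports */
        list_for_each_entry_safe(mstb, mstb_tmp, &mstbs, destroy_next)
                drm_dp_delayed_destroy_mstb(mstb);

        list_for_each_entry_safe(port, port_tmp, &ports, next)
                drm_dp_delayed_destroy_port(port);
}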
Changes since v1:
* s/destroy_connector_list/destroy_port_list/
s/connector_destroy_lock/delayed_destroy_lock/
s/connector_destroy_work/delayed_destroy_work/
s/drm_dp_finish_destroy_branch_device/drm_dp_delayed_destroy_mstb/
s/drm_dp_finish_destroy_port/drm_dp_delayed_destroy_port/
- danvet
* Use two loops in drm_dp_delayed_destroy_work() - danvet
* Better explain why we need to do this - danvet
* Use cancel_work_sync() instead of flush_work() - flush_work() doesn't
account for work requeing
Cc: Juston Li <juston.li@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Harry Wentland <hwentlan@amd.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Sean Paul <sean@poorly.run>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022023641.8026-2-lyude@redhat.com
|
|
Use the new cec_notifier_conn_(un)register() functions to
(un)register the notifier for the HDMI connector, and fill in
the cec_connector_info.
Signed-off-by: Dariusz Marcinkiewicz <darekm@google.com>
Tested-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Acked-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Commit b9f8b09ce256 ("drm/tegra: Setup shared IOMMU domain after
initialization") changed the initialization order of the IOMMU related
bits but didn't update the cleanup path accordingly. This asymmetry can
cause failures during error recovery.
Fixes: b9f8b09ce256 ("drm/tegra: Setup shared IOMMU domain after initialization")
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
|
|
The hardware is not guaranteed to be enabled during execution of the
tegra_sor_init() function, which can lead to a crash on some Tegra SoCs.
Fix this by moving all register programming into code that is guaranteed
to only be executed when the hardware is enabled.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Fix misspellings of "connector" and "connection"
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024151737.29287-1-geert+renesas@glider.be
|
|
GEM VRAM provides an implementation for prepare_fb() and cleanup_fb()
of struct drm_plane_helper_funcs. Switch over vboxvideo.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024081404.6978-5-tzimmermann@suse.de
|
|
This patch implements prepare_fb() and cleanup_fb() in hibmc with the
GEM VRAM helpers. In the current code, pinning the BO is performed by
hibmc_plane_atomic_update(), where the operation does not belong.
This patch also fixes a bug where the pinned BO was never unpinned.
Pinning multiple BOs would have exhausted the available VRAM and further
pin operations would have failed, leaving the display in a corrupt
state.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024081404.6978-4-tzimmermann@suse.de
|
|
GEM VRAM provides an implementation for prepare_fb() and cleanup_fb()
of struct drm_simple_display_pipe_funcs. Switch over bochs.
v2:
* use helpers for struct drm_simple_display_pipe_funcs
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024081404.6978-3-tzimmermann@suse.de
|
|
The new helpers pin and unpin a framebuffer's GEM VRAM objects during
plane updates. This should be sufficient for most drivers' implementation
of prepare_fb() and cleanup_fb().
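For prepare_fb() the helper boils down to pinning every GEM VRAM
object backing the framebuffer, roughly like this (a simplified
sketch; error unwinding omitted):

int drm_gem_vram_plane_helper_prepare_fb(struct drm_plane *plane,
                                         struct drm_plane_state *new_state)
{
        size_t i;
        int ret;

        if (!new_state->fb)
                return 0;

        for (i = 0; i < ARRAY_SIZE(new_state->fb->obj); ++i) {
                struct drm_gem_vram_object *gbo;

                if (!new_state->fb->obj[i])
                        continue;

                /* keep the BO resident in VRAM for the display update */
                gbo = drm_gem_vram_of_gem(new_state->fb->obj[i]);
                ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
                if (ret)
                        return ret;
        }

        return 0;
}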
v2:
* provide helpers for struct drm_simple_display_pipe_funcs
* rename plane-helper funcs
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024081404.6978-2-tzimmermann@suse.de
|
|
Add missing descriptions of i915_perf_stream structure members
to documentation.
Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Robert Bragg <robert@sixbynine.org>
Signed-off-by: Anna Karas <anna.karas@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022101338.17048-1-anna.karas@intel.com
|
|
- Add a comment for the memory barrier
- Issue found using checkpatch.pl
Signed-off-by: Bhanusree <bhanusreemahesh@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/1571891313-14341-1-git-send-email-bhanusreemahesh@gmail.com
|
|
Remove unnecessary casts to pointer types passed to kfree.
Issue detected by coccinelle:
@@
type t1;
expression *e;
@@
-kfree((t1 *)e);
+kfree(e);
Signed-off-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023111107.9972-1-wambui.karugax@gmail.com
|
|
Passing the wrong type feels icky; everywhere else we use the pipe as
the first parameter. Spotted while discussing patches with Thomas
Zimmermann.
v2: Make xen compile correctly
Acked-By: Thomas Zimmermann <tzimmermann@suse.de> (v1)
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Noralf Trønnes <noralf@tronnes.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Emil Velikov <emil.velikov@collabora.com>
Cc: virtualization@lists.linux-foundation.org
Cc: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023101256.20509-1-daniel.vetter@ffwll.ch
|
|
Split the legacy submission backend from the common CS ring buffer
handling.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024100344.5041-1-chris@chris-wilson.co.uk
|
|
One more thing which relied on implicit dev_priv can be converted to use
the new mmio accessors.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024093440.32280-1-tvrtko.ursulin@linux.intel.com
|
|
Make trebly sure that all possible callbacks and their delayed brethren
are complete before asserting that the i915_active should be idle after
flushing all barriers.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023235359.27132-1-chris@chris-wilson.co.uk
|
|
When setting up the system to perform the atomic reset, we need to
serialise with any ongoing interrupt tasklet or else:
<0> [472.951428] i915_sel-4442 0d..1 466527056us : __i915_request_submit: rcs0 fence 11659:2, current 0
<0> [472.951554] i915_sel-4442 0d..1 466527059us : __execlists_submission_tasklet: rcs0: queue_priority_hint:-2147483648, submit:yes
<0> [472.951681] i915_sel-4442 0d..1 466527061us : trace_ports: rcs0: submit { 11659:2, 0:0 }
<0> [472.951805] i915_sel-4442 0.... 466527114us : __igt_atomic_reset_engine: i915_reset_engine(rcs0:active) under hardirq
<0> [472.951932] i915_sel-4442 0d... 466527115us : intel_engine_reset: rcs0 flags=11d
<0> [472.952056] i915_sel-4442 0d... 466527117us : execlists_reset_prepare: rcs0: depth<-1
<0> [472.952179] i915_sel-4442 0d... 466527119us : intel_engine_stop_cs: rcs0
<0> [472.952305] <idle>-0 1..s1 466527119us : process_csb: rcs0 cs-irq head=3, tail=4
<0> [472.952431] i915_sel-4442 0d... 466527122us : __intel_gt_reset: engine_mask=1
<0> [472.952557] <idle>-0 1..s1 466527124us : process_csb: rcs0 csb[4]: status=0x00000001:0x00000000
<0> [472.952683] <idle>-0 1..s1 466527130us : trace_ports: rcs0: promote { 11659:2*, 0:0 }
<0> [472.952808] i915_sel-4442 0d... 466527131us : execlists_reset: rcs0
<0> [472.952933] i915_sel-4442 0d..1 466527133us : process_csb: rcs0 cs-irq head=3, tail=4
<0> [472.953059] i915_sel-4442 0d..1 466527134us : process_csb: rcs0 csb[4]: status=0x00000001:0x00000000
<0> [472.953185] i915_sel-4442 0d..1 466527136us : trace_ports: rcs0: preempted { 11659:2*, 0:0 }
<0> [472.953310] i915_sel-4442 0d..1 466527150us : assert_pending_valid: Nothing pending for promotion!
<0> [472.953436] i915_sel-4442 0d..1 466527158us : process_csb: process_csb:1930 GEM_BUG_ON(!assert_pending_valid(execlists, "promote"))
We have the same CSB events being seen by process_csb() on two different
processors. One being issued by the reset in the test, the other by the
interrupt; this scenario is supposed to be prevented by flushing the
interrupt tasklet with tasklet_disable() before we enter the atomic
reset.
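The fix, in outline (simplified to the essentials):

static void engine_atomic_reset(struct intel_engine_cs *engine)
{
        /* flush the interrupt tasklet and keep it off the CPUs so
         * only we can process the CSB events during the reset */
        tasklet_disable(&engine->execlists.tasklet);

        /* ... perform the engine reset in atomic context ... */

        tasklet_enable(&engine->execlists.tasklet);
}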
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112069
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023232443.17450-1-chris@chris-wilson.co.uk
|
|
The attachment list is now protected by the dma_resv object.
Stop holding the dma_buf->lock while calling ->attach/->detach;
this allows for concurrent attach/detach operations.
v2: cleanup commit message and locking in _debug_show()
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/336790
|
|
This patch is a stripped down version of the locking changes
necessary to support dynamic DMA-buf handling.
It adds a dynamic flag for both importers and exporters so that
drivers can choose whether they want the reservation object locked
or unlocked during mapping of attachments.
For compatibility between drivers we cache the DMA-buf mapping
when attaching an importer, as soon as exporter and importer
disagree on the dynamic handling.
Issues and solutions we considered:
- We can't change all existing drivers, and existing importers have
strong opinions about which locks they're holding while calling
dma_buf_attachment_map/unmap. Exporters also have strong opinions about
which locks they can acquire in their ->map/unmap callbacks, leaving no
room for change. The solution to avoid this was to move the
actual map/unmap out from this call, into the attach/detach callbacks,
and cache the mapping. This works because drivers don't call
attach/detach from deep within their code callchains (like deep in
memory management code called from cs/execbuf ioctl), but directly from
the fd2handle implementation.
- The caching has some troubles on some soc drivers, which set other modes
than DMA_BIDIRECTIONAL. We can't have 2 incompatible mappings, and we
can't re-create the mapping at _map time due to the above locking fun.
We very carefully step around that by only caching at attach time if the
dynamic mode between importer/exporter mismatches.
- There's been quite some discussion on dma-buf mappings which need active
cache management, which would all break down when caching, plus we don't
have explicit flush operations on the attachment side. The solution to
this was to shrug and keep the current discrepancy between what the
dma-buf docs claim and what implementations do, with the hope that the
begin/end_cpu_access hooks are good enough and that all necessary
flushing to keep device mappings consistent will be done there.
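A minimal sketch of the compatibility rule at attach time (names here
are illustrative and error handling is trimmed):

static int dma_buf_attach_cache_sgt(struct dma_buf *dmabuf,
                                    struct dma_buf_attachment *attach)
{
        struct sg_table *sgt;

        /* both sides agree on dynamic handling: map lazily as usual */
        if (dma_buf_is_dynamic(dmabuf) ==
            dma_buf_attachment_is_dynamic(attach))
                return 0;

        /* a dynamic exporter expects the reservation lock to be held */
        if (dma_buf_is_dynamic(dmabuf))
                dma_resv_lock(dmabuf->resv, NULL);

        sgt = dmabuf->ops->map_dma_buf(attach, DMA_BIDIRECTIONAL);

        if (dma_buf_is_dynamic(dmabuf))
                dma_resv_unlock(dmabuf->resv);

        if (IS_ERR_OR_NULL(sgt))
                return sgt ? PTR_ERR(sgt) : -ENOMEM;

        /* cache the mapping for the lifetime of the attachment */
        attach->sgt = sgt;
        attach->dir = DMA_BIDIRECTIONAL;
        return 0;
}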
v2: cleanup set_name merge, improve kerneldoc
v3: update commit message, kerneldoc and cleanup _debug_show()
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/336788/
|
|
Early workload scan and shadow happens in the execlist mmio handler,
which has already taken vgpu_lock, so remove the extra lock taking here.
Fixes: 952f89f098c7 ("drm/i915/gvt: Wean off struct_mutex")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
|
|
Replace sampling the engine state every so often with a periodic
heartbeat request to measure the health of an engine. This is coupled
with the forced-preemption to allow long running requests to survive so
long as they do not block other users.
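The heartbeat itself is conceptually simple; a rough sketch (names
illustrative, reference handling omitted):

static void heartbeat(struct work_struct *wrk)
{
        struct intel_engine_cs *engine =
                container_of(wrk, struct intel_engine_cs,
                             heartbeat.work.work);
        struct i915_request *rq;

        /* the previous pulse never landed: the engine is unhealthy */
        rq = engine->heartbeat.systole;
        if (rq && !i915_request_completed(rq)) {
                intel_gt_handle_error(engine->gt, engine->mask, 0,
                                      "stopped heartbeat");
                return;
        }

        /* send a fresh pulse down the engine */
        rq = i915_request_create(engine->kernel_context);
        if (!IS_ERR(rq)) {
                engine->heartbeat.systole = i915_request_get(rq);
                i915_request_add(rq);
        }

        schedule_delayed_work(&engine->heartbeat.work,
                              msecs_to_jiffies(CONFIG_DRM_I915_HEARTBEAT_INTERVAL));
}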
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023133108.21401-5-chris@chris-wilson.co.uk
|
|
Normally, we rely on our hangcheck to prevent persistent batches from
hogging the GPU. However, if the user disables hangcheck, this mechanism
breaks down. Despite our insistence that this is unsafe, the users are
equally insistent that they want to use endless batches and will disable
the hangcheck mechanism. We are looking at replacing hangcheck, in the
next patch, with a softer mechanism, that sends a pulse down the engine
to check if it is well. We can use the same preemptive pulse to flush an
active context off the GPU upon context close, preventing resources
being lost and unkillable requests remaining on the GPU after process
termination.
Testcase: igt/gem_ctx_exec/basic-nohangcheck
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023133108.21401-4-chris@chris-wilson.co.uk
|
|
On schedule-out (CS completion) of a banned context, scrub the context
image so that we do not replay the active payload. The intent is that we
skip banned payloads on request submission so that the timeline
advancement continues on in the background. However, if we are returning
to a preempted request, i915_request_skip() is ineffective and instead we
need to patch up the context image so that it continues from the start
of the next request.
v2: Fixup cancellation so that we only scrub the payload of the active
request and do not short-circuit the breadcrumbs (which might cause
other contexts to execute out of order).
v3: Grammar pass
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023133108.21401-3-chris@chris-wilson.co.uk
|
|
If the preempted context takes too long to relinquish control, e.g. it
is stuck inside a shader with arbitration disabled, evict that context
with an engine reset. This ensures that preemptions are reasonably
responsive, providing a tighter QoS for the more important context at
the cost of flagging unresponsive contexts more frequently (i.e. instead
of using an ~10s hangcheck, we now evict at ~100ms). The challenge
lies in picking a timeout that can be reasonably serviced by HW for
typical workloads, balancing the existing clients against the needs for
responsiveness.
Note that coupled with timeslicing, this will lead to rapid GPU "hang"
detection with multiple active contexts vying for GPU time.
The forced preemption mechanism can be compiled out with
./scripts/config --set-val DRM_I915_PREEMPT_TIMEOUT 0
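In outline, the mechanism arms a timer when a preemption is submitted
and resets the engine if it fires first (a simplified sketch; the real
thing defers the reset out of timer context):

static void record_preemption(struct intel_engine_cs *engine)
{
        if (!CONFIG_DRM_I915_PREEMPT_TIMEOUT)
                return; /* forced preemption compiled out */

        mod_timer(&engine->execlists.preempt,
                  jiffies +
                  msecs_to_jiffies(CONFIG_DRM_I915_PREEMPT_TIMEOUT));
}

static void preempt_timeout(struct timer_list *timer)
{
        struct intel_engine_execlists *execlists =
                from_timer(execlists, timer, preempt);
        struct intel_engine_cs *engine =
                container_of(execlists, struct intel_engine_cs,
                             execlists);

        /* the preempted context never yielded: evict it with a reset */
        intel_engine_reset(engine, "preemption timed out");
}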
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023133108.21401-2-chris@chris-wilson.co.uk
|
|
If we are doing a normal GPU reset triggered after detecting a long
period of stalled work, we can take our time and allow the engines to
quiesce. Since we've stopped submission to the engine, if we wait
long enough, an innocent context should complete, leaving the engine
idle.
So by waiting a short amount of time, we should prevent clobbering other
users when resetting a stuck context.
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Suggested-by: Jon Bloomfield <jon.bloomfield@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191023133108.21401-1-chris@chris-wilson.co.uk
|
|
The GuC "enable logging" H2G action definition changed some time ago
from 0xE000 to 0x40. All current GuC FW blobs use this definition, so
fix the action definition in the driver to match.
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Robert M. Fosha <robert.m.fosha@intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022163754.23870-2-robert.m.fosha@intel.com
|
|
Creating and opening the GuC log relay file enables and starts
the relay potentially before the caller is ready to consume logs.
Change the behavior so that the relay starts only on an explicit call
to the write function (with a value of '1'). Other values flush
the log relay as before.
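The write handler then becomes something like this (a sketch with
illustrative names):

static ssize_t guc_log_relay_write(struct file *filp,
                                   const char __user *ubuf,
                                   size_t cnt, loff_t *ppos)
{
        struct intel_guc_log *log = filp->private_data;
        bool start;
        int ret;

        ret = kstrtobool_from_user(ubuf, cnt, &start);
        if (ret < 0)
                return ret;

        /* only an explicit '1' starts the relay; anything else
         * keeps the old flushing behavior */
        if (start)
                ret = intel_guc_log_relay_start(log);
        else
                intel_guc_log_relay_flush(log);

        return ret ?: cnt;
}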
v2: Style changes and fix typos. Add guc_log_relay_stop()
function. (Daniele)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Robert M. Fosha <robert.m.fosha@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022163754.23870-1-robert.m.fosha@intel.com
|
|
Atm we don't detect a PCH with PCI ID 0xA3C1 which showed up now on a CML
platform. We don't have the official assignment of the PCH PCI IDs, but
this looks like a CNP which was already used on CML platforms. Let's add
the new ID->PCH type mapping accordingly.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=112051
Reported-and-tested-by: Cyrus <cyrus.lien@canonical.com>
Cc: Cyrus <cyrus.lien@canonical.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022095155.30991-1-imre.deak@intel.com
|
|
During the discussion of patches that enhance the drm_dp_link helpers it
was concluded that these helpers aren't very useful to begin with. After
all other drivers have been converted not to use these helpers anymore,
move these helpers into the last remaining user: Tegra DRM.
If at some point these helpers are deemed more widely useful, they can
be moved out into the DRM DP helpers again.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-14-thierry.reding@gmail.com
|
|
During the discussion of patches that enhance the drm_dp_link helpers it
was concluded that these helpers aren't very useful to begin with. Start
pushing the equivalent code into individual drivers to ultimately remove
them.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-13-thierry.reding@gmail.com
|
|
During the discussion of patches that enhance the drm_dp_link helpers it
was concluded that these helpers aren't very useful to begin with. Start
pushing the equivalent code into individual drivers to ultimately remove
them.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-12-thierry.reding@gmail.com
|
|
The DP specification uses the term "default framing" instead of "non-
enhanced framing".
Reviewed-by: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-11-thierry.reding@gmail.com
|
|
During the discussion of patches that enhance the drm_dp_link helpers it
was concluded that these helpers aren't very useful to begin with. Start
pushing the equivalent code into individual drivers to ultimately remove
them.
v3: make link rate unsigned int to avoid overflow
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-10-thierry.reding@gmail.com
|
|
During the discussion of patches that enhance the drm_dp_link helpers it
was concluded that these helpers aren't very useful to begin with. Start
pushing the equivalent code into individual drivers to ultimately remove
them.
v4: use bulk DPCD writes if possible (Daniel Vetter)
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022145211.2258525-1-thierry.reding@gmail.com
|
|
If the transmitter supports pre-emphasis post cursor2 the sink will
request adjustments in a similar way to how it requests adjustments to
the voltage swing and pre-emphasis settings.
Add a helper to extract these adjustments on a per-lane basis from the
DPCD link status.
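Per the DP spec the sink packs one 2-bit request per lane into the
ADJUST_REQUEST_POST_CURSOR2 register (DPCD 0x20c), so the extraction
is essentially (a sketch, not the exact helper):

static u8 get_adjust_request_post_cursor(u8 post_cursor2,
                                         unsigned int lane)
{
        /* lane N occupies bits [2N+1:2N] */
        return (post_cursor2 >> (lane << 1)) & 0x3;
}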
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-8-thierry.reding@gmail.com
|
|
Use microsecond sleeps for the clock recovery and channel equalization
delays during link training. The duration of these delays can be from
100 us up to 16 ms. It is rude to busy-loop for that amount of time.
While at it, also convert to standard coding style by putting the
opening braces in a function definition on a new line. Also switch to
using an unsigned int for the AUX read interval to match the data type
of the parameters to usleep_range().
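The conversion amounts to something like this (a sketch assuming the
DP 1.2 encoding of TRAINING_AUX_RD_INTERVAL: 0 means 100 us for clock
recovery, any other value N means N * 4 ms):

static void clock_recovery_delay(unsigned int rd_interval)
{
        unsigned int delay_us = rd_interval ? rd_interval * 4000 : 100;

        /* sleep instead of busy-looping for up to 16 ms */
        usleep_range(delay_us, delay_us * 2);
}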
v2: use correct multiplier for training delays (Philipp Zabel)
v3: clarify data type change in commit message
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-7-thierry.reding@gmail.com
|
|
Add a helper to check if the sink supports the eDP alternate scrambler
reset value of 0xfffe.
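Plausibly something like the following (assuming the eDP DPCD layout
where the capability is a bit in EDP_CONFIGURATION_CAP, DPCD 0x00d):

static inline bool
drm_dp_alternate_scrambler_reset_cap(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
        return dpcd[DP_EDP_CONFIGURATION_CAP] &
               DP_ALTERNATE_SCRAMBLER_RESET_CAP;
}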
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-6-thierry.reding@gmail.com
|
|
Add a helper to check whether the sink supports the ANSI 8B/10B
channel coding, as specified in ANSI X3.230-1994, clause 11.
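Such a helper can be a one-liner over the cached DPCD (a sketch,
assuming the MAIN_LINK_CHANNEL_CODING capability register at DPCD
0x006):

static inline bool
drm_dp_channel_coding_supported(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
        return dpcd[DP_MAIN_LINK_CHANNEL_CODING] & DP_CAP_ANSI_8B10B;
}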
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-5-thierry.reding@gmail.com
|
|
Add a helper that checks for the fast training capability given the DPCD
receiver capabilities blob.
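Likewise a sketch of this helper (assuming the capability lives in the
MAX_DOWNSPREAD register, DPCD 0x003, as the NO_AUX_HANDSHAKE bit):

static inline bool
drm_dp_fast_training_cap(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
        return dpcd[DP_MAX_DOWNSPREAD] &
               DP_NO_AUX_HANDSHAKE_LINK_TRAINING;
}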
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-4-thierry.reding@gmail.com
|
|
It's idiomatic to check the return value of a function call immediately
after the function call, without any blank lines in between, to make it
more obvious that the two lines belong together.
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-3-thierry.reding@gmail.com
|
|
Keeping the list sorted alphabetically makes it much easier to determine
where to add new includes.
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191021143437.1477719-2-thierry.reding@gmail.com
|