author    Chris Wilson <chris@chris-wilson.co.uk>  2018-08-28 16:27:02 +0100
committer Chris Wilson <chris@chris-wilson.co.uk>  2018-08-29 13:49:08 +0100
commit    9e4fa01221b3230320135072ad31ea809ca31147 (patch)
tree      cf55e999e87d16a214426b67d785b6507e36dd5d /drivers/gpu/drm/i915/i915_gem.h
parent    d8c5d29f21bf0bc690fd8c26c54197221e235bc9 (diff)
drm/i915/execlists: Flush tasklet directly from reset-finish
On finishing the reset, the intention is to restart the GPU before we relinquish the forcewake taken to handle the reset - the goal being the GPU reloads a context before it is allowed to sleep. For this purpose, we used tasklet_flush() which although it accomplished the goal of restarting the GPU, carried with it a sting in its tail: it cleared the TASKLET_STATE_SCHED bit. This meant that if another CPU queued a new request to this engine, we would clear the flag and later attempt to requeue the tasklet on the local CPU, breaking the per-cpu softirq lists.

Remove the dangerous tasklet_kill() and just run the tasklet func directly as we know it is safe to do so (the tasklets are internally locked to allow mixed usage from direct submission).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180828152702.27536-1-chris@chris-wilson.co.uk
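For readers unfamiliar with the pattern, below is a minimal C sketch of what "run the tasklet func directly" means in a reset-finish path. This is an illustration only, assuming the execlists->tasklet and execlists->queue fields used by i915 around this time; it is not the verbatim intel_lrc.c hunk from this commit.

	/* Sketch only: direct invocation of a disabled tasklet's callback. */
	static void execlists_reset_finish(struct intel_engine_cs *engine)
	{
		struct intel_engine_execlists * const execlists = &engine->execlists;

		/*
		 * The tasklet is still disabled here (count > 0), so the softirq
		 * will not run it concurrently. Invoking the callback directly is
		 * safe because the submission tasklet takes its own lock, and,
		 * unlike tasklet_kill(), this leaves TASKLET_STATE_SCHED untouched
		 * for any other CPU that has already queued the tasklet.
		 */
		if (!RB_EMPTY_ROOT(&execlists->queue.rb_root))
			execlists->tasklet.func(execlists->tasklet.data);

		tasklet_enable(&execlists->tasklet);
	}

The key design point the message makes is that direct invocation restarts the GPU without clearing the scheduled bit, so a concurrent tasklet_schedule() from another CPU still works as expected.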
Diffstat (limited to 'drivers/gpu/drm/i915/i915_gem.h')
-rw-r--r--  drivers/gpu/drm/i915/i915_gem.h | 6 ------
1 file changed, 0 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.h b/drivers/gpu/drm/i915/i915_gem.h
index e46592956872..599c4f6eb1ea 100644
--- a/drivers/gpu/drm/i915/i915_gem.h
+++ b/drivers/gpu/drm/i915/i915_gem.h
@@ -82,12 +82,6 @@ static inline void __tasklet_disable_sync_once(struct tasklet_struct *t)
 	tasklet_unlock_wait(t);
 }
 
-static inline void __tasklet_enable_sync_once(struct tasklet_struct *t)
-{
-	if (atomic_dec_return(&t->count) == 0)
-		tasklet_kill(t);
-}
-
 static inline bool __tasklet_is_enabled(const struct tasklet_struct *t)
 {
 	return !atomic_read(&t->count);