author | Du, Changbin <changbin.du@intel.com> | 2016-10-27 11:10:31 +0800
---|---|---
committer | Zhenyu Wang <zhenyuw@linux.intel.com> | 2016-10-27 11:20:42 +0800
commit | e45d7b7f47a4849a5d3d55a2cf5802a72924d37b (patch) |
tree | 5a341c6270949ee05150beeff5b53a0ae4990f9a /drivers/gpu/drm/i915/gvt |
parent | 6fb5082a8c4243c22ecf310b9f3add8371dfa26e (diff) |
drm/i915/gvt: fix nested sleeping issue
We cannot use a blocking method such as mutex_lock inside a wait loop.
Here we invoke pick_next_workload(), which needs to acquire a mutex,
in our "condition" expression. That starts another going-to-sleep
sequence and changes the task state while we are already in one.
This is dangerous. Let's rewrite the wait sequence to avoid nested
sleeping.
v2: fix do...while loop exit condition (zhenyu)
v3: rebase to gvt-staging branch
Signed-off-by: Du, Changbin <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
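The sketch below illustrates the problem the commit message describes, using a hypothetical work queue, mutex, and pick_next_work() helper rather than the actual gvt code. wait_event_interruptible() re-evaluates its condition after the task state has been set to TASK_INTERRUPTIBLE, so a condition that takes a mutex blocks inside the going-to-sleep sequence; with CONFIG_DEBUG_ATOMIC_SLEEP enabled this shows up as a "do not call blocking ops when !TASK_RUNNING" warning.

```c
#include <linux/kthread.h>
#include <linux/mutex.h>
#include <linux/wait.h>

/* Hypothetical names for illustration only; this is not the gvt code. */
static DECLARE_WAIT_QUEUE_HEAD(work_waitq);
static DEFINE_MUTEX(work_mutex);

static void *pick_next_work(void)
{
	void *work;

	mutex_lock(&work_mutex);	/* may sleep: this is the nested sleep */
	work = NULL;			/* placeholder: dequeue real work here */
	mutex_unlock(&work_mutex);
	return work;
}

static int worker_thread(void *data)
{
	void *work = NULL;

	while (!kthread_should_stop()) {
		/*
		 * BROKEN: after the first check, the condition is evaluated
		 * with the task state already set to TASK_INTERRUPTIBLE, so
		 * the mutex_lock() inside pick_next_work() blocks while the
		 * task is nominally asleep.
		 */
		wait_event_interruptible(work_waitq,
					 kthread_should_stop() ||
					 (work = pick_next_work()));
		if (kthread_should_stop())
			break;
		/* ... process work ... */
	}
	return 0;
}
```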
Diffstat (limited to 'drivers/gpu/drm/i915/gvt')
-rw-r--r-- | drivers/gpu/drm/i915/gvt/scheduler.c | 19 |
1 file changed, 12 insertions, 7 deletions
diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index e96eaeebeb0a..f7e320b9b17a 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -402,19 +402,24 @@ static int workload_thread(void *priv)
 	struct intel_vgpu_workload *workload = NULL;
 	int ret;
 	bool need_force_wake = IS_SKYLAKE(gvt->dev_priv);
+	DEFINE_WAIT_FUNC(wait, woken_wake_function);
 
 	kfree(p);
 
 	gvt_dbg_core("workload thread for ring %d started\n", ring_id);
 
 	while (!kthread_should_stop()) {
-		ret = wait_event_interruptible(scheduler->waitq[ring_id],
-				kthread_should_stop() ||
-				(workload = pick_next_workload(gvt, ring_id)));
-
-		WARN_ON_ONCE(ret);
-
-		if (kthread_should_stop())
+		add_wait_queue(&scheduler->waitq[ring_id], &wait);
+		do {
+			workload = pick_next_workload(gvt, ring_id);
+			if (workload)
+				break;
+			wait_woken(&wait, TASK_INTERRUPTIBLE,
+				   MAX_SCHEDULE_TIMEOUT);
+		} while (!kthread_should_stop());
+		remove_wait_queue(&scheduler->waitq[ring_id], &wait);
+
+		if (!workload)
 			break;
 
 		mutex_lock(&scheduler_mutex);
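For reference, here is a minimal sketch of the woken-wait pattern the patch switches to, again with hypothetical names (work_waitq, pick_next_work()) rather than the gvt ones. The condition is now checked while the task is still TASK_RUNNING, so it may take mutexes safely, and wait_woken() only sleeps if woken_wake_function() has not already marked the wait entry as woken, so a wakeup arriving between the check and the sleep is not lost.

```c
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* Hypothetical names for illustration only; this is not the gvt code. */
static DECLARE_WAIT_QUEUE_HEAD(work_waitq);

static void *pick_next_work(void)
{
	/* Placeholder: dequeue real work here, taking mutexes as needed. */
	return NULL;
}

static int worker_thread(void *data)
{
	DEFINE_WAIT_FUNC(wait, woken_wake_function);
	void *work;

	while (!kthread_should_stop()) {
		add_wait_queue(&work_waitq, &wait);
		do {
			/* Task state is TASK_RUNNING here, so blocking is fine. */
			work = pick_next_work();
			if (work)
				break;
			/* Sleeps unless a wakeup already set WQ_FLAG_WOKEN. */
			wait_woken(&wait, TASK_INTERRUPTIBLE,
				   MAX_SCHEDULE_TIMEOUT);
		} while (!kthread_should_stop());
		remove_wait_queue(&work_waitq, &wait);

		if (!work)
			break;
		/* ... process the work item ... */
	}
	return 0;
}
```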