| author | Matthew Brost <matthew.brost@intel.com> | 2023-07-19 14:10:11 -0700 |
|---|---|---|
| committer | Rodrigo Vivi <rodrigo.vivi@intel.com> | 2023-12-21 11:37:52 -0500 |
| commit | fd84041d094ce8feb730911ca9c7fdfff1d4fb94 (patch) | |
| tree | 5e539be769c6e6b6b065a5dcdcfce39e9c52ccd5 /drivers/gpu/drm/xe/xe_migrate.h | |
| parent | 845f64bdbfc96cefd7070621b18ff8f50c7857fb (diff) | |
drm/xe: Make bind engines safe
We currently have a race between bind engines which can result in
corrupted page tables leading to faults.
A simple example:
bind A 0x0000-0x1000, engine A, has unsatisfied in-fence
bind B 0x1000-0x2000, engine B, no in-fences
exec A uses 0x1000-0x2000
Bind B will pass bind A and exec A will fault. This occurs as bind A
programs the root of the page table in a bind job which is held up by an
in-fence. Bind B in this case just programs a leaf entry of the
structure.
To fix this, use the range-fence utility to track cross-bind-engine conflicts. In
the above example, bind A would insert a dependency into the range-fence
tree with a key of 0x0-0x7fffffffff; bind B would find that dependency,
and its bind job would be scheduled behind the unsatisfied in-fence and
bind A's job.
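
To make the mechanism concrete, here is a minimal, self-contained sketch of the range-fence idea. It is not the xe driver's actual range-fence API: the toy_range_fence / toy_rft_* names are hypothetical, and a linked list stands in for whatever tree structure the real utility uses. Bind A records the whole VA range because its job rewrites the page-table root, so bind B's overlap query picks up A's fence and B's job is ordered behind it.

```c
/* Toy illustration of range-fence conflict tracking (hypothetical names). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One outstanding bind, keyed by the VA range its page-table job touches. */
struct toy_range_fence {
	uint64_t start, last;		/* inclusive VA range */
	const char *job;		/* stand-in for the bind job's fence */
	struct toy_range_fence *next;
};

struct toy_range_fence_tree {
	struct toy_range_fence *head;	/* a list stands in for a tree */
};

/* Record that @job updates page tables covering [start, last]. */
static void toy_rft_insert(struct toy_range_fence_tree *t, uint64_t start,
			   uint64_t last, const char *job)
{
	struct toy_range_fence *rf = malloc(sizeof(*rf));

	rf->start = start;
	rf->last = last;
	rf->job = job;
	rf->next = t->head;
	t->head = rf;
}

/*
 * Before scheduling a new bind over [start, last], collect every earlier
 * bind whose range overlaps: the new job must wait on those fences.
 */
static void toy_rft_add_deps(struct toy_range_fence_tree *t, uint64_t start,
			     uint64_t last, const char *job)
{
	for (struct toy_range_fence *rf = t->head; rf; rf = rf->next)
		if (rf->start <= last && start <= rf->last)
			printf("%s waits on %s [0x%llx-0x%llx]\n", job, rf->job,
			       (unsigned long long)rf->start,
			       (unsigned long long)rf->last);
}

int main(void)
{
	struct toy_range_fence_tree tree = { .head = NULL };

	/*
	 * Bind A programs the page-table root, so it covers the whole VA
	 * space even though the user asked for 0x0000-0x1000.
	 */
	toy_rft_add_deps(&tree, 0x0, 0x7fffffffffull, "bind A");
	toy_rft_insert(&tree, 0x0, 0x7fffffffffull, "bind A");

	/* Bind B only touches a leaf, but it overlaps A's recorded range. */
	toy_rft_add_deps(&tree, 0x1000, 0x1fff, "bind B");
	toy_rft_insert(&tree, 0x1000, 0x1fff, "bind B");

	return 0;
}
```

With these inputs, bind A registers no dependencies while bind B reports that it must wait on bind A's job, mirroring the ordering described above.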
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Diffstat (limited to 'drivers/gpu/drm/xe/xe_migrate.h')
-rw-r--r-- | drivers/gpu/drm/xe/xe_migrate.h | 8 |
1 file changed, 8 insertions, 0 deletions
```diff
diff --git a/drivers/gpu/drm/xe/xe_migrate.h b/drivers/gpu/drm/xe/xe_migrate.h
index 204337ea3b4e..0d62aff6421c 100644
--- a/drivers/gpu/drm/xe/xe_migrate.h
+++ b/drivers/gpu/drm/xe/xe_migrate.h
@@ -69,6 +69,14 @@ struct xe_migrate_pt_update {
 	const struct xe_migrate_pt_update_ops *ops;
 	/** @vma: The vma we're updating the pagetable for. */
 	struct xe_vma *vma;
+	/** @job: The job if a GPU page-table update. NULL otherwise */
+	struct xe_sched_job *job;
+	/** @start: Start of update for the range fence */
+	u64 start;
+	/** @last: Last of update for the range fence */
+	u64 last;
+	/** @tile_id: Tile ID of the update */
+	u8 tile_id;
 };
 
 struct xe_migrate *xe_migrate_init(struct xe_tile *tile);
```