| field | value | date |
|---|---|---|
| author | Matthew Auld <matthew.auld@intel.com> | 2025-10-22 17:38:32 +0100 |
| committer | Matthew Auld <matthew.auld@intel.com> | 2025-10-23 10:48:36 +0100 |
| commit | fb188d8b00fc221fcc744109dfa29b9945c91913 | |
| tree | bcc11a98e037db1e31efc7e8a1bb3142fbaadcbf | |
| parent | aaeef7a9c8b9206039a588a23e4dc11dddbefe2d | |
drm/xe/migrate: fix chunk handling for 2M page emit
On systems with PAGE_SIZE > 4K, the chunk will likely be rounded down to
zero if we have, say, a single 2M page (so one huge PTE), since we also
try to align the chunk to PAGE_SIZE / XE_PAGE_SIZE, which is 16 on 64K
systems. Make the ALIGN_DOWN conditional on 4K PTEs (level zero), the
only case where we can encounter gpu_page_size < PAGE_SIZE.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://lore.kernel.org/r/20251022163836.191405-4-matthew.auld@intel.com
 drivers/gpu/drm/xe/xe_migrate.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index 451fae0106e5..ce5543fa7a52 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -1804,7 +1804,9 @@ static void build_pt_update_batch_sram(struct xe_migrate *m,
 	while (ptes) {
 		u32 chunk = min(MAX_PTE_PER_SDI, ptes);
 
-		chunk = ALIGN_DOWN(chunk, PAGE_SIZE / XE_PAGE_SIZE);
+		if (!level)
+			chunk = ALIGN_DOWN(chunk, PAGE_SIZE / XE_PAGE_SIZE);
+
 		bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_NUM_QW(chunk);
 		bb->cs[bb->len++] = pt_offset;
 		bb->cs[bb->len++] = 0;
```
