author     Linus Torvalds <torvalds@linux-foundation.org>  2022-12-13 11:59:58 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-12-13 11:59:58 -0800
commit     a594533df0f6ca391da003f43d53b336a2d23ffa
tree       ec984c693b0bfc208519c43134f21365797f90ee  /drivers/gpu/drm/scheduler/sched_main.c
parent     cdb9d3537711939e4d8fd0de2889c966f88346eb
parent     66efff515a6500d4b4976fbab3bee8b92a1137fb
Merge tag 'drm-next-2022-12-13' of git://anongit.freedesktop.org/drm/drm
Pull drm updates from Dave Airlie:
"The biggest highlight is that the accel subsystem framework is merged.
Hopefully for 6.3 we will be able to line up a driver to use it.
In drivers land, i915 enables DG2 support by default now, and nouveau
has a big stability refactoring and initial ampere support, AMD
includes new hw IP support and should build on ARM again. There is
also an ofdrm driver to take over offb on the platforms where it's used.
Stuff outside my tree, the dma-buf patches hit a few places, the vc4
firmware changes also do, and i915 has some interactions with MEI for
discrete GPUs. I think all of those should have been acked/reviewed by
relevant parties.
New driver:
- ofdrm - replacement for offb
fbdev:
- add support for nomodeset
fourcc:
- add Vivante tiled modifier
core:
- atomic-helpers: CRTC primary plane test fixes, fb access hooks
- connector: TV API consistency, cmdline parser improvements
- send connector hotplug on cleanup
- sort makefile objects
tests:
- sort kunit tests
- improve DP-MST tests
- add kunit helpers to create a device
sched:
- module param for scheduling policy
- refcounting fix
buddy:
- add back random seed log
ttm:
- convert ttm_resource to size_t
- optimize pool allocations
edid:
- HFVSDB parsing support fixes
- logging/debug improvements
- DSC quirks
dma-buf:
- Add unlocked vmap and attachment mapping
- move drivers to common locking convention
- locking improvements
firmware:
- new API for rPI firmware and vc4
xilinx:
- zynqmp: displayport bridge support
- dpsub fix
bridge:
- adv7533: Remove dynamic lane switching
- it6505: Runtime PM support, sync improvements
- ps8640: Handle AUX defer messages
- tc358775: Drop soft-reset over I2C
panel:
- panel-edp: Add INX N116BGE-EA2 C2 and C4 support.
- Jadard JD9365DA-H3
- NewVision NV3051D
amdgpu:
- DCN support on ARM
- DCN 2.1 secure display
- Sienna Cichlid mode2 reset fixes
- new GC 11.x firmware versions
- drop AMD specific DSC workarounds in favour of drm code
- clang warning fixes
- scheduler rework
- SR-IOV fixes
- GPUVM locking fixes
- fix memory leak in CS IOCTL error path
- flexible array updates
- enable new GC/PSP/SMU/NBIO IP
- GFX preemption support for gfx9
amdkfd:
- cache size fixes
- userptr fixes
- enable cooperative launch on gfx 10.3
- enable GC 11.0.4 KFD support
radeon:
- replace kmap with kmap_local_page
- ACPI ref count fix
- HDA audio notifier support
i915:
- DG2 enabled by default
- MTL enablement work
- hotplug refactoring
- VBT improvements
- Display and watermark refactoring
- ADL-P workaround
- temp disable runtime_pm for discrete
- fix for A380 as a secondary GPU
- Wa_18017747507 for DG2
- CS timestamp support fixes for gen5 and earlier
- never purge busy TTM objects
- use i915_sg_dma_sizes for all backends
- demote GuC kernel contexts to normal priority
- gvt: refactor for new MDEV interface
- enable DC power states on eDP ports
- fix gen 2/3 workarounds
nouveau:
- fix page fault handling
- Ampere acceleration support
- driver stability improvements
- nva3 backlight support
msm:
- MSM_INFO_GET_FLAGS support
- DPU: XR30 and P010 image formats
- Qualcomm SM6115 support
- DSI PHY support for QCM2290
- HDMI: refactored dev init path
- remove exclusive-fence hack
- fix speed-bin detection
- enable clamp to idle on 7c3
- improved hangcheck detection
vmwgfx:
- fb and cursor refactoring
- convert to generic hashtable
- cursor improvements
etnaviv:
- hw workarounds
- softpin MMU fixes
ast:
- atomic gamma LUT support
- convert to SHMEM
lcdif:
- support YUV planes
- Increase DMA burst size
- FIFO threshold tuning
meson:
- fix return type of cvbs mode_valid
mgag200:
- fix PLL setup on some revisions
sun4i:
- A100 and D1 support
udl:
- modesetting improvements
- hot unplug support
vc4:
- support PAL-M
- fix regression preventing 4K @ 60Hz
- fix NULL ptr deref
v3d:
- switch to drm managed resources
renesas:
- RZ/G2L DSI support
- DU Kconfig cleanup
mediatek:
- fixup dpi and hdmi
- MT8188 dpi support
- MT8195 AFBC support
tegra:
- NVDEC hardware on Tegra234 SoC
hdlcd:
- switch to drm managed resources
ingenic:
- fix registration error path
hisilicon:
- convert to drm_mode_init
malidp:
- use managed resources
mtk:
- use drm_mode_init
rockchip:
- use drm_mode_copy"
* tag 'drm-next-2022-12-13' of git://anongit.freedesktop.org/drm/drm: (1397 commits)
drm/amdgpu: fix mmhub register base coding error
drm/amdgpu: add tmz support for GC IP v11.0.4
drm/amdgpu: enable GFX Clock Gating control for GC IP v11.0.4
drm/amdgpu: enable GFX Power Gating for GC IP v11.0.4
drm/amdgpu: enable GFX IP v11.0.4 CG support
drm/amdgpu: Make amdgpu_ring_mux functions as static
drm/amdgpu: generally allow over-commit during BO allocation
drm/amd/display: fix array index out of bound error in DCN32 DML
drm/amd/display: 3.2.215
drm/amd/display: set optimized required for comp buf changes
drm/amd/display: Add debug option to skip PSR CRTC disable
drm/amd/display: correct DML calc error of UrgentLatency
drm/amd/display: correct static_screen_event_mask
drm/amd/display: Ensure commit_streams returns the DC return code
drm/amd/display: read invalid ddc pin status cause engine busy
drm/amd/display: Bypass DET swath fill check for max clocks
drm/amd/display: Disable uclk pstate for subvp pipes
drm/amd/display: Fix DCN2.1 default DSC clocks
drm/amd/display: Enable dp_hdmi21_pcon support
drm/amd/display: prevent seamless boot on displays that don't have the preferred dig
...
Diffstat (limited to 'drivers/gpu/drm/scheduler/sched_main.c')
-rw-r--r--  drivers/gpu/drm/scheduler/sched_main.c  229
1 file changed, 136 insertions, 93 deletions
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index e5a4ecde0063..31f3a1267be4 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -62,6 +62,55 @@
 #define to_drm_sched_job(sched_job) \
 		container_of((sched_job), struct drm_sched_job, queue_node)
 
+int drm_sched_policy = DRM_SCHED_POLICY_FIFO;
+
+/**
+ * DOC: sched_policy (int)
+ * Used to override default entities scheduling policy in a run queue.
+ */
+MODULE_PARM_DESC(sched_policy, "Specify the scheduling policy for entities on a run-queue, " __stringify(DRM_SCHED_POLICY_RR) " = Round Robin, " __stringify(DRM_SCHED_POLICY_FIFO) " = FIFO (default).");
+module_param_named(sched_policy, drm_sched_policy, int, 0444);
+
+static __always_inline bool drm_sched_entity_compare_before(struct rb_node *a,
+							    const struct rb_node *b)
+{
+	struct drm_sched_entity *ent_a = rb_entry((a), struct drm_sched_entity, rb_tree_node);
+	struct drm_sched_entity *ent_b = rb_entry((b), struct drm_sched_entity, rb_tree_node);
+
+	return ktime_before(ent_a->oldest_job_waiting, ent_b->oldest_job_waiting);
+}
+
+static inline void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity)
+{
+	struct drm_sched_rq *rq = entity->rq;
+
+	if (!RB_EMPTY_NODE(&entity->rb_tree_node)) {
+		rb_erase_cached(&entity->rb_tree_node, &rq->rb_tree_root);
+		RB_CLEAR_NODE(&entity->rb_tree_node);
+	}
+}
+
+void drm_sched_rq_update_fifo(struct drm_sched_entity *entity, ktime_t ts)
+{
+	/*
+	 * Both locks need to be grabbed, one to protect from entity->rq change
+	 * for entity from within concurrent drm_sched_entity_select_rq and the
+	 * other to update the rb tree structure.
+	 */
+	spin_lock(&entity->rq_lock);
+	spin_lock(&entity->rq->lock);
+
+	drm_sched_rq_remove_fifo_locked(entity);
+
+	entity->oldest_job_waiting = ts;
+
+	rb_add_cached(&entity->rb_tree_node, &entity->rq->rb_tree_root,
+		      drm_sched_entity_compare_before);
+
+	spin_unlock(&entity->rq->lock);
+	spin_unlock(&entity->rq_lock);
+}
+
 /**
  * drm_sched_rq_init - initialize a given run queue struct
  *
@@ -75,6 +124,7 @@ static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
 {
 	spin_lock_init(&rq->lock);
 	INIT_LIST_HEAD(&rq->entities);
+	rq->rb_tree_root = RB_ROOT_CACHED;
 	rq->current_entity = NULL;
 	rq->sched = sched;
 }
@@ -92,9 +142,12 @@ void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
 {
 	if (!list_empty(&entity->list))
 		return;
+
 	spin_lock(&rq->lock);
+
 	atomic_inc(rq->sched->score);
 	list_add_tail(&entity->list, &rq->entities);
+
 	spin_unlock(&rq->lock);
 }
 
@@ -111,23 +164,30 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
 {
 	if (list_empty(&entity->list))
 		return;
+
 	spin_lock(&rq->lock);
+
 	atomic_dec(rq->sched->score);
 	list_del_init(&entity->list);
+
 	if (rq->current_entity == entity)
 		rq->current_entity = NULL;
+
+	if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
+		drm_sched_rq_remove_fifo_locked(entity);
+
 	spin_unlock(&rq->lock);
 }
 
 /**
- * drm_sched_rq_select_entity - Select an entity which could provide a job to run
+ * drm_sched_rq_select_entity_rr - Select an entity which could provide a job to run
  *
  * @rq: scheduler run queue to check.
  *
  * Try to find a ready entity, returns NULL if none found.
  */
 static struct drm_sched_entity *
-drm_sched_rq_select_entity(struct drm_sched_rq *rq)
+drm_sched_rq_select_entity_rr(struct drm_sched_rq *rq)
 {
 	struct drm_sched_entity *entity;
 
@@ -164,6 +224,34 @@ drm_sched_rq_select_entity(struct drm_sched_rq *rq)
 }
 
 /**
+ * drm_sched_rq_select_entity_fifo - Select an entity which provides a job to run
+ *
+ * @rq: scheduler run queue to check.
+ *
+ * Find oldest waiting ready entity, returns NULL if none found.
+ */
+static struct drm_sched_entity *
+drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq)
+{
+	struct rb_node *rb;
+
+	spin_lock(&rq->lock);
+	for (rb = rb_first_cached(&rq->rb_tree_root); rb; rb = rb_next(rb)) {
+		struct drm_sched_entity *entity;
+
+		entity = rb_entry(rb, struct drm_sched_entity, rb_tree_node);
+		if (drm_sched_entity_is_ready(entity)) {
+			rq->current_entity = entity;
+			reinit_completion(&entity->entity_idle);
+			break;
+		}
+	}
+	spin_unlock(&rq->lock);
+
+	return rb ? rb_entry(rb, struct drm_sched_entity, rb_tree_node) : NULL;
+}
+
+/**
  * drm_sched_job_done - complete a job
  * @s_job: pointer to the job which is done
  *
@@ -198,32 +286,6 @@ static void drm_sched_job_done_cb(struct dma_fence *f, struct dma_fence_cb *cb)
 }
 
 /**
- * drm_sched_dependency_optimized - test if the dependency can be optimized
- *
- * @fence: the dependency fence
- * @entity: the entity which depends on the above fence
- *
- * Returns true if the dependency can be optimized and false otherwise
- */
-bool drm_sched_dependency_optimized(struct dma_fence* fence,
-				    struct drm_sched_entity *entity)
-{
-	struct drm_gpu_scheduler *sched = entity->rq->sched;
-	struct drm_sched_fence *s_fence;
-
-	if (!fence || dma_fence_is_signaled(fence))
-		return false;
-	if (fence->context == entity->fence_context)
-		return true;
-	s_fence = to_drm_sched_fence(fence);
-	if (s_fence && s_fence->sched == sched)
-		return true;
-
-	return false;
-}
-EXPORT_SYMBOL(drm_sched_dependency_optimized);
-
-/**
  * drm_sched_start_timeout - start timeout for reset worker
  *
  * @sched: scheduler instance to start the worker for
@@ -355,27 +417,6 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	}
 }
 
-/**
- * drm_sched_increase_karma - Update sched_entity guilty flag
- *
- * @bad: The job guilty of time out
- *
- * Increment on every hang caused by the 'bad' job. If this exceeds the hang
- * limit of the scheduler then the respective sched entity is marked guilty and
- * jobs from it will not be scheduled further
- */
-void drm_sched_increase_karma(struct drm_sched_job *bad)
-{
-	drm_sched_increase_karma_ext(bad, 1);
-}
-EXPORT_SYMBOL(drm_sched_increase_karma);
-
-void drm_sched_reset_karma(struct drm_sched_job *bad)
-{
-	drm_sched_increase_karma_ext(bad, 0);
-}
-EXPORT_SYMBOL(drm_sched_reset_karma);
-
 /**
  * drm_sched_stop - stop the scheduler
  *
@@ -517,31 +558,14 @@ EXPORT_SYMBOL(drm_sched_start);
  */
 void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
 {
-	drm_sched_resubmit_jobs_ext(sched, INT_MAX);
-}
-EXPORT_SYMBOL(drm_sched_resubmit_jobs);
-
-/**
- * drm_sched_resubmit_jobs_ext - helper to relunch certain number of jobs from mirror ring list
- *
- * @sched: scheduler instance
- * @max: job numbers to relaunch
- *
- */
-void drm_sched_resubmit_jobs_ext(struct drm_gpu_scheduler *sched, int max)
-{
 	struct drm_sched_job *s_job, *tmp;
 	uint64_t guilty_context;
 	bool found_guilty = false;
 	struct dma_fence *fence;
-	int i = 0;
 
 	list_for_each_entry_safe(s_job, tmp, &sched->pending_list, list) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 
-		if (i >= max)
-			break;
-
 		if (!found_guilty && atomic_read(&s_job->karma) > sched->hang_limit) {
 			found_guilty = true;
 			guilty_context = s_job->s_fence->scheduled.context;
@@ -551,7 +575,6 @@ void drm_sched_resubmit_jobs_ext(struct drm_gpu_scheduler *sched, int max)
 			dma_fence_set_error(&s_fence->finished, -ECANCELED);
 
 		fence = sched->ops->run_job(s_job);
-		i++;
 
 		if (IS_ERR_OR_NULL(fence)) {
 			if (IS_ERR(fence))
@@ -567,7 +590,7 @@ void drm_sched_resubmit_jobs_ext(struct drm_gpu_scheduler *sched, int max)
 		}
 	}
 }
-EXPORT_SYMBOL(drm_sched_resubmit_jobs_ext);
+EXPORT_SYMBOL(drm_sched_resubmit_jobs);
 
 /**
  * drm_sched_job_init - init a scheduler job
@@ -685,32 +708,28 @@ int drm_sched_job_add_dependency(struct drm_sched_job *job,
 EXPORT_SYMBOL(drm_sched_job_add_dependency);
 
 /**
- * drm_sched_job_add_implicit_dependencies - adds implicit dependencies as job
- *   dependencies
+ * drm_sched_job_add_resv_dependencies - add all fences from the resv to the job
  * @job: scheduler job to add the dependencies to
- * @obj: the gem object to add new dependencies from.
- * @write: whether the job might write the object (so we need to depend on
- *   shared fences in the reservation object).
+ * @resv: the dma_resv object to get the fences from
+ * @usage: the dma_resv_usage to use to filter the fences
  *
- * This should be called after drm_gem_lock_reservations() on your array of
- * GEM objects used in the job but before updating the reservations with your
- * own fences.
+ * This adds all fences matching the given usage from @resv to @job.
+ * Must be called with the @resv lock held.
  *
 * Returns:
 * 0 on success, or an error on failing to expand the array.
  */
-int drm_sched_job_add_implicit_dependencies(struct drm_sched_job *job,
-					    struct drm_gem_object *obj,
-					    bool write)
+int drm_sched_job_add_resv_dependencies(struct drm_sched_job *job,
+					struct dma_resv *resv,
+					enum dma_resv_usage usage)
 {
 	struct dma_resv_iter cursor;
 	struct dma_fence *fence;
 	int ret;
 
-	dma_resv_assert_held(obj->resv);
+	dma_resv_assert_held(resv);
 
-	dma_resv_for_each_fence(&cursor, obj->resv, dma_resv_usage_rw(write),
-				fence) {
+	dma_resv_for_each_fence(&cursor, resv, usage, fence) {
 		/* Make sure to grab an additional ref on the added fence */
 		dma_fence_get(fence);
 		ret = drm_sched_job_add_dependency(job, fence);
@@ -721,8 +740,31 @@ int drm_sched_job_add_implicit_dependencies(struct drm_sched_job *job,
 	}
 
 	return 0;
 }
-EXPORT_SYMBOL(drm_sched_job_add_implicit_dependencies);
+EXPORT_SYMBOL(drm_sched_job_add_resv_dependencies);
+/**
+ * drm_sched_job_add_implicit_dependencies - adds implicit dependencies as job
+ *   dependencies
+ * @job: scheduler job to add the dependencies to
+ * @obj: the gem object to add new dependencies from.
+ * @write: whether the job might write the object (so we need to depend on
+ *   shared fences in the reservation object).
+ *
+ * This should be called after drm_gem_lock_reservations() on your array of
+ * GEM objects used in the job but before updating the reservations with your
+ * own fences.
+ *
+ * Returns:
+ * 0 on success, or an error on failing to expand the array.
+ */
+int drm_sched_job_add_implicit_dependencies(struct drm_sched_job *job,
+					    struct drm_gem_object *obj,
+					    bool write)
+{
+	return drm_sched_job_add_resv_dependencies(job, obj->resv,
+						   dma_resv_usage_rw(write));
+}
+EXPORT_SYMBOL(drm_sched_job_add_implicit_dependencies);
 
 /**
  * drm_sched_job_cleanup - clean up scheduler job resources
@@ -803,7 +845,9 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
 
 	/* Kernel run queue has higher priority than normal run queue*/
 	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
-		entity = drm_sched_rq_select_entity(&sched->sched_rq[i]);
+		entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ?
+			drm_sched_rq_select_entity_fifo(&sched->sched_rq[i]) :
+			drm_sched_rq_select_entity_rr(&sched->sched_rq[i]);
 		if (entity)
 			break;
 	}
@@ -1082,13 +1126,15 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
 EXPORT_SYMBOL(drm_sched_fini);
 
 /**
- * drm_sched_increase_karma_ext - Update sched_entity guilty flag
+ * drm_sched_increase_karma - Update sched_entity guilty flag
  *
  * @bad: The job guilty of time out
- * @type: type for increase/reset karma
  *
+ * Increment on every hang caused by the 'bad' job. If this exceeds the hang
+ * limit of the scheduler then the respective sched entity is marked guilty and
+ * jobs from it will not be scheduled further
  */
-void drm_sched_increase_karma_ext(struct drm_sched_job *bad, int type)
+void drm_sched_increase_karma(struct drm_sched_job *bad)
 {
 	int i;
 	struct drm_sched_entity *tmp;
@@ -1100,10 +1146,7 @@ void drm_sched_increase_karma_ext(struct drm_sched_job *bad, int type)
 	 * corrupt but keep in mind that kernel jobs always considered good.
 	 */
 	if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) {
-		if (type == 0)
-			atomic_set(&bad->karma, 0);
-		else if (type == 1)
-			atomic_inc(&bad->karma);
+		atomic_inc(&bad->karma);
 
 		for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_KERNEL;
 		     i++) {
@@ -1114,7 +1157,7 @@ void drm_sched_increase_karma_ext(struct drm_sched_job *bad, int type)
 			if (bad->s_fence->scheduled.context ==
 			    entity->fence_context) {
 				if (entity->guilty)
-					atomic_set(entity->guilty, type);
+					atomic_set(entity->guilty, 1);
 				break;
 			}
 		}
@@ -1124,4 +1167,4 @@ void drm_sched_increase_karma_ext(struct drm_sched_job *bad, int type)
 		}
 	}
 }
-EXPORT_SYMBOL(drm_sched_increase_karma_ext);
+EXPORT_SYMBOL(drm_sched_increase_karma);
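Editor's note, not part of the commit: a minimal sketch of how a driver-side submit path could use the drm_sched_job_add_resv_dependencies() helper introduced in the diff above. The my_job wrapper and my_job_add_bo_deps() are hypothetical names used only for illustration; the scheduler and dma_resv calls themselves come from the API shown in this diff and the existing dma-resv interface.

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>
#include <drm/gpu_scheduler.h>

/* Hypothetical driver-side job wrapper; not part of this patch. */
struct my_job {
	struct drm_sched_job base;
};

/*
 * Collect every fence the job must wait on from one BO's reservation
 * object. The caller is expected to hold obj->resv, which the new helper
 * asserts via dma_resv_assert_held().
 */
static int my_job_add_bo_deps(struct my_job *job,
			      struct drm_gem_object *obj, bool write)
{
	dma_resv_assert_held(obj->resv);

	/*
	 * dma_resv_usage_rw() picks DMA_RESV_USAGE_READ or _WRITE depending
	 * on whether the job writes the BO, mirroring what
	 * drm_sched_job_add_implicit_dependencies() now does internally.
	 */
	return drm_sched_job_add_resv_dependencies(&job->base, obj->resv,
						   dma_resv_usage_rw(write));
}

Note also that the new sched_policy parameter is registered with 0444 permissions, so the scheduling policy can only be chosen at module load time rather than switched at runtime (presumably gpu_sched.sched_policy=<value> on the kernel command line, assuming the scheduler is built as the gpu-sched module).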