Add optional reset support, which is mentioned in vivante,gc.yaml, to
allow the driver to work on SoCs whose reset signal is asserted by default.
Signed-off-by: Philippe Lecointre <philippe.lecointre@thalesgroup.com>
Acked-by: Simon Lenain <simon.lenain@thalesgroup.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Large draws can make the GPU appear stuck to the current hangcheck
logic, as the FE address will not move until the draw is finished. However,
the FE has a debug register which records the current primitive ID within
a draw. Using this debug register we can extend the timeout as long as the
draw progresses.
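A minimal sketch of the idea; the register name VIVS_FE_PRIMITIVE_ID and
the hangcheck_* fields are illustrative, not the exact driver symbols:

  /* Called from the timeout handler; true means give the job more time. */
  static bool etnaviv_fe_making_progress(struct etnaviv_gpu *gpu)
  {
      u32 dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);
      u32 primid = gpu_read(gpu, VIVS_FE_PRIMITIVE_ID);

      /* FE fetch address moved: the frontend is consuming commands. */
      if (dma_addr != gpu->hangcheck_dma_addr) {
          gpu->hangcheck_dma_addr = dma_addr;
          gpu->hangcheck_primid = primid;
          return true;
      }

      /* Address stuck on one large draw, but the primitive ID
       * advanced: the draw itself is progressing. */
      if (primid != gpu->hangcheck_primid) {
          gpu->hangcheck_primid = primid;
          return true;
      }

      return false;
  }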
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
On some hardware (such as the GC7000 rev 6009), these registers need to be
read twice to return the correct value. Hide that in gpu_read().
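A sketch of how the quirk can be hidden in the accessor; the
needs_double_read() predicate is a hypothetical helper standing in for
whatever condition the driver actually checks:

  static u32 gpu_read(struct etnaviv_gpu *gpu, u32 reg)
  {
      /* On affected cores the first read of some registers returns
       * a stale value; issue a dummy read and discard it. */
      if (needs_double_read(gpu, reg))
          readl(gpu->mmio + reg);

      return readl(gpu->mmio + reg);
  }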
Signed-off-by: Derek Foreman <derek.foreman@collabora.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
This reverts commit 1dccdba084897443d116508a8ed71e0ac8a031a4.
In userspace a different approach was chosen: hwdb. As a result, there
is no need for these values.
Signed-off-by: Christian Gmeiner <cgmeiner@igalia.com>
Reviewed-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
These will be needed to make use of the NN and TP units in the NPUs
based on Vivante IP.
Also fix the number of NN cores in the VIPNano-qi.
Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
Acked-by: Christian Gmeiner <cgmeiner@igalia.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
The GPU is halted when it hits an MMU exception, so there is no point in
waiting for the job timeout to expire or trying to work out if the GPU is
still making progress in the timeout handler, as we know that the GPU
won't make any more progress.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
|
|
Now that it is only used to track the driver-internal state of
the MMU global and cmdbuf objects, we can get rid of this property
by making the free/fini functions of those objects safe to call
on an uninitialized object.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
|
|
Instead of only tracking whether the FE is running, use an enum to better
describe the various states the GPU can be in. This allows some
additional validation to make sure that functions that expect a
certain GPU state are only called when the GPU is actually in that
state.
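A sketch of what such an enum and its validation can look like; the
state names are illustrative:

  enum etnaviv_gpu_state {
      ETNA_GPU_STATE_UNKNOWN = 0,     /* not yet probed */
      ETNA_GPU_STATE_IDENTIFIED,      /* identity read from hardware */
      ETNA_GPU_STATE_RESET,           /* core held in a clean reset state */
      ETNA_GPU_STATE_INITIALIZED,     /* clocks, MMU and buffers set up */
      ETNA_GPU_STATE_RUNNING,         /* FE processing the ring */
      ETNA_GPU_STATE_FAULT,           /* hang or MMU exception observed */
  };

  /* Functions that expect a certain state can then assert it, e.g.: */
  WARN_ON(gpu->state != ETNA_GPU_STATE_INITIALIZED &&
          gpu->state != ETNA_GPU_STATE_RUNNING);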
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
|
|
Currently the FE is spinning way too fast when polling for new work in
the FE idleloop. As each poll fetches 16 bytes from memory, a GPU running
at 1GHz with the current setting of 200 wait cycles between fetches causes
80 MB/s of memory traffic just to check for new work when the GPU is
otherwise idle, which is more FE traffic than in some GPU-loaded cases.
Significantly increase the number of wait cycles to slow down the poll
interval to ~30µs, limiting the FE idle memory traffic to 512 KB/s, while
providing a max latency which should not hurt most use-cases. The FE WAIT
command seems to have some unknown discrete steps in the wait cycles so
we may over/undershoot the target a bit, but that should be harmless.
If the GPU core base frequency is unknown keep the 200 wait cycles as
a sane default.
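The arithmetic above, as a sketch (function name illustrative): 200
cycles between 16-byte fetches at 1GHz is 5M polls/s * 16 B = 80 MB/s,
while one poll per ~30µs caps the traffic at roughly 512 KB/s:

  static u32 etnaviv_fe_idle_wait_cycles(unsigned long core_rate_hz)
  {
      /* Core clock unknown: keep the old conservative default. */
      if (!core_rate_hz)
          return 200;

      /* Number of core cycles in a ~30µs poll interval. */
      return core_rate_hz / (1000000 / 30);
  }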
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Sui Jingfeng <suijingfeng@loongson.cn>
Tested-by: Sui Jingfeng <suijingfeng@loongson.cn>
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
|
|
This simplifies the driver code a bit, as XArray already provides
internal locking. IDRs are implemented using XArrays anyway, so
this drops one level of unneeded abstraction.
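A sketch of the XArray-based fence ID allocation this enables (field
names illustrative); note there is no external lock, as the XArray
serializes internally:

  #include <linux/xarray.h>

  /* at init time: xa_init_flags(&gpu->user_fences, XA_FLAGS_ALLOC); */

  static int etnaviv_fence_id_alloc(struct etnaviv_gpu *gpu,
                                    struct dma_fence *fence, u32 *id)
  {
      return xa_alloc_cyclic(&gpu->user_fences, id, fence,
                             xa_limit_32b, &gpu->next_user_fence,
                             GFP_KERNEL);
  }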
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
The fence lock currently protects two distinct things. It protects the fence
IDR from concurrent inserts and removes, and it also keeps drm_sched_job_arm
and drm_sched_entity_push_job in one atomic section to guarantee the fence
seqno monotonicity. Split the lock into two, one for each of those purposes.
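A sketch of the second half, assuming a dedicated gpu->sched_lock mutex
(name illustrative): arm and push stay in one atomic section so the
sched_fence seqnos match the queue order:

  static struct dma_fence *
  etnaviv_sched_push_job(struct etnaviv_gpu *gpu,
                         struct etnaviv_gem_submit *submit)
  {
      struct dma_fence *fence;

      mutex_lock(&gpu->sched_lock);
      drm_sched_job_arm(&submit->sched_job);
      fence = dma_fence_get(&submit->sched_job.s_fence->finished);
      drm_sched_entity_push_job(&submit->sched_job);
      mutex_unlock(&gpu->sched_lock);

      return fence;
  }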
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
We will use these for differentiating between GPUs and NPUs, as the
downstream driver does.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Older GC300 revisions have their power registers at an offset of 0x200
rather than 0x100. Add new gpu_read_power and gpu_write_power functions
to encapsulate accesses to the power addresses and fix the addresses.
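A sketch of the accessors; gpu_has_old_power_offsets() is a hypothetical
helper standing in for the actual GC300 revision check:

  static u32 gpu_read_power(struct etnaviv_gpu *gpu, u32 reg)
  {
      /* older GC300: power registers sit 0x100 higher than defined */
      u32 off = gpu_has_old_power_offsets(gpu) ? 0x100 : 0;

      return readl(gpu->mmio + reg + off);
  }

  static void gpu_write_power(struct etnaviv_gpu *gpu, u32 reg, u32 data)
  {
      u32 off = gpu_has_old_power_offsets(gpu) ? 0x100 : 0;

      writel(data, gpu->mmio + reg + off);
  }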
Signed-off-by: Doug Brown <doug@schmorgal.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Track the pid per submit, so we can print the name and cmdline of
the task which submitted the batch that caused the GPU to hang.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Some GPU-heavy test programs manage to trigger the hangcheck quite often.
If there are no other GPU users in the system and the test program
exhibits a very regular structure in the commandstreams that are being
submitted, we can end up with two distinct submits managing to trigger
the hangcheck with the FE in a very similar address range. This leads
the hangcheck to believe that the GPU is stuck, while in reality the GPU
is already busy working on a different job. To avoid those spurious
GPU resets, also remember and consider the last completed fence seqno
in the hang check.
Reported-by: Joerg Albert <joerg.albert@iav.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
While the DMA frontend can only be active when the MMU context is set, the
reverse isn't necessarily true, as the frontend can be stopped while the
MMU state is kept. Stop treating mmu_context being set as an indication that
the frontend is running and instead add an explicit property.
Cc: stable@vger.kernel.org # 5.4
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Michael Walle <michael@walle.cc>
Tested-by: Marek Vasut <marex@denx.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
They will be used for extended HWDB support.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
struct timespec is being removed from the kernel because it often leads
to code that is not y2038-safe.
In the etnaviv driver, monotonic timestamps are used, which do not suffer
from overflow, but the usage of timespec here gets in the way of removing
the interface completely.
Pass down the user-supplied 64-bit value directly rather than converting
it through an intermediate timespec.
The conversion is transparent for all regular CLOCK_MONOTONIC values,
but is a small change in behavior for excessively large values: the
existing code would treat e.g. tv_sec=0x100000000 the same as tv_sec=0
and not block, while the new code would block for up to 2^31
seconds. The new behavior is more logical here, but if it causes problems,
the truncation can be put back.
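A sketch of the resulting conversion, assuming the user-supplied
tv_sec/tv_nsec pair is passed through as 64-bit values:

  #include <linux/ktime.h>
  #include <linux/jiffies.h>

  static unsigned long etnaviv_timeout_to_jiffies(s64 tv_sec, s64 tv_nsec)
  {
      /* absolute CLOCK_MONOTONIC deadline, no 32-bit truncation */
      ktime_t deadline = ktime_set(tv_sec, tv_nsec);
      ktime_t now = ktime_get();

      if (ktime_compare(deadline, now) <= 0)
          return 0;   /* already expired: don't block */

      return nsecs_to_jiffies(ktime_sub(deadline, now));
  }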
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
This builds on top of the MMU contexts introduced earlier. Instead of having
one context per GPU core, each GPU client receives its own context.
On MMUv1 this still means a single shared pagetable set is used by all
clients, but on MMUv2 there is now a distinct set of pagetables for each
client. As the command fetch is also translated via the MMU on MMUv2 the
kernel command ringbuffer is mapped into each of the client pagetables.
As the MMU context switch is a bit of a heavy operation, due to the needed
cache and TLB flushing, this patch implements a lazy way of switching the
MMU context. The kernel does not have its own MMU context, but reuses the
last client context for all of its operations. This has some visible impact,
as the GPU can now only be started once a client has submitted some work and
its MMU context has been assigned. Also the MMU context has a different
lifetime than the general client context, as the GPU might still execute the
kernel command buffer in the context of a client even after the client has
completed all GPU work and has been terminated. Only when the GPU is runtime
suspended or switches to another client's MMU context is the old context
freed up.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
This reworks the MMU handling to make it possible to have multiple MMU contexts.
A context is basically one instance of GPU page tables. Currently we have one
set of page tables per GPU, which isn't all that clever, as it has the
following two consequences:
1. All GPU clients (aka processes) are sharing the same pagetables, which means
there is no isolation between clients, but only between GPU assigned memory
spaces and the rest of the system. Better than nothing, but also not great.
2. Clients operating on the same set of buffers with different etnaviv GPU
cores, e.g. a workload using both the 2D and 3D GPU, need to map the used
buffers into the pagetable sets of each used GPU.
This patch reworks all the MMU handling to introduce the abstraction of the
MMU context. A context can be shared across different GPU cores, as long as
they have compatible MMU implementations, which is the case for all systems
with Vivante GPUs seen in the wild.
As MMUv1 is not able to change pagetables on the fly without a
"stop the world" operation (stop the GPU, change the pagetables via the CPU,
restart the GPU), the implementation introduces a shared context on
MMUv1, which is returned whenever there is a request for a new context.
This patch assigns an MMU context to each GPU, so on MMUv2 systems there is
still one set of pagetables per GPU, but due to the shared context MMUv1
systems see a change in behavior as now a single pagetable set is used
across all GPU cores.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
If an MMU is shared between multiple GPUs, all of them need to flush their
TLBs, so a single marker that gets reset on the first flush won't do.
Replace the flush marker with a sequence number, so that it's possible to
check if the TLB is in sync with the current page table state for each GPU.
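A sketch of the per-GPU comparison (field names illustrative): the
context carries a sequence number that is bumped on every map/unmap,
and each GPU keeps the last sequence number it has flushed for:

  static void etnaviv_gpu_sync_tlb(struct etnaviv_gpu *gpu,
                                   struct etnaviv_iommu_context *ctx)
  {
      unsigned int seq = READ_ONCE(ctx->flush_seq);

      if (gpu->flushed_seq != seq) {
          etnaviv_gpu_flush_tlb(gpu);   /* hypothetical flush helper */
          gpu->flushed_seq = seq;
      }
  }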
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
There is no need for each GPU to have its own cmdbuf suballocation
region. Only allocate a single one for the etnaviv virtual device
and share it across all GPUs.
As the suballoc space is now potentially shared by more hardware jobs
running in parallel, double its size to 512KB to avoid contention.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
This decouples creating the cmdbuf suballocator from mapping the region
into the GPU address space, allowing multiple address spaces to share a
single cmdbuf suballoc.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
Remember if the GPU has been successfully initialized. Only in that case
do we need to clean up various structures in the unbind path. If the
GPU hasn't been successfully initialized, all the cleanups should happen
in the failure paths of the init function.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
It is only written, and we don't infer any useful information from
it anymore. Remove it.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
The etnaviv_gpu header only needs to know about the pointer types, so
replace the includes with forward declarations and only include the headers
where needed.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
This is the only place in the driver that should have to deal with
the raw hardware fences. To avoid any further confusion, consolidate
the fence handling in this file and remove any traces of this from
the header files.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
There is no need to track the currently active fence. The GPU scheduler
keeps track of all the in-flight jobs.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
The documentation of drm_sched_job_init and drm_sched_entity_push_job has
been clarified. Both functions should be called under a shared lock, to
avoid jobs getting pushed into the scheduler queue in a different order
than their sched_fence seqnos, which will confuse checks that are looking
at the seqnos to infer information about completion order.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
When the hangcheck handler was replaced by the DRM scheduler timeout
handling we dropped the forward progress check, as this might allow
clients to hog the GPU for a long time with a big job.
It turns out that even reasonably well behaved clients like the
Armada Xorg driver occasionally trip over the 500ms timeout. Bring
back the forward progress check to get rid of the userspace regression.
We would still like to fix userspace to submit smaller batches
if possible, but that is for another day.
Cc: <stable@vger.kernel.org>
Fixes: 6d7a20c07760 ("drm/etnaviv: replace hangcheck with scheduler timeout")
Reported-by: Russell King <linux@armlinux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Eric Anholt <eric@anholt.net>
|
|
This replaces the repetitive GPL-2.0 license text in code and header files
with the SPDX tags. Generated hardware headers aren't changed, as any changes
there need to be done in the upstream rnndb repository.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
I'm not aware of any case where tracing GPU register manipulation at the
kernel level would have been useful. It only adds more indirection and
increases the code size.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
With the introduction of GPU security we have 3 different modes of
GPU operation:
- GPU core doesn't have security features -> no handling required
- the security related states are handled by the kernel driver
- the security related states are handled by a TrustZone application
Add an enum to differentiate between the different operation modes.
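A sketch of such an enum; the names are illustrative:

  enum etnaviv_sec_mode {
      ETNA_SEC_NONE = 0,  /* no security features on this core */
      ETNA_SEC_KERNEL,    /* security states handled by the kernel driver */
      ETNA_SEC_TZ,        /* security states handled by a TrustZone app */
  };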
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
New versions of the Vivante kernel driver don't trust the hardware feature
bits anymore, but use an internal hardware database. This also includes
more feature fields than are available in hardware.
As we can't trust the hardware feature bits to be correct anymore, we need
to replicate the HWDB in etnaviv. For now only the GC7000L as found on
the i.MX8M is supported.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Newer GPU cores added yet more feature bits. Make room for them and
let userspace query them.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
The slave interface clock is a clock input found on newer cores to gate
the register interface. For now we simply ungate it when the GPU is in
active state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
This replaces the etnaviv internal hangcheck logic with the job timeout
handling provided by the DRM scheduler. This simplifies the driver further
and allows replaying jobs after a GPU reset, so only minimal state is lost.
This introduces a user-visible change in that we no longer allow jobs to run
indefinitely as long as they make progress, as this introduces
quality-of-service issues when multiple processes are using the GPU.
Userspace is now responsible for flushing jobs in a way that they finish in
a reasonable time, where reasonable is currently defined as less than 500ms.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Move the fence dependency handling to the scheduler where it belongs.
Jobs with unsignaled dependencies just get to sit in the scheduler queue
without holding any locks.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
This hooks in the DRM GPU scheduler. No improvement yet, as all the
dependency handling is still done in etnaviv_gem_submit. This just
replaces the actual GPU submit by passing through the scheduler.
It also allows us to get rid of the retire worker, as retirement is now
driven by the scheduler.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
This moves away from using the internal seqno as the userspace fence
reference. By moving to a generic ID, we can later replace the internal
fence by something different than the etnaviv seqno fence.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
This means fewer dynamic allocations and slims down the cmdbuf object to
only the required information, as everything else is already available in
the submit object.
This also simplifies buffer and mapping lifetime management, as they
are now exclusively attached to the submit object and not additionally
to the cmdbuf.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
To make them available to the event worker even after the actual
command stream execution has finished.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
While the etnaviv workqueue needs to be ordered, as we rely on work items
being executed in queuing order, this is only true for a single GPU.
Having a shared workqueue for all GPUs in the system limits concurrency
artificially.
Giving each GPU its own ordered workqueue still meets our ordering
expectations and enables retire workers to run concurrently.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
There is no need to store this in the gpu struct. MMU flushes are triggered
correctly in reaction to MMU maps and unmaps, independent of the current ctx.
Any required pipe switches can be inferred from the current and the desired
GPU exec state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
With 'sync points' we can sample the requested performance signals
before and/or after the submitted command buffer.
Changes v2 -> v3:
- fixed indentation and init nr_events to 1
Changes v4 -> v5:
- simplify logic around fence handling.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
In order to support performance counters in a sane way we need to provide
a method to sync the GPU with the CPU. The GPU can process multiple command
buffers/events per irq. With the help of a 'sync point' we can trigger an
event and stop the GPU/FE immediately. When the CPU is done with its
processing it
simply needs to restart the FE and the GPU will process the command stream.
Changes from v1 -> v2:
- process sync point with a work item to keep irq as fast as possible
Changes from v4 -> v5:
- renamed pmrs_* to sync_point_*
- call event_free(..) in sync_point_worker(..)
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
This is prep work to be able to allocate multiple events in one go.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
GPU cores with the DYNAMIC_FREQUENCY_SCALING feature bit set expect the
platform to provide the clock scaling and ignore any requests to use the
internal FSCALE divider. Writes to this register still work, but don't
have any effect on the GPU clock frequency.
Save the initial core and shader clock frequency and ask the platform
to provide a slower clock when cooling is requested.
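A sketch of the resulting clock update, assuming illustrative field
names and an etnaviv_gpu_load_clock() helper for the legacy divider path:

  static void etnaviv_gpu_update_clock(struct etnaviv_gpu *gpu)
  {
      if (gpu->identity.minor_features2 &
          chipMinorFeatures2_DYNAMIC_FREQUENCY_SCALING) {
          /* the platform clock tree controls the frequency */
          clk_set_rate(gpu->clk_core,
                       gpu->base_rate_core >> gpu->freq_scale);
          clk_set_rate(gpu->clk_shader,
                       gpu->base_rate_shader >> gpu->freq_scale);
      } else {
          /* legacy path: program the internal FSCALE divider */
          etnaviv_gpu_load_clock(gpu, 1 << (6 - gpu->freq_scale));
      }
  }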
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Loosely based on commit f0a42bb5423a ("drm/msm: submit support for
in-fences"). Unfortunately, struct drm_etnaviv_gem_submit doesn't have
a flags field yet, so we have to extend the structure and trust that
drm_ioctl will clear the flags for us if an older userspace only submits
part of the struct.
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Gustavo Padovan <gustavo.padovan@collabora.com>
Reviewed-by: Sumit Semwal <sumit.semwal@linaro.org>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Each Vivante GPU contains a clock divider which can divide the GPU clock
by 2^n, which can lower the power dissipation from the GPU. It has been
suggested that the GC600 on Dove is responsible for 20-30% of the power
dissipation from the SoC, so lowering the GPU clock rate provides a way
to throttle the power dissipation, and reduce the temperature when the
SoC gets hot.
This patch hooks the etnaviv driver into the kernel's thermal management
to allow the GPUs to be throttled when necessary, allowing a reduction in
GPU clock rate from /1 to /64 in power-of-2 steps.
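A sketch of the cooling-device glue, reusing an update helper like the
one sketched for the frequency scaling commit above (names illustrative);
cooling state n selects a /2^n divider, so states 0..6 cover /1 to /64:

  static int etnaviv_cooling_set_cur_state(struct thermal_cooling_device *cdev,
                                           unsigned long state)
  {
      struct etnaviv_gpu *gpu = cdev->devdata;

      gpu->freq_scale = state;    /* 0 = /1 ... 6 = /64 */
      etnaviv_gpu_update_clock(gpu);
      return 0;
  }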
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|