|
KVM does not support AArch32 EL0 on asymmetric systems. To that end,
prevent userspace from configuring a vCPU in such a state through
setting PSTATE.
It is already ABI that KVM rejects such a write on a system where
AArch32 EL0 is unsupported. Though the kernel's definition of a 32bit
system changed in commit 2122a833316f ("arm64: Allow mismatched
32-bit EL0 support"), KVM's did not.
Fixes: 2122a833316f ("arm64: Allow mismatched 32-bit EL0 support")
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220816192554.1455559-3-oliver.upton@linux.dev
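A minimal sketch of the resulting check (validate_user_pstate() is a made-up name, and kvm_supports_32bit_el0() is assumed to be the asymmetry-aware helper this series introduces):

    /* Reject a userspace PSTATE requesting AArch32 EL0 when some
     * CPUs in the system lack 32-bit EL0 support. */
    static int validate_user_pstate(u64 pstate)
    {
            if ((pstate & PSR_MODE32_BIT) && !kvm_supports_32bit_el0())
                    return -EINVAL;
            return 0;
    }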
|
KVM does not support AArch32 on asymmetric systems. To that end, enforce
AArch64-only behavior on PMCR_EL1.LC when on an asymmetric system.
Fixes: 2122a833316f ("arm64: Allow mismatched 32-bit EL0 support")
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220816192554.1455559-2-oliver.upton@linux.dev
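Roughly, the enforcement amounts to treating the bit as RES1 (a sketch; sanitise_pmcr() is a hypothetical name, and kvm_supports_32bit_el0() is assumed to come from this series):

    static u64 sanitise_pmcr(u64 val)
    {
            /* PMCR_EL1.LC is RES1 when AArch32 EL0 is never offered */
            if (!kvm_supports_32bit_el0())
                    val |= ARMV8_PMU_PMCR_LC;
            return val;
    }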
|
Use GENMASK() to generate the masks of device type and device id,
fixing compilation errors due to the sign extension when using
older versions of GCC (such as 7.5):
  In function ‘kvm_vm_ioctl_set_device_addr.isra.38’,
      inlined from ‘kvm_arch_vm_ioctl’ at arch/arm64/kvm/arm.c:1454:10:
  ././include/linux/compiler_types.h:354:38: error: call to ‘__compiletime_assert_599’ \
    declared with attribute error: FIELD_GET: mask is not constant
    _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
Fixes: 9f968c9266aa ("KVM: arm64: vgic-v2: Add helper for legacy dist/cpuif base address setting")
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
[maz: tidy up commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220810013435.1525363-1-yangyingliang@huawei.com
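The pattern of the fix, abridged (the actual mask definitions live in the KVM UAPI header):

    /* Before: a shifted int literal sign-extends, and older GCC may
     * no longer see the result as a compile-time constant. */
    #define KVM_ARM_DEVICE_TYPE_MASK  (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)

    /* After: GENMASK() yields a clean unsigned constant. */
    #define KVM_ARM_DEVICE_TYPE_MASK  GENMASK(KVM_ARM_DEVICE_TYPE_SHIFT + 15, \
                                              KVM_ARM_DEVICE_TYPE_SHIFT)

    type = FIELD_GET(KVM_ARM_DEVICE_TYPE_MASK, dev_addr->id);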
|
* kvm-arm64/nvhe-stacktrace: (27 commits)
: .
: Add an overflow stack to the nVHE EL2 code, allowing
: the implementation of an unwinder, courtesy of
: Kalesh Singh. From the cover letter (slightly edited):
:
: "nVHE has two modes of operation: protected (pKVM) and unprotected
: (conventional nVHE). Depending on the mode, a slightly different approach
: is used to dump the hypervisor stacktrace but the core unwinding logic
: remains the same.
:
: * Protected nVHE (pKVM) stacktraces:
:
: In protected nVHE mode, the host cannot directly access hypervisor memory.
:
: The hypervisor stack unwinding happens in EL2 and is made accessible to
: the host via a shared buffer. Symbolizing and printing the stacktrace
: addresses is delegated to the host and happens in EL1.
:
: * Non-protected (Conventional) nVHE stacktraces:
:
: In non-protected mode, the host is able to directly access the hypervisor
: stack pages.
:
: The hypervisor stack unwinding and dumping of the stacktrace is performed
: by the host in EL1, as this avoids the memory overhead of setting up
: shared buffers between the host and hypervisor."
:
: Additional patches from Oliver Upton and Marc Zyngier, tidying up
: the initial series.
: .
arm64: Update 'unwinder howto'
KVM: arm64: Don't open code ARRAY_SIZE()
KVM: arm64: Move nVHE-only helpers into kvm/stacktrace.c
KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions
KVM: arm64: Move nVHE stacktrace unwinding into its own compilation unit
KVM: arm64: Move PROTECTED_NVHE_STACKTRACE around
KVM: arm64: Introduce pkvm_dump_backtrace()
KVM: arm64: Implement protected nVHE hyp stack unwinder
KVM: arm64: Save protected-nVHE (pKVM) hyp stacktrace
KVM: arm64: Stub implementation of pKVM HYP stack unwinder
KVM: arm64: Allocate shared pKVM hyp stacktrace buffers
KVM: arm64: Add PROTECTED_NVHE_STACKTRACE Kconfig
KVM: arm64: Introduce hyp_dump_backtrace()
KVM: arm64: Implement non-protected nVHE hyp stack unwinder
KVM: arm64: Prepare non-protected nVHE hypervisor stacktrace
KVM: arm64: Stub implementation of non-protected nVHE HYP stack unwinder
KVM: arm64: On stack overflow switch to hyp overflow_stack
arm64: stacktrace: Add description of stacktrace/common.h
arm64: stacktrace: Factor out common unwind()
arm64: stacktrace: Handle frame pointer from different address spaces
...
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
Implementing a new unwinder is a bit more involved than writing
a couple of helpers, so let's not lure the reader into a false
sense of comfort. Instead, let's point out what they should
call into, and what sort of parameter they need to provide.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-7-maz@kernel.org
|
Use ARRAY_SIZE() instead of an open-coded version.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Link: https://lore.kernel.org/r/20220727142906.1856759-6-maz@kernel.org
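For reference, the idiom (buf is a hypothetical array):

    unsigned long buf[16];
    int i;

    /* ARRAY_SIZE() computes the element count and, unlike an open-coded
     * sizeof(buf) / sizeof(buf[0]), fails to build if handed a pointer. */
    for (i = 0; i < ARRAY_SIZE(buf); i++)
            buf[i] = 0;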
|
kvm_nvhe_stack_kern_va() only makes sense as part of the nVHE
unwinder, so simply move it there.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-5-maz@kernel.org
|
Having multiple versions of on_accessible_stack() (one per unwinder)
makes it very hard to reason about what is used where due to the
complexity of the various includes, the forward declarations, and
the reliance on everything being 'inline'.
Instead, move the code back where it should be. Each unwinder
implements:
- on_accessible_stack() as well as the helpers it depends on,
- unwind()/unwind_next(), as they pass on_accessible_stack as
a parameter to unwind_next_common() (which is the only common
code here)
This hardly results in any duplication, and makes it much
easier to reason about the code.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-4-maz@kernel.org
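The resulting shape, sketched (signatures as used by this series; details may differ):

    /* Each unwinder keeps its own on_accessible_stack() and a thin
     * unwind_next() that hands it to the single shared helper. */
    static int unwind_next(struct unwind_state *state)
    {
            struct stack_info info;

            return unwind_next_common(state, &info, on_accessible_stack,
                                      NULL);    /* no FP translation here */
    }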
|
The unwinding code doesn't really belong to the exit handling
code. Instead, move it to a file (conveniently named stacktrace.c
to confuse the reviewer), and move all the stacktrace-related
stuff there.
It will be joined by more code very soon.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-3-maz@kernel.org
|
Make the dependency with EL2_DEBUG more obvious by moving the
stacktrace configuration *after* it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-2-maz@kernel.org
|
Dumps the pKVM hypervisor backtrace from EL1 by reading the unwound
addresses from the shared stacktrace buffer.
The nVHE hyp backtrace is dumped on hyp_panic(), before panicking the
host.
[ 111.623091] kvm [367]: nVHE call trace:
[ 111.623215] kvm [367]: [<ffff8000090a6570>] __kvm_nvhe_hyp_panic+0xac/0xf8
[ 111.623448] kvm [367]: [<ffff8000090a65cc>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
[ 111.623642] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
. . .
[ 111.640366] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 111.640467] kvm [367]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 111.640574] kvm [367]: [<ffff8000090a5de4>] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
[ 111.640676] kvm [367]: [<ffff8000090a8b64>] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
[ 111.640778] kvm [367]: [<ffff8000090a88b8>] __kvm_nvhe_handle_trap+0xc4/0x128
[ 111.640880] kvm [367]: [<ffff8000090a7864>] __kvm_nvhe___host_exit+0x64/0x64
[ 111.640996] kvm [367]: ---[ end nVHE call trace ]---
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-18-kaleshsingh@google.com
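In outline, the host-side dump looks like this (a sketch with hypothetical names; how the hyp-to-kernel offset is derived is glossed over here):

    static void dump_pkvm_backtrace(unsigned long *entries,
                                    unsigned long hyp_offset)
    {
            kvm_err("nVHE call trace:\n");
            for (; *entries; entries++) {   /* a zero entry ends the trace */
                    unsigned long va = *entries + hyp_offset;

                    kvm_err(" [<%016lx>] %pB\n", va, (void *)va);
            }
            kvm_err("---[ end nVHE call trace ]---\n");
    }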
|
Implements the common framework necessary for unwind() to work in
the protected nVHE context:
- on_accessible_stack()
- on_overflow_stack()
- unwind_next()
Protected nVHE unwind() is used to unwind and save the hyp stack
addresses to the shared stacktrace buffer. The host reads the
entries in this buffer, symbolizes and dumps the stacktrace (later
patch in the series).
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-17-kaleshsingh@google.com
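One of the three hooks, sketched against the common helpers introduced earlier in the series (exact plumbing may differ):

    static bool on_overflow_stack(unsigned long sp, unsigned long size,
                                  struct stack_info *info)
    {
            unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
            unsigned long high = low + OVERFLOW_STACK_SIZE;

            return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
    }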
|
In protected nVHE mode, the host cannot access privately owned
hypervisor memory. Also, the hypervisor aims to remain simple to
reduce the attack surface, and does not provide any printk support.
For the above reasons, the approach taken to provide hypervisor stacktraces
in protected mode is:
1) Unwind and save the hyp stack addresses in EL2 to a shared buffer
with the host (done in this patch).
2) Delegate the dumping and symbolization of the addresses to the
host in EL1 (later patch in the series).
On hyp_panic(), the hypervisor prepares the stacktrace before returning to
the host.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-16-kaleshsingh@google.com
|
Add stub implementations of the protected nVHE stack unwinder so that
the code keeps building. These are implemented later in this series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-15-kaleshsingh@google.com
|
In protected nVHE mode the host cannot directly access
hypervisor memory, so we will dump the hypervisor stacktrace
to a shared buffer with the host.
The minimum size for the buffer required, assuming the min frame
size of [x29, x30] (2 * sizeof(long)), is half the combined size of
the hypervisor and overflow stacks plus an additional entry to
delimit the end of the stacktrace.
The stacktrace buffers are used later in the series to dump the
nVHE hypervisor stacktrace when running in protected mode.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-14-kaleshsingh@google.com
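Concretely: a frame record costs at least 16 bytes of stack (x29 + x30) and produces one 8-byte entry, so a stack of S bytes unwinds to at most S/16 entries, i.e. S/2 bytes. Assuming PAGE_SIZE-sized hyp stacks, the sizing comes out as something like:

    #define NVHE_STACKTRACE_SIZE \
            ((PAGE_SIZE + OVERFLOW_STACK_SIZE) / 2 + sizeof(long))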
|
This can be used to disable stacktraces for the protected KVM
nVHE hypervisor, in order to save on the associated memory usage.
This option is disabled by default, since protected KVM is not widely
used on platforms other than Android currently.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-13-kaleshsingh@google.com
|
In non-protected nVHE mode, the host unwinds and dumps the hypervisor
backtrace from EL1. This is possible because the host can directly
access the hypervisor stack pages in non-protected mode.
The nVHE backtrace is dumped on hyp_panic(), before panicking the host.
[ 101.498183] kvm [377]: nVHE call trace:
[ 101.498363] kvm [377]: [<ffff8000090a6570>] __kvm_nvhe_hyp_panic+0xac/0xf8
[ 101.499045] kvm [377]: [<ffff8000090a65cc>] __kvm_nvhe_hyp_panic_bad_stack+0x10/0x10
[ 101.499498] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
. . .
[ 101.524929] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 101.525062] kvm [377]: [<ffff8000090a61e4>] __kvm_nvhe_recursive_death+0x24/0x34
[ 101.525195] kvm [377]: [<ffff8000090a5de4>] __kvm_nvhe___kvm_vcpu_run+0x30/0x40c
[ 101.525333] kvm [377]: [<ffff8000090a8b64>] __kvm_nvhe_handle___kvm_vcpu_run+0x30/0x48
[ 101.525468] kvm [377]: [<ffff8000090a88b8>] __kvm_nvhe_handle_trap+0xc4/0x128
[ 101.525602] kvm [377]: [<ffff8000090a7864>] __kvm_nvhe___host_exit+0x64/0x64
[ 101.525745] kvm [377]: ---[ end nVHE call trace ]---
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-12-kaleshsingh@google.com
|
Implements the common framework necessary for unwind() to work
for non-protected nVHE mode:
- on_accessible_stack()
- on_overflow_stack()
- unwind_next()
Non-protected nVHE unwind() is used by the host in EL1 to unwind and
dump the hypervisor stacktrace.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-11-kaleshsingh@google.com
|
In non-protected nVHE mode (non-pKVM) the host can directly access
hypervisor memory, and unwinding of the hypervisor stacktrace is
done from EL1 to avoid the memory overhead of shared buffers.
To unwind the hypervisor stack from EL1 the host needs to know the
starting point for the unwind and information that will allow it to
translate hypervisor stack addresses to the corresponding kernel
addresses. This patch sets up this bookkeeping; it is put to use
later in the series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-10-kaleshsingh@google.com
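The shared state boils down to a small per-CPU structure along these lines (field names approximate):

    struct kvm_nvhe_stacktrace_info {
            unsigned long stack_base;           /* kernel VA of the hyp stack */
            unsigned long overflow_stack_base;  /* kernel VA of the overflow stack */
            unsigned long fp;                   /* where the unwind starts */
            unsigned long pc;
    };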
|
Add stub implementations of the non-protected nVHE stack unwinder so
that the code keeps building. These are implemented later in this series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-9-kaleshsingh@google.com
|
On hyp stack overflow, switch to a 16-byte aligned secondary stack.
This gives us stack space to better handle overflows, and is
used in a subsequent patch to dump the hypervisor stacktrace.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-8-kaleshsingh@google.com
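The spill area itself is just a per-CPU, 16-byte aligned buffer on the EL2 side (a sketch, reusing the existing OVERFLOW_STACK_SIZE constant):

    DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)],
                   overflow_stack) __aligned(16);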
|
Add a brief description of how to use stacktrace/common.h to implement
a stack unwinder.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-7-kaleshsingh@google.com
|
Move unwind() to stacktrace/common.h, and as a result
the kernel unwind_next() to asm/stacktrace.h. This allows
reusing unwind() in the implementation of the nVHE HYP
stack unwinder, later in the series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-6-kaleshsingh@google.com
|
The unwinder code is made reusable so that it can be used to
unwind various types of stacks. One use case is unwinding the
nVHE hyp stack from the host (EL1) in non-protected mode. This
means that the unwinder must be able to translate HYP stack
addresses to kernel addresses.
Add a callback (stack_trace_translate_fp_fn) to allow specifying
the translation function.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-5-kaleshsingh@google.com
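The callback's shape: take a frame pointer plus the type of stack it sits on, and rewrite it in place to a kernel address (per this patch; the exact signature may differ):

    /* Returns false if the frame pointer cannot be translated. */
    typedef bool (*stack_trace_translate_fp_fn)(unsigned long *fp,
                                                enum stack_type type);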
|
Move common unwind_next logic to stacktrace/common.h. This allows
reusing the code in the implementation of the nVHE hypervisor stack
unwinder, later in this series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-4-kaleshsingh@google.com
|
Move common on_accessible_stack checks to stacktrace/common.h. This is
used in the implementation of the nVHE hypervisor unwinder later in
this series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-3-kaleshsingh@google.com
|
In order to reuse the arm64 stack unwinding logic for the nVHE
hypervisor stack, move the common code to a shared header
(arch/arm64/include/asm/stacktrace/common.h).
The nVHE hypervisor cannot safely link against kernel code, so we
make use of the shared header to avoid duplicated logic later in
this series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-2-kaleshsingh@google.com
|
* kvm-arm64/sysreg-cleanup-5.20:
: .
: Long overdue cleanup of the sysreg userspace access,
: with extra scrubbing on the vgic side of things.
: From the cover letter:
:
: "Schspa Shi recently reported[1] that some of the vgic code interacting
: with userspace was reading uninitialised stack memory, and although
: that read wasn't used any further, it prompted me to revisit this part
: of the code.
:
: Needless to say, this area of the kernel is pretty crufty, and shows a
: bunch of issues in other parts of the KVM/arm64 infrastructure. This
: series tries to remedy a bunch of them:
:
: - Sanitise the way we deal with sysregs from userspace: at the moment,
: each and every .set_user/.get_user callback has to implement its own
: userspace accesses (directly or indirectly). It'd be much better if
: that was centralised so that we can reason about it.
:
: - Enforce that all AArch64 sysregs are 64bit. Always. This was sort of
: implied by the code, but it took some effort to convince myself that
: this was actually the case.
:
: - Move the vgic-v3 sysreg userspace accessors to the userspace
: callbacks instead of hijacking the vcpu trap callback. This allows
: us to reuse the sysreg infrastructure.
:
: - Consolidate userspace accesses for GICv2, GICv3 and common code
: as much as possible.
:
: - Cleanup a bunch of not-very-useful helpers, tidy up some of the code
: as we touch it.
:
: [1] https://lore.kernel.org/r/m2h740zz1i.fsf@gmail.com"
: .
KVM: arm64: Get rid of outdated comments
KVM: arm64: Descope kvm_arm_sys_reg_{get,set}_reg()
KVM: arm64: Get rid of find_reg_by_id()
KVM: arm64: vgic: Tidy-up calls to vgic_{get,set}_common_attr()
KVM: arm64: vgic: Consolidate userspace access for base address setting
KVM: arm64: vgic-v2: Add helper for legacy dist/cpuif base address setting
KVM: arm64: vgic: Use {get,put}_user() instead of copy_{from,to}_user
KVM: arm64: vgic-v2: Consolidate userspace access for MMIO registers
KVM: arm64: vgic-v3: Consolidate userspace access for MMIO registers
KVM: arm64: vgic-v3: Use u32 to manage the line level from userspace
KVM: arm64: vgic-v3: Convert userspace accessors over to FIELD_GET/FIELD_PREP
KVM: arm64: vgic-v3: Make the userspace accessors use sysreg API
KVM: arm64: vgic-v3: Push user access into vgic_v3_cpu_sysregs_uaccess()
KVM: arm64: vgic-v3: Simplify vgic_v3_has_cpu_sysregs_attr()
KVM: arm64: Get rid of reg_from/to_user()
KVM: arm64: Consolidate sysreg userspace accesses
KVM: arm64: Rely on index_to_param() for size checks on userspace access
KVM: arm64: Introduce generic get_user/set_user helpers for system registers
KVM: arm64: Reorder handling of invariant sysregs from userspace
KVM: arm64: Add get_reg_by_id() as a sys_reg_desc retrieving helper
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
Once upon a time, the 32bit KVM/arm port was the reference, while
the arm64 version was the new kid on the block, without a clear
future... This was a long time ago.
"The times, they are a-changing."
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
Having kvm_arm_sys_reg_get_reg and co in kvm_host.h gives the
impression that these functions are free to be called from
anywhere.
Not quite. They really are tied to our internal sysreg handling,
and they would be better off in the sys_regs.h header, which is
private. kvm_host.h could also get a bit of a diet, so let's
just do that.
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
This helper doesn't have a user anymore, let's get rid of it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
The userspace accessors have an early call to vgic_{get,set}_common_attr()
that makes the code hard to follow. Move it to the default: clause of
the decoding switch statement, which results in a nice cleanup.
This requires us to move the handling of the pending table into the
common handling, even if it is strictly a GICv3 feature (it has the
benefit of keeping the whole control group handling in the same
function).
Also cleanup vgic_v3_{get,set}_attr() while we're at it, deduplicating
the calls to vgic_v3_attr_regs_access().
Suggested-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
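Sketched, the resulting decoding has the common handling as the fallback (abridged):

    static int vgic_v3_set_attr(struct kvm_device *dev,
                                struct kvm_device_attr *attr)
    {
            switch (attr->group) {
            /* groups with GICv3-specific handling elided here */
            default:
                    return vgic_set_common_attr(dev, attr);
            }
    }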
|
Align kvm_vgic_addr() with the rest of the code by moving the
userspace accesses into it. kvm_vgic_addr() is also made static.
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
We carry a legacy interface to set the base addresses for GICv2.
As this is currently plumbed into the same handling code as
the modern interface, it limits the evolution we can make there.
Add a helper dedicated to this handling, with a view of maybe
removing this in the future.
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
Tidy-up vgic_get_common_attr() and vgic_set_common_attr() to use
{get,put}_user() instead of the more complex (and less type-safe)
copy_{from,to}_user().
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
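The substitution in a nutshell: for a single scalar, get_user() is shorter and carries the access size in the pointer type (fragment for illustration):

    u32 __user *uaddr = (u32 __user *)(long)attr->addr;
    u32 val;

    /* was: if (copy_from_user(&val, uaddr, sizeof(val))) return -EFAULT; */
    if (get_user(val, uaddr))
            return -EFAULT;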
|
Align the GICv2 MMIO accesses from userspace with the way the GICv3
code is now structured.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
For userspace accesses to GICv3 MMIO registers (and related data),
vgic_v3_{get,set}_attr are littered with {get,put}_user() calls,
making it hard to audit and reason about.
Consolidate all userspace accesses in vgic_v3_attr_regs_access(),
making the code far simpler to audit.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
Despite the userspace ABI clearly defining the bits dealt with by
KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO as a __u32, the kernel uses a u64.
Use a u32 to match the userspace ABI, which will subsequently lead
to some simplifications.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
The GICv3 userspace accessors are all about dealing with conversion
between fields from architectural registers and internal representations.
However, and owing to the age of this code, the accessors use
a combination of shift/mask that is hard to read. It is nonetheless
easy to make it better by using the FIELD_{GET,PREP} macros that solely
rely on a mask.
This results in somewhat nicer looking code, and is probably easier
to maintain.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
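The transformation, on a made-up field (the macros come from linux/bitfield.h):

    #define ICH_SOME_FIELD_MASK     GENMASK(7, 4)   /* hypothetical */

    /* was: val = (reg >> 4) & 0xf;  and:  reg |= (val & 0xf) << 4; */
    val = FIELD_GET(ICH_SOME_FIELD_MASK, reg);
    reg |= FIELD_PREP(ICH_SOME_FIELD_MASK, val);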
|
The vgic-v3 sysreg accessors have been ignored as the rest of the
sysreg internal API was evolving, and are stuck with the .access
method (which is normally reserved to the guest's own access)
for the userspace accesses (which should use the .set/.get_user()
methods).
Catch up with the program and repaint all the accessors so that
they fit into the normal userspace model, and plug the result into
the helpers that have been introduced earlier.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
In order to start making the vgic sysreg access from userspace
similar to all the other sysregs, push the userspace memory
access one level down into vgic_v3_cpu_sysregs_uaccess().
The next step will be to rely on the sysreg infrastructure
to perform this task.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
Finding out whether a sysreg exists has little to do with that
register being accessed, so drop the is_write parameter.
Also, the reg pointer is completely unused, and we're better off
just passing the attr pointer to the function.
This results in a small cleanup of the calling site, with a new
helper converting the vGIC view of a sysreg into the canonical
one (this is purely cosmetic, as the encoding is the same).
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
These helpers are only used by the invariant stuff now, and while
they pretend to support non-64bit registers, this only serves as
a way to scare the casual reviewer...
Replace these helpers with our good friends get/put_user(), and
don't look back.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
Until now, the .set_user and .get_user callbacks have to implement
(directly or not) the userspace memory accesses. Although this gives
us maximum flexibility, this is also a maintenance burden, making it
hard to audit, and I'd feel much better if it was all located in
a single place.
So let's do just that, simplifying most of the function signatures
in the process (the callbacks are now only concerned with the
data itself, and not with userspace).
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
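After the change, the callbacks deal purely in values, roughly:

    /* In struct sys_reg_desc: the core code performs the userspace
     * I/O; the callbacks only produce/consume the 64-bit value. */
    int (*get_user)(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
                    u64 *val);
    int (*set_user)(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
                    u64 val);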
|
index_to_param() already checks that we use 64bit accesses for all
registers accessed from userspace.
However, we have extra checks in other places (such as index_to_params),
which is pretty confusing. Get rid of these redundant checks.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
The userspace access to the system registers is done using helpers
that hardcode the table that is looked up. Extract some generic
helpers from this, moving the handling of hidden sysregs into
the core code.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
In order to allow some further refactoring of the sysreg helpers,
move the handling of invariant sysregs to occur before we handle
all the other ones.
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
find_reg_by_id() requires a sys_reg_param as input, which most
users provide as an on-stack variable, but don't make any use of
the result.
Provide a helper that doesn't have this requirement and simplify
the callers (all but one).
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
* kvm-arm64/misc-5.20:
: .
: Misc fixes for 5.20:
:
: - Tidy up the hyp/nvhe Makefile
:
: - Fix functions pointlessly returning a void value
:
: - Fix vgic_init selftest to handle the GICv2-on-v3 case
:
: - Fix hypervisor symbolisation when CONFIG_RANDOMIZE_BASE=y
: .
KVM: arm64: Fix hypervisor address symbolization
KVM: arm64: selftests: Add support for GICv2 on v3
KVM: arm64: Don't return from void function
KVM: arm64: nvhe: Add intermediates to 'targets' instead of extra-y
KVM: arm64: nvhe: Rename confusing obj-y
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
With CONFIG_RANDOMIZE_BASE=y vmlinux addresses will resolve incorrectly
from kallsyms. Fix this by adding the KASLR offset before printing the
symbols.
Fixes: 6ccf9cb557bd ("KVM: arm64: Symbolize the nVHE HYP addresses")
Reported-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220715235824.2549012-1-kaleshsingh@google.com
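The gist of the fix, sketched (va standing for an already-translated nVHE address, as in the backtrace dumping code):

    /* was: kvm_err(" [<%016lx>] %pB\n", va, (void *)va); */
    kvm_err(" [<%016lx>] %pB\n", va, (void *)(va + kaslr_offset()));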
|