| author | Oliver Upton <oliver.upton@linux.dev> | 2025-07-14 23:25:07 -0700 |
|---|---|---|
| committer | Oliver Upton <oliver.upton@linux.dev> | 2025-07-15 20:12:03 -0700 |
| commit | efa1368ba9f4b6e081c0fdd73245b0ba6ef75bda | |
| tree | 6edaaa675ee18b750089111e12ad10779b854b2f | |
| parent | f9e4e0a663d239f944649c5201879c7471615dd0 | |
KVM: arm64: Commit exceptions from KVM_SET_VCPU_EVENTS immediately
syzkaller has found that it can trip a warning in KVM's exception
emulation infrastructure by repeatedly injecting exceptions into the
guest.
While it's unlikely that a reasonable VMM will do this, further
investigation of the issue reveals that KVM can potentially discard the
"pending" SEA state. While the handling of KVM_GET_VCPU_EVENTS presumes
that userspace-injected SEAs are realized immediately, in reality the
emulated exception entry is deferred until the next call to KVM_RUN.
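To make the deferral concrete, here is a minimal userspace sketch (not taken from the syzkaller reproducer) of back-to-back injections through the standard vcpu ioctls. It assumes a fully initialised vcpu file descriptor and a kernel exposing KVM_CAP_ARM_INJECT_EXT_DABT:

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Illustration only: inject an external data abort (SEA) twice via
 * KVM_SET_VCPU_EVENTS without an intervening KVM_RUN. Prior to this
 * patch the first injection only *pends* the exception until the next
 * KVM_RUN, so the second call can operate on stale pending state.
 *
 * vcpu_fd is assumed to be a fully initialised vcpu file descriptor.
 */
static int inject_sea_twice(int vcpu_fd)
{
	struct kvm_vcpu_events events;

	memset(&events, 0, sizeof(events));
	events.exception.ext_dabt_pending = 1;

	if (ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events))
		return -1;

	/* Second injection, still without running the vcpu. */
	if (ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events))
		return -1;

	return 0;
}
```

With the old behaviour, the first ioctl merely queued a pending exception to be realized on the next KVM_RUN, so the second ioctl could discard that still-pending SEA state.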
Hack-a-fix the immediate issues by committing the pending exceptions to
the vCPU's architectural state immediately in KVM_SET_VCPU_EVENTS. This
is no different to the way KVM-injected exceptions are handled in
KVM_RUN where we potentially call __kvm_adjust_pc() before returning to
userspace.
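For comparison, the KVM_RUN exit path already contains logic along these lines (paraphrased from kvm_arch_vcpu_ioctl_run() in arch/arm64/kvm/arm.c; treat the exact flag checks and placement as an approximation rather than a verbatim quote):

```c
	/*
	 * Approximation of the existing KVM_RUN behaviour: before returning
	 * to userspace, commit any pending exception entry or PC increment
	 * so userspace observes consistent architectural state.
	 */
	if (unlikely(vcpu_get_flag(vcpu, PENDING_EXCEPTION) ||
		     vcpu_get_flag(vcpu, INCREMENT_PC)))
		kvm_call_hyp(__kvm_adjust_pc, vcpu);
```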
Reported-by: syzbot+4e09b1432de3774b86ae@syzkaller.appspotmail.com
Reported-by: syzbot+1f6f096afda6f4f8f565@syzkaller.appspotmail.com
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
| -rw-r--r-- | arch/arm64/kvm/guest.c | 28 |
1 file changed, 27 insertions(+), 1 deletion(-)
```diff
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index e2702718d56d..16ba5e9ac86c 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -834,6 +834,19 @@ int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static void commit_pending_events(struct kvm_vcpu *vcpu)
+{
+	if (!vcpu_get_flag(vcpu, PENDING_EXCEPTION))
+		return;
+
+	/*
+	 * Reset the MMIO emulation state to avoid stepping PC after emulating
+	 * the exception entry.
+	 */
+	vcpu->mmio_needed = false;
+	kvm_call_hyp(__kvm_adjust_pc, vcpu);
+}
+
 int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events)
 {
@@ -843,8 +856,15 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 	u64 esr = events->exception.serror_esr;
 	int ret = 0;
 
-	if (ext_dabt_pending)
+	/*
+	 * Immediately commit the pending SEA to the vCPU's architectural
+	 * state which is necessary since we do not return a pending SEA
+	 * to userspace via KVM_GET_VCPU_EVENTS.
+	 */
+	if (ext_dabt_pending) {
 		ret = kvm_inject_sea_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+		commit_pending_events(vcpu);
+	}
 
 	if (ret < 0)
 		return ret;
@@ -863,6 +883,12 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 	else
 		ret = kvm_inject_serror(vcpu);
 
+	/*
+	 * We could've decided that the SError is due for immediate software
+	 * injection; commit the exception in case userspace decides it wants
+	 * to inject more exceptions for some strange reason.
+	 */
+	commit_pending_events(vcpu);
 	return (ret < 0) ? ret : 0;
 }
```
