Commit 34b13ff6 authored by Cédric Le Goater, committed by Greg Kroah-Hartman

KVM: PPC: Book3S HV: XIVE: Free escalation interrupts before disabling the VP

[ Upstream commit 237aed48 ]

When a vCPU is brought down, the XIVE VP (Virtual Processor) is first
disabled and then the event notification queues are freed. When freeing
the queues, we also check for possible escalation interrupts and free
them.

But when a XIVE VP is disabled, the underlying XIVE ENDs are also
disabled in OPAL. When an END (Event Notification Descriptor) is
disabled, its ESB pages (ESn and ESe) are disabled and loads return all
1s. This means that any access to the ESB page of the escalation
interrupt returns invalid values.
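
For reference, an ESB management load normally returns the PQ state of
the interrupt in its low two bits, while a disabled page returns ~0,
which is outside that range. A minimal sketch of the interpretation
(XIVE_ESB_VAL_P/Q are the masks from arch/powerpc/include/asm/xive-regs.h;
esb_load_p_bit() is only an illustrative helper):

  /* An ESB management load returns the PQ bits in its low two bits. */
  #define XIVE_ESB_VAL_P	0x2	/* event pending/presented */
  #define XIVE_ESB_VAL_Q	0x1	/* event queued */

  static bool esb_load_p_bit(u64 val)
  {
  	/*
  	 * A live ESB page returns a value in 0..3.  A disabled page
  	 * returns all 1s, so both P and Q read back as set even though
  	 * no event is pending or queued.
  	 */
  	return !!(val & XIVE_ESB_VAL_P);
  }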

When an interrupt is freed, the shutdown handler computes a 'saved_p'
field from the value returned by a load in xive_do_source_set_mask().
This value is incorrect for escalation interrupts for the reason
described above.
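
A simplified sketch of that mask path, modelled on
xive_do_source_set_mask() in arch/powerpc/sysdev/xive/common.c (trimmed
to the branches of interest, not the verbatim upstream code):

  static void xive_do_source_set_mask(struct xive_irq_data *xd, bool mask)
  {
  	u64 val;

  	if (mask) {
  		/* Set PQ to 01 (masked) and fetch the previous PQ state */
  		val = xive_esb_read(xd, XIVE_ESB_SET_PQ_01);
  		/*
  		 * If the END backing an escalation was already disabled,
  		 * this load returned all 1s, so the P bit looks set and
  		 * saved_p is recorded as true although nothing is pending.
  		 */
  		xd->saved_p = !!(val & XIVE_ESB_VAL_P);
  	} else if (xd->saved_p) {
  		/* Restore a saved pending interrupt: PQ = 10 */
  		xive_esb_read(xd, XIVE_ESB_SET_PQ_10);
  		xd->saved_p = false;
  	} else {
  		/* Nothing saved, just unmask: PQ = 00 */
  		xive_esb_read(xd, XIVE_ESB_SET_PQ_00);
  	}
  }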

This has no impact on Linux/KVM today because we don't make use of it,
but future changes will introduce a xive_get_irqchip_state() handler.
That handler will use the 'saved_p' field to report the state of an
interrupt, and with 'saved_p' being incorrect, a softlockup will occur.
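
To illustrate the failure mode, here is a rough sketch of such a
handler (a hypothetical outline, not the code that will eventually be
merged):

  static int xive_get_irqchip_state(struct irq_data *data,
  				  enum irqchip_irq_state which, bool *state)
  {
  	struct xive_irq_data *xd = irq_data_get_irq_handler_data(data);

  	switch (which) {
  	case IRQCHIP_STATE_ACTIVE:
  		/* Active as long as saved_p or the ESB P bit is set */
  		*state = xd->saved_p ||
  			 !!(xive_esb_read(xd, XIVE_ESB_GET) & XIVE_ESB_VAL_P);
  		return 0;
  	default:
  		return -EINVAL;
  	}
  }

Callers such as __synchronize_hardirq() poll IRQCHIP_STATE_ACTIVE until
it drops; with 'saved_p' wrongly stuck at true the loop never exits and
the CPU soft locks up.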

Fix the vCPU cleanup sequence by first freeing the escalation
interrupts, if any, then disabling the XIVE VP, and finally freeing the
queues.

Fixes: 90c73795 ("KVM: PPC: Book3S HV: Add a new KVM device for the XIVE native exploitation mode")
Fixes: 5af50993 ("KVM: PPC: Book3S HV: Native usage of the XIVE interrupt controller")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190806172538.5087-1-clg@kaod.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 1b155b4f
...
@@ -1037,20 +1037,22 @@ void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
 	/* Mask the VP IPI */
 	xive_vm_esb_load(&xc->vp_ipi_data, XIVE_ESB_SET_PQ_01);
 
-	/* Disable the VP */
-	xive_native_disable_vp(xc->vp_id);
-
-	/* Free the queues & associated interrupts */
+	/* Free escalations */
 	for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
-		struct xive_q *q = &xc->queues[i];
-
-		/* Free the escalation irq */
 		if (xc->esc_virq[i]) {
 			free_irq(xc->esc_virq[i], vcpu);
 			irq_dispose_mapping(xc->esc_virq[i]);
 			kfree(xc->esc_virq_names[i]);
 		}
-		/* Free the queue */
+	}
+
+	/* Disable the VP */
+	xive_native_disable_vp(xc->vp_id);
+
+	/* Free the queues */
+	for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
+		struct xive_q *q = &xc->queues[i];
+
 		xive_native_disable_queue(xc->vp_id, q, i);
 		if (q->qpage) {
 			free_pages((unsigned long)q->qpage,
...