Commit 6ad1e29f authored by Sean Christopherson, committed by Paolo Bonzini

KVM: Clean up __kvm_gfn_to_hva_cache_init() and its callers

Barret reported a (technically benign) bug where nr_pages_avail can be
accessed without being initialized if gfn_to_hva_many() fails.

  virt/kvm/kvm_main.c:2193:13: warning: 'nr_pages_avail' may be
  used uninitialized in this function [-Wmaybe-uninitialized]

Rather than simply squashing the warning by initializing nr_pages_avail,
fix the underlying issues by reworking __kvm_gfn_to_hva_cache_init() to
return immediately instead of continuing on.  Now that all callers check
the result and/or bail immediately on a bad hva, there's no need to
explicitly nullify the memslot on error.
Reported-by: Barret Rhoden <brho@google.com>
Fixes: f1b9dd5e ("kvm: Disallow wraparound in kvm_gfn_to_hva_cache_init")
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent fcfbc617
@@ -2130,33 +2130,36 @@ static int __kvm_gfn_to_hva_cache_init(struct kvm_memslots *slots,
 	gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;
 	gfn_t nr_pages_needed = end_gfn - start_gfn + 1;
 	gfn_t nr_pages_avail;
-	int r = start_gfn <= end_gfn ? 0 : -EINVAL;
 
-	ghc->gpa = gpa;
+	/* Update ghc->generation before performing any error checks. */
 	ghc->generation = slots->generation;
-	ghc->len = len;
-	ghc->hva = KVM_HVA_ERR_BAD;
+
+	if (start_gfn > end_gfn) {
+		ghc->hva = KVM_HVA_ERR_BAD;
+		return -EINVAL;
+	}
 
 	/*
 	 * If the requested region crosses two memslots, we still
 	 * verify that the entire region is valid here.
 	 */
-	while (!r && start_gfn <= end_gfn) {
+	for ( ; start_gfn <= end_gfn; start_gfn += nr_pages_avail) {
 		ghc->memslot = __gfn_to_memslot(slots, start_gfn);
 		ghc->hva = gfn_to_hva_many(ghc->memslot, start_gfn,
 					   &nr_pages_avail);
 		if (kvm_is_error_hva(ghc->hva))
-			r = -EFAULT;
-		start_gfn += nr_pages_avail;
+			return -EFAULT;
 	}
 
 	/* Use the slow path for cross page reads and writes. */
-	if (!r && nr_pages_needed == 1)
+	if (nr_pages_needed == 1)
 		ghc->hva += offset;
 	else
 		ghc->memslot = NULL;
 
-	return r;
+	ghc->gpa = gpa;
+	ghc->len = len;
+	return 0;
 }
 
 int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
...
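The reworked control flow can be sketched as a self-contained C program. Everything below is a simplified stand-in, not the kernel code: `struct hva_cache`, `stub_gfn_to_hva_many()`, and `MAX_GFN` are hypothetical replacements for the KVM types and helpers, and `has_memslot` stands in for `ghc->memslot != NULL`. The point is the shape of the fix: bump the generation first, return immediately on a bad range or bad hva (so `nr_pages_avail` is never read uninitialized), and commit `gpa`/`len` only on success.

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define HVA_ERR_BAD ((unsigned long)-1)

typedef unsigned long gfn_t;

/* Hypothetical, pared-down stand-in for struct gfn_to_hva_cache. */
struct hva_cache {
	unsigned long gpa;
	unsigned long hva;
	size_t len;
	unsigned int generation;
	bool has_memslot;	/* stands in for ghc->memslot != NULL */
};

/*
 * Stub replacing gfn_to_hva_many(): pretend memory is mapped in 4-page
 * slots and that gfns at or above MAX_GFN are unmapped.
 */
#define MAX_GFN 16UL

static unsigned long stub_gfn_to_hva_many(gfn_t gfn, gfn_t *nr_pages_avail)
{
	if (gfn >= MAX_GFN)
		return HVA_ERR_BAD;
	*nr_pages_avail = 4 - (gfn % 4);	/* pages left in this slot */
	return gfn << PAGE_SHIFT;		/* fake host virtual address */
}

/* Mirrors the early-return layout of the reworked cache init. */
static int cache_init(struct hva_cache *ghc, unsigned long gpa, size_t len,
		      unsigned int generation)
{
	unsigned long offset = gpa & ((1UL << PAGE_SHIFT) - 1);
	gfn_t start_gfn = gpa >> PAGE_SHIFT;
	gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;
	gfn_t nr_pages_needed = end_gfn - start_gfn + 1;
	gfn_t nr_pages_avail;

	/* Update the generation before performing any error checks. */
	ghc->generation = generation;

	/* Wraparound (or len == 0) makes end_gfn precede start_gfn. */
	if (start_gfn > end_gfn) {
		ghc->hva = HVA_ERR_BAD;
		return -EINVAL;
	}

	for ( ; start_gfn <= end_gfn; start_gfn += nr_pages_avail) {
		ghc->hva = stub_gfn_to_hva_many(start_gfn, &nr_pages_avail);
		if (ghc->hva == HVA_ERR_BAD)
			return -EFAULT;	/* nr_pages_avail never consumed */
	}

	/* Use the slow path for cross page reads and writes. */
	ghc->has_memslot = (nr_pages_needed == 1);
	if (ghc->has_memslot)
		ghc->hva += offset;

	ghc->gpa = gpa;
	ghc->len = len;
	return 0;
}
```

Returning from inside the `for` body skips the `start_gfn += nr_pages_avail` increment, which is exactly why the uninitialized read disappears: on failure, `nr_pages_avail` is never evaluated.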