Commit 86bdf3eb authored by Gavin Shan, committed by Marc Zyngier

KVM: Support dirty ring in conjunction with bitmap

ARM64 needs to dirty memory outside of a VCPU context when the VGIC/ITS is
enabled. This conflicts with ring-based dirty page tracking, which always
requires a running VCPU context.

Introduce a new flavor of dirty ring that requires the use of both VCPU
dirty rings and a dirty bitmap. The expectation is that for non-VCPU
sources of dirty memory (such as the VGIC/ITS on arm64), KVM writes to
the dirty bitmap. Userspace should scan the dirty bitmap before migrating
the VM to the target.

Use an additional capability to advertise this behavior. The newly added
capability (KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP) can't be enabled before
KVM_CAP_DIRTY_LOG_RING_ACQ_REL on ARM64. In this way, the newly added
capability is treated as an extension of KVM_CAP_DIRTY_LOG_RING_ACQ_REL.
Suggested-by: Marc Zyngier <maz@kernel.org>
Suggested-by: Peter Xu <peterx@redhat.com>
Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221110104914.31280-4-gshan@redhat.com
parent e8a18565
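As a usage illustration (not part of this commit), a VMM might enable the two
capabilities in the order described above: the ring capability first, the
bitmap capability second, and both before any memslot is created. The helper
name below and the omitted error handling are hypothetical.

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Hypothetical helper: vm_fd is a VM file descriptor from KVM_CREATE_VM. */
  static int enable_dirty_ring_with_bitmap(int vm_fd, __u64 ring_bytes)
  {
  	struct kvm_enable_cap ring_cap = { 0 };
  	struct kvm_enable_cap bitmap_cap = { 0 };

  	/* Step 1: vCPU dirty rings (acquire/release flavour on arm64). */
  	ring_cap.cap = KVM_CAP_DIRTY_LOG_RING_ACQ_REL;
  	ring_cap.args[0] = ring_bytes;	/* ring size in bytes, power of two */
  	if (ioctl(vm_fd, KVM_ENABLE_CAP, &ring_cap))
  		return -1;

  	/* Step 2: backup bitmap; must happen before any memslot exists. */
  	bitmap_cap.cap = KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP;
  	return ioctl(vm_fd, KVM_ENABLE_CAP, &bitmap_cap);
  }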
@@ -8003,13 +8003,6 @@ flushing is done by the KVM_GET_DIRTY_LOG ioctl). To achieve that, one
 needs to kick the vcpu out of KVM_RUN using a signal. The resulting
 vmexit ensures that all dirty GFNs are flushed to the dirty rings.
 
-NOTE: the capability KVM_CAP_DIRTY_LOG_RING and the corresponding
-ioctl KVM_RESET_DIRTY_RINGS are mutual exclusive to the existing ioctls
-KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG. After enabling
-KVM_CAP_DIRTY_LOG_RING with an acceptable dirty ring size, the virtual
-machine will switch to ring-buffer dirty page tracking and further
-KVM_GET_DIRTY_LOG or KVM_CLEAR_DIRTY_LOG ioctls will fail.
-
 NOTE: KVM_CAP_DIRTY_LOG_RING_ACQ_REL is the only capability that
 should be exposed by weakly ordered architecture, in order to indicate
 the additional memory ordering requirements imposed on userspace when
@@ -8018,6 +8011,33 @@ Architecture with TSO-like ordering (such as x86) are allowed to
 expose both KVM_CAP_DIRTY_LOG_RING and KVM_CAP_DIRTY_LOG_RING_ACQ_REL
 to userspace.
 
+After enabling the dirty rings, the userspace needs to detect the
+capability of KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP to see whether the
+ring structures can be backed by per-slot bitmaps. With this capability
+advertised, it means the architecture can dirty guest pages without
+vcpu/ring context, so that some of the dirty information will still be
+maintained in the bitmap structure. KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP
+can't be enabled if the capability of KVM_CAP_DIRTY_LOG_RING_ACQ_REL
+hasn't been enabled, or any memslot has been existing.
+
+Note that the bitmap here is only a backup of the ring structure. The
+use of the ring and bitmap combination is only beneficial if there is
+only a very small amount of memory that is dirtied out of vcpu/ring
+context. Otherwise, the stand-alone per-slot bitmap mechanism needs to
+be considered.
+
+To collect dirty bits in the backup bitmap, userspace can use the same
+KVM_GET_DIRTY_LOG ioctl. KVM_CLEAR_DIRTY_LOG isn't needed as long as all
+the generation of the dirty bits is done in a single pass. Collecting
+the dirty bitmap should be the very last thing that the VMM does before
+considering the state as complete. VMM needs to ensure that the dirty
+state is final and avoid missing dirty pages from another ioctl ordered
+after the bitmap collection.
+
+NOTE: One example of using the backup bitmap is saving arm64 vgic/its
+tables through KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} command on
+KVM device "kvm-arm-vgic-its" when dirty ring is enabled.
+
 8.30 KVM_CAP_XEN_HVM
 --------------------
......
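A minimal sketch of the collection step described in the documentation above
(not part of the patch): once the vCPUs are stopped and the rings have been
drained and reset, the VMM reads the backup bitmap of each slot as its final
pass. The helper name, slot id encoding comment and buffer management are
illustrative assumptions.

  #include <linux/kvm.h>
  #include <sys/ioctl.h>
  #include <string.h>

  /* Hypothetical helper: bitmap must hold one bit per page of the memslot. */
  static int collect_backup_bitmap(int vm_fd, __u32 slot_id,
  				   void *bitmap, size_t bitmap_bytes)
  {
  	struct kvm_dirty_log log = { 0 };

  	memset(bitmap, 0, bitmap_bytes);
  	log.slot = slot_id;		/* (address space id << 16) | slot id */
  	log.dirty_bitmap = bitmap;

  	/* Should be the last dirty-state ioctl before the state is final. */
  	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
  }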
@@ -52,7 +52,10 @@ KVM_DEV_ARM_VGIC_GRP_CTRL
   KVM_DEV_ARM_ITS_SAVE_TABLES
     save the ITS table data into guest RAM, at the location provisioned
-    by the guest in corresponding registers/table entries.
+    by the guest in corresponding registers/table entries. Should userspace
+    require a form of dirty tracking to identify which pages are modified
+    by the saving process, it should use a bitmap even if using another
+    mechanism to track the memory dirtied by the vCPUs.
 
     The layout of the tables in guest memory defines an ABI. The entries
     are laid out in little endian format as described in the last paragraph.
......
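The vgic-its documentation change above pairs with the backup bitmap: saving
the ITS tables writes guest RAM without a running vCPU. A hedged sketch of
triggering that save from userspace, assuming an arm64 build where the
KVM_DEV_ARM_* constants are available and its_dev_fd was obtained earlier via
KVM_CREATE_DEVICE for the "kvm-arm-vgic-its" device:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static int save_its_tables(int its_dev_fd)
  {
  	struct kvm_device_attr attr = {
  		.group	= KVM_DEV_ARM_VGIC_GRP_CTRL,
  		.attr	= KVM_DEV_ARM_ITS_SAVE_TABLES,
  	};

  	/* Pages dirtied by the save show up in the per-slot backup bitmap. */
  	return ioctl(its_dev_fd, KVM_SET_DEVICE_ATTR, &attr);
  }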
@@ -37,6 +37,11 @@ static inline u32 kvm_dirty_ring_get_rsvd_entries(void)
 	return 0;
 }
 
+static inline bool kvm_use_dirty_bitmap(struct kvm *kvm)
+{
+	return true;
+}
+
 static inline int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring,
 				       int index, u32 size)
 {
@@ -67,6 +72,8 @@ static inline void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
 #else /* CONFIG_HAVE_KVM_DIRTY_RING */
 
 int kvm_cpu_dirty_log_size(void);
+bool kvm_use_dirty_bitmap(struct kvm *kvm);
+bool kvm_arch_allow_write_without_running_vcpu(struct kvm *kvm);
 u32 kvm_dirty_ring_get_rsvd_entries(void);
 int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size);
......
@@ -779,6 +779,7 @@ struct kvm {
 	pid_t userspace_pid;
 	unsigned int max_halt_poll_ns;
 	u32 dirty_ring_size;
+	bool dirty_ring_with_bitmap;
 	bool vm_bugged;
 	bool vm_dead;
......
@@ -1178,6 +1178,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_ZPCI_OP 221
 #define KVM_CAP_S390_CPU_TOPOLOGY 222
 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223
+#define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 224
 
 #ifdef KVM_CAP_IRQ_ROUTING
......
@@ -33,6 +33,12 @@ config HAVE_KVM_DIRTY_RING_ACQ_REL
 	bool
 	select HAVE_KVM_DIRTY_RING
 
+# Allow enabling both the dirty bitmap and dirty ring. Only architectures
+# that need to dirty memory outside of a vCPU context should select this.
+config NEED_KVM_DIRTY_RING_WITH_BITMAP
+	bool
+	depends on HAVE_KVM_DIRTY_RING
+
 config HAVE_KVM_EVENTFD
 	bool
 	select EVENTFD
......
@@ -21,6 +21,20 @@ u32 kvm_dirty_ring_get_rsvd_entries(void)
 	return KVM_DIRTY_RING_RSVD_ENTRIES + kvm_cpu_dirty_log_size();
 }
 
+bool kvm_use_dirty_bitmap(struct kvm *kvm)
+{
+	lockdep_assert_held(&kvm->slots_lock);
+
+	return !kvm->dirty_ring_size || kvm->dirty_ring_with_bitmap;
+}
+
+#ifndef CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP
+bool kvm_arch_allow_write_without_running_vcpu(struct kvm *kvm)
+{
+	return false;
+}
+#endif
+
 static u32 kvm_dirty_ring_used(struct kvm_dirty_ring *ring)
 {
 	return READ_ONCE(ring->dirty_index) - READ_ONCE(ring->reset_index);
......
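The #ifndef block above is only a fallback. An architecture that selects
CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP is expected to supply its own strong
definition; the arm64 version (added in a follow-up patch) boils down to the
sketch below, where the vgic_has_its() condition is shown for illustration.

  /* arm64-side sketch: writes without a running vCPU only come from the ITS. */
  bool kvm_arch_allow_write_without_running_vcpu(struct kvm *kvm)
  {
  	return vgic_has_its(kvm);
  }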
@@ -1617,7 +1617,7 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 		new->dirty_bitmap = NULL;
 	else if (old && old->dirty_bitmap)
 		new->dirty_bitmap = old->dirty_bitmap;
-	else if (!kvm->dirty_ring_size) {
+	else if (kvm_use_dirty_bitmap(kvm)) {
 		r = kvm_alloc_dirty_bitmap(new);
 		if (r)
 			return r;
@@ -2060,8 +2060,8 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 	unsigned long n;
 	unsigned long any = 0;
 
-	/* Dirty ring tracking is exclusive to dirty log tracking */
-	if (kvm->dirty_ring_size)
+	/* Dirty ring tracking may be exclusive to dirty log tracking */
+	if (!kvm_use_dirty_bitmap(kvm))
 		return -ENXIO;
 
 	*memslot = NULL;
@@ -2125,8 +2125,8 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 	unsigned long *dirty_bitmap_buffer;
 	bool flush;
 
-	/* Dirty ring tracking is exclusive to dirty log tracking */
-	if (kvm->dirty_ring_size)
+	/* Dirty ring tracking may be exclusive to dirty log tracking */
+	if (!kvm_use_dirty_bitmap(kvm))
 		return -ENXIO;
 
 	as_id = log->slot >> 16;
@@ -2237,8 +2237,8 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	unsigned long *dirty_bitmap_buffer;
 	bool flush;
 
-	/* Dirty ring tracking is exclusive to dirty log tracking */
-	if (kvm->dirty_ring_size)
+	/* Dirty ring tracking may be exclusive to dirty log tracking */
+	if (!kvm_use_dirty_bitmap(kvm))
 		return -ENXIO;
 
 	as_id = log->slot >> 16;
@@ -3305,7 +3305,10 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
 	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
 
 #ifdef CONFIG_HAVE_KVM_DIRTY_RING
-	if (WARN_ON_ONCE(!vcpu) || WARN_ON_ONCE(vcpu->kvm != kvm))
+	if (WARN_ON_ONCE(vcpu && vcpu->kvm != kvm))
+		return;
+
+	if (WARN_ON_ONCE(!kvm_arch_allow_write_without_running_vcpu(kvm) && !vcpu))
 		return;
 #endif
@@ -3313,7 +3316,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
 		u32 slot = (memslot->as_id << 16) | memslot->id;
 
-		if (kvm->dirty_ring_size)
+		if (kvm->dirty_ring_size && vcpu)
 			kvm_dirty_ring_push(vcpu, slot, rel_gfn);
 		else
 			set_bit_le(rel_gfn, memslot->dirty_bitmap);
@@ -4482,6 +4485,9 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 		return KVM_DIRTY_RING_MAX_ENTRIES * sizeof(struct kvm_dirty_gfn);
 #else
 		return 0;
 #endif
+#ifdef CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP
+	case KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP:
+#endif
 	case KVM_CAP_BINARY_STATS_FD:
 	case KVM_CAP_SYSTEM_EVENT_DATA:
@@ -4558,6 +4564,20 @@ int __attribute__((weak)) kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	return -EINVAL;
 }
 
+static bool kvm_are_all_memslots_empty(struct kvm *kvm)
+{
+	int i;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		if (!kvm_memslots_empty(__kvm_memslots(kvm, i)))
+			return false;
+	}
+
+	return true;
+}
+
 static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
 					   struct kvm_enable_cap *cap)
 {
@@ -4588,6 +4608,29 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
 			return -EINVAL;
 
 		return kvm_vm_ioctl_enable_dirty_log_ring(kvm, cap->args[0]);
+	case KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP: {
+		int r = -EINVAL;
+
+		if (!IS_ENABLED(CONFIG_NEED_KVM_DIRTY_RING_WITH_BITMAP) ||
+		    !kvm->dirty_ring_size || cap->flags)
+			return r;
+
+		mutex_lock(&kvm->slots_lock);
+
+		/*
+		 * For simplicity, allow enabling ring+bitmap if and only if
+		 * there are no memslots, e.g. to ensure all memslots allocate
+		 * a bitmap after the capability is enabled.
+		 */
+		if (kvm_are_all_memslots_empty(kvm)) {
+			kvm->dirty_ring_with_bitmap = true;
+			r = 0;
+		}
+
+		mutex_unlock(&kvm->slots_lock);
+
+		return r;
+	}
 	default:
 		return kvm_vm_ioctl_enable_cap(kvm, cap);
 	}
......
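To close the loop on the check-extension path added above, a hypothetical
userspace probe (not part of the patch) could verify availability before
attempting to enable the capability:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Returns non-zero when the kernel advertises ring+bitmap support. */
  static int has_ring_with_bitmap(int vm_fd)
  {
  	return ioctl(vm_fd, KVM_CHECK_EXTENSION,
  		     KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP) > 0;
  }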