Commit 5552de7b authored by Paolo Bonzini


Merge tag 'kvm-s390-next-5.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: pvdump and selftest improvements

- add an interface to provide a hypervisor dump for secure guests
- improve selftests to report individual test results
parents b31455e9 b1edf7f1
...@@ -5127,7 +5127,15 @@ into ESA mode. This reset is a superset of the initial reset.
__u32 reserved[3];
};
**Ultravisor return codes**
The Ultravisor return (reason) codes are provided by the kernel if an
Ultravisor call has been executed to achieve the results expected by
the command. Therefore they are independent of the IOCTL return
code. If KVM changes `rc`, its value will always be greater than 0,
hence setting it to 0 before issuing a PV command is advised so that
a change of `rc` can be detected.
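For illustration, a caller might clear `rc` before each command; this
is a minimal sketch (not part of the official documentation), assuming
`vm_fd` is an open KVM VM file descriptor::

  struct kvm_pv_cmd cmd = {
          .cmd = KVM_PV_ENABLE,
          .rc = 0,    /* cleared so a change by KVM is detectable */
          .rrc = 0,
  };

  if (ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd) && cmd.rc)
          /* rc > 0 means KVM stored an Ultravisor return code */
          fprintf(stderr, "UV rc 0x%x rrc 0x%x\n", cmd.rc, cmd.rrc);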
**cmd values:**
KVM_PV_ENABLE
Allocate memory and register the VM with the Ultravisor, thereby
...@@ -5143,7 +5151,6 @@ KVM_PV_ENABLE
===== =============================
KVM_PV_DISABLE
Deregister the VM from the Ultravisor and reclaim the memory that
had been donated to the Ultravisor, making it usable by the kernel
again. All registered VCPUs are converted back to non-protected
...@@ -5160,6 +5167,117 @@ KVM_PV_VM_VERIFY
Verify the integrity of the unpacked image. Only if this succeeds,
KVM is allowed to start protected VCPUs.
KVM_PV_INFO
:Capability: KVM_CAP_S390_PROTECTED_DUMP
Presents an API that provides Ultravisor related data to userspace
via subcommands. len_max is the size of the user space buffer,
len_written is KVM's indication of how many bytes of that buffer
were actually written to. len_written can be used to determine the
valid fields if more response fields are added in the future.
::
enum pv_cmd_info_id {
KVM_PV_INFO_VM,
KVM_PV_INFO_DUMP,
};
struct kvm_s390_pv_info_header {
__u32 id;
__u32 len_max;
__u32 len_written;
__u32 reserved;
};
struct kvm_s390_pv_info {
struct kvm_s390_pv_info_header header;
struct kvm_s390_pv_info_dump dump;
struct kvm_s390_pv_info_vm vm;
};
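As a usage sketch (an illustration, not from the original
documentation; `vm_fd` is assumed to be an open KVM VM file
descriptor)::

  struct kvm_s390_pv_info info = {
          .header.id = KVM_PV_INFO_VM,
          .header.len_max = sizeof(info),
  };
  struct kvm_pv_cmd cmd = {
          .cmd = KVM_PV_INFO,
          .data = (unsigned long)&info,
  };

  if (!ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd))
          /* header.len_written tells how much of info is valid */
          printf("max PV vcpus: %llu\n", info.vm.max_cpus);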
**subcommands:**
KVM_PV_INFO_VM
This subcommand provides basic Ultravisor information for PV
hosts. These values are likely also exported as files in the sysfs
firmware UV query interface but they are more easily available to
programs in this API.
The installed calls and feature_indication members provide the
installed UV calls and the UV's other feature indications.
The max_* members provide information about the maximum number of PV
vcpus, PV guests and PV guest memory size.
::
struct kvm_s390_pv_info_vm {
__u64 inst_calls_list[4];
__u64 max_cpus;
__u64 max_guests;
__u64 max_guest_addr;
__u64 feature_indication;
};
KVM_PV_INFO_DUMP
This subcommand provides information related to dumping PV guests.
::
struct kvm_s390_pv_info_dump {
__u64 dump_cpu_buffer_len;
__u64 dump_config_mem_buffer_per_1m;
__u64 dump_config_finalize_len;
};
KVM_PV_DUMP
:Capability: KVM_CAP_S390_PROTECTED_DUMP
Presents an API that provides calls which facilitate dumping a
protected VM.
::
struct kvm_s390_pv_dmp {
__u64 subcmd;
__u64 buff_addr;
__u64 buff_len;
__u64 gaddr; /* For dump storage state */
};
**subcommands:**
KVM_PV_DUMP_INIT
Initializes the dump process of a protected VM. If this call does
not succeed all other subcommands will fail with -EINVAL. This
subcommand will return -EINVAL if a previously started dump process
has not yet been completed.
Not all PV VMs can be dumped; the owner needs to set the `dump
allowed` PCF bit 34 in the SE header to allow dumping.
KVM_PV_DUMP_CONFIG_STOR_STATE
Stores `buff_len` bytes of tweak component values starting with
the 1MB block specified by the absolute guest address
(`gaddr`). `buff_len` needs to be `conf_dump_storage_state_len`
aligned and at least as large as the `conf_dump_storage_state_len`
value provided by the dump uv_info data. The buffer at `buff_addr`
might be written to even if an error rc is returned, for instance
if we encounter a fault after writing the first page of data.
KVM_PV_DUMP_COMPLETE
If the subcommand succeeds, it completes the dump process and allows
KVM_PV_DUMP_INIT to be called again.
On success `conf_dump_finalize_len` bytes of completion data will be
stored to the `buff_addr`. The completion data contains a key
derivation seed, IV, tweak nonce and encryption keys as well as an
authentication tag all of which are needed to decrypt the dump at a
later time.
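Putting the three subcommands together, a dump might be driven as in
this minimal sketch; `tweak_buf` and `compl_buf` are hypothetical
buffers sized from the KVM_PV_INFO_DUMP data, and error handling is
omitted::

  struct kvm_s390_pv_dmp dmp = { .subcmd = KVM_PV_DUMP_INIT };
  struct kvm_pv_cmd cmd = {
          .cmd = KVM_PV_DUMP,
          .data = (unsigned long)&dmp,
  };

  ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd);      /* start the dump */

  dmp.subcmd = KVM_PV_DUMP_CONFIG_STOR_STATE;
  dmp.buff_addr = (unsigned long)tweak_buf;     /* conf_dump_storage_state_len aligned */
  dmp.buff_len = tweak_buf_len;
  dmp.gaddr = 0;                                /* first 1MB block, updated on return */
  ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd);

  dmp.subcmd = KVM_PV_DUMP_COMPLETE;
  dmp.buff_addr = (unsigned long)compl_buf;     /* >= conf_dump_finalize_len bytes */
  dmp.buff_len = compl_buf_len;
  ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd);      /* allows KVM_PV_DUMP_INIT again */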
4.126 KVM_X86_SET_MSR_FILTER
----------------------------
...@@ -5802,6 +5920,32 @@ of CPUID leaf 0xD on the host.
This ioctl injects an event channel interrupt directly to the guest vCPU.
4.136 KVM_S390_PV_CPU_COMMAND
-----------------------------
:Capability: KVM_CAP_S390_PROTECTED_DUMP
:Architectures: s390
:Type: vcpu ioctl
:Parameters: none
:Returns: 0 on success, < 0 on error
This ioctl closely mirrors `KVM_S390_PV_COMMAND` but handles requests
for vcpus. It re-uses the kvm_s390_pv_dmp struct and hence also shares
the command ids.
**command:**
KVM_PV_DUMP
Presents an API that provides calls which facilitate dumping a vcpu
of a protected VM.
**subcommand:**
KVM_PV_DUMP_CPU
Provides encrypted dump data like register values.
The length of the returned data is provided by uv_info.guest_cpu_stor_len.
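A minimal sketch of the call, assuming `vcpu_fd` is the vcpu file
descriptor and `cpu_buf` a hypothetical buffer of
`dump_cpu_buffer_len` bytes::

  struct kvm_s390_pv_dmp dmp = {
          .subcmd = KVM_PV_DUMP_CPU,
          .buff_addr = (unsigned long)cpu_buf,
          .buff_len = cpu_buf_len,   /* uv_info.guest_cpu_stor_len */
  };
  struct kvm_pv_cmd cmd = {
          .cmd = KVM_PV_DUMP,
          .data = (unsigned long)&dmp,
  };

  ioctl(vcpu_fd, KVM_S390_PV_CPU_COMMAND, &cmd);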
5. The kvm_run structure
========================
...@@ -7956,6 +8100,20 @@ should adjust CPUID leaf 0xA to reflect that the PMU is disabled.
When enabled, KVM will exit to userspace with KVM_EXIT_SYSTEM_EVENT of
type KVM_SYSTEM_EVENT_SUSPEND to process the guest suspend request.
8.37 KVM_CAP_S390_PROTECTED_DUMP
--------------------------------
:Capability: KVM_CAP_S390_PROTECTED_DUMP
:Architectures: s390
:Type: vm
This capability indicates that KVM and the Ultravisor support dumping
PV guests. The `KVM_PV_DUMP` command is available for the
`KVM_S390_PV_COMMAND` ioctl and the `KVM_PV_INFO` command provides
dump related UV data. Also the vcpu ioctl `KVM_S390_PV_CPU_COMMAND` is
available and supports the `KVM_PV_DUMP_CPU` subcommand.
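Userspace would typically probe for the capability before starting a
dump; a sketch, with `kvm_fd` assumed to be an open /dev/kvm file
descriptor and `start_pv_dump()` a hypothetical helper::

  if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_PROTECTED_DUMP) > 0)
          start_pv_dump();   /* all dump UVCs are installed */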
9. Known KVM API problems
=========================
......
...@@ -10,3 +10,4 @@ KVM for s390 systems
s390-diag
s390-pv
s390-pv-boot
s390-pv-dump
.. SPDX-License-Identifier: GPL-2.0
===========================================
s390 (IBM Z) Protected Virtualization dumps
===========================================
Summary
-------
Dumping a VM is an essential tool for debugging problems inside
it. This is especially true when a protected VM runs into trouble as
there's no way to access its memory and registers from the outside
while it's running.
However, when dumping a protected VM, we need to maintain its
confidentiality until the dump is in the hands of the VM owner, who
should be the only one capable of analysing it.
The confidentiality of the VM dump is ensured by the Ultravisor, which
provides an interface to KVM over which encrypted CPU and memory data
can be requested. The encryption is based on the Customer
Communication Key, which is the key used to encrypt VM data in a way
that the customer is able to decrypt.
Dump process
------------
A dump is done in 3 steps:
**Initiation**
This step initializes the dump process, generates cryptographic seeds
and extracts dump keys with which the VM dump data will be encrypted.
**Data gathering**
Currently there are two types of data that can be gathered from a VM:
the memory and the vcpu state.
The vcpu state contains all the important registers of a vcpu:
general, floating point, vector, control and tod/timers. The vcpu
dump can contain incomplete data if a vcpu is dumped while an
instruction is emulated with the help of the hypervisor. This is
indicated by a flag bit in the dump data. For the same reason it is
very important to not only write out the encrypted vcpu state, but
also the unencrypted state from the hypervisor.
The memory state is further divided into the encrypted memory and its
metadata comprised of the encryption tweaks and status flags. The
encrypted memory can simply be read once it has been exported. The
time of the export does not matter as no re-encryption is
needed. Memory that has been swapped out and hence was exported can be
read from the swap and written to the dump target without need for any
special actions.
The tweaks / status flags for the exported pages need to be requested
from the Ultravisor.
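As a worked example, assuming a hypothetical
`conf_dump_storage_state_len` of 64 bytes (the real value is reported
by the UV query interface), the tweak / status metadata for a 4 GiB
guest would amount to 4096 blocks * 64 bytes = 256 KiB.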
**Finalization**
The finalization step will provide the data needed to be able to
decrypt the vcpu and memory data and end the dump process. When this
step completes successfully a new dump initiation can be started.
...@@ -41,6 +41,10 @@ void uv_query_info(void)
uv_info.max_num_sec_conf = uvcb.max_num_sec_conf;
uv_info.max_guest_cpu_id = uvcb.max_guest_cpu_id;
uv_info.uv_feature_indications = uvcb.uv_feature_indications;
uv_info.supp_se_hdr_ver = uvcb.supp_se_hdr_versions;
uv_info.supp_se_hdr_pcf = uvcb.supp_se_hdr_pcf;
uv_info.conf_dump_storage_state_len = uvcb.conf_dump_storage_state_len;
uv_info.conf_dump_finalize_len = uvcb.conf_dump_finalize_len;
}
#ifdef CONFIG_PROTECTED_VIRTUALIZATION_GUEST
......
...@@ -923,6 +923,7 @@ struct kvm_s390_pv {
u64 guest_len;
unsigned long stor_base;
void *stor_var;
bool dumping;
};
struct kvm_arch{
......
...@@ -50,6 +50,10 @@
#define UVC_CMD_SET_UNSHARE_ALL 0x0340
#define UVC_CMD_PIN_PAGE_SHARED 0x0341
#define UVC_CMD_UNPIN_PAGE_SHARED 0x0342
#define UVC_CMD_DUMP_INIT 0x0400
#define UVC_CMD_DUMP_CONF_STOR_STATE 0x0401
#define UVC_CMD_DUMP_CPU 0x0402
#define UVC_CMD_DUMP_COMPLETE 0x0403
#define UVC_CMD_SET_SHARED_ACCESS 0x1000
#define UVC_CMD_REMOVE_SHARED_ACCESS 0x1001
#define UVC_CMD_RETR_ATTEST 0x1020
...@@ -77,6 +81,10 @@ enum uv_cmds_inst {
BIT_UVC_CMD_UNSHARE_ALL = 20,
BIT_UVC_CMD_PIN_PAGE_SHARED = 21,
BIT_UVC_CMD_UNPIN_PAGE_SHARED = 22,
BIT_UVC_CMD_DUMP_INIT = 24,
BIT_UVC_CMD_DUMP_CONFIG_STOR_STATE = 25,
BIT_UVC_CMD_DUMP_CPU = 26,
BIT_UVC_CMD_DUMP_COMPLETE = 27,
BIT_UVC_CMD_RETR_ATTEST = 28,
};
...@@ -110,7 +118,13 @@ struct uv_cb_qui {
u8 reserved88[158 - 136]; /* 0x0088 */
u16 max_guest_cpu_id; /* 0x009e */
u64 uv_feature_indications; /* 0x00a0 */
u64 reserveda8; /* 0x00a8 */
u64 supp_se_hdr_versions; /* 0x00b0 */
u64 supp_se_hdr_pcf; /* 0x00b8 */
u64 reservedc0; /* 0x00c0 */
u64 conf_dump_storage_state_len; /* 0x00c8 */
u64 conf_dump_finalize_len; /* 0x00d0 */
u8 reservedd8[256 - 216]; /* 0x00d8 */
} __packed __aligned(8);
/* Initialize Ultravisor */
...@@ -240,6 +254,31 @@ struct uv_cb_attest {
u64 reserved168[4]; /* 0x0168 */
} __packed __aligned(8);
struct uv_cb_dump_cpu {
struct uv_cb_header header;
u64 reserved08[2];
u64 cpu_handle;
u64 dump_area_origin;
u64 reserved28[5];
} __packed __aligned(8);
struct uv_cb_dump_stor_state {
struct uv_cb_header header;
u64 reserved08[2];
u64 config_handle;
u64 dump_area_origin;
u64 gaddr;
u64 reserved28[4];
} __packed __aligned(8);
struct uv_cb_dump_complete {
struct uv_cb_header header;
u64 reserved08[2];
u64 config_handle;
u64 dump_area_origin;
u64 reserved30[5];
} __packed __aligned(8);
static inline int __uv_call(unsigned long r1, unsigned long r2)
{
int cc;
...@@ -307,6 +346,10 @@ struct uv_info {
unsigned int max_num_sec_conf;
unsigned short max_guest_cpu_id;
unsigned long uv_feature_indications;
unsigned long supp_se_hdr_ver;
unsigned long supp_se_hdr_pcf;
unsigned long conf_dump_storage_state_len;
unsigned long conf_dump_finalize_len;
};
extern struct uv_info uv_info;
......
...@@ -392,6 +392,54 @@ static ssize_t uv_query_facilities(struct kobject *kobj,
static struct kobj_attribute uv_query_facilities_attr =
__ATTR(facilities, 0444, uv_query_facilities, NULL);
static ssize_t uv_query_supp_se_hdr_ver(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%lx\n", uv_info.supp_se_hdr_ver);
}
static struct kobj_attribute uv_query_supp_se_hdr_ver_attr =
__ATTR(supp_se_hdr_ver, 0444, uv_query_supp_se_hdr_ver, NULL);
static ssize_t uv_query_supp_se_hdr_pcf(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%lx\n", uv_info.supp_se_hdr_pcf);
}
static struct kobj_attribute uv_query_supp_se_hdr_pcf_attr =
__ATTR(supp_se_hdr_pcf, 0444, uv_query_supp_se_hdr_pcf, NULL);
static ssize_t uv_query_dump_cpu_len(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return scnprintf(page, PAGE_SIZE, "%lx\n",
uv_info.guest_cpu_stor_len);
}
static struct kobj_attribute uv_query_dump_cpu_len_attr =
__ATTR(uv_query_dump_cpu_len, 0444, uv_query_dump_cpu_len, NULL);
static ssize_t uv_query_dump_storage_state_len(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return scnprintf(page, PAGE_SIZE, "%lx\n",
uv_info.conf_dump_storage_state_len);
}
static struct kobj_attribute uv_query_dump_storage_state_len_attr =
__ATTR(dump_storage_state_len, 0444, uv_query_dump_storage_state_len, NULL);
static ssize_t uv_query_dump_finalize_len(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return scnprintf(page, PAGE_SIZE, "%lx\n",
uv_info.conf_dump_finalize_len);
}
static struct kobj_attribute uv_query_dump_finalize_len_attr =
__ATTR(dump_finalize_len, 0444, uv_query_dump_finalize_len, NULL);
static ssize_t uv_query_feature_indications(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
...@@ -437,6 +485,11 @@ static struct attribute *uv_query_attrs[] = {
&uv_query_max_guest_cpus_attr.attr,
&uv_query_max_guest_vms_attr.attr,
&uv_query_max_guest_addr_attr.attr,
&uv_query_supp_se_hdr_ver_attr.attr,
&uv_query_supp_se_hdr_pcf_attr.attr,
&uv_query_dump_storage_state_len_attr.attr,
&uv_query_dump_finalize_len_attr.attr,
&uv_query_dump_cpu_len_attr.attr,
NULL,
};
......
...@@ -606,6 +606,26 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_S390_PROTECTED:
r = is_prot_virt_host();
break;
case KVM_CAP_S390_PROTECTED_DUMP: {
u64 pv_cmds_dump[] = {
BIT_UVC_CMD_DUMP_INIT,
BIT_UVC_CMD_DUMP_CONFIG_STOR_STATE,
BIT_UVC_CMD_DUMP_CPU,
BIT_UVC_CMD_DUMP_COMPLETE,
};
int i;
r = is_prot_virt_host();
for (i = 0; i < ARRAY_SIZE(pv_cmds_dump); i++) {
if (!test_bit_inv(pv_cmds_dump[i],
(unsigned long *)&uv_info.inst_calls_list)) {
r = 0;
break;
}
}
break;
}
default:
r = 0;
}
...@@ -2220,6 +2240,115 @@ static int kvm_s390_cpus_to_pv(struct kvm *kvm, u16 *rc, u16 *rrc)
return r;
}
/*
* Here we provide user space with a direct interface to query UV
* related data like UV maxima and available features as well as
* feature specific data.
*
* To facilitate future extension of the data structures we'll try to
* write data up to the maximum requested length.
*/
static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
{
ssize_t len_min;
switch (info->header.id) {
case KVM_PV_INFO_VM: {
len_min = sizeof(info->header) + sizeof(info->vm);
if (info->header.len_max < len_min)
return -EINVAL;
memcpy(info->vm.inst_calls_list,
uv_info.inst_calls_list,
sizeof(uv_info.inst_calls_list));
/* It's max cpuid not max cpus, so it's off by one */
info->vm.max_cpus = uv_info.max_guest_cpu_id + 1;
info->vm.max_guests = uv_info.max_num_sec_conf;
info->vm.max_guest_addr = uv_info.max_sec_stor_addr;
info->vm.feature_indication = uv_info.uv_feature_indications;
return len_min;
}
case KVM_PV_INFO_DUMP: {
len_min = sizeof(info->header) + sizeof(info->dump);
if (info->header.len_max < len_min)
return -EINVAL;
info->dump.dump_cpu_buffer_len = uv_info.guest_cpu_stor_len;
info->dump.dump_config_mem_buffer_per_1m = uv_info.conf_dump_storage_state_len;
info->dump.dump_config_finalize_len = uv_info.conf_dump_finalize_len;
return len_min;
}
default:
return -EINVAL;
}
}
static int kvm_s390_pv_dmp(struct kvm *kvm, struct kvm_pv_cmd *cmd,
struct kvm_s390_pv_dmp dmp)
{
int r = -EINVAL;
void __user *result_buff = (void __user *)dmp.buff_addr;
switch (dmp.subcmd) {
case KVM_PV_DUMP_INIT: {
if (kvm->arch.pv.dumping)
break;
/*
* Block SIE entry as concurrent dump UVCs could lead
* to validities.
*/
kvm_s390_vcpu_block_all(kvm);
r = uv_cmd_nodata(kvm_s390_pv_get_handle(kvm),
UVC_CMD_DUMP_INIT, &cmd->rc, &cmd->rrc);
KVM_UV_EVENT(kvm, 3, "PROTVIRT DUMP INIT: rc %x rrc %x",
cmd->rc, cmd->rrc);
if (!r) {
kvm->arch.pv.dumping = true;
} else {
kvm_s390_vcpu_unblock_all(kvm);
r = -EINVAL;
}
break;
}
case KVM_PV_DUMP_CONFIG_STOR_STATE: {
if (!kvm->arch.pv.dumping)
break;
/*
* gaddr is an output parameter since we might stop
* early. As dmp will be copied back in our caller, we
* don't need to do it ourselves.
*/
r = kvm_s390_pv_dump_stor_state(kvm, result_buff, &dmp.gaddr, dmp.buff_len,
&cmd->rc, &cmd->rrc);
break;
}
case KVM_PV_DUMP_COMPLETE: {
if (!kvm->arch.pv.dumping)
break;
r = -EINVAL;
if (dmp.buff_len < uv_info.conf_dump_finalize_len)
break;
r = kvm_s390_pv_dump_complete(kvm, result_buff,
&cmd->rc, &cmd->rrc);
break;
}
default:
r = -ENOTTY;
break;
}
return r;
}
static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
{
int r = 0;
...@@ -2356,6 +2485,68 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
cmd->rc, cmd->rrc);
break;
}
case KVM_PV_INFO: {
struct kvm_s390_pv_info info = {};
ssize_t data_len;
/*
* No need to check the VM protection here.
*
* Maybe user space wants to query some of the data
* when the VM is still unprotected. If we see the
* need to fence a new data command we can still
* return an error in the info handler.
*/
r = -EFAULT;
if (copy_from_user(&info, argp, sizeof(info.header)))
break;
r = -EINVAL;
if (info.header.len_max < sizeof(info.header))
break;
data_len = kvm_s390_handle_pv_info(&info);
if (data_len < 0) {
r = data_len;
break;
}
/*
* If a data command struct is extended (multiple
* times) this can be used to determine how much of it
* is valid.
*/
info.header.len_written = data_len;
r = -EFAULT;
if (copy_to_user(argp, &info, data_len))
break;
r = 0;
break;
}
case KVM_PV_DUMP: {
struct kvm_s390_pv_dmp dmp;
r = -EINVAL;
if (!kvm_s390_pv_is_protected(kvm))
break;
r = -EFAULT;
if (copy_from_user(&dmp, argp, sizeof(dmp)))
break;
r = kvm_s390_pv_dmp(kvm, cmd, dmp);
if (r)
break;
if (copy_to_user(argp, &dmp, sizeof(dmp))) {
r = -EFAULT;
break;
}
break;
}
default:
r = -ENOTTY;
}
...@@ -4473,6 +4664,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
struct kvm_run *kvm_run = vcpu->run;
int rc;
/*
* Running a VM while dumping always has the potential to
* produce inconsistent dump data. But for PV vcpus a SIE
* entry while dumping could also lead to a fatal validity
* intercept which we absolutely want to avoid.
*/
if (vcpu->kvm->arch.pv.dumping)
return -EINVAL;
if (kvm_run->immediate_exit)
return -EINTR;
...@@ -4912,6 +5112,48 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp,
return -ENOIOCTLCMD;
}
static int kvm_s390_handle_pv_vcpu_dump(struct kvm_vcpu *vcpu,
struct kvm_pv_cmd *cmd)
{
struct kvm_s390_pv_dmp dmp;
void *data;
int ret;
/* Dump initialization is a prerequisite */
if (!vcpu->kvm->arch.pv.dumping)
return -EINVAL;
if (copy_from_user(&dmp, (__u8 __user *)cmd->data, sizeof(dmp)))
return -EFAULT;
/* We only handle this subcmd right now */
if (dmp.subcmd != KVM_PV_DUMP_CPU)
return -EINVAL;
/* CPU dump length is the same as create cpu storage donation. */
if (dmp.buff_len != uv_info.guest_cpu_stor_len)
return -EINVAL;
data = kvzalloc(uv_info.guest_cpu_stor_len, GFP_KERNEL);
if (!data)
return -ENOMEM;
ret = kvm_s390_pv_dump_cpu(vcpu, data, &cmd->rc, &cmd->rrc);
VCPU_EVENT(vcpu, 3, "PROTVIRT DUMP CPU %d rc %x rrc %x",
vcpu->vcpu_id, cmd->rc, cmd->rrc);
if (ret)
ret = -EINVAL;
/* On success copy over the dump data */
if (!ret && copy_to_user((__u8 __user *)dmp.buff_addr, data, uv_info.guest_cpu_stor_len))
ret = -EFAULT;
kvfree(data);
return ret;
}
long kvm_arch_vcpu_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
...@@ -5076,6 +5318,33 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
irq_state.len);
break;
}
case KVM_S390_PV_CPU_COMMAND: {
struct kvm_pv_cmd cmd;
r = -EINVAL;
if (!is_prot_virt_host())
break;
r = -EFAULT;
if (copy_from_user(&cmd, argp, sizeof(cmd)))
break;
r = -EINVAL;
if (cmd.flags)
break;
/* We only handle this cmd right now */
if (cmd.cmd != KVM_PV_DUMP)
break;
r = kvm_s390_handle_pv_vcpu_dump(vcpu, &cmd);
/* Always copy over UV rc / rrc data */
if (copy_to_user((__u8 __user *)argp, &cmd.rc,
sizeof(cmd.rc) + sizeof(cmd.rrc)))
r = -EFAULT;
break;
}
default:
r = -ENOTTY;
}
......
...@@ -250,6 +250,11 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
unsigned long tweak, u16 *rc, u16 *rrc);
int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc);
int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
int kvm_s390_pv_dump_complete(struct kvm *kvm, void __user *buff_user,
u16 *rc, u16 *rrc);
static inline u64 kvm_s390_pv_get_handle(struct kvm *kvm)
{
......
...@@ -7,6 +7,7 @@
*/
#include <linux/kvm.h>
#include <linux/kvm_host.h>
#include <linux/minmax.h>
#include <linux/pagemap.h>
#include <linux/sched/signal.h>
#include <asm/gmap.h>
...@@ -298,3 +299,200 @@ int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state)
return -EINVAL;
return 0;
}
int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc)
{
struct uv_cb_dump_cpu uvcb = {
.header.cmd = UVC_CMD_DUMP_CPU,
.header.len = sizeof(uvcb),
.cpu_handle = vcpu->arch.pv.handle,
.dump_area_origin = (u64)buff,
};
int cc;
cc = uv_call_sched(0, (u64)&uvcb);
*rc = uvcb.header.rc;
*rrc = uvcb.header.rrc;
return cc;
}
/* Size of the cache for the storage state dump data. 1MB for now */
#define DUMP_BUFF_LEN HPAGE_SIZE
/**
* kvm_s390_pv_dump_stor_state
*
* @kvm: pointer to the guest's KVM struct
* @buff_user: Userspace pointer where we will write the results to
* @gaddr: Starting absolute guest address for which the storage state
* is requested.
* @buff_user_len: Length of the buff_user buffer
* @rc: Pointer to where the uvcb return code is stored
* @rrc: Pointer to where the uvcb return reason code is stored
*
* Stores buff_len bytes of tweak component values to buff_user
* starting with the 1MB block specified by the absolute guest address
* (gaddr). The gaddr pointer will be updated with the last address
* for which data was written when returning to userspace. buff_user
* might be written to even if an error rc is returned. For instance
* if we encounter a fault after writing the first page of data.
*
* Context: kvm->lock needs to be held
*
* Return:
* 0 on success
* -ENOMEM if allocating the cache fails
* -EINVAL if gaddr is not aligned to 1MB
* -EINVAL if buff_user_len is not aligned to uv_info.conf_dump_storage_state_len
* -EINVAL if the UV call fails, rc and rrc will be set in this case
* -EFAULT if copying the result to buff_user failed
*/
int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
{
struct uv_cb_dump_stor_state uvcb = {
.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,
.header.len = sizeof(uvcb),
.config_handle = kvm->arch.pv.handle,
.gaddr = *gaddr,
.dump_area_origin = 0,
};
const u64 increment_len = uv_info.conf_dump_storage_state_len;
size_t buff_kvm_size;
size_t size_done = 0;
u8 *buff_kvm = NULL;
int cc, ret;
ret = -EINVAL;
/* UV call processes 1MB guest storage chunks at a time */
if (!IS_ALIGNED(*gaddr, HPAGE_SIZE))
goto out;
/*
* We provide the storage state for 1MB chunks of guest
* storage. The buffer will need to be aligned to
* conf_dump_storage_state_len so we don't end on a partial
* chunk.
*/
if (!buff_user_len ||
!IS_ALIGNED(buff_user_len, increment_len))
goto out;
/*
* Allocate a buffer from which we will later copy to the user
* process. We don't want userspace to dictate our buffer size
* so we limit it to DUMP_BUFF_LEN.
*/
ret = -ENOMEM;
buff_kvm_size = min_t(u64, buff_user_len, DUMP_BUFF_LEN);
buff_kvm = vzalloc(buff_kvm_size);
if (!buff_kvm)
goto out;
ret = 0;
uvcb.dump_area_origin = (u64)buff_kvm;
/* We will loop until the user buffer is filled or an error occurs */
do {
/* Get 1MB worth of guest storage state data */
cc = uv_call_sched(0, (u64)&uvcb);
/* All or nothing */
if (cc) {
ret = -EINVAL;
break;
}
size_done += increment_len;
uvcb.dump_area_origin += increment_len;
buff_user_len -= increment_len;
uvcb.gaddr += HPAGE_SIZE;
/* KVM Buffer full, time to copy to the process */
if (!buff_user_len || size_done == DUMP_BUFF_LEN) {
if (copy_to_user(buff_user, buff_kvm, size_done)) {
ret = -EFAULT;
break;
}
buff_user += size_done;
size_done = 0;
uvcb.dump_area_origin = (u64)buff_kvm;
}
} while (buff_user_len);
/* Report back where we ended dumping */
*gaddr = uvcb.gaddr;
/* Let's only log errors, we don't want to spam */
out:
if (ret)
KVM_UV_EVENT(kvm, 3,
"PROTVIRT DUMP STORAGE STATE: addr %llx ret %d, uvcb rc %x rrc %x",
uvcb.gaddr, ret, uvcb.header.rc, uvcb.header.rrc);
*rc = uvcb.header.rc;
*rrc = uvcb.header.rrc;
vfree(buff_kvm);
return ret;
}
/**
* kvm_s390_pv_dump_complete
*
* @kvm: pointer to the guest's KVM struct
* @buff_user: Userspace pointer where we will write the results to
* @rc: Pointer to where the uvcb return code is stored
* @rrc: Pointer to where the uvcb return reason code is stored
*
* Completes the dumping operation and writes the completion data to
* user space.
*
* Context: kvm->lock needs to be held
*
* Return:
* 0 on success
* -ENOMEM if allocating the completion buffer fails
* -EINVAL if the UV call fails, rc and rrc will be set in this case
* -EFAULT if copying the result to buff_user failed
*/
int kvm_s390_pv_dump_complete(struct kvm *kvm, void __user *buff_user,
u16 *rc, u16 *rrc)
{
struct uv_cb_dump_complete complete = {
.header.len = sizeof(complete),
.header.cmd = UVC_CMD_DUMP_COMPLETE,
.config_handle = kvm_s390_pv_get_handle(kvm),
};
u64 *compl_data;
int ret;
/* Allocate dump area */
compl_data = vzalloc(uv_info.conf_dump_finalize_len);
if (!compl_data)
return -ENOMEM;
complete.dump_area_origin = (u64)compl_data;
ret = uv_call_sched(0, (u64)&complete);
*rc = complete.header.rc;
*rrc = complete.header.rrc;
KVM_UV_EVENT(kvm, 3, "PROTVIRT DUMP COMPLETE: rc %x rrc %x",
complete.header.rc, complete.header.rrc);
if (!ret) {
/*
* kvm_s390_pv_dealloc_vm() will also (mem)set
* this to false on a reboot or other destroy
* operation for this vm.
*/
kvm->arch.pv.dumping = false;
kvm_s390_vcpu_unblock_all(kvm);
ret = copy_to_user(buff_user, compl_data, uv_info.conf_dump_finalize_len);
if (ret)
ret = -EFAULT;
}
vfree(compl_data);
/* If the UVC returned an error, translate it to -EINVAL */
if (ret > 0)
ret = -EINVAL;
return ret;
}
...@@ -1157,6 +1157,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_VM_TSC_CONTROL 214
#define KVM_CAP_SYSTEM_EVENT_DATA 215
#define KVM_CAP_ARM_SYSTEM_SUSPEND 216
#define KVM_CAP_S390_PROTECTED_DUMP 217
#ifdef KVM_CAP_IRQ_ROUTING
...@@ -1660,6 +1661,55 @@ struct kvm_s390_pv_unp {
__u64 tweak;
};
enum pv_cmd_dmp_id {
KVM_PV_DUMP_INIT,
KVM_PV_DUMP_CONFIG_STOR_STATE,
KVM_PV_DUMP_COMPLETE,
KVM_PV_DUMP_CPU,
};
struct kvm_s390_pv_dmp {
__u64 subcmd;
__u64 buff_addr;
__u64 buff_len;
__u64 gaddr; /* For dump storage state */
__u64 reserved[4];
};
enum pv_cmd_info_id {
KVM_PV_INFO_VM,
KVM_PV_INFO_DUMP,
};
struct kvm_s390_pv_info_dump {
__u64 dump_cpu_buffer_len;
__u64 dump_config_mem_buffer_per_1m;
__u64 dump_config_finalize_len;
};
struct kvm_s390_pv_info_vm {
__u64 inst_calls_list[4];
__u64 max_cpus;
__u64 max_guests;
__u64 max_guest_addr;
__u64 feature_indication;
};
struct kvm_s390_pv_info_header {
__u32 id;
__u32 len_max;
__u32 len_written;
__u32 reserved;
};
struct kvm_s390_pv_info {
struct kvm_s390_pv_info_header header;
union {
struct kvm_s390_pv_info_dump dump;
struct kvm_s390_pv_info_vm vm;
};
};
enum pv_cmd_id {
KVM_PV_ENABLE,
KVM_PV_DISABLE,
...@@ -1668,6 +1718,8 @@ enum pv_cmd_id {
KVM_PV_VERIFY,
KVM_PV_PREP_RESET,
KVM_PV_UNSHARE_ALL,
KVM_PV_INFO,
KVM_PV_DUMP,
};
struct kvm_pv_cmd {
...@@ -2118,4 +2170,7 @@ struct kvm_stats_desc {
/* Available with KVM_CAP_XSAVE2 */
#define KVM_GET_XSAVE2 _IOR(KVMIO, 0xcf, struct kvm_xsave)
/* Available with KVM_CAP_S390_PROTECTED_DUMP */
#define KVM_S390_PV_CPU_COMMAND _IOWR(KVMIO, 0xd0, struct kvm_pv_cmd)
#endif /* __LINUX_KVM_H */
...@@ -14,6 +14,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "kselftest.h"
enum mop_target {
LOGICAL,
...@@ -691,34 +692,92 @@ static void test_errors(void)
kvm_vm_free(t.kvm_vm);
}
struct testdef {
const char *name;
void (*test)(void);
int extension;
} testlist[] = {
{
.name = "simple copy",
.test = test_copy,
},
{
.name = "generic error checks",
.test = test_errors,
},
{
.name = "copy with storage keys",
.test = test_copy_key,
.extension = 1,
},
{
.name = "copy with key storage protection override",
.test = test_copy_key_storage_prot_override,
.extension = 1,
},
{
.name = "copy with key fetch protection",
.test = test_copy_key_fetch_prot,
.extension = 1,
},
{
.name = "copy with key fetch protection override",
.test = test_copy_key_fetch_prot_override,
.extension = 1,
},
{
.name = "error checks with key",
.test = test_errors_key,
.extension = 1,
},
{
.name = "termination",
.test = test_termination,
.extension = 1,
},
{
.name = "error checks with key storage protection override",
.test = test_errors_key_storage_prot_override,
.extension = 1,
},
{
.name = "error checks without key fetch prot override",
.test = test_errors_key_fetch_prot_override_not_enabled,
.extension = 1,
},
{
.name = "error checks with key fetch prot override",
.test = test_errors_key_fetch_prot_override_enabled,
.extension = 1,
},
};
int main(int argc, char *argv[])
{
int memop_cap, extension_cap, idx;

setbuf(stdout, NULL); /* Tell stdout not to buffer its content */
ksft_print_header();

memop_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP);
extension_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP_EXTENSION);
if (!memop_cap) {
ksft_exit_skip("CAP_S390_MEM_OP not supported.\n");
}

ksft_set_plan(ARRAY_SIZE(testlist));

for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) {
if (testlist[idx].extension >= extension_cap) {
testlist[idx].test();
ksft_test_result_pass("%s\n", testlist[idx].name);
} else {
ksft_test_result_skip("%s - extension level %d not supported\n",
testlist[idx].name,
testlist[idx].extension);
}
}

ksft_finished(); /* Print results and exit() accordingly */
}
...@@ -12,6 +12,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "kselftest.h"
#define VCPU_ID 3
#define LOCAL_IRQS 32
...@@ -202,7 +203,7 @@ static void inject_irq(int cpu_id)
static void test_normal(void)
{
ksft_print_msg("Testing normal reset\n");
/* Create VM */
vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
run = vcpu_state(vm, VCPU_ID);
...@@ -225,7 +226,7 @@ static void test_normal(void)
static void test_initial(void)
{
ksft_print_msg("Testing initial reset\n");
vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
run = vcpu_state(vm, VCPU_ID);
sync_regs = &run->s.regs;
...@@ -247,7 +248,7 @@ static void test_initial(void)
static void test_clear(void)
{
ksft_print_msg("Testing clear reset\n");
vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
run = vcpu_state(vm, VCPU_ID);
sync_regs = &run->s.regs;
...@@ -266,14 +267,35 @@ static void test_clear(void)
kvm_vm_free(vm);
}
struct testdef {
const char *name;
void (*test)(void);
bool needs_cap;
} testlist[] = {
{ "initial", test_initial, false },
{ "normal", test_normal, true },
{ "clear", test_clear, true },
};
int main(int argc, char *argv[])
{
bool has_s390_vcpu_resets = kvm_check_cap(KVM_CAP_S390_VCPU_RESETS);
int idx;
setbuf(stdout, NULL); /* Tell stdout not to buffer its content */
ksft_print_header();
ksft_set_plan(ARRAY_SIZE(testlist));

for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) {
if (!testlist[idx].needs_cap || has_s390_vcpu_resets) {
testlist[idx].test();
ksft_test_result_pass("%s\n", testlist[idx].name);
} else {
ksft_test_result_skip("%s - no VCPU_RESETS capability\n",
testlist[idx].name);
}
}
ksft_finished(); /* Print results and exit() accordingly */
}
...@@ -21,6 +21,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "diag318_test_handler.h"
#include "kselftest.h"
#define VCPU_ID 5
...@@ -74,27 +75,9 @@ static void compare_sregs(struct kvm_sregs *left, struct kvm_sync_regs *right)
#define TEST_SYNC_FIELDS (KVM_SYNC_GPRS|KVM_SYNC_ACRS|KVM_SYNC_CRS|KVM_SYNC_DIAG318)
#define INVALID_SYNC_FIELD 0x80000000
void test_read_invalid(struct kvm_vm *vm, struct kvm_run *run)
{
int rv;
/* Request reading invalid register set from VCPU. */
run->kvm_valid_regs = INVALID_SYNC_FIELD;
...@@ -110,6 +93,11 @@ int main(int argc, char *argv[])
"Invalid kvm_valid_regs did not cause expected KVM_RUN error: %d\n",
rv);
vcpu_state(vm, VCPU_ID)->kvm_valid_regs = 0;
}
void test_set_invalid(struct kvm_vm *vm, struct kvm_run *run)
{
int rv;
/* Request setting invalid register set into VCPU. */
run->kvm_dirty_regs = INVALID_SYNC_FIELD;
...@@ -125,6 +113,13 @@ int main(int argc, char *argv[])
"Invalid kvm_dirty_regs did not cause expected KVM_RUN error: %d\n",
rv);
vcpu_state(vm, VCPU_ID)->kvm_dirty_regs = 0;
}
void test_req_and_verify_all_valid_regs(struct kvm_vm *vm, struct kvm_run *run)
{
struct kvm_sregs sregs;
struct kvm_regs regs;
int rv;
/* Request and verify all valid register sets. */
run->kvm_valid_regs = TEST_SYNC_FIELDS;
...@@ -146,6 +141,13 @@ int main(int argc, char *argv[])
vcpu_sregs_get(vm, VCPU_ID, &sregs);
compare_sregs(&sregs, &run->s.regs);
}
void test_set_and_verify_various_reg_values(struct kvm_vm *vm, struct kvm_run *run)
{
struct kvm_sregs sregs;
struct kvm_regs regs;
int rv;
/* Set and verify various register values */
run->s.regs.gprs[11] = 0xBAD1DEA;
...@@ -180,6 +182,11 @@ int main(int argc, char *argv[])
vcpu_sregs_get(vm, VCPU_ID, &sregs);
compare_sregs(&sregs, &run->s.regs);
}
void test_clear_kvm_dirty_regs_bits(struct kvm_vm *vm, struct kvm_run *run)
{
int rv;
/* Clear kvm_dirty_regs bits, verify new s.regs values are
* overwritten with existing guest values.
...@@ -200,8 +207,46 @@ int main(int argc, char *argv[])
TEST_ASSERT(run->s.regs.diag318 != 0x4B1D,
"diag318 sync regs value incorrect 0x%llx.",
run->s.regs.diag318);
}
struct testdef {
const char *name;
void (*test)(struct kvm_vm *vm, struct kvm_run *run);
} testlist[] = {
{ "read invalid", test_read_invalid },
{ "set invalid", test_set_invalid },
{ "request+verify all valid regs", test_req_and_verify_all_valid_regs },
{ "set+verify various regs", test_set_and_verify_various_reg_values },
{ "clear kvm_dirty_regs bits", test_clear_kvm_dirty_regs_bits },
};
int main(int argc, char *argv[])
{
static struct kvm_run *run;
static struct kvm_vm *vm;
int idx;
/* Tell stdout not to buffer its content */
setbuf(stdout, NULL);
ksft_print_header();
if (!kvm_check_cap(KVM_CAP_SYNC_REGS))
ksft_exit_skip("CAP_SYNC_REGS not supported");
ksft_set_plan(ARRAY_SIZE(testlist));
/* Create VM */
vm = vm_create_default(VCPU_ID, 0, guest_code);
run = vcpu_state(vm, VCPU_ID);
for (idx = 0; idx < ARRAY_SIZE(testlist); idx++) {
testlist[idx].test(vm, run);
ksft_test_result_pass("%s\n", testlist[idx].name);
}
kvm_vm_free(vm);

ksft_finished(); /* Print results and exit() accordingly */
}
...@@ -8,6 +8,7 @@
#include <sys/mman.h>
#include "test_util.h"
#include "kvm_util.h"
#include "kselftest.h"
#define PAGE_SHIFT 12
#define PAGE_SIZE (1 << PAGE_SHIFT)
...@@ -63,12 +64,12 @@ static enum permission test_protection(void *addr, uint8_t key)
}
enum stage {
STAGE_INIT_SIMPLE,
TEST_SIMPLE,
STAGE_INIT_FETCH_PROT_OVERRIDE,
TEST_FETCH_PROT_OVERRIDE,
TEST_STORAGE_PROT_OVERRIDE,
STAGE_END /* must be the last entry (it's the amount of tests) */
};
struct test {
...@@ -182,7 +183,7 @@ static void guest_code(void)
GUEST_SYNC(perform_next_stage(&i, mapped_0));
}
#define HOST_SYNC_NO_TAP(vmp, stage) \
({ \
struct kvm_vm *__vm = (vmp); \
struct ucall uc; \
...@@ -198,12 +199,21 @@ static void guest_code(void)
ASSERT_EQ(uc.args[1], __stage); \
})
#define HOST_SYNC(vmp, stage) \
({ \
HOST_SYNC_NO_TAP(vmp, stage); \
ksft_test_result_pass("" #stage "\n"); \
})
int main(int argc, char *argv[])
{
struct kvm_vm *vm;
struct kvm_run *run;
vm_vaddr_t guest_0_page;
ksft_print_header();
ksft_set_plan(STAGE_END);
vm = vm_create_default(VCPU_ID, 0, guest_code);
run = vcpu_state(vm, VCPU_ID);
...@@ -212,9 +222,14 @@ int main(int argc, char *argv[])
HOST_SYNC(vm, TEST_SIMPLE);
guest_0_page = vm_vaddr_alloc(vm, PAGE_SIZE, 0);
if (guest_0_page != 0) {
/* Use NO_TAP so we don't get a PASS print */
HOST_SYNC_NO_TAP(vm, STAGE_INIT_FETCH_PROT_OVERRIDE);
ksft_test_result_skip("STAGE_INIT_FETCH_PROT_OVERRIDE - "
"Did not allocate page at 0\n");
} else {
HOST_SYNC(vm, STAGE_INIT_FETCH_PROT_OVERRIDE);
}
if (guest_0_page == 0)
mprotect(addr_gva2hva(vm, (vm_vaddr_t)0), PAGE_SIZE, PROT_READ);
run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
...@@ -224,4 +239,8 @@ int main(int argc, char *argv[])
run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE;
run->kvm_dirty_regs = KVM_SYNC_CRS;
HOST_SYNC(vm, TEST_STORAGE_PROT_OVERRIDE);
kvm_vm_free(vm);
ksft_finished(); /* Print results and exit() accordingly */
}