Commit 53861af9 authored by Linus Torvalds

Merge tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux

Pull virtio updates from Rusty Russell:
 "OK, this has the big virtio 1.0 implementation, as specified by OASIS.

  On top of that is the major rework of lguest, to use PCI and virtio
  1.0, to double-check the implementation.

  Then comes the inevitable fixes and cleanups from that work"

* tag 'virtio-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (80 commits)
  virtio: don't set VIRTIO_CONFIG_S_DRIVER_OK twice.
  virtio_net: unconditionally define struct virtio_net_hdr_v1.
  tools/lguest: don't use legacy definitions for net device in example launcher.
  virtio: Don't expose legacy net features when VIRTIO_NET_NO_LEGACY defined.
  tools/lguest: use common error macros in the example launcher.
  tools/lguest: give virtqueues names for better error messages
  tools/lguest: more documentation and checking of virtio 1.0 compliance.
  lguest: don't look in console features to find emerg_wr.
  tools/lguest: don't start devices until DRIVER_OK status set.
  tools/lguest: handle indirect partway through chain.
  tools/lguest: insert driver references from the 1.0 spec (4.1 Virtio Over PCI)
  tools/lguest: insert device references from the 1.0 spec (4.1 Virtio Over PCI)
  tools/lguest: rename virtio_pci_cfg_cap field to match spec.
  tools/lguest: fix features_accepted logic in example launcher.
  tools/lguest: handle device reset correctly in example launcher.
  virtual: Documentation: simplify and generalize paravirt_ops.txt
  lguest: remove NOTIFY call and eventfd facility.
  lguest: remove NOTIFY facility from demonstration launcher.
  lguest: use the PCI console device's emerg_wr for early boot messages.
  lguest: always put console in PCI slot #1.
  ...
parents 5c277007 5b40a7da

Paravirt_ops on IA64
====================
21 May 2008, Isaku Yamahata <yamahata@valinux.co.jp>

Introduction
------------
The aim of this document is to help with maintainability and to
encourage people to use paravirt_ops/IA64.

paravirt_ops (pv_ops for short) is the infrastructure for virtualization
support in the Linux kernel on x86. Several approaches to virtualization
support were proposed, and paravirt_ops emerged as the winner.
There are now also several IA64 virtualization technologies, such as
kvm/IA64, xen/IA64 and various academic IA64 hypervisors, so it makes
sense to add a generic virtualization infrastructure to Linux/IA64.

What is paravirt_ops?
---------------------
It was developed on x86 as virtualization support via an API, not an ABI.
It allows each hypervisor to override, at the API level, the operations
which are important to it, and it allows a single kernel binary to run
on all supported execution environments, including the native machine.

Essentially paravirt_ops is a set of function pointers which represent
operations corresponding to low level sensitive instructions and high
level functionality in various areas. One significant difference from a
usual function pointer table is that it allows optimization by binary
patching, because some of these operations are very performance
sensitive and the indirect call overhead is not negligible. With binary
patching, an indirect C function call can be transformed into a direct
C function call or into in-place execution, eliminating the overhead.

Thus, the operations of paravirt_ops are classified into three
categories (illustrated by the sketch after this list):

- simple indirect call
  These operations correspond to high level functionality, so the
  overhead of an indirect call isn't very important.

- indirect call which allows optimization with binary patch
  Usually these operations correspond to low level instructions. They
  are called frequently and are performance critical, so the overhead
  matters a great deal.

- a set of macros for hand written assembly code
  Hand written assembly code (.S files) also needs paravirtualization,
  because it contains sensitive instructions, or because some of its
  code paths are very performance critical.
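
To make the table-of-function-pointers idea concrete, here is a minimal,
hypothetical sketch in C. The names are purely illustrative (they are not
the actual IA64 definitions): a native default is installed at build time,
and a hypervisor port overwrites the pointers during early boot.

/*
 * Hypothetical sketch only -- not the real IA64 structures.  A table of
 * function pointers is filled with native implementations; a hypervisor
 * port simply overwrites the pointers before they are first used.
 */
struct pv_example_ops {
	unsigned long (*get_psr)(void);		/* sensitive instruction */
	void (*set_psr)(unsigned long psr);	/* sensitive instruction */
	void (*banner)(void);			/* high level, rarely called */
};

static unsigned long native_example_get_psr(void)
{
	/* On real hardware this would read the psr register via asm. */
	return 0;
}

static void native_example_set_psr(unsigned long psr) { (void)psr; }
static void native_example_banner(void) { }

struct pv_example_ops pv_example_ops = {
	.get_psr = native_example_get_psr,
	.set_psr = native_example_set_psr,
	.banner  = native_example_banner,
};

/*
 * A hypervisor port would do, very early in boot:
 *	pv_example_ops.get_psr = xen_example_get_psr;
 * and ordinary kernel code always calls pv_example_ops.get_psr().
 */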

The relation to the IA64 machine vector
---------------------------------------
Linux/IA64 has the IA64 machine vector functionality, which allows the
kernel to switch implementations (e.g. initialization, ipi, dma api...)
depending on the platform it is executing on.
We can replace some implementations very easily by defining a new machine
vector, so another approach to virtualization support would be to enhance
the machine vector functionality. The paravirt_ops approach was taken
instead, because:

- virtualization support needs wider coverage than the machine vector
  provides, e.g. low level instruction paravirtualization, which must be
  initialized very early, before platform detection.

- virtualization support needs more functionality, such as binary
  patching. The calling overhead is probably not very large compared to
  the emulation overhead of virtualization, but in the native case the
  overhead should be eliminated completely. A single kernel binary should
  run on each environment, including native, and the overhead of
  paravirt_ops on a native environment should be as small as possible.

- for full virtualization technology, e.g. KVM/IA64 or a Xen/IA64 HVM
  domain, the result would be
  (the emulated platform machine vector, probably dig) + (pv_ops).
  This means that the virtualization support layer should sit below
  the machine vector layer.

Possibly it might be better to move some function pointers from
paravirt_ops to the machine vector. In fact, the Xen domU case utilizes
both pv_ops and the machine vector.

IA64 paravirt_ops
-----------------
In this section, the concrete paravirt_ops are discussed.
Because of the architecture difference between ia64 and x86, the
resulting set of functions is very different from the x86 pv_ops.

- C function pointer tables
  They are not very performance critical, so a simple C indirect
  function call is acceptable. The following structures are defined at
  this moment. For details see linux/include/asm-ia64/paravirt.h

  - struct pv_info
    This structure describes the execution environment.
  - struct pv_init_ops
    This structure describes the various initialization hooks.
  - struct pv_iosapic_ops
    This structure describes hooks to iosapic operations.
  - struct pv_irq_ops
    This structure describes hooks to irq related operations.
  - struct pv_time_ops
    This structure describes hooks to steal time accounting.

- a set of indirect calls which need optimization
  Currently this class of functions corresponds to a subset of IA64
  intrinsics. At this moment the binary patch optimization is not yet
  implemented.
  struct pv_cpu_ops is defined. For details see
  linux/include/asm-ia64/paravirt_privop.h
  Mostly its members correspond 1:1 to ia64 intrinsics.
  Caveat: Currently they are defined as C indirect function pointers,
  but in order to support the binary patch optimization they will be
  changed to use GCC extended inline assembly code.

- a set of macros for hand written assembly code (.S files)
  For maintainability, the approach taken for .S files is a single
  source compiled multiple times with different macro definitions (see
  the sketch at the end of this section).
  Each pv_ops instance must define those macros in order to compile.
  The important point here is that sensitive but non-privileged
  instructions must be paravirtualized, and that some privileged
  instructions also need paravirtualization for reasonable performance.
  Developers who modify .S files must be aware of that. At this moment
  an easy checker is implemented to detect paravirtualization breakage,
  but it doesn't cover all the cases.

  Sometimes this set of macros is called pv_cpu_asm_op, but there is no
  corresponding structure in the source code.
  Those macros mostly correspond 1:1 to a subset of privileged
  instructions. See linux/include/asm-ia64/native/inst.h.
  Some functions written in assembly also need to be overridden, so
  each pv_ops instance has to define additional macros. Again see
  linux/include/asm-ia64/native/inst.h.

Those structures must be initialized very early, before start_kernel,
probably in head.S using multiple entry points or some other trick.
For the native case implementation see linux/arch/ia64/kernel/paravirt.c.
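
The following is a hypothetical illustration of the single-source .S
approach. The macro name, the build symbol and the hypervisor trap are all
made up for the example (the real definitions live in
linux/include/asm-ia64/native/inst.h and its per-hypervisor counterparts);
the point is only that each pv_ops instance supplies its own expansion of
the sensitive-instruction macros and the same .S source is built once per
instance.

/*
 * Hypothetical example -- names are illustrative, not the real ones.
 */
#ifdef EXAMPLE_HYPERVISOR_BUILD
/* Paravirtualized build: trap to the hypervisor, which supplies the
 * result in the destination register when it handles the break. */
# define MOV_FROM_PSR(reg)	break.i 0x100000
#else
/* Native build: emit the real instruction. */
# define MOV_FROM_PSR(reg)	mov reg = psr
#endif
/*
 * The .S file itself is written once against the macro, e.g.
 *	MOV_FROM_PSR(r8)
 * and assembles to the right thing for whichever instance it is built
 * for.
 */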
...@@ -2,6 +2,9 @@ Virtualization support in the Linux kernel. ...@@ -2,6 +2,9 @@ Virtualization support in the Linux kernel.
00-INDEX 00-INDEX
- this file. - this file.
paravirt_ops.txt
- Describes the Linux kernel pv_ops to support different hypervisors
kvm/ kvm/
- Kernel Virtual Machine. See also http://linux-kvm.org - Kernel Virtual Machine. See also http://linux-kvm.org
uml/ uml/
......
Paravirt_ops
============

Linux provides support for different hypervisor virtualization technologies.
Historically, different binary kernels were required in order to support
different hypervisors; this restriction was removed with pv_ops.
Linux pv_ops is a virtualization API which enables support for different
hypervisors. It allows each hypervisor to override critical operations and
allows a single kernel binary to run on all supported execution
environments, including the native machine -- without any hypervisor.

pv_ops provides a set of function pointers which represent operations
corresponding to low level critical instructions and high level
functionality in various areas. pv_ops allows for optimization at run
time by enabling binary patching of the low level critical operations
at boot time.

pv_ops operations are classified into three categories (the sketch after
this list shows how the patched indirect call works):

- simple indirect call
  These operations correspond to high level functionality where it is
  known that the overhead of an indirect call isn't very important.

- indirect call which allows optimization with binary patch
  Usually these operations correspond to low level critical instructions.
  They are called frequently and are performance critical, so the
  overhead matters a great deal.

- a set of macros for hand written assembly code
  Hand written assembly code (.S files) also needs paravirtualization,
  because it contains sensitive instructions or some of its code paths
  are very performance critical.
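
A minimal, hypothetical C sketch of the second category (the names are
illustrative, not the kernel's actual implementation): the generic path is
an indirect call through the ops table, and a boot-time pass can rewrite
the call site into a direct call, or into the single native instruction,
so the hot path pays no pointer-chasing cost.

/* Hypothetical sketch of a patchable pv_ops-style operation. */
struct pv_irq_ops_example {
	void (*irq_disable)(void);
};

static void native_irq_disable_example(void)
{
	/* On native x86 this would simply be the "cli" instruction. */
}

struct pv_irq_ops_example pv_irq_ops_example = {
	.irq_disable = native_irq_disable_example,
};

static inline void example_local_irq_disable(void)
{
	/*
	 * Before patching: an ordinary indirect call.  At boot this call
	 * site can be rewritten into a direct call to the chosen backend,
	 * or the native instruction can be patched in-place on bare metal.
	 */
	pv_irq_ops_example.irq_disable();
}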
...@@ -7302,7 +7302,7 @@ M: Alok Kataria <akataria@vmware.com> ...@@ -7302,7 +7302,7 @@ M: Alok Kataria <akataria@vmware.com>
M: Rusty Russell <rusty@rustcorp.com.au> M: Rusty Russell <rusty@rustcorp.com.au>
L: virtualization@lists.linux-foundation.org L: virtualization@lists.linux-foundation.org
S: Supported S: Supported
F: Documentation/ia64/paravirt_ops.txt F: Documentation/virtual/paravirt_ops.txt
F: arch/*/kernel/paravirt* F: arch/*/kernel/paravirt*
F: arch/*/include/asm/paravirt.h F: arch/*/include/asm/paravirt.h
......
/* ASB2305 PCI I/O mapping handler
*
* Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#include <linux/pci.h>
#include <linux/module.h>
/*
* Create a virtual mapping cookie for a PCI BAR (memory or IO)
*/
void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
{
resource_size_t start = pci_resource_start(dev, bar);
resource_size_t len = pci_resource_len(dev, bar);
unsigned long flags = pci_resource_flags(dev, bar);
if (!len || !start)
return NULL;
if ((flags & IORESOURCE_IO) || (flags & IORESOURCE_MEM)) {
if (flags & IORESOURCE_CACHEABLE && !(flags & IORESOURCE_IO))
return ioremap(start, len);
else
return ioremap_nocache(start, len);
}
return NULL;
}
EXPORT_SYMBOL(pci_iomap);
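
As context for the implementation above, here is a hypothetical usage
sketch (not part of this patch; the probe function and register offset are
made up) showing how a PCI driver typically consumes pci_iomap() and
pci_iounmap() for BAR 0.

#include <linux/io.h>
#include <linux/pci.h>

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *regs;
	u32 version;
	int err;

	err = pci_enable_device(pdev);
	if (err)
		return err;

	/* maxlen == 0 means "map the whole BAR". */
	regs = pci_iomap(pdev, 0, 0);
	if (!regs) {
		pci_disable_device(pdev);
		return -ENOMEM;
	}

	/* MMIO (or ioport-backed) accesses go through ioread/iowrite. */
	version = ioread32(regs + 0x00);
	(void)version;

	pci_iounmap(pdev, regs);
	pci_disable_device(pdev);
	return 0;
}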
...@@ -16,6 +16,7 @@ ...@@ -16,6 +16,7 @@
struct zpci_iomap_entry { struct zpci_iomap_entry {
u32 fh; u32 fh;
u8 bar; u8 bar;
u16 count;
}; };
extern struct zpci_iomap_entry *zpci_iomap_start; extern struct zpci_iomap_entry *zpci_iomap_start;
......
...@@ -259,7 +259,10 @@ void __iowrite64_copy(void __iomem *to, const void *from, size_t count) ...@@ -259,7 +259,10 @@ void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
} }
/* Create a virtual mapping cookie for a PCI BAR */ /* Create a virtual mapping cookie for a PCI BAR */
void __iomem *pci_iomap(struct pci_dev *pdev, int bar, unsigned long max) void __iomem *pci_iomap_range(struct pci_dev *pdev,
int bar,
unsigned long offset,
unsigned long max)
{ {
struct zpci_dev *zdev = get_zdev(pdev); struct zpci_dev *zdev = get_zdev(pdev);
u64 addr; u64 addr;
...@@ -270,14 +273,27 @@ void __iomem *pci_iomap(struct pci_dev *pdev, int bar, unsigned long max) ...@@ -270,14 +273,27 @@ void __iomem *pci_iomap(struct pci_dev *pdev, int bar, unsigned long max)
idx = zdev->bars[bar].map_idx; idx = zdev->bars[bar].map_idx;
spin_lock(&zpci_iomap_lock); spin_lock(&zpci_iomap_lock);
if (zpci_iomap_start[idx].count++) {
BUG_ON(zpci_iomap_start[idx].fh != zdev->fh ||
zpci_iomap_start[idx].bar != bar);
} else {
zpci_iomap_start[idx].fh = zdev->fh; zpci_iomap_start[idx].fh = zdev->fh;
zpci_iomap_start[idx].bar = bar; zpci_iomap_start[idx].bar = bar;
}
/* Detect overrun */
BUG_ON(!zpci_iomap_start[idx].count);
spin_unlock(&zpci_iomap_lock); spin_unlock(&zpci_iomap_lock);
addr = ZPCI_IOMAP_ADDR_BASE | ((u64) idx << 48); addr = ZPCI_IOMAP_ADDR_BASE | ((u64) idx << 48);
return (void __iomem *) addr; return (void __iomem *) addr + offset;
} }
EXPORT_SYMBOL_GPL(pci_iomap); EXPORT_SYMBOL_GPL(pci_iomap_range);
void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
{
return pci_iomap_range(dev, bar, 0, maxlen);
}
EXPORT_SYMBOL(pci_iomap);
void pci_iounmap(struct pci_dev *pdev, void __iomem *addr) void pci_iounmap(struct pci_dev *pdev, void __iomem *addr)
{ {
...@@ -285,8 +301,12 @@ void pci_iounmap(struct pci_dev *pdev, void __iomem *addr) ...@@ -285,8 +301,12 @@ void pci_iounmap(struct pci_dev *pdev, void __iomem *addr)
idx = (((__force u64) addr) & ~ZPCI_IOMAP_ADDR_BASE) >> 48; idx = (((__force u64) addr) & ~ZPCI_IOMAP_ADDR_BASE) >> 48;
spin_lock(&zpci_iomap_lock); spin_lock(&zpci_iomap_lock);
/* Detect underrun */
BUG_ON(!zpci_iomap_start[idx].count);
if (!--zpci_iomap_start[idx].count) {
zpci_iomap_start[idx].fh = 0; zpci_iomap_start[idx].fh = 0;
zpci_iomap_start[idx].bar = 0; zpci_iomap_start[idx].bar = 0;
}
spin_unlock(&zpci_iomap_lock); spin_unlock(&zpci_iomap_lock);
} }
EXPORT_SYMBOL_GPL(pci_iounmap); EXPORT_SYMBOL_GPL(pci_iounmap);
......
...@@ -16,7 +16,6 @@ ...@@ -16,7 +16,6 @@
#define LHCALL_SET_PTE 14 #define LHCALL_SET_PTE 14
#define LHCALL_SET_PGD 15 #define LHCALL_SET_PGD 15
#define LHCALL_LOAD_TLS 16 #define LHCALL_LOAD_TLS 16
#define LHCALL_NOTIFY 17
#define LHCALL_LOAD_GDT_ENTRY 18 #define LHCALL_LOAD_GDT_ENTRY 18
#define LHCALL_SEND_INTERRUPTS 19 #define LHCALL_SEND_INTERRUPTS 19
......
...@@ -56,6 +56,9 @@ ...@@ -56,6 +56,9 @@
#include <linux/virtio_console.h> #include <linux/virtio_console.h>
#include <linux/pm.h> #include <linux/pm.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/pci.h>
#include <linux/virtio_pci.h>
#include <asm/acpi.h>
#include <asm/apic.h> #include <asm/apic.h>
#include <asm/lguest.h> #include <asm/lguest.h>
#include <asm/paravirt.h> #include <asm/paravirt.h>
...@@ -71,6 +74,8 @@ ...@@ -71,6 +74,8 @@
#include <asm/stackprotector.h> #include <asm/stackprotector.h>
#include <asm/reboot.h> /* for struct machine_ops */ #include <asm/reboot.h> /* for struct machine_ops */
#include <asm/kvm_para.h> #include <asm/kvm_para.h>
#include <asm/pci_x86.h>
#include <asm/pci-direct.h>
/*G:010 /*G:010
* Welcome to the Guest! * Welcome to the Guest!
...@@ -831,6 +836,24 @@ static struct irq_chip lguest_irq_controller = { ...@@ -831,6 +836,24 @@ static struct irq_chip lguest_irq_controller = {
.irq_unmask = enable_lguest_irq, .irq_unmask = enable_lguest_irq,
}; };
static int lguest_enable_irq(struct pci_dev *dev)
{
u8 line = 0;
/* We literally use the PCI interrupt line as the irq number. */
pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &line);
irq_set_chip_and_handler_name(line, &lguest_irq_controller,
handle_level_irq, "level");
dev->irq = line;
return 0;
}
/* We don't do hotplug PCI, so this shouldn't be called. */
static void lguest_disable_irq(struct pci_dev *dev)
{
WARN_ON(1);
}
/* /*
* This sets up the Interrupt Descriptor Table (IDT) entry for each hardware * This sets up the Interrupt Descriptor Table (IDT) entry for each hardware
* interrupt (except 128, which is used for system calls), and then tells the * interrupt (except 128, which is used for system calls), and then tells the
...@@ -1181,25 +1204,136 @@ static __init char *lguest_memory_setup(void) ...@@ -1181,25 +1204,136 @@ static __init char *lguest_memory_setup(void)
return "LGUEST"; return "LGUEST";
} }
/* Offset within PCI config space of BAR access capability. */
static int console_cfg_offset = 0;
static int console_access_cap;
/* Set up so that we access off in bar0 (on bus 0, device 1, function 0) */
static void set_cfg_window(u32 cfg_offset, u32 off)
{
write_pci_config_byte(0, 1, 0,
cfg_offset + offsetof(struct virtio_pci_cap, bar),
0);
write_pci_config(0, 1, 0,
cfg_offset + offsetof(struct virtio_pci_cap, length),
4);
write_pci_config(0, 1, 0,
cfg_offset + offsetof(struct virtio_pci_cap, offset),
off);
}
static void write_bar_via_cfg(u32 cfg_offset, u32 off, u32 val)
{
/*
* We could set this up once, then leave it; nothing else in the
* kernel should touch these registers. But if it went wrong, that
* would be a horrible bug to find.
*/
set_cfg_window(cfg_offset, off);
write_pci_config(0, 1, 0,
cfg_offset + sizeof(struct virtio_pci_cap), val);
}
static void probe_pci_console(void)
{
u8 cap, common_cap = 0, device_cap = 0;
/* Offset within BAR0 */
u32 device_offset;
u32 device_len;
/* Avoid recursive printk into here. */
console_cfg_offset = -1;
if (!early_pci_allowed()) {
printk(KERN_ERR "lguest: early PCI access not allowed!\n");
return;
}
/* We expect a console PCI device at BUS0, slot 1. */
if (read_pci_config(0, 1, 0, 0) != 0x10431AF4) {
printk(KERN_ERR "lguest: PCI device is %#x!\n",
read_pci_config(0, 1, 0, 0));
return;
}
/* Find the capabilities we need (must be in bar0) */
cap = read_pci_config_byte(0, 1, 0, PCI_CAPABILITY_LIST);
while (cap) {
u8 vndr = read_pci_config_byte(0, 1, 0, cap);
if (vndr == PCI_CAP_ID_VNDR) {
u8 type, bar;
u32 offset, length;
type = read_pci_config_byte(0, 1, 0,
cap + offsetof(struct virtio_pci_cap, cfg_type));
bar = read_pci_config_byte(0, 1, 0,
cap + offsetof(struct virtio_pci_cap, bar));
offset = read_pci_config(0, 1, 0,
cap + offsetof(struct virtio_pci_cap, offset));
length = read_pci_config(0, 1, 0,
cap + offsetof(struct virtio_pci_cap, length));
switch (type) {
case VIRTIO_PCI_CAP_DEVICE_CFG:
if (bar == 0) {
device_cap = cap;
device_offset = offset;
device_len = length;
}
break;
case VIRTIO_PCI_CAP_PCI_CFG:
console_access_cap = cap;
break;
}
}
cap = read_pci_config_byte(0, 1, 0, cap + PCI_CAP_LIST_NEXT);
}
if (!device_cap || !console_access_cap) {
printk(KERN_ERR "lguest: No caps (%u/%u/%u) in console!\n",
common_cap, device_cap, console_access_cap);
return;
}
/*
* Note that we can't check features, until we've set the DRIVER
* status bit. We don't want to do that until we have a real driver,
* so we just check that the device-specific config has room for
* emerg_wr. If it doesn't support VIRTIO_CONSOLE_F_EMERG_WRITE
* it should ignore the access.
*/
if (device_len < (offsetof(struct virtio_console_config, emerg_wr)
+ sizeof(u32))) {
printk(KERN_ERR "lguest: console missing emerg_wr field\n");
return;
}
console_cfg_offset = device_offset;
printk(KERN_INFO "lguest: Console via virtio-pci emerg_wr\n");
}
/* /*
* We will eventually use the virtio console device to produce console output, * We will eventually use the virtio console device to produce console output,
* but before that is set up we use LHCALL_NOTIFY on normal memory to produce * but before that is set up we use the virtio PCI console's backdoor mmio
* console output. * access and the "emergency" write facility (which is legal even before the
* device is configured).
*/ */
static __init int early_put_chars(u32 vtermno, const char *buf, int count) static __init int early_put_chars(u32 vtermno, const char *buf, int count)
{ {
char scratch[17]; /* If we couldn't find PCI console, forget it. */
unsigned int len = count; if (console_cfg_offset < 0)
return count;
/* We use a nul-terminated string, so we make a copy. Icky, huh? */ if (unlikely(!console_cfg_offset)) {
if (len > sizeof(scratch) - 1) probe_pci_console();
len = sizeof(scratch) - 1; if (console_cfg_offset < 0)
scratch[len] = '\0'; return count;
memcpy(scratch, buf, len); }
hcall(LHCALL_NOTIFY, __pa(scratch), 0, 0, 0);
/* This routine returns the number of bytes actually written. */ write_bar_via_cfg(console_access_cap,
return len; console_cfg_offset
+ offsetof(struct virtio_console_config, emerg_wr),
buf[0]);
return 1;
} }
/* /*
...@@ -1399,14 +1533,6 @@ __init void lguest_init(void) ...@@ -1399,14 +1533,6 @@ __init void lguest_init(void)
/* Hook in our special panic hypercall code. */ /* Hook in our special panic hypercall code. */
atomic_notifier_chain_register(&panic_notifier_list, &paniced); atomic_notifier_chain_register(&panic_notifier_list, &paniced);
/*
* The IDE code spends about 3 seconds probing for disks: if we reserve
* all the I/O ports up front it can't get them and so doesn't probe.
* Other device drivers are similar (but less severe). This cuts the
* kernel boot time on my machine from 4.1 seconds to 0.45 seconds.
*/
paravirt_disable_iospace();
/* /*
* This is messy CPU setup stuff which the native boot code does before * This is messy CPU setup stuff which the native boot code does before
* start_kernel, so we have to do, too: * start_kernel, so we have to do, too:
...@@ -1436,6 +1562,13 @@ __init void lguest_init(void) ...@@ -1436,6 +1562,13 @@ __init void lguest_init(void)
/* Register our very early console. */ /* Register our very early console. */
virtio_cons_early_init(early_put_chars); virtio_cons_early_init(early_put_chars);
/* Don't let ACPI try to control our PCI interrupts. */
disable_acpi();
/* We control them ourselves, by overriding these two hooks. */
pcibios_enable_irq = lguest_enable_irq;
pcibios_disable_irq = lguest_disable_irq;
/* /*
* Last of all, we set the power management poweroff hook to point to * Last of all, we set the power management poweroff hook to point to
* the Guest routine to power off, and the reboot hook to our restart * the Guest routine to power off, and the reboot hook to our restart
......
...@@ -28,8 +28,7 @@ struct virtio_blk_vq { ...@@ -28,8 +28,7 @@ struct virtio_blk_vq {
char name[VQ_NAME_LEN]; char name[VQ_NAME_LEN];
} ____cacheline_aligned_in_smp; } ____cacheline_aligned_in_smp;
struct virtio_blk struct virtio_blk {
{
struct virtio_device *vdev; struct virtio_device *vdev;
/* The disk structure for the kernel. */ /* The disk structure for the kernel. */
...@@ -52,8 +51,7 @@ struct virtio_blk ...@@ -52,8 +51,7 @@ struct virtio_blk
struct virtio_blk_vq *vqs; struct virtio_blk_vq *vqs;
}; };
struct virtblk_req struct virtblk_req {
{
struct request *req; struct request *req;
struct virtio_blk_outhdr out_hdr; struct virtio_blk_outhdr out_hdr;
struct virtio_scsi_inhdr in_hdr; struct virtio_scsi_inhdr in_hdr;
...@@ -575,6 +573,12 @@ static int virtblk_probe(struct virtio_device *vdev) ...@@ -575,6 +573,12 @@ static int virtblk_probe(struct virtio_device *vdev)
u16 min_io_size; u16 min_io_size;
u8 physical_block_exp, alignment_offset; u8 physical_block_exp, alignment_offset;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS), err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS),
GFP_KERNEL); GFP_KERNEL);
if (err < 0) if (err < 0)
......
...@@ -1986,7 +1986,10 @@ static int virtcons_probe(struct virtio_device *vdev) ...@@ -1986,7 +1986,10 @@ static int virtcons_probe(struct virtio_device *vdev)
bool multiport; bool multiport;
bool early = early_put_chars != NULL; bool early = early_put_chars != NULL;
if (!vdev->config->get) { /* We only need a config space if features are offered */
if (!vdev->config->get &&
(virtio_has_feature(vdev, VIRTIO_CONSOLE_F_SIZE)
|| virtio_has_feature(vdev, VIRTIO_CONSOLE_F_MULTIPORT))) {
dev_err(&vdev->dev, "%s failure: config access disabled\n", dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__); __func__);
return -EINVAL; return -EINVAL;
......
# Guest requires the device configuration and probing code.
obj-$(CONFIG_LGUEST_GUEST) += lguest_device.o
# Host requires the other files, which can be a module. # Host requires the other files, which can be a module.
obj-$(CONFIG_LGUEST) += lg.o obj-$(CONFIG_LGUEST) += lg.o
lg-y = core.o hypercalls.o page_tables.o interrupts_and_traps.o \ lg-y = core.o hypercalls.o page_tables.o interrupts_and_traps.o \
......
...@@ -208,6 +208,14 @@ void __lgwrite(struct lg_cpu *cpu, unsigned long addr, const void *b, ...@@ -208,6 +208,14 @@ void __lgwrite(struct lg_cpu *cpu, unsigned long addr, const void *b,
*/ */
int run_guest(struct lg_cpu *cpu, unsigned long __user *user) int run_guest(struct lg_cpu *cpu, unsigned long __user *user)
{ {
/* If the launcher asked for a register with LHREQ_GETREG */
if (cpu->reg_read) {
if (put_user(*cpu->reg_read, user))
return -EFAULT;
cpu->reg_read = NULL;
return sizeof(*cpu->reg_read);
}
/* We stop running once the Guest is dead. */ /* We stop running once the Guest is dead. */
while (!cpu->lg->dead) { while (!cpu->lg->dead) {
unsigned int irq; unsigned int irq;
...@@ -217,21 +225,12 @@ int run_guest(struct lg_cpu *cpu, unsigned long __user *user) ...@@ -217,21 +225,12 @@ int run_guest(struct lg_cpu *cpu, unsigned long __user *user)
if (cpu->hcall) if (cpu->hcall)
do_hypercalls(cpu); do_hypercalls(cpu);
/* /* Do we have to tell the Launcher about a trap? */
* It's possible the Guest did a NOTIFY hypercall to the if (cpu->pending.trap) {
* Launcher. if (copy_to_user(user, &cpu->pending,
*/ sizeof(cpu->pending)))
if (cpu->pending_notify) {
/*
* Does it just needs to write to a registered
* eventfd (ie. the appropriate virtqueue thread)?
*/
if (!send_notify_to_eventfd(cpu)) {
/* OK, we tell the main Launcher. */
if (put_user(cpu->pending_notify, user))
return -EFAULT; return -EFAULT;
return sizeof(cpu->pending_notify); return sizeof(cpu->pending);
}
} }
/* /*
......
...@@ -117,9 +117,6 @@ static void do_hcall(struct lg_cpu *cpu, struct hcall_args *args) ...@@ -117,9 +117,6 @@ static void do_hcall(struct lg_cpu *cpu, struct hcall_args *args)
/* Similarly, this sets the halted flag for run_guest(). */ /* Similarly, this sets the halted flag for run_guest(). */
cpu->halted = 1; cpu->halted = 1;
break; break;
case LHCALL_NOTIFY:
cpu->pending_notify = args->arg1;
break;
default: default:
/* It should be an architecture-specific hypercall. */ /* It should be an architecture-specific hypercall. */
if (lguest_arch_do_hcall(cpu, args)) if (lguest_arch_do_hcall(cpu, args))
...@@ -189,7 +186,7 @@ static void do_async_hcalls(struct lg_cpu *cpu) ...@@ -189,7 +186,7 @@ static void do_async_hcalls(struct lg_cpu *cpu)
* Stop doing hypercalls if they want to notify the Launcher: * Stop doing hypercalls if they want to notify the Launcher:
* it needs to service this first. * it needs to service this first.
*/ */
if (cpu->pending_notify) if (cpu->pending.trap)
break; break;
} }
} }
...@@ -280,7 +277,7 @@ void do_hypercalls(struct lg_cpu *cpu) ...@@ -280,7 +277,7 @@ void do_hypercalls(struct lg_cpu *cpu)
* NOTIFY to the Launcher, we want to return now. Otherwise we do * NOTIFY to the Launcher, we want to return now. Otherwise we do
* the hypercall. * the hypercall.
*/ */
if (!cpu->pending_notify) { if (!cpu->pending.trap) {
do_hcall(cpu, cpu->hcall); do_hcall(cpu, cpu->hcall);
/* /*
* Tricky point: we reset the hcall pointer to mark the * Tricky point: we reset the hcall pointer to mark the
......
...@@ -50,7 +50,10 @@ struct lg_cpu { ...@@ -50,7 +50,10 @@ struct lg_cpu {
/* Bitmap of what has changed: see CHANGED_* above. */ /* Bitmap of what has changed: see CHANGED_* above. */
int changed; int changed;
unsigned long pending_notify; /* pfn from LHCALL_NOTIFY */ /* Pending operation. */
struct lguest_pending pending;
unsigned long *reg_read; /* register from LHREQ_GETREG */
/* At end of a page shared mapped over lguest_pages in guest. */ /* At end of a page shared mapped over lguest_pages in guest. */
unsigned long regs_page; unsigned long regs_page;
...@@ -78,24 +81,18 @@ struct lg_cpu { ...@@ -78,24 +81,18 @@ struct lg_cpu {
struct lg_cpu_arch arch; struct lg_cpu_arch arch;
}; };
struct lg_eventfd {
unsigned long addr;
struct eventfd_ctx *event;
};
struct lg_eventfd_map {
unsigned int num;
struct lg_eventfd map[];
};
/* The private info the thread maintains about the guest. */ /* The private info the thread maintains about the guest. */
struct lguest { struct lguest {
struct lguest_data __user *lguest_data; struct lguest_data __user *lguest_data;
struct lg_cpu cpus[NR_CPUS]; struct lg_cpu cpus[NR_CPUS];
unsigned int nr_cpus; unsigned int nr_cpus;
/* Valid guest memory pages must be < this. */
u32 pfn_limit; u32 pfn_limit;
/* Device memory is >= pfn_limit and < device_limit. */
u32 device_limit;
/* /*
* This provides the offset to the base of guest-physical memory in the * This provides the offset to the base of guest-physical memory in the
* Launcher. * Launcher.
...@@ -110,8 +107,6 @@ struct lguest { ...@@ -110,8 +107,6 @@ struct lguest {
unsigned int stack_pages; unsigned int stack_pages;
u32 tsc_khz; u32 tsc_khz;
struct lg_eventfd_map *eventfds;
/* Dead? */ /* Dead? */
const char *dead; const char *dead;
}; };
...@@ -197,8 +192,10 @@ void guest_pagetable_flush_user(struct lg_cpu *cpu); ...@@ -197,8 +192,10 @@ void guest_pagetable_flush_user(struct lg_cpu *cpu);
void guest_set_pte(struct lg_cpu *cpu, unsigned long gpgdir, void guest_set_pte(struct lg_cpu *cpu, unsigned long gpgdir,
unsigned long vaddr, pte_t val); unsigned long vaddr, pte_t val);
void map_switcher_in_guest(struct lg_cpu *cpu, struct lguest_pages *pages); void map_switcher_in_guest(struct lg_cpu *cpu, struct lguest_pages *pages);
bool demand_page(struct lg_cpu *cpu, unsigned long cr2, int errcode); bool demand_page(struct lg_cpu *cpu, unsigned long cr2, int errcode,
unsigned long *iomem);
void pin_page(struct lg_cpu *cpu, unsigned long vaddr); void pin_page(struct lg_cpu *cpu, unsigned long vaddr);
bool __guest_pa(struct lg_cpu *cpu, unsigned long vaddr, unsigned long *paddr);
unsigned long guest_pa(struct lg_cpu *cpu, unsigned long vaddr); unsigned long guest_pa(struct lg_cpu *cpu, unsigned long vaddr);
void page_table_guest_data_init(struct lg_cpu *cpu); void page_table_guest_data_init(struct lg_cpu *cpu);
...@@ -210,6 +207,7 @@ void lguest_arch_handle_trap(struct lg_cpu *cpu); ...@@ -210,6 +207,7 @@ void lguest_arch_handle_trap(struct lg_cpu *cpu);
int lguest_arch_init_hypercalls(struct lg_cpu *cpu); int lguest_arch_init_hypercalls(struct lg_cpu *cpu);
int lguest_arch_do_hcall(struct lg_cpu *cpu, struct hcall_args *args); int lguest_arch_do_hcall(struct lg_cpu *cpu, struct hcall_args *args);
void lguest_arch_setup_regs(struct lg_cpu *cpu, unsigned long start); void lguest_arch_setup_regs(struct lg_cpu *cpu, unsigned long start);
unsigned long *lguest_arch_regptr(struct lg_cpu *cpu, size_t reg_off, bool any);
/* <arch>/switcher.S: */ /* <arch>/switcher.S: */
extern char start_switcher_text[], end_switcher_text[], switch_to_guest[]; extern char start_switcher_text[], end_switcher_text[], switch_to_guest[];
......
This diff is collapsed.
This diff is collapsed.
...@@ -250,6 +250,16 @@ static void release_pte(pte_t pte) ...@@ -250,6 +250,16 @@ static void release_pte(pte_t pte)
} }
/*:*/ /*:*/
static bool gpte_in_iomem(struct lg_cpu *cpu, pte_t gpte)
{
/* We don't handle large pages. */
if (pte_flags(gpte) & _PAGE_PSE)
return false;
return (pte_pfn(gpte) >= cpu->lg->pfn_limit
&& pte_pfn(gpte) < cpu->lg->device_limit);
}
static bool check_gpte(struct lg_cpu *cpu, pte_t gpte) static bool check_gpte(struct lg_cpu *cpu, pte_t gpte)
{ {
if ((pte_flags(gpte) & _PAGE_PSE) || if ((pte_flags(gpte) & _PAGE_PSE) ||
...@@ -374,8 +384,14 @@ static pte_t *find_spte(struct lg_cpu *cpu, unsigned long vaddr, bool allocate, ...@@ -374,8 +384,14 @@ static pte_t *find_spte(struct lg_cpu *cpu, unsigned long vaddr, bool allocate,
* *
* If we fixed up the fault (ie. we mapped the address), this routine returns * If we fixed up the fault (ie. we mapped the address), this routine returns
* true. Otherwise, it was a real fault and we need to tell the Guest. * true. Otherwise, it was a real fault and we need to tell the Guest.
*
* There's a corner case: they're trying to access memory between
* pfn_limit and device_limit, which is I/O memory. In this case, we
* return false and set @iomem to the physical address, so the the
* Launcher can handle the instruction manually.
*/ */
bool demand_page(struct lg_cpu *cpu, unsigned long vaddr, int errcode) bool demand_page(struct lg_cpu *cpu, unsigned long vaddr, int errcode,
unsigned long *iomem)
{ {
unsigned long gpte_ptr; unsigned long gpte_ptr;
pte_t gpte; pte_t gpte;
...@@ -383,6 +399,8 @@ bool demand_page(struct lg_cpu *cpu, unsigned long vaddr, int errcode) ...@@ -383,6 +399,8 @@ bool demand_page(struct lg_cpu *cpu, unsigned long vaddr, int errcode)
pmd_t gpmd; pmd_t gpmd;
pgd_t gpgd; pgd_t gpgd;
*iomem = 0;
/* We never demand page the Switcher, so trying is a mistake. */ /* We never demand page the Switcher, so trying is a mistake. */
if (vaddr >= switcher_addr) if (vaddr >= switcher_addr)
return false; return false;
...@@ -459,6 +477,12 @@ bool demand_page(struct lg_cpu *cpu, unsigned long vaddr, int errcode) ...@@ -459,6 +477,12 @@ bool demand_page(struct lg_cpu *cpu, unsigned long vaddr, int errcode)
if ((errcode & 4) && !(pte_flags(gpte) & _PAGE_USER)) if ((errcode & 4) && !(pte_flags(gpte) & _PAGE_USER))
return false; return false;
/* If they're accessing io memory, we expect a fault. */
if (gpte_in_iomem(cpu, gpte)) {
*iomem = (pte_pfn(gpte) << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
return false;
}
/* /*
* Check that the Guest PTE flags are OK, and the page number is below * Check that the Guest PTE flags are OK, and the page number is below
* the pfn_limit (ie. not mapping the Launcher binary). * the pfn_limit (ie. not mapping the Launcher binary).
...@@ -553,7 +577,9 @@ static bool page_writable(struct lg_cpu *cpu, unsigned long vaddr) ...@@ -553,7 +577,9 @@ static bool page_writable(struct lg_cpu *cpu, unsigned long vaddr)
*/ */
void pin_page(struct lg_cpu *cpu, unsigned long vaddr) void pin_page(struct lg_cpu *cpu, unsigned long vaddr)
{ {
if (!page_writable(cpu, vaddr) && !demand_page(cpu, vaddr, 2)) unsigned long iomem;
if (!page_writable(cpu, vaddr) && !demand_page(cpu, vaddr, 2, &iomem))
kill_guest(cpu, "bad stack page %#lx", vaddr); kill_guest(cpu, "bad stack page %#lx", vaddr);
} }
/*:*/ /*:*/
...@@ -647,7 +673,7 @@ void guest_pagetable_flush_user(struct lg_cpu *cpu) ...@@ -647,7 +673,7 @@ void guest_pagetable_flush_user(struct lg_cpu *cpu)
/*:*/ /*:*/
/* We walk down the guest page tables to get a guest-physical address */ /* We walk down the guest page tables to get a guest-physical address */
unsigned long guest_pa(struct lg_cpu *cpu, unsigned long vaddr) bool __guest_pa(struct lg_cpu *cpu, unsigned long vaddr, unsigned long *paddr)
{ {
pgd_t gpgd; pgd_t gpgd;
pte_t gpte; pte_t gpte;
...@@ -656,31 +682,47 @@ unsigned long guest_pa(struct lg_cpu *cpu, unsigned long vaddr) ...@@ -656,31 +682,47 @@ unsigned long guest_pa(struct lg_cpu *cpu, unsigned long vaddr)
#endif #endif
/* Still not set up? Just map 1:1. */ /* Still not set up? Just map 1:1. */
if (unlikely(cpu->linear_pages)) if (unlikely(cpu->linear_pages)) {
return vaddr; *paddr = vaddr;
return true;
}
/* First step: get the top-level Guest page table entry. */ /* First step: get the top-level Guest page table entry. */
gpgd = lgread(cpu, gpgd_addr(cpu, vaddr), pgd_t); gpgd = lgread(cpu, gpgd_addr(cpu, vaddr), pgd_t);
/* Toplevel not present? We can't map it in. */ /* Toplevel not present? We can't map it in. */
if (!(pgd_flags(gpgd) & _PAGE_PRESENT)) { if (!(pgd_flags(gpgd) & _PAGE_PRESENT))
kill_guest(cpu, "Bad address %#lx", vaddr); goto fail;
return -1UL;
}
#ifdef CONFIG_X86_PAE #ifdef CONFIG_X86_PAE
gpmd = lgread(cpu, gpmd_addr(gpgd, vaddr), pmd_t); gpmd = lgread(cpu, gpmd_addr(gpgd, vaddr), pmd_t);
if (!(pmd_flags(gpmd) & _PAGE_PRESENT)) { if (!(pmd_flags(gpmd) & _PAGE_PRESENT))
kill_guest(cpu, "Bad address %#lx", vaddr); goto fail;
return -1UL;
}
gpte = lgread(cpu, gpte_addr(cpu, gpmd, vaddr), pte_t); gpte = lgread(cpu, gpte_addr(cpu, gpmd, vaddr), pte_t);
#else #else
gpte = lgread(cpu, gpte_addr(cpu, gpgd, vaddr), pte_t); gpte = lgread(cpu, gpte_addr(cpu, gpgd, vaddr), pte_t);
#endif #endif
if (!(pte_flags(gpte) & _PAGE_PRESENT)) if (!(pte_flags(gpte) & _PAGE_PRESENT))
kill_guest(cpu, "Bad address %#lx", vaddr); goto fail;
*paddr = pte_pfn(gpte) * PAGE_SIZE | (vaddr & ~PAGE_MASK);
return true;
return pte_pfn(gpte) * PAGE_SIZE | (vaddr & ~PAGE_MASK); fail:
*paddr = -1UL;
return false;
}
/*
* This is the version we normally use: kills the Guest if it uses a
* bad address
*/
unsigned long guest_pa(struct lg_cpu *cpu, unsigned long vaddr)
{
unsigned long paddr;
if (!__guest_pa(cpu, vaddr, &paddr))
kill_guest(cpu, "Bad address %#lx", vaddr);
return paddr;
} }
/* /*
...@@ -912,7 +954,8 @@ static void __guest_set_pte(struct lg_cpu *cpu, int idx, ...@@ -912,7 +954,8 @@ static void __guest_set_pte(struct lg_cpu *cpu, int idx,
* now. This shaves 10% off a copy-on-write * now. This shaves 10% off a copy-on-write
* micro-benchmark. * micro-benchmark.
*/ */
if (pte_flags(gpte) & (_PAGE_DIRTY | _PAGE_ACCESSED)) { if ((pte_flags(gpte) & (_PAGE_DIRTY | _PAGE_ACCESSED))
&& !gpte_in_iomem(cpu, gpte)) {
if (!check_gpte(cpu, gpte)) if (!check_gpte(cpu, gpte))
return; return;
set_pte(spte, set_pte(spte,
......
...@@ -182,6 +182,52 @@ static void run_guest_once(struct lg_cpu *cpu, struct lguest_pages *pages) ...@@ -182,6 +182,52 @@ static void run_guest_once(struct lg_cpu *cpu, struct lguest_pages *pages)
} }
/*:*/ /*:*/
unsigned long *lguest_arch_regptr(struct lg_cpu *cpu, size_t reg_off, bool any)
{
switch (reg_off) {
case offsetof(struct pt_regs, bx):
return &cpu->regs->ebx;
case offsetof(struct pt_regs, cx):
return &cpu->regs->ecx;
case offsetof(struct pt_regs, dx):
return &cpu->regs->edx;
case offsetof(struct pt_regs, si):
return &cpu->regs->esi;
case offsetof(struct pt_regs, di):
return &cpu->regs->edi;
case offsetof(struct pt_regs, bp):
return &cpu->regs->ebp;
case offsetof(struct pt_regs, ax):
return &cpu->regs->eax;
case offsetof(struct pt_regs, ip):
return &cpu->regs->eip;
case offsetof(struct pt_regs, sp):
return &cpu->regs->esp;
}
/* Launcher can read these, but we don't allow any setting. */
if (any) {
switch (reg_off) {
case offsetof(struct pt_regs, ds):
return &cpu->regs->ds;
case offsetof(struct pt_regs, es):
return &cpu->regs->es;
case offsetof(struct pt_regs, fs):
return &cpu->regs->fs;
case offsetof(struct pt_regs, gs):
return &cpu->regs->gs;
case offsetof(struct pt_regs, cs):
return &cpu->regs->cs;
case offsetof(struct pt_regs, flags):
return &cpu->regs->eflags;
case offsetof(struct pt_regs, ss):
return &cpu->regs->ss;
}
}
return NULL;
}
/*M:002 /*M:002
* There are hooks in the scheduler which we can register to tell when we * There are hooks in the scheduler which we can register to tell when we
* get kicked off the CPU (preempt_notifier_register()). This would allow us * get kicked off the CPU (preempt_notifier_register()). This would allow us
...@@ -269,109 +315,72 @@ void lguest_arch_run_guest(struct lg_cpu *cpu) ...@@ -269,109 +315,72 @@ void lguest_arch_run_guest(struct lg_cpu *cpu)
* usually attached to a PC. * usually attached to a PC.
* *
* When the Guest uses one of these instructions, we get a trap (General * When the Guest uses one of these instructions, we get a trap (General
* Protection Fault) and come here. We see if it's one of those troublesome * Protection Fault) and come here. We queue this to be sent out to the
* instructions and skip over it. We return true if we did. * Launcher to handle.
*/
static int emulate_insn(struct lg_cpu *cpu)
{
u8 insn;
unsigned int insnlen = 0, in = 0, small_operand = 0;
/*
* The eip contains the *virtual* address of the Guest's instruction:
* walk the Guest's page tables to find the "physical" address.
*/ */
unsigned long physaddr = guest_pa(cpu, cpu->regs->eip);
/* /*
* This must be the Guest kernel trying to do something, not userspace! * The eip contains the *virtual* address of the Guest's instruction:
* The bottom two bits of the CS segment register are the privilege * we copy the instruction here so the Launcher doesn't have to walk
* level. * the page tables to decode it. We handle the case (eg. in a kernel
*/ * module) where the instruction is over two pages, and the pages are
if ((cpu->regs->cs & 3) != GUEST_PL) * virtually but not physically contiguous.
return 0; *
* The longest possible x86 instruction is 15 bytes, but we don't handle
/* Decoding x86 instructions is icky. */ * anything that strange.
insn = lgread(cpu, physaddr, u8);
/*
* Around 2.6.33, the kernel started using an emulation for the
* cmpxchg8b instruction in early boot on many configurations. This
* code isn't paravirtualized, and it tries to disable interrupts.
* Ignore it, which will Mostly Work.
*/ */
if (insn == 0xfa) { static void copy_from_guest(struct lg_cpu *cpu,
/* "cli", or Clear Interrupt Enable instruction. Skip it. */ void *dst, unsigned long vaddr, size_t len)
cpu->regs->eip++; {
return 1; size_t to_page_end = PAGE_SIZE - (vaddr % PAGE_SIZE);
unsigned long paddr;
BUG_ON(len > PAGE_SIZE);
/* If it goes over a page, copy in two parts. */
if (len > to_page_end) {
/* But make sure the next page is mapped! */
if (__guest_pa(cpu, vaddr + to_page_end, &paddr))
copy_from_guest(cpu, dst + to_page_end,
vaddr + to_page_end,
len - to_page_end);
else
/* Otherwise fill with zeroes. */
memset(dst + to_page_end, 0, len - to_page_end);
len = to_page_end;
} }
/* /* This will kill the guest if it isn't mapped, but that
* 0x66 is an "operand prefix". It means a 16, not 32 bit in/out. * shouldn't happen. */
*/ __lgread(cpu, dst, guest_pa(cpu, vaddr), len);
if (insn == 0x66) { }
small_operand = 1;
/* The instruction is 1 byte so far, read the next byte. */
insnlen = 1;
insn = lgread(cpu, physaddr + insnlen, u8);
}
/*
* We can ignore the lower bit for the moment and decode the 4 opcodes
* we need to emulate.
*/
switch (insn & 0xFE) {
case 0xE4: /* in <next byte>,%al */
insnlen += 2;
in = 1;
break;
case 0xEC: /* in (%dx),%al */
insnlen += 1;
in = 1;
break;
case 0xE6: /* out %al,<next byte> */
insnlen += 2;
break;
case 0xEE: /* out %al,(%dx) */
insnlen += 1;
break;
default:
/* OK, we don't know what this is, can't emulate. */
return 0;
}
/* static void setup_emulate_insn(struct lg_cpu *cpu)
* If it was an "IN" instruction, they expect the result to be read {
* into %eax, so we change %eax. We always return all-ones, which cpu->pending.trap = 13;
* traditionally means "there's nothing there". copy_from_guest(cpu, cpu->pending.insn, cpu->regs->eip,
*/ sizeof(cpu->pending.insn));
if (in) { }
/* Lower bit tells means it's a 32/16 bit access */
if (insn & 0x1) { static void setup_iomem_insn(struct lg_cpu *cpu, unsigned long iomem_addr)
if (small_operand) {
cpu->regs->eax |= 0xFFFF; cpu->pending.trap = 14;
else cpu->pending.addr = iomem_addr;
cpu->regs->eax = 0xFFFFFFFF; copy_from_guest(cpu, cpu->pending.insn, cpu->regs->eip,
} else sizeof(cpu->pending.insn));
cpu->regs->eax |= 0xFF;
}
/* Finally, we've "done" the instruction, so move past it. */
cpu->regs->eip += insnlen;
/* Success! */
return 1;
} }
/*H:050 Once we've re-enabled interrupts, we look at why the Guest exited. */ /*H:050 Once we've re-enabled interrupts, we look at why the Guest exited. */
void lguest_arch_handle_trap(struct lg_cpu *cpu) void lguest_arch_handle_trap(struct lg_cpu *cpu)
{ {
unsigned long iomem_addr;
switch (cpu->regs->trapnum) { switch (cpu->regs->trapnum) {
case 13: /* We've intercepted a General Protection Fault. */ case 13: /* We've intercepted a General Protection Fault. */
/* /* Hand to Launcher to emulate those pesky IN and OUT insns */
* Check if this was one of those annoying IN or OUT
* instructions which we need to emulate. If so, we just go
* back into the Guest after we've done it.
*/
if (cpu->regs->errcode == 0) { if (cpu->regs->errcode == 0) {
if (emulate_insn(cpu)) setup_emulate_insn(cpu);
return; return;
} }
break; break;
...@@ -387,9 +396,16 @@ void lguest_arch_handle_trap(struct lg_cpu *cpu) ...@@ -387,9 +396,16 @@ void lguest_arch_handle_trap(struct lg_cpu *cpu)
* whether kernel or userspace code. * whether kernel or userspace code.
*/ */
if (demand_page(cpu, cpu->arch.last_pagefault, if (demand_page(cpu, cpu->arch.last_pagefault,
cpu->regs->errcode)) cpu->regs->errcode, &iomem_addr))
return; return;
/* Was this an access to memory mapped IO? */
if (iomem_addr) {
/* Tell Launcher, let it handle it. */
setup_iomem_insn(cpu, iomem_addr);
return;
}
/* /*
* OK, it's really not there (or not OK): the Guest needs to * OK, it's really not there (or not OK): the Guest needs to
* know. We write out the cr2 value so it knows where the * know. We write out the cr2 value so it knows where the
......
...@@ -1710,6 +1710,12 @@ static int virtnet_probe(struct virtio_device *vdev) ...@@ -1710,6 +1710,12 @@ static int virtnet_probe(struct virtio_device *vdev)
struct virtnet_info *vi; struct virtnet_info *vi;
u16 max_queue_pairs; u16 max_queue_pairs;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
if (!virtnet_validate_features(vdev)) if (!virtnet_validate_features(vdev))
return -EINVAL; return -EINVAL;
......
...@@ -950,6 +950,12 @@ static int virtscsi_probe(struct virtio_device *vdev) ...@@ -950,6 +950,12 @@ static int virtscsi_probe(struct virtio_device *vdev)
u32 num_queues; u32 num_queues;
struct scsi_host_template *hostt; struct scsi_host_template *hostt;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
/* We need to know how many queues before we allocate. */ /* We need to know how many queues before we allocate. */
num_queues = virtscsi_config_get(vdev, num_queues) ? : 1; num_queues = virtscsi_config_get(vdev, num_queues) ? : 1;
......
...@@ -12,16 +12,32 @@ config VIRTIO_PCI ...@@ -12,16 +12,32 @@ config VIRTIO_PCI
depends on PCI depends on PCI
select VIRTIO select VIRTIO
---help--- ---help---
This drivers provides support for virtio based paravirtual device This driver provides support for virtio based paravirtual device
drivers over PCI. This requires that your VMM has appropriate PCI drivers over PCI. This requires that your VMM has appropriate PCI
virtio backends. Most QEMU based VMMs should support these devices virtio backends. Most QEMU based VMMs should support these devices
(like KVM or Xen). (like KVM or Xen).
Currently, the ABI is not considered stable so there is no guarantee
that this version of the driver will work with your VMM.
If unsure, say M. If unsure, say M.
config VIRTIO_PCI_LEGACY
bool "Support for legacy virtio draft 0.9.X and older devices"
default y
depends on VIRTIO_PCI
---help---
Virtio PCI Card 0.9.X Draft (circa 2014) and older device support.
This option enables building a transitional driver, supporting
both devices conforming to Virtio 1 specification, and legacy devices.
If disabled, you get a slightly smaller, non-transitional driver,
with no legacy compatibility.
So look out into your driveway. Do you have a flying car? If
so, you can happily disable this option and virtio will not
break. Otherwise, leave it set. Unless you're testing what
life will be like in The Future.
If unsure, say Y.
config VIRTIO_BALLOON config VIRTIO_BALLOON
tristate "Virtio balloon driver" tristate "Virtio balloon driver"
depends on VIRTIO depends on VIRTIO
......
obj-$(CONFIG_VIRTIO) += virtio.o virtio_ring.o obj-$(CONFIG_VIRTIO) += virtio.o virtio_ring.o
obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o
obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
virtio_pci-y := virtio_pci_legacy.o virtio_pci_common.o virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
...@@ -236,7 +236,10 @@ static int virtio_dev_probe(struct device *_d) ...@@ -236,7 +236,10 @@ static int virtio_dev_probe(struct device *_d)
if (err) if (err)
goto err; goto err;
add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK); /* If probe didn't do it, mark device DRIVER_OK ourselves. */
if (!(dev->config->get_status(dev) & VIRTIO_CONFIG_S_DRIVER_OK))
virtio_device_ready(dev);
if (drv->scan) if (drv->scan)
drv->scan(dev); drv->scan(dev);
......
...@@ -44,8 +44,7 @@ static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES; ...@@ -44,8 +44,7 @@ static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
module_param(oom_pages, int, S_IRUSR | S_IWUSR); module_param(oom_pages, int, S_IRUSR | S_IWUSR);
MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
struct virtio_balloon struct virtio_balloon {
{
struct virtio_device *vdev; struct virtio_device *vdev;
struct virtqueue *inflate_vq, *deflate_vq, *stats_vq; struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
...@@ -466,6 +465,12 @@ static int virtballoon_probe(struct virtio_device *vdev) ...@@ -466,6 +465,12 @@ static int virtballoon_probe(struct virtio_device *vdev)
struct virtio_balloon *vb; struct virtio_balloon *vb;
int err; int err;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
vdev->priv = vb = kmalloc(sizeof(*vb), GFP_KERNEL); vdev->priv = vb = kmalloc(sizeof(*vb), GFP_KERNEL);
if (!vb) { if (!vb) {
err = -ENOMEM; err = -ENOMEM;
......
/* /*
* Virtio memory mapped device driver * Virtio memory mapped device driver
* *
* Copyright 2011, ARM Ltd. * Copyright 2011-2014, ARM Ltd.
* *
* This module allows virtio devices to be used over a virtual, memory mapped * This module allows virtio devices to be used over a virtual, memory mapped
* platform device. * platform device.
...@@ -50,36 +50,6 @@ ...@@ -50,36 +50,6 @@
* *
* *
* *
* Registers layout (all 32-bit wide):
*
* offset d. name description
* ------ -- ---------------- -----------------
*
* 0x000 R MagicValue Magic value "virt"
* 0x004 R Version Device version (current max. 1)
* 0x008 R DeviceID Virtio device ID
* 0x00c R VendorID Virtio vendor ID
*
* 0x010 R HostFeatures Features supported by the host
* 0x014 W HostFeaturesSel Set of host features to access via HostFeatures
*
* 0x020 W GuestFeatures Features activated by the guest
* 0x024 W GuestFeaturesSel Set of activated features to set via GuestFeatures
* 0x028 W GuestPageSize Size of guest's memory page in bytes
*
* 0x030 W QueueSel Queue selector
* 0x034 R QueueNumMax Maximum size of the currently selected queue
* 0x038 W QueueNum Queue size for the currently selected queue
* 0x03c W QueueAlign Used Ring alignment for the current queue
* 0x040 RW QueuePFN PFN for the currently selected queue
*
* 0x050 W QueueNotify Queue notifier
* 0x060 R InterruptStatus Interrupt status register
* 0x064 W InterruptACK Interrupt acknowledge register
* 0x070 RW Status Device status register
*
* 0x100+ RW Device-specific configuration space
*
* Based on Virtio PCI driver by Anthony Liguori, copyright IBM Corp. 2007 * Based on Virtio PCI driver by Anthony Liguori, copyright IBM Corp. 2007
* *
* This work is licensed under the terms of the GNU GPL, version 2 or later. * This work is licensed under the terms of the GNU GPL, version 2 or later.
...@@ -145,11 +115,16 @@ struct virtio_mmio_vq_info { ...@@ -145,11 +115,16 @@ struct virtio_mmio_vq_info {
static u64 vm_get_features(struct virtio_device *vdev) static u64 vm_get_features(struct virtio_device *vdev)
{ {
struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
u64 features;
writel(1, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
features = readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
features <<= 32;
/* TODO: Features > 32 bits */ writel(0, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
writel(0, vm_dev->base + VIRTIO_MMIO_HOST_FEATURES_SEL); features |= readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
return readl(vm_dev->base + VIRTIO_MMIO_HOST_FEATURES); return features;
} }
static int vm_finalize_features(struct virtio_device *vdev) static int vm_finalize_features(struct virtio_device *vdev)
...@@ -159,11 +134,20 @@ static int vm_finalize_features(struct virtio_device *vdev) ...@@ -159,11 +134,20 @@ static int vm_finalize_features(struct virtio_device *vdev)
/* Give virtio_ring a chance to accept features. */ /* Give virtio_ring a chance to accept features. */
vring_transport_features(vdev); vring_transport_features(vdev);
/* Make sure we don't have any features > 32 bits! */ /* Make sure there is are no mixed devices */
BUG_ON((u32)vdev->features != vdev->features); if (vm_dev->version == 2 &&
!__virtio_test_bit(vdev, VIRTIO_F_VERSION_1)) {
dev_err(&vdev->dev, "New virtio-mmio devices (version 2) must provide VIRTIO_F_VERSION_1 feature!\n");
return -EINVAL;
}
writel(1, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
writel((u32)(vdev->features >> 32),
vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES);
writel(0, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES_SEL); writel(0, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
writel(vdev->features, vm_dev->base + VIRTIO_MMIO_GUEST_FEATURES); writel((u32)vdev->features,
vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES);
return 0; return 0;
} }
...@@ -275,7 +259,12 @@ static void vm_del_vq(struct virtqueue *vq) ...@@ -275,7 +259,12 @@ static void vm_del_vq(struct virtqueue *vq)
/* Select and deactivate the queue */ /* Select and deactivate the queue */
writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL); writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
if (vm_dev->version == 1) {
writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN); writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
} else {
writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
}
size = PAGE_ALIGN(vring_size(info->num, VIRTIO_MMIO_VRING_ALIGN)); size = PAGE_ALIGN(vring_size(info->num, VIRTIO_MMIO_VRING_ALIGN));
free_pages_exact(info->queue, size); free_pages_exact(info->queue, size);
...@@ -312,7 +301,8 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index, ...@@ -312,7 +301,8 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL); writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
/* Queue shouldn't already be set up. */ /* Queue shouldn't already be set up. */
if (readl(vm_dev->base + VIRTIO_MMIO_QUEUE_PFN)) { if (readl(vm_dev->base + (vm_dev->version == 1 ?
VIRTIO_MMIO_QUEUE_PFN : VIRTIO_MMIO_QUEUE_READY))) {
err = -ENOENT; err = -ENOENT;
goto error_available; goto error_available;
} }
...@@ -356,13 +346,6 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index, ...@@ -356,13 +346,6 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
info->num /= 2; info->num /= 2;
} }
/* Activate the queue */
writel(info->num, vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
writel(VIRTIO_MMIO_VRING_ALIGN,
vm_dev->base + VIRTIO_MMIO_QUEUE_ALIGN);
writel(virt_to_phys(info->queue) >> PAGE_SHIFT,
vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
/* Create the vring */ /* Create the vring */
vq = vring_new_virtqueue(index, info->num, VIRTIO_MMIO_VRING_ALIGN, vdev, vq = vring_new_virtqueue(index, info->num, VIRTIO_MMIO_VRING_ALIGN, vdev,
true, info->queue, vm_notify, callback, name); true, info->queue, vm_notify, callback, name);
...@@ -371,6 +354,33 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index, ...@@ -371,6 +354,33 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
goto error_new_virtqueue; goto error_new_virtqueue;
} }
/* Activate the queue */
writel(info->num, vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
if (vm_dev->version == 1) {
writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_QUEUE_ALIGN);
writel(virt_to_phys(info->queue) >> PAGE_SHIFT,
vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
} else {
u64 addr;
addr = virt_to_phys(info->queue);
writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_LOW);
writel((u32)(addr >> 32),
vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_HIGH);
addr = virt_to_phys(virtqueue_get_avail(vq));
writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_LOW);
writel((u32)(addr >> 32),
vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_HIGH);
addr = virt_to_phys(virtqueue_get_used(vq));
writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_USED_LOW);
writel((u32)(addr >> 32),
vm_dev->base + VIRTIO_MMIO_QUEUE_USED_HIGH);
writel(1, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
}
vq->priv = info;
info->vq = vq;
@@ -381,7 +391,12 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
return vq;
error_new_virtqueue:
if (vm_dev->version == 1) {
writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
} else {
writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
}
free_pages_exact(info->queue, size);
error_alloc_pages:
kfree(info);
@@ -476,15 +491,31 @@ static int virtio_mmio_probe(struct platform_device *pdev)
/* Check device version */
vm_dev->version = readl(vm_dev->base + VIRTIO_MMIO_VERSION);
if (vm_dev->version < 1 || vm_dev->version > 2) {
dev_err(&pdev->dev, "Version %ld not supported!\n",
vm_dev->version);
return -ENXIO;
}
vm_dev->vdev.id.device = readl(vm_dev->base + VIRTIO_MMIO_DEVICE_ID);
if (vm_dev->vdev.id.device == 0) {
/*
* virtio-mmio device with an ID 0 is a (dummy) placeholder
* with no function. End probing now with no error reported.
*/
return -ENODEV;
}
vm_dev->vdev.id.vendor = readl(vm_dev->base + VIRTIO_MMIO_VENDOR_ID);
/* Reject legacy-only IDs for version 2 devices */
if (vm_dev->version == 2 &&
virtio_device_is_legacy_only(vm_dev->vdev.id)) {
dev_err(&pdev->dev, "Version 2 not supported for devices %u!\n",
vm_dev->vdev.id.device);
return -ENODEV;
}
if (vm_dev->version == 1)
writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE);
platform_set_drvdata(pdev, vm_dev);
@@ -19,6 +19,14 @@
#include "virtio_pci_common.h"
static bool force_legacy = false;
#if IS_ENABLED(CONFIG_VIRTIO_PCI_LEGACY)
module_param(force_legacy, bool, 0444);
MODULE_PARM_DESC(force_legacy,
"Force legacy mode for transitional virtio 1 devices");
#endif
/* wait for pending irq handlers */
void vp_synchronize_vectors(struct virtio_device *vdev)
{
@@ -464,15 +472,97 @@ static const struct pci_device_id virtio_pci_id_table[] = {
MODULE_DEVICE_TABLE(pci, virtio_pci_id_table);
static void virtio_pci_release_dev(struct device *_d)
{
struct virtio_device *vdev = dev_to_virtio(_d);
struct virtio_pci_device *vp_dev = to_vp_device(vdev);
/* As struct device is a kobject, it's not safe to
* free the memory (including the reference counter itself)
* until its release callback. */
kfree(vp_dev);
}
static int virtio_pci_probe(struct pci_dev *pci_dev,
const struct pci_device_id *id)
{
struct virtio_pci_device *vp_dev;
int rc;
/* allocate our structure and fill it out */
vp_dev = kzalloc(sizeof(struct virtio_pci_device), GFP_KERNEL);
if (!vp_dev)
return -ENOMEM;
pci_set_drvdata(pci_dev, vp_dev);
vp_dev->vdev.dev.parent = &pci_dev->dev;
vp_dev->vdev.dev.release = virtio_pci_release_dev;
vp_dev->pci_dev = pci_dev;
INIT_LIST_HEAD(&vp_dev->virtqueues);
spin_lock_init(&vp_dev->lock);
/* Disable MSI/MSIX to bring device to a known good state. */
pci_msi_off(pci_dev);
/* enable the device */
rc = pci_enable_device(pci_dev);
if (rc)
goto err_enable_device;
rc = pci_request_regions(pci_dev, "virtio-pci");
if (rc)
goto err_request_regions;
if (force_legacy) {
rc = virtio_pci_legacy_probe(vp_dev);
/* Also try modern mode if we can't map BAR0 (no IO space). */
if (rc == -ENODEV || rc == -ENOMEM)
rc = virtio_pci_modern_probe(vp_dev);
if (rc)
goto err_probe;
} else {
rc = virtio_pci_modern_probe(vp_dev);
if (rc == -ENODEV)
rc = virtio_pci_legacy_probe(vp_dev);
if (rc)
goto err_probe;
}
pci_set_master(pci_dev);
rc = register_virtio_device(&vp_dev->vdev);
if (rc)
goto err_register;
return 0;
err_register:
if (vp_dev->ioaddr)
virtio_pci_legacy_remove(vp_dev);
else
virtio_pci_modern_remove(vp_dev);
err_probe:
pci_release_regions(pci_dev);
err_request_regions:
pci_disable_device(pci_dev);
err_enable_device:
kfree(vp_dev);
return rc;
}
static void virtio_pci_remove(struct pci_dev *pci_dev)
{
struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);
unregister_virtio_device(&vp_dev->vdev);
if (vp_dev->ioaddr)
virtio_pci_legacy_remove(vp_dev);
else
virtio_pci_modern_remove(vp_dev);
pci_release_regions(pci_dev);
pci_disable_device(pci_dev);
}
static struct pci_driver virtio_pci_driver = {
@@ -53,12 +53,32 @@ struct virtio_pci_device {
struct virtio_device vdev;
struct pci_dev *pci_dev;
/* In legacy mode, these two point to within ->legacy. */
/* Where to read and clear interrupt */
u8 __iomem *isr;
/* Modern only fields */
/* The IO mapping for the PCI config space (non-legacy mode) */
struct virtio_pci_common_cfg __iomem *common;
/* Device-specific data (non-legacy mode) */
void __iomem *device;
/* Base of vq notifications (non-legacy mode). */
void __iomem *notify_base;
/* So we can sanity-check accesses. */
size_t notify_len;
size_t device_len;
/* Capability for when we need to map notifications per-vq. */
int notify_map_cap;
/* Multiply queue_notify_off by this value. (non-legacy mode). */
u32 notify_offset_multiplier;
/* Legacy only field */
/* the IO mapping for the PCI config space */
void __iomem *ioaddr;
/* the IO mapping for ISR operation */
void __iomem *isr;
/* a list of queues so we can dispatch IRQs */
spinlock_t lock;
struct list_head virtqueues;
@@ -127,8 +147,19 @@ const char *vp_bus_name(struct virtio_device *vdev);
*/
int vp_set_vq_affinity(struct virtqueue *vq, int cpu);
#if IS_ENABLED(CONFIG_VIRTIO_PCI_LEGACY)
int virtio_pci_legacy_probe(struct virtio_pci_device *);
void virtio_pci_legacy_remove(struct virtio_pci_device *);
#else
static inline int virtio_pci_legacy_probe(struct virtio_pci_device *vp_dev)
{
return -ENODEV;
}
static inline void virtio_pci_legacy_remove(struct virtio_pci_device *vp_dev)
{
}
#endif
int virtio_pci_modern_probe(struct virtio_pci_device *);
void virtio_pci_modern_remove(struct virtio_pci_device *);
#endif
@@ -211,23 +211,10 @@ static const struct virtio_config_ops virtio_pci_config_ops = {
.set_vq_affinity = vp_set_vq_affinity,
};
static void virtio_pci_release_dev(struct device *_d)
{
struct virtio_device *vdev = dev_to_virtio(_d);
struct virtio_pci_device *vp_dev = to_vp_device(vdev);
/* As struct device is a kobject, it's not safe to
* free the memory (including the reference counter itself)
* until it's release callback. */
kfree(vp_dev);
}
/* the PCI probing function */
int virtio_pci_legacy_probe(struct virtio_pci_device *vp_dev)
const struct pci_device_id *id)
{
struct pci_dev *pci_dev = vp_dev->pci_dev;
int err;
/* We only own devices >= 0x1000 and <= 0x103f: leave the rest. */
if (pci_dev->device < 0x1000 || pci_dev->device > 0x103f)
@@ -239,41 +226,12 @@ int virtio_pci_legacy_probe(struct pci_dev *pci_dev,
return -ENODEV;
}
/* allocate our structure and fill it out */
vp_dev = kzalloc(sizeof(struct virtio_pci_device), GFP_KERNEL);
if (vp_dev == NULL)
return -ENOMEM;
vp_dev->vdev.dev.parent = &pci_dev->dev;
vp_dev->vdev.dev.release = virtio_pci_release_dev;
vp_dev->vdev.config = &virtio_pci_config_ops;
vp_dev->pci_dev = pci_dev;
INIT_LIST_HEAD(&vp_dev->virtqueues);
spin_lock_init(&vp_dev->lock);
/* Disable MSI/MSIX to bring device to a known good state. */
pci_msi_off(pci_dev);
/* enable the device */
err = pci_enable_device(pci_dev);
if (err)
goto out;
err = pci_request_regions(pci_dev, "virtio-pci");
if (err)
goto out_enable_device;
vp_dev->ioaddr = pci_iomap(pci_dev, 0, 0);
if (!vp_dev->ioaddr)
return -ENOMEM;
goto out_req_regions;
}
vp_dev->isr = vp_dev->ioaddr + VIRTIO_PCI_ISR;
pci_set_drvdata(pci_dev, vp_dev);
pci_set_master(pci_dev);
/* we use the subsystem vendor/device id as the virtio vendor/device
* id. this allows us to use the same PCI vendor/device id for all
* virtio devices and to identify the particular virtio driver by
@@ -281,36 +239,18 @@ int virtio_pci_legacy_probe(struct pci_dev *pci_dev,
vp_dev->vdev.id.vendor = pci_dev->subsystem_vendor;
vp_dev->vdev.id.device = pci_dev->subsystem_device;
vp_dev->vdev.config = &virtio_pci_config_ops;
vp_dev->config_vector = vp_config_vector;
vp_dev->setup_vq = setup_vq;
vp_dev->del_vq = del_vq;
/* finally register the virtio device */
err = register_virtio_device(&vp_dev->vdev);
if (err)
goto out_set_drvdata;
return 0;
out_set_drvdata:
pci_iounmap(pci_dev, vp_dev->ioaddr);
out_req_regions:
pci_release_regions(pci_dev);
out_enable_device:
pci_disable_device(pci_dev);
out:
kfree(vp_dev);
return err;
}
void virtio_pci_legacy_remove(struct virtio_pci_device *vp_dev)
{
struct pci_dev *pci_dev = vp_dev->pci_dev;
unregister_virtio_device(&vp_dev->vdev);
vp_del_vqs(&vp_dev->vdev);
pci_iounmap(pci_dev, vp_dev->ioaddr);
pci_release_regions(pci_dev);
pci_disable_device(pci_dev);
}
@@ -54,8 +54,7 @@
#define END_USE(vq)
#endif
struct vring_virtqueue {
struct virtqueue vq;
/* Actual memory layout for this queue */
@@ -245,14 +244,14 @@ static inline int virtqueue_add(struct virtqueue *_vq,
vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) + 1);
vq->num_added++;
pr_debug("Added buffer head %i to %p\n", head, vq);
END_USE(vq);
/* This is very unlikely, but theoretically possible. Kick
* just in case. */
if (unlikely(vq->num_added == (1 << 16) - 1))
virtqueue_kick(_vq);
pr_debug("Added buffer head %i to %p\n", head, vq);
END_USE(vq);
return 0;
}
@@ -15,6 +15,9 @@ struct pci_dev;
#ifdef CONFIG_PCI
/* Create a virtual mapping cookie for a PCI BAR (memory or IO) */
extern void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max);
extern void __iomem *pci_iomap_range(struct pci_dev *dev, int bar,
unsigned long offset,
unsigned long maxlen);
/* Create a virtual mapping cookie for a port on a given PCI device.
* Do not call this directly, it exists to make it easier for architectures
* to override */
@@ -30,6 +33,13 @@ static inline void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max)
{
return NULL;
}
static inline void __iomem *pci_iomap_range(struct pci_dev *dev, int bar,
unsigned long offset,
unsigned long maxlen)
{
return NULL;
}
#endif
#endif /* __ASM_GENERIC_IO_H */
@@ -8,52 +8,13 @@
*
* The Guest needs devices to do anything useful. Since we don't let it touch
* real devices (think of the damage it could do!) we provide virtual devices.
* We emulate a PCI bus with virtio devices on it; we used to have our own
* lguest bus which was far simpler, but this tests the virtio 1.0 standard.
* simple lguest bus and we use "virtio" drivers. These drivers need a set of
* routines from us which will actually do the virtual I/O, but they handle all
* the net/block/console stuff themselves. This means that if we want to add
* a new device, we simply need to write a new virtio driver and create support
* for it in the Launcher: this code won't need to change.
*
* Virtio devices are also used by kvm, so we can simply reuse their optimized
* device drivers. And one day when everyone uses virtio, my plan will be
* complete. Bwahahahah!
*
* Devices are described by a simplified ID, a status byte, and some "config"
* bytes which describe this device's configuration. This is placed by the
* Launcher just above the top of physical memory:
*/
struct lguest_device_desc {
/* The device type: console, network, disk etc. Type 0 terminates. */
__u8 type;
/* The number of virtqueues (first in config array) */
__u8 num_vq;
/*
* The number of bytes of feature bits. Multiply by 2: one for host
* features and one for Guest acknowledgements.
*/
__u8 feature_len;
/* The number of bytes of the config array after virtqueues. */
__u8 config_len;
/* A status byte, written by the Guest. */
__u8 status;
__u8 config[0];
};
/*D:135
* This is how we expect the device configuration field for a virtqueue
* to be laid out in config space.
*/
struct lguest_vqconfig {
/* The number of entries in the virtio_ring */
__u16 num;
/* The interrupt we get when something happens. */
__u16 irq;
/* The page number of the virtio ring for this device. */
__u32 pfn;
};
/*:*/
/* Write command first word is a request. */
enum lguest_req
@@ -62,12 +23,22 @@ enum lguest_req
LHREQ_GETDMA, /* No longer used */
LHREQ_IRQ, /* + irq */
LHREQ_BREAK, /* No longer used */
LHREQ_EVENTFD, /* No longer used. */
LHREQ_GETREG, /* + offset within struct pt_regs (then read value). */
LHREQ_SETREG, /* + offset within struct pt_regs, value. */
LHREQ_TRAP, /* + trap number to deliver to guest. */
};
/*
* This is what read() of the lguest fd populates. trap ==
* LGUEST_TRAP_ENTRY for an LHCALL_NOTIFY (addr is the
* argument), 14 for a page fault in the MMIO region (addr is
* the trap address, insn is the instruction), or 13 for a GPF
* (insn is the instruction).
*/
struct lguest_pending {
__u8 trap;
__u8 insn[7];
__u32 addr;
};
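As a rough illustration (not part of this patch), a launcher might drain these records like so; lguest_fd and the three handler functions are hypothetical names used only for the sketch:

#include <unistd.h>

/* Hypothetical launcher-side handlers, not defined by this patch. */
extern void emulate_mmio(__u32 addr, const __u8 insn[7]);
extern void deliver_gpf(const __u8 insn[7]);
extern void handle_notify(__u32 addr);

static void service_one_trap(int lguest_fd)
{
	struct lguest_pending pending;

	if (read(lguest_fd, &pending, sizeof(pending)) != sizeof(pending))
		return;

	if (pending.trap == 14)		/* page fault in the MMIO region */
		emulate_mmio(pending.addr, pending.insn);
	else if (pending.trap == 13)	/* general protection fault */
		deliver_gpf(pending.insn);
	else				/* LGUEST_TRAP_ENTRY: an LHCALL_NOTIFY */
		handle_notify(pending.addr);
}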
#endif /* _LINUX_LGUEST_LAUNCHER */
@@ -51,23 +51,29 @@
/* Virtio vendor ID - Read Only */
#define VIRTIO_MMIO_VENDOR_ID 0x00c
/* Bitmask of the features supported by the device (host)
* (32 bits per set) - Read Only */
#define VIRTIO_MMIO_DEVICE_FEATURES 0x010
/* Device (host) features set selector - Write Only */
#define VIRTIO_MMIO_DEVICE_FEATURES_SEL 0x014
/* Bitmask of features activated by the driver (guest)
* (32 bits per set) - Write Only */
#define VIRTIO_MMIO_DRIVER_FEATURES 0x020
/* Activated features set selector - Write Only */
#define VIRTIO_MMIO_DRIVER_FEATURES_SEL 0x024
#ifndef VIRTIO_MMIO_NO_LEGACY /* LEGACY DEVICES ONLY! */
/* Guest's memory page size in bytes - Write Only */
#define VIRTIO_MMIO_GUEST_PAGE_SIZE 0x028
#endif
/* Queue selector - Write Only */
#define VIRTIO_MMIO_QUEUE_SEL 0x030
@@ -77,12 +83,21 @@
/* Queue size for the currently selected queue - Write Only */
#define VIRTIO_MMIO_QUEUE_NUM 0x038
#ifndef VIRTIO_MMIO_NO_LEGACY /* LEGACY DEVICES ONLY! */
/* Used Ring alignment for the currently selected queue - Write Only */
#define VIRTIO_MMIO_QUEUE_ALIGN 0x03c
/* Guest's PFN for the currently selected queue - Read Write */
#define VIRTIO_MMIO_QUEUE_PFN 0x040
#endif
/* Ready bit for the currently selected queue - Read Write */
#define VIRTIO_MMIO_QUEUE_READY 0x044
/* Queue notifier - Write Only */
#define VIRTIO_MMIO_QUEUE_NOTIFY 0x050
@@ -95,6 +110,21 @@
/* Device status register - Read Write */
#define VIRTIO_MMIO_STATUS 0x070
/* Selected queue's Descriptor Table address, 64 bits in two halves */
#define VIRTIO_MMIO_QUEUE_DESC_LOW 0x080
#define VIRTIO_MMIO_QUEUE_DESC_HIGH 0x084
/* Selected queue's Available Ring address, 64 bits in two halves */
#define VIRTIO_MMIO_QUEUE_AVAIL_LOW 0x090
#define VIRTIO_MMIO_QUEUE_AVAIL_HIGH 0x094
/* Selected queue's Used Ring address, 64 bits in two halves */
#define VIRTIO_MMIO_QUEUE_USED_LOW 0x0a0
#define VIRTIO_MMIO_QUEUE_USED_HIGH 0x0a4
/* Configuration atomicity value */
#define VIRTIO_MMIO_CONFIG_GENERATION 0x0fc
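The generation register exists so a driver can read a multi-byte config field without seeing a torn update. A hedged, kernel-style sketch of the usual retry loop (base is assumed to be the already-mapped MMIO region; this helper is illustrative, not part of the patch):

/* Illustrative sketch: re-read until the generation value is stable, so a
 * 64-bit config field is observed atomically even if the device changes it. */
static u64 read_config64(void __iomem *base, unsigned int offset)
{
	u32 gen;
	u64 val;

	do {
		gen = readl(base + VIRTIO_MMIO_CONFIG_GENERATION);
		val = readl(base + VIRTIO_MMIO_CONFIG + offset);
		val |= (u64)readl(base + VIRTIO_MMIO_CONFIG + offset + 4) << 32;
	} while (readl(base + VIRTIO_MMIO_CONFIG_GENERATION) != gen);

	return val;
}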
/* The config space is defined by each driver as
* the per-driver configuration space - Read Write */
#define VIRTIO_MMIO_CONFIG 0x100
@@ -36,8 +36,7 @@
/* Size of a PFN in the balloon interface. */
#define VIRTIO_BALLOON_PFN_SHIFT 12
struct virtio_balloon_config {
/* Number of pages host wants Guest to give up. */
__le32 num_pages;
/* Number of pages we've actually got in balloon. */
@@ -31,22 +31,25 @@
#include <linux/virtio_types.h>
/* Feature bits */
#define VIRTIO_BLK_F_BARRIER 0 /* Does host support barriers? */
#define VIRTIO_BLK_F_SIZE_MAX 1 /* Indicates maximum segment size */
#define VIRTIO_BLK_F_SEG_MAX 2 /* Indicates maximum # of segments */
#define VIRTIO_BLK_F_GEOMETRY 4 /* Legacy geometry available */
#define VIRTIO_BLK_F_RO 5 /* Disk is read-only */
#define VIRTIO_BLK_F_BLK_SIZE 6 /* Block size of disk is available*/
#define VIRTIO_BLK_F_SCSI 7 /* Supports scsi command passthru */
#define VIRTIO_BLK_F_WCE 9 /* Writeback mode enabled after reset */
#define VIRTIO_BLK_F_TOPOLOGY 10 /* Topology information is available */
#define VIRTIO_BLK_F_CONFIG_WCE 11 /* Writeback mode available in config */
#define VIRTIO_BLK_F_MQ 12 /* support more than one vq */
/* Legacy feature bits */
#ifndef VIRTIO_BLK_NO_LEGACY
#define VIRTIO_BLK_F_BARRIER 0 /* Does host support barriers? */
#define VIRTIO_BLK_F_SCSI 7 /* Supports scsi command passthru */
#define VIRTIO_BLK_F_WCE 9 /* Writeback mode enabled after reset */
#define VIRTIO_BLK_F_CONFIG_WCE 11 /* Writeback mode available in config */
#ifndef __KERNEL__
/* Old (deprecated) name for VIRTIO_BLK_F_WCE. */
#define VIRTIO_BLK_F_FLUSH VIRTIO_BLK_F_WCE
#endif
#endif /* !VIRTIO_BLK_NO_LEGACY */
#define VIRTIO_BLK_ID_BYTES 20 /* ID string length */
@@ -100,8 +103,10 @@ struct virtio_blk_config {
#define VIRTIO_BLK_T_IN 0
#define VIRTIO_BLK_T_OUT 1
#ifndef VIRTIO_BLK_NO_LEGACY
/* This bit says it's a scsi command, not an actual read or write. */
#define VIRTIO_BLK_T_SCSI_CMD 2
#endif /* VIRTIO_BLK_NO_LEGACY */
/* Cache flush command */
#define VIRTIO_BLK_T_FLUSH 4
@@ -109,8 +114,10 @@ struct virtio_blk_config {
/* Get device ID command */
#define VIRTIO_BLK_T_GET_ID 8
#ifndef VIRTIO_BLK_NO_LEGACY
/* Barrier before this op. */
#define VIRTIO_BLK_T_BARRIER 0x80000000
#endif /* !VIRTIO_BLK_NO_LEGACY */
/* This is the first element of the read scatter-gather list. */
struct virtio_blk_outhdr {
@@ -122,12 +129,14 @@ struct virtio_blk_outhdr {
__virtio64 sector;
};
#ifndef VIRTIO_BLK_NO_LEGACY
struct virtio_scsi_inhdr {
__virtio32 errors;
__virtio32 data_len;
__virtio32 sense_len;
__virtio32 residual;
};
#endif /* !VIRTIO_BLK_NO_LEGACY */
/* And this is the final byte of the write scatter-gather list. */
#define VIRTIO_BLK_S_OK 0
@@ -49,12 +49,14 @@
#define VIRTIO_TRANSPORT_F_START 28
#define VIRTIO_TRANSPORT_F_END 33
#ifndef VIRTIO_CONFIG_NO_LEGACY
/* Do we get callbacks when the ring is completely used, even if we've
* suppressed them? */
#define VIRTIO_F_NOTIFY_ON_EMPTY 24
/* Can the device handle any descriptor layout? */
#define VIRTIO_F_ANY_LAYOUT 27
#endif /* VIRTIO_CONFIG_NO_LEGACY */
/* v1.0 compliant. */
#define VIRTIO_F_VERSION_1 32
@@ -35,7 +35,6 @@
#define VIRTIO_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */
#define VIRTIO_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */
#define VIRTIO_NET_F_MAC 5 /* Host has given MAC address. */
#define VIRTIO_NET_F_GSO 6 /* Host handles pkts w/ any GSO type */
#define VIRTIO_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */
#define VIRTIO_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */
#define VIRTIO_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */
@@ -56,6 +55,10 @@
* Steering */
#define VIRTIO_NET_F_CTRL_MAC_ADDR 23 /* Set MAC address */
#ifndef VIRTIO_NET_NO_LEGACY
#define VIRTIO_NET_F_GSO 6 /* Host handles pkts w/ any GSO type */
#endif /* VIRTIO_NET_NO_LEGACY */
#define VIRTIO_NET_S_LINK_UP 1 /* Link is up */
#define VIRTIO_NET_S_ANNOUNCE 2 /* Announcement is needed */
@@ -71,19 +74,39 @@ struct virtio_net_config {
__u16 max_virtqueue_pairs;
} __attribute__((packed));
/*
* This header comes first in the scatter-gather list. If you don't
* specify GSO or CSUM features, you can simply ignore the header.
*
* This is bitwise-equivalent to the legacy struct virtio_net_hdr_mrg_rxbuf,
* only flattened.
*/
struct virtio_net_hdr_v1 {
#define VIRTIO_NET_HDR_F_NEEDS_CSUM 1 /* Use csum_start, csum_offset */
#define VIRTIO_NET_HDR_F_DATA_VALID 2 /* Csum is valid */
__u8 flags;
#define VIRTIO_NET_HDR_GSO_NONE 0 /* Not a GSO frame */
#define VIRTIO_NET_HDR_GSO_TCPV4 1 /* GSO frame, IPv4 TCP (TSO) */
#define VIRTIO_NET_HDR_GSO_UDP 3 /* GSO frame, IPv4 UDP (UFO) */
#define VIRTIO_NET_HDR_GSO_TCPV6 4 /* GSO frame, IPv6 TCP */
#define VIRTIO_NET_HDR_GSO_ECN 0x80 /* TCP has ECN set */
__u8 gso_type;
__virtio16 hdr_len; /* Ethernet + IP + tcp/udp hdrs */
__virtio16 gso_size; /* Bytes to append to hdr_len per frame */
__virtio16 csum_start; /* Position to start checksumming from */
__virtio16 csum_offset; /* Offset after that to place checksum */
__virtio16 num_buffers; /* Number of merged rx buffers */
};
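For illustration, a transmit path that has negotiated VIRTIO_NET_F_CSUM might fill the v1 header roughly as below. This is a sketch, not driver code: ETH_HLEN and ip_hdr_len describe the caller's frame, the checksum offset is that of a TCP header, and cpu_to_virtio16() conversion of the __virtio16 fields is omitted for brevity (with VIRTIO_F_VERSION_1 they are little-endian).

#include <linux/if_ether.h>

/* Sketch only: describe a partially-checksummed, non-GSO TCP/IPv4 frame. */
static void fill_tx_hdr(struct virtio_net_hdr_v1 *hdr, __u16 ip_hdr_len)
{
	hdr->flags       = VIRTIO_NET_HDR_F_NEEDS_CSUM;
	hdr->gso_type    = VIRTIO_NET_HDR_GSO_NONE;
	hdr->hdr_len     = 0;			/* only a hint when not doing GSO */
	hdr->gso_size    = 0;			/* unused without GSO */
	hdr->csum_start  = ETH_HLEN + ip_hdr_len; /* first byte of the TCP header */
	hdr->csum_offset = 16;			/* offsetof(struct tcphdr, check) */
	hdr->num_buffers = 0;			/* filled by the device on receive */
}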
#ifndef VIRTIO_NET_NO_LEGACY
/* This header comes first in the scatter-gather list.
* For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
* be the first element of the scatter-gather list. If you don't
* specify GSO or CSUM features, you can simply ignore the header. */
struct virtio_net_hdr {
/* See VIRTIO_NET_HDR_F_* */
__u8 flags;
/* See VIRTIO_NET_HDR_GSO_* */
__u8 gso_type;
__virtio16 hdr_len; /* Ethernet + IP + tcp/udp hdrs */
__virtio16 gso_size; /* Bytes to append to hdr_len per frame */
@@ -97,6 +120,7 @@ struct virtio_net_hdr_mrg_rxbuf {
struct virtio_net_hdr hdr;
__virtio16 num_buffers; /* Number of merged rx buffers */
};
#endif /* ...VIRTIO_NET_NO_LEGACY */
/*
* Control virtqueue data structures
@@ -39,7 +39,7 @@
#ifndef _LINUX_VIRTIO_PCI_H
#define _LINUX_VIRTIO_PCI_H
#include <linux/types.h>
#ifndef VIRTIO_PCI_NO_LEGACY
@@ -99,4 +99,95 @@
/* Vector value used to disable MSI for queue */
#define VIRTIO_MSI_NO_VECTOR 0xffff
#ifndef VIRTIO_PCI_NO_MODERN
/* IDs for different capabilities. Must all exist. */
/* Common configuration */
#define VIRTIO_PCI_CAP_COMMON_CFG 1
/* Notifications */
#define VIRTIO_PCI_CAP_NOTIFY_CFG 2
/* ISR access */
#define VIRTIO_PCI_CAP_ISR_CFG 3
/* Device specific configuration */
#define VIRTIO_PCI_CAP_DEVICE_CFG 4
/* PCI configuration access */
#define VIRTIO_PCI_CAP_PCI_CFG 5
/* This is the PCI capability header: */
struct virtio_pci_cap {
__u8 cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */
__u8 cap_next; /* Generic PCI field: next ptr. */
__u8 cap_len; /* Generic PCI field: capability length */
__u8 cfg_type; /* Identifies the structure. */
__u8 bar; /* Where to find it. */
__u8 padding[3]; /* Pad to full dword. */
__le32 offset; /* Offset within bar. */
__le32 length; /* Length of the structure, in bytes. */
};
struct virtio_pci_notify_cap {
struct virtio_pci_cap cap;
__le32 notify_off_multiplier; /* Multiplier for queue_notify_off. */
};
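A driver locates each of these structures by walking the standard PCI vendor-specific capability list and matching cfg_type. A hedged sketch using only generic PCI helpers (this is not the actual virtio_pci code, which additionally validates the BAR):

/* Sketch: return the config-space position of the vendor capability whose
 * cfg_type matches, or 0 if the device does not advertise one. */
static int find_virtio_cap(struct pci_dev *dev, u8 cfg_type)
{
	int pos;

	for (pos = pci_find_capability(dev, PCI_CAP_ID_VNDR);
	     pos > 0;
	     pos = pci_find_next_capability(dev, pos, PCI_CAP_ID_VNDR)) {
		u8 type;

		pci_read_config_byte(dev, pos + VIRTIO_PCI_CAP_CFG_TYPE, &type);
		if (type == cfg_type)
			return pos;
	}
	return 0;
}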
/* Fields in VIRTIO_PCI_CAP_COMMON_CFG: */
struct virtio_pci_common_cfg {
/* About the whole device. */
__le32 device_feature_select; /* read-write */
__le32 device_feature; /* read-only */
__le32 guest_feature_select; /* read-write */
__le32 guest_feature; /* read-write */
__le16 msix_config; /* read-write */
__le16 num_queues; /* read-only */
__u8 device_status; /* read-write */
__u8 config_generation; /* read-only */
/* About a specific virtqueue. */
__le16 queue_select; /* read-write */
__le16 queue_size; /* read-write, power of 2. */
__le16 queue_msix_vector; /* read-write */
__le16 queue_enable; /* read-write */
__le16 queue_notify_off; /* read-only */
__le32 queue_desc_lo; /* read-write */
__le32 queue_desc_hi; /* read-write */
__le32 queue_avail_lo; /* read-write */
__le32 queue_avail_hi; /* read-write */
__le32 queue_used_lo; /* read-write */
__le32 queue_used_hi; /* read-write */
};
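All per-queue fields are reached by first writing queue_select; the queue's notification address is notify_base plus queue_notify_off times the multiplier advertised in the notify capability. A rough sketch (ring address programming is omitted, and cfg, notify_base and the multiplier are assumed to be already mapped and read; not the driver's actual code):

/* Sketch: select queue `index`, check it exists, enable it, and compute
 * where the driver should write notifications for it. */
static void __iomem *enable_queue(struct virtio_pci_common_cfg __iomem *cfg,
				  void __iomem *notify_base,
				  u32 notify_off_multiplier, u16 index)
{
	u16 size, notify_off;

	iowrite16(index, &cfg->queue_select);
	size = ioread16(&cfg->queue_size);	/* 0 means the queue doesn't exist */
	if (!size)
		return NULL;

	notify_off = ioread16(&cfg->queue_notify_off);
	/* Real code programs queue_desc/avail/used before this step. */
	iowrite16(1, &cfg->queue_enable);

	return notify_base + notify_off * notify_off_multiplier;
}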
/* Macro versions of offsets for the Old Timers! */
#define VIRTIO_PCI_CAP_VNDR 0
#define VIRTIO_PCI_CAP_NEXT 1
#define VIRTIO_PCI_CAP_LEN 2
#define VIRTIO_PCI_CAP_CFG_TYPE 3
#define VIRTIO_PCI_CAP_BAR 4
#define VIRTIO_PCI_CAP_OFFSET 8
#define VIRTIO_PCI_CAP_LENGTH 12
#define VIRTIO_PCI_NOTIFY_CAP_MULT 16
#define VIRTIO_PCI_COMMON_DFSELECT 0
#define VIRTIO_PCI_COMMON_DF 4
#define VIRTIO_PCI_COMMON_GFSELECT 8
#define VIRTIO_PCI_COMMON_GF 12
#define VIRTIO_PCI_COMMON_MSIX 16
#define VIRTIO_PCI_COMMON_NUMQ 18
#define VIRTIO_PCI_COMMON_STATUS 20
#define VIRTIO_PCI_COMMON_CFGGENERATION 21
#define VIRTIO_PCI_COMMON_Q_SELECT 22
#define VIRTIO_PCI_COMMON_Q_SIZE 24
#define VIRTIO_PCI_COMMON_Q_MSIX 26
#define VIRTIO_PCI_COMMON_Q_ENABLE 28
#define VIRTIO_PCI_COMMON_Q_NOFF 30
#define VIRTIO_PCI_COMMON_Q_DESCLO 32
#define VIRTIO_PCI_COMMON_Q_DESCHI 36
#define VIRTIO_PCI_COMMON_Q_AVAILLO 40
#define VIRTIO_PCI_COMMON_Q_AVAILHI 44
#define VIRTIO_PCI_COMMON_Q_USEDLO 48
#define VIRTIO_PCI_COMMON_Q_USEDHI 52
#endif /* VIRTIO_PCI_NO_MODERN */
#endif
@@ -10,10 +10,11 @@
#ifdef CONFIG_PCI
/**
* pci_iomap_range - create a virtual mapping cookie for a PCI BAR
* @dev: PCI device that owns the BAR
* @bar: BAR number
* @offset: map memory at the given offset in BAR
* @maxlen: max length of the memory to map
*
* Using this function you will get a __iomem address to your device BAR.
* You can access it using ioread*() and iowrite*(). These functions hide
@@ -21,16 +22,21 @@
* you expect from them in the correct way.
*
* @maxlen specifies the maximum length to map. If you want to get access to
* the complete BAR from offset to the end, pass %0 here.
* */
void __iomem *pci_iomap_range(struct pci_dev *dev,
int bar,
unsigned long offset,
unsigned long maxlen)
{
resource_size_t start = pci_resource_start(dev, bar);
resource_size_t len = pci_resource_len(dev, bar);
unsigned long flags = pci_resource_flags(dev, bar);
if (len <= offset || !start)
return NULL;
len -= offset;
start += offset;
if (maxlen && len > maxlen)
len = maxlen;
if (flags & IORESOURCE_IO)
@@ -43,6 +49,25 @@ void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
/* What? */
return NULL;
}
EXPORT_SYMBOL(pci_iomap_range);
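This is what lets a caller map only the window a capability describes instead of a whole BAR. A hedged usage sketch (bar/offset/length are assumed to come from a virtio PCI capability; the helper name is illustrative):

/* Usage sketch: map just the region described by a virtio PCI capability. */
static void __iomem *map_cap_window(struct pci_dev *dev, u8 bar,
				     u32 offset, u32 length)
{
	void __iomem *p = pci_iomap_range(dev, bar, offset, length);

	if (!p)
		dev_err(&dev->dev, "unable to map BAR %u window\n", bar);
	return p;
}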
/**
* pci_iomap - create a virtual mapping cookie for a PCI BAR
* @dev: PCI device that owns the BAR
* @bar: BAR number
* @maxlen: length of the memory to map
*
* Using this function you will get a __iomem address to your device BAR.
* You can access it using ioread*() and iowrite*(). These functions hide
* the details if this is a MMIO or PIO address space and will just do what
* you expect from them in the correct way.
*
* @maxlen specifies the maximum length to map. If you want to get access to
* the complete BAR without checking for its length first, pass %0 here.
* */
void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
{
return pci_iomap_range(dev, bar, 0, maxlen);
}
EXPORT_SYMBOL(pci_iomap);
#endif /* CONFIG_PCI */
@@ -524,6 +524,12 @@ static int p9_virtio_probe(struct virtio_device *vdev)
int err;
struct virtio_chan *chan;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
chan = kmalloc(sizeof(struct virtio_chan), GFP_KERNEL);
if (!chan) {
pr_err("Failed to allocate virtio 9P channel\n");
# This creates the demonstration utility "lguest" which runs a Linux guest.
CFLAGS:=-m32 -Wall -Wmissing-declarations -Wmissing-prototypes -O3 -U_FORTIFY_SOURCE -Iinclude
all: lguest
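# The launcher now builds against the kernel's virtio 1.0 UAPI headers: the
# rule below symlinks virtio_types.h into a local include/ tree so that the
# -Iinclude flag above can resolve <linux/virtio_types.h>.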
include/linux/virtio_types.h: ../../include/uapi/linux/virtio_types.h
mkdir -p include/linux 2>&1 || true
ln -sf ../../../../include/uapi/linux/virtio_types.h $@
lguest: include/linux/virtio_types.h
clean:
	rm -f lguest