- 05 Nov, 2009 19 commits
-
Alexander Graf authored
We want to be able to build KVM as a module. To enable us to do so, we need some more exports from core Linux parts. This patch exports all functions and variables that are required for KVM. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
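For illustration, a minimal sketch of what such an export looks like; the function name below is made up, not one of the symbols this patch actually exports:

```c
#include <linux/module.h>

/* A core-kernel helper that a modular KVM would need to call.
 * The name is purely illustrative. */
void kvmppc_core_helper(void)
{
	/* ... existing core-kernel logic ... */
}
EXPORT_SYMBOL_GPL(kvmppc_core_helper);
```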
-
Alexander Graf authored
We need to access some VCPU fields from assembly code. In order to get the proper offsets, we have to define them in asm-offsets.c. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
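A minimal sketch of the asm-offsets.c pattern, assuming hypothetical field names; the build turns each DEFINE() into a constant that assembly code can reference:

```c
#include <linux/kbuild.h>
#include <linux/kvm_host.h>

int main(void)
{
	/* Each DEFINE() emits e.g. "VCPU_PC <offset>" into the generated
	 * asm-offsets.h, so assembly can address the field by name. */
	DEFINE(VCPU_PC,  offsetof(struct kvm_vcpu, arch.pc));
	DEFINE(VCPU_MSR, offsetof(struct kvm_vcpu, arch.msr));
	return 0;
}
```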
-
Alexander Graf authored
We need to run some KVM trampoline code in real mode. Unfortunately, real mode only covers 8MB on Cell so we need to squeeze ourselves as low as possible. Also, we need to trap interrupts to get us back from guest state to host state without telling Linux about it. This patch adds interrupt traps and includes the KVM code that requires real mode in the real mode parts of Linux. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
A few opcodes behave differently on desktop and embedded PowerPC cores. In order to reflect those differences, let's add some #ifdef code to emulate.c. We could probably also handle them in the core-specific emulation files, but I would prefer to reuse as much code as possible. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
We support setting the DEC to a certain value right now. Doing that basically arms the CPU-local timer. But there's also an mfdec instruction that lets the OS read the decrementer. This is required by at least all desktop and server PowerPC Linux kernels. It can't really hurt to allow embedded ones to do it as well though. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
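Roughly, the emulation boils down to something like the following sketch; the helper and field names are assumptions for illustration, not the patch's exact code:

```c
#include <linux/kvm_host.h>

/* Hand the guest the current decrementer value when it reads the DEC. */
static int kvmppc_emulate_mfdec(struct kvm_vcpu *vcpu, int rt)
{
	vcpu->arch.gpr[rt] = kvmppc_get_dec(vcpu);	/* assumed helper */
	return EMULATE_DONE;
}
```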
-
Alexander Graf authored
There are generic parts of PowerPC that can be shared across all implementations and specific parts that only apply to BookE or desktop PPCs. This patch adds emulation for desktop specific opcodes that don't apply to BookE CPUs. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
This patch adds an implementation for a G3/G4 MMU, so we can run G3 and G4 guests in KVM on Book3s_64. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
To be able to run a guest, we also need to implement a guest MMU. This patch adds MMU handling for Book3s_64 guests. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
We designed the Book3S port of KVM to be as modular as possible. Most of the code could easily be used on a Book3S_32 host as well. The main difference between 32 and 64 bit cores is the MMU. To keep things well separated, we treat the book3s_64 MMU as one possible compile option. This patch adds all the MMU helpers the rest of the code needs in order to modify the host's MMU, like setting PTEs and segments. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
This adds the book3s core handling file. Here everything that is generic to desktop PowerPC cores is handled, including interrupt injection, MSR settings, etc. It basically fills the same role as booke.c does for embedded PowerPCs. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
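As a rough sketch of the interrupt-injection side (field names and the MSR handling are simplified assumptions), delivering an interrupt means saving the return state and redirecting the guest to the architected vector:

```c
#include <linux/kvm_host.h>

static void kvmppc_book3s_deliver(struct kvm_vcpu *vcpu, unsigned int vec)
{
	vcpu->arch.srr0 = vcpu->arch.pc;		/* where the guest returns to */
	vcpu->arch.srr1 = vcpu->arch.msr;		/* previous machine state */
	vcpu->arch.msr &= ~(MSR_IR | MSR_DR | MSR_EE);	/* real mode, interrupts off */
	vcpu->arch.pc = vec;				/* e.g. 0x500 for external */
}
```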
-
Alexander Graf authored
Getting from host state to the guest is only half the story. We also need to return to our host context and handle whatever happened to get us out of the guest. On PowerPC every guest exit is an interrupt. So all we need to do is trap the host's interrupt handlers and get into our #VMEXIT code to handle it. PowerPCs also have a register that can add an offset to the interrupt handlers' addresses, which is what the booke KVM code uses. Unfortunately that is a hypervisor resource and we also want to be able to run KVM when we're running in an LPAR. So we have to hook into the Linux interrupt handlers. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
This is the really low level of guest entry/exit code. Book3s_64 has an SLB, which stores all ESID -> VSID mappings we're currently aware of. The segments in the guest differ from the ones on the host, so we need to switch the SLB to tell the MMU that we're in a new context. So we store a shadow of the guest's SLB in the PACA, switch to that on entry and only restore bolted entries on exit, leaving the rest to the Linux SLB fault handler. That way we get a really clean way of switching the SLB. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
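A sketch of what the shadow SLB amounts to as a data structure (names and bounds are illustrative); an array like this lives in the PACA and is copied into the hardware SLB on guest entry:

```c
#include <linux/types.h>

/* One shadowed SLB entry: the same ESID/VSID pair the slbmte instruction takes. */
struct kvmppc_slb_shadow {
	u64 esid;	/* effective segment id, valid bit and entry index */
	u64 vsid;	/* virtual segment id and segment flags */
};

#define KVMPPC_SLB_SHADOW_MAX	64	/* assumed upper bound on SLB size */

struct kvmppc_slb_shadow slb_shadow[KVMPPC_SLB_SHADOW_MAX];
```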
-
Alexander Graf authored
This is the entry/exit code. In order to switch between host and guest context, we need to switch register state and call the exit code handler on exit. This assembly file does exactly that. To finally enter the guest it calls into book3s_64_slb.S. On exit it gets jumped to from book3s_64_slb.S too. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
We need to intercept interrupt vectors. To do that, let's add a file we can always include, which only activates the intercepts when we have them configured. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
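Conceptually, the always-includable file reduces to a macro that expands to the real intercept only when the KVM handlers are configured; the macro and symbol names below are illustrative, not the patch's exact definitions:

```c
/* Used from the low-level interrupt vector code via the C preprocessor. */
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#define DO_KVM(vec)	b	kvmppc_interrupt	/* branch into the KVM handler */
#else
#define DO_KVM(vec)					/* no-op when KVM is not built */
#endif
```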
-
Alexander Graf authored
This adds the book3s-specific header file that contains structs which are only valid in book3s-specific code. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
We need to store more information than we currently have for vcpus when running on Book3s. So let's extend the internal struct definitions. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Alexander Graf authored
We need quite a bunch of new constants for KVM on Book3s, so let's define them now. These constants will be used in later patches. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
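For example, the interrupt vector offsets of desktop/server cores are among the constants needed. The macro names below are illustrative; the offsets are the architected ones:

```c
#define BOOK3S_INTERRUPT_SYSTEM_RESET	0x100
#define BOOK3S_INTERRUPT_MACHINE_CHECK	0x200
#define BOOK3S_INTERRUPT_DATA_STORAGE	0x300
#define BOOK3S_INTERRUPT_INST_STORAGE	0x400
#define BOOK3S_INTERRUPT_EXTERNAL	0x500
#define BOOK3S_INTERRUPT_PROGRAM	0x700
#define BOOK3S_INTERRUPT_DECREMENTER	0x900
#define BOOK3S_INTERRUPT_SYSCALL	0xc00
```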
-
Alexander Graf authored
Right now sregs is unused on PPC, so we can use it for initialization of the CPU. KVM on BookE always virtualizes the host CPU. On Book3s we go a step further and take the PVR from userspace that tells us what kind of CPU we are supposed to virtualize, because we support Book3s_32 and Book3s_64 guests. In order to get that information, we use the sregs ioctl, because we don't want to reset the guest CPU on every normal register set. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
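From the userspace side, picking the virtualized CPU then looks roughly like the sketch below, assuming the PPC kvm_sregs layout carries a pvr field as described above:

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <string.h>

/* Tell KVM which CPU to virtualize by handing it a Processor Version Register. */
static int set_guest_pvr(int vcpu_fd, unsigned int pvr)
{
	struct kvm_sregs sregs;

	memset(&sregs, 0, sizeof(sregs));
	sregs.pvr = pvr;	/* e.g. the PVR of a G4 or a 970 */
	return ioctl(vcpu_fd, KVM_SET_SREGS, &sregs);
}
```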
-
Alexander Graf authored
PowerPC code handles dirty logging in the generic parts at the moment. While that is fine for "return -ENOTSUPP", we need to be rather target-specific when actually implementing it. So let's split it into implementation-specific code, so we can implement it for book3s. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
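The split itself is small: the generic stub moves into the implementation-specific code, roughly as in this sketch, so book3s can later replace it with a real implementation:

```c
#include <linux/kvm_host.h>

/* Implementation-specific dirty-log handler; not supported yet on this target. */
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
{
	return -ENOTSUPP;
}
```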
-
- 30 Oct, 2009 21 commits
-
Benjamin Herrenschmidt authored
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Olof Johansson authored
pasemi_defconfig hasn't been updated for a year. Mostly a refresh of defaults, but this also disables 64K pages. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Jean Delvare authored
The ADT746x chips don't have any register at sub-address 0, so it's better to use an existing register for the initial test read. Signed-off-by: Jean Delvare <khali@linux-fr.org> Cc: Colin Leroy <colin@colino.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Ellerman authored
Defining CONFIG_SPARSE_IRQ enables generic code that gets rid of the static irq_desc array and replaces it with an array of pointers to irq_descs. It also allows node-local allocation of irq_descs; however, we currently don't have the information available to do that, so we just allocate them all on node 0. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Ellerman authored
Move the default case, i.e. when we're just displaying an irq, out of the if, and consolidate all the odd cases (printing the header and footer) at the top. In the process, cope with sparse irq_descs. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Grant Likely <grant.likely@secretlab.ca> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Ellerman authored
Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Grant Likely <grant.likely@secretlab.ca> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Thomas Gleixner authored
Mark all functions which are only called from nvram_init() __init. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: linuxppc-dev@ozlabs.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Thomas Gleixner authored
nvram_clear_error_log() calls ppc_md.nvram_write() even when nvram_error_log_index is -1 (invalid). The nvram_write() function does not check for a negative offset. Check nvram_error_log_index as the other nvram log functions do. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: linuxppc-dev@ozlabs.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
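A minimal sketch of the fix, following the description above (the surrounding declarations come from the existing nvram code):

```c
int nvram_clear_error_log(void)
{
	loff_t tmp_index;
	int clear_word = ERR_FLAG_ALREADY_LOGGED;

	/* Bail out if no valid error-log partition was found, just like the
	 * other nvram log functions do, instead of writing at offset -1. */
	if (nvram_error_log_index == -1)
		return -1;

	tmp_index = nvram_error_log_index;
	if (ppc_md.nvram_write((char *)&clear_word, sizeof(int), &tmp_index) <= 0)
		return 1;	/* write failed */
	return 0;
}
```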
-
Thomas Gleixner authored
nvram_find_partition() has no user. The call site was removed in the arch/powerpc move, but the function stayed. Remove it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: linuxppc-dev@ozlabs.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
irqs_disabled_flags is #defined in linux/irqflags.h when CONFIG_TRACE_IRQFLAGS_SUPPORT is enabled. 64 and 32 bit always have CONFIG_TRACE_IRQFLAGS_SUPPORT enabled so just remove irqs_disabled_flags. This fixes the case when someone needs to include both linux/irqflags.h and asm/hw_irq.h. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
David Gibson authored
The hugepage arch code provides a number of hook functions/macros which mirror the functionality of various normal page pte access functions. Various changes in the normal page accessors (in particular BenH's recent changes to the handling of lazy icache flushing and PAGE_EXEC) have caused the hugepage versions to get out of sync with the originals. In some cases, this is a bug, at least on some MMU types. One of the reasons that some hooks were not identical to the normal page versions is that the fact we're dealing with a hugepage needed to be passed down to use the correct dcache-icache flush function. This patch makes the main flush_dcache_icache_page() function hugepage-aware (by checking for the PageCompound flag). That in turn means we can make set_huge_pte_at() just a call to set_pte_at(), bringing it back into sync. As a bonus, this lets us remove the hash_huge_page_do_lazy_icache() function, replacing it with a call to the hash_page_do_lazy_icache() function it was based on. Some other hugepage pte access hooks - huge_ptep_get_and_clear() and huge_ptep_clear_flush() - are not so easily unified, but this patch at least brings them back into sync with the current versions of the corresponding normal page functions. Signed-off-by: David Gibson <dwg@au1.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
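The core of the change is making the common flush helper compound-page aware, roughly as in this sketch; the hugepage-walking helper is assumed:

```c
void flush_dcache_icache_page(struct page *page)
{
	if (PageCompound(page)) {
		flush_dcache_icache_hugepage(page);	/* flush each sub-page */
		return;
	}

	/* ... existing single-page dcache/icache flush ... */
}
```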
-
David Gibson authored
This patch separates the parts of hugetlbpage.c which are inherently specific to the hash MMU into a new hugetlbpage-hash64.c file. Signed-off-by: David Gibson <dwg@au1.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
David Gibson authored
This patch simplifies the logic used to initialize hugepages on powerpc. The somewhat oddly named set_huge_psize() is renamed to add_huge_page_size() and now does all necessary verification of whether it's given a valid hugepage size (instead of just some) and instantiates the generic hstate structure (but no more). hugetlbpage_init() now steps through the available pagesizes, checks if they're valid for hugepages by calling add_huge_page_size(), and initializes the kmem_caches for the hugepage pagetables. This means we can now eliminate the mmu_huge_psizes array, since we no longer need to pass the sizing information for the pagetable caches from set_huge_psize() into hugetlbpage_init(). Determination of the default huge page size is also moved from the hash code into the general hugepage code. Signed-off-by: David Gibson <dwg@au1.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
David Gibson authored
Currently each available hugepage size uses a slightly different pagetable layout: that is, the bottom level table of pointers to hugepages is a different size, and may branch off from the normal page tables at a different level. Every hugepage-aware path that needs to walk the pagetables must therefore look up the hugepage size from the slice info first, and work out the correct way to walk the pagetables accordingly. Future hardware is likely to add more possible hugepage sizes, more layout options and more mess. This patch therefore reworks the handling of hugepage pagetables to reduce this complexity. In the new scheme, instead of having to consult the slice mask, pagetable walking code can check a flag in the PGD/PUD/PMD entries to see where to branch off to hugepage pagetables, and the entry also contains the information (essentially the hugepage shift) necessary to then interpret that table without recourse to the slice mask. This scheme can be extended neatly to handle multiple levels of self-describing "special" hugepage pagetables, although for now we assume only one level exists. This approach means that only the pagetable allocation path needs to know how the pagetables should be set out. All other (hugepage) pagetable walking paths can just interpret the structure as they go. There already was a flag bit in PGD/PUD/PMD entries for hugepage directory pointers, but it was only used for debug. We alter that flag bit to instead be a 0 in the MSB to indicate a hugepage pagetable pointer (normally it would be 1 since the pointer lies in the linear mapping). This means that asm pagetable walking can test for (and punt on) hugepage pointers with the same test that checks for unpopulated page directory entries (beq becomes bge), since hugepage pointers will always be positive, and normal pointers always negative. While we're at it, we get rid of the confusing (and grep-defeating) #defining of hugepte_shift to be the same thing as mmu_huge_psizes. Signed-off-by: David Gibson <dwg@au1.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
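In effect the test for a hugepage-table pointer becomes a sign check, as in this sketch; the helper mirrors the description above and is not necessarily the exact code:

```c
/* A non-empty page-directory entry with the top bit clear ("positive") points
 * at a hugepage pagetable; normal lower-level tables live in the linear
 * mapping and therefore look "negative". */
static inline int is_hugepd(unsigned long pd_entry)
{
	return pd_entry != 0 && (long)pd_entry > 0;
}
```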
-
David Gibson authored
Currently we have a fair bit of rather fiddly code to manage the various kmem_caches used to store page tables of various levels. We generally have two caches holding some combination of PGD, PUD and PMD tables, plus several more for the special hugepage pagetables. This patch cleans this all up by taking a different approach. Rather than the caches being designated as for PUDs or for hugeptes for 16M pages, the caches are simply allocated to be a specific size. Thus sharing of caches between different types/levels of pagetables happens naturally. The pagetable size, where needed, is passed around encoded in the same way as {PGD,PUD,PMD}_INDEX_SIZE; that is n where the pagetable contains 2^n pointers. Signed-off-by: David Gibson <dwg@au1.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
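A sketch of the size-indexed cache scheme (names and bounds are assumptions): one kmem_cache per table size, where a table of index size n holds 2^n pointers:

```c
#include <linux/slab.h>

#define MAX_PGTABLE_INDEX_SIZE	0xf	/* assumed upper bound */

static struct kmem_cache *pgtable_caches[MAX_PGTABLE_INDEX_SIZE + 1];

static void pgtable_cache_add(unsigned int index_size)
{
	size_t table_size = sizeof(void *) << index_size;	/* 2^n pointers */

	/* Different levels (or hugepage layouts) of the same size share a cache. */
	if (!pgtable_caches[index_size])
		pgtable_caches[index_size] =
			kmem_cache_create("pgtable", table_size, table_size,
					  0, NULL);
}
```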
-
David Gibson authored
Currently, hpte_need_flush() only correctly flushes the given address for normal pages. Callers for hugepages are required to mask the address themselves. But hpte_need_flush() already looks up the page sizes for its own reasons, so this is a rather silly imposition on the callers. This patch alters it to mask based on the pagesize it has looked up itself, and removes the awkward masking code in the hugepage caller. Signed-off-by: David Gibson <dwg@au1.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
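The masking itself is a one-liner once the page size is known inside hpte_need_flush(); roughly, per this sketch using the existing mmu_psize_defs table:

```c
/* Round an address down to the start of its (possibly huge) page. */
static unsigned long mask_to_page_start(unsigned long addr, int psize)
{
	unsigned int shift = mmu_psize_defs[psize].shift;

	return addr & ~((1UL << shift) - 1);
}
```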
-
Brian King authored
When running Active Memory Sharing, the Collaborative Memory Manager (CMM) may mark some pages as "loaned" with the hypervisor. Periodically, the CMM will query the hypervisor for a loan request, which is a single signed value. When kexec'ing into a kdump kernel, the CMM driver in the kdump kernel is not aware of the pages the previous kernel had marked as "loaned", so the hypervisor and the CMM driver are out of sync. Fix the CMM driver to handle this scenario by ignoring requests to decrease the number of loaned pages if we don't think we have any pages loaned. Pages that are marked as "loaned" which are not in the balloon will automatically get switched to "active" the next time we touch the page. This also fixes the case where totalram_pages is smaller than min_mem_mb, which can occur during kdump. Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
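Conceptually the fix is a guard like the sketch below (variable and helper names are hypothetical): ignore requests to shrink the loan when we don't believe we have any pages on loan.

```c
static void cmm_handle_loan_request(long loan_request)
{
	/* A kexec'd kdump kernel may inherit loans it never made; a request to
	 * give pages back then has nothing to act on, so ignore it. */
	if (loan_request < 0 && loaned_pages == 0)
		return;

	loaned_pages_target = loaned_pages + loan_request;
}
```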
-
Michael Ellerman authored
get_irq_desc() is a powerpc-specific version of irq_to_desc(). That is reason enough to remove it, but it also doesn't know about sparse irq_desc support which irq_to_desc() does (when we enable it). Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Grant Likely <grant.likely@secretlab.ca> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Ellerman authored
Rather than open-coding our own check, use irq_has_action() to check if an irq has an action - ie. is "in use". irq_has_action() doesn't take the descriptor lock, but it shouldn't matter - we're just using it as an indicator that the irq is in use. disable_irq_nosync() will also take the descriptor lock before doing anything. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Grant Likely <grant.likely@secretlab.ca> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Ellerman authored
The irq_desc array consumes quite a lot of space, and for systems that don't need or can't have 512 irqs it's just wasted space. The first 16 are reserved for ISA, so the minimum of 32 is really 16 - and no one has asked for more than 512 so leave that as the maximum. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Anton Vorontsov authored
The Linux power management subsystem supports a large set of new PM callbacks that are crucial for proper suspend and hibernation support in drivers. This patch implements support for dev_pm_ops, preserving support for the legacy callbacks. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
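A hedged sketch of the shape of the change (driver and callback names are made up): provide a dev_pm_ops table with new-style callbacks, which can still dispatch to a driver's legacy suspend/resume hooks internally.

```c
#include <linux/pm.h>
#include <linux/device.h>

static int mybus_pm_suspend(struct device *dev)
{
	/* In a real driver, fall back to the attached driver's legacy
	 * suspend hook here if it still provides one. */
	return 0;
}

static int mybus_pm_resume(struct device *dev)
{
	/* Likewise, fall back to a legacy resume hook if present. */
	return 0;
}

static const struct dev_pm_ops mybus_pm_ops = {
	.suspend = mybus_pm_suspend,
	.resume  = mybus_pm_resume,
	.freeze  = mybus_pm_suspend,
	.thaw    = mybus_pm_resume,
};
```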
-