- 15 Feb, 2013 21 commits
-
-
Michael Neuling authored
This hooks the new transactional memory code into context switching, the FP/VMX/VSX unavailable exceptions and exception return. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
We do lazy FP but not lazy TM (ie. userspace starts with MSR TM=1 FP=0). Hence if userspace does an FP instruction during a transaction, we'll take an FP unavailable exception. This adds the functions needed to handle this case. We have to inject the current FP state into the checkpoint so that the hardware can decide what to do with the transaction. We can't inject only the FP state, so we have to do a full treclaim and recheckpoint just to inject the FP state. This will cause the transaction to be marked as aborted by the hardware. This just adds the routines needed to do this for FP, VMX and VSX. It doesn't hook them into the rest of the code yet. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
These should never happen since we always turn on MSR TM when in userspace. We don't do lazy TM. Hence if we hit this, we barf and kill the task as something's gone horribly wrong. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
powerpc: Add reclaim and recheckpoint functions for context switching transactional memory processes. When we switch out a task, we need to save both the checkpointed and the speculated state into the thread struct. Similarly when we are switching in a task we need to load both the checkpointed and speculated state. If the task was using FP, we non-lazily reload both the original and the speculative FP register states. This is because the kernel doesn't see if/when a TM rollback occurs, so if we take an FP unavailable exception later, we are unable to determine which set of FP regs needs to be restored. This simply adds these functions. It doesn't hook them into the existing code yet. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
This adds functions to restore the state of the FP/VSX registers from what's stored in the thread_struct. Two versions for FP/VSX are required since one restores them from the transactional/checkpointed side of the thread_struct and the other from the speculated side. Similar functions are added for VMX registers. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Here we add the helper functions to be used when context switching. These allow us to fully reclaim and recheckpoint a transaction. We introduce a new paca field called tm_scratch to help us store away register values when doing the low-level TM reclaim register save. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
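For orientation, the shape of such a paca addition is roughly the fragment below inside struct paca_struct; this is a sketch based on the field name mentioned above, not the exact kernel diff.

#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
        u64 tm_scratch;         /* scratch slot used by the low-level TM reclaim path */
#endif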
-
Michael Neuling authored
Add transactional memory paca scratch register to show_regs. This is useful for debugging. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
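Dumping that scratch value from show_regs() would plausibly look like the line below; the format string and config guard are assumptions for illustration, not the verified kernel code.

#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
        printk("PACATMSCRATCH: %016llx ", get_paca()->tm_scratch);     /* illustrative */
#endif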
-
Michael Neuling authored
Defines for MSR bits and transactional memory related SPRs TFIAR, TEXASR and TEXASRU. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
This adds new macros for saving and restoring checkpointed architected state from and to the thread_struct. It also adds some debugging macros for when your brain explodes trying to debug your transactional memory enabled kernel. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Set of new architected state for saving away on context switch. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Here we define the new instructions we need for transactional memory in the kernel. This is so we can support compiling with binutils that don't support the new transactional memory instructions. Transactional memory results in two sets of architected state (GPRs/VSRs etc). treclaim allows us to read the checkpointed state (from the tbegin) so that we can store it away on a context switch. It does this by overwriting the existing architected state, so you have to save that away before you treclaim. treclaim will also abort a transaction, so you can give it a register value which contains an abort reason. trecheckpoint allows us to inject into the checkpointed state as if it were at the tbegin. It does this by copying the current architected state into the checkpointed state. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
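The usual trick for supporting instructions that older binutils can't assemble is to emit the raw encoding with .long from a macro usable in assembly files. A minimal sketch of that pattern follows; the opcode values are written from memory and should be checked against the ISA, and the field macro mirrors the kernel's ppc-opcode.h style.

/* Sketch only -- verify encodings against the Power ISA before use. */
#define ___PPC_RA(a)            (((a) & 0x1f) << 16)
#define PPC_INST_TRECLAIM       0x7c00075d      /* treclaim. RA (assumed encoding) */
#define PPC_INST_TRECHKPT       0x7c0007dd      /* trechkpt. (assumed encoding) */

#define TRECLAIM(r)     .long PPC_INST_TRECLAIM | ___PPC_RA(r)
#define TRECHKPT        .long PPC_INST_TRECHKPT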
-
Michael Neuling authored
Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Ellerman authored
In commit 466921c5 we added a hack to set the paca data_offset to zero so that per-cpu accesses would work on the boot cpu prior to the per-cpu areas being set up. This fixed a problem with lockdep touching per-cpu areas very early in boot. However if we combine CONFIG_LOCK_STAT=y with any of the PPC_EARLY_DEBUG options, we can hit the same problem in udbg_early_init(). To avoid that we need to set the data_offset of the boot_paca also. So factor out the fixup logic and call it for both the boot_paca and the paca of the boot cpu. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Tested-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
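A sketch of the factored-out fixup described above; the function name and call sites are illustrative, not the exact patch.

static void __init fixup_boot_paca(struct paca_struct *paca)
{
        /* Allow per-cpu accesses (lockdep, udbg) before per-cpu areas exist. */
        paca->data_offset = 0;
}

/* in early_setup(): apply the fixup to both pacas */
fixup_boot_paca(&boot_paca);
fixup_boot_paca(get_paca());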
-
Geoff Levand authored
The powerpc boot_paca symbol is now only used within the early_setup() routine, so move it from its global definition into early_setup(). Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Geoff Levand authored
Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Geoff Levand authored
Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Geoff Levand authored
To allow more control over the verbosity of ps3_result(), add a check for the preprocessor macro PS3_VERBOSE_RESULT that builds a verbose version of the ps3_result() routine. Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
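A rough sketch of the compile-time switch; the lookup helper name below is a placeholder, not necessarily the real PS3 routine.

static inline const char *ps3_result(int result)
{
#if defined(DEBUG) || defined(PS3_VERBOSE_RESULT)
        return ps3_lv1_error_string(result);    /* placeholder name for the verbose lookup */
#else
        return "(ps3_result compiled out)";
#endif
}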
-
Paul Mackerras authored
Commit a413f474 ("powerpc: Disable relocation on exceptions whenever PR KVM is active") added calls to pSeries_disable_reloc_on_exc() and pSeries_enable_reloc_on_exc() to book3s_pr.c, and added declarations of those functions to <asm/hvcall.h>, but didn't add an include of <asm/hvcall.h> to book3s_pr.c. 64-bit kernels seem to get hvcall.h included via some other path, but 32-bit kernels fail to compile with: arch/powerpc/kvm/book3s_pr.c: In function ‘kvmppc_core_init_vm’: arch/powerpc/kvm/book3s_pr.c:1300:4: error: implicit declaration of function ‘pSeries_disable_reloc_on_exc’ [-Werror=implicit-function-declaration] arch/powerpc/kvm/book3s_pr.c: In function ‘kvmppc_core_destroy_vm’: arch/powerpc/kvm/book3s_pr.c:1316:4: error: implicit declaration of function ‘pSeries_enable_reloc_on_exc’ [-Werror=implicit-function-declaration] cc1: all warnings being treated as errors make[2]: *** [arch/powerpc/kvm/book3s_pr.o] Error 1 make[1]: *** [arch/powerpc/kvm] Error 2 make: *** [sub-make] Error 2 This fixes it by adding an include of hvcall.h. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Paul Mackerras authored
The CFAR (Come-From Address Register) is a useful debugging aid that exists on POWER7 processors. Currently HV KVM doesn't save or restore the CFAR register for guest vcpus, making the CFAR of limited use in guests. This adds the necessary code to capture the CFAR value saved in the early exception entry code (it has to be saved before any branch is executed), save it in the vcpu.arch struct, and restore it on entry to the guest. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Paul Mackerras authored
Some of the interrupt vectors on 64-bit POWER server processors are only 32 bytes long, which is not enough for the full first-level interrupt handler. For these we currently just have a branch to an out-of-line handler. However, this means that we corrupt the CFAR (come-from address register) on POWER7 and later processors. To fix this, we split the EXCEPTION_PROLOG_1 macro into two pieces: EXCEPTION_PROLOG_0 contains the part up to the point where the CFAR is saved in the PACA, and EXCEPTION_PROLOG_1 contains the rest. We then put EXCEPTION_PROLOG_0 in the short interrupt vectors before we branch to the out-of-line handler, which contains the rest of the first-level interrupt handler. To facilitate this, we define new _OOL (out of line) variants of STD_EXCEPTION_PSERIES, etc. In order to get EXCEPTION_PROLOG_0 to be short enough, i.e., no more than 6 instructions, it was necessary to move the stores that move the PPR and CFAR values into the PACA into __EXCEPTION_PROLOG_1 and to get rid of one of the two HMT_MEDIUM instructions. Previously there was a HMT_MEDIUM_PPR_DISCARD before the prolog, which was nop'd out on processors with the PPR (POWER7 and later), and then another HMT_MEDIUM inside the HMT_MEDIUM_PPR_SAVE macro call inside __EXCEPTION_PROLOG_1, which was nop'd out on processors without PPR. Now the HMT_MEDIUM inside EXCEPTION_PROLOG_0 is there unconditionally and the HMT_MEDIUM_PPR_DISCARD is not strictly necessary, although this leaves it in for the interrupt vectors where there is room for it. Previously we had a handler for hypervisor maintenance interrupts at 0xe50, which doesn't leave enough room for the vector for hypervisor emulation assist interrupts at 0xe40, since we need 8 instructions. The 0xe50 vector was only used on POWER6, as the HMI vector was moved to 0xe60 on POWER7. Since we don't support running in hypervisor mode on POWER6, we just remove the handler at 0xe50. This also changes denorm_exception_hv to use EXCEPTION_PROLOG_0 instead of open-coding it, and removes the HMT_MEDIUM_PPR_DISCARD from the relocation-on vectors (since any CPU that supports relocation-on interrupts also has the PPR). Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
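A heavily simplified illustration of the split (not the kernel's actual macro, which also handles PPR and CPU feature sections): the in-vector piece must fit the 32-byte slot and read the CFAR before any branch can clobber it.

#define EXCEPTION_PROLOG_0(area)                        \
        GET_PACA(r13);                                  \
        std     r9,area+EX_R9(r13);                     \
        std     r10,area+EX_R10(r13);                   \
        mfspr   r10,SPRN_CFAR;                          \
        std     r10,area+EX_CFAR(r13)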
-
Paul Mackerras authored
The Cell processor doesn't support relocation-on interrupts, so we don't need relocation-on versions of the interrupt vectors that are purely Cell-specific. This removes them. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 08 Feb, 2013 4 commits
-
-
Ian Munsie authored
This patch adds support for enabling and context switching the Target Address Register in Power8. The TAR is a new special purpose register that can be used for computed branches with the bctar[l] (branch conditional to TAR) instruction in the same manner as the count and link registers. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
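Context switching a new SPR like this generally boils down to a save/restore pair; an illustrative sketch, assuming a tar field in thread_struct and the SPRN_TAR define, follows.

/* Illustrative only: not the exact __switch_to() code. */
static inline void switch_tar(struct task_struct *prev, struct task_struct *next)
{
        prev->thread.tar = mfspr(SPRN_TAR);     /* save outgoing task's TAR */
        mtspr(SPRN_TAR, next->thread.tar);      /* install incoming task's TAR */
}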
-
Nishanth Aravamudan authored
pseries/iommu: remove DDW on kexec. We currently insert a property in the device-tree when we successfully configure DDW for a given slot. This was meant to be an optimization to speed up kexec/kdump, so that we don't need to make the RTAS calls again to re-configure DDW in the new kernel. However, we end up tripping a plpar_tce_stuff failure on kexec/kdump because we unconditionally parse the ibm,dma-window property for the node at bus/dev setup time. This property contains the 32-bit DMA window LIOBN, which is distinct from the DDW window's. We pass that LIOBN (via iommu_table_init -> iommu_table_clear -> tce_free -> tce_freemulti_pSeriesLP) to plpar_tce_stuff, which fails because that 32-bit window is no longer present after 25ebc45b ("powerpc/pseries/iommu: remove default window before attempting DDW manipulation"). I believe the simplest, easiest-to-maintain fix is to change our initcall so that, rather than detecting and updating the new kernel's DDW knowledge, it just removes all DDW configurations. When the drivers re-initialize, we will set everything back up as it was before. Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
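The shape of such an initcall might look like the sketch below; remove_ddw() is named in the text, while the property walked here and the initcall wrapper are assumptions for illustration.

static int __init disable_ddw_after_kexec(void)
{
        struct device_node *dn;

        /* Tear down any dynamic DMA windows left over from the previous kernel;
         * drivers recreate them when they reinitialise. */
        for_each_node_with_property(dn, "ibm,ddw-applicable")
                remove_ddw(dn);

        return 0;
}
machine_arch_initcall(pseries, disable_ddw_after_kexec);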
-
Nishanth Aravamudan authored
The parameter is unused, and complicates a following fix. Just remove it. Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Daniel Borkmann authored
It seems, we're fine with just annotating the two functions. Thus, this fixes the following build warnings on ppc64: WARNING: arch/powerpc/sysdev/xics/built-in.o(.text+0x1664): The function .ics_rtas_init() references the function __init .xics_register_ics(). This is often because .ics_rtas_init lacks a __init annotation or the annotation of .xics_register_ics is wrong. WARNING: arch/powerpc/sysdev/built-in.o(.text+0x6044): The function .ics_rtas_init() references the function __init .xics_register_ics(). This is often because .ics_rtas_init lacks a __init annotation or the annotation of .xics_register_ics is wrong. WARNING: arch/powerpc/kernel/built-in.o(.text+0x2db30): The function .start_secondary() references the function __cpuinit .vdso_getcpu_init(). This is often because .start_secondary lacks a __cpuinit annotation or the annotation of .vdso_getcpu_init is wrong. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 29 Jan, 2013 13 commits
-
-
Nishanth Aravamudan authored
There are now two kinds of DMA windows that might be presented by PowerVM DDW support -- huge windows (that can map all of system memory regardless of the LPAR configuration) and non-huge windows (which can't). They are implemented slightly differently in PowerVM, and thus have different characteristics. The most obvious is that slot isolate doesn't clear the TCEs/window for us with non-huge windows. Thus, when a DLPAR operation occurs on a slot using a non-huge window, TCEs are still present (the notifier chain doesn't currently remove them explicitly) and the DLPAR fails. Fix this by calling remove_ddw() first, which will unmap the DDW TCEs. Note: a corresponding change to drmgr is needed to actually successfully DLPAR, such that the device-tree update (which causes the notifier chain to fire) occurs before slot isolate. Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
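Roughly, the notifier-side fix amounts to tearing the window down when the node detaches; handler name and action constant below follow the usual OF reconfig conventions and should be read as a sketch rather than the exact diff.

static int iommu_reconfig_notifier(struct notifier_block *nb,
                                   unsigned long action, void *data)
{
        struct device_node *np = data;

        if (action == OF_RECONFIG_DETACH_NODE)
                remove_ddw(np);         /* unmap the DDW TCEs before slot isolate */

        return NOTIFY_OK;
}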
-
Nishanth Aravamudan authored
tce_clearrange_multi_pSeriesLP is attempting to iterate over all TCEs in a given range. However, it is not advancing the dma_offset value passed to plpar_tce_stuff via the next value. This prevents DLPAR from completing, because TCEs are still present at slot isolation time. Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
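The essence of the fix is to move the offset forward on each pass through the loop; an illustrative shape follows, with variable names simplified rather than copied from the pseries code.

do {
        limit = min_t(long, num_tce, 512);

        rc = plpar_tce_stuff(liobn, dma_offset, 0, limit);

        /* the missing step: advance before the next H_STUFF_TCE call */
        dma_offset += limit << tce_shift;
        num_tce    -= limit;
} while (num_tce > 0 && rc == H_SUCCESS);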
-
Michael Neuling authored
Change the hardware breakpoint code so that we can support wider-ranged breakpoints. This means both ptrace and perf hardware breakpoints can use breakpoints up to 512 bytes long when using the DAWR, and only 8 bytes when using the DABR. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Currently we set the length field in the DAWR to 0, which defaults it to one doubleword (64 bits), the same as the DABR. Change this so that we can set it to the longer values supported by the DAWR. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
sukadev@linux.vnet.ibm.com authored
perf/Power: PERF_EVENT_IOC_ENABLE does not reenable event. If we disable a perf event because we exceeded the specified ->event_limit, power_pmu_stop() sets the PERF_HES_STOPPED flag on the event. If the application then re-enables the event using the PERF_EVENT_IOC_ENABLE ioctl, we don't ever clear this STOPPED flag. Consequently, userspace is never notified of the event. The following message has more background and a test case. http://lists.eecs.utk.edu/pipermail/ptools-perfapi/2012-October/002528.html Used the following test cases to verify that this patch works on latest PAPI. $ papi.git/src/ctests/nonthread PAPI_TOT_CYC@5000000 $ papi.git/src/ctests/overflow_single_event Changelog[v2]: - [Paul Mackerras] Also clear PERF_HES_UPTODATE flag since we are restarting the event; cleanup comments and patch description. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
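A simplified sketch of the ->start() side of the fix, clearing the stopped/up-to-date state before re-arming the counter; locking and helper details are abbreviated from memory, so treat this as illustrative.

static void power_pmu_start(struct perf_event *event, int ef_flags)
{
        unsigned long flags;
        unsigned long val = 0;
        s64 left;

        if (!event->hw.idx || !event->hw.sample_period)
                return;

        local_irq_save(flags);
        perf_pmu_disable(event->pmu);

        event->hw.state = 0;    /* drop PERF_HES_STOPPED and PERF_HES_UPTODATE */

        left = local64_read(&event->hw.period_left);
        if (left < 0x80000000L)
                val = 0x80000000L - left;
        write_pmc(event->hw.idx, val);  /* re-arm the counter */

        perf_event_update_userpage(event);
        perf_pmu_enable(event->pmu);
        local_irq_restore(flags);
}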
-
Li Zhong authored
Use local_paca directly in the macro SHARED_PROCESSOR, as all processors have the same value for the shared_proc field, so we don't need to worry about races here. Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
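The resulting macro is presumably along these lines; a sketch, not the verified diff.

#define SHARED_PROCESSOR        (local_paca->lppaca_ptr->shared_proc)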
-
Suzuki K. Poulose authored
Uprobes uses emulate_step in sstep.c, but we haven't explicitly specified the dependency. On pseries HAVE_HW_BREAKPOINT protects us, but 44x has no such luxury. Consolidate other users that depend on sstep and create a new config option. Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com> Cc: linuxppc-dev@ozlabs.org Cc: stable@vger.kernel.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Benjamin Collins authored
CTS-1000 is based on the P4080. GPIO 27 is used to signal the FPGA to switch off power, and IRQ 8 is associated with a front-panel button press (which we use to call orderly_poweroff()). The relevant device-tree looks like this:

gpio0: gpio@130000 {
        compatible = "fsl,qoriq-gpio";
        reg = <0x130000 0x1000>;
        interrupts = <55 2 0 0>;
        #gpio-cells = <2>;
        gpio-controller;

        /* Allows powering off the system via GPIO signal. */
        gpio-halt@27 {
                compatible = "sgy,gpio-halt";
                gpios = <&gpio0 27 0>;
                interrupts = <8 1 0 0>;
        };
};

Because the driver cannot match on sgy,gpio-halt (the node is never processed through of_platform), it matches on fsl,qoriq-gpio and then checks child nodes for the matching sgy,gpio-halt. This also ensures that the GPIO controller is detected prior to sgy_cts1000's probe callback, since that node won't match via of_platform until the controller is registered. Also, because the GPIO handler for triggering system poweroff might sleep, the IRQ uses a workqueue to call orderly_poweroff(). As a final note, this driver may be expanded for other features specific to the CTS-1000. Signed-off-by: Ben Collins <ben.c@servergy.com> Cc: Jack Smith <jack.s@servergy.com> Cc: Vihar Rai <vihar.r@servergy.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
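The matching strategy described above typically looks like this in a probe routine; a sketch using the standard OF helpers, with error handling and the actual halt setup trimmed.

static int gpio_halt_probe(struct platform_device *pdev)
{
        struct device_node *child, *halt_node = NULL;

        /* We matched on the fsl,qoriq-gpio parent; look for our child node. */
        for_each_child_of_node(pdev->dev.of_node, child) {
                if (of_device_is_compatible(child, "sgy,gpio-halt")) {
                        halt_node = child;
                        break;
                }
        }
        if (!halt_node)
                return -ENODEV;

        /* ... read gpios/interrupts from halt_node and register the IRQ handler
         * that schedules orderly_poweroff() from a workqueue ... */
        return 0;
}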
-
Scott Wood authored
While this should be harmless now that distribute_irqs obeys MPIC_SINGLE_DEST_CPU, there's no reason to enable this on mpc85xx/mpc86xx since MPIC_SINGLE_DEST_CPU will always be set. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Scott Wood authored
Previously we were setting an illegal configuration on mpc85xx MPICs if CONFIG_IRQ_ALL_CPUS is enabled (which for some reason it is in mpc85xx_smp_defconfig). Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
We have a mix of decimal and hex here, so let's make them consistently hex. Also, strace will print them in hex if it can't decode them, so having them in hex here makes it easier to match up. No functional change. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Cody P Schafer authored
The only persistent change made by this loop is calling memblock_set_node() once for each memblock, which is not useful (and has no effect) as memblock_set_node() is not called with any memblock-specific parameters. Substitute a single memblock_set_node() call. Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
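So the loop collapses to something like the single call below; note that the memblock_set_node() signature has changed across kernel versions, so this matches the three-argument form of that era and is a sketch only.

memblock_set_node(0, (phys_addr_t)ULLONG_MAX, 0);       /* everything on node 0 */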
-
Benjamin Herrenschmidt authored
Merge "merge" branch to bring in various bug fixes that are going into 3.8
-
- 28 Jan, 2013 2 commits
-
-
Tiejun Chen authored
With lazy interrupts, we always call __check_irq_replay(), which uses decrementers_next_tb to check whether we need to replay a timer interrupt. So in the hotplug case we also need to set decrementers_next_tb to MAX to make sure __check_irq_replay() doesn't replay a timer interrupt on return, as we expect; otherwise we'll trap here infinitely. Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
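The key line is something like the assignment below, executed for the CPU going offline; the per-cpu variable lives in the powerpc timer code, so the real change sits there or goes through a helper, and the accessor shown is the one used in kernels of that vintage.

/* no decrementer event pending, so __check_irq_replay() won't replay one */
__get_cpu_var(decrementers_next_tb) = ~(u64)0;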
-
Cong Ding authored
The variable backup_current_thread_info isn't freed before exiting the function. Signed-off-by: Cong Ding <dinggnu@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-