1. 09 Jul, 2008 14 commits
  2. 03 Jul, 2008 11 commits
  3. 01 Jul, 2008 15 commits
    • powerpc: Update for VSX core file and ptrace · f3e909c2
      Michael Neuling authored
      This correctly hooks the VSX dump into Roland McGrath's core file
      infrastructure.  It adds the VSX dump information as an additional ELF
      note in the core file (after talking more to the toolchain/gdb guys).
      This also ensures the formats are consistent between signals, ptrace
      and core files.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Fix compile error for CONFIG_VSX · 436db693
      Michael Neuling authored
      Fix compile error when CONFIG_VSX is enabled.
      
      arch/powerpc/kernel/signal_64.c: In function 'restore_sigcontext':
      arch/powerpc/kernel/signal_64.c:241: error: 'i' undeclared (first use in this function)
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Keep 3 high personality bytes across exec · a91a03ee
      Eric B Munson authored
      Currently, when a 32-bit process is exec'd on a 64-bit powerpc host,
      the value in the top three bytes of the personality is clobbered.  This
      patch adds a check in the SET_PERSONALITY macro that will carry all the
      values in the top three bytes across the exec.
      
      These three bytes currently carry flags to disable address randomisation,
      limit the address space, force zeroing of an mmapped page, etc.  Should an
      application set any of these bits, they will be maintained and honoured in
      a homogeneous environment but discarded and ignored in a heterogeneous
      one.  So if an application requires all mmapped pages to be initialised
      to zero and a wrapper is used to set up the personality and exec the target,
      these flags will remain set in an all-32-bit or all-64-bit environment, but
      they will be lost in the exec on a mixed 32/64-bit environment.  Losing these
      bits means that the same application would behave differently in different
      environments.  Tested on a POWER5+ machine with a 64-bit kernel and a mixed
      64/32-bit user space.
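      The masking this relies on can be sketched as a small user-space demo.
      The mask and flag values below match <linux/personality.h>, but the
      helper name is hypothetical, not the kernel's SET_PERSONALITY macro:

```c
#include <assert.h>

/* Values as defined in <linux/personality.h>. */
#define PER_MASK          0x00ff
#define PER_LINUX32       0x0008
#define ADDR_NO_RANDOMIZE 0x0040000

/* Hypothetical helper modelling the fixed behaviour: force the low
 * personality byte to PER_LINUX32 for a 32-bit task while keeping
 * everything in the top three bytes (the behaviour flags) intact. */
static unsigned int keep_high_personality_bytes(unsigned int old)
{
        return PER_LINUX32 | (old & ~PER_MASK);
}
```

      With this masking, a flag such as ADDR_NO_RANDOMIZE set by a wrapper
      before the exec is still present afterwards, instead of being clobbered.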
      Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Make sure that include/asm-powerpc/spinlock.h does not trigger compilation warnings · 89b5810f
      Bart Van Assche authored
      When compiling kernel modules for ppc that include <linux/spinlock.h>,
      gcc prints a warning message every time it encounters a function
      declaration where the inline keyword appears after the return type.
      This makes sure that the order of the inline keyword and the return
      type is as gcc expects it.  Additionally, the __inline__ keyword is
      replaced by inline, as checkpatch expects.
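      A minimal user-space illustration of the keyword reordering (the
      function names here are made up for the demo):

```c
/* gcc with -Wall/-Wold-style-declaration warns when 'inline' follows
 * the return type, as the pre-patch spinlock.h declarations did: */
static int inline add_old_style(int a, int b) { return a + b; }

/* The patched ordering is warning-free and checkpatch-clean: */
static inline int add_fixed(int a, int b) { return a + b; }
```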
      Signed-off-by: Bart Van Assche <bart.vanassche@gmail.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Explicitly copy elements of pt_regs · fcbc5a97
      Stephen Rothwell authored
      Gcc 4.3 produced this warning:
      
      arch/powerpc/kernel/signal_64.c: In function 'restore_sigcontext':
      arch/powerpc/kernel/signal_64.c:161: warning: array subscript is above array bounds
      
      This is caused by us copying to aliases of elements of the pt_regs
      structure.  Make those explicit.
      
      This adds one extra __get_user and unrolls a loop.
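      The aliasing problem can be illustrated with a simplified stand-in for
      pt_regs (field names and array sizes here are illustrative, not the
      kernel's):

```c
#include <stddef.h>

/* A GPR array followed by named special-purpose registers.  Indexing
 * gpr[i] with i past the array bound to reach the named fields is the
 * kind of aliasing that made gcc 4.3 warn "array subscript is above
 * array bounds". */
struct fake_pt_regs {
        unsigned long gpr[4];
        unsigned long nip;
        unsigned long ctr;
};

/* The fix in spirit: keep the loop strictly in bounds and copy each
 * named field explicitly (one extra copy, loop unrolled). */
static void restore_regs(struct fake_pt_regs *regs, const unsigned long *src)
{
        size_t i;

        for (i = 0; i < 4; i++)
                regs->gpr[i] = src[i];
        regs->nip = src[4];     /* explicit, no out-of-bounds aliasing */
        regs->ctr = src[5];
}
```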
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Remove experimental status of kdump on 64-bit powerpc · 3420b5da
      Bernhard Walle authored
      This removes the experimental status of kdump on PPC64.  kdump has
      been available on PPC64 for more than a year now and has proven to be stable.
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add 64 bit version of huge_ptep_set_wrprotect · 016b33c4
      Andy Whitcroft authored
      The implementation of huge_ptep_set_wrprotect() directly calls
      ptep_set_wrprotect() to mark a hugepte write protected.  However this
      call is not appropriate on ppc64 kernels as this is a small page only
      implementation.  This can lead to the hash not being flushed correctly
      when a mapping is being converted to COW, allowing processes to continue
      using the original copy.
      
      Currently huge_ptep_set_wrprotect() unconditionally calls
      ptep_set_wrprotect().  This is fine on ppc32 kernels as this call is
      generic.  On 64 bit this is implemented as:
      
      	pte_update(mm, addr, ptep, _PAGE_RW, 0);
      
      On ppc64 this last parameter is the page size and is passed directly on
      to hpte_need_flush():
      
      	hpte_need_flush(mm, addr, ptep, old, huge);
      
      And this directly affects the page size we pass to flush_hash_page():
      
      	flush_hash_page(vaddr, rpte, psize, ssize, 0);
      
      As this changes the way the hash is calculated we will flush the wrong
      pages, potentially leaving live hashes to the original page.
      
      Move the definition of huge_ptep_set_wrprotect() to the 32/64 bit specific
      headers.
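      A self-contained model of the fix follows; pte_update() is stubbed and
      the names and flag values are simplified placeholders, not the kernel's:

```c
#define FAKE_PAGE_RW 0x1UL

static int last_huge_flag = -1;

/* Stub modelling ppc64 pte_update(mm, addr, ptep, clr, huge): the
 * final argument is forwarded to hpte_need_flush() and selects the
 * page size used when flushing the hash. */
static unsigned long fake_pte_update(unsigned long *ptep, unsigned long clr,
                                     int huge)
{
        unsigned long old = *ptep;

        *ptep = old & ~clr;
        last_huge_flag = huge;
        return old;
}

/* The 64-bit-specific helper passes huge = 1, where the generic
 * small-page ptep_set_wrprotect() effectively passed 0 and so caused
 * the wrong hash pages to be flushed on COW conversion. */
static void fake_huge_ptep_set_wrprotect(unsigned long *ptep)
{
        fake_pte_update(ptep, FAKE_PAGE_RW, 1);
}
```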
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Prevent memory corruption due to cache invalidation of unaligned DMA buffer · 03d70617
      Andrew Lewis authored
      On PowerPC processors with non-coherent cache architectures the DMA
      subsystem calls invalidate_dcache_range() before performing a DMA read
      operation.  If the address and length of the DMA buffer are not aligned
      to a cache-line boundary this can result in memory outside of the DMA
      buffer being invalidated in the cache.  If this memory has an
      uncommitted store then the data will be lost and a subsequent read of
      that address will result in an old value being returned from main memory.
      
      Only when the DMA buffer starts on a cache-line boundary and is an exact
      multiple of the cache-line size can invalidate_dcache_range() be called;
      otherwise flush_dcache_range() must be called.  flush_dcache_range()
      will first flush uncommitted writes, and then invalidate the cache.
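      The safety condition can be sketched as a predicate (the cache-line
      size and function name below are illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE_SIZE 32  /* illustrative; the real value is CPU-specific */

/* invalidate_dcache_range() is only safe when the buffer occupies
 * whole cache lines; any partial line may be shared with unrelated
 * data whose dirty contents would be destroyed, so those buffers must
 * go through flush_dcache_range(), which writes dirty lines back
 * before invalidating them. */
static bool safe_to_invalidate(uintptr_t start, size_t len)
{
        return (start % CACHE_LINE_SIZE) == 0 &&
               (len % CACHE_LINE_SIZE) == 0;
}
```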
      
      Signed-off-by: Andrew Lewis <andrew-lewis at netspace.net.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc/bootwrapper: Pad .dtb by default · 9d4ae9fc
      Kumar Gala authored
      Since most bootloaders or wrappers tend to update or add some information
      to the .dtb they are handed, they need some working space to do that in.
      
      By default, add 1K of padding via a default setting of DTS_FLAGS.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add CONFIG_VSX config option · 96d5b52c
      Michael Neuling authored
      Add the CONFIG_VSX config build option.  It must be compiled with POWER4,
      FPU and ALTIVEC also enabled.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add VSX context save/restore, ptrace and signal support · ce48b210
      Michael Neuling authored
      This patch extends the floating point save and restore code to use the
      VSX load/stores when VSX is available.  This will make FP context
      save/restore marginally slower on FP only code, when VSX is available,
      as it has to load/store 128bits rather than just 64bits.
      
      Mixing FP, VMX and VSX code will get constant architected state.
      
      The signals interface is extended to enable access to VSR 0-31
      doubleword 1 after discussions with tool chain maintainers.  Backward
      compatibility is maintained.
      
      The ptrace interface is also extended to allow access to VSR 0-31 full
      registers.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add VSX assembler code macros · 72ffff5b
      Michael Neuling authored
      This adds macros for the VSX load/store instructions, as most
      binutils releases are not going to support them for a while.
      
      Also add VSX register save/restore macros and vsr[0-63] register definitions.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Add VSX CPU feature · b962ce9d
      Michael Neuling authored
      Add a VSX CPU feature.  Also add code to detect if VSX is available
      from the device tree.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Introduce VSX thread_struct and CONFIG_VSX · c6e6771b
      Michael Neuling authored
      The layout of the new VSR registers and how they overlap on top of the
      legacy FPR and VR registers is:
      
                         VSR doubleword 0               VSR doubleword 1
                ----------------------------------------------------------------
        VSR[0]  |             FPR[0]            |                              |
                ----------------------------------------------------------------
        VSR[1]  |             FPR[1]            |                              |
                ----------------------------------------------------------------
                |              ...              |                              |
                |              ...              |                              |
                ----------------------------------------------------------------
        VSR[30] |             FPR[30]           |                              |
                ----------------------------------------------------------------
        VSR[31] |             FPR[31]           |                              |
                ----------------------------------------------------------------
        VSR[32] |                             VR[0]                            |
                ----------------------------------------------------------------
        VSR[33] |                             VR[1]                            |
                ----------------------------------------------------------------
                |                              ...                             |
                |                              ...                             |
                ----------------------------------------------------------------
        VSR[62] |                             VR[30]                           |
                ----------------------------------------------------------------
        VSR[63] |                             VR[31]                           |
                ----------------------------------------------------------------
      
      VSX has 64 128-bit registers.  The first 32 registers overlap with the FP
      registers and hence extend them with an additional 64 bits.  The
      second 32 registers overlap with the VMX registers.
      
      This commit introduces the thread_struct changes required to reflect
      this register layout.  Ptrace and signals code is updated so that the
      floating point registers are correctly accessed from the thread_struct
      when CONFIG_VSX is enabled.
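      The overlap can be modelled with a C union; the field names below are
      for the demo only, and the kernel's actual thread_struct layout differs:

```c
#include <stdint.h>

/* Each VSR is 128 bits.  In this model, VSR[0-31] doubleword 0
 * aliases FPR[0-31], and VSR[32-63] alias VR[0-31] in full. */
typedef struct { uint64_t dw[2]; } vsr_t;

union vsx_model {
        vsr_t vsr[64];
        struct {
                struct { uint64_t fpr; uint64_t unused; } fp[32]; /* VSR 0-31  */
                vsr_t vr[32];                                     /* VSR 32-63 */
        } legacy;
};
```

      Storing through vsr[0].dw[0] and reading legacy.fp[0].fpr yields the
      same doubleword, which is the state-sharing the save/restore, ptrace
      and signal code has to keep consistent.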
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Make load_up_fpu and load_up_altivec callable · 6f3d8e69
      Michael Neuling authored
      Make load_up_fpu and load_up_altivec callable so they can be reused by
      the VSX code.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>