1. 04 Jan, 2012 7 commits
    • powerpc/fsl: update compatible on fsl 16550 uart nodes · f706bed1
      Kumar Gala authored
      The Freescale serial ports are pretty much a 16550; however, there are
      some FSL-specific bugs and features.  Add a "fsl,ns16550" compatible
      string to allow code to handle those FSL-specific issues.
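
      A minimal sketch, assuming a probe routine with a device node np and
      an illustrative FSL-specific IRQ hook, of how code can key off the
      new string:

          /* "compatible" is a list, so "fsl,ns16550" can gate the
           * FSL-specific handling while generic 16550 code still
           * matches "ns16550". */
          if (of_device_is_compatible(np, "fsl,ns16550"))
                  port->handle_irq = fsl_ns16550_handle_irq;  /* hypothetical */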
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    • powerpc/85xx: fix PCI and localbus properties in p1022ds.dts · 07256963
      Timur Tabi authored
      The PCI ranges, localbus reg, and localbus chip-select 2 range do not
      match the memory map set up by the bootloader.
      Signed-off-by: Timur Tabi <timur@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    • powerpc/85xx: re-enable ePAPR byte channel driver in corenet32_smp_defconfig · 3900fad3
      Timur Tabi authored
      Commit 7c4b2f09 (powerpc: Update mpc85xx/corenet 32-bit defconfigs)
      accidentally disabled the ePAPR byte channel driver in the defconfig for
      Freescale CoreNet platforms.
      Signed-off-by: Timur Tabi <timur@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    • powerpc/fsl: Update defconfigs to enable some standard FSL HW features · 389a6c52
      Kumar Gala authored
      corenet64_smp_defconfig:
       - enabled rapidio
      
      corenet32_smp_defconfig:
       - enabled hugetlbfs, rapidio
      
      mpc85xx_defconfig:
       - enabled P1010RDB, hugetlbfs, SPI, SDHC, Crypto/CAAM
      
      mpc85xx_smp_defconfig:
       - enabled hugetlbfs, SPI, SDHC, Crypto/CAAM
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    • powerpc: Add TBI PHY node to first MDIO bus · 22066949
      Andy Fleming authored
      Systems which use the fsl_pq_mdio driver need to specify an
      address for TBI PHY transactions such that the address does
      not conflict with any PHYs on the bus (all transactions to
      that address are directed to the onboard TBI PHY). The driver
      used to scan for a free address if no address was specified;
      however, this ran into issues when the PHY Lib was fixed so
      that all MDIO transactions were protected by a mutex. As it
      is, the code was meant to serve as a transitional tool until
      the device trees were all updated to specify the TBI address.
      
      The best fix for the mutex issue was to remove the scanning code,
      but it turns out some of the newer SoCs have started to omit
      the tbi-phy node when SGMII is not being used. As such, these
      devices will now fail unless we add a tbi-phy node to the first
      mdio controller.
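
      A hedged sketch (not the actual fsl_pq_mdio code) of reading the TBI
      address from such a node instead of scanning, assuming np is the mdio
      controller's device node:

          struct device_node *child;
          const __be32 *reg;
          u32 tbipa = 0;  /* illustrative default */

          /* Use the address in the tbi-phy child's "reg" property. */
          for_each_child_of_node(np, child) {
                  if (strcmp(child->name, "tbi-phy") == 0) {
                          reg = of_get_property(child, "reg", NULL);
                          if (reg)
                                  tbipa = be32_to_cpup(reg);
                          of_node_put(child);
                          break;
                  }
          }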
      Signed-off-by: Andy Fleming <afleming@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    • sbc834x: put full compat string in board match check · dabc7840
      Paul Gortmaker authored
      The commit 883c2cfc:
      
       "fix of_flat_dt_is_compatible() to match the full compatible string"
      
      causes silent boot death on the sbc8349 board because it was
      just looking for 8349 and not 8349E -- as originally there
      were non-E (no SEC/encryption) chips available.  Just add the
      E to the board detection string since all boards I've seen
      were manufactured with the E versions.
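
      The probe check has roughly this shape (the compatible string here is
      illustrative):

          static int __init sbc834x_probe(void)
          {
                  unsigned long root = of_get_flat_dt_root();

                  /* Must now be the full compat string, including the E */
                  return of_flat_dt_is_compatible(root, "SBC8349E");
          }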
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
    • powerpc/fsl-pci: Allow 64-bit PCIe devices to DMA to any memory address · 96ea3b4a
      Kumar Gala authored
      There is an issue on FSL-BookE 64-bit devices (P5020) in which PCIe
      devices that are capable of doing 64-bit DMAs (like an Intel e1000) do
      not function and crash the kernel if we have >4G of memory in the system.
      
      The reason is that the existing code only sets up one inbound window
      for access to system memory across PCIe, and that window is limited
      to a 32-bit address space.  So on such systems we'll end up utilizing
      SWIOTLB for DMA mappings.  However, the SWIOTLB DMA ops implement
      dma_alloc_coherent() as dma_direct_alloc_coherent().  Thus we can end
      up with DMA addresses that are not accessible because of the inbound
      window limitation.
      
      We could possibly set the SWIOTLB alloc_coherent op to
      swiotlb_alloc_coherent(); however, that does not address the issue,
      since swiotlb_alloc_coherent() will behave almost identically to
      dma_direct_alloc_coherent(): the device's coherent_dma_mask will be
      greater than any address allocated by swiotlb_alloc_coherent(), and
      thus we'll never bounce-buffer it into a range that would be DMA-able.
      
      The easiest and best solution is to just make it so that a 64-bit
      capable device is able to DMA to any internal system address.
      
      We accomplish this by opening up a second inbound window that maps all
      of memory above the internal SoC address width so we can set it up to
      access all of the internal SoC address space if needed.
      
      We then fix up the dma_ops and dma_offset for PCIe devices with a DMA
      mask greater than the maximum internal SoC address.
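
      A sketch of what such a fixup looks like (the 36-bit width and the
      helper name are illustrative):

          #define MAX_PHYS_ADDR_BITS  36
          static u64 pci64_dma_offset = 1ULL << MAX_PHYS_ADDR_BITS;

          static int fsl_pci_dma_set_mask(struct device *dev, u64 dma_mask)
          {
                  if (!dev->dma_mask || !dma_supported(dev, dma_mask))
                          return -EIO;

                  /* A device that can address more than the SoC's
                   * physical address width can use the large second
                   * inbound window: go direct, offset by its base. */
                  if (dma_mask >= DMA_BIT_MASK(MAX_PHYS_ADDR_BITS)) {
                          set_dma_ops(dev, &dma_direct_ops);
                          set_dma_offset(dev, pci64_dma_offset);
                  }

                  *dev->dma_mask = dma_mask;
                  return 0;
          }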
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  2. 03 Jan, 2012 4 commits
    • powerpc: Fix unpaired probe_hcall_entry and probe_hcall_exit · e4f387d8
      Li Zhong authored
      Unpaired calling of probe_hcall_entry and probe_hcall_exit might happen
      as below, which could cause an incorrect preempt count:
      
      __trace_hcall_entry => trace_hcall_entry -> probe_hcall_entry =>
      get_cpu_var => preempt_disable
      
      __trace_hcall_exit => trace_hcall_exit -> probe_hcall_exit =>
      put_cpu_var => preempt_enable
      
      where:
      A => B means A calls B directly by function name, so B will definitely
      be called.
      A -> B means A calls B through a function pointer, so B might not be
      called if the function pointer is not set.
      
      So an error happens when only one of probe_hcall_entry and
      probe_hcall_exit gets called during a hcall.
      
      This patch moves the preempt count operations from probe_hcall_entry
      and probe_hcall_exit to their callers.
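
      A hedged sketch of the balanced form (the real code also guards
      against recursion with a per-cpu depth counter):

          void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
          {
                  preempt_disable();      /* was get_cpu_var() in the probe */
                  trace_hcall_entry(opcode, args); /* probe may be unset */
                  preempt_enable();
          }

          void __trace_hcall_exit(long opcode, unsigned long retval,
                                  unsigned long *retbuf)
          {
                  preempt_disable();
                  trace_hcall_exit(opcode, retval, retbuf);
                  preempt_enable();       /* was put_cpu_var() in the probe */
          }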
      Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      CC: stable@kernel.org [v2.6.32+]
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • offb: Fix setting of the pseudo-palette for >8bpp · 1bb0b7d2
      Benjamin Herrenschmidt authored
      When using a >8bpp framebuffer, offb advertises truecolor, not directcolor,
      and doesn't touch the color map even if it has a corresponding access method
      for the real hardware.
      
      Thus it needs to set the pseudo-palette with all 3 components of the color,
      like other truecolor framebuffers, not with copies of the color index like
      a directcolor framebuffer would do.
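
      The standard truecolor idiom looks roughly like this (a simplified
      sketch of an fbdev setcolreg hook, not the exact offb code):

          static int offb_setcolreg(u_int regno, u_int red, u_int green,
                                    u_int blue, u_int transp,
                                    struct fb_info *info)
          {
                  u32 *pal = info->pseudo_palette;

                  if (regno > 15)
                          return 0;

                  /* Pack all three components, not the color index. */
                  pal[regno] =
                          ((red   >> (16 - info->var.red.length))
                                  << info->var.red.offset) |
                          ((green >> (16 - info->var.green.length))
                                  << info->var.green.offset) |
                          ((blue  >> (16 - info->var.blue.length))
                                  << info->var.blue.offset);
                  return 0;
          }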
      
      This went unnoticed for a long time because it's pretty hard to get offb
      to kick in with anything but 8bpp (old BootX under MacOS will do that and
      qemu does it).
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: stable@kernel.org
    • offb: Add palette hack for qemu "standard vga" framebuffer · 9b961ed2
      Benjamin Herrenschmidt authored
      We rename the mach64 hack to "simple" since that's also applicable
      to anything using VGA-style DAC IO ports (set to 8-bit DAC) and we
      use it for qemu vga.
      
      Note that this is keyed on a device-tree "compatible" property that
      is currently only set by an upcoming version of SLOF when using the
      qemu "pseries" platform. This is on purpose as other qemu ppc platforms
      using OpenBIOS aren't properly setting the DAC to 8-bit at the time of
      the writing of this patch.
      
      We can fix OpenBIOS later to do that and add the required property, in
      which case it will be matched by this change.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • offb: Fix bug in calculating requested vram size · c055fe07
      Benjamin Herrenschmidt authored
      We used to try to request 8 times more vram than needed, which would
      fail if the card's BAR is too small (observed with qemu & kvm).
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: stable@kernel.org
  3. 21 Dec, 2011 1 commit
  4. 20 Dec, 2011 8 commits
    • powerpc/44x: Fix build error on currituck platform · eb975652
      Josh Boyer authored
      The MPIC_PRIMARY define was recently made the default, and the meaning
      was inverted to MPIC_SECONDARY.  This causes compile errors on
      currituck now, so fix it to use the new manner of allocating mpics.
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc/boot: Change the load address for the wrapper to fit the kernel · c55aef0e
      Suzuki Poulose authored
      The wrapper code which uncompresses the kernel in case of a 'ppc' boot
      is by default loaded at 0x00400000, and the kernel will be uncompressed
      to fit the location 0-0x00400000.  But with dynamic relocations, the
      size of the kernel may exceed 0x00400000 (4MB).  This would cause an
      overlap of the uncompressed kernel and the boot wrapper, causing the
      boot to fail.
      
      The message looks like:
      
         zImage starting: loaded at 0x00400000 (sp: 0x0065ffb0)
         Allocating 0x5ce650 bytes for kernel ...
         Insufficient memory for kernel at address 0! (_start=00400000, uncompressed size=00591a20)
      
      This patch shifts the load address of the boot wrapper code to the next
      higher MB, according to the size of the uncompressed vmlinux.
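
      The rounding amounts to this (a sketch; the real change lives in the
      boot wrapper's link logic):

          /* Next MB boundary above the uncompressed kernel size,
           * e.g. 0x5b0344 -> 0x600000. */
          #define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((a) - 1))

          link_address = ALIGN_UP(uncompressed_size, 0x100000);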
      
      With the patch, we get the following message while building the image:
      
       WARN: Uncompressed kernel (size 0x5b0344) overlaps the address of the wrapper(0x400000)
       WARN: Fixing the link_address of wrapper to (0x600000)
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc/44x: Enable CRASH_DUMP for 440x · 5b2e478d
      Suzuki Poulose authored
      Now that we have a relocatable kernel, supporting CRASH_DUMP only
      requires turning the switches on for UP machines.
      
      We don't have kexec support on 47x yet. Enabling SMP support would be done
      as part of enabling the PPC_47x support.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc/44x: Enable CONFIG_RELOCATABLE for PPC44x · 26ecb6c4
      Suzuki Poulose authored
      The following patch adds relocatable kernel support - based on
      processing of dynamic relocations - for the PPC44x kernel.
      
      We find the runtime address of _stext and relocate ourselves based
      on the following calculation.
      
      	virtual_base = ALIGN(KERNELBASE,256M) +
      			MODULO(_stext.run,256M)
      
      relocate() is called with the Effective Virtual Base Address (as
      shown below)
      
                  | Phys. Addr| Virt. Addr |
      Page (256M) |------------------------|
      Boundary    |           |            |
                  |           |            |
                  |           |            |
      Kernel Load |___________|_ __ _ _ _ _|<- Effective
      Addr(_stext)|           |      ^     |Virt. Base Addr
                  |           |      |     |
                  |           |      |     |
                  |           |reloc_offset|
                  |           |      |     |
                  |           |      |     |
                  |           |______v_____|<-(KERNELBASE)%256M
                  |           |            |
                  |           |            |
                  |           |            |
      Page(256M)  |-----------|------------|
      Boundary    |           |            |
      
      The virt_phys_offset is updated accordingly, i.e.,
      
      	virt_phys_offset = effective kernel virt base - kernstart_addr
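
      In C terms the calculation amounts to (a sketch with illustrative
      names; run_addr is the runtime physical address of _stext):

          #define TLB_PIN_SIZE    0x10000000UL    /* 256M on 44x */

          virtual_base = (KERNELBASE & ~(TLB_PIN_SIZE - 1)) +
                         (run_addr % TLB_PIN_SIZE);
          virt_phys_offset = virtual_base - kernstart_addr;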
      
      I have tested the patches on 440x platforms only. However this should
      work fine for PPC_47x also, as we only depend on the runtime address
      and the current TLB XLAT entry for the startup code, which is available
      in r25. I don't have access to a 47x board yet. So, it would be great if
      somebody could test this on 47x.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Tony Breeds <tony@bakeyournoodle.com>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc: Define virtual-physical translations for RELOCATABLE · 368ff8f1
      Suzuki Poulose authored
      We find the runtime address of _stext and relocate ourselves based
      on the following calculation.
      
      	virtual_base = ALIGN(KERNELBASE,KERNEL_TLB_PIN_SIZE) +
      			MODULO(_stext.run,KERNEL_TLB_PIN_SIZE)
      
      relocate() is called with the Effective Virtual Base Address (as
      shown below)
      
                  | Phys. Addr| Virt. Addr |
      Page        |------------------------|
      Boundary    |           |            |
                  |           |            |
                  |           |            |
      Kernel Load |___________|_ __ _ _ _ _|<- Effective
      Addr(_stext)|           |      ^     |Virt. Base Addr
                  |           |      |     |
                  |           |      |     |
                  |           |reloc_offset|
                  |           |      |     |
                  |           |      |     |
                  |           |______v_____|<-(KERNELBASE)%TLB_SIZE
                  |           |            |
                  |           |            |
                  |           |            |
      Page        |-----------|------------|
      Boundary    |           |            |
      
      On BookE, we need __va() & __pa() early in the boot process to access
      the device tree.
      
      Currently this has been defined as:
      
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) -
      						PHYSICAL_START + KERNELBASE))
      where:
       PHYSICAL_START is kernstart_addr - a variable updated at runtime.
       KERNELBASE	is the compile-time virtual base address of the kernel.
      
      This won't work for us, as kernstart_addr is dynamic and will yield
      different results for __va()/__pa() for the same mapping.
      
      e.g.,
      
      Let the kernel be loaded at 64MB and KERNELBASE be 0xc0000000 (same as
      PAGE_OFFSET).
      
      In this case, we would be mapping 0 to 0xc0000000, and kernstart_addr = 64M
      
      Now __va(1MB) = (0x100000) - (0x4000000) + 0xc0000000
      		= 0xbc100000, which is wrong.
      
      It should be: 0xc0000000 + 0x100000 = 0xc0100000
      
      On platforms which support AMP, like PPC_47x (based on 44x), the kernel
      could be loaded at highmem. Hence we cannot always depend on the compile
      time constants for mapping.
      
      Here are the possible solutions:
      
      1) Update kernstart_addr (PHYSICAL_START) to match the physical address
      of the compile-time KERNELBASE value, instead of the actual physical
      address of _stext.
      
      The disadvantage is that we may break other users of PHYSICAL_START. They
      could be replaced with __pa(_stext).
      
      2) Redefine __va() & __pa() with relocation offset
      
      #ifdef	CONFIG_RELOCATABLE_PPC32
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) - PHYSICAL_START + (KERNELBASE + RELOC_OFFSET)))
      #define __pa(x) ((unsigned long)(x) + PHYSICAL_START - (KERNELBASE + RELOC_OFFSET))
      #endif
      
      where, RELOC_OFFSET could be
      
        a) A variable, say relocation_offset (like kernstart_addr), updated
           at boot time. This impacts performance, as we have to load an additional
           variable from memory.
      
      		OR
      
        b) #define RELOC_OFFSET ((PHYSICAL_START & PPC_PIN_SIZE_OFFSET_MASK) - \
                            (KERNELBASE & PPC_PIN_SIZE_OFFSET_MASK))
      
         This introduces more calculations for doing the translation.
      
      3) Redefine __va() & __pa() with a new variable
      
      i.e,
      
      #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
      
      where VIRT_PHYS_OFFSET :
      
      #ifdef CONFIG_RELOCATABLE_PPC32
      #define VIRT_PHYS_OFFSET virt_phys_offset
      #else
      #define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
      #endif /* CONFIG_RELOCATABLE_PPC32 */
      
      where virt_phys_offset is updated at runtime to:
      
      	Effective KERNELBASE - kernstart_addr.
      
      Taking our example, above:
      
      virt_phys_offset = effective_kernelstart_vaddr - kernstart_addr
      		 = 0xc4000000 - 0x4000000
      		 = 0xc0000000
      	and
      
      	__va(0x100000) = 0xc0000000 + 0x100000 = 0xc0100000
      	 which is what we want.
      
      I have implemented (3) in the following patch, which has the same cost
      of operation as the existing one.
      
      I have tested the patches on 440x platforms only. However this should
      work fine for PPC_47x also, as we only depend on the runtime address
      and the current TLB XLAT entry for the startup code, which is available
      in r25. I don't have access to a 47x board yet. So, it would be great if
      somebody could test this on 47x.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc: Process dynamic relocations for kernel · 9c5f7d39
      Suzuki Poulose authored
      The following patch implements the dynamic relocation processing for
      the PPC32 kernel. relocate() accepts the target virtual address and
      relocates the kernel image to that address.
      
      Currently the following relocation types are handled:
      
      	R_PPC_RELATIVE
      	R_PPC_ADDR16_LO
      	R_PPC_ADDR16_HI
      	R_PPC_ADDR16_HA
      
      The last three relocation types in the list above depend on the value
      of a symbol whose index is encoded in the relocation entry. Hence we
      need the symbol table to process such relocations.
      
      Note: The GNU ld for ppc32 produces buggy relocations for relocation types
      that depend on symbols. The value of the symbols with STB_LOCAL scope
      should be assumed to be zero. - Alan Modra
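
      A hedged C sketch of the processing described above (the kernel does
      this in early assembly; names are illustrative, and delta is the
      difference between the target virtual address and the link-time
      address):

          Elf32_Rela *r;

          for (r = rela_start; r < rela_end; r++) {
                  Elf32_Sym *sym = &symtab[ELF32_R_SYM(r->r_info)];
                  /* Per the note above, assume STB_LOCAL values are 0. */
                  u32 s = (ELF32_ST_BIND(sym->st_info) == STB_LOCAL)
                                  ? 0 : sym->st_value;
                  u32 value = s + r->r_addend + delta;
                  void *loc = (void *)(r->r_offset + delta);

                  switch (ELF32_R_TYPE(r->r_info)) {
                  case R_PPC_RELATIVE:
                          *(u32 *)loc = r->r_addend + delta;
                          break;
                  case R_PPC_ADDR16_LO:
                          *(u16 *)loc = value & 0xffff;
                          break;
                  case R_PPC_ADDR16_HI:
                          *(u16 *)loc = (value >> 16) & 0xffff;
                          break;
                  case R_PPC_ADDR16_HA:
                          *(u16 *)loc = ((value + 0x8000) >> 16) & 0xffff;
                          break;
                  }
          }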
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Alan Modra <amodra@au1.ibm.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc/44x: Enable DYNAMIC_MEMSTART for 440x · 23913245
      Suzuki Poulose authored
      DYNAMIC_MEMSTART (the old RELOCATABLE) was restricted to the PPC_47x
      variants of 44x.  This patch enables DYNAMIC_MEMSTART for 440x-based
      chipsets.
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
    • powerpc: Rename mapping based RELOCATABLE to DYNAMIC_MEMSTART for BookE · 0f890c8d
      Suzuki Poulose authored
      The current implementation of CONFIG_RELOCATABLE in BookE is based
      on mapping the page-aligned kernel load address to KERNELBASE. This
      approach, however, is not enough for platforms where the TLB page size
      is large (e.g., 256M on 44x). So we are renaming the RELOCATABLE used
      currently in BookE to DYNAMIC_MEMSTART to reflect the actual method.
      
      The CONFIG_RELOCATABLE for PPC32 (BookE), based on processing of the
      dynamic relocations, will be introduced later in the patch series.
      
      This change would allow the use of the old method of RELOCATABLE for
      platforms which can afford to enforce the page alignment (platforms with
      smaller TLB size).
      
      Changes since v3:
      
      * Introduced a new config, NONSTATIC_KERNEL, to denote a kernel which
        is either RELOCATABLE or DYNAMIC_MEMSTART (suggested by Josh Boyer)
      Suggested-by: Scott Wood <scottwood@freescale.com>
      Tested-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
      Cc: Scott Wood <scottwood@freescale.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Josh Boyer <jwboyer@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
      Signed-off-by: Josh Boyer <jwboyer@gmail.com>
  5. 19 Dec, 2011 8 commits
    • powerpc: Fix old bug in prom_init setting of the color · 3f53638c
      Benjamin Herrenschmidt authored
      We have an array of 16 entries and a loop of 32 iterations... oops.
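
      The shape of the fix, as a sketch (clut walks the 16-entry
      default_colors table of RGB triples):

          for (i = 0; i < 16; i++, clut += 3)     /* was: i < 32 */
                  if (prom_set_color(ih, i, clut[0], clut[1], clut[2]) != 0)
                          break;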
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Only use initrd_end as the limit for alloc_bottom if it's inside the RMO. · 64968f60
      Paul Mackerras authored
      As kernels and initrds get bigger, boot loaders and possibly
      kexec-tools will need to place the initrd outside the RMO.  When this
      happens we end up with no lowmem and the boot doesn't get very far.
      
      Only use initrd_end as the limit for alloc_bottom if it's inside the
      RMO.
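
      A hedged sketch of the check (names follow prom_init conventions but
      are illustrative here):

          /* Only let the initrd raise alloc_bottom if it actually
           * sits inside the RMO; otherwise keep the default. */
          if (prom_initrd_start && prom_initrd_end <= rmo_top)
                  alloc_bottom = PAGE_ALIGN(prom_initrd_end);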
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix comment explaining our VSID layout · b206590c
      Anton Blanchard authored
      We support 16TB of user address space and half a million contexts,
      so update the comment to reflect this.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix wrong divisor in usecs_to_cputime · 9f5072d4
      Andreas Schwab authored
      Commit d57af9b2 (taskstats: use real microsecond granularity for CPU
      times) renamed msecs_to_cputime to usecs_to_cputime, but failed to
      update all the numbers along the way.  This causes nonsensical CPU
      idle/iowait values to be displayed in /proc/stat (the only user of
      usecs_to_cputime so far).
      
      This also renames __cputime_msec_factor to __cputime_usec_factor, adapting
      its value and using it directly in cputime_to_usecs instead of doing two
      multiplications.
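
      A simplified sketch of the corrected factor (the real code computes
      it with 128-bit division helpers):

          static u64 __cputime_usec_factor;       /* timebase ticks per usec */

          static void calc_cputime_factors(void)
          {
                  /* ticks per usec, not the old ticks per msec */
                  __cputime_usec_factor = ppc_tb_freq / 1000000;
          }

          static inline u64 usecs_to_cputime(unsigned long us)
          {
                  return us * __cputime_usec_factor;      /* one multiply */
          }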
      Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
      Acked-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/mm: Fix section mismatch for read_n_cells · 2011b1d0
      David Rientjes authored
      read_n_cells() cannot be marked as .devinit.text since it is referenced
      from two functions that are not in that section: of_get_lmb_size() and
      hot_add_drconf_scn_to_nid().
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/mm: Fix section mismatch for mark_reserved_regions_for_nid · 28e86bdb
      David Rientjes authored
      mark_reserved_regions_for_nid() is only called from do_init_bootmem(),
      which is in .init.text, so it must be in the same section to avoid a
      section mismatch warning.
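
      Illustratively, the annotation just has to match the caller's section
      (a sketch, body elided):

          /* do_init_bootmem() is __init, so this must be too. */
          static void __init mark_reserved_regions_for_nid(int nid)
          {
                  /* ... unchanged body ... */
          }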
      Reported-by: Subrata Modak <subrata@linux.vnet.ibm.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Add __SANE_USERSPACE_TYPES__ to asm/types.h for LL64 · 2c9c6ce0
      Matt Evans authored
      PPC64 uses long long for u64 in the kernel, but powerpc's asm/types.h
      prevents 64-bit userland from seeing this definition, instead defaulting
      to u64 == long in userspace.  Some user programs (e.g. kvmtool) may actually
      want LL64, so this patch adds a check for __SANE_USERSPACE_TYPES__ so that,
      if defined, int-ll64.h is included instead.
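
      A usage sketch for a 64-bit userland program that wants LL64 types:

          /* Must be defined before the kernel header is included. */
          #define __SANE_USERSPACE_TYPES__
          #include <asm/types.h>
          #include <stdio.h>

          int main(void)
          {
                  __u64 v = 42;

                  /* With LL64, __u64 is unsigned long long, so %llu
                   * needs no cast. */
                  printf("%llu\n", v);
                  return 0;
          }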
      Signed-off-by: Matt Evans <matt@ozlabs.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: POWER7 optimised copy_to_user/copy_from_user using VMX · a66086b8
      Anton Blanchard authored
      Implement a POWER7 optimised copy_to_user/copy_from_user using VMX.
      For large aligned copies this new loop is over 10% faster, and for
      large unaligned copies it is over 200% faster.
      
      If we take a fault we fall back to the old version; this keeps
      things relatively simple and easy to verify.
      
      On POWER7 unaligned stores rarely slow down - they only flush when
      a store crosses a 4KB page boundary. Furthermore this flush is
      handled completely in hardware and should be 20-30 cycles.
      
      Unaligned loads on the other hand flush much more often - whenever
      crossing a 128 byte cache line, or a 32 byte sector if either sector
      is an L1 miss.
      
      Considering this information we really want to get the loads aligned
      and not worry about the alignment of the stores. Microbenchmarks
      confirm that this approach is much faster than the current unaligned
      copy loop that uses shifts and rotates to ensure both loads and
      stores are aligned.
      
      We also want to try and do the stores in cacheline aligned, cacheline
      sized chunks. If the store queue is unable to merge an entire
      cacheline of stores then the L2 cache will have to do a
      read/modify/write. Even worse, we will serialise this with the stores
      in the next iteration of the copy loop since both iterations hit
      the same cacheline.
      
      Based on this, the new loop does the following things:
      
      1 - 127 bytes
      Get the source 8 byte aligned and use 8 byte loads and stores. Pretty
      boring and similar to how the current loop works.
      
      128 - 4095 bytes
      Get the source 8 byte aligned and use 8 byte loads and stores,
      1 cacheline at a time. We aren't doing the stores in cacheline
      aligned chunks so we will potentially serialise once per cacheline.
      Even so it is much better than the loop we have today.
      
      4096+ bytes
      If both source and destination have the same alignment get them both
      16 byte aligned, then get the destination cacheline aligned. Do
      cacheline sized loads and stores using VMX.
      
      If source and destination do not have the same alignment, we get the
      destination cacheline aligned, and use permute to do aligned loads.
      
      In both cases the VMX loop should be optimal - we always do aligned
      loads and stores and are always doing stores in cacheline aligned,
      cacheline sized chunks.
      
      To be able to use VMX we must be careful about interrupts and
      sleeping. We don't use the VMX loop when in an interrupt (which should
      be rare anyway) and we wrap the VMX loop in pagefault_disable/enable
      and fall back to the existing copy_tofrom_user loop if we do need to
      sleep.
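
      A hedged sketch of that guard logic (the helpers are hypothetical;
      the real copy loop is in assembly):

          unsigned long copy_tofrom_user_power7(void *to, const void *from,
                                                unsigned long n)
          {
                  unsigned long left;

                  /* Small copies and interrupt context: old loop. */
                  if (n < 4096 || in_interrupt())
                          return plain_copy_tofrom_user(to, from, n);

                  preempt_disable();
                  enable_kernel_altivec();  /* make VMX usable in kernel */
                  pagefault_disable();      /* faults must not sleep here */

                  left = vmx_copy_loop(to, from, n);  /* hypothetical */

                  pagefault_enable();
                  preempt_enable();

                  /* Took a fault: finish the remainder (and allow
                   * sleeping) with the existing non-VMX loop. */
                  if (left)
                          left = plain_copy_tofrom_user(to + (n - left),
                                                        from + (n - left),
                                                        left);
                  return left;
          }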
      
      The VMX breakpoint of 4096 bytes was chosen using this microbenchmark:
      
      http://ozlabs.org/~anton/junkcode/copy_to_user.c
      
      Since we are using VMX and there is a cost to saving and restoring
      the user VMX state there are two broad cases we need to benchmark:
      
      - Best case - userspace never uses VMX
      
      - Worst case - userspace always uses VMX
      
      In reality a userspace process will sit somewhere between these two
      extremes. Since we need to test both aligned and unaligned copies we
      end up with 4 combinations. The point at which the VMX loop begins to
      win is:
      
      0% VMX
      aligned		2048 bytes
      unaligned	2048 bytes
      
      100% VMX
      aligned		16384 bytes
      unaligned	8192 bytes
      
      Considering that this is a microbenchmark, the data is hot in cache,
      and the VMX loop has better store queue merging properties, we set the
      breakpoint to 4096 bytes, a little below the unaligned breakpoints.
      
      Some future optimisations we can look at:
      
      - Looking at the perf data, a significant part of the cost when a
        task is always using VMX is the extra exception we take to restore
        the VMX state. As such we should do something similar to the x86
        optimisation that restores FPU state for heavy users. ie:
      
              /*
               * If the task has used fpu the last 5 timeslices, just do a full
               * restore of the math state immediately to avoid the trap; the
               * chances of needing FPU soon are obviously high now
               */
              preload_fpu = tsk_used_math(next_p) && next_p->fpu_counter > 5;
      
        and
      
              /*
               * fpu_counter contains the number of consecutive context switches
               * that the FPU is used. If this is over a threshold, the lazy fpu
               * saving becomes unlazy to save the trap. This is an unsigned char
               * so that after 256 times the counter wraps and the behavior turns
               * lazy again; this to deal with bursty apps that only use FPU for
               * a short time
               */
      
      - We could create a paca bit to mirror the VMX enabled MSR bit and check
        that first, avoiding multiple calls to enable_kernel_altivec.
        That should help with iovec based system calls like readv.
      
      - We could have two VMX breakpoints, one for when we know the user VMX
        state is loaded into the registers and one when it isn't. This could
        be a second bit in the paca so we can calculate the break points quickly.
      
      - One suggestion from Ben was to save and restore the VSX registers
        we use inline instead of using enable_kernel_altivec.
      
      [BenH: Fixed a problem with preempt and fixed build without CONFIG_ALTIVEC]
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  6. 16 Dec, 2011 8 commits
  7. 09 Dec, 2011 4 commits