1. 08 Aug, 2017 1 commit
  2. 21 Jun, 2017 1 commit
  3. 20 Jun, 2017 1 commit
  4. 01 Apr, 2017 1 commit
  5. 31 Mar, 2017 2 commits
    • powerpc/mm/hash: Support 68 bit VA · e6f81a92
      Aneesh Kumar K.V authored
      
      In order to support a large effective address range (512TB), we want to
      increase the virtual address bits to 68. But we do have platforms like
      POWER4 and POWER5 that can only do a 65 bit VA. We support those
      platforms by limiting the context bits on them to 16.
      
      The protovsid -> vsid conversion is verified to work with both 65 and 68
      bit VA values. I have also documented the restrictions in a table as
      part of the code comments. (The bit budget is sketched in code after
      this entry.)
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      e6f81a92
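      A minimal arithmetic sketch of the bit budget described above (editor's
      illustration; the constant names are assumptions, not lifted from the
      patch):

          #include <assert.h>

          #define SID_SHIFT      28  /* 256MB segment => 28 offset bits per segment */
          #define ESID_BITS      21  /* 2^(21 + 28) = 512TB of EA per context       */
          #define CTX_BITS       19  /* context bits on CPUs with a 68-bit VA       */
          #define CTX_BITS_P4P5  16  /* POWER4/POWER5 are limited to a 65-bit VA    */

          int main(void)
          {
                  assert(CTX_BITS      + ESID_BITS + SID_SHIFT == 68); /* 68-bit VA */
                  assert(CTX_BITS_P4P5 + ESID_BITS + SID_SHIFT == 65); /* 65-bit VA */
                  return 0;
          }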
    • powerpc/mm/hash: Use context ids 1-4 for the kernel · 941711a3
      Aneesh Kumar K.V authored
      
      Currently we use the top 4 context ids (0x7fffc-0x7ffff) for the kernel.
      Kernel VSIDs are built using these top context values and the effective
      segment ID. In subsequent patches we want to increase the max effective
      address to 512TB. We will achieve that by increasing the number of
      effective segment IDs, thereby increasing the virtual address range.
      
      We will be switching to a 68-bit virtual address in the following patch.
      But platforms like Power4 and Power5 only support a 65-bit virtual
      address. We will handle that by limiting the context bits to 16 instead
      of 19 on those platforms. That means the max context id will have a
      different value on different platforms.
      
      So that we don't have to deal with the kernel context ids changing
      between different platforms, move the kernel context ids down to use
      context ids 1-4 (illustrated in the sketch after this entry).
      
      We can't use segment 0 of context-id 0, because that maps to VSID 0,
      which we want to keep as invalid, so we avoid context-id 0 entirely.
      Similarly we can't use the last segment of the maximum context, so we
      avoid it too.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      [mpe: Switch from 0-3 to 1-4 so VSID=0 remains invalid]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      941711a3
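      A small sketch of the resulting kernel context-id mapping (editor's
      illustration, not the kernel's helper; the region comments reflect the
      layout at the time of this patch):

          #include <assert.h>
          #include <stdint.h>

          /*
           * The four kernel regions, whose effective addresses start with the
           * top nibble 0xc..0xf, get the fixed context ids 1..4.  Context 0 is
           * never used, so proto-VSID 0 (context 0, segment 0) -- and with it
           * VSID 0 -- stays invalid.
           */
          static unsigned long kernel_context(uint64_t ea)
          {
                  return (ea >> 60) - 0xc + 1;
          }

          int main(void)
          {
                  assert(kernel_context(0xc000000000000000ULL) == 1); /* linear map */
                  assert(kernel_context(0xd000000000000000ULL) == 2); /* vmalloc    */
                  assert(kernel_context(0xf000000000000000ULL) == 4); /* vmemmap    */
                  return 0;
          }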
  6. 16 Feb, 2017 2 commits
  7. 13 Sep, 2016 1 commit
    • powerpc/mm: Preserve CFAR value on SLB miss caused by access to bogus address · f0f558b1
      Paul Mackerras authored
      
      Currently, if userspace or the kernel accesses a completely bogus address,
      for example with any of bits 46-59 set, we first take an SLB miss interrupt,
      install a corresponding SLB entry with VSID 0, retry the instruction, then
      take a DSI/ISI interrupt because there is no HPT entry mapping the address.
      However, by the time of the second interrupt, the Come-From Address Register
      (CFAR) has been overwritten by the rfid instruction at the end of the SLB
      miss interrupt handler.  Since bogus accesses can often be caused by a
      function return after the stack has been overwritten, the CFAR value would
      be very useful as it could indicate which function it was whose return had
      led to the bogus address.
      
      This patch adds code to create a full exception frame in the SLB miss handler
      in the case of a bogus address, rather than inserting an SLB entry with a
      zero VSID field.  Then we call a new slb_miss_bad_addr() function in C code,
      which delivers a signal for a user access or creates an oops for a kernel
      access.  In the latter case the oops message will show the CFAR value at the
      time of the access.
      
      In the case of the radix MMU, a segment miss interrupt indicates an access
      outside the ranges mapped by the page tables.  Previously this was handled
      by the code for an unrecoverable SLB miss (one with MSR[RI] = 0), which is
      not really correct.  With this patch, we now handle these interrupts with
      slb_miss_bad_addr(), which is much more consistent.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f0f558b1
  8. 08 Sep, 2016 1 commit
    • powerpc/mm: Don't alias user region to other regions below PAGE_OFFSET · f077aaf0
      Paul Mackerras authored
      In commit c60ac569 ("powerpc: Update kernel VSID range", 2013-03-13)
      we lost a check on the region number (the top four bits of the effective
      address) for addresses below PAGE_OFFSET.  That commit replaced a check
      that the top 18 bits were all zero with a check that bits 46-59 were
      zero (performed for all addresses, not just user addresses).  The two
      checks are contrasted in the sketch after this entry.
      
      This means that userspace can access an address like 0x1000_0xxx_xxxx_xxxx
      and we will insert a valid SLB entry for it.  The VSID used will be the
      same as if the top 4 bits were 0, but the page size will be some random
      value obtained by indexing beyond the end of the mm_ctx_high_slices_psize
      array in the paca.  If that page size is the same as would be used for
      region 0, then userspace just has an alias of the region 0 space.  If the
      page size is different, then no HPTE will be found for the access, and
      the process will get a SIGSEGV (since hash_page_mm() will refuse to create
      a HPTE for the...
      f077aaf0
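      A sketch contrasting the two checks (editor's illustration, in C rather
      than the handler's assembly):

          #include <assert.h>
          #include <stdint.h>

          static int bits_46_59_zero(uint64_t ea)
          {
                  return ((ea >> 46) & 0x3fff) == 0;  /* the check after c60ac569     */
          }

          static int top_18_bits_zero(uint64_t ea)
          {
                  return (ea >> 46) == 0;             /* the original, stricter check */
          }

          int main(void)
          {
                  uint64_t bogus = 0x1000000000001000ULL;  /* region nibble = 1 */

                  assert(bits_46_59_zero(bogus));   /* slips through the weaker test   */
                  assert(!top_18_bits_zero(bogus)); /* caught by the region-aware test */
                  return 0;
          }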
  9. 11 May, 2016 1 commit
  10. 01 May, 2016 1 commit
    • powerpc/mm: Make page table size a variable · dd1842a2
      Aneesh Kumar K.V authored
      
      Radix and hash MMU models support different page table sizes. Make
      the #defines variables so that existing code can work with variable
      sizes.
      
      Slice related code is only used by hash, so use hash constants there. We
      will replicate some of the boundary conditions with respect to TASK_SIZE
      using radix values too. Right now we do the boundary condition checks
      using hash constants.
      
      Swapper pgdir size is initialized in asm code. We select the max pgd
      size to keep it simple. For now we select the hash pgdir. When adding
      radix we will switch that to the radix pgdir, which is 64K.
      
      The BUILD_BUG_ON() check which is removed is already done in
      hugepage_init() using MAYBE_BUILD_BUG_ON().
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      dd1842a2
  11. 11 Apr, 2016 1 commit
    • powerpc/mm: Remove long disabled SLB code · 1f4c66e8
      Michael Ellerman authored
      We have a bunch of SLB related code in the tree which is there to handle
      dynamic VSIDs - but currently it's all disabled at compile time. The
      comments say "Keep that around for when we re-implement dynamic VSIDs".
      
      But that was over 10 years ago (commit 3c726f8d ("[PATCH] ppc64:
      support 64k pages")). The chance that it would still work unchanged is
      minimal, and in the meantime it's confusing to folks browsing/grepping
      the code. If we ever want to re-instate it, it's in the git history.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      1f4c66e8
  12. 30 Apr, 2014 1 commit
  13. 23 Apr, 2014 1 commit
    • powerpc: Fix branch patching code for ABIv2 · b86206e4
      Anton Blanchard authored
      
      The MMU hashtable and SLB branch patching code uses function
      pointers for the update sites. This creates a difference between
      ABIv1 and ABIv2 because we don't have function descriptors on
      ABIv2.
      
      Get rid of the function pointers and just point at the update
      sites directly. This works on both ABIs (see the illustration after
      this entry).
      Signed-off-by: Anton Blanchard <anton@samba.org>
      b86206e4
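      An illustration of why the two ABIs differ (editor's sketch; the struct
      models the conventional ELFv1 "opd" function descriptor and is not code
      from this patch):

          #include <stdio.h>

          /*
           * On ABIv1 a function pointer points at a descriptor, not at code,
           * so using it to locate an instruction to patch finds the descriptor
           * instead of the instruction.  On ABIv2 a function pointer is the
           * entry address itself.  Pointing at the patch sites with plain
           * labels sidesteps the difference.
           */
          struct opd_entry {
                  unsigned long entry;  /* address of the first instruction */
                  unsigned long toc;    /* TOC (r2) value for the callee    */
                  unsigned long env;    /* environment pointer, unused by C */
          };

          int main(void)
          {
                  printf("ABIv1 function pointers reference a %zu-byte descriptor\n",
                         sizeof(struct opd_entry));
                  return 0;
          }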
  14. 17 Mar, 2013 2 commits
  15. 17 Sep, 2012 4 commits
  16. 08 Mar, 2012 1 commit
  17. 27 Apr, 2011 1 commit
  18. 14 Oct, 2009 1 commit
    • powerpc/mm: Fix hang accessing top of vmalloc space · 8d8997f3
      Benjamin Herrenschmidt authored
      
      On pSeries, we always force the IO space to be mapped using 4K
      pages even with a 64K base page size to cope with some limitations
      in the HV interface to some devices.
      
      However, the SLB miss handler code that discriminates between the
      vmalloc and ioremap spaces uses a CPU feature section such that the
      code is nop'ed out when the processor supports non-cacheable mappings
      with large pages.

      Thus, we end up always using the ioremap page size for vmalloc
      segments on such processors, causing a discrepancy between the
      segment and the hash table, and thus a hang continuously hashing
      the page.
      
      It works for the first segment of the vmalloc space since that
      segment is "bolted" in by C code correctly, and thankfully we
      almost never use the vmalloc space beyond the first segment,
      but the new percpu code made the bug happen.
      
      This fixes it by removing the feature section from the assembly;
      we now always do the comparison between vmalloc and ioremap.
      
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8d8997f3
  19. 15 May, 2008 1 commit
    • [POWERPC] vmemmap fixes to use smaller pages · cec08e7a
      Benjamin Herrenschmidt authored
      
      This changes vmemmap to use a different region (region 0xf) of the
      address space, and to configure the page size of that region
      dynamically at boot.
      
      The problem with the current approach of always using 16M pages is that
      it's not well suited to machines that have small amounts of memory, such
      as small partitions on pseries, or PS3s.
      
      In fact, on the PS3, failure to allocate the 16M page backing vmemmap
      tends to prevent hotplugging the HV's "additional" memory, thus limiting
      the available memory even more, from my experience down to something
      like 80M total, which makes it really not very usable.
      
      The logic used by my patch to choose the vmemmap page size (also
      sketched in code after this entry) is:
      
       - If 16M pages are available and there's 1G or more RAM at boot,
         use that size.
       - Else if 64K pages are available, use that
       - Else use 4K pages
      
      I've tested on a POWER6 (16M pages) and on an iSeries POWER3 (4K pages)
      and it seems to work fine.
      
      Note that I intend to change the way we organize the kernel regions &
      SLBs so the actual region will change from 0xf back to something else at
      one point, as I simplify the SLB miss handler, but that will be for a
      later patch.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      cec08e7a
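      A minimal sketch of that selection policy (editor's illustration; the
      helper and enum names are assumptions, not taken from the patch):

          #include <assert.h>
          #include <stdbool.h>

          enum vmemmap_psize { PSIZE_4K, PSIZE_64K, PSIZE_16M };

          static enum vmemmap_psize pick_vmemmap_psize(bool has_16m, bool has_64k,
                                                       unsigned long ram_bytes)
          {
                  if (has_16m && ram_bytes >= (1UL << 30))  /* 16M pages and >= 1G RAM */
                          return PSIZE_16M;
                  if (has_64k)                              /* else 64K if available   */
                          return PSIZE_64K;
                  return PSIZE_4K;                          /* else fall back to 4K    */
          }

          int main(void)
          {
                  assert(pick_vmemmap_psize(true, true, 2UL << 30) == PSIZE_16M);
                  assert(pick_vmemmap_psize(true, true, 1UL << 28) == PSIZE_64K);
                  assert(pick_vmemmap_psize(false, false, 1UL << 28) == PSIZE_4K);
                  return 0;
          }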
  20. 11 Dec, 2007 1 commit
  21. 12 Oct, 2007 1 commit
    • [POWERPC] Use 1TB segments · 1189be65
      Paul Mackerras authored
      
      This makes the kernel use 1TB segments for all kernel mappings and for
      user addresses of 1TB and above, on machines which support them
      (currently POWER5+, POWER6 and PA6T).
      
      We detect that the machine supports 1TB segments by looking at the
      ibm,processor-segment-sizes property in the device tree.  (The resulting
      selection policy is sketched after this entry.)
      
      We don't currently use 1TB segments for user addresses < 1T, since
      that would effectively prevent 32-bit processes from using huge pages
      unless we also had a way to revert to using 256MB segments.  That
      would be possible but would involve extra complications (such as
      keeping track of which segment size was used when HPTEs were inserted)
      and is not addressed here.
      
      Parts of this patch were originally written by Ben Herrenschmidt.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      1189be65
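      A minimal sketch of the segment-size policy described above (editor's
      illustration; the helper name and constants are assumptions):

          #include <assert.h>
          #include <stdbool.h>
          #include <stdint.h>

          #define SEG_256M  0
          #define SEG_1T    1

          static int pick_segment_size(uint64_t ea, bool is_kernel, bool cpu_has_1t)
          {
                  if (!cpu_has_1t)
                          return SEG_256M;
                  if (is_kernel || ea >= (1ULL << 40))  /* kernel, or user EA >= 1TB */
                          return SEG_1T;
                  return SEG_256M;
          }

          int main(void)
          {
                  assert(pick_segment_size(0x10000000ULL, false, true) == SEG_256M);
                  assert(pick_segment_size(1ULL << 41, false, true) == SEG_1T);
                  assert(pick_segment_size(0xc000000000000000ULL, true, true) == SEG_1T);
                  return 0;
          }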
  22. 09 May, 2007 1 commit
    • [POWERPC] Introduce address space "slices" · d0f13e3c
      Benjamin Herrenschmidt authored
      
      The basic issue is to be able to do what hugetlbfs does but with
      different page sizes for some other special filesystems; more
      specifically, my need is:
      
       - Huge pages
      
       - SPE local store mappings using 64K pages on a 4K base page size
      kernel on Cell
      
       - Some special 4K segments in 64K-page kernels for mapping a dodgy
      type of powerpc-specific infiniband hardware that requires 4K MMU
      mappings for various reasons I won't explain here.
      
      The main issues are:
      
       - To maintain/keep track of the page size per "segment" (as we can
      only have one page size per segment on powerpc, which are 256MB
      divisions of the address space).
      
       - To make sure special mappings stay within their allotted
      "segments" (including MAP_FIXED crap)
      
       - To make sure everybody else doesn't mmap/brk/grow_stack into a
      "segment" that is used for a special mapping
      
      Some of the necessary mechanisms to handle that were present in the
      hugetlbfs code, but mostly in ways not suitable for anything else.
      
      The patch relies on some changes to the generic get_unmapped_area()
      that just got merged.  It still hijacks hugetlb callbacks here or
      there as the generic code hasn't been entirely cleaned up yet but
      that shouldn't be a problem.
      
      So what is a slice?  Well, I re-used the mechanism used formerly by our
      hugetlbfs implementation which divides the address space into
      "meta-segments" which I called "slices".  The division is done using
      256MB slices below 4G, and 1T slices above.  Thus the address space is
      currently divided into 16 "low" slices and 16 "high" slices.  (Special
      case: high slice 0 is the area between 4G and 1T.)  The index arithmetic
      is sketched after this entry.
      
      Doing so significantly simplifies the tracking of segments and avoids
      having to keep track of all the 256MB segments in the address space.
      
      While I used the "concepts" of hugetlbfs, I mostly re-implemented
      everything in a more generic way and "ported" hugetlbfs to it.
      
      Slices can have an associated page size, which is encoded in the mmu
      context and used by the SLB miss handler to set the segment sizes.  The
      hash code currently doesn't care, it has a specific check for hugepages,
      though I might add a mechanism to provide per-slice hash mapping
      functions in the future.
      
      The slice code provides a pair of "generic" get_unmapped_area() (bottom-up
      and top-down) functions that should work with any slice size.  There is
      some trickiness here, so I would appreciate people having a look at the
      implementation of these and letting me know if I got something wrong.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      d0f13e3c
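      A worked example of the slice index arithmetic (editor's illustration;
      the names are assumptions): 16 low slices of 256MB cover 0..4G, and 16
      high slices of 1TB cover the 44-bit (16TB) address space, with high
      slice 0 standing in for the 4G..1TB range.

          #include <assert.h>
          #include <stdint.h>

          #define SLICE_LOW_SHIFT   28  /* 256MB */
          #define SLICE_HIGH_SHIFT  40  /* 1TB   */

          /* Index into the "low" bitmap below 4G, into the "high" bitmap above. */
          static unsigned int slice_index(uint64_t ea)
          {
                  if (ea < (1ULL << 32))
                          return ea >> SLICE_LOW_SHIFT;
                  return ea >> SLICE_HIGH_SHIFT;
          }

          int main(void)
          {
                  assert(slice_index(0x30000000ULL) == 3);   /* low slice 3          */
                  assert(slice_index(0x100000000ULL) == 0);  /* 4G..1T: high slice 0 */
                  assert(slice_index(5ULL << 40) == 5);      /* high slice 5         */
                  return 0;
          }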
  23. 03 Oct, 2006 1 commit
  24. 30 Jun, 2006 1 commit
  25. 15 Jun, 2006 1 commit
    • powerpc: Use 64k pages without needing cache-inhibited large pages · bf72aeba
      Paul Mackerras authored
      
      Some POWER5+ machines can do 64k hardware pages for normal memory but
      not for cache-inhibited pages.  This patch lets us use 64k hardware
      pages for most user processes on such machines (assuming the kernel
      has been configured with CONFIG_PPC_64K_PAGES=y).  User processes
      start out using 64k pages and get switched to 4k pages if they use any
      non-cacheable mappings.
      
      With this, we use 64k pages for the vmalloc region and 4k pages for
      the imalloc region.  If anything creates a non-cacheable mapping in
      the vmalloc region, the vmalloc region will get switched to 4k pages.
      I don't know of any driver other than the DRM that would do this,
      though, and these machines don't have AGP.
      
      When a region gets switched from 64k pages to 4k pages, we do not have
      to clear out all the 64k HPTEs from the hash table immediately.  We
      use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page
      was hashed in as a 64k page or a set of 4k pages.  If hash_page is
      trying to insert a 4k page for a Linux PTE and it sees that it has
      already been inserted as a 64k page, it first invalidates the 64k HPTE
      before inserting the 4k HPTE.  The hash invalidation routines also use
      the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a
      set of 4k HPTEs to remove.  With those two changes, we can tolerate a
      mix of 4k and 64k HPTEs in the hash table, and they will all get
      removed when the address space is torn down.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      bf72aeba
  26. 10 Feb, 2006 1 commit
  27. 09 Jan, 2006 1 commit
    • [PATCH] powerpc: Separate usage of KERNELBASE and PAGE_OFFSET · b5666f70
      Michael Ellerman authored
      
      This patch separates usage of KERNELBASE and PAGE_OFFSET. I haven't
      looked at any of the PPC32 code; if we ever want to support Kdump on
      PPC we'll have to do another audit, ditto for iSeries.

      This patch makes PAGE_OFFSET the constant; it'll always be 0xC * 1
      gazillion for 64-bit.
      
      To get a physical address from a virtual one you subtract PAGE_OFFSET,
      _not_ KERNELBASE.
      
      KERNELBASE is the virtual address of the start of the kernel; it's
      often the same as PAGE_OFFSET, but _might not be_.

      If you want to know something's offset from the start of the kernel
      you should subtract KERNELBASE.  (A tiny example follows this entry.)
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      b5666f70
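      A tiny example of the two conversions (editor's illustration; the
      KERNELBASE value is made up, only the relationship matters):

          #include <stdint.h>
          #include <stdio.h>

          #define PAGE_OFFSET  0xc000000000000000ULL  /* start of the linear mapping  */
          #define KERNELBASE   (PAGE_OFFSET + 0x0ULL) /* start of the kernel image;   */
                                                      /* often equal, but need not be */

          int main(void)
          {
                  uint64_t va = PAGE_OFFSET + 0x123456ULL;

                  /* virtual -> physical: subtract PAGE_OFFSET, not KERNELBASE */
                  printf("physical address = 0x%llx\n",
                         (unsigned long long)(va - PAGE_OFFSET));
                  /* offset from the start of the kernel: subtract KERNELBASE */
                  printf("offset into kernel = 0x%llx\n",
                         (unsigned long long)(va - KERNELBASE));
                  return 0;
          }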
  28. 07 Nov, 2005 2 commits
    • [PATCH] ppc64: Fix bug in SLB miss handler for hugepages · 7d24f0b8
      David Gibson authored
      This patch should be applied on top of the 64k-page-size patch; it fixes
      some problems with hugepages (some pre-existing, another introduced by
      that patch).
      
      The patch fixes a bug in the SLB miss handler for hugepages on ppc64
      introduced by the dynamic hugepage patch (commit id
      c594adad) due to a misunderstanding of the
      srd instruction's behaviour (mea culpa).  The problem arises when a 64-bit
      process maps some hugepages in the low 4GB of the address space (unusual).
      In this case, as well as the 256M segment in question being marked for
      hugepages, other segments at 32G intervals will be incorrectly marked for
      hugepages.
      
      In the process, this patch tweaks the semantics of the hugepage bitmaps to
      be more sensible.  Previously, an address below 4G was marked for hugepages
      if the appropriate segment bit in the "low areas" bitmask was set *or* if
      the low bit in the "high areas" bitmap was set (which would mark all
      addresses below 1TB for hugepage).  With this patch, any given address is
      governed by a single bitmap.  Addresses below 4GB are marked for hugepage
      if and only if their bit is set in the "low areas" bitmap (256M
      granularity).  Addresses between 4GB and 1TB are marked for hugepage iff
      the low bit in the "high areas" bitmap is set.  Higher addresses are marked
      for hugepage iff their bit in the "high areas" bitmap is set (1TB
      granularity).  This lookup is sketched in code after this entry.
      
      To avoid conflicts, this patch must be applied on top of BenH's pending
      patch for 64k base page size [0].  As such, this patch also addresses a
      hugepage problem introduced by that patch.  That patch allows hugepages of
      1MB in size on hardware which supports it, however, that won't work when
      using 4k pages (4 level pagetable), because in that case hugepage PTEs are
      stored at the PMD level, and each PMD entry maps 2MB.  This patch simply
      disallows hugepages in that case (we can do something cleverer to re-enable
      them some other day).
      
      Built, booted, and a handful of hugepage related tests passed on POWER5
      LPAR (both ARCH=powerpc and ARCH=ppc64).
      
      [0] http://gate.crashing.org/~benh/ppc64-64k-pages.diff
      
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      7d24f0b8
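      A sketch of the bitmap semantics described above (editor's illustration;
      field widths and names are assumptions): a 16-bit "low areas" map with
      256MB granularity governs addresses below 4GB, the low bit of the "high
      areas" map governs 4GB..1TB, and the rest of the "high areas" map is
      indexed with 1TB granularity.

          #include <assert.h>
          #include <stdbool.h>
          #include <stdint.h>

          static bool is_hugepage_area(uint16_t low_areas, uint16_t high_areas,
                                       uint64_t ea)
          {
                  if (ea < (1ULL << 32))  /* below 4GB: 256MB granularity */
                          return low_areas & (1U << (ea >> 28));
                  /* 4GB..1TB hits bit 0; above that, one bit per 1TB */
                  return high_areas & (1U << (ea >> 40));
          }

          int main(void)
          {
                  /* low slice 3 (768M..1G) and high slice 2 (2T..3T) are hugepage areas */
                  uint16_t low = 1U << 3, high = 1U << 2;

                  assert( is_hugepage_area(low, high, 0x30000000ULL));  /* 768M -> huge   */
                  assert(!is_hugepage_area(low, high, 0x100000000ULL)); /* 4G   -> normal */
                  assert( is_hugepage_area(low, high, 2ULL << 40));     /* 2T   -> huge   */
                  return 0;
          }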
    • [PATCH] ppc64: support 64k pages · 3c726f8d
      Benjamin Herrenschmidt authored
      
      Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel
      base page size to 64K.  The resulting kernel still boots on any
      hardware.  On current machines with 4K pages support only, the kernel
      will maintain 16 "subpages" for each 64K page transparently.
      
      Note that while real 64K capable HW has been tested, the current patch
      will not enable it yet as such hardware is not released yet, and I'm
      still verifying with the firmware architects the proper way to get the
      information from the newer hypervisors.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      3c726f8d
  29. 10 Oct, 2005 1 commit
  30. 09 Sep, 2005 1 commit
  31. 01 Sep, 2005 1 commit
  32. 29 Aug, 2005 2 commits
    • [PATCH] Dynamic hugepage addresses for ppc64 · c594adad
      David Gibson authored
      Paulus, I think this is now a reasonable candidate for the post-2.6.13
      queue.
      
      Relax address restrictions for hugepages on ppc64
      
      Presently, 64-bit applications on ppc64 may only use hugepages in the
      address region from 1-1.5T.  Furthermore, if hugepages are enabled in
      the kernel config, they may only use hugepages and never normal pages
      in this area.  This patch relaxes this restriction, allowing any
      address to be used with hugepages, but with a 1TB granularity.  That
      is if you map a hugepage anywhere in the region 1TB-2TB, that entire
      area will be reserved exclusively for hugepages for the remainder of
      the process's lifetime.  This works analogously to hugepages in 32-bit
      applications, where hugepages can be mapped anywhere, but with 256MB
      (mmu segment) granularity.
      
      This patch applies on top of the four level pagetable patch
      (http://patchwork.ozlabs.org/linuxppc64/patch?id=1936).
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      c594adad
    • [PATCH] Four level pagetables for ppc64 · e28f7faf
      David Gibson authored
      
      Implement 4-level pagetables for ppc64
      
      This patch implements full four-level page tables for ppc64, thereby
      extending the usable user address range to 44 bits (16T).
      
      The patch uses a full page for the tables at the bottom and top level,
      and a quarter page for the intermediate levels.  It uses full 64-bit
      pointers at every level, thus also increasing the addressable range of
      physical memory.  This patch also tweaks the VSID allocation to allow a
      matching range for user addresses (this halves the number of available
      contexts) and adds some #if and BUILD_BUG sanity checks.  The page-table
      bit budget is sketched after this entry.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      e28f7faf
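      A sketch of that bit budget (editor's illustration, assuming a 4K base
      page and 8-byte table entries; the macro names are assumptions): full
      pages at the PTE and PGD levels give 9 index bits each, quarter pages at
      the two middle levels give 7 index bits each.

          #include <assert.h>

          #define PAGE_SHIFT  12  /* 4K base page                         */
          #define PTE_BITS     9  /* 4K / 8 = 512 entries (full page)     */
          #define PMD_BITS     7  /* 1K / 8 = 128 entries (quarter page)  */
          #define PUD_BITS     7  /* 1K / 8 = 128 entries (quarter page)  */
          #define PGD_BITS     9  /* 4K / 8 = 512 entries (full page)     */

          int main(void)
          {
                  /* 12 + 9 + 7 + 7 + 9 = 44 address bits = 16TB of user address space */
                  assert(PAGE_SHIFT + PTE_BITS + PMD_BITS + PUD_BITS + PGD_BITS == 44);
                  return 0;
          }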