1. 25 Sep, 2018 3 commits
    • arm64/mm: use fixmap to modify swapper_pg_dir · 2330b7ca
      Jun Yao authored
      Once swapper_pg_dir is in the rodata section, it will not be possible to
      modify it directly, but we will need to modify it in some cases.
      
      To enable this, we can use the fixmap when deliberately modifying
      swapper_pg_dir. As the pgd is only transiently mapped, this provides
      some resilience against illicit modification of the pgd, e.g. by a
      Kernel Space Mirror Attack (KSMA).
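      
      A minimal sketch of the resulting helper, assuming the arm64
      pgd_set_fixmap()/pgd_clear_fixmap() fixmap accessors (kernel
      context; locking shown, other details abbreviated):
      
          static DEFINE_SPINLOCK(swapper_pgdir_lock);
      
          void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
          {
                  pgd_t *fixmap_pgdp;
      
                  spin_lock(&swapper_pgdir_lock);
                  /* Transiently map the pgd page via the fixmap slot. */
                  fixmap_pgdp = pgd_set_fixmap(__pa_symbol(pgdp));
                  WRITE_ONCE(*fixmap_pgdp, pgd);
                  /*
                   * pgd_clear_fixmap() tears the mapping down again, so
                   * the pgd is only writable for the duration of the
                   * update.
                   */
                  pgd_clear_fixmap();
                  spin_unlock(&swapper_pgdir_lock);
          }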
      Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      [Mark: simplify ifdeffery, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64/mm: Separate boot-time page tables from swapper_pg_dir · 2b5548b6
      Jun Yao authored
      Since the address of swapper_pg_dir is fixed for a given kernel image,
      it is an attractive target for manipulation via an arbitrary write. To
      mitigate this we'd like to make it read-only by moving it into the
      rodata section.
      
      We require that swapper_pg_dir is at a fixed offset from tramp_pg_dir
      and reserved_ttbr0, so these will also need to move into rodata.
      However, swapper_pg_dir is allocated along with some transient page
      tables used for boot which we do not want to move into rodata.
      
      As a step towards this, this patch separates the boot-time page tables
      into a new init_pg_dir, and reduces swapper_pg_dir to the single page it
      needs to be. This allows us to retain the relationship between
      swapper_pg_dir, tramp_pg_dir, and reserved_ttbr0, while cleanly
      separating these from the boot-time page tables.
      
      The init_pg_dir holds all of the pgd/pud/pmd/pte levels needed during
      boot, and all of these levels will be freed when we switch to the
      swapper_pg_dir, which is initialized by the existing code in
      paging_init(). Since we start off on the init_pg_dir, we no longer need
      to allocate a transient page table in paging_init() in order to ensure
      that swapper_pg_dir isn't live while we initialize it.
      
      There should be no functional change as a result of this patch.
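      
      The "[Mark: place init_pg_dir after BSS]" note corresponds to a
      vmlinux.lds.S layout roughly of this shape (a sketch, assuming an
      INIT_DIR_SIZE symbol sizing the boot tables; not the exact hunk):
      
          BSS_SECTION(0, 0, 0)
      
          . = ALIGN(PAGE_SIZE);
          init_pg_dir = .;
          . += INIT_DIR_SIZE;     /* boot-time pgd/pud/pmd/pte, freed later */
          init_pg_end = .;
      
      with swapper_pg_dir correspondingly reduced to a single
      PAGE_SIZE-sized reservation.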
      Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      [Mark: place init_pg_dir after BSS, fold mm changes, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64/mm: Pass ttbr1 as a parameter to __enable_mmu() · 693d5639
      Jun Yao authored
      In subsequent patches we'll use a transient pgd during the primary cpu's
      boot process. To make this work while allowing secondary cpus to use the
      swapper_pg_dir, we need to pass the relevant TTBR1 pgd as a parameter
      to __enable_mmu().
      
      This patch updates __enable_mmu() to take this as a parameter,
      updating callsites to pass swapper_pg_dir for now.
      
      There should be no functional change as a result of this patch.
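      
      In C terms, the new contract is roughly the following (an
      illustrative rendering only: __enable_mmu() is assembly in
      arch/arm64/kernel/head.S, and registers x0/x1 stand in for the
      parameters shown here):
      
          /*
           * x0 = SCTLR_EL1 value for turning on the MMU
           * x1 = physical address of the pgd to install in TTBR1_EL1
           */
          void __enable_mmu(unsigned long sctlr, phys_addr_t ttbr1_pgd)
          {
                  write_sysreg(phys_to_ttbr(__pa_symbol(idmap_pg_dir)), ttbr0_el1);
                  write_sysreg(phys_to_ttbr(ttbr1_pgd), ttbr1_el1);
                  isb();
                  write_sysreg(sctlr, sctlr_el1);         /* MMU on */
                  isb();
          }
      
      For now every caller passes swapper_pg_dir; the subsequent patches
      pass a transient pgd on the primary cpu's boot path.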
      Signed-off-by: Jun Yao <yaojun8558363@gmail.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      [Mark: simplify assembly, clarify commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 24 Sep, 2018 1 commit
  3. 21 Sep, 2018 5 commits
  4. 19 Sep, 2018 1 commit
  5. 18 Sep, 2018 2 commits
  6. 17 Sep, 2018 1 commit
    • arm64: sysreg: Clean up instructions for modifying PSTATE fields · 74e24828
      Suzuki K Poulose authored
      Instructions for modifying PSTATE fields that are not supported by
      older toolchains (e.g. PAN, UAO) are generated using macros. So far
      we have used the normal sys_reg() helper to define the PSTATE
      fields. While this works fine, it is really difficult to correlate
      the code with the Arm ARM definition.
      
      As per the Arm ARM, the PSTATE fields are defined using only the
      Op1 and Op2 fields, with fixed values for Op0 and CRn, while the
      CRm field is reserved for the instruction's immediate value. Using
      sys_reg() for these is therefore quite confusing.
      
      This patch cleans up the instruction helpers by bringing them
      in line with the Arm ARM definitions to make it easier to correlate
      code with the document. No functional changes.
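      
      The cleaned-up helpers take roughly this shape (a sketch in the
      spirit of the patch; Op1_shift, Op2_shift and CRm_shift are the
      sysreg encoding shifts already defined in asm/sysreg.h):
      
          #define pstate_field(op1, op2)  ((op1) << Op1_shift | (op2) << Op2_shift)
          #define PSTATE_Imm_shift        CRm_shift
      
          #define PSTATE_PAN              pstate_field(0, 4)
          #define PSTATE_UAO              pstate_field(0, 3)
      
          #define SET_PSTATE_PAN(x)       __emit_inst(0xd500401f | PSTATE_PAN | \
                                                  ((!!x) << PSTATE_Imm_shift))
          #define SET_PSTATE_UAO(x)       __emit_inst(0xd500401f | PSTATE_UAO | \
                                                  ((!!x) << PSTATE_Imm_shift))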
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. 14 Sep, 2018 9 commits
  8. 11 Sep, 2018 9 commits
  9. 10 Sep, 2018 5 commits
  10. 07 Sep, 2018 3 commits
    • Merge branch 'tlb/asm-generic' into aarch64/for-next/core · cbbac1c3
      Will Deacon authored
      As agreed on the list, merge in the core mmu_gather changes which allow
      us to track the levels of page-table being cleared. We'll build on this
      in our low-level flushing routines, and Nick and Peter also have plans
      for other architectures.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • MAINTAINERS: Add entry for MMU GATHER AND TLB INVALIDATION · 7526aa54
      Will Deacon authored
      We recently had to debug a TLB invalidation problem on the munmap()
      path, which was made more difficult than necessary because:
      
        (a) The MMU gather code had changed without people realising
        (b) Many people subtly misunderstood the operation of the MMU gather
            code and its interactions with RCU and arch-specific TLB invalidation
        (c) Untangling the intended behaviour involved educated guesswork and
            plenty of discussion
      
      Hopefully, we can avoid getting into this mess again by designating a
      cross-arch group of people to look after this code. It is not intended
      that they will have a separate tree, but they at least provide a point
      of contact for anybody working in this area and can co-ordinate any
      proposed future changes to the internal API.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • mm/memory: Move mmu_gather and TLB invalidation code into its own file · 196d9d8b
      Peter Zijlstra authored
      In preparation for maintaining the mmu_gather code as its own entity,
      move the implementation out of memory.c and into its own file.
      
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  11. 04 Sep, 2018 1 commit
    • asm-generic/tlb: Track which levels of the page tables have been cleared · a6d60245
      Will Deacon authored
      It is common for architectures with hugepage support to require only a
      single TLB invalidation operation per hugepage during unmap(), rather than
      iterating through the mapping in PAGE_SIZE increments. Currently,
      however, the level in the page table where the unmap() operation occurs
      is not stored in the mmu_gather structure, therefore forcing
      architectures to issue additional TLB invalidation operations or to give
      up and over-invalidate by e.g. invalidating the entire TLB.
      
      Ideally, we could add an interval rbtree to the mmu_gather structure,
      which would allow us to associate the correct mapping granule with the
      various sub-mappings within the range being invalidated. However, this
      is costly in terms of book-keeping and memory management, so instead we
      approximate by keeping track of the page table levels that are cleared
      and provide a means to query the smallest granule required for invalidation.
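      
      The approximation amounts to a few bits in struct mmu_gather plus
      a query helper, roughly as follows (a sketch in line with the
      asm-generic/tlb.h change):
      
          struct mmu_gather {
                  /* ... existing fields ... */
                  unsigned int    cleared_ptes : 1;
                  unsigned int    cleared_pmds : 1;
                  unsigned int    cleared_puds : 1;
                  unsigned int    cleared_p4ds : 1;
          };
      
          /* Smallest mapping granule that must be invalidated. */
          static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
          {
                  if (tlb->cleared_ptes)
                          return PAGE_SHIFT;
                  if (tlb->cleared_pmds)
                          return PMD_SHIFT;
                  if (tlb->cleared_puds)
                          return PUD_SHIFT;
                  if (tlb->cleared_p4ds)
                          return P4D_SHIFT;
                  return PAGE_SHIFT;
          }
      
          #define tlb_get_unmap_size(tlb) (1UL << tlb_get_unmap_shift(tlb))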
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>