1. 09 Aug, 2019 4 commits
    • arm64: dump: De-constify VA_START and KASAN_SHADOW_START · 99426e5e
      Steve Capper authored
      The kernel page table dumper assumes that the placement of VA regions is
      constant and determined at compile time. As we are about to introduce
      variable VA logic, we need to be able to determine certain regions at
      boot time.
      
      Specifically, VA_START and KASAN_SHADOW_START will depend on whether
      or not the system is booted with 52-bit kernel VAs.

      This patch adds logic to the kernel page table dumper so that these
      regions can be computed at boot time.
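
      A minimal userspace sketch of the idea follows (names and placements are
      illustrative, not the exact arch/arm64/mm/dump.c code): the marker table
      loses its const qualifier so that entries which depend on the runtime VA
      size can be filled in during init rather than baked in at compile time.

          #include <stdio.h>
          #include <stdbool.h>

          struct addr_marker {
                  unsigned long start_address;
                  const char *name;
          };

          /* No longer const: boot-time code may patch the entries below. */
          static struct addr_marker address_markers[] = {
                  { 0,    "Kernel VA space start (VA_START)" },  /* set at init */
                  { 0,    "Kasan shadow start" },                /* set at init */
                  { ~0UL, "End of kernel VA space" },            /* compile time */
          };

          static void ptdump_init(bool va52)
          {
                  unsigned long vabits = va52 ? 52 : 48;

                  /* Hypothetical placements, purely to show the runtime fill-in. */
                  address_markers[0].start_address = -(1UL << vabits);
                  address_markers[1].start_address = -(1UL << (vabits - 3));
          }

          int main(void)
          {
                  ptdump_init(false);   /* pretend the system booted with 48-bit VAs */
                  for (unsigned int i = 0;
                       i < sizeof(address_markers) / sizeof(*address_markers); i++)
                          printf("0x%016lx %s\n", address_markers[i].start_address,
                                 address_markers[i].name);
                  return 0;
          }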
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: kasan: Switch to using KASAN_SHADOW_OFFSET · 6bd1d0be
      Steve Capper authored
      KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
      line argument and affects the codegen of the inline address sanitiser.
      
      Essentially, for an example memory access:
          *ptr1 = val;
      The compiler will insert logic similar to the below:
          shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
          if (somethingWrong(shadowValue))
              flagAnError();
      
      This code sequence is inserted into many places, thus
      KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
      text.
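
      For concreteness, a sketch of that translation in C is below. The helper
      name mirrors the kernel's kasan_mem_to_shadow(), but the offset value is
      purely illustrative and not the arm64 configuration value.

          #include <stdint.h>
          #include <stdio.h>

          #define KASAN_SHADOW_SCALE_SHIFT  3   /* one shadow byte covers 8 bytes */
          #define KASAN_SHADOW_OFFSET       0xdffffc0000000000UL   /* illustrative */

          /* The mapping the compiler-inserted checks rely on:
           * shadow = (addr >> SCALE_SHIFT) + OFFSET.  Because OFFSET is a plain
           * constant, it ends up encoded at every instrumented load/store site. */
          static inline uint8_t *mem_to_shadow(unsigned long addr)
          {
                  return (uint8_t *)((addr >> KASAN_SHADOW_SCALE_SHIFT)
                                     + KASAN_SHADOW_OFFSET);
          }

          int main(void)
          {
                  unsigned long addr = 0xffff000012345678UL;

                  /* Compute (but never dereference) the shadow address. */
                  printf("shadow of 0x%lx is %p\n", addr, (void *)mem_to_shadow(addr));
                  return 0;
          }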
      
      If we want to run a single kernel binary with multiple address spaces,
      then we need to do this with KASAN_SHADOW_OFFSET fixed.
      
      Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide
      shadow addresses, we know that the end of the shadow region is constant
      w.r.t. VA space size:
          KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
      
      This means that if we increase the size of the VA space, the start of
      the KASAN region expands into lower addresses whilst the end of the
      KASAN region is fixed.
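
      As a worked example of this property, the standalone sketch below
      computes the shadow start and end for 48-bit and 52-bit VA spaces from a
      single (purely illustrative) KASAN_SHADOW_OFFSET; the end comes out
      identical for both sizes, while the start moves down as the VA space
      grows.

          #include <stdio.h>

          #define KASAN_SHADOW_SCALE_SHIFT  3
          #define KASAN_SHADOW_OFFSET       0xdffffc0000000000UL   /* illustrative */

          int main(void)
          {
                  /* End of the shadow region: independent of the VA space size. */
                  unsigned long end = (~0UL >> KASAN_SHADOW_SCALE_SHIFT)
                                      + KASAN_SHADOW_OFFSET;

                  for (int vabits = 48; vabits <= 52; vabits += 4) {
                          /* The shadow covers 1/8th of an N-bit VA space. */
                          unsigned long start =
                                  end - (1UL << (vabits - KASAN_SHADOW_SCALE_SHIFT));

                          printf("%d-bit VAs: shadow 0x%016lx - 0x%016lx\n",
                                 vabits, start, end);
                  }
                  return 0;
          }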
      
      Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
      build scripts with the VA size used as a parameter. (There are build-time
      checks in the C code too to ensure that expected values are being
      derived.) It is sufficient, and indeed a simplification, to remove
      the build scripts (and build-time checks) entirely and instead provide
      KASAN_SHADOW_OFFSET values directly.
      
      This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
      arm64 Makefile, and instead we adopt the approach used by x86 to supply
      offset values in Kconfig. To help debug/develop future VA space changes,
      the Makefile logic has been preserved in a script file in the arm64
      Documentation folder.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: mm: Flip kernel VA space · 14c127c9
      Steve Capper authored
      In order to allow for a KASAN shadow that changes size at boot time, one
      must fix KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
      start address. Also, it is highly desirable to maintain the same
      function addresses in the kernel .text between VA sizes. Both of these
      requirements mean that the kernel address space halves must be flipped so
      that the direct linear map occupies the lower addresses.
      
      This patch puts the direct linear map in the lower addresses of the
      kernel VA range and everything else in the higher ranges.
      
      We need to adjust:
       *) KASAN shadow region placement logic,
       *) KASAN_SHADOW_OFFSET computation logic,
       *) virt_to_phys, phys_to_virt checks (see the sketch below),
       *) page table dumper.
      
      These are all small changes that need to take place atomically, so they
      are bundled into this commit.
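
      The virt_to_phys / phys_to_virt side of this can be illustrated with the
      small sketch below: once the linear map sits in the lower half of the
      kernel VA range, classifying an address only needs a test of bit
      (VA_BITS - 1) rather than a comparison against PAGE_OFFSET. The macro
      name follows arch/arm64/include/asm/memory.h, but the exact definition
      here is a sketch, not the final kernel code.

          #include <stdio.h>

          #define VA_BITS                 48
          #define BIT(nr)                 (1UL << (nr))

          /* Lower half of the kernel VA space => linear map address. */
          #define __is_lm_address(addr)   (!((unsigned long)(addr) & BIT(VA_BITS - 1)))

          int main(void)
          {
                  unsigned long linear = 0xffff000012345678UL;   /* lower half */
                  unsigned long image  = 0xffff800010080000UL;   /* upper half */

                  printf("linear: %d  image: %d\n",
                         __is_lm_address(linear), __is_lm_address(image));
                  return 0;
          }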
      
      As part of the re-arrangement, a guard region of 2MB (to preserve
      alignment for the fixed map) is added after the vmemmap. Otherwise the
      vmemmap could intersect with IS_ERR pointers.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START · 9cb1c5dd
      Steve Capper authored
      Currently there are assumptions about the alignment of VMEMMAP_START
      and PAGE_OFFSET that won't be valid after this series is applied.
      
      These assumptions are in the form of bitwise operators being used
      instead of addition and subtraction when calculating addresses.
      
      This patch replaces these bitwise operators with addition/subtraction.
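
      As a standalone illustration of why the bitwise forms stop being safe,
      the sketch below compares base | offset with base + offset. The two only
      agree while the base is aligned to (at least) the span covered by the
      offset, which is exactly the assumption that no longer holds; the values
      used here are purely illustrative.

          #include <stdio.h>

          int main(void)
          {
                  unsigned long off = 0x0000123456789000UL;   /* offset into the region */

                  /* Base aligned well beyond the offset width: OR and ADD agree. */
                  unsigned long aligned_base   = 0xffff000000000000UL;
                  /* Base with lower bits set (as with the new layout): they differ. */
                  unsigned long unaligned_base = 0xffff700000200000UL;

                  printf("aligned:   or=0x%016lx add=0x%016lx\n",
                         aligned_base | off, aligned_base + off);
                  printf("unaligned: or=0x%016lx add=0x%016lx\n",
                         unaligned_base | off, unaligned_base + off);
                  return 0;
          }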
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  2. 05 Aug, 2019 1 commit
  3. 04 Aug, 2019 10 commits
  4. 03 Aug, 2019 25 commits