1. 03 Nov, 2017 15 commits
  2. 02 Nov, 2017 11 commits
  3. 31 Oct, 2017 1 commit
  4. 30 Oct, 2017 2 commits
      arm64: prevent regressions in compressed kernel image size when upgrading to binutils 2.27 · fd9dde6a
      Nick Desaulniers authored
      Upon upgrading to binutils 2.27, we found that our lz4 and gzip
      compressed kernel images were significantly larger, resulting in 10ms
      boot time regressions.
      
      As noted by Rahul:
      "aarch64 binaries use RELA relocations, where each relocation entry
      includes an addend value. This is similar to x86_64.  On x86_64, the
      addend values are also stored at the relocation offset for relative
      relocations. This is an optimization: in the case where code does not
      need to be relocated, the loader can simply skip processing relative
      relocations.  In binutils-2.25, both bfd and gold linkers did this for
      x86_64, but only the gold linker did this for aarch64.  The kernel build
      here is using the bfd linker, which stored zeroes at the relocation
      offsets for relative relocations.  Since a set of zeroes compresses
      better than a set of non-zero addend values, this behavior was resulting
      in much better lz4 compression.
      
      The bfd linker in binutils-2.27 is now storing the actual addend values
      at the relocation offsets. The behavior is now consistent with what it
      does for x86_64 and what gold linker does for both architectures.  The
      change happened in this upstream commit:
      https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=1f56df9d0d5ad89806c24e71f296576d82344613
      Since a bunch of zeroes got replaced by non-zero addend values, we see
      the side effect of lz4 compressed image being a bit bigger.
      
      To get the old behavior from the bfd linker, the
      "--no-apply-dynamic-relocs" flag can be used:
      $ LDFLAGS="--no-apply-dynamic-relocs" make
      With this flag, the compressed image size is back to what it was with
      binutils-2.25.
      
      If the kernel is using ASLR, there aren't additional runtime costs to
      --no-apply-dynamic-relocs, as the relocations will need to be applied
      again anyway after the kernel is relocated to a random address.
      
      If the kernel is not using ASLR, then presumably the current default
      behavior of the linker is better. Since the static linker performed the
      dynamic relocs, and the kernel is not moved to a different address at
      load time, it can skip applying the relocations all over again."
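      
      To make the relocation mechanics above concrete, here is a small C
      sketch (illustrative only: the segment, base address, offset and addend
      values are made up) of how a loader applies one AArch64 RELATIVE RELA
      entry. Whether the linker left zeroes at r_offset or pre-stored the
      addend there, a relocating loader overwrites those bytes with
      load_base + r_addend, which is why --no-apply-dynamic-relocs adds no
      runtime cost once the kernel is relocated anyway:
      
      #include <elf.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>
      
      int main(void)
      {
              uint8_t image[32] = { 0 };        /* stand-in for a loaded segment */
              uint64_t load_base = 0x40080000;  /* base address chosen at boot */
      
              /* One relative relocation: "store load_base + addend at r_offset". */
              Elf64_Rela rela = {
                      .r_offset = 8,
                      .r_info   = ELF64_R_INFO(0, R_AARCH64_RELATIVE),
                      .r_addend = 0x1234,
              };
      
              /* bfd ld <= 2.25 left the eight bytes at r_offset as zeroes in the
               * on-disk image (compresses well); bfd ld 2.27 also stores the
               * addend value there (compresses worse). Either way, a relocating
               * loader overwrites them: */
              uint64_t value = load_base + rela.r_addend;
              memcpy(image + rela.r_offset, &value, sizeof(value));
      
              printf("word at offset %u after relocation: 0x%llx\n",
                     (unsigned)rela.r_offset, (unsigned long long)value);
              return 0;
      }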
      
      Some measurements:
      
      $ ld -v
      GNU ld (binutils-2.25-f3d35cf6) 2.25.51.20141117
                          ^
      $ ls -l vmlinux
      -rwxr-x--- 1 ndesaulniers eng 300652760 Oct 26 11:57 vmlinux
      $ ls -l Image.lz4-dtb
      -rw-r----- 1 ndesaulniers eng 16932627 Oct 26 11:57 Image.lz4-dtb
      
      $ ld -v
      GNU ld (binutils-2.27-53dd00a1) 2.27.0.20170315
                          ^
      pre patch:
      $ ls -l vmlinux
      -rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 11:43 vmlinux
      $ ls -l Image.lz4-dtb
      -rw-r----- 1 ndesaulniers eng 18159474 Oct 26 11:43 Image.lz4-dtb
      
      post patch:
      $ ls -l vmlinux
      -rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 12:06 vmlinux
      $ ls -l Image.lz4-dtb
      -rw-r----- 1 ndesaulniers eng 16932466 Oct 26 12:06 Image.lz4-dtb
      
      Per Siqi's measurements with gzip:
      binutils 2.27 with this patch (with --no-apply-dynamic-relocs):
      Image 41535488
      Image.gz 13404067
      
      binutils 2.27 without this patch (without --no-apply-dynamic-relocs):
      Image 41535488
      Image.gz 14125516
      
      Any compression scheme, not just GZIP and LZ4, should be able to get
      better results from the longer runs of zeros.
      
      10ms boot time savings isn't anything to get excited about, but users of
      arm64+compression+bfd-2.27 should not have to pay a penalty for no
      runtime improvement.
      Reported-by: Gopinath Elanchezhian <gelanchezhian@google.com>
      Reported-by: Sindhuri Pentyala <spentyala@google.com>
      Reported-by: Wei Wang <wvw@google.com>
      Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Suggested-by: Rahul Chaudhry <rahulchaudhry@google.com>
      Suggested-by: Siqi Lin <siqilin@google.com>
      Suggested-by: Stephen Hines <srhines@google.com>
      Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: added comment to Makefile]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      arm64: Implement arch-specific pte_access_permitted() · 6218f96c
      Catalin Marinas authored
      The generic pte_access_permitted() implementation only checks for
      pte_present() (together with the write permission where applicable).
      However, for both kernel ptes and PROT_NONE mappings pte_present() also
      returns true on arm64 even though such mappings are not user accessible.
      Additionally, arm64 now supports execute-only user mappings (PROT_EXEC
      only), which are implemented by clearing the PTE_USER bit.
      
      With this patch, the arm64 implementation of pte_access_permitted()
      checks for the PTE_VALID and PTE_USER bits, together with the write
      permission where applicable.
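      
      A minimal C sketch of that check (illustrative only: the bit positions
      and the read-only-based write test below are stand-ins, not the actual
      definitions from the arm64 pgtable headers):
      
      #include <stdbool.h>
      #include <stdint.h>
      
      /* Illustrative stand-ins for the arm64 PTE bits. */
      #define PTE_VALID   (UINT64_C(1) << 0)
      #define PTE_USER    (UINT64_C(1) << 6)   /* AP[1]: unprivileged access */
      #define PTE_RDONLY  (UINT64_C(1) << 7)   /* AP[2]: read-only */
      
      typedef uint64_t pte_t;
      
      /* User access is permitted only if the pte is both valid and has the
       * user bit set; kernel ptes, PROT_NONE and execute-only mappings fail
       * this test even though a generic pte_present() check would pass. */
      static inline bool pte_access_permitted(pte_t pte, bool write)
      {
              if ((pte & (PTE_VALID | PTE_USER)) != (PTE_VALID | PTE_USER))
                      return false;
              return !write || !(pte & PTE_RDONLY);
      }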
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 27 Oct, 2017 4 commits
  6. 25 Oct, 2017 3 commits
  7. 24 Oct, 2017 4 commits