  1. 14 Apr, 2020 1 commit
  2. 25 Mar, 2020 1 commit
  3. 21 Mar, 2020 1 commit
  4. 12 Mar, 2020 1 commit
  5. 23 Dec, 2019 1 commit
    • x86/crash: Define arch_crash_save_vmcoreinfo() if CONFIG_CRASH_CORE=y · 8757dc97
      Omar Sandoval authored
      On x86 kernels configured with CONFIG_PROC_KCORE=y and
      CONFIG_KEXEC_CORE=n, the vmcoreinfo note in /proc/kcore is incomplete.
      
      Specifically, it is missing arch-specific information like the KASLR
      offset and whether 5-level page tables are enabled. This breaks
      applications like drgn [1] and crash [2], which need this information
      for live debugging via /proc/kcore.
      
      This happens because:
      
      1. CONFIG_PROC_KCORE selects CONFIG_CRASH_CORE.
      2. kernel/crash_core.c (compiled if CONFIG_CRASH_CORE=y) calls
         arch_crash_save_vmcoreinfo() to get the arch-specific parts of
         vmcoreinfo. If it is not defined, then it uses a no-op fallback.
      3. x86 defines arch_crash_save_vmcoreinfo() in
         arch/x86/kernel/machine_kexec_*.c, which is only compiled if
         CONFIG_KEXEC_CORE=y.
      
      Therefore, an x86 kernel with CONFIG_CRASH_CORE=y and
      CONFIG_KEXEC_CORE=n uses the no-op fallback and gets incomplete
      vmcoreinfo data. This isn't relevant to kdump, which requires
      CONFIG_KEXEC_CORE. It only affects applications which read vmcoreinfo at
      runtime, like the ones mentioned above.
      
      Fix it by moving arch_crash_save_vmcoreinfo() into two new
      arch/x86/kernel/crash_core_*.c files, which are gated behind
      CONFIG_CRASH_CORE.
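
      Roughly, the resulting split looks like this (a sketch for illustration;
      the exact vmcoreinfo fields written by the new files are not reproduced
      here):

        /* kernel/crash_core.c: weak no-op fallback used when the arch does
         * not provide an override. */
        void __weak arch_crash_save_vmcoreinfo(void)
        {
        }

        /* arch/x86/kernel/crash_core_64.c, now built whenever
         * CONFIG_CRASH_CORE=y: record the x86-specific data, e.g. the KASLR
         * offset. */
        void arch_crash_save_vmcoreinfo(void)
        {
                vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
        }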
      
      1: https://github.com/osandov/drgn/blob/73dd7def1217e24cc83d8ca95c995decbd9ba24c/libdrgn/program.c#L385
      2: https://github.com/crash-utility/crash/commit/60a42d709280cdf38ab06327a5b4fa9d9208ef86
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Kairui Song <kasong@redhat.com>
      Cc: Lianbo Jiang <lijiang@redhat.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/0589961254102cca23e3618b96541b89f2b249e2.1576858905.git.osandov@fb.com
  6. 26 Nov, 2019 1 commit
  7. 15 Nov, 2019 1 commit
  8. 07 Nov, 2019 1 commit
  9. 18 May, 2019 1 commit
  10. 25 Apr, 2019 1 commit
  11. 06 Jan, 2019 1 commit
    • jump_label: move 'asm goto' support test to Kconfig · e9666d10
      Masahiro Yamada authored
      Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label".
      
      Whether jump labels are actually used is controlled by HAVE_JUMP_LABEL,
      which is defined like this:
      
        #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
        # define HAVE_JUMP_LABEL
        #endif
      
      We can improve this by testing 'asm goto' support in Kconfig and then
      making JUMP_LABEL depend on CC_HAS_ASM_GOTO.
      
      The ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL will
      match the kernel's real capability.
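
      For reference, the kind of probe such a Kconfig test compiles is roughly
      (illustrative; not the kernel's exact compiler check):

        /* If this builds, the compiler supports 'asm goto'. */
        static int has_asm_goto(void)
        {
                asm goto("" : : : : yes);
                return 0;
        yes:
                return 1;
        }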
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
  12. 11 Dec, 2018 1 commit
  13. 13 Nov, 2018 1 commit
    • x86/ima: define arch_ima_get_secureboot · 0914ade2
      Nayna Jain authored
      Distros are concerned about totally disabling the kexec_load syscall.
      As a compromise, the kexec_load syscall will only be disabled when
      CONFIG_KEXEC_VERIFY_SIG is configured and the system is booted with
      secureboot enabled.
      
      This patch defines a new arch-specific function, arch_ima_get_secureboot(),
      to retrieve the secure boot state of the system.
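
      A hedged sketch of what the x86 hook can look like when booting via EFI
      (illustrative; the exact check in the patch may differ):

        bool arch_ima_get_secureboot(void)
        {
                /* Secure boot state as reported by the EFI firmware. */
                if (efi_enabled(EFI_BOOT) &&
                    boot_params.secure_boot == efi_secureboot_mode_enabled)
                        return true;
                return false;
        }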
      Signed-off-by: Nayna Jain <nayna@linux.ibm.com>
      Suggested-by: Seth Forshee <seth.forshee@canonical.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Dave Young <dyoung@redhat.com>
      Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
  14. 03 Jul, 2018 1 commit
    • x86/paravirt: Make native_save_fl() extern inline · d0a8d937
      Nick Desaulniers authored
      native_save_fl() is marked static inline, but because
      arch/x86/kernel/paravirt.c takes its address for use as a function
      pointer, an out-of-line copy MUST exist.
      
      paravirt's use of native_save_fl() also requires that no GPRs other than
      %rax are clobbered.
      
      Compilers use different heuristics to decide when to emit stack guard
      code, and emitting it can break paravirt's callee-saved assumption by
      clobbering %rcx.
      
      Marking a function definition extern inline means that if this version
      cannot be inlined, then the out-of-line version will be preferred. By
      having the out-of-line version implemented in assembly, it cannot be
      instrumented with a stack protector, which might violate the custom
      calling conventions that code like paravirt relies on.
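
      A hedged sketch of the resulting pattern (close to, but not necessarily
      identical to, the final code): the extern inline definition lives in the
      header, and a matching out-of-line symbol is provided in an assembly
      file for the paravirt function pointer to use:

        /* arch/x86/include/asm/irqflags.h (sketch) */
        extern inline unsigned long native_save_fl(void);
        extern inline unsigned long native_save_fl(void)
        {
                unsigned long flags;

                asm volatile("pushf ; pop %0"
                             : "=rm" (flags)
                             : /* no input */
                             : "memory");
                return flags;
        }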
      
      The semantics of extern inline have changed since gnu89. This means that
      folks using GCC versions >= 5.1 may see symbol redefinition errors at
      link time for subdirs that override KBUILD_CFLAGS (making the C standard
      used implicit) regardless of this patch. This has been cleaned up
      earlier in the patch set, but is left as a note in the commit message
      for future travelers.
      
      Reports:
       https://lkml.org/lkml/2018/5/7/534
       https://github.com/ClangBuiltLinux/linux/issues/16
      
      Discussion:
       https://bugs.llvm.org/show_bug.cgi?id=37512
       https://lkml.org/lkml/2018/5/24/1371
      
      Thanks to the many folks that participated in the discussion.
      Debugged-by: Alistair Strachan <astrachan@google.com>
      Debugged-by: Matthias Kaehlcke <mka@chromium.org>
      Suggested-by: Arnd Bergmann <arnd@arndb.de>
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Suggested-by: Tom Stellar <tstellar@redhat.com>
      Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
      Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
      Acked-by: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@redhat.com
      Cc: akataria@vmware.com
      Cc: akpm@linux-foundation.org
      Cc: andrea.parri@amarulasolutions.com
      Cc: ard.biesheuvel@linaro.org
      Cc: aryabinin@virtuozzo.com
      Cc: astrachan@google.com
      Cc: boris.ostrovsky@oracle.com
      Cc: brijesh.singh@amd.com
      Cc: caoj.fnst@cn.fujitsu.com
      Cc: geert@linux-m68k.org
      Cc: ghackmann@google.com
      Cc: gregkh@linuxfoundation.org
      Cc: jan.kiszka@siemens.com
      Cc: jarkko.sakkinen@linux.intel.com
      Cc: joe@perches.com
      Cc: jpoimboe@redhat.com
      Cc: keescook@google.com
      Cc: kirill.shutemov@linux.intel.com
      Cc: kstewart@linuxfoundation.org
      Cc: linux-efi@vger.kernel.org
      Cc: linux-kbuild@vger.kernel.org
      Cc: manojgupta@google.com
      Cc: mawilcox@microsoft.com
      Cc: michal.lkml@markovi.net
      Cc: mjg59@google.com
      Cc: mka@chromium.org
      Cc: pombredanne@nexb.com
      Cc: rientjes@google.com
      Cc: rostedt@goodmis.org
      Cc: thomas.lendacky@amd.com
      Cc: tweek@google.com
      Cc: virtualization@lists.linux-foundation.org
      Cc: will.deacon@arm.com
      Cc: yamada.masahiro@socionext.com
      Link: http://lkml.kernel.org/r/20180621162324.36656-4-ndesaulniers@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  15. 20 Mar, 2018 1 commit
  16. 08 Mar, 2018 1 commit
  17. 23 Jan, 2018 1 commit
    • x86/ftrace: Fix ORC unwinding from ftrace handlers · e2ac83d7
      Josh Poimboeuf authored
      Steven Rostedt discovered that the ftrace stack tracer is broken when
      it's used with the ORC unwinder.  The problem is that objtool is
      instructed by the Makefile to ignore the ftrace_64.S code, so it doesn't
      generate any ORC data for it.
      
      Fix it by making the asm code objtool-friendly:
      
      - Objtool doesn't like the fact that save_mcount_regs pushes RBP at the
        beginning, but it's never restored (directly, at least).  So just skip
        the original RBP push, which is only needed for frame pointers anyway.
      
      - Annotate some functions as normal callable functions with
        ENTRY/ENDPROC.
      
      - Add an empty unwind hint to return_to_handler().  The return address
        isn't on the stack, so there's nothing ORC can do there.  It will just
        punt in the unlikely case it tries to unwind from that code.
      
      With all that fixed, remove the OBJECT_FILES_NON_STANDARD Makefile
      annotation so objtool can read the file.
      
      Link: http://lkml.kernel.org/r/20180123040746.ih4ep3tk4pbjvg7c@treble
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  18. 14 Jan, 2018 1 commit
  19. 08 Nov, 2017 1 commit
    • x86/umip: Add emulation code for UMIP instructions · 1e5db223
      Ricardo Neri authored
      The User-Mode Instruction Prevention feature present in recent Intel
      processors prevents a group of instructions (sgdt, sidt, sldt, smsw, and
      str) from being executed with CPL > 0; if they are executed anyway, a
      general protection fault is issued.
      
      Rather than relaying the general protection fault caused by the
      UMIP-protected instructions to user space (in the form of a SIGSEGV
      signal), the fault can be trapped and the instruction emulated to provide
      a dummy result. This both preserves the current kernel behavior and
      avoids revealing the system resources that UMIP intends to protect (i.e.,
      the locations of the global descriptor and interrupt descriptor tables,
      the segment selectors of the local descriptor table, the value of the
      task register, and the contents of the CR0 register).
      
      This emulation is needed because certain applications (e.g., WineHQ and
      DOSEMU2) rely on this subset of instructions to function. Given that sldt
      and str are not commonly used in programs that run on WineHQ or DOSEMU2,
      they are not emulated. Also, emulation is provided only for 32-bit
      processes; 64-bit processes that attempt to use the instructions that UMIP
      protects will receive the SIGSEGV signal issued as a consequence of the
      general protection fault.
      
      The instructions protected by UMIP can be split into two groups: those
      which return a kernel memory address (sgdt and sidt) and those which
      return a value (smsw, sldt, and str; the last two are not emulated).
      
      For the instructions that return a kernel memory address, applications such
      as WineHQ rely on the result being located in the kernel memory space, not
      the actual location of the table. The result is emulated as a hard-coded
      value that lies close to the top of the kernel memory. The limit for the
      GDT and the IDT are set to zero.
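
      As an illustration of that behavior (the names and the dummy address
      below are hypothetical, not the patch's actual constants):

        /* Dummy descriptor-table register contents handed back to a 32-bit
         * process instead of the table's real location. */
        struct umip_dummy_dtr {
                u16 limit;
                u32 base;
        } __packed;

        static void emulated_sgdt_result(struct umip_dummy_dtr *dtr)
        {
                dtr->limit = 0;           /* limit reported as zero */
                dtr->base  = 0xfffe0000;  /* assumed value near the top of
                                             kernel memory */
        }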
      
      The instruction smsw is emulated to return the value that the CR0
      register has at boot time, as set in head_32.
      
      Care is taken to appropriately emulate the results when segmentation is
      used. That is, rather than relying on USER_DS and USER_CS, the function
      insn_get_addr_ref() inspects the segment descriptor pointed by the
      registers in pt_regs. This ensures that we correctly obtain the segment
      base address and the address and operand sizes even if the user space
      application uses a local descriptor table.
      Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chen Yucong <slaoub@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Huang Rui <ray.huang@amd.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V. Shankar <ravi.v.shankar@intel.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: ricardo.neri@intel.com
      Link: http://lkml.kernel.org/r/1509935277-22138-8-git-send-email-ricardo.neri-calderon@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 02 Nov, 2017 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Greg Kroah-Hartman authored
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boiler plate text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to apply to
      a file was done in a spreadsheet of side-by-side results from the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files, created by Philippe Ombredanne.  Philippe prepared the
      base worksheet and did an initial spot review of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      The criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source.
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when neither scanner could find any license traces, the file was
         considered to have no license information in it, and the top-level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note"; otherwise it was "GPL-2.0".  The results were:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based on an older version of FOSSology in part, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and they have been fixed to reflect
      the correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types.)  Finally Greg ran the script using the .csv files to
      generate the patches.
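
      For reference, the resulting tags follow the usual kernel placement
      (shown here as a reminder of the two comment styles, not as part of the
      patch text itself):

        /* First line of a .c source file: */
        // SPDX-License-Identifier: GPL-2.0

        /* First line of a header file: */
        /* SPDX-License-Identifier: GPL-2.0 */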
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  21. 20 Oct, 2017 1 commit
    • x86/kasan: Use the same shadow offset for 4- and 5-level paging · 12a8cc7f
      Andrey Ryabinin authored
      We are going to support boot-time switching between 4- and 5-level
      paging. For KASAN it means we cannot have different KASAN_SHADOW_OFFSET
      for different paging modes: the constant is passed to gcc to generate
      code and cannot be changed at runtime.
      
      This patch changes KASAN code to use 0xdffffc0000000000 as shadow offset
      for both 4- and 5-level paging.
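
      For context, the generic KASAN address-to-shadow mapping that this single
      compile-time constant feeds into looks roughly like this (a sketch of the
      usual formula, not a quote from the patch):

        #define KASAN_SHADOW_SCALE_SHIFT  3
        #define KASAN_SHADOW_OFFSET       0xdffffc0000000000UL

        /* Every 8 bytes of address space map to one shadow byte. */
        static inline void *kasan_mem_to_shadow(const void *addr)
        {
                return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET;
        }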
      
      For 5-level paging this means that the shadow memory region is no longer
      aligned to a PGD boundary and we have to handle the unaligned parts of
      the region properly.
      
      In addition, we have to exclude paravirt code from KASAN instrumentation
      as we now use set_pgd() before KASAN is fully ready.
      
      [kirill.shutemov@linux.intel.com: cleanup, changelog message]
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20170929140821.37654-4-kirill.shutemov@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  22. 14 Oct, 2017 1 commit
  23. 28 Sep, 2017 1 commit
  24. 29 Aug, 2017 2 commits
  25. 26 Jul, 2017 1 commit
    • x86/unwind: Add the ORC unwinder · ee9f8fce
      Josh Poimboeuf authored
      Add the new ORC unwinder which is enabled by CONFIG_ORC_UNWINDER=y.
      It plugs into the existing x86 unwinder framework.
      
      It relies on objtool to generate the needed .orc_unwind and
      .orc_unwind_ip sections.
      
      For more details on why ORC is used instead of DWARF, see
      Documentation/x86/orc-unwinder.txt - but the short version is
      that it's a simplified, fundamentally more robust debuginfo
      data structure, which also allows up to two orders of magnitude
      faster lookups than the DWARF unwinder - which matters to
      profiling workloads like perf.
      
      Thanks to Andy Lutomirski for the performance improvement ideas:
      splitting the ORC unwind table into two parallel arrays and creating a
      fast lookup table to search a subset of the unwind table.
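
      A rough sketch of the two-parallel-array lookup described above (names
      and types are illustrative; the real tables store ip-relative offsets
      and add a fast lookup table on top):

        /* ip_table[] (.orc_unwind_ip) is sorted by address and orc_table[]
         * (.orc_unwind) is parallel to it; finding the entry that covers a
         * given instruction pointer is a binary search for the last address
         * that is <= ip. */
        static int orc_find_index(const unsigned long *ip_table,
                                  unsigned int num, unsigned long ip)
        {
                unsigned int lo = 0, hi = num;

                while (lo < hi) {
                        unsigned int mid = lo + (hi - lo) / 2;

                        if (ip_table[mid] <= ip)
                                lo = mid + 1;
                        else
                                hi = mid;
                }
                return lo ? (int)(lo - 1) : -1;   /* index into orc_table[] */
        }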
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/0a6cbfb40f8da99b7a45a1a8302dc6aef16ec812.1500938583.git.jpoimboe@redhat.com
      [ Extended the changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  26. 30 Jun, 2017 1 commit
  27. 29 Jun, 2017 1 commit
  28. 24 Mar, 2017 3 commits
  29. 28 Feb, 2017 1 commit
  30. 31 Jan, 2017 1 commit
    • x86/mm: Remove CONFIG_DEBUG_NX_TEST · 3ad38ceb
      Kees Cook authored
      CONFIG_DEBUG_NX_TEST has been broken since CONFIG_DEBUG_SET_MODULE_RONX=y
      was added in v2.6.37 via:
      
        84e1c6bb ("x86: Add RO/NX protection for loadable kernel modules")
      
      since the exception table was then made read-only.
      
      Additionally, the manually constructed extables were never fixed when
      relative extables were introduced in v3.5 via:
      
        70627654 ("x86, extable: Switch to relative exception table entries")
      
      However, relative extables won't work for test_nx.c, since test instruction
      memory areas may be more than INT_MAX away from an executable fixup
      (e.g. stack and heap too far away from executable memory with the fixup).
      
      Since clearly no one has been using this code for a while now, and similar
      tests exist in LKDTM, this should just be removed entirely.
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jinbum Park <jinb.park7@gmail.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170131003711.GA74048@beast
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  31. 30 Nov, 2016 1 commit
  32. 29 Nov, 2016 1 commit
    • x86/tsc: Store and check TSC ADJUST MSR · 8b223bc7
      Thomas Gleixner authored
      The TSC_ADJUST MSR shows whether the TSC has been modified. This is
      helpful in two respects:
      
      1) It allows detection of BIOS wreckage, where SMM code tries to 'hide'
         the cycles it spends by storing the TSC value at SMM entry and
         restoring it at SMM exit. On affected machines the TSCs slowly run out
         of sync up to the point where the clocksource watchdog (if available)
         detects it.
      
         The TSC_ADJUST MSR allows the TSC modification to be detected before
         that and eventually undone. This is also important for SoCs which have
         no watchdog clocksource, where TSC wreckage therefore cannot be
         detected and acted upon otherwise.
      
      2) All threads in a package are required to have the same TSC_ADJUST
         value. Broken BIOSes violate that, and as a result the TSC
         synchronization check fails.
      
         The TSC_ADJUST MSR allows the deviation to be detected when a CPU
         comes online; if it is detected, the MSR is set to the value of an
         already online CPU in the same package. This also reduces the number
         of sync tests, because with that in place the test is only required
         for the first CPU in a package.
      
         In principle all CPUs in a system should have the same TSC_ADJUST
         value even across packages, but with physical CPU hotplug this
         assumption is not true because the TSC starts with power on, so
         physical hotplug has to do some trickery to bring the TSC into sync
         with the already running packages, which requires using a TSC_ADJUST
         value different from that of the CPUs which were powered up earlier.
      
         A final enhancement is the opportunity to compensate for unsynced TSCs
         across nodes at boot time and make the TSC usable that way. It won't
         help for TSCs which drift apart due to frequency skew between
         packages, but that gets detected by the clocksource watchdog later.
      
      The first step toward this is to store the TSC_ADJUST value of a starting
      CPU and compare it with the value of an already online CPU in the same
      package. If they differ, emit a warning and adjust it to the reference
      value. The !SMP version just stores the boot value for later verification.
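
      A hedged sketch of that first step (simplified; not the exact code added
      to the TSC sync logic):

        /* Compare this CPU's TSC_ADJUST with the package reference value and
         * pull it back in line if the firmware left it different. */
        static void tsc_check_adjust(s64 ref, unsigned int cpu)
        {
                s64 adj;

                rdmsrl(MSR_IA32_TSC_ADJUST, adj);
                if (adj != ref) {
                        pr_warn("TSC ADJUST differs on CPU%u: %lld != %lld\n",
                                cpu, adj, ref);
                        wrmsrl(MSR_IA32_TSC_ADJUST, ref);
                }
        }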
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Link: http://lkml.kernel.org/r/20161119134017.655323776@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  33. 24 Nov, 2016 1 commit
    • x86: Enable Intel Turbo Boost Max Technology 3.0 · 5e76b2ab
      Tim Chen authored
      On platforms supporting Intel Turbo Boost Max Technology 3.0, the maximum
      turbo frequencies of some cores in a CPU package may be higher than for
      the other cores in the same package.  In that case, better performance
      (and possibly lower energy consumption as well) can be achieved by
      making the scheduler prefer to run tasks on the CPUs with higher max
      turbo frequencies.
      
      To that end, set up a core priority metric to abstract the core
      preferences based on the maximum turbo frequency.  In that metric,
      the cores with higher maximum turbo frequencies are higher-priority
      than the other cores in the same package and that causes the scheduler
      to favor them when making load-balancing decisions using the asymmetric
      packing approach.  At the same time, the priority of SMT threads with a
      higher CPU number is reduced so as to avoid scheduling tasks on all of
      the threads that belong to a favored core before all of the other cores
      have been given a task to run.
      
      The priority metric will be initialized by the P-state driver with the
      help of the sched_set_itmt_core_prio() function.  The P-state driver
      will also determine whether or not ITMT is supported by the platform
      and will call sched_set_itmt_support() to indicate that.
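
      A hedged illustration of how a P-state driver might feed this metric (the
      two sched_set_itmt_* helpers are named in this changelog; the loop and
      the max_turbo_ratio_of() helper are assumptions made for the example):

        static void example_enable_itmt(void)
        {
                int cpu;

                /* Higher max turbo frequency => higher scheduling priority. */
                for_each_online_cpu(cpu)
                        sched_set_itmt_core_prio(max_turbo_ratio_of(cpu), cpu);

                /* Tell the scheduler the platform supports ITMT. */
                sched_set_itmt_support();
        }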
      Co-developed-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Co-developed-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: linux-pm@vger.kernel.org
      Cc: peterz@infradead.org
      Cc: jolsa@redhat.com
      Cc: rjw@rjwysocki.net
      Cc: linux-acpi@vger.kernel.org
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: bp@suse.de
      Link: http://lkml.kernel.org/r/cd401ccdff88f88c8349314febdc25d51f7c48f7.1479844244.git.tim.c.chen@linux.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  34. 20 Sep, 2016 1 commit
    • x86/unwind: Add new unwind interface and implementations · 7c7900f8
      Josh Poimboeuf authored
      The x86 stack dump code is a bit of a mess.  dump_trace() uses
      callbacks, and each user of it seems to have slightly different
      requirements, so there are several slightly different callbacks floating
      around.
      
      Also there are some upcoming features which will need more changes to
      the stack dump code, including the printing of stack pt_regs, reliable
      stack detection for live patching, and a DWARF unwinder.  Each of those
      features would at least need more callbacks and/or callback interfaces,
      resulting in a much bigger mess than what we have today.
      
      Before doing all that, we should try to clean things up and replace
      dump_trace() with something cleaner and more flexible.
      
      The new unwinder is a simple state machine which was heavily inspired by
      a suggestion from Andy Lutomirski:
      
        https://lkml.kernel.org/r/CALCETrUbNTqaM2LRyXGRx=kVLRPeY5A3Pc6k4TtQxF320rUT=w@mail.gmail.com
      
      It's also similar to the libunwind API:
      
        http://www.nongnu.org/libunwind/man/libunwind(3).html
      
      Some of its advantages:
      
      - Simplicity: no more callback sprawl and less code duplication.
      
      - Flexibility: it allows the caller to stop and inspect the stack state
        at each step in the unwinding process.
      
      - Modularity: the unwinder code, console stack dump code, and stack
        metadata analysis code are all better separated so that changing one
        of them shouldn't have much of an impact on any of the others.
      
      Two implementations are added which conform to the new unwind interface:
      
      - The frame pointer unwinder which is used for CONFIG_FRAME_POINTER=y.
      
      - The "guess" unwinder which is used for CONFIG_FRAME_POINTER=n.  This
        isn't an "unwinder" per se.  All it does is scan the stack for kernel
        text addresses.  But with no frame pointers, guesses are better than
        nothing in most cases.
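
      A hedged sketch of how a caller steps through the stack with the new
      interface (the unwind_* function names are assumed from the interface the
      series introduces; the loop itself is illustrative):

        struct unwind_state state;
        unsigned long addr;

        /* Walk the current task's stack one frame at a time and print each
         * return address. */
        for (unwind_start(&state, current, NULL, NULL);
             !unwind_done(&state);
             unwind_next_frame(&state)) {
                addr = unwind_get_return_address(&state);
                if (!addr)
                        break;
                printk("  %pS\n", (void *)addr);
        }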
      Suggested-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/6dc2f909c47533d213d0505f0a113e64585bec82.1474045023.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  35. 18 Aug, 2016 1 commit
    • livepatch/x86: apply alternatives and paravirt patches after relocations · d4c3e6e1
      Jessica Yu authored
      Implement arch_klp_init_object_loaded() for x86, which applies
      alternatives/paravirt patches. This fixes the order in which relocations
      and alternatives/paravirt patches are applied.
      
      Previously, if a patch module had alternatives or paravirt patches,
      these were applied first by the module loader, before livepatch could
      apply per-object relocations. The (buggy) sequence of events was:
      
      (1) Load patch module
      (2) Apply alternatives and paravirt patches to patch module
          * Note that these are applied to the new functions in the patch module
      (3) Apply per-object relocations to patch module when target module loads.
          * This clobbers what was written in step 2
      
      This led to crashes and corruption in general, since livepatch would
      overwrite or step on previously applied alternative/paravirt patches.
      The correct sequence of events should be:
      
      (1) Load patch module
      (2) Apply per-object relocations to patch module
      (3) Apply alternatives and paravirt patches to patch module
      
      This is fixed by delaying paravirt/alternatives patching until after
      relocations are applied. Any .altinstructions or .parainstructions
      sections are prefixed with ".klp.arch.${objname}" and applied in
      arch_klp_init_object_loaded().
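
      A hedged sketch of the deferred patching step (simplified; section lookup
      by the ".klp.arch.${objname}" prefix and error handling are omitted, and
      the wrapper name is made up for the example):

        static void klp_apply_arch_sections(struct paravirt_patch_site *pv_start,
                                            struct paravirt_patch_site *pv_end,
                                            struct alt_instr *alt_start,
                                            struct alt_instr *alt_end)
        {
                /* Runs only after the per-object relocations have been
                 * written, so nothing clobbers the result. */
                apply_paravirt(pv_start, pv_end);
                apply_alternatives(alt_start, alt_end);
        }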
      Signed-off-by: Jessica Yu <jeyu@redhat.com>
      Acked-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
  36. 08 Aug, 2016 1 commit
  37. 22 Apr, 2016 1 commit
    • x86/init: Rename EBDA code file · f2d85299
      Luis R. Rodriguez authored
      This makes it clearer what this is.
      Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: andrew.cooper3@citrix.com
      Cc: andriy.shevchenko@linux.intel.com
      Cc: bigeasy@linutronix.de
      Cc: boris.ostrovsky@oracle.com
      Cc: david.vrabel@citrix.com
      Cc: ffainelli@freebox.fr
      Cc: george.dunlap@citrix.com
      Cc: glin@suse.com
      Cc: jgross@suse.com
      Cc: jlee@suse.com
      Cc: josh@joshtriplett.org
      Cc: julien.grall@linaro.org
      Cc: konrad.wilk@oracle.com
      Cc: kozerkov@parallels.com
      Cc: lenb@kernel.org
      Cc: lguest@lists.ozlabs.org
      Cc: linux-acpi@vger.kernel.org
      Cc: lv.zheng@intel.com
      Cc: matt@codeblueprint.co.uk
      Cc: mbizon@freebox.fr
      Cc: rjw@rjwysocki.net
      Cc: robert.moore@intel.com
      Cc: rusty@rustcorp.com.au
      Cc: tiwai@suse.de
      Cc: toshi.kani@hp.com
      Cc: xen-devel@lists.xensource.com
      Link: http://lkml.kernel.org/r/1460592286-300-14-git-send-email-mcgrof@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>