1. 03 Dec, 2020 23 commits
  2. 26 Nov, 2020 7 commits
  3. 25 Nov, 2020 1 commit
  4. 23 Nov, 2020 2 commits
    • powerpc/64s: Fix allnoconfig build since uaccess flush · b6b79dd5
      Stephen Rothwell authored
      Using DECLARE_STATIC_KEY_FALSE needs linux/jump_label.h.

      Otherwise the build fails with e.g.:
      
        arch/powerpc/include/asm/book3s/64/kup-radix.h:66:1: warning: data definition has no type or storage class
           66 | DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
      
      Fixes: 9a32a7e7 ("powerpc/64s: flush L1D after user accesses")
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      [mpe: Massage change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20201123184016.693fe464@canb.auug.org.au
      b6b79dd5
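
      A minimal sketch of what the fix restores (kernel-header context, not a
      standalone program): DECLARE_STATIC_KEY_FALSE() is provided by
      <linux/jump_label.h>, so kup-radix.h has to include that header itself
      rather than rely on it arriving through some other include chain.

        #include <linux/jump_label.h>

        /*
         * The static key added by the uaccess-flush patch. Without the
         * include above, a config that pulls in kup-radix.h on its own
         * sees DECLARE_STATIC_KEY_FALSE as an unknown construct, which is
         * the allnoconfig warning quoted in the commit message.
         */
        DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);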
    • Merge tag 'powerpc-cve-2020-4788' into fixes · 962f8e64
      Michael Ellerman authored
      From Daniel's cover letter:
      
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern.
      
      This patch series flushes the L1 cache on kernel entry (patch 2) and after the
      kernel performs any user accesses (patch 3). It also adds a self-test and
      performs some related cleanups.
      962f8e64
  5. 19 Nov, 2020 7 commits
    • powerpc/64s: rename pnv|pseries_setup_rfi_flush to _setup_security_mitigations · da631f7f
      Daniel Axtens authored
      pseries|pnv_setup_rfi_flush already does the count cache flush setup, and
      we just added entry and uaccess flushes. So the name is not very accurate
      any more. In both platforms we then also immediately set up the STF flush.
      
      Rename them to _setup_security_mitigations and fold the STF flush in.
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      da631f7f
    • selftests/powerpc: refactor entry and rfi_flush tests · 0d239f3b
      Daniel Axtens authored
      For simplicity in backporting, the original entry_flush test contained
      a lot of duplicated code from the rfi_flush test. De-duplicate that code.
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      0d239f3b
    • selftests/powerpc: entry flush test · 89a83a0c
      Daniel Axtens authored
      Add a test modelled on the RFI flush test which counts the number
      of L1D misses doing a simple syscall with the entry flush on and off.
      
      For simplicity of backporting, this test duplicates a lot of code from
      rfi_flush. We clean that up in the next patch.
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      89a83a0c
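
      The test described above boils down to counting L1D misses around a loop
      of cheap syscalls with the mitigation toggled. A rough, self-contained
      sketch of that measurement (not the in-tree selftest; the generic cache
      event and the debugfs path mentioned in the comments are assumptions for
      illustration):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/syscall.h>
        #include <linux/perf_event.h>

        /* Thin wrapper; glibc does not export a perf_event_open() symbol. */
        static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                    int cpu, int group_fd, unsigned long flags)
        {
                return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
        }

        static long long l1d_misses_over_syscalls(unsigned long iterations)
        {
                struct perf_event_attr attr;
                long long count = 0;
                int fd;

                memset(&attr, 0, sizeof(attr));
                attr.size = sizeof(attr);
                attr.type = PERF_TYPE_HW_CACHE;
                /* L1D read misses; kernel-side misses are included, which is the point. */
                attr.config = PERF_COUNT_HW_CACHE_L1D |
                              (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                              (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
                attr.disabled = 1;

                fd = perf_event_open(&attr, 0, -1, -1, 0);
                if (fd < 0) {
                        perror("perf_event_open");
                        exit(1);
                }

                ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
                while (iterations--)
                        getppid();      /* cheap syscall: exercise the kernel entry path */
                ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

                if (read(fd, &count, sizeof(count)) != sizeof(count))
                        exit(1);
                close(fd);
                return count;
        }

        int main(void)
        {
                /* Between runs, toggle the (assumed) debugfs switch, e.g.
                 * /sys/kernel/debug/powerpc/entry_flush, and compare counts. */
                printf("L1D read misses over 1M syscalls: %lld\n",
                       l1d_misses_over_syscalls(1000000));
                return 0;
        }

      With the entry flush enabled the miss count over the same loop should be
      markedly higher, which is the signal the selftest looks for.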
    • powerpc: Only include kup-radix.h for 64-bit Book3S · 178d52c6
      Michael Ellerman authored
      In kup.h we currently include kup-radix.h for all 64-bit builds, which
      includes Book3S and Book3E. The latter doesn't make sense, since Book3E
      never uses the Radix MMU.
      
      This has worked up until now, but almost by accident, and the recent
      uaccess flush changes introduced a build breakage on Book3E because of
      the bad structure of the code.
      
      So disentangle things so that we only use kup-radix.h for Book3S. This
      requires some more stubs in kup.h and fixing an include in
      syscall_64.c.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      178d52c6
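
      Structurally, the disentangling amounts to gating the include on the
      Book3S-64 config symbol and giving other 64-bit platforms empty inline
      stubs. A rough sketch of that shape (the stub name is an assumption for
      illustration, not the exact in-tree set of stubs):

        #ifdef CONFIG_PPC_BOOK3S_64
        #include <asm/book3s/64/kup-radix.h>
        #else
        /* Book3E never uses the Radix MMU; an empty stub keeps common
         * callers building without pulling in Radix-only code. */
        static inline void do_uaccess_flush(void) { }
        #endif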
    • powerpc/64s: flush L1D after user accesses · 9a32a7e7
      Nicholas Piggin authored
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern. This patch flushes the L1 cache after user accesses.
      
      This is part of the fix for CVE-2020-4788.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      9a32a7e7
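
      Roughly, the shape of the mitigation (assumed helper name and a
      simplified signature, not the exact in-tree code): the flush is hooked
      where the kernel closes a user-access window, and it sits behind a
      static key so it compiles down to a nop when the mitigation is disabled.

        #include <linux/jump_label.h>

        DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
        void do_uaccess_flush(void);    /* illustrative name for the flush routine */

        /* Called when a user-access window is closed (simplified signature). */
        static inline void prevent_user_access(void)
        {
                /* ... lock out further user accesses (KUAP) ... */
                if (static_branch_unlikely(&uaccess_flush_key))
                        do_uaccess_flush();
        }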
    • powerpc/64s: flush L1D on kernel entry · f7964378
      Nicholas Piggin authored
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern. This patch flushes the L1 cache on kernel entry.
      
      This is part of the fix for CVE-2020-4788.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f7964378
    • selftests/powerpc: rfi_flush: disable entry flush if present · fcb48454
      Russell Currey authored
      We are about to add an entry flush. The rfi (exit) flush test measures
      the number of L1D misses over a syscall with the RFI flush enabled and
      disabled. But if the entry flush is also enabled, the effect of enabling
      and disabling the RFI flush is masked.

      If there is a debugfs entry for the entry flush, disable it for the
      duration of the RFI flush test and restore it afterwards.
      Reported-by: Spoorthy S <spoorts2@in.ibm.com>
      Signed-off-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      fcb48454
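
      The save/disable/restore dance described above can be sketched as below
      (not the in-tree selftest code; the debugfs path is an assumption and
      error handling is minimal):

        #include <stdio.h>

        #define ENTRY_FLUSH_DBG "/sys/kernel/debug/powerpc/entry_flush"

        /* Read the current 0/1 setting; returns -1 if the entry doesn't exist. */
        static int read_entry_flush(void)
        {
                FILE *f = fopen(ENTRY_FLUSH_DBG, "r");
                int val = -1;

                if (!f)
                        return -1;
                if (fscanf(f, "%d", &val) != 1)
                        val = -1;
                fclose(f);
                return val;
        }

        static void write_entry_flush(int val)
        {
                FILE *f = fopen(ENTRY_FLUSH_DBG, "w");

                if (!f)
                        return;
                fprintf(f, "%d", val);
                fclose(f);
        }

        int main(void)
        {
                int saved = read_entry_flush();

                if (saved != -1)
                        write_entry_flush(0);   /* keep the entry flush out of the measurement */

                /* ... run the RFI flush on/off measurements here ... */

                if (saved != -1)
                        write_entry_flush(saved);       /* restore the original setting */
                return 0;
        }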