1. 03 Oct, 2018 11 commits
  2. 19 Sep, 2018 29 commits
    • powerpc/fadump: re-register firmware-assisted dump if already registered · 0823c68b
      Hari Bathini authored
      Firmware-Assisted Dump (FADump) needs to be registered again after any
      memory hot add/remove operation to update the crash memory ranges. But
      currently, the kernel returns '-EEXIST' if we try to register without
      unregistering it first. This could expose the system to race conditions
      while unregistering and registering FADump from userspace during udev
      events. Spare the userspace of this and let it be taken care of in the
      kernel space for a simpler interface.
      
      With this change, running 'echo 1 > /sys/kernel/fadump_registered'
      re-registers (unregisters and registers) FADump if it was already
      registered.
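      Conceptually, the sysfs store path now behaves like the sketch below;
      the helper names (fadump_is_registered(), fadump_unregister(),
      fadump_register()) are illustrative here, not necessarily the exact
      symbols in fadump.c:

        /* "echo 1" while already registered: re-register instead of -EEXIST */
        static int fadump_reregister(void)
        {
                if (fadump_is_registered())
                        fadump_unregister();    /* drop the stale crash memory ranges */
                return fadump_register();       /* register with the updated ranges */
        }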
      Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
      Acked-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      0823c68b
    • powerpc/pseries: Remove VLA from lparcfg_write() · 74422e2b
      Suraj Jitindar Singh authored
      In lparcfg_write() we hard-code kbuf_sz and then use it as the length
      of kbuf, creating a variable length array. Since the length is
      hard-coded anyway, define the array with that constant length and drop
      kbuf_sz, removing the variable length array.
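      The general shape of the change, as a hedged sketch rather than the
      literal lparcfg_write() diff:

        /* before: the length comes from a variable, so kbuf is a VLA */
        size_t kbuf_sz = 64;
        char kbuf[kbuf_sz];

        /* after: the length is a compile-time constant, no VLA */
        char kbuf[64];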
      Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      Reviewed-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      74422e2b
    • powerpc/prom: Remove VLA in prom_check_platform_support() · ab912399
      Suraj Jitindar Singh authored
      In prom_check_platform_support() we retrieve and parse the
      "ibm,arch-vec-5-platform-support" property of the chosen node.
      Currently a variable length array is used; to avoid this, use an
      array of constant length 8 instead.
      
      This property is used to indicate the supported options of vector 5
      bytes 23-26 of the ibm,architecture.vec node. Each of these options
      is a pair of bytes, thus for 4 options we have a max length of 8 bytes.
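      A hedged sketch of the fixed-size lookup; the variable names here are
      illustrative, and prom_getprop() is assumed to be the usual prom_init
      helper:

        /* 4 options x 2 bytes each = 8 bytes maximum for the property */
        u8 prop[8];
        int len;

        len = prom_getprop(prom.chosen, "ibm,arch-vec-5-platform-support",
                           prop, sizeof(prop));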
      Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      Reviewed-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      ab912399
    • powerpc: Fix duplicate const clang warning in user access code · e00d93ac
      Anton Blanchard authored
      This re-applies commit b91c1e3e ("powerpc: Fix duplicate const
      clang warning in user access code") (Jun 2015) which was undone in
      commits:
        f2ca8090 ("powerpc/sparse: Constify the address pointer in __get_user_nosleep()") (Feb 2017)
        d466f6c5 ("powerpc/sparse: Constify the address pointer in __get_user_nocheck()") (Feb 2017)
        f84ed59a ("powerpc/sparse: Constify the address pointer in __get_user_check()") (Feb 2017)
      
      We see a large number of duplicate const errors in the user access
      code when building with llvm/clang:
      
        include/linux/pagemap.h:576:8: warning: duplicate 'const' declaration specifier [-Wduplicate-decl-specifier]
              ret = __get_user(c, uaddr);
      
      The problem is we are doing const __typeof__(*(ptr)), which will hit
      the warning if ptr is marked const.
      
      Removing const does not seem to have any effect on GCC code
      generation.
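      A standalone illustration of where the duplicate qualifier comes from
      (compile with clang and -Wduplicate-decl-specifier to see it):

        const char *uaddr = "x";

        /* __get_user() expands to roughly this shape: */
        const __typeof__(*(uaddr)) c = 0;  /* *(uaddr) is already 'const char',
                                              so this declares 'const const char' */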
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      e00d93ac
    • powerpc/boot: Ensure _zimage_start is a weak symbol · ee9d21b3
      Joel Stanley authored
      When building with clang crt0's _zimage_start is not marked weak, which
      breaks the build when linking the kernel image:
      
       $ objdump -t arch/powerpc/boot/crt0.o |grep _zimage_start$
       0000000000000058 g       .text  0000000000000000 _zimage_start
      
       ld: arch/powerpc/boot/wrapper.a(crt0.o): in function '_zimage_start':
       (.text+0x58): multiple definition of '_zimage_start';
       arch/powerpc/boot/pseries-head.o:(.text+0x0): first defined here
      
      Clang requires the .weak directive to appear after the symbol is
      declared. The binutils manual says:
      
       This directive sets the weak attribute on the comma separated list of
       symbol names. If the symbols do not already exist, they will be
       created.
      
      So it appears this is different with clang. The only reference I could
      see for this was an OpenBSD mailing list post[1].
      
      Changing it to be after the declaration fixes building with Clang, and
      still works with GCC.
      
       $ objdump -t arch/powerpc/boot/crt0.o |grep _zimage_start$
       0000000000000058  w      .text	0000000000000000 _zimage_start
      
      Reported to clang as https://bugs.llvm.org/show_bug.cgi?id=38921
      
      [1] https://groups.google.com/forum/#!topic/fa.openbsd.tech/PAgKKen2YCY
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      ee9d21b3
    • powerpc/configs: Update skiroot defconfig · cbc39809
      Joel Stanley authored
      Disable new features from recent releases, and clean out some other
      unused options:
      
        - Enable EXPERT, so we can disable some things
        - Disable non-powerpc BPF decoders
        - Disable TASKSTATS
        - Disable unused syscalls
        - Set more things to be modules
        - Turn off unused network vendors
        - PPC_OF_BOOT_TRAMPOLINE and FB_OF are unused on powernv
        - Drop unused Radeon and Matrox GPU drivers
        - IPV6 support landed in petitboot
        - Bringup related command line powersave=off dropped, switch to quiet
      
      Set CONFIG_I2C_CHARDEV=y as the module is not loaded automatically, and
      without this i2cget etc. will fail in the skiroot environment.
      
      This defconfig gets us build coverage of KERNEL_XZ, which was broken in
      the 4.19 merge window for powerpc.
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      cbc39809
    • powerpc/pseries: Disable CPU hotplug across migrations · 85a88cab
      Nathan Fontenot authored
      When performing partition migrations all present CPUs must be online
      as all present CPUs must make the H_JOIN call as part of the migration
      process. Once all present CPUs make the H_JOIN call, one CPU is returned
      to make the rtas call to perform the migration to the destination system.
      
      During testing of migration and changing the SMT state we have found
      instances where CPUs are offlined, as part of the SMT state change,
      before they make the H_JOIN call. This results in a hung system where
      every CPU is either in H_JOIN or offline.
      
      To prevent this, this patch disables CPU hotplug during the migration
      process.
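      The shape of the fix is roughly the following hedged sketch (the real
      change sits in the rtas_ibm_suspend_me() path; cpu_hotplug_disable()
      and cpu_hotplug_enable() are the existing kernel helpers):

        cpu_hotplug_disable();  /* no CPU can be offlined from here on */

        /* ... bring all present CPUs online, each calls H_JOIN, and the
         * one remaining CPU issues the ibm,suspend-me RTAS call ... */

        cpu_hotplug_enable();   /* restore normal CPU hotplug behaviour */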
      Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Reviewed-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      85a88cab
    • powerpc/pseries: Remove unneeded uses of dlpar work queue · fd12527a
      Nathan Fontenot authored
      There are three instances in which dlpar hotplug events are invoked;
      handling a hotplug interrupt (in a kvm guest), handling a dlpar
      request through sysfs, and updating LMB affinity when handling a
      PRRN event. Only when handling a hotplug interrupt do we have to put
      the work on a workqueue; the other cases can handle the dlpar request
      directly.
      
      This patch exports the handle_dlpar_errorlog() function so that
      dlpar hotplug events can be handled directly and updates the two
      instances mentioned above to use the direct invocation.
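      Schematically (a hedged sketch; the workqueue and variable names here
      are illustrative, only handle_dlpar_errorlog() is named by the patch):

        /* hotplug interrupt context: still deferred to a workqueue */
        queue_work(pseries_hp_wq, &work->work);

        /* sysfs and PRRN paths: invoke the handler directly instead */
        rc = handle_dlpar_errorlog(hp_elog);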
      Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      fd12527a
    • powerpc/pseries: Remove prrn_work workqueue · cd24e457
      Nathan Fontenot authored
      When a PRRN event is received we are already running in a worker
      thread. Instead of spawning off another worker thread on the prrn_work
      workqueue to handle the PRRN event we can just call the PRRN handler
      routine directly.
      
      With this update we can also pass the scope variable for the PRRN
      event directly to the handler instead of it being a global variable.
      
      This patch fixes the following oops message we are seeing in PRRN testing:
      
        Oops: Bad kernel stack pointer, sig: 6 [#1]
        SMP NR_CPUS=2048 NUMA pSeries
        Modules linked in: nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace sunrpc fscache binfmt_misc reiserfs vfat fat rpadlpar_io(X) rpaphp(X) tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag af_packet xfs libcrc32c dm_service_time ibmveth(X) ses enclosure scsi_transport_sas rtc_generic btrfs xor raid6_pq sd_mod ibmvscsi(X) scsi_transport_srp ipr(X) libata sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod autofs4
        Supported: Yes, External                                                     54
        CPU: 7 PID: 18967 Comm: kworker/u96:0 Tainted: G                 X 4.4.126-94.22-default #1
        Workqueue: pseries hotplug workque pseries_hp_work_fn
        task: c000000775367790 ti: c00000001ebd4000 task.ti: c00000070d140000
        NIP: 0000000000000000 LR: 000000001fb3d050 CTR: 0000000000000000
        REGS: c00000001ebd7d40 TRAP: 0700   Tainted: G                 X  (4.4.126-94.22-default)
        MSR: 8000000102081000 <41,VEC,ME5  CR: 28000002  XER: 20040018   4
        CFAR: 000000001fb3d084 40 419   1                                3
        GPR00: 000000000000000040000000000010007 000000001ffff400 000000041fffe200
        GPR04: 000000000000008050000000000000000 000000001fb15fa8 0000000500000500
        GPR08: 000000000001f40040000000000000001 0000000000000000 000005:5200040002
        GPR12: 00000000000000005c000000007a05400 c0000000000e89f8 000000001ed9f668
        GPR16: 000000001fbeff944000000001fbeff94 000000001fb545e4 0000006000000060
        GPR20: ffffffffffffffff4ffffffffffffffff 0000000000000000 0000000000000000
        GPR24: 00000000000000005400000001fb3c000 0000000000000000 000000001fb1b040
        GPR28: 000000001fb240004000000001fb440d8 0000000000000008 0000000000000000
        NIP [0000000000000000] 5         (null)
        LR [000000001fb3d050] 031fb3d050
        Call Trace:            4
        Instruction dump:      4                                       5:47 12    2
        XXXXXXXX XXXXXXXX XXXXX4XX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
        XXXXXXXX XXXXXXXX XXXXX5XX XXXXXXXX 60000000 60000000 60000000 60000000
        ---[ end trace aa5627b04a7d9d6b ]---
        NMI watchdog: BUG: soft lockup - CPU#27 stuck for 23s! [kworker/27:0:13903]
        Modules linked in: nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace sunrpc fscache binfmt_misc reiserfs vfat fat rpadlpar_io(X) rpaphp(X) tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag af_packet xfs libcrc32c dm_service_time ibmveth(X) ses enclosure scsi_transport_sas rtc_generic btrfs xor raid6_pq sd_mod ibmvscsi(X) scsi_transport_srp ipr(X) libata sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod autofs4
        Supported: Yes, External
        CPU: 27 PID: 13903 Comm: kworker/27:0 Tainted: G      D          X 4.4.126-94.22-default #1
        Workqueue: events prrn_work_fn
        task: c000000747cfa390 ti: c00000074712c000 task.ti: c00000074712c000
        NIP: c0000000008002a8 LR: c000000000090770 CTR: 000000000032e088
        REGS: c00000074712f7b0 TRAP: 0901   Tainted: G      D          X  (4.4.126-94.22-default)
        MSR: 8000000100009033 <SF,EE,ME,IR,DR,RI,LE>  CR: 22482044  XER: 20040000
        CFAR: c0000000008002c4 SOFTE: 1
        GPR00: c000000000090770 c00000074712fa30 c000000000f09800 c000000000fa1928 6:02
        GPR04: c000000775f5e000 fffffffffffffffe 0000000000000001 c000000000f42db8
        GPR08: 0000000000000001 0000000080000007 0000000000000000 0000000000000000
        GPR12: 8006210083180000 c000000007a14400
        NIP [c0000000008002a8] _raw_spin_lock+0x68/0xd0
        LR [c000000000090770] mobility_rtas_call+0x50/0x100
        Call Trace:            59                                        5
        [c00000074712fa60] [c000000000090770] mobility_rtas_call+0x50/0x100
        [c00000074712faf0] [c000000000090b08] pseries_devicetree_update+0xf8/0x530
        [c00000074712fc20] [c000000000031ba4] prrn_work_fn+0x34/0x50
        [c00000074712fc40] [c0000000000e0390] process_one_work+0x1a0/0x4e0
        [c00000074712fcd0] [c0000000000e0870] worker_thread+0x1a0/0x6105:57       2
        [c00000074712fd80] [c0000000000e8b18] kthread+0x128/0x150
        [c00000074712fe30] [c0000000000096f8] ret_from_kernel_thread+0x5c/0x64
        Instruction dump:
        2c090000 40c20010 7d40192d 40c2fff0 7c2004ac 2fa90000 40de0018 5:540030   3
        e8010010 ebe1fff8 7c0803a6 4e800020 <7c210b78> e92d0000 89290009 792affe3
      Signed-off-by: John Allen <jallen@linux.ibm.com>
      Signed-off-by: Haren Myneni <haren@us.ibm.com>
      Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      cd24e457
    • powerpc/pseries/memory-hotplug: Only update DT once per memory DLPAR request · 063b8b12
      Nathan Fontenot authored
      The updates to powerpc numa and memory hotplug code now use the
      in-kernel LMB array instead of the device tree. This change allows the
      pseries memory DLPAR code to only update the device tree once after
      successfully handling a DLPAR request.
      
      Prior to the in-kernel LMB array, the numa code looked up the affinity
      for memory being added in the device tree; the code now looks this up
      in the LMB array. This change means the memory hotplug code can just
      update the affinity for an LMB in the LMB array instead of updating
      the device tree.
      
      This also saves kernel memory. When updating the device tree, old
      properties are never freed since there is no usecount on properties.
      This behavior leads to a new copy of the property being allocated
      every time an LMB is added or removed (i.e. a
      request to add 100 LMBs creates 100 new copies of the property). With
      this update only a single new property is created when a DLPAR request
      completes successfully.
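      Conceptually the request handling becomes something like the sketch
      below (the loop and variable names are illustrative; dlpar_add_lmb()
      and drmem_update_dt() are assumed helpers in the pseries/drmem code):

        /* illustrative loop: walk the LMBs covered by the request */
        for (i = 0; i < lmbs_to_add && !rc; i++)
                rc = dlpar_add_lmb(&lmbs[i]);   /* affinity comes from the LMB array */

        if (!rc)
                rc = drmem_update_dt();         /* one device tree update per request */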
      Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      063b8b12
    • powerpc: remove old GCC version checks · f2910f0e
      Nicholas Piggin authored
      GCC 4.6 is the minimum supported now.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Reviewed-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f2910f0e
    • powerpc/64s/hash: Add a SLB preload cache · 89ca4e12
      Nicholas Piggin authored
      When switching processes, currently all user SLBEs are cleared, and a
      few (exec_base, pc, and stack) are preloaded. In trivial testing with
      small apps, this tends to miss the heap and low 256MB segments, and it
      will also miss commonly accessed segments on large memory workloads.
      
      Add a simple round-robin preload cache that just inserts the last SLB
      miss into the head of the cache and preloads those at context switch
      time. Every 256 context switches, the oldest entry is removed from the
      cache, shrinking it and requiring fewer slbmte instructions for
      entries that go unused.
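      The cache itself is just a small FIFO of ESIDs; a self-contained
      sketch of the idea (the size and field names are illustrative, not
      the exact thread_info layout):

        #define SLB_PRELOAD_NR 16

        struct slb_preload {
                unsigned char nr;       /* entries currently cached */
                unsigned char tail;     /* index of the oldest entry */
                unsigned long esid[SLB_PRELOAD_NR];
        };

        /* on an SLB miss: remember the ESID so it is preloaded next switch */
        static void preload_add(struct slb_preload *p, unsigned long esid)
        {
                unsigned char idx = (p->tail + p->nr) % SLB_PRELOAD_NR;

                p->esid[idx] = esid;
                if (p->nr < SLB_PRELOAD_NR)
                        p->nr++;
                else
                        p->tail = (p->tail + 1) % SLB_PRELOAD_NR; /* overwrite oldest */
        }

        /* every 256 context switches: age out the oldest entry */
        static void preload_age(struct slb_preload *p)
        {
                if (p->nr) {
                        p->nr--;
                        p->tail = (p->tail + 1) % SLB_PRELOAD_NR;
                }
        }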
      
      Much more could go into this, including into the SLB entry reclaim
      side to track some LRU information etc, which would require a study of
      large memory workloads. But this is a simple thing we can do now that
      is an obvious win for common workloads.
      
      With the full series, process switching speed on the context_switch
      benchmark on POWER9/hash (with kernel speculation security measures
      disabled) increases from 140K/s to 178K/s (27%).
      
      POWER8 does not change much (within 1%); it's unclear why it does not
      see a big gain like POWER9.
      
      Booting to busybox init with 256MB segments has SLB misses go down
      from 945 to 69, and with 1T segments 900 to 21. These could almost all
      be eliminated by preloading a bit more carefully with ELF binary
      loading.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      89ca4e12
    • powerpc/64s/hash: provide arch_setup_exec hooks for hash slice setup · 2e162674
      Nicholas Piggin authored
      This will be used by the SLB code in the next patch, but for now this
      sets the slb_addr_limit to the correct size for 32-bit tasks.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      2e162674
    • powerpc/64s/hash: SLB allocation status bitmaps · 655deecf
      Nicholas Piggin authored
      Add 32-entry bitmaps to track the allocation status of the first 32
      SLB entries, and whether they are user or kernel entries. These are
      used to allocate free SLB entries first, before resorting to the round
      robin allocator.
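      A self-contained sketch of the allocation policy (names are
      illustrative; the real bitmaps live in per-CPU state):

        #include <stdint.h>

        struct slb_alloc {
                uint32_t used_bitmap;   /* bit i set => SLB entry i is valid        */
                uint32_t kern_bitmap;   /* bit i set => entry i holds a kernel SLBE */
                unsigned int rr;        /* round-robin cursor for the fallback path */
        };

        static unsigned int slb_pick_slot(struct slb_alloc *s, int kernel)
        {
                unsigned int slot;

                if (s->used_bitmap != 0xffffffffu) {
                        /* a free entry exists among the first 32: take the lowest */
                        slot = (unsigned int)__builtin_ctz(~s->used_bitmap);
                } else {
                        /* all tracked entries in use: fall back to round robin */
                        slot = s->rr;
                        s->rr = (s->rr + 1) % 32;
                }

                s->used_bitmap |= 1u << slot;
                if (kernel)
                        s->kern_bitmap |= 1u << slot;
                else
                        s->kern_bitmap &= ~(1u << slot);
                return slot;
        }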
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      655deecf
    • powerpc/64s/hash: remove user SLB data from the paca · 8fed04d0
      Nicholas Piggin authored
      User SLB mapping data is copied into the PACA from the mm->context so
      it can be accessed by the SLB miss handlers.
      
      After the C conversion, SLB miss handlers now run with relocation on,
      and user SLB misses are able to take recursive kernel SLB misses, so
      the user SLB mapping data can be removed from the paca and accessed
      directly.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      8fed04d0
    • powerpc/64s/hash: convert SLB miss handlers to C · 5e46e29e
      Nicholas Piggin authored
      This patch moves SLB miss handlers completely to C, using the standard
      exception handler macros to set up the stack and branch to C.
      
      This can be done because the segment containing the kernel stack is
      always bolted, so accessing it with relocation on will not cause an
      SLB exception.
      
      Arbitrary kernel memory may not be accessed when handling kernel space
      SLB misses, so care should be taken there. However user SLB misses can
      access any kernel memory, which can be used to move some fields out of
      the paca (in later patches).
      
      User SLB misses could quite easily reconcile IRQs and set up a first
      class kernel environment and exit via ret_from_except, however that
      doesn't seem to be necessary at the moment, so we only do that if a
      bad fault is encountered.
      
      [ Credit to Aneesh for bug fixes, error checks, and improvements to bad
        address handling, etc ]
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      
      Since RFC:
      - Added MSR[RI] handling
      - Fixed up a register loss bug exposed by irq tracing (Aneesh)
      - Reject misses outside the defined kernel regions (Aneesh)
      - Added several more sanity checks and error handling (Aneesh), we may
        look at consolidating these tests and tightening up the code but for
        a first pass we decided it's better to check carefully.
      
      Since v1:
      - Fixed SLB cache corruption (Aneesh)
      - Fixed untidy SLBE allocation "leak" in get_vsid error case
      - Now survives some stress testing on real hardware
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      5e46e29e
    • powerpc/64s/hash: Use POWER9 SLBIA IH=3 variant in switch_slb · 82d8f4c2
      Nicholas Piggin authored
      POWER9 introduces SLBIA IH=3, which invalidates all SLB entries and
      associated lookaside information that have a class value of 1, which
      Linux assigns to user addresses. This matches what switch_slb wants,
      and allows a simple fast implementation that avoids the slb_cache
      complexity.
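      In kernel C this boils down to a single instruction on the context
      switch path; a hedged sketch (PPC_SLBIA() is the existing opcode
      macro from asm/ppc-opcode.h, the surrounding code is simplified):

        if (cpu_has_feature(CPU_FTR_ARCH_300)) {
                /* POWER9: one slbia drops every class-1 (user) SLBE and the
                 * associated lookaside state */
                asm volatile(PPC_SLBIA(3) : : : "memory");
        } else {
                /* pre-POWER9: fall back to the slb_cache + slbie loop */
        }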
      
      As a side-effect, the POWER5 < DD2.1 SLB invalidation workaround is
      also avoided on POWER9.
      
      Process context switching rate is improved by about 2.2% for a small
      process that hits the slb cache, which is the best case for the
      current code.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      82d8f4c2
    • powerpc/64s/hash: Use POWER6 SLBIA IH=1 variant in switch_slb · 5141c182
      Nicholas Piggin authored
      The SLBIA IH=1 hint will remove all non-zero SLBEs, but only
      invalidate ERAT entries associated with a class value of 1 (which
      Linux assigns to user addresses), on processors that support the
      hint (e.g., POWER6 and newer).
      
      This prevents kernel ERAT entries from being invalidated when
      context switching (if the thread faulted in more than 8 user SLBEs).
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      5141c182
    • powerpc/64s/hash: remove the vmalloc segment from the bolted SLB · 85376e2a
      Nicholas Piggin authored
      Remove the vmalloc segment from bolted SLBEs. This is not required to
      be bolted, and seems like it was added to help pre-load the SLB on
      context switch. However there are now other segments like the vmemmap
      segment and non-zero node memory that often take misses after a context
      switch, so it is better to solve this in a more general way.
      
      A subsequent change will track free SLB entries and use those rather
      than round-robin overwriting valid entries, which makes it far less
      likely for kernel SLBEs to be evicted after they are installed.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      85376e2a
    • powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed · 8b92887c
      Nicholas Piggin authored
      The POWER5 < DD2.1 issue is that slbie needs to be issued more than
      once. It came in with this change:
      
      ChangeSet@1.1608, 2004-04-29 07:12:31-07:00, david@gibson.dropbear.id.au
        [PATCH] POWER5 erratum workaround
      
        Early POWER5 revisions (<DD2.1) have a problem requiring slbie
        instructions to be repeated under some circumstances.  The patch below
        adds a workaround (patch made by Anton Blanchard).
      
      (aka. 3e4520f7605243abf66a7ccd3d2e49e48e8c0483 in the full history tree)
      
      The extra slbie in switch_slb is done even for the case where slbia is
      called (slb_flush_and_rebolt). I don't believe that is required
      because there are other slb_flush_and_rebolt callers which do not
      issue the workaround slbie, which would be broken if it was required.
      
      It also seems to be fine inside the isync with the first slbie, as it
      is in the kernel stack switch code.
      
      So move this workaround to where it is required. This is not much of
      an optimisation because this is the fast path, but it makes the code
      more understandable and neater.
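      Where the workaround now lives, the shape is roughly the following
      hedged sketch (the predicate name is illustrative, not a real CPU
      feature flag):

        asm volatile("slbie %0" : : "r" (slbie_data));

        /* POWER5 < DD2.1 erratum: the slbie may need to be repeated */
        if (p5_needs_extra_slbie)
                asm volatile("slbie %0" : : "r" (slbie_data));

        asm volatile("isync" : : : "memory");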
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Retain slbie_data initialisation to avoid compiler warning]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      8b92887c
    • powerpc/64s/hash: avoid the POWER5 < DD2.1 slb invalidate workaround on POWER8/9 · 505ea82e
      Nicholas Piggin authored
      I only have POWER8/9 to test, so just remove it for those.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      505ea82e
    • powerpc/64s/hash: Fix stab_rr off by one initialization · 09b4438d
      Nicholas Piggin authored
      This causes SLB allocation to start 1 beyond the start of the SLB.
      There is no real problem because after it wraps it starts behaving
      properly; it's just surprising to see when looking at SLB traces.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      09b4438d
    • powernv/pseries: consolidate code for mce early handling. · db7d31ac
      Mahesh Salgaonkar authored
      Now that other platforms also implement a real mode MCE handler,
      let's consolidate the code by sharing the existing powernv machine
      check early code. Rename machine_check_powernv_early to
      machine_check_common_early and reuse the code.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      db7d31ac
    • powerpc/pseries: Dump the SLB contents on SLB MCE errors. · c6d15258
      Mahesh Salgaonkar authored
      If we get a machine check exception due to SLB errors, dump the
      current SLB contents, which will be very helpful in debugging the
      root cause of the SLB errors. Introduce a per-cpu buffer to hold the
      faulty SLB entries. The real mode MCE handler saves the old SLB
      contents into this buffer, accessible through the paca, and prints
      them out later in virtual mode.
      
      With this patch the console will log SLB contents like below on SLB MCE
      errors:
      
      [  507.297236] SLB contents of cpu 0x1
      [  507.297237] Last SLB entry inserted at slot 16
      [  507.297238] 00 c000000008000000 400ea1b217000500
      [  507.297239]   1T  ESID=   c00000  VSID=      ea1b217 LLP:100
      [  507.297240] 01 d000000008000000 400d43642f000510
      [  507.297242]   1T  ESID=   d00000  VSID=      d43642f LLP:110
      [  507.297243] 11 f000000008000000 400a86c85f000500
      [  507.297244]   1T  ESID=   f00000  VSID=      a86c85f LLP:100
      [  507.297245] 12 00007f0008000000 4008119624000d90
      [  507.297246]   1T  ESID=       7f  VSID=      8119624 LLP:110
      [  507.297247] 13 0000000018000000 00092885f5150d90
      [  507.297247]  256M ESID=        1  VSID=   92885f5150 LLP:110
      [  507.297248] 14 0000010008000000 4009e7cb50000d90
      [  507.297249]   1T  ESID=        1  VSID=      9e7cb50 LLP:110
      [  507.297250] 15 d000000008000000 400d43642f000510
      [  507.297251]   1T  ESID=   d00000  VSID=      d43642f LLP:110
      [  507.297252] 16 d000000008000000 400d43642f000510
      [  507.297253]   1T  ESID=   d00000  VSID=      d43642f LLP:110
      [  507.297253] ----------------------------------
      [  507.297254] SLB cache ptr value = 3
      [  507.297254] Valid SLB cache entries:
      [  507.297255] 00 EA[0-35]=    7f000
      [  507.297256] 01 EA[0-35]=        1
      [  507.297257] 02 EA[0-35]=     1000
      [  507.297257] Rest of SLB cache entries:
      [  507.297258] 03 EA[0-35]=    7f000
      [  507.297258] 04 EA[0-35]=        1
      [  507.297259] 05 EA[0-35]=     1000
      [  507.297260] 06 EA[0-35]=       12
      [  507.297260] 07 EA[0-35]=    7f000
      Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      c6d15258
    • powerpc/pseries: Display machine check error details. · 8f0b8056
      Mahesh Salgaonkar authored
      Extract the MCE error details from the RTAS extended log and display
      them on the console.
      
      With this patch you should now see mce logs like below:
      
      [  142.371818] Severe Machine check interrupt [Recovered]
      [  142.371822]   NIP [d00000000ca301b8]: init_module+0x1b8/0x338 [bork_kernel]
      [  142.371822]   Initiator: CPU
      [  142.371823]   Error type: SLB [Multihit]
      [  142.371824]     Effective address: d00000000ca70000
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      8f0b8056
    • powerpc/pseries: Flush SLB contents on SLB MCE errors. · a43c1590
      Mahesh Salgaonkar authored
      On pseries, as of today the system crashes if we get a machine check
      exception due to SLB errors. These are soft errors and can be fixed
      by flushing the SLBs, so the kernel can continue to function instead
      of crashing. We do this in real mode before turning on the MMU,
      otherwise we would run into nested machine checks. This patch fetches
      the rtas error log in real mode and flushes the SLBs on SLB/ERAT
      errors.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Michal Suchanek <msuchanek@suse.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a43c1590