    [PATCH] ppc64: SLB rewrite (commit 326f372c)
    Authored by: Andrew Morton
    From: Anton Blanchard <anton@samba.org>
    
    The current SLB handling code has a number of problems:
    
    - We loop trying to find an empty SLB entry before deciding to cast one
      out.  On large working sets this really hurts, since the SLB is always
      full and we end up looping through all 64 entries unnecessarily (see the
      castout sketch after this list).
    
    - During castout we currently invalidate the entry we are replacing.  This
      is to avoid a nasty race where the entry is in the ERAT but not the SLB and
      another cpu does a tlbie that removes the ERAT at a critical point.  If
      this race is fixed, the SLB invalidate at castout can be removed.
    
    - The SLB prefault code doesn't work properly.
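
    A minimal sketch of the castout policy the first point argues for: keep a
    round-robin pointer and overwrite whichever slot it names, rather than
    scanning for a free entry first.  This is one plausible shape for the fix;
    the names below (slb_entry, NUM_SLB_ENTRIES, slb_next_castout) are
    illustrative, not the patch's actual code.

      #include <stdint.h>

      #define NUM_SLB_ENTRIES 64

      /* Software view of one SLB slot, for illustration only. */
      struct slb_entry {
              uint64_t esid;   /* effective segment id + valid bit */
              uint64_t vsid;   /* virtual segment id + flags */
      };

      static struct slb_entry slb_shadow[NUM_SLB_ENTRIES];
      static unsigned int slb_next_castout = 1;   /* round-robin victim pointer */

      /*
       * Pick a victim slot in O(1).  The old code first scanned all 64
       * entries looking for an invalid one, which is wasted work once the
       * SLB is full (the common case for large working sets).  Slot 0 is
       * skipped; it conventionally holds the kernel segment.
       */
      static unsigned int slb_pick_victim(void)
      {
              unsigned int entry = slb_next_castout;

              slb_next_castout = entry + 1;
              if (slb_next_castout >= NUM_SLB_ENTRIES)
                      slb_next_castout = 1;
              return entry;
      }

      static void slb_insert(uint64_t esid, uint64_t vsid)
      {
              unsigned int entry = slb_pick_victim();

              /*
               * This is where the old code also issued an slbie on the victim
               * before loading the new entry; per the second point above,
               * fixing the ERAT race lets that per-castout invalidate go.
               */
              slb_shadow[entry].esid = esid;
              slb_shadow[entry].vsid = vsid;
      }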
    
    The following patch addresses all the above concerns and adds some more
    optimisations:
    
    - feature nop out some segment table only code (see the feature check
      sketch after this list)
    
    - slb invalidate the kernel segment on context switch (avoids having to
      slb invalidate at each castout)
    
    - optimise the flush on context switch: the lazy tlb code avoids it when
      going from userspace to a kernel thread, but it still gets called when
      going from a kernel thread back to userspace.  In many cases we are
      returning to the same userspace task; we now check for this and avoid
      the flush (see the context switch sketch after this list)
    
    - use the optimised POWER4 mtcrf where possible (sketched after this list)
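
    On the feature nop point: the real mechanism is boot-time patching of
    feature sections, so code that is only needed when the CPU uses the
    segment table (and not the SLB) gets overwritten with nops.  The runtime
    check below only illustrates the intent; the feature bit and helper names
    are hypothetical.

      #include <stdbool.h>
      #include <stdint.h>

      #define CPU_FTR_SLB_EXAMPLE 0x1ULL   /* hypothetical feature bit */
      static uint64_t cpu_features;        /* filled in from the CPU spec at boot */

      static bool cpu_has_slb(void)
      {
              return (cpu_features & CPU_FTR_SLB_EXAMPLE) != 0;
      }

      static void flush_segments(void)
      {
              if (cpu_has_slb())
                      return;   /* the real code nops this path out at boot */

              /* ... segment table (STAB) only handling for pre-SLB cpus ... */
      }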
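
    The two context switch items combine into something like the sketch
    below: remember whose user segments are loaded, flush only when the
    incoming task differs, and fold the kernel segment invalidate into that
    one flush.  Function and variable names are illustrative, not the patch's
    actual code.

      struct mm_struct;                       /* opaque address-space handle */

      static struct mm_struct *slb_loaded_mm; /* per-cpu in real life; one cpu here */

      /* Stand-in for the real flush: drop the user entries along with the
       * kernel segment entry, then re-insert the kernel segment.  Doing the
       * kernel invalidate here is what lets each individual castout skip its
       * own slb invalidate. */
      static void slb_flush_and_reload(struct mm_struct *next_mm)
      {
              /* ... slbia / slbmte of the kernel segment ... */
              slb_loaded_mm = next_mm;
      }

      /*
       * Called when switching to a user task.  Lazy tlb already skips the
       * flush on userspace -> kernel thread switches; the check below also
       * skips it on kernel thread -> userspace switches that return to the
       * task whose entries are still in the SLB.
       */
      static void switch_slb(struct mm_struct *next_mm)
      {
              if (next_mm == slb_loaded_mm)
                      return;                 /* same address space, nothing to do */

              slb_flush_and_reload(next_mm);
      }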
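
    The mtcrf item refers to the single-field form of the instruction: with
    exactly one bit set in the FXM mask, POWER4 executes it much faster than
    a move to all eight CR fields.  A ppc64-only inline asm illustration
    (builds with a ppc64 gcc target):

      /* Load CR field 0 only (FXM = 0x80). */
      static inline void load_cr0(unsigned long value)
      {
              __asm__ volatile("mtcrf 0x80,%0" : : "r" (value) : "cr0");
      }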