    powerpc/64s: Fix restore_math unnecessarily changing MSR · 01eb0187
    Nicholas Piggin authored
    Before returning to user, if there are FP/VEC/VSX bits missing from the
    user MSR then those register sets were saved earlier and must be
    restored again before use. restore_math decides whether to restore them
    immediately, or to skip the restore and let fp/vec/vsx unavailable
    faults demand load the registers.
    
    Each time restore_math restores one of the FP/VSX or VEC register sets,
    an 8-bit counter is incremented (load_fp and load_vec). When one of
    these counters wraps to zero, restore_math stops restoring that
    register set until its registers are next demand faulted.
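    
    As a standalone C illustration of that counter scheme (not the kernel
    source: demand_fault_fp and maybe_restore_fp are hypothetical names
    invented for this sketch, while load_fp mirrors the thread_struct field
    it is named after):
    
        #include <stdint.h>
        #include <stdbool.h>
    
        struct thread_sketch {
                uint8_t load_fp;        /* wraps to 0 after 256 restores */
        };
    
        /* An fp unavailable fault demand loads the registers and
         * re-arms eager restore (illustrative placement). */
        static void demand_fault_fp(struct thread_sketch *t)
        {
                t->load_fp = 1;
        }
    
        /* Called on kernel exit: skip once the counter has wrapped. */
        static bool maybe_restore_fp(struct thread_sketch *t)
        {
                if (t->load_fp == 0)
                        return false;   /* leave it to a demand fault */
                t->load_fp++;           /* 8-bit: 255 + 1 wraps to 0 */
                return true;            /* caller reloads the FP regs */
        }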
    
    It's quite usual for those counters to hold different values. If one
    has wrapped to zero, so restore_math no longer restores its register
    set or sets its user MSR bit, while the other is still non-zero but its
    registers do not actually need restoring (because the kernel is not
    frequently using the FPU, so they remain live and their user MSR bit is
    still set), then restore_math is still called and does not take the
    early exit. This causes msr_check_and_set to test and set the MSR at
    every kernel exit despite having no work to do.
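    
    To see why, here is a condensed sketch of the pre-fix control flow
    (simplified from restore_math; the VSX and TM handling is omitted):
    
        void restore_math(struct pt_regs *regs)
        {
                /* Early exit only when *both* counters are zero. */
                if (!current->thread.load_fp && !current->thread.load_vec)
                        return;
    
                /* The MSR is tested and set up front on every exit... */
                msr_check_and_set(msr_all_available);
    
                /* ...even when both restores below end up doing nothing,
                 * e.g. load_vec wrapped to zero and MSR_FP is still set. */
                if (!(regs->msr & MSR_FP) && restore_fp(current))
                        regs->msr |= MSR_FP;
                if (!(regs->msr & MSR_VEC) && restore_altivec(current))
                        regs->msr |= MSR_VEC;
    
                msr_check_and_clear(msr_all_available);
        }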
    
    This can cause workloads (e.g., a NULL syscall microbenchmark) to run
    fast for a time while both counters are non-zero, then slow down when
    one of the counters reaches zero, then speed up again after the second
    counter reaches zero. The cost is significant, about 10% slowdown on a
    NULL syscall benchmark, and the jittery behaviour is very undesirable.
    
    Fix this by having restore_math test all the conditions first, and only
    update the MSR if registers will actually be loaded.
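    
    The shape of the fixed flow, as a simplified sketch (not the literal
    patch; it omits the TM, VSX, fpexc_mode and counter bookkeeping, and
    the real patch factors the tests into small helpers):
    
        void restore_math(struct pt_regs *regs)
        {
                unsigned long new_msr = 0;
    
                /* Test every condition first, without touching the MSR. */
                if (!(regs->msr & MSR_FP) && current->thread.load_fp)
                        new_msr |= MSR_FP;
                if (!(regs->msr & MSR_VEC) && current->thread.load_vec)
                        new_msr |= MSR_VEC;
    
                if (!new_msr)
                        return;         /* no work: the MSR is untouched */
    
                /* Only now is the MSR changed, and only for the bits
                 * whose registers are actually about to be loaded. */
                msr_check_and_set(new_msr);
                if (new_msr & MSR_FP)
                        load_fp_state(&current->thread.fp_state);
                if (new_msr & MSR_VEC)
                        load_vr_state(&current->thread.vr_state);
                msr_check_and_clear(new_msr);
    
                regs->msr |= new_msr;
        }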
    Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Link: https://lore.kernel.org/r/20200623234139.2262227-2-npiggin@gmail.com