    sched/fair: Consider CPU affinity when allowing NUMA imbalance in find_idlest_group()

    In the case of systems containing multiple LLCs per socket, like
    AMD Zen systems, users want to spread bandwidth hungry applications
    across multiple LLCs. Stream is one such representative workload where
    the best performance is obtained by limiting one stream thread per LLC.
    To ensure this, users are known to pin the tasks to a specified subset
    of the CPUs, consisting of one CPU per LLC, while running such bandwidth
    hungry tasks.
    
    Suppose we kickstart a multi-threaded task like stream with 8 threads
    using taskset or numactl to run on a subset of CPUs on a 2 socket Zen3
    server where each socket contains 128 CPUs
    (0-63,128-191 in one socket, 64-127,192-255 in another socket)
    
    Eg: numactl -C 0,16,32,48,64,80,96,112 ./stream8
    
    Here each CPU in the list is from a different LLC and 4 of those LLCs
    are on one socket, while the other 4 are on another socket.
    
    Ideally we would prefer that each stream thread runs on a different
    CPU from the allowed list of CPUs. However, the current heuristics in
    find_idlest_group() do not allow this during the initial placement.
    
    Suppose the first socket (0-63,128-191) is our local group from which
    we are kickstarting the stream tasks. The first four stream threads
    will be placed in this socket. When it comes to placing the 5th
    thread, all the allowed CPUs from the local group (0,16,32,48) would
    already have been taken.
    
    However, the current scheduler code simply checks if the number of
    tasks in the local group is fewer than the allowed numa-imbalance
    threshold. This threshold was previously 25% of the NUMA domain span
    (in this case threshold = 32), but after v6 of Mel's patchset
    "Adjust NUMA imbalance for multiple LLCs" was merged in sched-tip,
    commit e496132e ("sched/fair: Adjust the allowed NUMA imbalance
    when SD_NUMA spans multiple LLCs"), it is now equal to the number of
    LLCs in the NUMA domain for processors with multiple LLCs
    (in this case threshold = 8).
    
    For this example, the number of tasks will always be within the
    threshold, and thus all 8 stream threads will be woken up on the
    first socket, resulting in sub-optimal performance.
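
    To make the arithmetic concrete, the following is a minimal
    standalone sketch (plain userspace C, not the kernel code;
    imb_numa_nr and local_running are illustrative names for the
    threshold and the local task count described above) of how the
    current check behaves for this example:

    /*
     * Model of the current heuristic: a new task stays in the local
     * NUMA group as long as the number of tasks already running there
     * remains within the NUMA-imbalance threshold. The task's affinity
     * mask is never consulted.
     */
    #include <stdio.h>

    int main(void)
    {
            int imb_numa_nr = 8;    /* threshold: LLCs per socket on this Zen3 box */
            int local_running = 0;  /* stream threads already on the local socket */

            for (int t = 1; t <= 8; t++) {
                    int stay_local = (local_running + 1) <= imb_numa_nr;

                    printf("thread %d -> %s socket\n", t,
                           stay_local ? "local" : "remote");
                    if (stay_local)
                            local_running++;
            }
            /* Prints "local socket" for all 8 threads: the pile-up. */
            return 0;
    }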
    
    The following sched_wakeup_new tracepoint output shows the initial
    placement of tasks in the current tip/sched/core on the Zen3 machine:
    
    stream-5313    [016] d..2.   627.005036: sched_wakeup_new: comm=stream pid=5315 prio=120 target_cpu=032
    stream-5313    [016] d..2.   627.005086: sched_wakeup_new: comm=stream pid=5316 prio=120 target_cpu=048
    stream-5313    [016] d..2.   627.005141: sched_wakeup_new: comm=stream pid=5317 prio=120 target_cpu=000
    stream-5313    [016] d..2.   627.005183: sched_wakeup_new: comm=stream pid=5318 prio=120 target_cpu=016
    stream-5313    [016] d..2.   627.005218: sched_wakeup_new: comm=stream pid=5319 prio=120 target_cpu=016
    stream-5313    [016] d..2.   627.005256: sched_wakeup_new: comm=stream pid=5320 prio=120 target_cpu=016
    stream-5313    [016] d..2.   627.005295: sched_wakeup_new: comm=stream pid=5321 prio=120 target_cpu=016
    
    Once the first four threads are distributed among the allowed CPUs of
    socket one, the rest of the threads start piling up on these same CPUs
    when clearly there are CPUs on the second socket that can be used.
    
    Following the initial pile-up on a small number of CPUs, the
    load-balancer eventually kicks in, but it takes a while to reach a
    {4}{4} split across the sockets. Even then, {4}{4} is not stable: we
    observe ping-ponging between {4}{4} and {5}{3} before a stable state
    (1 stream thread per allowed CPU) is reached much later and no more
    migration is required.
    
    We can detect this piling and avoid it by checking if the number of
    allowed CPUs in the local group is fewer than the number of tasks
    running in the local group, and use this information to spread the
    5th task out into the next socket (after all, the goal in this
    slowpath is to find the idlest group and the idlest CPU during the
    initial placement!).
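
    One way to express that check, continuing the standalone sketch
    above (again illustrative userspace C, not the actual fair.c
    change; allowed_in_local stands for the number of CPUs in the
    task's affinity mask that fall inside the local group), is to cap
    the imbalance threshold by the task's allowed CPUs in the local
    group:

    #include <stdio.h>

    static int min_int(int a, int b) { return a < b ? a : b; }

    int main(void)
    {
            int imb_numa_nr = 8;       /* LLCs per socket, as before */
            int allowed_in_local = 4;  /* CPUs 0,16,32,48 of the pinned mask */
            int local_running = 0;

            for (int t = 1; t <= 8; t++) {
                    /* Effective threshold now also respects the affinity mask. */
                    int threshold = min_int(allowed_in_local, imb_numa_nr);
                    int stay_local = (local_running + 1) <= threshold;

                    printf("thread %d -> %s socket\n", t,
                           stay_local ? "local" : "remote");
                    if (stay_local)
                            local_running++;
            }
            /* Threads 1-4 stay local; threads 5-8 spill to the second socket. */
            return 0;
    }

    With the threshold capped at 4, the 5th wakeup fails the check and
    the task is spread to the second socket, which is what the trace
    below shows.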
    
    The following sched_wakeup_new tracepoint output shows the initial
    placement of tasks after adding this fix on the Zen3 machine:
    
    stream-4485    [016] d..2.   230.784046: sched_wakeup_new: comm=stream pid=4487 prio=120 target_cpu=032
    stream-4485    [016] d..2.   230.784123: sched_wakeup_new: comm=stream pid=4488 prio=120 target_cpu=048
    stream-4485    [016] d..2.   230.784167: sched_wakeup_new: comm=stream pid=4489 prio=120 target_cpu=000
    stream-4485    [016] d..2.   230.784222: sched_wakeup_new: comm=stream pid=4490 prio=120 target_cpu=112
    stream-4485    [016] d..2.   230.784271: sched_wakeup_new: comm=stream pid=4491 prio=120 target_cpu=096
    stream-4485    [016] d..2.   230.784322: sched_wakeup_new: comm=stream pid=4492 prio=120 target_cpu=080
    stream-4485    [016] d..2.   230.784368: sched_wakeup_new: comm=stream pid=4493 prio=120 target_cpu=064
    
    We see that threads are using all of the allowed CPUs and there is
    no pileup.
    
    No output is generated for tracepoint sched_migrate_task with this
    patch due to a perfect initial placement which removes the need
    for balancing later on - both across NUMA boundaries and within
    NUMA boundaries for stream.
    
    Following are the results from running 8 Stream threads with and
    without pinning on a dual socket Zen3 Machine (2 x 64C/128T):
    
    During the testing of this patch, the tip sched/core was at
    commit: 089c02ae "ftrace: Use preemption model accessors for trace
    header printout"
    
    Pinning is done using: numactl -C 0,16,32,48,64,80,96,112 ./stream8
    
    	           5.18.0-rc1               5.18.0-rc1                5.18.0-rc1
                   tip sched/core           tip sched/core            tip sched/core
                     (no pinning)                + pinning              + this-patch
    								       + pinning
    
     Copy:   109364.74 (0.00 pct)     94220.50 (-13.84 pct)    158301.28 (44.74 pct)
    Scale:   109670.26 (0.00 pct)     90210.59 (-17.74 pct)    149525.64 (36.34 pct)
      Add:   129029.01 (0.00 pct)    101906.00 (-21.02 pct)    186658.17 (44.66 pct)
    Triad:   127260.05 (0.00 pct)    106051.36 (-16.66 pct)    184327.30 (44.84 pct)
    
    Pinning currently hurts performance compared to the unbound case on
    tip/sched/core. With the addition of this patch, we are able to
    outperform tip/sched/core by a good margin with pinning.
    
    Following are the results from running 16 Stream threads with and
    without pinning on a dual socket IceLake Machine (2 x 32C/64T):
    
    NUMA Topology of the Intel Icelake machine:
    Node 1: 0,2,4,6 ... 126 (Even numbers)
    Node 2: 1,3,5,7 ... 127 (Odd numbers)
    
    Pinning is done using: numactl -C 0-15 ./stream16
    
    	           5.18.0-rc1               5.18.0-rc1                5.18.0-rc1
                   tip sched/core           tip sched/core            tip sched/core
                     (no pinning)                 +pinning              + this-patch
    								       + pinning
    
     Copy:    85815.31 (0.00 pct)     149819.21 (74.58 pct)    156807.48 (82.72 pct)
    Scale:    64795.60 (0.00 pct)      97595.07 (50.61 pct)     99871.96 (54.13 pct)
      Add:    71340.68 (0.00 pct)     111549.10 (56.36 pct)    114598.33 (60.63 pct)
    Triad:    68890.97 (0.00 pct)     111635.16 (62.04 pct)    114589.24 (66.33 pct)
    
    In the case of the Icelake machine, with a single LLC per socket,
    pinning across the two sockets reduces cache contention, showing a
    large improvement in the pinned case, which is further improved by
    this patch.

    Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
    Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
    Acked-by: Mel Gorman <mgorman@techsingularity.net>
    Link: https://lkml.kernel.org/r/20220407111222.22649-1-kprateek.nayak@amd.com