    ice: optimize XDP_TX workloads · 9610bd98
    Maciej Fijalkowski authored
Optimize Tx descriptor cleaning for XDP. The current approach doesn't
really scale and chokes when multiple flows are handled.
    
Introduce two ring fields, @next_dd and @next_rs, that will keep track of
the descriptor that should be looked at when the need for cleaning arises
and the descriptor that should have the RS bit set, respectively.
    
    Note that at this point the threshold is a constant (32), but it is
    something that we could make configurable.
    
First thing is to get away from setting the RS bit on each descriptor.
Let's do this only once NTU is higher than the current @next_rs value. In
that case, grab tx_desc[next_rs], set the RS bit in the descriptor and
advance @next_rs by 32.
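
The RS-bit batching above can be sketched as follows. This is a hypothetical, heavily simplified illustration, not the actual driver code: the struct, bit positions, and ring size are stand-ins for struct ice_ring, and wrap-around handling (which the real driver must do) is omitted for brevity.

```c
#include <stdint.h>

#define TX_THRESH 32              /* the constant threshold from this patch */
#define RING_SIZE 512             /* example ring size, power of two */

struct tx_desc { uint64_t cmd; };
#define TX_DESC_CMD_RS (1ULL << 5)   /* illustrative bit position only */

/* stand-in for the relevant fields of struct ice_ring */
struct xdp_ring {
	struct tx_desc desc[RING_SIZE];
	uint16_t next_to_use;         /* NTU: next descriptor to be filled */
	uint16_t next_rs;             /* next descriptor that gets the RS bit */
};

/* Instead of tagging every produced descriptor, set RS only once NTU has
 * passed @next_rs, then jump @next_rs forward by the whole threshold.
 * (Ring wrap-around is ignored in this sketch.) */
static void maybe_set_rs(struct xdp_ring *ring)
{
	if (ring->next_to_use > ring->next_rs) {
		ring->desc[ring->next_rs].cmd |= TX_DESC_CMD_RS;
		ring->next_rs += TX_THRESH;
	}
}
```

With @next_rs initialized to TX_THRESH - 1, HW then reports completion (via the DD bit) only once per 32-descriptor batch instead of per packet.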
    
Second thing is to clean the Tx ring only when there are fewer than 32
free entries. In that case, look up tx_desc[next_dd] for the DD bit.
This bit is written back by HW to let the driver know that the xmit was
successful. It happens only for descriptors that had the RS bit set.
Clean exactly 32 descriptors and advance @next_dd by 32.
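
A minimal sketch of the completion side described above, again with hypothetical stand-in types and bit positions rather than the real ice structures:

```c
#include <stdint.h>

#define TX_THRESH 32
#define RING_SIZE 512             /* example ring size, power of two */

struct tx_desc { uint64_t cmd; };
#define TX_DESC_DD (1ULL << 0)    /* illustrative: HW writes DD back */

/* stand-in for the relevant fields of struct ice_ring */
struct xdp_ring {
	struct tx_desc desc[RING_SIZE];
	uint16_t next_to_use;
	uint16_t next_to_clean;
	uint16_t next_dd;             /* descriptor to poll for the DD bit */
};

/* free slots between consumer and producer, modulo ring size */
static uint16_t free_count(const struct xdp_ring *r)
{
	return (uint16_t)((r->next_to_clean - r->next_to_use - 1) &
			  (RING_SIZE - 1));
}

/* Clean a whole TX_THRESH batch, but only when the ring is nearly full
 * and HW has completed the batch ending at @next_dd. */
static int maybe_clean(struct xdp_ring *r)
{
	if (free_count(r) >= TX_THRESH)
		return 0;             /* plenty of room, skip cleaning */
	if (!(r->desc[r->next_dd].cmd & TX_DESC_DD))
		return 0;             /* batch not completed by HW yet */

	/* a real driver would also free/recycle the attached buffers here */
	r->next_to_clean = (uint16_t)((r->next_to_clean + TX_THRESH) &
				      (RING_SIZE - 1));
	r->next_dd = (uint16_t)((r->next_dd + TX_THRESH) &
				(RING_SIZE - 1));
	return TX_THRESH;
}
```

Since DD is written back only for descriptors that carried RS, one DD check covers the whole 32-descriptor batch.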
    
The actual cleaning routine is moved from ice_napi_poll() down to
ice_xmit_xdp_ring(). It is safe to do so as the XDP ring will not get any
SKBs that would rely on interrupts for cleaning. A nice side effect is
that for the rare Tx fallback path (which the next patch is going to
introduce) we don't have to trigger the SW irq to clean the ring.
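
The xmit-path placement of the cleaning can be sketched like this. Everything here is a hypothetical stand-in (the one-field ring, the stubbed clean routine, the function names); it only illustrates the control flow of cleaning inline from xmit rather than from NAPI poll:

```c
#include <stdint.h>

#define TX_THRESH 32

/* stand-in ring: only tracks the free-descriptor count */
struct xdp_ring { uint16_t free; };

/* stub for the batched clean routine: reclaims one full batch */
static int clean_xdp_tx(struct xdp_ring *r)
{
	r->free += TX_THRESH;
	return TX_THRESH;
}

/* Cleaning runs from the xmit path itself, so no SW interrupt is needed
 * to reclaim descriptors before producing a new one. */
static int xmit_xdp(struct xdp_ring *r)
{
	if (r->free < TX_THRESH)
		clean_xdp_tx(r);      /* reclaim a completed batch inline */
	if (!r->free)
		return -1;            /* would bump the tx_busy stat */
	r->free--;                    /* consume one descriptor */
	return 0;
}
```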
    
With those two concepts, the ring is kept almost full, but it is
guaranteed that the driver will be able to produce Tx descriptors.
    
This approach works out well even though the Tx descriptors are
produced one by one. The test was conducted with the ice HW
bombarded with packets from a HW generator configured to generate 30
flows.
    
    Xdp2 sample yields the following results:
    <snip>
    proto 17:   79973066 pkt/s
    proto 17:   80018911 pkt/s
    proto 17:   80004654 pkt/s
    proto 17:   79992395 pkt/s
    proto 17:   79975162 pkt/s
    proto 17:   79955054 pkt/s
    proto 17:   79869168 pkt/s
    proto 17:   79823947 pkt/s
    proto 17:   79636971 pkt/s
    </snip>
    
As that sample reports the Rx'ed frames, let's look at the sar output.
It shows that what we Rx'ed we actually Tx'ed, with no noticeable drops.
    Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s txcmp/s  rxmcst/s   %ifutil
    Average:       ens4f1 79842324.00 79842310.40 4678261.17 4678260.38 0.00      0.00      0.00     38.32
    
    with tx_busy staying calm.
    
Compared to the state before:
    Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s txcmp/s  rxmcst/s   %ifutil
    Average:       ens4f1 90919711.60 42233822.60 5327326.85 2474638.04 0.00      0.00      0.00     43.64
    
it can be observed that txpck/s is almost doubled, meaning that
performance improved by around 90%. The previous gap was due to drops
in the driver: the tx_busy stat was being bumped at a 7 Mpps rate.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>