Commit 3dcc1edc authored by Li RongQing, committed by David S. Miller

virtio_net: reduce raw_smp_processor_id() calling in virtnet_xdp_get_sq

smp_processor_id() and raw_smp_processor_id() are each called
separately in virtnet_xdp_get_sq() when there are not more queues
than CPUs. Since virtnet_xdp_get_sq() runs in non-preemptible
context, it is safe to call smp_processor_id() once and reuse the
cached CPU id.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 9b0df250
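
The reasoning can be sketched outside the kernel. The snippet below is a hypothetical userspace analogue, not part of the patch: sched_getcpu() stands in for smp_processor_id(), and pinning the thread to one CPU plays the role of the non-preemptible context that makes a single cached lookup equivalent to repeated ones.

/* Hypothetical userspace analogue (assumed names, not kernel code) of the
 * commit's reasoning: repeated "current CPU" lookups may disagree if the
 * task migrates between them, so collapsing them into one cached value is
 * only valid when migration cannot happen.  Pinning the thread to one CPU
 * below plays the role of the kernel's non-preemptible context.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;
	int here = sched_getcpu();

	if (here < 0) {
		perror("sched_getcpu");
		return 1;
	}

	/* emulate "cannot migrate": pin ourselves to the CPU we are on now */
	CPU_ZERO(&set);
	CPU_SET(here, &set);
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		return 1;
	}

	/* with migration ruled out, one lookup cached in a local is
	 * equivalent to calling sched_getcpu() at every use site */
	int cpu = sched_getcpu();

	printf("index a = %d, index b = %d (both derived from one lookup)\n",
	       cpu, cpu % 4);
	return 0;
}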
@@ -528,19 +528,20 @@ static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
  * functions to perfectly solve these three problems at the same time.
  */
 #define virtnet_xdp_get_sq(vi) ({                                      \
+	int cpu = smp_processor_id();                                   \
 	struct netdev_queue *txq;                                       \
 	typeof(vi) v = (vi);                                            \
 	unsigned int qp;                                                \
 									\
 	if (v->curr_queue_pairs > nr_cpu_ids) {                         \
 		qp = v->curr_queue_pairs - v->xdp_queue_pairs;          \
-		qp += smp_processor_id();                               \
+		qp += cpu;                                              \
 		txq = netdev_get_tx_queue(v->dev, qp);                  \
 		__netif_tx_acquire(txq);                                \
 	} else {                                                        \
-		qp = smp_processor_id() % v->curr_queue_pairs;          \
+		qp = cpu % v->curr_queue_pairs;                         \
 		txq = netdev_get_tx_queue(v->dev, qp);                  \
-		__netif_tx_lock(txq, raw_smp_processor_id());           \
+		__netif_tx_lock(txq, cpu);                              \
 	}                                                               \
 	v->sq + qp;                                                     \
 })
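
For context, the queue-selection policy the macro implements can be mimicked in plain userspace C. Everything below (struct fake_virtnet, xdp_pick_txq, the use of sched_getcpu()) is an illustrative assumption, not kernel code: with more queue pairs than CPUs each CPU gets its own XDP TX queue and no lock is taken, otherwise queues are shared via cpu % curr_queue_pairs and must be locked.

/* Minimal userspace sketch (assumed names, not kernel code) of the queue
 * selection done by virtnet_xdp_get_sq(): with more queue pairs than CPUs,
 * every CPU owns a dedicated XDP TX queue and no lock is needed; otherwise
 * queues are shared and chosen by "cpu % curr_queue_pairs" under a lock.
 * sched_getcpu() stands in for the kernel's smp_processor_id().
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct fake_virtnet {
	unsigned int curr_queue_pairs;	/* total TX/RX queue pairs in use */
	unsigned int xdp_queue_pairs;	/* pairs reserved for XDP transmit */
};

static unsigned int xdp_pick_txq(const struct fake_virtnet *vi,
				 unsigned int nr_cpus, bool *needs_lock)
{
	int cpu = sched_getcpu();	/* fetched once, like the cached "cpu" */
	unsigned int qp;

	if (vi->curr_queue_pairs > nr_cpus) {
		/* dedicated per-CPU XDP queue, placed after the regular queues */
		qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + (unsigned int)cpu;
		*needs_lock = false;
	} else {
		/* fewer queues than CPUs: queues are shared, caller must lock */
		qp = (unsigned int)cpu % vi->curr_queue_pairs;
		*needs_lock = true;
	}
	return qp;
}

int main(void)
{
	struct fake_virtnet vi = { .curr_queue_pairs = 2, .xdp_queue_pairs = 2 };
	unsigned int nr_cpus = (unsigned int)sysconf(_SC_NPROCESSORS_ONLN);
	bool lock;
	unsigned int qp = xdp_pick_txq(&vi, nr_cpus, &lock);

	printf("CPU %d -> queue %u (%s)\n", sched_getcpu(), qp,
	       lock ? "shared, lock required" : "per-CPU, lock-free");
	return 0;
}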