Commit 80e509db authored by Eric Dumazet, committed by David S. Miller

fq_codel: fix NET_XMIT_CN behavior

My prior attempt to fix the backlogs of parents failed.

If we return NET_XMIT_CN, our parents won't increase their backlog,
so our qdisc_tree_reduce_backlog() call should take this into account.

v2: Florian Westphal pointed out that fq_codel_drop() could drop the
packet being enqueued, so we need to save qdisc_pkt_len(skb) in a
temporary variable before calling fq_codel_drop().

Fixes: 9d18562a ("fq_codel: add batch ability to fq_codel_drop()")
Fixes: 2ccccf5f ("net_sched: update hierarchical backlog too")
Reported-by: Stas Nichiporovich <stasn77@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 5b6c1b4d
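
The fix relies on how parent qdiscs account a child's enqueue result: a parent
only adds the packet to its own qlen/backlog when the child returns
NET_XMIT_SUCCESS, so a packet rejected with NET_XMIT_CN is never counted
upstream. Below is a minimal sketch of that parent-side pattern; it is
illustrative only, not code from this commit, the function name
parent_enqueue_sketch is hypothetical, and the qdisc_enqueue() call shape is
simplified (kernel context assumed).

static int parent_enqueue_sketch(struct sk_buff *skb, struct Qdisc *sch,
                                 struct Qdisc *child)
{
        /* hand the packet to the child qdisc (call shape simplified) */
        int ret = qdisc_enqueue(skb, child);

        if (ret != NET_XMIT_SUCCESS)
                return ret;     /* NET_XMIT_CN or drop: parent accounts nothing */

        qdisc_qstats_backlog_inc(sch, skb);     /* parent byte backlog */
        sch->q.qlen++;                          /* parent packet count */
        return NET_XMIT_SUCCESS;
}

Because of this, when fq_codel returns NET_XMIT_CN it must exclude the
just-enqueued packet (one packet, pkt_len bytes) from the amounts it asks
qdisc_tree_reduce_backlog() to subtract from the parents.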
@@ -199,6 +199,7 @@ static int fq_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch)
         unsigned int idx, prev_backlog, prev_qlen;
         struct fq_codel_flow *flow;
         int uninitialized_var(ret);
+        unsigned int pkt_len;
         bool memory_limited;

         idx = fq_codel_classify(skb, sch, &ret);
@@ -230,6 +231,8 @@ static int fq_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch)
         prev_backlog = sch->qstats.backlog;
         prev_qlen = sch->q.qlen;

+        /* save this packet length as it might be dropped by fq_codel_drop() */
+        pkt_len = qdisc_pkt_len(skb);
         /* fq_codel_drop() is quite expensive, as it performs a linear search
          * in q->backlogs[] to find a fat flow.
          * So instead of dropping a single packet, drop half of its backlog
@@ -237,14 +240,23 @@ static int fq_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch)
          */
         ret = fq_codel_drop(sch, q->drop_batch_size);

-        q->drop_overlimit += prev_qlen - sch->q.qlen;
+        prev_qlen -= sch->q.qlen;
+        prev_backlog -= sch->qstats.backlog;
+        q->drop_overlimit += prev_qlen;
         if (memory_limited)
-                q->drop_overmemory += prev_qlen - sch->q.qlen;
-        /* As we dropped packet(s), better let upper stack know this */
-        qdisc_tree_reduce_backlog(sch, prev_qlen - sch->q.qlen,
-                                  prev_backlog - sch->qstats.backlog);
+                q->drop_overmemory += prev_qlen;

-        return ret == idx ? NET_XMIT_CN : NET_XMIT_SUCCESS;
+        /* As we dropped packet(s), better let upper stack know this.
+         * If we dropped a packet for this flow, return NET_XMIT_CN,
+         * but in this case, our parents wont increase their backlogs.
+         */
+        if (ret == idx) {
+                qdisc_tree_reduce_backlog(sch, prev_qlen - 1,
+                                          prev_backlog - pkt_len);
+                return NET_XMIT_CN;
+        }
+        qdisc_tree_reduce_backlog(sch, prev_qlen, prev_backlog);
+        return NET_XMIT_SUCCESS;
 }

 /* This is the specific function called from codel_dequeue()
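
To make the new accounting concrete, take some hypothetical numbers: the packet
being enqueued is 1000 bytes and fq_codel_drop() evicts 64 packets totalling
80000 bytes from the flow the packet was classified to (ret == idx). fq_codel's
own counters went up by 1 packet / 1000 bytes on enqueue and then down by
64 packets / 80000 bytes, a net change of -63 packets / -79000 bytes. Since
NET_XMIT_CN is returned, the parents never add the new packet, so they must be
reduced by only those 63 packets and 79000 bytes, which is exactly
prev_qlen - 1 and prev_backlog - pkt_len. Had the drop batch hit a different
flow (ret != idx), NET_XMIT_SUCCESS would be returned, the parents would
account the new packet, and the full 64 packets / 80000 bytes reduction would
apply.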